Under what conditions may it be acceptable to abbreviate a CSR?
An abbreviated clinical study report is acceptable when the policy of the regulatory authority accepts abbreviation, in cases where some data have been lost, and where the study involved does not seek to clarify a phenomenon.
Where in the CSR should a sample of the ICF be placed?
At the beginning of the study report.
What information should be included in the “Investigational Plan”?
This part describes how the study is to be carried out. It specifies the population to be studied and the criteria for choosing the sample. This section also specifies the treatments under study and the way they are given. Diagrams are used to make the study design easier to understand.
What is the purpose of “Disposition of Patients” diagram?
This diagram tracks the population under study. It shows the number of people treated at every stage, those who did not pass through each stage, and the stated reasons for their being left out. The numbers of people screened and of those excluded from screening are also shown in the disposition diagram.
Wiki Stats
Give 3 examples of “experimental hypotheses” as defined in Wiki.
An experimental hypothesis is a statement that a statistician formulates and then aims to prove true or false. It relates the independent variable, which acts as the cause, to the dependent variable, which reflects the effect.
Why is “random sampling” critically important in statistics?
Random sampling is a method that gives every item in a population an equal probability of being chosen. It is useful because it eliminates selection bias, the situation in which some items have a higher probability of being chosen than others.
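As a small illustration (not part of the CSR material above), a simple random sample can be drawn in a few lines of Python; the population here is a made-up list of 100 subject IDs, each with the same chance of selection:

```python
# Illustrative only: simple random sampling, where every subject ID is equally
# likely to be chosen, which is what removes selection bias.
import random

random.seed(42)                          # seeded so the example is reproducible
population = list(range(1, 101))         # hypothetical IDs of 100 screened subjects
sample = random.sample(population, 10)   # 10 IDs drawn without replacement
print(sorted(sample))
```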
Do the descriptions of processes of “data cleaning” in Wiki match the process of data cleaning in clinical research? Explain.
Data cleaning is the process of eliminating observations that do not follow the pattern formed by the other observations. Data cleaning as described in Wiki is similar to data cleaning in clinical research. In clinical research, the doctor uses his own judgement to eliminate the observations of certain patients; the elimination of some patients' results is known as data cleaning. This is because the eliminated patients do not meet certain criteria.
What do the mean, mode, and median tell us?
The mean of a data set shows its expected (average) value. The mode is the value that occurs most frequently, while the median is the value that lies in the middle when all the values are ordered.
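A quick illustration with invented values, using Python's standard statistics module:

```python
# Made-up data: the three measures of central tendency described above.
import statistics

data = [2, 3, 3, 5, 8, 13, 15]
print(statistics.mean(data))    # 7.0 -> the average ("expected") value
print(statistics.mode(data))    # 3   -> the most frequent value
print(statistics.median(data))  # 5   -> the middle value once the data are sorted
```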
What does variance and standard deviation tell us about a population?
The standard deviation of a data set shows the spread of the data around the mean, while the variance is the square of that spread.
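A companion sketch, again with invented values, showing that the variance is simply the squared standard deviation:

```python
# Made-up data: spread around the mean, as standard deviation and variance.
import statistics

data = [4, 8, 6, 5, 3, 7]
sd = statistics.pstdev(data)      # population standard deviation
var = statistics.pvariance(data)  # population variance, equal to sd ** 2
print(round(sd, 3), round(var, 3))
```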
Why is a basic understanding of “probability” vital to the clinical researcher?
Knowledge of probability is important to a clinical researcher because it helps them understand the various probability distributions, such as the commonly used normal distribution. Probability also lets a researcher estimate the chances of a certain phenomenon occurring, as well as the chances of its non-occurrence.
In the section “Testing Statistical Hypotheses”, how do we determine if two populations are similar or statistically different?
Testing the statistical significance of the difference between two populations involves testing the difference between their means. The two populations must be normally distributed and their standard deviations must be known in advance. The null hypothesis states that the population means are equal, while the alternative hypothesis states that the means are different. A level of significance at which the test is performed is chosen. If the null hypothesis is not rejected, the two populations are considered similar; rejection of the null hypothesis means the populations differ.
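For illustration only, the procedure described above (two normal populations whose standard deviations are known in advance) can be sketched as a two-sample z-test in Python; every number below is invented:

```python
# Hedged sketch of a two-sample z-test for the difference between two means.
from math import sqrt
from statistics import NormalDist

mean1, sd1, n1 = 12.4, 2.0, 50   # sample mean, known SD, and size for population 1
mean2, sd2, n2 = 11.5, 2.2, 50   # the same quantities for population 2
alpha = 0.05                     # chosen level of significance

z = (mean1 - mean2) / sqrt(sd1**2 / n1 + sd2**2 / n2)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

if p_value < alpha:
    print(f"z = {z:.2f}, p = {p_value:.3f}: reject H0, the populations differ")
else:
    print(f"z = {z:.2f}, p = {p_value:.3f}: fail to reject H0, the populations are similar")
```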
What is the purpose of “significance testing”?
Significance testing is done to determine whether the data provide sufficient evidence to reject a given hypothesis.
Statistical Slide Presentation (revisited)
Can “observational data gathering” (like number of men and women dying from lung cancer OR distribution of types of brain tumors among humans) yield important information? Explain.
Yes, it helps identify details that the respondent may not tell the researcher, but which are important to the study.
What are the two age groups associated with significantly higher numbers of auto accidents involving deaths?
19-20 and 21-24 years
Which 3 variables in the t-test equation collectively determine “significance”?
Mean, standard deviation and sample size
Can you manipulate clinical study design to increase the likelihood of detecting a significant difference between 2 populations? Explain.
Yes, this can be achieved by modifying the variables that determine the significance of the difference; for example, by increasing the sample size.
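A hedged sketch of that idea: holding the mean difference and the standard deviation fixed, the t statistic grows with the sample size, so a larger study is more likely to reach significance. The simplified formula below assumes equal group sizes and equal standard deviations:

```python
# Illustrative only: how mean difference, standard deviation, and sample size
# jointly drive the t statistic (and therefore "significance").
from math import sqrt

def t_statistic(mean_diff, sd, n):
    # Two groups of equal size n with a common standard deviation sd.
    return mean_diff / (sd * sqrt(2.0 / n))

for n in (10, 40, 160):
    print(n, round(t_statistic(mean_diff=1.0, sd=3.0, n=n), 2))
# Prints 0.75, 1.49, 2.98: the same effect looks ever more "significant" as n grows.
```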
The Typical FDA Drug Approval Process
How can the sponsor remove a “clinical hold” placed by the FDA on a potential clinical study presented in the IND?
Do you need to complete long-term animal safety in order to start a phase 2 trial? Explain.
No; you can start phase 2 during this period using early access.
FDA Accelerated Approval (revisited)
What is the general basis for determining usual or accelerated approvals?
Usual approval is granted when treatment with a drug provides patients with a longer and better life. Accelerated approval, on the other hand, is granted on grounds that are weaker than those used for usual approval.
US Code of Federal Regulations (CFR) - The Animal Efficacy Rule
According to the letter of the law, when does this rule “apply”?
The animal efficacy rule applies in situations where it is not possible to conduct a human study to test the efficacy of the drug involved.
The rule was in discussion in 1999 and approved in 2002. Why do you think it was approved in 2002 (a reasonably fast process)?
The rule was approved in 2002 because the drugs involved were found to improve even the worst conditions and to lengthen a person's life. It was also approved because testing these drugs in humans was found to be almost impossible, since testing the drug would worsen a person's health.
When does this rule Not apply?
The rule is not applicable where a drug can be tested using the set standards; instead of using this rule, such a drug is approved if it meets those standards.
Federal CFR - Animal Rule Slide Set
Why is this rule called the “Efficacy” rule?
If you believe that your drug requires approval via the Animal Efficacy rule, what should your IND (data packet) contain?
The Animal Efficacy Rule - Listing of Approvals
In general, what characteristics do these approvals share?
Findings from the FDA are analyzed before making the final decision.
The purpose, benefits and risks of a drug are considered.
The data from pre-trial and clinical studies is analyzed.
MOST RECENT Drug Approved via the "Animal Rule" - Raxibacumab
What is the indication for this new drug?
For the treatment of anthrax in patients when the condition is caused by Bacillus anthracis.
How is this drug administered?
The drug is administered intravenously
What is the only warning regarding this drug?
It may cause infusion reactions
In the major population (N = 283) what were the 4 AEs reported?
4
The Development of a MoAB for Treatment of Rare Disease - FDA Type 2 Meeting Re: Animal Rule Approval Path
What was the “Osaka outbreak”?
It was an outbreak of the norovirus in a hospital in Osaka
Why did the company predict “no return in investment”?
The cost of flushing out the infected milk, conducting studies to identify the cause of the infection and funding PR campaigns resulted in no profit margins for the company.
What were 4 of the major reasons why this trial might be difficult or impossible to conduct?
The risks involved were high
The cost of funding the research was high
It involved healthy participants, yet the drug was meant for infected people.
The associated adverse reactions discouraged participants from volunteering for the study.
Did the FDA agree that the protocol design was adequate?
No, they did not.
What were some of the reasons for the high incidence of STEC infection in this foreign country?
There was increased bacterial growth in a dairy plant in Osaka during a power outage. The STEC infections occurred when dairy products from the plant were sold to consumers.
Of the 4 potential trials suggested by the sponsor, how long would it take to complete the longest trial, and the shortest trial? Why the difference?
Carcinogenicity, fertility studies, genotoxicity, and animal studies
What were the two most useful animal models for this infection? Were the results of treatment impressive / repeatable?
Rabbit and monkey models.
The results were impressive.
STEC Infection in Washington State
Briefly describe the parameters of this infection:
How many cases reported?
501
How many hospitalized?
151
How many cases of HUS?
45
How many deaths?
3
What was determined to be the cause of this outbreak?
E. coli resulting from unhygienic meat cooking and processing
What was the most recent STEC outbreak in the US?
It was the Shiga Toxin outbreak in June 2012
Did it involve beef? Describe.
The CDC was not able to identify the source of the E. coli infection
What parts of the US were involved?
Florida, California, Georgia, Alabama, Maryland, Kentucky, Virginia, Louisiana and Tennessee
A Mini Seminar in Drug Advertising
What is the role of the OPDP?
It protects the public health by ensuring that prescription drug promotion and advertising are truthful, balanced and accurately communicated.
Why is DTC advertising a “relatively new area of prescription drug promotion”?
This is because pharmaceutical companies used to target health care organizations and intermediaries but are now engaging consumers directly. This form of advertising is also new because it began only 15 years ago.
A “product claim ad” must contain 4 key components: _____?
The name of the drug
The condition it treats
Advantages (benefits)
Risks
What is a “boxed warning”? Is it important?
It is a warning on the packaging of a drug indicating that the medicine poses significant risk for the consumer. This warning is essential since it is a national standard set by FDA, which helps protect consumers.
What is “fair balance”?
Presentation of an accurate analysis of the risks associated with a drug and its advantages.
What is the “prescribing information”?
It is a piece of literature inserted into a drug’s packaging that indicates how a drug should be used, the side-effects and its purpose.
Briefly, what are the 4 problems with the “Incorrect Product Claim” ad?
- It did not state all the risks of the product.
- It did not indicate that the product could only be sold and distributed by licensed practitioners.
- It did not precisely state the uses of the product.
- The advertising was misleading.
TV and Radio-generated Warning Letters
What were the FDA’s two major concerns regarding the “Quadramet” advertisement?
- Failing to disclose essential information about the risks of the drug.
- Overstating the effectiveness of the drug
Why is conducting and pursuing the development of a potential cancer drug different, relative to most other therapeutic areas?
Because developing a cancer drug requires more studies, which increases the costs of research and production. | https://www.wowessays.com/free-samples/research-paper-on-the-fda-ich-clinical-study-report-csr-guidance/ |
Educators should benchmark themselves against the globally-recognised ISTE (International Society for Technology in Education) standards to gauge their preparedness for the post-pandemic teaching and learning landscape.
Dr Por Fei Ping, Lecturer from the School of Education, Humanities and Social Sciences, was speaking at WOU’s online talk on ‘Equipping Educators for the Post-Pandemic Teaching and Learning Landscape’. The event today organised by the School in collaboration with the Penang Regional Centre was attended by about 50 people.
She said the Covid-19 pandemic had compelled educators to engage students through the various online platforms, and so encouraged them to grab the opportunity to improve themselves for the post-pandemic era.
She spoke about digital access, digital skills and digital literacy for digital readiness. Digital access is access to different devices, software, tools and the Internet, while digital skills refer to the use of devices for online and offline teaching.
Dr Por focused her talk on digital literacy, which involves evaluating the strengths and weaknesses of the different tools and the shortcomings of the lessons, and then designing a lesson that meets the students’ needs and market demands.
She said digital literacy is guided by the ISTE standards, and educators can use this benchmark to evaluate their readiness for post-pandemic teaching and learning. The seven standards are Learner, Leader, Citizen, Collaborator, Designer, Facilitator and Analyst.
As a Learner, educators should learn from and with others to use the potential of technology. They must set professional learning goals, build learning networks, and stay current with the latest educational research, tools and pedagogy, she stated. She added that as a Leader, educators must share their vision and actively shape students to achieve that vision. “We must lead our students to join the learning process, and support student empowerment and success.”
Dr Por called on educators to also be a responsible Citizen in the digital world, and not post or share posts without filtering. “We need to be responsible in the use of digital resources, and educate our students to be ethical in using the digital tools.”
She said educators, as a Collaborator, will exchange ideas with colleagues and students, and discover new digital resources towards solving problems and achieving better student learning outcomes.
As a Designer, she continued, educators apply instructional design principles to design innovative learning environments that accommodate different learning styles and learning needs, and to create personalised lessons.
Dr Por remarked that as a Facilitator, educators facilitate learning with technology to achieve students’ learning goals: “We need to create a learning culture where students are responsible for their own learning outcomes, and are independent learners.”
Lastly as an Analyst, she said educators must analyse the data obtained from the formative and summative assessments to design lessons that meet students’ needs, and provide timely feedback to the students and parents.
Dr Por encouraged educators to upskill, re-skill and cross-skill themselves, especially in times of uncertainty so that they are versatile and ready to learn new skills beyond their job scope to be multifunctional. | https://www.wou.edu.my/getting-prepared-for-post-pandemic-teaching-and-learning/ |
Public-Private Insurance Partnerships Bolster Latin American/Caribbean Resilience
Globally, three of the ten most costly natural disaster events in the last 35 years occurred in total or in part in the Latin America/Caribbean (LAC) region; losses from Hurricane Matthew in the Caribbean are still being assessed.
Today, 80 percent of the LAC population lives in urban areas, second only to North America (82 percent) and well above the global average of 54 percent. The region’s 198 large cities (>200,000 residents) contribute over 60 percent of gross domestic product (GDP), and its 10 largest cities produce 50 percent of that total. As the region’s population, swelling middle class, urbanization and GDP concentration continue to grow, the effects of climate volatility will likely increase the impact of natural perils losses on these economies.
Damage from these losses as a proportion of GDP tends to be much higher than in developed economies (see tables 1 and 2). If the LAC region is to build on the economic and social gains of recent years, its governments must align with the private sector through public–private partnerships to improve risk management and disaster preparedness strategies.
The ultimate cost of catastrophe-event responses puts particular strain on the public balance sheets of emerging markets, increasing public debt and ultimately burdening taxpayers. Adding to the problem is the lack of insurance coverage in developing countries. Average property insurance penetration in developing countries was only 0.21 percent in 2014, compared to 0.77 percent in industrialized countries. Another estimate indicates only 3 percent of potential loss is currently insured in developing countries versus 45 percent in developed countries. Mature economies can also often fall back on fiscal safety nets to cover insurance shortfalls. The $83 billion budget appropriation approved by the U.S. Congress after Hurricane Katrina hardly registered on the U.S. budget, but most developing economies cannot afford such amounts. For example, a year after Hurricane Ivan hit Grenada in 2004, the country defaulted on its foreign debt.
The Maule, Chile earthquake of 2010 burdened the country with $32 billion in economic losses, or 15.1 percent of GDP. Despite the high level of insurance coverage in Chile—even by developed world standards—75 percent of the costs were ultimately assumed by the government, leaving significant opportunity for the private sector to reduce the state's financial burden.
Many recent catastrophe events in the Latin America/Caribbean region provide examples of the protection gap: Only 5 percent of the $8 billion economic loss from Haiti’s 2010 earthquake was insured; and the insured portion of the $2-3 billion economic loss caused by the April 2016 earthquake in Manta, Ecuador, is expected to reach no more than 15 percent. The 2016 earthquake has deeply impacted the local economy and government finances as unemployment increased by approximately 50 percent and the government was compelled to increase sales taxes by two percent to fund national reparation and recovery costs. In general, emerging markets face a much larger protection gap than developed economies:
Given the overall impact of catastrophes on public-sector finances, governments in Latin America are transitioning from an over-reliance on post-event disaster financing to a pre-event approach to disaster risk mitigation. Societies are realizing that transferring risk to the private sector provides efficient and cost-effective solutions that relieve already strained public-sector budgets.
The insurance industry also empowers the mechanisms and innovation needed to “build back better.” This concept relies on three key ideas—risk reduction, community recovery and implementation—to increase community resiliency. By improving building codes and land-use planning, cities can reduce the future vulnerability of their physical infrastructure. At the same time, social and economic recovery is supported through market-based incentives and subsidies to finance aid and reconstruction efforts. Finally, stakeholder education, legislation, regulation, community consultation and monitoring and evaluation must all be used to ensure compliance with appropriate and culturally sensitive standards and targets.
“Building back better” institutionalizes disaster assessment and recovery frameworks at the national, municipal and community levels as well as in the private sector, academia and civil organizations, improving coordination and risk governance. Sharing regional and global best practices and establishing international aid standards further supports sustainable recovery and reconstruction.
A Lesson in Resilience from Mexico
The Mexican federal government’s risk management strategy exemplifies a modern, resilient disaster preparedness plan, including pre- and post-event approaches and public–private partnerships. Following the 1985 Mexico City earthquake, the Mexican National Civil Protection System (SINAPROC) was created, establishing a multi-level system to integrate stakeholders from the three levels of government, the private and social sectors, academia and scientific organizations. Its purpose was to provide an institutional framework for the improved coordination of emergency response. Its capacities in the areas of risk assessment, early warning, preparedness and disaster risk financing were developed. As SINAPROC evolved, it added risk reduction practices to shift from a reactive to a preventative, holistic and integrated risk management plan.
Recognizing that risk comes from multiple factors—politics, land-use planning, cultural norms, and more—SINAPROC mainstreamed the plan throughout government, private and social sectors. Through the Secretariat of the Interior, it coordinates civil protection with other key policies, such as urban development, housing, climate change and education, by clearly identifying responsibilities. SINAPROC works with the Secretariat of National Defense and the Secretariat of the Navy to implement emergency preparedness, communication and relief and recovery plans in addition to creating institutions to set policies and budgets, develop best practices, coordinate government, promote social and private-sector agreements and research scientific and technological improvements in risk management. It also aligns with the Ministry of Foreign Affairs to oversee international compliance and assistance and establishes provisions for government accountability.
SINAPROC also created the General Risk Management Directorate to oversee financial risk management instruments in conjunction with private-sector stakeholders. One such instrument was developed in 1996 to respond to a continued need for post-event budget allocations. The Fund for Natural Disasters (FONDEN) is a transparent financial vehicle by which the federal government provides pre-event funding from tax revenues for post-disaster response and reconstruction. Its resources are allocated by law, and distributions are made by the state-owned development bank from sub-accounts dedicated to specific reconstruction programs.
Through FONDEN, the Mexican government established relationships with international capital and reinsurance markets that have proven critical in accessing risk transfer schemes. In 2006 it purchased Mexico’s first catastrophe bond, Cat Mex. In 2009, it replaced Cat Mex with the MultiCat Mexico bond, expanding earthquake coverage and adding hurricane coverage. The bond was renewed again in 2012 before making a $50 million payment to the Mexican government for losses from 2015’s Hurricane Patricia. SINAPROC further mitigated the storm’s impact by using its early warning system to evacuate most of the affected population, resulting in only a handful of casualties despite the fact that Patricia was the second-most intense tropical cyclone on record.
In 2011, FONDEN also placed a traditional insurance program covering 100 percent of the federal government’s assets. To incentivize prevention, it covers up to 50 percent of provincial assets if municipalities implement formal risk transfer strategies. The program renewed in 2012 with over 40 international reinsurers and demonstrated considerable buying power by convincing the market to accept its own damage assessment and adjustment procedures.
Mexico’s risk management strategy has earned a strong reputation in the international community. The World Bank said it is “at the vanguard of initiatives aimed at the development of an integrated disaster risk management framework, including the effective use of risk financing and insurance mechanisms to manage the fiscal risk derived from disasters,” highlighting it as an example for other governments to follow.
Another market leader in public–private partnerships, CCRIF SPC (formerly the Caribbean Catastrophe Risk Insurance Facility) is the world's first multi-country risk pool utilizing parametric insurance, backed by both traditional insurers and capital markets. Created in 2007 with the support of the World Bank, the government of Japan, and other donors, CCRIF provides protection against earthquakes, hurricanes, and excessive rainfall to 17 Caribbean and Central American countries. Leveraging its diverse portfolio, the facility provides affordable reinsurance for members through catastrophe swaps with the reinsurance market. In 2014 it accessed catastrophe bond markets for the first time with a three-year, USD 30 million bond covering hurricanes and earthquakes, providing CCRIF multi-year access to reinsurance at a fixed price.
The risk pool mitigates cash flow problems faced by its members after major natural disasters by providing rapid, transparent payouts to assist with initial disaster responses. It has made 22 payouts to 10 members for a total of $69 million, all within 14 days. CCRIF was the first to pay claims associated with the 2010 Haiti earthquake and has paid out more than $29 million in response to 2016’s Hurricane Matthew.
Microinsurance Helping Close Insurance Gap
Not to be outdone, the private sector has demonstrated its commitment to bringing insurance solutions to emerging economies through the industry consortium and venture incubator Blue Marble Microinsurance. Blue Marble’s founding consortium has committed to launching 10 microinsurance ventures in the next 10 years to deliver risk management solutions to the underserved. Through collaboration with strategic partners, including government and quasi-government entities and innovative technology-enabled platforms, Blue Marble seeks to improve sustainability by expanding the role of insurance in society. These ventures will consider unique distribution methods, local partnerships, product development and impact services.
Blue Marble is currently working to close the protection gap in the risk that climate change poses to smallholder farmers in Latin America with the intention to launch pilots in 2017. Blue Marble understands the value of public sector–private sector partnerships in achieving its mission; it is coordinating its initiatives to bolster agricultural production and the management of associated risks with local government officials, including Ministers of Agriculture.
Given the recent slowdown in global demand for commodities and the persistent social inequalities and corruption in some nations, it is more important than ever for governments in the Latin America/Caribbean region not only to protect the economic and social gains made in the last decade, but provide the systems and institutions to promote further sustainable growth. Partnering with the private sector ensures the best practices, innovations and risk reduction and management techniques of the insurance industry are combined with the risk knowledge of regional governments, thereby ensuring resilient cities and communities are poised for strong future growth. | https://www.brinknews.com/public-private-insurance-partnerships-bolster-latin-americancaribbean-resilience/ |
The purpose of this study was to investigate the rates, widths, and pitches of university double bass players' vibrato in relation to pitch height, fingers used, and tempo. Forty (N = 40) undergraduate and graduate double bass players were individually recorded performing three music exercises that were used for analyses. Each exercise was comprised of three identical excerpts that were transposed for first, fourth, and thumb positions. Excerpts in first and fourth positions utilized fingers 1, 2, and 4, while excerpts in thumb position utilized fingers 1, 2, and 3. The overall mean vibrato rate and width of university double bass students in this study was 5.17 Hz and 19 cents. A comparison of the vibrato rates and widths of participants' 1st and 2nd fingers revealed that the 2nd finger (5.22 Hz, 21 cents) used both significantly faster and wider vibrato than the 1st finger (5.07 Hz, 18 cents). Additionally, the descriptive data from this study revealed that the 3rd and 4th fingers vibrated faster than both the 1st and 2nd fingers, and they had a wider vibrato width than the 1st finger, but a narrower width than the 2nd finger. The 3rd finger had the overall fastest recorded vibrato rate for any finger in any position. Analysis of vibrato data also indicated that university double bassists use significantly faster vibrato rates as they perform in progressively higher registers. When comparing the combined mean vibrato rates of the 1st and 2nd fingers, participants vibrated at 4.88 Hz in first position, 5.06 Hz in fourth position, and 5.50 Hz in thumb position. Vibrato widths also increased with pitch register. Mean vibrato widths in first position (16 cents) were significantly narrower than mean vibrato widths in both fourth position (21 cents) and thumb position (22 cents). Tempo also significantly affected mean vibrato rates and width. Musical examples played with a fast tempo were faster and wider (5.35 Hz, 20 cents) than musical examples played with a slow tempo (4.94 Hz, 19 cents). Additionally, analysis indicated that university double bassists vibrate almost equally above and below the in-tune pitch. Using the descriptive data for all fingers in all position, the total difference found between mean pitches of vibrated and non-vibrated tones was 1 cent. Music educators can use these results to create more consistent descriptions of double bass vibrato, and potentially, more efficient methods for teaching vibrato.
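As a side note not drawn from the dissertation itself, the "cents" unit used above is a logarithmic measure of pitch distance in which 100 cents equal one equal-tempered semitone. A small Python sketch, assuming the reported width is split evenly above and below an in-tune G2 at 98 Hz, shows what a 19-cent vibrato width means in frequency terms:

```python
# Illustrative conversion between cents and frequency (values are assumptions).
from math import log2

def cents(f, f_ref):
    return 1200 * log2(f / f_ref)

f0 = 98.0     # G2, a common double bass pitch, in Hz
width = 19    # mean vibrato width reported in the abstract, in cents

upper = f0 * 2 ** (width / 2 / 1200)   # half of the width above the in-tune pitch
lower = f0 * 2 ** (-width / 2 / 1200)  # half of the width below it
print(round(lower, 2), round(upper, 2), round(cents(upper, lower), 1))
# ~97.46 Hz to ~98.54 Hz, i.e. an oscillation 19 cents wide around 98 Hz
```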
Music
Double Bass, Orchestra, String Instrument, String Music Education, Tempo, Vibrato
March 13, 2012.
A Dissertation submitted to the College of Music in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Includes bibliographical references.
Alice-Ann Darrow, Professor Directing Dissertation; Melanie Punter, University Representative; John Geringer, Committee Member; Steven Kelly, Committee Member.
Florida State University
FSU_migr_etd-5036
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). The copyright in theses and dissertations completed at Florida State University is held by the students who author them. | https://diginole.lib.fsu.edu/islandora/object/fsu%3A183011 |
December 2009 issue
OECD Information and Communication Policy News
Communication Infrastructures & Services - Information Economy - Security & Privacy - Consumer Protection - Statistics & Indicators
In this issue:
UPCOMING EVENT
NEW PUBLICATIONS
Background Report on Empowering E-Consumers, Nov. 2009
The report serves as background for the OECD Conference on E-consumers. It examines a series of emerging issues including vulnerable consumers such as children, new forms of interactions for example consumer-to-consumer e-commerce or product ratings, consumer participation through new channels such as social networking sites, and access to a variety of digital products including films, videos, and music.
New Policy Recommendations on Consumer Education, Nov. 2009
Governments, working together with consumer organisations, teachers’ and parents’ associations and other civil society groups, should do more to promote consumer education. They should help consumers develop critical thinking and raise awareness of potential issues, according to new policy recommendations recently endorsed by the OECD's Committee on Consumer Policy.
The Impact of the Crisis on ICT and ICT-Related Employment, Oct. 2009
Employment is continuing to drop in the information and communications technology (ICT) goods sector and remaining flat in most ICT services, according to a new OECD report. Employment in ICT manufacturing dropped by around 6-7% year-on-year in most countries in the second quarter of 2009. Even Chinese ICT manufacturing employment has dropped 5% in the second quarter compared to second quarter 2008.
Measuring the Relationship Between ICT and the Environment, Jul. 2009
While the links between ICT and environmental outcomes are becoming clearer, there is no separate statistical field that links the two. This report explores relevant official statistics and suggests a conceptual framework for a new statistical field 'ICT and the environment'. The report makes recommendations on how to improve statistical collection to better measure the links between ICT and environmental outcomes.
RECENT EVENTS
OECD Open Forum on the Importance of Internet Access and Openness for a sustainable recovery, IGF, Egypt, 17 Nov. 2009
Participants at the Forum discussed the role of the Internet and broadband in the economic recovery as enabling other sectors -healthcare, education, or smart transportation and electricity grids- to develop. They agreed a multistakeholder approach is needed to address current challenges, in particular the global deployment of the newer version of the Internet protocol (IPv6) and ensuring the Internet remains open to innovation.
Workshop on Using ICTs and the Internet to Meet Environmental Challenges, Egypt, 17 Nov. 2009
This workshop focused on policies and next steps for improving environmental performance, making more effective use of ICTs and the Internet in meeting environmental targets, harnessing the ICT sector’s potential for sustainable growth and employment, and underpinning and speeding green growth.
OECD/InfoDev Workshop on Expanding Access to the Internet and Broadband for Development at the Internet Governance Forum, Egypt, 16 Nov. 2009
This workshop focussed on the spread of mobile throughout the developing world based on prepaid services and the budget telecom network model, which exploits long-tail markets. Participants discussed the importance of effective competition, of access to spectrum, of removing bottlenecks in international connectivity, and discussed the merits of rationalising taxation in the ICT sector, including by phasing out universal service levies.
Workshop on Global ICT Services Sourcing Post-Crisis: Trends and Developments, Egypt, 14 Nov. 2009
This workshop examined recent trends in international sourcing in view of the crisis. It concluded that despite global turbulence in ICT services markets, prospects for services supply from emerging and developing countries remain bright, provided the right policy framework is in place. The Workshop was opened by Dr Tarek Mohamed Kamel, Minister of Communications and IT, Egypt, and benefited from high-level presentations and debate.
“Green ICT” Side-Event at UN Climate Change Talks in Barcelona, 5 Nov. 2009
Innovation in ICTs is key to achieving ambitious CO2 emissions reductions. At the UN Climate Change talks in Barcelona - four weeks ahead of COP15 - the OECD, ITU, GeSI and international partners discussed how ICTs can help combat global warming. Key messages
Cloud Computing Technology Foresight Forum, Paris, 14 Oct. 2009
OECD/infoDev Workshop on 'ICT for Development: Improving Policy Coherence', Paris 10-11 Sep. 2009
| http://www.oecd.org/internet/broadband/oecdinformationandcommunicationpolicynews-december2009.htm |
The Red Fort has now been renovated, which has added to its beauty. The Red Fort, known to all as the Lal Qila, lies on Netaji Subhash Marg in New Delhi, stretching towards Old Delhi next to Chandni Chowk, and can be reached by the Metro, with Kashmiri Gate as the nearest station. It is the seventh fort of Delhi; construction began in 1639 and took ten years, finishing in 1648, in what was named the 'Mughal city' and the 'seventh city of Delhi'. The palaces and other city buildings are constructed of beautiful white marble. This 16th-century Mughal monument is situated near the gardens of the Taj Mahal. They are the Agra Fort and the Red Fort. All of it took place at the behest of the British. The foundation stone of the Fort was laid in 1639, and it was completed after about nine years. The Red Fort (Delhi ka Lal Kila) is a very old and must-visit tourist place in Delhi and a splendid masterpiece of Mughal architecture in India. Yes, it is called the Red Fort, but it was not originally built that way. The Agra Fort stands on an ancient site just by the river Yamuna. The Red Fort was the palace for Mughal Emperor Shah Jahan's new capital, Shahjahanabad, the seventh city in the …. The complex of buildings inside the fort, which is reminiscent of Persian- and Timurid-style architecture, forms a city within a city. The Red Fort is a great historical monument of India. The 'Red Fort' was built by the Mughal Emperor Shah Jahan. It stands on the bank of the river Yamuna.
This Red Fort essay, in Hindi and English, gives short information about the Lal Kila (Red Fort). It consisted of the Diwan-e-Aam, the Diwan-e-Khas, the Rang Mahal, etc. This period is …. Source: agra.nic. It is a sign of Mughal power and majesty. It is situated on the south bank of the Yamuna River in Agra, Uttar Pradesh. The Kohinoor diamond today is found in the British crown. The architecture of this building shows a splendid use of red stone and marble work. The history of and information about the Agra Fort (Agra Qila) are also covered. India has a lot to offer as a tourist destination, even now with many places untouched by the tourism industry. Tourists cannot explore all the territory of the fort.
It attracts tourists from India and abroad. The Red Fort is also known as the Lal Quila and is the landmark of Delhi. Every 15 August, on the occasion of the national Independence Day, the Prime Minister of India addresses the nation from this Red Fort in New Delhi. Lahore Gate and Delhi Gate are the two gates of the Red Fort. From the fort to the Taj Mahal the distance was 2.5 km, and every single day the emperor Shah Jahan used to watch the progress of the Taj Mahal. Not only the name but also the colour of the fort was changed to red. The Red Fort is situated in Delhi and the Agra Fort is located in Agra. The sanctuary is roofed with three bulbous domes built of light white marble that stand on the red sandstone walls. The Taj Mahal is a great Indian monument which attracts people from all over the world every year. It is situated at least 2.5 km away from the Agra Fort. We visited the Red Fort first. The Red Fort as we know it was actually called Qila-e-Mubarak, or the blessed fort. This massive red sandstone fort, the Agra Fort, was built on the banks of the Yamuna River in 1565 by the Mughal emperor Akbar. It is a well-known monument and famous all over the world. Besides the Taj Mahal, this fort is also an important monument in Agra. The Agra Fort is enclosed by a double battlemented massive wall of red sandstone. | http://wolferecords.com/uncategorized/essay-on-red-fort-agra |
Title: Analyzing Neuroimaging Data Through Recurrent Deep Learning Models
(Submitted on 23 Oct 2018 (v1), last revised 5 Apr 2019 (this version, v2))
Abstract: The application of deep learning (DL) models to neuroimaging data poses several challenges, due to the high dimensionality, low sample size and complex temporo-spatial dependency structure of these datasets. Even further, DL models act as black-box models, impeding insight into the association of cognitive state and brain activity. To approach these challenges, we introduce the DeepLight framework, which utilizes long short-term memory (LSTM) based DL models to analyze whole-brain functional Magnetic Resonance Imaging (fMRI) data. To decode a cognitive state (e.g., seeing the image of a house), DeepLight separates the fMRI volume into a sequence of axial brain slices, which is then sequentially processed by an LSTM. To maintain interpretability, DeepLight adapts the layer-wise relevance propagation (LRP) technique, thereby decomposing its decoding decision into the contributions of the single input voxels to this decision. Importantly, the decomposition is performed on the level of single fMRI volumes, enabling DeepLight to study the associations between cognitive state and brain activity on several levels of data granularity, from the level of the group down to the level of single time points. To demonstrate the versatility of DeepLight, we apply it to a large fMRI dataset of the Human Connectome Project. We show that DeepLight outperforms conventional approaches of uni- and multivariate fMRI analysis in decoding the cognitive states and in identifying the physiologically appropriate brain regions associated with these states. We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples. | https://arxiv.org/abs/1810.09945 |
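The following is a minimal, hypothetical sketch (not the authors' code) of the slice-wise LSTM decoding idea described in the abstract above, written in Python with PyTorch; the slice dimensions, hidden size, and number of cognitive states are illustrative assumptions, and the LRP interpretability step is omitted:

```python
# Hypothetical sketch: decode a cognitive state from one fMRI volume by feeding
# its axial slices, one per time step, into an LSTM, then classifying the final
# hidden state. Shapes follow a 91 x 109 x 91 MNI-like grid (an assumption).
import torch
import torch.nn as nn

class SliceLSTMDecoder(nn.Module):
    def __init__(self, slice_height=91, slice_width=109, hidden_size=128, n_states=4):
        super().__init__()
        self.input_size = slice_height * slice_width   # each slice is flattened
        self.lstm = nn.LSTM(self.input_size, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_states)

    def forward(self, volume):
        # volume: (batch, n_slices, height, width), one fMRI volume per sample
        batch, n_slices, h, w = volume.shape
        slices = volume.reshape(batch, n_slices, h * w)  # sequence of flattened slices
        _, (h_n, _) = self.lstm(slices)                  # h_n: (1, batch, hidden_size)
        return self.classifier(h_n[-1])                  # logits over cognitive states

# Toy usage with random numbers standing in for a preprocessed fMRI volume.
model = SliceLSTMDecoder()
fake_volume = torch.randn(2, 91, 91, 109)   # (batch, axial slices, height, width)
print(model(fake_volume).shape)             # torch.Size([2, 4])
```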
Tutoring students in a one-to-one setting without typical classroom constraints has its advantages. I enjoy being able to select appropriate materials, tailor activities to student interests, and address skills without the pressure of teaching the core curriculum. On the other hand, I am frequently in the same battle as resource teachers and other specialists. Homework and projects routinely impact my valuable time with students. You know that I am not keen on homework, if you’ve been following this blog. After an hour or more of tutoring, I don’t want my students to face a stack of homework, so I typically assist them to complete it as quickly as possible during our session. But the disconnect between students’ skills and their homework drives me NUTS!
Here’s what happened today: I was teaching a fourth grader who is struggling with math. I wanted to continue our work on place value and rounding numbers. Instead, I checked his homework and took a deep breath. It was algebra (or “algebraic,” as he told me). Knowing that he works much better on frustrating tasks with me than his parents (it was that way with my own kiddos), I decided to bite the bullet. Here is a sample problem: Sue had 5 times more pencils than Nate. Together they had 18 pencils. How many pencils does Sue have? How many more pencils does she have than Nate? My student was required to model the problem using symbols and write three or four equations to demonstrate how he solved it.
I imagine some kids in his class are totally ready for that problem. But my student was not. He had no idea where to start, was dealing with abstract procedures that made no sense to him, and didn’t have sufficient opportunities to work with manipulatives (and perhaps understand) what “5 times more” actually means. This is a student who does not know when to add or subtract. Not only did we lose valuable instructional time on the skills which match his current math understandings, but he needed two brain breaks in order to survive that portion of our session. And what does he know after our “guided practice?” Not a lot.
I was facing the dilemma described in an interesting article called “The Hard Part” (thank you, Tony’s mom!). In his column in the Huffington Post, Peter Greene writes about teaching: “The hard part of teaching is coming to grips with this: There is never enough. There is never enough time. There are never enough resources. There is never enough you.” Indeed!
I do understand that the classroom teacher has her own constraints. She is required to teach “algebraic” for a short period of time and then assess, assess, and reassess. How can she “individualize” the above assignment for my student when it is totally inappropriate for his current level of functioning? He needs more opportunities to model multiplication, much less solving problems with variables. His dilemma reminds me of my post from yesterday on “How The Brain Learns Mathematics” by David Sousa. Sousa describes prerequisite skills for learning mathematics successfully, including the ability to visualize and manipulate mental pictures and the ability to reason deductively and inductively. My 4th grader is particularly weak in those skills. When will he have time to catch up? Isn’t that what summers are for? | https://teachezwell.me/2014/11/18/brain-based-teaching-of-mathematics-dyscalculia-insufficient-time-for-teaching/ |
In the world of science, recognition of scientific performance is strongly correlated with publication visibility and interest generated among other researchers, which is evidenced by downloads and citations. A published paper's number of downloads and citations are the best indices of its importance and are useful measures of the researchers' performance. However, the published paper should be evaluated and indexed independently, and the prestige of the journal in which it is published should not influence the value of the paper itself. By participating in and presenting at congresses and international meetings, scientists strongly increase the visibility of their results and recognition of their research; this also promotes their publications. Status in Research Gate (RG), the so-called RG Score, the Percentile, and the h-index give researchers feedback about their performance, or their place and prestige within the scientific community. RG has become an excellent tool for disseminating scientific results and connecting researchers worldwide. RG also allows researchers to present achievements other than publications (e.g., membership in recognized associations such as the American Chemical Society, a biography in Marquis Who's Who in the World, awards received, and/or ongoing projects). This paper discusses questions regarding how the RG Score, Percentile, and h-index are calculated, whether these methods are correct, and alternative criteria. RG also lists papers with falsified results and the journals that publish them. Thus, it may be appropriate to reduce the indices for such journals, authors, and the institutions with which these authors are affiliated.
Keywords
Education Institution, Quality of Publication, Recognition in Scientific Community, Criteria of Judgment for Publication, Indexing, Falsified Research
1. Introduction
Publications in high-indexed journals are rewarded by RG with a higher ranking and are more highly valued within academic institutions such as universities and research institutes. However, the methods by which a journal’s index is calculated and whether it is appropriate are open questions. Other questions include whether the journal’s index should include falsifications and forgeries, whether these should be retained after the forgeries have been disclosed (i.e., when the forged papers are not retracted by the journal), and what influence this should have on the journal’s index. These issues concern all sciences, especially those for which accurate reporting of scientific results has a crucial impact on development of new knowledge and replication, including support for or renunciation of recent discoveries. This process is important in 1) chemistry, including analytical chemistry, 2) other branches important to the medical sciences, such as toxicology (especially occupational toxicology), 3) experimental physics, and many others. Notable examples of inaccurate reporting practices within medical toxicology have included the side effects of thalidomide and the high prevalence of contamination of primaquine with quinocide . These publications led to the prevention of crippling of thousands of patients due to contaminated primaquine. Should the journals which published forgeries suffer reduced index values, especially in cases where they hid and retained such forgeries intentionally? Should those journals which have denied publication of the truth suffer reduction of their index values? Should those journals which have published the truth and divulged the fakes be rewarded with higher index values?
With the disintegration of the USSR in the 1990s, hundreds of newly established colleges and universities began to appear in its former territories. The same process occurred following WWII, after disintegration of the colonial system in developing nations. Many of those newly established centers of education lacked sufficient scientific ethics and traditions of scientific quality and morals . We have an international scale to measure the performance of scientific institutions and universities. The indexing of publications must therefore be normalized in accordance with the source of their production (i.e., the educational or research center) but not in accordance with where it is presented (i.e., the journal of publication). Indexing should be based on the same criteria that have been accepted for the production of industrial goods within the international marketplace. Goods produced by well-known manufacturers cost more than replica goods, goods produced without a license, and goods from less qualified locations. The RG Score, Percentile, and h-index of publications submitted from one of the top 100 institutions within the international scale should be multiplied by 1.0; those submitted from secondary institutions (i.e., the next 100) should be multiplied by 0.75; those submitted from tertiary institutions should be multiplied by 0.5, etc. The rationale for normalizing publication values in this way is that research carried out at the lower-tiered institutions follows a less critical approach to the scientific process (see examples below).
One example of this is publication of the "super discovery" of thin-layer chromatographic enantiomeric separations without using selectors in the mobile or stationary phases and without transforming the substances to diastereomers. Critiques of this amazing publication were subsequently published and presented. Another example is the inclination of scientists working at institutions that have a lower international scale (i.e., have a worse reputation) to accept industry bribes, in the form of grants, for presenting analyses indicating that medications are of better quality than the research actually showed. The researchers, journal reviewers, and Editor-in-Chief who published the capillary electrophoresis results were all involved in obvious fraud, a deep scientific quagmire. As if this was not enough, a variation of the same fraudulent text was published again. The fraud was exposed, and a critique of these two publications was also presented. Involvement of the pharmaceutical industry in false publications about drug quality, via bribery of scientists with low ethics who work at lower-tiered institutions, has also been described.
The Nobel Prize is an ultimate recognition of scientific and other human performance, yet even this distinction is vulnerable to fraud, especially when awarded for political reasons. If respectable researchers publish in respectable journals, this does not guarantee that the truth is presented in these “scientific research” texts (some of which are more accurately described as pseudoscientific research), as shown in the case of Linda Buck, who shared the 2004 Nobel Prize in Physiology or Medicine. She was exposed for fraud in two papers, published in 2005 and 2006, which formed the basis for granting her the Nobel Prize. She later retracted both publications. Both retractions—from the Proceedings of the National Academy of Sciences (PNAS) and Science —nullified both her research and the basis for her decoration with the Nobel Prize. Thus, even publication in prestigious journals such as PNAS and Science and decoration with the Nobel Prize is insufficient to ensure the quality and truth of published data. There have been especially negative effects of such politically or gender motivated decisions by members of committees granting the Nobel Prize for Science (in Sweden) and for Social Science (in Norway). Albert Einstein has never been decorated with the Nobel Prize in Physics for his Theory of Relativity. He received the Nobel Prize for his “services to theoretical physics, and especially for his discovery of the law of the photoelectric effect.” This occurred due to a misunderstanding of relativity by many scientists, and misunderstanding of the paragraph in the constitution of the Nobel Prize stating that achievements must be supported by proof with time. However, this did not prevent Linda Buck from receiving the Nobel Prize (despite the fact that only a short time had passed since her publications and no support for her data was available). The same has occurred regarding politicians whose future achievements were predicted, yet unsupported by proof at the time of the award. US President Obama received the Nobel Prize for his future efforts to establish peace around the globe. He was rewarded with the Nobel Prize in advance, before he did anything and despite despicable military and other acts against countries around the globe that occurred later during his time in office. The Nobel Prize committee apparently could not wait several years for the proof of time, in accordance with its constitution, to award the Nobel Prize to Linda Buck, Barack Obama, or Al Gore.
It is shameful when, despite well-documented cases of fraudulent publication, the Editor-in-Chief of a journal conceals the fraud and denies strong arguments demanding that this fraud be withdrawn; such acts, reflecting lack of trustworthiness of the publisher, deserve a reduced index for the journal. Editor-in-Chief Bezhan Chankvetadze of the Journal of Pharmaceutical and Biomedical Analysis, published by Elsevier, concealed a fraudulent publication (perhaps an unsurprisingly corrupt action, since Mr. Chankvetadze is a compatriot of former Soviet dictator Josef Stalin, in a society where fraud and corruption were common). The fraudulent, concealed publication and documentation of the fraud have been published in .
No argument or publication in the international scientific literature can help in the struggle to restore truth or lead to results, because the representatives of this deep scientific and publishing house entanglement are supported by a corrupt industry and collusion. For example, one paper, a stolen text which was later published, was submitted to the Asian Journal of Chemistry, where it was reviewed by Dr. Surendra Prasad from Fiji and Editor-in-Chief Mr. Agrewal, both of whom were in possession of the submitted and unpublished original manuscript, which had been disseminated or possibly even sold. The author of the fraudulent paper, Mr. Dongre, the reviewer, and the Editor-in-Chief were guarded from disclosure of their activities by their society of corrupt colleagues in a broad, deep publication industry morass. However, fresh forces exist beyond this deep scientific quagmire. Disclosure of the facts about this fraud to the rector of Mumbai University resulted in deletion of the fraudulent paper from Mr. Dongre's list of publications, and he was later fired from his position at Mumbai University. The facts surrounding this fraud and the misuse of trust by the reviewer and Editor-in-Chief of the Asian Journal of Chemistry were disclosed in a letter to the Department of Higher Education, Central Universities, Ministry of Human Resource Development in India, resulting in the closure of the Asian Journal of Chemistry, eliminating this dirty business of selling the texts of submitted manuscripts. Another example of shameful fraud is the publication of two papers through a collaboration with the Editor-in-Chief of the journal Current Chromatography, Mr. Nesterenko, who then retained one of the authors of these papers, Mr. Aboul-Enein, as Associate Editor for the journal. More information and a critique of this event have also been published.
2. Indexing
Indexing of journals should not be based on duration of publication of journal, since even one year can be enough if the articles in the journal are of sufficiently high professional quality and interest. Rather, high professional interest in the published papers should be the main criterion for journal indexing. In addition, the index should only reflect interest in the papers in the journal.
The existing practice of rewarding scientists with university permission to use grant funds to publish in a journal of distinction, or in a journal published by a publisher of distinction, is wrong. It is wrong because the journals or publishers themselves are often connected by interests with universities, and vice versa, and thus they support one another with both high-quality and fraudulent publications. This deep scientific mire was established centuries ago.
Scientists should have freedom to choose their publishers and journals without university pressure. The rationale for abolishing existing practices is the partiality of some scientists who serve on the editorial boards of university-recommended journals. These scientists provide their universities with lists of journals with which they are affiliated, and in which scientists can publish papers using grant subsidies. In this way, universities are subsidizing these journals and the journals, in turn, are promoting the university’s scientists. This practice is corrupt and unacceptable. A journal’s prestige and indexing should be based on the value of and professional interest in the articles it contains—the prestige of a paper should not be based on the prestige and indexing of the journal in which it is published. It is simple to measure the usefulness of and professional interest in a paper based on its number of views, downloads, and citations. These are the only correct judgment values. For example, one paper was viewed 55,596 times, downloaded 54,800 times, and cited by Google Scholar 9 times and by CrossRef 3 times; another paper was viewed 35,234 times, downloaded 17,326 times, and cited by Google Scholar 139 times and by CrossRef 80 times; a third paper was viewed 25,108 times, downloaded 22,066 times, and cited by Google Scholar 3 times and by CrossRef twice; a fourth paper was viewed 19,580 times, downloaded 7,959 times, and cited by Google Scholar 59 times and by CrossRef 42 times. Professional interest in these papers can be evaluated based on the total number of views and downloads using the index of views and downloads (V/D index) which, for these papers, are 1.0145 , 2.0336 , 1.1379 , and 2.4601 . The paper with the V/D index closest to 1 has shown the highest professional interest.
Why Are Total V and D, Indexing by V/D, and Citation Important?
In the pre-Internet days, scientists used printed catalogs such as SciFinder Scholar to access short descriptions of published papers and their citations. These accessible guides allowed scientists to decide whether a paper was useful and interesting to them and whether they should read similar publications (i.e., those cited in the paper). If so, such services allowed scientists to send a postcard of request to the author; for the author, the number of postcards they received was a success indicator. In the Internet era, open access journals have especially good statistics about the numbers of views, downloads, and citations for each published paper. The drawback of ResearchGate (RG) is its lack of an overview of the total number of views and downloads, because it counts only those made through RG. The same drawback applies to Google Scholar and others.
As such, popularity and availability influence the indexing results. If a single index is to provide a real measure of the success of a paper or a scientist, the total numbers of views, downloads, and citations from all sources should be included.
3. Research Gate
RG is a unique tool that allows scientists to judge their own and others’ performance and quality. However, the practice of ranking journals and publications with the RG Score, Percentile, and h-index should be more objective and independent of pre-existing indices. Along with RG information, every connected researcher can post their projects under development, questions, and answers, allowing them to communicate with other scientists and to provide and discover papers free of charge. In this way, RG may obtain a leading role as an institution for the development of scientific reflection and connections.
4. Conclusions
Journal indexing should be dynamic and re-evaluated annually. Indices must reflect both the interest in the papers a journal publishes and the fraudulent papers it publishes, especially those renounced publicly as fraud rather than retracted by the Editor-in-Chief. Neglecting to retract a fraudulent paper forthwith should be punished by zeroing that journal’s index.
A list of researchers, reviewers, editors, and especially Editors-in-Chief who have participated in publishing fraudulent papers should be published by RG and elsewhere (for example, the paper , which publicly exposed two fraudulent publications in ). This is the best method to drain the deep scientific mire of fraud, corrupt editors, and corrupt journals.
RG should be further developed as a complete system (i.e., an institution) with independent scales for judging scientific presentations and annual reports of the best publications, which will allow scientists a broad arena for discussions about improving the system of recognition within RG, such as the RG Score, Percentile, and h-index, as well as a venue for disclosing fraud. RG should allow the opportunity to disclose fraud in scientific publications, academic societies, the scientific community, and educational and research institutions. Direct use of a journal’s index by RG for calculating the RG Score, Percentile, and h-index, by which scientists are recognized in RG, should be changed to the total V and D, the V/D, and the number of citations.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
| https://scirp.org/journal/paperinformation.aspx?paperid=97863
Our subject choice guidance
The choices students make at the start of their post-16 education can have a significant impact upon their options at degree level. Recent research has reiterated this, with as many as one in five students reporting that they were unable to study their preferred degree subjects because they had chosen the wrong A Levels, and two in five reporting that they would have chosen different subjects had they received better advice.
At Lucy Cavendish College, we want to support students and their teachers to identify the most useful A Level (or equivalent) combinations for their preferred degree options. The guidance below is written to that purpose. Please note that it is not intended to suggest that there are compulsory combinations that must be studied, or that it is impossible to receive an offer with alternative combinations. Rather, it is intended to provide clear, unambiguous advice that is helpful in the majority of cases. Students should consider this alongside their own preferences and the subjects in which they attain the highest grades.
Please see below for guidance that relates largely to A-level subject choice. We aim to release more comprehensive guidance on International Baccalaureate (IB) and Scottish qualifications soon. We are also developing a separate page for students offering international qualifications.
General Guidance
According to recent data, less than five percent of UK students now take four or more A Levels. University entry requirements have adapted accordingly and it is now possible to put in a very competitive application to all subjects on the basis of just three A Levels. There are only three subjects at Cambridge where four can be an advantage – Chemical Engineering, Computer Science and Physical Natural Sciences (although it is less essential for the latter two). In the vast majority of applications, therefore, it conveys no advantage to be studying four subjects instead of three. Indeed, in general, universities would rather students focussed on fewer subjects and achieved higher grades. However, it is essential to ensure that students taking three select a competitive combination of subjects and that there are not one or more which are ‘out of place’ within this. Students should therefore consider which subjects they enjoy, which ones represent their academic strengths and which ones form a solid and relevant academic foundation for the course they wish to study at university. Taking only three subjects makes it more difficult to keep students’ options open, particularly to select combinations that would open up both arts/humanities and science/maths degrees. Our advice to students applying to the most competitive universities is that it is in their interests to work out which of these broad fields they wish to follow prior to selecting their A Levels and to choose their subjects accordingly. In our experience, when students seek to be able to keep both arts/humanities and sciences/maths open, they in practice reduce their options, particularly on the sciences/maths side.
If students feel their interests are drawing them towards a degree in the arts, humanities or social sciences, they would be well advised to pick from a range of academic, essay-based subjects in the first instance. They may also wish to add in one or more science or mathematics subjects if they are skilled at these subjects, too. This will not harm their application to an arts, humanities or social sciences course as long as it is not instead of a crucial, relevant subject. Indeed, it may even strengthen it – subjects such as languages, Law or Philosophy, for instance, place great value on the thinking skills gained by studying Mathematics, whilst others, such as Archaeology or Anthropology, benefit from a knowledge of sciences like Biology. If a student is talented at Mathematics in particular, this will never be detrimental to an application. Beyond that, any academic subjects which build relevant skills and knowledge will be beneficial.
Broadly speaking, degrees in the sciences and mathematics tend to be more numerically competitive, so it is advisable to use A Level choices to consolidate students’ knowledge and skills and to form the most appropriate academic foundation possible. We would advise any student interested in a science or mathematics degree at a competitive university to be taking at least three science and mathematics A Levels (i.e. three from Biology, Chemistry, Mathematics and Physics). It is definitely possible to secure an offer with just two, but it is less likely, certainly at Oxbridge. Above all, this is about seeking to present as competitive an application as possible and we find that students who study more science and mathematics at A Level tend to do better in the admissions process than those who do not. Furthermore, for students interested in the physical sciences – physics, engineering, computer science, etc. – and mathematics itself, or related disciplines, we would also strongly recommend Further Mathematics. It is compulsory for Mathematics at Cambridge.
Regardless of which academic field students are interested in, we would always advise taking one or more strong, academic subjects as a foundation for students’ A Levels. We call these subjects ‘keystones’. They are as follows:
Arts, Humanities and Social Sciences ‘Keystones’: English Literature, History, Mathematics, Languages

Sciences and Mathematics ‘Keystones’: Biology, Chemistry, Mathematics, Physics
We would recommend at least one of these, plus two or more additional, relevant, academic subjects as the foundation for every student’s choices.
There are some subjects which are generally less competitive as one of ‘only’ three A Level choices for applications to Oxbridge. These are Art & Design (unless applying for an art-related degree course, such as Architecture at Cambridge or Fine Art at Oxford), Business, Criminology, Drama and Theatre (unless applying for related courses, such as the English, Drama and the Arts track in Cambridge’s Education degree), Film Studies, Law, Media Studies, Photography and Physical Education. We would also advise against any Vocational Level 3 courses as part of ‘only’ three choices – these do not meet the Oxbridge entry requirements. Students wishing to take one of these subjects and apply to Oxbridge or the most competitive Russell Group universities would be well advised to do so alongside three academic A Levels, rather than as one of three.
Cambridge Course-Specific Guidance
All Cambridge courses
Please see here for our full guidance on A-level combinations for all Cambridge courses.
Applying for Competitive Subjects at Cambridge
On average across all subjects, Cambridge receives roughly five applicants per place available. However, there are certain courses that receive a much higher number of applicants per place, or are particularly competitive given the high quality of the applications received. Students applying for these courses often ask how they can ‘stand out’. One way to do so is to ensure that your subject combination in your post-16 qualifications provides as appropriate and solid a foundation as possible for your preferred course. The information below suggests ways to do this. It is not intended to be prescriptive and there are always alternative combinations which may also be appropriate – prospective applicants should get in touch with us if they want to discuss their options before or after making their choices. However, we hope that in the majority of cases it will be a helpful guide.
Please see here for the full guide for competitive subjects.
Getting Support
We hope the advice above is useful in helping students and their advisors to identify the most suitable A Level subjects that will enable them to make the most competitive applications for the university courses in which they are interested. If you would like further, individual guidance, please do not hesitate to email us at [email protected] or [email protected] or to book in to one of our regular, online admissions clinics. Take a look at our visits and events page to book on. | https://www.lucy.cam.ac.uk/subject-choice-guidance
I have often heard the vague assertion that a liberal arts degree will best prepare you for a job. But I’m more interested in concrete results. Will it get you hired? That information is surprisingly hard to find. So I was excited to find referenced sources on the Phi Beta Kappa (liberal arts and sciences honor society) website, on a page called their toolkit. (You might want to start with the first post in this series, What is a liberal arts degree?)
I started out by looking at the references for statements that sounded substantial, only to discover that I didn’t find their sources to be very rigorous. (That’s a nice way to say that they looked too vague to convince me that they meant anything. They were only barely better than worthless.) Frustrated, I decided to quit reading the conclusions on the PBK website and just jump to their referenced sources and finish them in order. Were the statements on the website based on anything that seemed to provide substantial evidence?
Let’s look at the next reference.
AAC&U and NCHEMS, “Liberal Arts Graduates and Employment: Setting the Record Straight,” Washington, D.C.: 2014
You have to pay to see the full report, so I looked at the brochure with selected findings.
The first data given is from the source AAC&U, “It Takes More Than A Major,” which I looked at earlier and found to be largely irrelevant.
Among some of the other data, I found this frustrating assertion as a title to an infographic: “Drivers of US Intellectual Capital: More Liberal Arts and Sciences Majors Attain Advanced Degrees.”
Of course a large number of liberal arts and science majors get advanced degrees! Because they can’t get jobs with their undergraduate liberal arts and science degree. In case you’re wondering, given the cost of an undergraduate degree, I don’t find a major that leads you to get a graduate degree to necessarily be a good thing. (But a university trying to make money from graduated students would think so!)
It follows that I don’t really care about the presented data that graduate degrees increase earnings either. To really mean something, the earnings would have to factor in the cost of obtaining that extra education, as well as the years of studying with little or no income. (Although I will admit that I would be alarmed if you didn’t get a salary bump for going to even more school. Data that showed no salary increase would be alarming! So I suppose it does have some usefulness.)
Further in the brochure was the one data set that I thought might be interesting: “Liberal Arts and Sciences Majors Close Earnings Gaps with Professional Majors,” from the brochure of selected findings for AAC&U and NCHEMS, “Liberal Arts Graduates and Employment: Setting the Record Straight,” Washington, D.C.: 2014.
On a graph they give the starting salaries for the Humanities and Social Sciences: $26K, Professional and Preprofessional: $31K, Physical Sciences, Natural Sciences, and Mathematics, $26K. To sort that out, the liberal arts and science degrees start off with the same salaries, $26K, compared to the Preprofessional degrees, $31K.
Then they show what they call “peak earning ages 56-61,” which might be more appropriately titled “average salaries during the ages of 56-61,” unless we’re going to really justify their label. Anyway, moving past that, the earnings are Humanities and Social Sciences: $66K, Professional and Preprofessional: $64K, Physical Sciences, Natural Sciences, and Mathematics, $87K.
I have no idea why they didn’t arrange that Professional $64K, Humanities and Social Sciences $66K, Math and Science $87K, because it seems like it would have emphasized their point better to group the liberal arts and sciences on the end. But whatever.
Next up was to make sure I understood the meaning of professional or preprofessional degree. And I did. The most common preprofessional degrees are preMed, preLaw, and prePharmacy. That means that you would have to assume that at least some of those students then went on to become pharmacists, lawyers, and doctors. I think we'll all agree that there are very few doctors and lawyers making $64K a year when they're 60. Maybe they are adjusting the yearly salary with Medical and Law school loan repayment? That is only kind of a joke. Another reason could be that a low percentage go on to get their professional degree? While this might seem like I'm trying to add to the data here, it's more of me trying to decide if I believe this data.
So to make sure we get this straight, we’re comparing earnings at 56-61 among people who graduated with preprofessional degrees and the liberal arts and sciences degrees which include, humanities and social sciences, and science and math degrees.
While the difference between the humanities and social sciences $66K and the Professional $64K might not look as impressive a gain over a pre-professional or professional degree (especially given that the starting gap is twice as large), let’s think about that $87K in the sciences, which does look like an impressive gain.
From my experience, I’m going to guess that the $87K is a result of PhDs, and more specifically PhDs like Physics and possibly chemistry pharmaceutical research. To get their PhD, they would have been in school just as long (or longer) than those with “professional” degrees. And who knows how many of those people thought they could get a job with their undergraduate degree, and then found that their only option was to get a graduate degree, which would put off their accumulation of earnings. I also know firsthand that PhDs in physics also end up getting hired in places like banking and computer programming.
This is interesting, because while I'm trying to look for numbers to prove that a liberal arts and science degree can make you employable, right there I'm reminded of a "liberal arts and sciences degree" that does lead to employment. However, I'm talking about Physics PhDs, which I would guess is not a large percentage of liberal arts degrees conferred. And it's not a BS or BA degree, it's a graduate degree that would require 4-7 years past college. It just goes to show that I'm probably trying to look for proof of something that's too broad. But I'll keep looking at this data nevertheless.
Now notice that they conveniently left off engineering and computer science from this graph. Also not included were business degrees.
So now I’m even less impressed.
What about you? What do you think? Does this data help convince you that a liberal arts degree will lead to higher earnings? (Yes, I realize there is more to a career than the money earned.) In my next post, I’ll look for more data. | https://www.highschoolcollegesuccess.com/liberal-arts-degree-earnings/ |
Pon has taken a number of precautionary measures in response to the coronavirus (COVID-19) and is closely monitoring the situation. Pon’s primary focus is to ensure the health and safety of our employees and business partners, and to mitigate the risk of spreading the virus.
The following measures apply to all Pon Cat staff, visitors and facilities including field workers, warehouse, workshops and offices.
The procedures are based on guidelines provided by Health Authorities, Ministries of Foreign Affairs and Pon Holding.
Visitors must confirm that they are not likely to carry the virus by answering a questionnaire prior to their visit or when entering the reception area.
All facility entrances will have hand disinfection available and posters with hand hygiene recommendations to prevent virus transmission.
Separate facilities have been provided for truck drivers.
Extensive use of home office working is encouraged.
Extended measures have been implemented regarding personal and on-site hygiene.
Extended measures have been implemented to ensure physical separation between employees in the canteen and workplace in general.
Strict restrictions have been put on travelling, receiving visitors and attending face-to-face meetings.
Employees in quarantine are required to stay away from offices and other employees according to the Pon guidelines and the recommendations of the Health Authorities.
Updated information is made available to all employees on the Pon intranet and via SMS/messaging.
Employees with health complaints are kept at home according to Health Authority guidelines as a precaution.
Orders outside the Netherlands & Norway are scheduled after April 6 or according to the latest instructions.
Employees abroad are brought back to the Netherlands & Norway.
Employees do not participate in group consultations or meetings.
Employees keep at least 1.5 meters away from others.
Employees will limit (customer) contact as much as possible to what is necessary.
Employees have the right to act according to the 'step back' principle when in doubt.
For ships and mobile manned installations, the three previous mooring or departure locations must be submitted in writing.
Pon is taking measures to ensure continuity in deliveries and support to all our customers. However, the situation can have an effect on the employability of service technicians, both during regular working hours and during breakdown service.
Pon is continuously evaluating and implementing appropriate risk mitigation measures.
Pon expects that our business partners have implemented similar measures to mitigate virus transmission. Pon will respect and follow such measures, just as Pon expects our business partners to follow our measures when dealing with our employees and visiting our facilities.
Above procedures are valid as long as the Health Authorities advise special precautions.
Sincerely,
Pon Power
Pon Equipment
| https://www.pon-cat.com/en-no/pon-power/nyheter/coronavirus-covid-19-precautions
Because radian measure is the ratio of two lengths, it is a unitless measure. For example, in [link] , suppose the radius were 2 inches and the distance along the arc were also 2 inches. When we calculate the radian measure of the angle, the “inches” cancel, and we have a result without units. Therefore, it is not necessary to write the label “radians” after a radian measure, and if we see an angle that is not labeled with “degrees” or the degree symbol, we can assume that it is a radian measure.
Considering the most basic case, the unit circle (a circle with radius 1), we know that 1 rotation equals 360 degrees. We can also track one rotation around a circle by finding the circumference, C = 2πr, and for the unit circle the circumference is 2π. These two different ways to track one rotation around a circle give us a way to convert from degrees to radians.
In addition to knowing the measurements in degrees and radians of a quarter revolution, a half revolution, and a full revolution, there are other frequently encountered angles in one revolution of a circle with which we should be familiar. It is common to encounter multiples of 30, 45, 60, and 90 degrees. These values are shown in [link] . Memorizing these angles will be very useful as we study the properties associated with angles.
Now, we can list the corresponding radian values for the common measures of a circle corresponding to those listed in [link] , which are shown in [link] . Be sure you can verify each of these measures.
Find the radian measure of one-third of a full rotation.
For any circle, the arc length along such a rotation would be one-third of the circumference. We know that the circumference of a circle of radius r is C = 2πr.
So, the arc length is s = (1/3)(2πr) = 2πr/3.
The radian measure would be the arc length divided by the radius: θ = s/r = (2πr/3)/r = 2π/3.
Find the radian measure of three-fourths of a full rotation.
Because degrees and radians both measure angles, we need to be able to convert between them. We can easily do so using a proportion, where θ is the measure of the angle in degrees and θ_R is the measure of the angle in radians: θ / 180 = θ_R / π.
This proportion shows that the measure of an angle in degrees divided by 180 equals the measure of the angle in radians divided by π. Or, phrased another way, degrees is to 180 as radians is to π.
To convert between degrees and radians, use the proportion θ / 180 = θ_R / π.
Convert each radian measure to degrees.
Because we are given radians and we want degrees, we should set up a proportion and solve it.
Convert radians to degrees.
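The specific values used in the two conversion examples above did not survive in this copy, so as a stand-in, here is a minimal Python sketch of the two conversions defined by the proportion, applied to the one-third-rotation example worked earlier; the function names are illustrative only.

```python
import math

def degrees_to_radians(degrees: float) -> float:
    # From the proportion: theta_degrees / 180 = theta_radians / pi
    return degrees * math.pi / 180

def radians_to_degrees(radians: float) -> float:
    # The same proportion, solved the other way around
    return radians * 180 / math.pi

# One-third of a full rotation, as in the worked example above:
# 120 degrees corresponds to 2*pi/3 radians.
print(degrees_to_radians(120))              # 2.0943951023931953 (= 2*pi/3)
print(radians_to_degrees(2 * math.pi / 3))  # about 120.0 degrees
```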
| https://www.jobilize.com/trigonometry/test/identifying-special-angles-measured-in-radians-by-openstax
Soil Sampling to Develop Nutrient Recommendations
https://ohioline.osu.edu/factsheet/AGF-513
Biomass Availability in Northwest Ohio
https://ohioline.osu.edu/factsheet/AEX-541
Team Dynamics online
https://leadershipcenter.osu.edu/events/team-dynamics-online-3
The Science of Human Sensory Measurements: Theoretical Principles and Industrial Applications
https://fst.osu.edu/courses/fdscte-7560
Soil Carbon Sequestration—Fundamentals
https://ohioline.osu.edu/factsheet/AEX-510
Using Cover Crops to Convert to No-till
https://ohioline.osu.edu/factsheet/SAG-11
Promising Practices and Benefits of 4-H Members Saying Thank You
https://ohioline.osu.edu/factsheet/4h-0055
Lime and the Home Lawn
https://ohioline.osu.edu/factsheet/hyg-4026
Comparing Traditional Meat and Plant-Based Meat
https://ohioline.osu.edu/factsheet/anr-0103-0
| https://agbmps.osu.edu/search/site/scenario%20soil%20test%20p%20levels%20between%2050%20ppm%20150%20ppm%20p%20risk%20index%20medium%20high?page=4&f%5B0%5D=hash%3Aajcnw6&f%5B1%5D=hash%3Al1wg85&f%5B2%5D=hash%3A2spxwe&f%5B3%5D=hash%3An0w4rm&f%5B4%5D=hash%3Ata6d9c&f%5B5%5D=hash%3Apr1wnh&f%5B6%5D=hash%3Abm1igs&f%5B7%5D=hash%3Ad6o9bp
Erin practices in the area of employment law and provides advice to clients on all aspects of the employment relationship. Having university qualifications in law, human resources management and industrial relations, Erin understands the complexities of people management and provides practical advice that helps her clients achieve their desired commercial outcomes.
Erin works with the senior management, in-house counsel and human resources personnel of her clients to assist with workplace issues including compliance with the Fair Work Act 2009 (Cth) and various awards, termination of employment, performance management, disciplinary matters, employee complaints and grievances, discrimination, restraint of trade, and redundancy.
Erin also has experience in drafting and advising on employment contracts, contractor agreements, company policies and employee handbooks. She works with her colleagues in our Corporate team to assist with the employment aspects of sale of business transactions and corporate restructures.
Erin has experience in conducting workplace investigations, including in respect of bullying and harassment, misconduct and employee grievances. She also prepares and conducts workplace training sessions in respect of bullying and harassment, workplace investigations and performance management of employees.
Erin acts for employers in respect of employment-related claims, such as unfair dismissal, general protections and breach of contract claims.
Erin has assisted both public and private sector employers in a range of industries, including hospitality, restaurants, professional services, mining, engineering, media and entertainment, advertising, manufacturing, private equity, insurance, transport, construction and infrastructure.
Employment law
Performance management
Bullying and harassment
Workplace investigations
Workplace training
Bachelor of Laws, University of Newcastle
Bachelor of Business (Industrial Relations and Human Resources Management), University of Newcastle
Admitted as a solicitor in New South Wales and in the High Court of Australia
Member of the Law Society of New South Wales and the Women Lawyers Assoc. | https://mccabecurwood.com.au/people/erin-kidd/ |
Chemistry curricula incorporate many abstract concepts that are important but difficult for students to understand. Educational researchers have recently begun to concentrate on the development of a wide variety of visualization tools and novel pedagogies to aid students in science learning at all levels. This study used the Roger Frost organic animation package to ascertain the impact of interactive computer visualization (ICV) in the teaching of organic chemistry at the Nigeria Certificate in Education and Degree levels. It adopted a quasi-experimental research design and used Structured Personal Data Questionnaire (SPDQ) and Semester Result (SR) for data collection. A mobile virtual classroom was created and used throughout the study. The students identified mechanism of reactions, cycloaddition reactions, synthesis of proteins and certain aspects of IUPAC nomenclature as challenging areas in their study of organic chemistry. After the three months teaching period using the organic simulation program, results from evaluating the students showed that interest and confidence in the selection and answering of questions from topics taught using ICV improved, the mean performances of the students almost doubled when ICV was introduced in the teaching program, a mean value of 48.20 out of 60 was obtained which indicated that the animated teaching was effective to a high extent and a general enhancement of the interest of students in the study of organic chemistry. The researchers therefore recommended that teaching of organic chemistry should be enriched with relevant illustrations, organic chemistry text books should be sold with CDs having computer simulations of organic reaction mechanisms for better appreciation. Virtual chemistry room for computer assisted instructions should be included in the design of chemistry laboratory to enhance instructional delivery using the 21st century pedagogy. Computational chemistry should be included in the university chemistry curriculum. Also basic computer knowledge by academic staff should be emphasized to enable them cope with the proposed digitalization of the course.
Chemistry is one of the important branches of science and occupies a central position in preparing students who wish to pursue careers in medicine, industrial chemistry, food science, engineering and other applied/related disciplines. Chemistry curricula commonly have many abstract concepts that cannot be easily understood if these underpinning concepts are not sufficiently grasped by the student [1, 2]. The abstract nature of chemistry concepts along with other learning difficulties means that chemistry classes require a high-level skill for proper application [3]. One of the essential characteristics of chemistry is the constant interplay between the macroscopic and microscopic levels of thought, and it is this aspect of chemistry learning that presents a significant challenge to novices [4]. The abstract concepts of chemistry require thinking on several levels, and organic chemistry is no exception to this.
Organic chemistry is a component of the first year General Chemistry curriculum in the undergraduate program of most African universities. Beginners in the learning of Organic chemistry usually have confusion and difficulty because there are no problem-solving algorithms, it requires three-dimensional thinking and has an extensive new vocabulary. One of the major difficulties for students in organic chemistry is the understanding of the three-dimensional nature of molecules which they have great difficulty converting between the two dimensional drawings used in text books and on classroom boards to represent molecules and their three-dimensional structures. Without this understanding, to survive the course, students have to memorize a large vocabulary of molecules and rules to pretend they understand the three-dimensional structures. The difficulty encountered by undergraduate students in understanding the course prevents many of them from continuing with this career path.
Educational researchers have recently begun to concentrate on the development of a wide variety of visualization tools and novel pedagogies to aid students in science learning at all levels. These tools describe a spectrum of learning environments that support many different types of visualization from concretizing abstract concepts to understanding spatial relationships. Tools are now available that allow students to visualize experimental data sets, simulate experiments, or construct models of imperceptible entities. Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization which involves visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning.
At their core, each of these tools presents students and instructors with several unique opportunities for teaching and learning science that allow students to visualize complex relationships directly from computer-generated visualization tools to enrich traditional pedagogies [5, 6, 7]. Bennett [8] has suggested urgent changes in the roles of teachers, students and computers, so that students would interact collaboratively with teachers and technology. Technological tools that integrate multiple representations provide students with opportunities to visualize chemistry and promote conceptual understanding [9].
A cursory look at the performance of chemistry students in A.I.F.C.E in Organic chemistry III (CHEM 323 for the degree program) and Natural Products and Amines (CHE 323 for the NCE program) from 2010-2012 leaves a lot to be desired in the learning of this area of chemistry. Observation was made of a progressive increase in the percentage of students failing the courses over the years, i.e., 7 %, 55 % and 61 % for CHEM 323 and 40 %, 47 % and 52 % for CHE 323 in 2010, 2011 and 2012 respectively. A number of chemistry tutorial software packages and applications have been developed, but the researchers have yet to see any that has been incorporated in the teaching and learning of organic chemistry in Nigerian tertiary education institutions. The concern of the researchers in not allowing this trend to continue led to this study. This study adopted a new pedagogical approach in content delivery to ascertain the impact of using interactive computer visualization, since it has been discovered that visualization tools aid students in the learning of science.
Specifically, the study focused on;
1. Comparing students’ examination performances in the selected organic chemistry courses after a semester of exposure to the use of ICV with previous performances in the courses.
2. Determining the extent to which the use of ICV affected the interest of students in the study of organic chemistry.
2013/2014 third year degree and NCE students of the Department of Chemistry, Alvan Ikoku Federal College of Education, Owerri were used for the study. Two research questions were posed for this investigative study. They are;
1. Is there any significant difference between students’ examination performances in the selected organic chemistry topics taught using ICV and their previous performance in the courses?
2. To what extent does ICV approach impact on the interest of the students in the study of organic chemistry?
A quasi-experimental research design was adopted as already existing intact groups were used. The population of this study comprised the 2013/2014 third year degree students and final year NCE students of the Department of Chemistry in the School of Sciences AIFCE, Owerri. There were 56 Degree students and 76 NCE students, giving a total population of 132 students, and all were used for the study since it was a manageable size. This population was chosen because of the observed failure rate of students in the Organic chemistry courses (CHEM 323 and CHE 323) taken at these levels.
Structured Personal Data Questionnaire (SPDQ) and Semester Results (SR) were used to collect data for analysis.
A preliminary survey was carried out to determine the students' preferred interest in the different fields of chemistry and what influenced their preference. This study was done by randomly distributing structured questionnaires to the students in the classroom without prior notice to them.
A mobile virtual classroom was set up using a laptop connected to a projector and screen. Roger Frost Organic Chemistry Teaching Tools, purchased from Russet House Cambridge, UK, was deployed for the study. Lesson plans for the two courses, i.e., CHE 323 and CHEM 323, were prepared from their course outlines as spelt out in the 2012 edition of the NCCE minimum standard and University of Nigeria Nsukka handbooks respectively. The content delivery was handled using two pedagogical techniques: ICV and the conventional lecture method. The problematic areas of the courses were taught using ICV while the rest were taught by the conventional method. At the end of the second semester, the test instrument, which was the semester examination, was administered to the students. Percentage and grand mean scores were used to answer the research questions. An independent t-test was also employed to determine the impact of ICV on the performance of students.
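As an illustration of the statistical treatment described here, the sketch below runs an independent-samples t-test with SciPy on two sets of scores; the score values are invented purely for demonstration and are not the study's data.

```python
# Illustrative only: the scores below are invented and do not reproduce the
# study's data; they simply show the kind of independent-samples t-test the
# authors describe for comparing examination performance with and without ICV.
from scipy import stats

scores_without_icv = [13, 15, 10, 18, 12, 14, 11, 16]  # hypothetical scores out of 60
scores_with_icv    = [21, 24, 19, 27, 22, 20, 25, 23]  # hypothetical scores out of 60

t_stat, p_value = stats.ttest_ind(scores_without_icv, scores_with_icv)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below the chosen significance level (e.g., 0.05) would indicate a
# statistically significant difference between the two sets of scores.
```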
Result of the preliminary study revealed the following;
100 chemistry students were given the questionnaire; those who responded expressed their preferred interest as shown above. Only 1 % of the respondents preferred organic chemistry. Responses gathered to determine the reasons for not choosing organic chemistry are shown in Table 2.
51 % of the respondents considered organic chemistry too abstract since they cannot paint a good picture of 3-dimensional objects and structures on a 2-dimensional teaching tool. This could have also made 16 % of them consider their lecturers unable to communicate the content of the course to them. A student who considers a course to have abstract concepts that the lecturers are unable to communicate will definitely lack interest in the said course. This finding supports the view that better information retrieval occurs when people can attach verbal material to pictorial images and pictorial images to verbal labels, since the two systems are richly interconnected.
The results from the analyses of the data obtained from the post-test questionnaire, prepared on a four-point Likert scale with an accepted grand mean of 2.5 and administered to sixty-eight (68) NCE students and forty (40) degree students to determine the impact of the use of ICV on their interest in the study of organic chemistry, gave grand means of 3.13 for NCE students and 3.38 for degree students. These values are clear pointers to the fact that ICV teaching demystifies and builds the students' interest in organic chemistry, makes them understand and retain what has been taught and improves their appreciation of the courses taught with these animations.
The results showed that the mean performances of the students almost doubled when ICV was introduced in the teaching program. This is an indication that the lectures were better understood when animations were used for teaching. In the degree class, the mean performance was approximately 35 out of the total score of 70, which can be regarded as a good result. In the NCE class, though the mean performance was 21 out of 60, it can still be regarded as a reasonable performance compared to the earlier 13 out of 60 that was achieved without teaching with animation. The results of the t-test obtained from the two results showed that there were very significant differences between the examination performances of the two classes of students when ICV was infused in the teaching of the selected topics in organic chemistry. The use of ICT in schools acts as a catalyst in transforming the teaching and learning process as well as improving students' skills while causing behavioural changes.
With almost a hundred percent improvement in the students’ performance in the final assessment relative to their previous performance, the use of ICV in the teaching of the selected areas in organic chemistry can be said to be very effective. The interest of the students in the study of organic chemistry was greatly improved using this teaching method. This improved interest was later translated into better performance in the courses as was observed in the final evaluation of the ICV program.
From the findings in this research, the following recommendations have been made in other to encourage students in the learning of organic chemistry:
1. Teaching of organic chemistry should not be too verbal but rather be enriched with simulations and computer aided visualizations.
2. Chemistry teachers, educational technologists and software developers should synergize their expertise and develop educational resources that will meet with the 21st century pedagogy of teaching and learning.
|||Ayas, A & Demirbas A. (1997) Turkish Secondary Students’ Conception of Introductory Chemistry Concepts, Journal of Chemical Education, 74(5), 518-521.|
|In article||View Article|
|||Coll, R. K. & Treagust, D. F. (2001). Learners’ Use of Analogy and Alternative Conceptions for Chemical Bonding, Australian Science Teachers Journal, 48(1), 24-32.|
|In article||View Article|
|||Fensham, P. (1988). Development and Dilemmas in Science Education. 5th Edition. London: Falmer.|
|In article||View Article|
|||Bradley, J. D. &Brand, M. (1985). Stamping Out Misconceptions. Journal of Chemical Education, 62(4), 318.|
|In article||View Article|
|||Copolo, C.F., & Hounshell, P.B. (1995).Using three-dimensional models to teach molecular structures in high school chemistry. Journal of Science Education and Technology, 4(4), 295-305.|
|In article||View Article|
|||Wu, H. k., Krajcik, J. S., &Soloway, E. (2001).Promoting conceptual understanding of chemical representations: Students' use of a visualization tool in the classroom.Journal of Research in Science Teaching, 38(7), pp. 821-842.|
|In article||View Article|
|||Crouch, R.D., Holden, M.S., &Samet, C. (1996). CAChe molecular modeling: A visualization tool early in the undergraduate chemistry curriculum. Journal of Chemical Education, 73(10), 916-918.|
|In article||View Article|
|||Bennett, F., (2002). The Future of Computer Technology in K-12 Education. Phi Delta Kappa Magazine. Available at http://www.cris.com/~faben1/phidel~1.s.html.|
|In article||View Article|
|||Kozma, R.B., Chin, E., Russell, J., & Marx, N. (2000). The roles of representations and tools in the chemistry laboratory and their implications for chemistry instruction.Journal of the Learning Sciences, 9(2), 105-143.|
|In article||View Article|
Published with license by Science and Education Publishing, Copyright © 2018 Ngozi-Olehi L.C., Duru C.E., Uchegbu R.I. and Amanze K.O. | http://pubs.sciepub.com/education/6/3/15/index.html |
I have designed many programs using Kobo for a private company, and lately their IT team wanted to check with me about further work. Some of the questions they asked are copied in this email below. Can you please provide your feedback on these, and suggest what you would propose if the company wanted to host the data on a private server?
Many Thanks
- Solution design light (SDL) hosting document / application architecture diagram to understand the components used and integrations to it
- Application hosting details - On-prem / IaaS / Off-prem (IaaS / PaaS / SaaS)
- If this is a SaaS/PaaS application, please share the latest SOC2 Type 2 report / ISO27001 certificate, Statement of Applicability, Policy and Procedure documents / Information security process document for all the security domains from the vendor/supplier.
- If Enterprise Architect (EA) has already approved the solution, please share the EA approval email. | https://community.kobotoolbox.org/t/requirements-to-install-kobotoolbox-on-a-self-hosted-server/20210
Abstract
Excerpted From: Desirée D. Mitchell, Class of One: Multiracial Individuals Under Equal Protection, 88 University of Chicago Law Review 237 (January 2021) (Comment) (203 Footnotes) (Full Document)
For centuries, mixed-race Americans have felt a sense of isolation as unique as their racial makeup. Whether society perceived a multiracial person as White or non-White could determine everything from whom they could marry to which jobs they could work to which areas and homes they could live in. The racially mixed nation that the United States has been since its foundation has resulted in a society in tension with entrenched notions of racial classification. The Equal Protection Clause of the U.S. Constitution-- passed to promote equality of former slaves-- says that “[n]o State shall ... deny to any person within its jurisdiction the equal protection of the laws.” Yet there is reason to believe multiracial individuals are not offered equal protection under the law.
Perhaps unsurprisingly, courts have largely failed in classifying the cases of the multiracial plaintiffs before them. Particularly in the context of White-Black relations during the centuries-long era of anti-miscegenation laws, courts abided by a “one-drop” rule in which anyone with any traceable amount of Black heritage was legally considered Black. But even since the days in which anti-miscegenation laws were deemed unconstitutional, courts have continued to falter in how they see multiracial people for legal purposes. Historically, courts have simply understood multiracial individuals to be akin to a single minority race of which they are at least partially composed. For instance, in the infamous race-based case Plessy v. Ferguson, the Supreme Court accepted the notion that the plaintiff--a man who was “seven-eighths Caucasian and one-eighth African blood”--was, for all legal purposes, Black. Because of this limited understanding of racial identity, the legal system has largely failed to identify multiracial plaintiffs as they identify themselves, leaving many plaintiffs feeling unrecognized and alienated from society.
Seeking to address this problem, some scholars have written about how courts might consider the multiracial identities of plaintiffs in ways such as ceasing to require some identification with a recognized racial category. Professor Taunya Lovell Banks, for instance, has joined scholars like Professors Nancy Leong and Lauren Sudeall Lucas in arguing that the law should recognize individuals' very personal multiracial identities. Relatedly, scholars like Professor John Tehranian and Bijan Gilanshah have called for a more fluid understanding of race under equal protection doctrine. This Comment largely builds off those arguments by asserting that courts should recognize multiracial plaintiffs as just that--multiracial. In doing so, I suggest that courts should adopt a mindset in which they use a framework similar to the recognized “class-of-one” equal protection doctrine.
The class-of-one doctrine allows an individual to be recognized as a class of her own for equal protection purposes. Through this doctrine, courts have been receptive to the argument that an individual who does not identify with a recognized class has nevertheless been subject to unlawful discrimination in need of judicial review. I argue the unique experience of multiracial individuals should allow them to allege discrimination because of their membership within a class of one. This option would be fitting in the context of plaintiffs who are not monoracial because the multiracial experience varies significantly by racial makeup and self-identification. It is those experiences that are worthy of recognition by courts.
Consider the following hypothetical example given by Leong:
A plaintiff claims that he was discriminated against because he was Asian. He alleges that his coworkers called him a “chink,” asked him whether he ate dogs, and mocked the shape of his eyes. He was ultimately fired for what he believes were pretextual reasons masking racial animus. The first sentence of the court's opinion is as follows: “Plaintiff alleges that he was discriminated against because he is Hispanic.” Undoubtedly, this plaintiff would feel that the court had disregarded his narrative. Not only did the court characterize him in a way that he had not characterized himself, but the way in which the court characterized him divests the other facts of their narrative impact because they are not associated with the category of “Hispanic” as they are with the category of “Asian.” My example is intentionally exaggerated, and the Reader's reaction is likely that the court's characterization was simply wrong. But that is exactly the point: just as an Asian plaintiff may believe it to be wrong for a court to characterize him as Hispanic, a multiracial plaintiff may feel it was wrong for a court to characterize him as monoracial.
A half-White, half-Black individual will have experiences of discrimination that differ in nature from the discrimination experienced by an individual who identifies as Black, White, Hispanic, or Asian. While these differences may not result in differing legal outcomes (meaning a multiracial plaintiff who is wrongly identified as monoracial may still succeed in her claim, irrespective of the court's error), each plaintiff before a court will still be unique and deserving of recognition. Further, as illustrated by Leong's example, to be meaningfully effective, courts must make an effort to truly understand the situations of claimants. Consequently, multiracial plaintiffs should have the option of having their unique discrimination claims heard and recognized as a class of one.
As described, articles chronicling the unique experiences of mixed-race individuals are not new. For the purposes of this Comment, I define “mixed-race” or “multiracial” individuals as anyone who identifies with more than one race. In Part I of this Comment, I explore the history of multiracial individuals in the United States, including how society, and courts specifically, have classified mixed-race people.
Part II then describes existing equal protection jurisprudence and how it has historically applied to multiracial people. I describe how courts have traditionally lumped multiracial individuals with other, clearer minority racial groups and ignored the unique identities of multiracial people.
The Comment then goes on in Part III to exemplify the harms multiracial individuals face under current equal protection doctrine. Most notably, I argue multiracial individuals are subject to isolation because of their “confused” identity and are subject to discrimination because of their multiracial composition itself, as opposed to the presence of some non-White heritage. Additionally, I discuss the psychological and symbolic significance of recognizing--or failing to recognize--multiracial identity.
Finally, in Part IV, I discuss courts' use of the class-of-one doctrine under equal protection and how its use could speak to the unique harms multiracial individuals face that are unaddressed under current application of equal protection.
[. . .]
As Professor of sociology G. Reginald Daniel explained, “[o]ur society is racially illiterate in general, and the greatest illiteracy is to be in the presence of a multiracial person.” So, too, are our courts racially illiterate when they misidentify mixed-race plaintiffs. Under the courts' current understanding and application of equal protection, the unique identities and experiences of mixed-race people go unrecognized and perhaps even unaddressed. As society changes and becomes increasingly more diverse, it is crucial that our courts, too, reflect the people they intend to protect through the law. An effective way in which courts can remedy this problem is to consider multiracial plaintiffs as a class of one when they sue under equal protection.
Some might argue that allowing class-of-one claims could completely undermine current understandings of race and discrimination. While this could certainly be a possibility, one must consider the possibility that we live in a society whose racial categorization schemes ought to be questioned. Rather than perceiving race as clear-cut (and often binary), it might be more useful and accurate to perceive racial categories as fluid. Nevertheless, existing categories would remain untouched by this new application of the class-of-one doctrine. The altered understanding of multiracial equal protection claims would serve as an addition to--not a substitute for--current equal protection jurisprudence.
In the past, the judicial system has played a vital role in shaping American thought and opinion on race. After the Court's holding in Brown, Americans thought about race differently and eventually adopted an overwhelmingly egalitarian attitude. Through a landmark decision, the Court set a model for society. I argue that by adopting a class-of-one approach, courts will once again lead society by acknowledging the often-marginalized identities of the multiracial plaintiffs before them instead of viewing the experiences of multiracial individuals as typical for those of the groups to which they belong. As a result and as will prove crucial to our ever-evolving society, courts--and conceivably society at large--might begin to affirm the self-identities of multiracial individuals.
B.A. 2018, Brigham Young University; J.D. Candidate 2021, The University of Chicago Law School.
| https://racism.org/articles/race/67-defining-racial-groups/biracial-and-multiracisl/9105-class-of
by Winnie Byanyima
International agreement on the need to take action against rising and damaging economic inequality is gathering pace.
- The World Economic Forum’s (WEF) ‘Outlook on the Global Agenda 2014’ ranks widening income disparities as the second greatest worldwide risk in the coming 12 to 18 months.
- President Obama has made it a priority for his administration in 2014.
- Pope Francis has called on business leaders to “ensure that humanity is served by wealth and not ruled by it.”
- The IMF’s Christine Lagarde last week said inequality damages long-term growth and wastes human potential.
- UN head Ban Ki-moon has urged the international community to tackle inequalities between regions and within countries.
- World Bank President Jim Kim has described massive income inequality as a stain on our collective conscience and committed the Bank to advancing shared prosperity.
- Oxfam’s recent report ahead of the Davos WEF, revealing that the world’s 85 wealthiest people have as much wealth as the poorest 3.5 billion people, has received worldwide attention.
We have a shared agenda – now what’s needed is a shared plan of action.
Each nation and region has its own circumstances, and there is no one-size-fits-all solution. But we know that in countries which have successfully reduced inequality, progressive taxation has been an important tool, enabling governments to invest in good-quality health care and education for their poorest citizens.
Tackling tax havens
In the last 30 years, an expanding global network of tax havens has hidden huge amounts of wealth – we conservatively estimate $18.5 trillion is held offshore. This is largely untaxed, holding back billions that could be spent on tackling poverty and boosting economies.
This network of secrecy also facilitates the draining of large amounts of capital from the poorest countries. Some $950 billion left developing countries in 2011 in illicit financial flows. It is estimated that between 2008 and 2010, sub-Saharan Africa lost on average $63.4bn in this way each year – almost exactly the financing gap between what 20 poor countries have and what they need to meet the Millennium Development Goals.
At the same time, the ‘race to the bottom’ effect of very low tax jurisdictions has further contributed to ever-lower corporate and personal tax rates for the richest individuals and corporations. In Zambia for example, copper exports in 2011 generated $10 billion, while government revenues from the resource were just $240 million – this in a country where more than two-thirds of people live in extreme poverty.
Time for action
Last year, acknowledging that for prosperity to be sustained it must be shared more equally, the G20 endorsed a plan to clamp down on tax dodging by multinational corporations. Now leaders must get down to work.
New global rules on corporate tax dodging are needed. Making inequality reduction a measure of progress alongside GDP growth is important. And investing in high-quality public health and education services will be crucial investments in productivity and a more equal world. | https://www.oxfam.ca/story/toward-an-international-movement-to-tackle-inequality/ |
Services set for North Central College alumnus, Naperville teacher Shaun Wild
Feb. 6, 2012—Services have been set for Naperville teacher and North Central College alumnus Shaun Wild.
The community is invited to express condolences on Thursday, Feb. 9, at Brown Deer High School, 8060 N. 60th St., Brown Deer, Wis. Visitation begins at 5 p.m., with a memorial service starting at 7 p.m.
A funeral service will be held at 12:30 p.m. Friday, Feb. 10, at Our Lady of Good Hope Catholic Church, 7152 N. 41st St., Milwaukee, Wis.
Discussions are ongoing about possibly holding another local memorial service in the Naperville area at some point in the near future.
The Wild family wishes to express its thanks for prayers of support. Family members will not be making any public statements and ask media to respect their desire for privacy at this time.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a method and apparatus for monitoring transient physical phenomena such as seismic activity and for monitoring the passage of surface and subsurface "anomalies" such as surface vessels or submarines. More specifically, the present invention relates to monitoring transient physical phenomena or the passage of anomalies by monitoring changes in an alternating component of a generally vertical current which emanates from the earth.
2. Description of the Prior Art
As detailed in U.S. Pat. No. 4,507,611, the contents of which are hereby incorporated by reference, many techniques are known in the prior art for performing geophysical prospecting. For example, Ruehle, et al., in U.S. Pat. No. 3,363,457, teaches that the measurement of radiant energy from subsurface formations enables geophysical prospecting. Weber, in U.S. Pat. No. 4,044,299 teaches a prospecting technique which includes the use of an inductive exciter which induces alternating current energy into the area and structure of the earth which is to be observed. Measurement of the induced current energy enables an artisan to determine the underground environment of the area.
A method and apparatus for measuring subsurface electrical impedance utilizing first and second successively transmitted signals at different frequencies is taught by Madden, et al. in U.S. Pat. No. 3,525,037.
In U.S. Pat. No. 3,942,101 Sayer teaches a geophysical prospecting device which utilizes a distortion of the atmospheric electrostatic potential gradient which is suggested to be a result of the Nernst effect. Sayer teaches that the distortion provides a means for locating subterranean sources of geothermal energy.
The earth's electromagnetic field also has naturally occurring alterations of the type known as "magnetic noise". Slichter, in U.S. Pat. No. 3,136,943, discloses that such noise is the product primarily of lightning and other electrical discharges and phenomena. Geothermal prospecting can be performed by detecting variations in the naturally occurring electromagnetic radiations from thunderstorms or other phenomena. The detection and measurement of short-term variations in the earth's magnetic field for geothermal prospecting is described in U.S. Pat. No. 3,126,510 to McLaughlin.
Prospecting can also be performed by comparing simultaneous variations of an underground electric field and a magnetic field which results from the circulation of telluric currents. This comparison of electric and magnetic fields requires the use of electrodes to measure the internal telluric currents in the magnetic field according to Cagniard; see U.S. Pat. No. 2,677,801.
The above mentioned generally vertical current having an alternating component is distinct from the above-mentioned telluric currents for many reasons. Firstly, telluric currents are usually direct currents. Secondly, telluric currents occur only within the earth whereas the generally vertical current having an alternating component emanates from the earth's surface over land and water. Thirdly, telluric currents exhibit local discontinuities and are not based on ionic impingement of solar winds. In contrast, the generally vertical current having an alternating component has generally predictable time variations due to the constant directivity (generally vertical), depth, and diurnal character.
A 1982 publication by the Soviet Academy of Sciences entitled Electro-Magnetic Precursor to Earthquakes includes a passage in reference to measurements made using a pair of electrodes, one positioned at the bottom of a five-hundred meter deep shaft and the other at the top of that shaft. During a period of seismic activity, electrical currents having high frequency components were measured between the electrodes.
Machts et al., in U.S. Pat. No. 2,124,825, describe an apparatus for investigating the electric and magnetic field conditions in an area being surveyed in order to locate irregularities indicative of earth strata, rock fractures, oil and water-bearing earth formations, etc.
Stanton, in U.S. Pat. No. 2,659,863, describes a method and apparatus for determining the presence of oil, mineral, and other subterranean deposits by measuring variations in a vertical potential gradient in the atmosphere near the surface of the earth.
Morrison, U.S. Pat. No. 2,784,370, describes a prospecting device for locating subterranean anomalies which utilizes measurements of electrical potential at or near the surface of the earth which results from "terrestrial electricity".
Barringer, in U.S. Pat. No. 3,763,419, describes a method and apparatus for geophysical exploration which utilizes very low frequency fields produced by distant transmitters as a source of a primary field. A vertical component of that field is used as a stable reference against which variations caused by discontinuities in the earth conductivity can be measured.
Miller, et al., in U.S. Pat. No. 4,041,372, describes a system which utilizes a current source to provide an input signal having predetermined frequency components, amplitude relationships, and duration. A plurality of spaced-apart detectors are used to make differential electrical measurements which permit cross correlation with the input signal.
In summary, the prior art teaches geophysical prospecting which utilizes variations in naturally occurring electrostatic potential gradient, alternations in the earth's electromagnetic field, short-term variations in the earth's magnetic field and simultaneous variations of the underground electric field and magnetic field and other devices which require the use of induced primary fields. These prior art techniques are in contrast to the present invention which utilizes a generally vertical alternating current emanating from the earth, the existence of which is confirmed by the Soviet Academy of Sciences report.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an improved method for monitoring the type and size of a moving surface or subsurface object or the occurrence of a transient phenomenon by monitoring and analyzing a generally vertical electrical current having an alternating component which emanates from the earth and the modulation of the frequency spectrum of the alternating component.
It is a further object of the invention to enable the accurate determination of the nature of a passing object by the combined use of amplitude comparison of the alternating component of a generally vertical electrical current coupled with empirically derived computer programs designed to provide specific identification of an object due to the recording and analysis of the alternating component of the electrical current.
Another object of the present invention is to evaluate and locate moving man-made and natural objects such as submarines, ice floes, or ocean currents.
Another object of the invention is to provide an alternative to the magnetic compass as a means of determining a heading reference. This is possible since the generally vertical current having an alternating component produces a stronger energy level when an associated antenna is oriented east-west and a lesser signal when oriented north-south.
It is yet a further object of the invention to provide a method for monitoring transient events such as the passing of a submarine, a seismic disturbance, or the approach of a severe weather system by providing a stationary sensor and monitoring any variation in the generally vertical current having an alternating component caused by that transient event.
The disclosed method of monitoring transient events includes the steps of providing one or more stationary detectors at a region of the earth to be monitored, measuring local variations which occur in the amplitude, frequency, and frequency modulation of the generally vertical electric current having a frequency characteristic which emanates from the earth's surface, and recording these measurements to provide a record of the transient phenomena or using the detected signal to activate a visual or audio alarm or to operate an automatic system.
Preferably, the detected signal is divided into a plurality of frequency channels so that identifying information regarding the anomaly can be extracted.
A presently preferred embodiment for monitoring the alternating frequency components is also disclosed.
Additional objects, advantages, and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or will be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and form a part of the specification, illustrate various embodiments of the present invention and, together with the description, serve to explain the principles of the invention. In the drawings:
FIG. 1 represents a cross-section of the earth illustrating a surface/atmosphere interface and various subsurface and underwater anomalies as well as simulated data corresponding thereto;
FIG. 2 is a schematic block diagram of the first embodiment of a detector according to the present invention;
FIG. 3 is a schematic block diagram of the second embodiment of a detector according to the present invention.
DETAILED DESCRIPTION OF THE INVENTION
As a poor conductor, a subsurface deposit such as oil results in an ionic "shadow". Good-conductor deposits such as iron ore result in a "focusing" ionic effect. Just as light in the visible spectrum permits intelligence gleaning through frequency modulations and resulting color differentials, the above-described generally vertical current leaking from the earth is also made up of a spectrum of frequencies which are modulated through electrical resonance in unique ways by specific deposits, thus enabling remote identification of any surface and subsurface anomalies through which they have passed.
Accordingly, by sensing, recording and subsequently analyzing the measured amplitude, frequency and frequency modulations of the alternating current component emanating from the earth's surface, and correlating this data with geographical location, a systematic means such as that detailed in U.S. Pat. No. 4,507,611 for exploration of subsurface resources becomes possible.
The generally vertical current leakage from within the earth into the atmosphere occurs with a predictable geographic pattern indicative of natural and man-made substructure content. The monitoring, recording, and interpretation of the rate of leakage of the generally vertical current, plus the frequency spectrum and resonance modulation of that spectrum, forms the basis for the present invention which employs the fact that anomalous conductivity results in anomalous electric current variations.
Further, it should be understood that a change in the local conductivity in the earth's interior will affect the generally vertical current leaking into the atmosphere above the earth. An anomaly within the earth will therefore be reflected as an atmospheric anomaly in terms of the characteristics of the alternating component of the emanating current.
The monitored anomalous electrical activity falls into two categories. The first category is an anomalous object which moves near the earth's surface and disturbs the earth's magnetosphere. The second category is observed electrical anomalies from apparent stationary surface or subsurface conductive anomalies, the time variation of which disturbs the earth's magnetosphere.
An example of the first category, moving objects, might comprise a submarine whose passing would be manifested by a change in the electrical current emanating from the earth.
An example of the second category, stationary objects, might be a conductive anomaly, either man-made or naturally occurring, such as a steel sphere or the like, which, if placed below the earth's surface, would result in an increased local electrical leakage within the earth's magnetosphere, thus producing a local alteration in electrical current flow.
Using the steel sphere example, it should be understood that phase and amplitude relationships existing in the frequency spectrum of the alternating component of the electric current emanating from the earth are modified by the steel sphere. If these changes are observed under water, they appear as a modulation of existing electrical noise. If these changes are monitored above the surface, the basic RF carrier from the existing current becomes a modulated RF in close proximity to the steel sphere. If the vertical alternating current leakage to the side of and directly above the steel sphere is measured, the current flow above the sphere is higher.
If relative humidity is measured simultaneously with the variations in alternating current over water, it will be noted that a local increase in relative humidity frequently accompanies the electrical anomalies. If local relative humidity increases, the remote RF energy may be forward scattered and at times, even back scattered if the local relative humidity change is great enough. Both a temperature change in the water above the submerged steel sphere as well as a change in the radiometric infrared characteristics of the surface are also to be expected. Whether the temperature and infrared emissivity increases or decreases will depend upon the direction of the emanating current. By measuring these auxiliary variables, such as humidity and temperature, one can improve upon the accuracy and sensitivity of the basic multifrequency alternating current measurement. This technique of measuring additional auxiliary variables is useful in several applications, such as the detection of submarines, the advance prediction of severe weather, and earthquake detection.
In the case of submarine detection, the auxiliary variables of humidity, D.C. vertical electric field, and atmospheric refractive index changes are useful phenomena in improving the accuracy of the measurement. Submarines cause a variety of disturbances besides current anomalies which are the principal variables measured by the present technique. For example, a moving submarine may cause hydrodynamic disturbances, including modification of the surface waves which leads to enhanced wave breaking. This will cause a local increase in humidity in the atmosphere above the submarine, and will modify the refractive index of the atmosphere. In addition, the aerosol produced by the breaking process will modify the D.C. vertical electric field above the submarine. Consequently, the simultaneous measurement of these variables will reduce the false alarm rate of the detector, and improve its accuracy.
Similarly, in the case of severe weather predictions, monitoring such phenomena as atmospheric pressure, D.C. electric field, temperature, time and lunar position can provide compensating information to enhance the accuracy of weather anomaly detection. For example, changes in atmospheric pressure frequently accompany weather changes. Specifically, the magnitude and rate of pressure changes can be a sensitive although somewhat late indicator of imminent weather changes. Electrical anomalies, which are measured according to the present invention, predict these changes earlier. However, by analyzing electrical current changes, in view of atmospheric pressure changes, the accuracy of the prediction can be improved.
In the case of earthquake prediction, there are several phenomena which are known to frequently accompany seismic phenomena. For example, microcracking resulting from subterranean rock stresses releases ozone and other ions into the air. These can be detected directly, and also act as nucleation sites for fog when atmospheric humidity is favorable. Consequently, for earthquake prediction, variables such as atmospheric ion levels, ozone, and ground fog may be measured simultaneously with the alternating current anomalies discussed above to enhance the accuracy of the prediction.
The inventive monitoring device is preferably designed to weigh environmental parameters such as those discussed above, and evaluate the multifrequency band alternating current measurements in view of these parameters to achieve enhanced accuracy in predicting various phenomena.
The generally vertical current having a frequency component is believed to be caused by ions which leak from the nonpolar areas of the earth as a weak, low-noise signal which is reproducibly measurable and which results in the above-described resonance phenomena.
The frequency, amplitude and resonance frequency modulation of the weak, low-noise signal provides data which, when properly interpreted, may be used to indicate information regarding subsurface or surface anomalies through which the signal has passed and to locate and identify the surface or the subsurface anomaly. This is due to the fact that the anomaly will provide varying amplitude, frequency, and frequency resonance modulation depending upon its ionic occultation or conduction properties. Ionic conduction variations are dependent upon the extent of the area and size of the anomaly as well as the type of material of which the anomaly is composed. Correlation of this information with empirically determined electrical resonance data allows the determination of specific substance identity, composition, and size.
The sensing, recording, and analysis of variations in the earth's generally vertical electrical current and modulations in the alternating spectrum of that current due to alternating electrical current resonances of natural or man-made objects or transient phenomena such as weather may be accomplished by the disclosed method. The present method is equally applicable to subsurface, surface and above surface observation. The disclosed invention employs variations in the alternating component of the electrical current emanating from the earth and the observed modulations in its frequency spectrum resulting from electrical resonance phenomena to provide location and identification of man-made and natural subsurface objects and/or deposits, or of transient phenomena such as severe weather. This is in contrast to the electrostatic potential gradient changes referred to in the above-noted prior art, which, by definition, are a direct current phenomenon. In addition, according to the present invention, there is no need for electrodes as discussed in the Soviet Academy of Sciences literature or for the utilization of telluric current or magnetic fields or variations thereof. According to the present invention, variations in the alternating component of the current emanating from the earth may be measured by a suitably resonant antenna.
The disclosed method has many applications including monitoring geological fault regions of the earth for seismic activity which may precede earthquakes or monitoring strategic regions of the earth for the passage of submarines or ice floes. Further, since severe weather may be preceded by changes in conductivity of the earth's magnetosphere, the relevant frequency spectrum can be analyzed to predict the arrival of severe storms or weather patterns.
Turning now to the accompanying drawing and particularly FIG. 1, in the lower portion of FIG. 1 there is depicted a cross-section of a typical surface/atmosphere interface. Reference character 1 generally refers to a subsurface conducting anomaly. Reference character 2 refers to a subsurface non-conducting anomaly. Reference characters 3 and 4 refer to subsurface and underwater conducting and non-conducting anomalies, respectively.
The upper portion of FIG. 1 illustrates several graphs which correlate data similar to that which would result from the performance of the method described in U.S. Pat. No. 4,507,611; which method will be now briefly reviewed as background.
The atmosphere above a predetermined region of the earth which is to be prospected is first traversed. This step can be accomplished by any convenient vehicle such as an aircraft. During a traverse, local variations which occur in a generally vertical current having an alternating component are measured. This measurement can be accomplished by any apparatus which allows the measurement of a root mean square (RMS) or average of the alternating current component, depicted in FIG. 1 as an amplitude occurring between the earth and atmosphere. The spectral resonances depicted are measured in a 1 Hz through 1 MHz range. It should be understood that these frequency range limits are exemplary only and that a higher upper limit or lower bottom limit can be used. U.S. Pat. Nos. 3,849,722 and 3,701,940 describe prospecting apparatus which may be used for determining the complex electric field generated when an alternating component is induced into the earth. Such apparatus may be used by one skilled in the art to measure the alternating current component of the generally vertical current occurring in the atmosphere above a predetermined region of the earth as it is traversed.
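A minimal software sketch of this measurement step, assuming a digitally sampled alternating-current signal and an arbitrary window length, might look as follows; the sampling rate, window duration, and signal values are illustrative assumptions only, not figures taken from the disclosure:

```python
import numpy as np

def running_rms(signal, window_samples):
    """Return the RMS amplitude of `signal` over consecutive windows.

    `signal` is a 1-D array of sampled AC-component values; the result has
    one RMS value per full window (partial trailing samples are ignored).
    """
    signal = np.asarray(signal, dtype=float)
    n_windows = len(signal) // window_samples
    trimmed = signal[:n_windows * window_samples]
    windows = trimmed.reshape(n_windows, window_samples)
    return np.sqrt(np.mean(windows ** 2, axis=1))

# Example with assumed values: 10 kHz sampling, 0.1 s averaging windows.
fs = 10_000
window = int(0.1 * fs)
t = np.arange(0, 5.0, 1.0 / fs)
sampled = 5e-3 * np.sin(2 * np.pi * 50 * t)   # hypothetical 5 mA, 50 Hz component
rms_per_window = running_rms(sampled, window)
```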
The next step includes the recording of the measurements in correlation with the spatial relation of the point of the measurement to determine significant measurements indicative of surface or subsurface anomalies. Recording the measurements can be accomplished by any recording apparatus which is connected to the sensing apparatus used to measure local variations. In the alternative, the recording can be accomplished by an individual taking periodic readings. The measurements taken can be correlated with the spatial relation of the point of the measurement by a technique such as that described in U.S. Pat. No. 3,976,937 which discloses apparatus for recording sensor positions by the use of aircraft. A determination of significant measurements indicative of surface and subsurface anomalies requires the correlation of data with the comparison of prospecting results by the use of a computer or other means. As an alternative, the teachings of U.S. Pat. No. 4,041,372 regarding a method of deriving parameters relative to subsurface strata can be used.
An aircraft is the preferred medium for searching since it provides the highest search rate at the lowest cost. A trailing-wire antenna can be towed from an aircraft. Such a trailing-wire antenna may be attached to an amplitude/frequency processor which is used to perform the measuring step of the invention. The other side of the processor may be grounded to the airframe. This produces a time rate of change measurement of the alternating current component. Any antenna which will sense the flow of the alternating current component may also be used.
As shown in the upper portion of FIG. 1, the measurement of the alternating current component preferably includes measurement of the frequency, amplitude, and frequency resonance modulation. Considering the simulated data appearing in FIG. 1, reference character 5 refers to a graphical illustration of the full frequency RMS value of the alternating current component amplitude. It can be seen by a comparison of the graphical illustrations to anomalies 1, 2, 3 and 4 that the alternating current amplitude is anticipated to be above a given mean level (for example, 5 milliamperes) when recording over conducting anomalies 1 and 3, and below the mean level when recording over the nonconducting anomalies 2 and 4. The specific differential of the alternating current component amplitude from the mean level may be used by the artisan to determine the type, amount, and area extent of the detected anomaly. This data may be used in conjunction with the alternating frequency resonance data as detailed below.
The alternating current component frequency and frequency resonance modulation is preferably measured in a series of different frequency ranges, for example ranges a, b, c, d, . . . z. In the present example, the overall, anticipated frequency range preferably varies from 1 Hz to 1 MHz. The amplitude variation enables the observer to determine, for example, information regarding the area of the anomaly. The frequency resonance enables the observer to identify specific aspects of the anomaly through the establishment of empirical data. For example, it is anticipated that certain nonconducting anomalies will generate "signature" modulated frequency patterns such as those shown by reference characters 4a and 2a at frequency level "a". A different "signature" pattern would be generated by various conducting anomalies. In other words, specific types of anomalies will only affect uniquely defined, empirically determined frequency ranges and only a specific type of anomaly would result in a measurement within a specific frequency range. For example, frequency "d" is indicative of a reaction from the nonconducting anomaly 4 as indicated by reference character 4d. This type of data is also applicable to conducting anomalies as shown by frequency "b" where only the conducting anomaly, 3, is illustrated as yielding a reaction; see reference character 3b.
Certain frequency ranges, such as frequency "c" may indicate a reaction to all conducting anomalies as depicted by reference characters 1c and 3c.
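A crude software sketch of how such amplitude and signature data might be combined is given below; the mean level, tolerance, and signature table are hypothetical stand-ins for the empirically determined values the text calls for:

```python
import numpy as np

# Hypothetical mean amplitude level and per-band "signature" table; the real
# values would have to be established empirically, as explained above.
MEAN_LEVEL_A = 5e-3          # assumed 5 mA full-spectrum mean
SIGNATURES = {
    "conducting":    {"b", "c"},   # bands assumed to react to conductors
    "nonconducting": {"a", "d"},   # bands assumed to react to non-conductors
}

def classify_anomaly(full_band_rms, band_responses, tolerance=0.1):
    """Crude anomaly classification from simulated FIG. 1-style data.

    `full_band_rms` is the full-spectrum RMS amplitude at one location;
    `band_responses` is a set of band labels ('a'..'z') showing a reaction.
    """
    if full_band_rms > MEAN_LEVEL_A * (1 + tolerance):
        kind = "conducting"
    elif full_band_rms < MEAN_LEVEL_A * (1 - tolerance):
        kind = "nonconducting"
    else:
        return None  # no significant deviation from the mean level
    # Require at least one band from the empirically determined signature.
    return kind if band_responses & SIGNATURES[kind] else "unidentified"
```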
Due to the fact that the alternating current component comprises a part of the vertical current leakage resulting from ionic impingement on the earth, it is advisable that the solar activity causing such ionic impingement be monitored in order to enhance the measurement process. Such solar activity can be monitored by any known technique ranging from visual observation of auroral strength or sunspot activity to the measurement of gamma, soft x-ray and other radiation emanating from the sun. A primary source of such measurements is data collected from satellites.
When solar activity is high, highly conductive deposits will cause a localized increase in the alternating current component flow. However, the total data abstracted from such an anomaly will still produce a unique pattern. By recording both amplitude changes and modulation of the frequency spectrum, identifying characteristics as well as the location of subsurface deposits or other anomalies becomes possible with a precision and cost unequaled by other methods.
As will be understood by the artisan, by modifying the above-described method, similar but stationary apparatus to that described above may be used to detect transient phenomena. For example, if seismic activity in the vicinity of a geological fault is to be monitored, a detector may be located near the fault. When no seismic activity is occurring, there will be no change in the amplitude of the alternating current component as a function of time since the detector is held stationary. The amplitude of alternating current components will only exhibit a time rate of change when a transient "anomaly" occurs in the vicinity of the detector. Similarly, if the passage of a large conducting or nonconducting mass past the vicinity of a given point is to be detected, a detector can be positioned at the point. A detector so positioned will display an amplitude variation in the local alternating current component as the mass moves past. Thus, the present method may be used to remotely detect the passage of a submerged submarine, ice floe, or the like.
It should also be appreciated that the present invention may be used to detect the imminent approach of large severe weather systems toward the vicinity of a stationary detector inasmuch as such large and severe weather systems will cause a characteristic change in the amplitude of the alternating current component. From empirically determined frequency distribution data, information regarding the nature of the weather disturbance can be ascertained.
Turning now to FIGS. 2 and 3, examples of presently preferred embodiments of detectors useful for practicing the present method are illustrated. In FIGS. 2 and 3, like numerals are used to designate similar components. Referring first to FIG. 2, the reference number 10 generally refers to the detector unit. Signals are received at an antenna 12. The antenna may be a loop antenna of the type described in Antennas and Transmission Lines by John A. Kueken, 1st ed., 1969, chapter 15. Alternatively, the antenna may be of the type described in Antennas and Transmission Lines (supra), chapter 14, page 73, et seq.
The output of the antenna is amplified by a preamplifier 14. The preamplifier may be of the type described in Application Manual For Operational Amplifiers, Philbrick/Nexus Research, Nimrod Press, 1968. The amplifier illustrated in circuit III.38 is suitable for use with a loop antenna and the amplifier circuit illustrated in III.39 is suitable when used in connection with a traveling-wave antenna.
The output from the preamplifier is preferably filtered into "n" bands by "n" band pass filters 16. The band pass filters may be of the type described in Application Manual For Operational Amplifiers (supra), see circuit illustration designated III.27. Preferably, the frequency bands range from very low frequencies to high frequencies. For example, the lowest frequency band might cover the range of 1 Hz to 10 Hz. The next highest frequency band may cover the range from 10 to 100 Hz, the next highest frequency band may cover the range from 100 Hz to 1 kHz, etc., up to high frequencies such as 1 MHz. The signals monitored in different frequency bands yield information useful for different purposes. For example, low frequencies may be used to provide early warnings of very deep phenomena such as earthquakes occurring far below the surface of the earth or ocean. For monitoring transient phenomena occurring near the surface, higher frequency bands would be more useful. As a general rule, it is preferable to monitor each frequency for relative changes over time in order to obtain as much useful information as possible regarding the transient phenomena which are occurring. It is noted that for purposes of predicting severe weather systems, frequency ranges as low as 0.1 Hz to 1 Hz may be used.
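One possible software realization of such a filter bank is sketched below. The Butterworth filter type and its order are assumptions for illustration; the description above leaves the filter realization open, and the band edges shown are those given as examples in the text:

```python
import numpy as np
from scipy.signal import butter, lfilter

def band_pass_bank(signal, fs,
                   edges=((1, 10), (10, 100), (100, 1_000), (1_000, 10_000))):
    """Split `signal` into the frequency bands described above.

    `fs` is the sampling rate in Hz; `edges` lists (low, high) band limits,
    which must all lie below fs/2.  A 4th-order Butterworth band-pass filter
    per band is an assumed choice, not taken from the original description.
    """
    nyquist = fs / 2.0
    outputs = []
    for low, high in edges:
        b, a = butter(4, [low / nyquist, high / nyquist], btype="band")
        outputs.append(lfilter(b, a, signal))
    return outputs
```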
The output from the band pass filter is fed to a rectifying circuit, 18, which outputs a signal indicative of the mean amplitude of the signal in each frequency band over a time period T.sub.i. The averaging period T.sub.i is chosen so that it will be much less than the time over which the transient phenomenon being observed takes place. A typical rectifier may be of the type described in Electronics Designers Handbook, 2nd Ed. revised, L. J. Giacoletto, McGraw Hill, 1977, Sec. 12.4.
The output of each of the time-averaging rectifiers 18 is inputted to a respective comparator circuit 20, which compares the output of the time-averaging rectifier 18 with the output of a second time-averaging circuit 22. Each time-averaging circuit 22 will typically average the output of an associated time-averaging circuit 18 over a longer time period t.sub.i which is much larger than T.sub.i. Typically, t.sub.i is on the order of, or greater than, the time scale of the observed transient phenomenon. Thus, the time-averaging circuit 22 provides a "reference" value for the comparator 20. The time-averaging rectifier circuit 22 may also be of the type described in Application Manual For Operational Amplifiers (supra). In other words, similar averaging filter designs may be suitable for both the short-term and long-term averaging circuits 18 and 22; only the averaging time periods used in each circuit need be different. The signals (S.sub.1, S.sub.2 . . . S.sub.n) from the comparators 20 of each of the frequency bands are combined in an "adder" 24. Various suitable adding circuits are described in the Application Manual For Operational Amplifiers (supra). While the adder 24 is normally an adder, it should be understood that the signals S.sub.1, S.sub.2 . . . S.sub.n from some frequency bands are added while others are subtracted. Whether a signal is added or subtracted depends upon whether the amplitude change in the frequency range is a transiently increasing or decreasing one for the particular phenomenon being monitored.
An auxiliary signal generator 25 may be used to generate a "corrective signal" (S.sub.c) which is inputted to the adder 24 in order to enhance the accuracy of the measurements by weighing pertinent environmental parameters A.sub.1, A.sub.2 . . . A.sub.n in the manner detailed above. As will be understood by the artisan, the generator 25 may simply comprise a junction point for inputting pertinent environmental parameters to the adder 24. Alternatively, the generator 25 may have an appropriate transfer function for weighing the inputted environmental parameters and generating therefrom the corrective signal S.sub.c.
In order to automatically determine when a transient phenomenon has occurred, the output from the adder 24 may be compared with a reference signal, V.sub.ref, in a comparator 26, the reference signal representing a threshold level. Whenever the output of the adder 24 exceeds the threshold level, a signal will be outputted to an indicator or recording device 28.
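The complete FIG. 2 chain (short-term averaging, long-term reference, signed summation with a corrective term, and threshold comparison) could be sketched in software roughly as follows; all numeric defaults are illustrative assumptions rather than values taken from the embodiment:

```python
import numpy as np

def detect_transient(band_signals, fs, short_s=1.0, long_s=600.0,
                     signs=None, corrective=0.0, threshold=1.0):
    """Software sketch of the FIG. 2 processing chain.

    For each rectified band signal, a short-term mean (period T_i, here
    `short_s` seconds) is compared against a long-term reference mean
    (period t_i, here `long_s` seconds); the per-band differences are summed
    with the chosen signs, a corrective term S_c is added, and the total is
    compared against a threshold.
    """
    n_bands = len(band_signals)
    signs = np.ones(n_bands) if signs is None else np.asarray(signs, float)
    total = corrective
    for sign, x in zip(signs, band_signals):
        rectified = np.abs(np.asarray(x, dtype=float))
        short_mean = rectified[-int(short_s * fs):].mean()  # recent window (T_i)
        long_mean = rectified[-int(long_s * fs):].mean()    # reference window (t_i)
        total += sign * (short_mean - long_mean)
    return total > threshold, total
```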
Turning now to FIG. 3, there is depicted a more general purpose configuration for the detector 10. In FIG. 3, the detector is generally referred to by the reference 30. In a manner similar to that described above with reference to FIG. 2, signals are received at antenna 12 and are amplified by a preamplifier 14. The signals from the preamplifier 14 are filtered into "n" frequency bands by "n" band pass filters 16 and the mean amplitude of the signals in each frequency band is calculated by the time-averaging rectifying circuit 18. As before, circuit 18 produces a rectified average over a time period T.sub.i of the signal which is in that frequency band. The averaging period is chosen so that it will be much less than the time period over which the transient phenomenon being observed takes place. The averaged signals (S.sub.1, S.sub.2 . . . S.sub.n) are then converted into digital signals by an analogue-to-digital (A/D) converter 32. A suitable A/D converter would be of the type sold by Metrabyte Corporation, Model DASH 16. The output from the A/D converter 32 is inputted into a microcomputer or microprocessor 34 for processing. The microprocessor operates to compare any variations in signal strength in the various frequency bands and to process the results in specific ways depending upon the type of transient phenomenon being observed. The microprocessor 34 may comprise a portable desk-top type computer such as the IBM PC.
As before, an auxiliary signal generator 25 may be used to input a compensating signal S.sub.c to the A/D converter 33, or alternatively, as indicated by the dashed connection in FIG. 3, may input a digital signal directly to the microprocessor 34. A recording or indicating device 28 may be connected to the microprocessor 34 in order to utilize the output therefrom. Alternatively, an output signal from the microprocessor 34 indicative of the occurrence of the transient phenomenon might be used to actuate an alarm or other device.
The foregoing description of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many more applications and variations are possible in light of the above teachings. The described embodiments were chosen in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto. | |
Leadership is considered the main factor in an effective patient safety program. The Institute for Healthcare Improvement (IHI) developed a white paper, Leadership Guide to Patient Safety, to assist healthcare leaders in the development of the patient safety program. This white paper recommends the following eight steps for leaders to achieve patient safety:
Establish Patient Safety as a Strategic Priority
Patient safety should be considered one of the organization’s strategic priorities and included in all of the organization’s plans. The leadership must assess and establish a supportive patient safety culture, address the organization’s infrastructure, and learn about patient safety and improvement methods (Botwinick, Bisognano, & Haraden, 2006).
Engage Key Stakeholders
The key stakeholders include the Governing Board, leaders, physicians, staff, patients and families. These individuals need to be educated about patient safety and engaged in discussions about patient safety.
Meeting agendas should give patient safety the same amount of time as financial issues (Botwinick et al., 2006).
Communicate and Build Awareness
During Leadership walk rounds throughout the organization, engage the staff, practitioners, patients, and others in discussions about patient safety.
Within the departments, there should be education and other activities that address patient safety in the department. This could include safety briefings, huddles, the use of SBAR (Situation, Background, Assessment, Recommendation), and Crew Resource Management (Botwinick et al., 2006).
Establish and Communicate System-Level Aim
The strategic plan with identified goals needs to be communicated throughout the organization.
For example, if the strategic plan includes patient safety as a strategic objective, the education and IT departments should include organizational goals regarding the implementation of any new software needed (Botwinick et al., 2006).
Measure Harm Over Time
Utilize a dashboard or balanced scorecard to observe data over time. Include triggers for adverse events, mortality rates, Root Cause Analyses (RCAs) and Failure Mode and Effects Analyses (FMEAs), and other such patient safety information (Botwinick et al., 2006).
Support Everyone Impacted by Errors
The patient and family, as well as the staff who made an error, will all require support after a medical error occurs. The appropriate disclosure of information and an apology to the patient/family has a great impact on patient safety.
Align Strategy, Measures, and Improvement
The organization must align its strategic initiatives across various parts of the organization, such as between quality improvement and financial plans. There should be oversight of improvement projects, with monitoring and revision if changes are not forthcoming. National initiatives must also be integrated into this process (Botwinick et al., 2006).
Redesign Care Processes to Increase Reliability
Reliability ensures that the patient receives the appropriate test, treatment, or medication at the appropriate time. This can be accomplished by the use of rapid response teams, CPOE systems with decision support, and many other means.
Another concept utilized is the decrease in variability. The standardization of care with guidelines and pathways leads to decreased variability and thus increases the reliability of care (Botwinick et al., 2006).
Source:
Botwinick, L., Bisognano, M., & Haraden, C. (2006). Leadership Guide to Patient Safety. | https://abulmajd.us/2017/01/08/leadership-and-patient-safety/
Humans are the primate species with not only the longest life span (120 years) but also the greatest proportion of those years spent in social and biological maturity. The evolutionary legacy of aging also includes a powerful biological dimension of programmed senescence. Despite this, cross-cultural psychiatrist David Gutmann suggests elders exist not because of our species’ technical ability to keep the weak alive; instead, we attained our humanity through the very existence of elders and the significance of their postparental roles.
The simplest way of conceptualizing elders and elderhood is as the age cohort relatively older than yourself or the generation with more years than anyone else in the community. Cultural construction of this older-adult category typically combines the path of biological maturity, the developmental kinship and family cycle, and broader notions of social generation. Elderhood more often than not focuses on the latter two factors, although for women, menopause can function as an important status-turning point, signaling eligibility for elder status. However, as Rasmussen notes for the Tuareg, the ending of reproductive capacity complexly impacts the unfolding of female person-hood through realignment of kin hierarchies and other social strata affecting both males and females.
In essence, cultures are more apt to see elderhood as a marker of a social rather than a biological or time-based maturity. This is clear when we see persons, especially males, enter the beginning ranks of elder in their late 20s and early 30s among Africa’s age set societies as well as in Australian Aboriginal tribes. From another perspective, an abundance of years without the culturally prescribed markers may allow individuals never to be socially considered an elder. For example, in Peterson’s study of African American working-class women in Seattle, she found that female elders were designated by the word “wise,” a term given to women who have not only borne children, but have raised kids who, in turn, have their own offspring. In this community, the label “wise” could be attained while a woman was in her late 30s. However, females who might be in their eighth decade of life but had not accomplished the required social tasks of maturity would be considered in the same generation as teenagers.
Age along with gender and kin relations stand as the three universal bedrocks of how all human societies construct a framework of social order and biocultural succession. Passage of human populations through the life span is translated into notions of social time, created by transit through successive age-based statuses marking the cultural mapping of the life cycle. Linguistic variants of child, adult, and elder become social boundaries in virtually all societies, marked by such things as variations in dress, comportment, modes of speech, and deferential gestures. Sometimes, actual physical boundaries can be involved, such as in the traditional Irish peasant pattern of moving elders into the sacred west room of the house, where younger kin could not enter without permission. An even more dramatic and negative case is that of the Fulani, West African pastoralists. Here, after a couple’s last child has wed, the elders are regarded as socially dead. They live as dependents of their oldest son, moving separately to different outer edges of his house compound, symbolically residing over their future grave sites.
The societal definition of elder status is often differentiated from “oldness” or the cultural constructions of old age. The latter terms are more keyed to biological maturity of an individual in combination with some social aspect of one’s relative position in society. In an indigenous, Nahuatl-speaking Mexican peasant community, Sokolovsky found that elderhood was attained by having shouldered important community rituals and becoming a grandparent, or culkn. To be considered old, or culi, required at least several grandchildren plus signs of physically slowing down, such as using a cane to walk around. A more debilitated stage of oldness, where one seldom ventures far from the home, is called Yotla Moac, literally “all used up.”
One of the earliest efforts to mine anthropological data on the contribution of elders to their societies came from Leo Simmons’s classic work, The Role of the Aged in Primitive Society (1945). He showed the wide variety of ways elderly function in society, including knowledge bearing; child care; economic support; and ritual, judicial, and political decision making. Numerous ethnographies have validated how a combination of deep knowledge held in older adults’ heads and their nurturing actions toward younger generations sustains human societies. Among the Akan of Ghana, there is no adjective that exists to describe a human as old, but those who have grandchildren and are older adults are referred to by the verb nyin, to grow. Such individuals who acquire wisdom based on experience and use this for the benefit of others receive the honorific of payin, or honorable, composed, and wise. As van der Geest relates in a 2004 journal article, an Akan saying is that a “payin has elbow so that ‘when you are in the chief’s palace and you are saying something which you should not say, a payin will . . . touch you with his elbow to stop you from saying that which might lead you into trouble.’ The proverb means if the payin has nothing at all, he has wisdom, he can give advice to people.”
Globally, elderhood is less celebrated in ritual than the beginning phases of social maturity, adolescence, and adulthood. Yet in some societies, age is a predominant means of ordering social life, such as in Africa’s age set societies, where passing into elderhood and even exiting active elderhood are marked by powerful rituals. Here, persons progress through the life cycle collectively and form tightly bound groups, performing specific tasks. Societies where age groupings play such a powerful role in ordering social life have been found in Africa, among certain Native American groups, Australian Aborigines, and Papua New Guinea, but their global occurrence is relatively rare. The most elaborated forms of such cultural systems are found among East African nomadic herders, such as the Samburu of Kenya or the Tiriki.
Age set organizations for women in non-Western societies are reported much less frequently than for males. Well-documented examples include the Afikpo, Ebrie, and Mbato peoples of West Africa. It is likely, as Thomas suggests, that the paucity of age sets for females is related to the difficulty of male ethnographers learning about a realm of culture purposely kept secret from men.
References:
- Aguilar, M. (Ed.). (1998). The politics of age and gerontocracy in Africa: Ethnographies of the past and memories of the present. Lawrenceville, NJ: Africa World Press.
- Albert, S., & Cattell, M. (1994). Old age in global perspective. New York: G. K. Hall.
- Ikels, C., & Beall, C. (2000). Age, aging and anthropology. In R. Binstock & L. George (Eds.), The handbook of aging and the social sciences (5th ed., pp. 125-139). San Diego, CA: Academic Press.
- Sokolovsky, J. (Ed.). (1997). The cultural context of aging (2nd ed.). New York: Bergin & Garvey.
- Van Der Geest, S. (2004). ”They don’t come to listen”: The experience of loneliness among older people in Kwahu, Ghana. Journal of Cross-Cultural Gerontology, 19, 77-96. | https://anthropology.iresearchnet.com/elders/
Calculates a correlation score in one point.
Correlation window size is given by kernel_col, kernel_row, kernel_width, kernel_height; the position of the correlation window on data is given by col, row.
If anything fails (data too close to boundary, etc.), the function returns -1.0 (no correlation).
Kernel to correlate data field with.
Upper-left column position in the data field.
Upper-left row position in the data field.
Upper-left column position in kernel field.
Upper-left row position in kernel field.
Width of kernel field area.
Height of kernel field area.
Correlation score (between -1.0 and 1.0). A value of 1.0 denotes maximum correlation; -1.0 denotes no correlation.
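The score described here behaves like a normalized (Pearson-type) correlation between the kernel window and the data window beneath it. The sketch below is a plain NumPy illustration of such a score, not the library's actual implementation; the argument names simply mirror the documentation above:

```python
import numpy as np

def correlation_score(data, kernel, col, row,
                      kernel_col, kernel_row, kernel_width, kernel_height):
    """Normalized correlation of one kernel window against one data window.

    Returns a value in [-1, 1], or -1.0 when either window does not fit,
    mirroring the failure behaviour described above.  Arrays are indexed
    as [row, col].
    """
    if min(col, row, kernel_col, kernel_row) < 0:
        return -1.0
    d = data[row:row + kernel_height, col:col + kernel_width]
    k = kernel[kernel_row:kernel_row + kernel_height,
               kernel_col:kernel_col + kernel_width]
    if d.shape != (kernel_height, kernel_width) or d.shape != k.shape:
        return -1.0  # window ran over a field boundary
    d = d - d.mean()
    k = k - k.mean()
    denom = np.sqrt((d ** 2).sum() * (k ** 2).sum())
    if denom == 0.0:
        return -1.0
    return float((d * k).sum() / denom)
```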
Calculates a correlation score in one point using weights to center the used information to the center of kernel.
Algorithm for matching two different images of the same object under changes.
It does not use any special features for matching. It simply searches for all points (with their neighbourhoods) of data_field1 within data_field2. Parameters search_width and search_height determine the maximum area in which to search for points. The area is centered in data_field2 at the former position of the points in data_field1.
A data field to store x-distances to.
A data field to store y-distances to.
Data field to store correlation scores to.
Correlation window width. This parameter is not actually used. Pass zero.
Correlation window height. This parameter is not actually used. Pass zero.
This iterator reports its state as GwyComputationStateType.
A data field to store x-distances to, or NULL.
A data field to store y-distances to, or NULL.
Data field to store correlation scores to, or NULL.
Sets the weight function to be used within the iterative cross-correlation algorithm. By default (if it is not set), rectangular windowing is used. This function should be called before running the first iteration to get consistent results.
Set windowing type to be set as correlation weight, see GwyWindowingType for details.
Performs one iteration of cross-correlation.
Cross-correlation matches two different images of the same object under changes.
A cross-correlation iterator can be created with gwy_data_field_crosscorrelate_init(). When iteration ends, either by finishing or being aborted, gwy_data_field_crosscorrelate_finalize() must be called to release allocated resources.
Destroys a cross-correlation iterator, freeing all resources.
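A rough software analogue of the point-matching search described above (independent of the iterator API) is sketched below; the neighbourhood half-size and search-window dimensions are illustrative assumptions, and the chosen point is assumed to lie far enough from the field edges:

```python
import numpy as np

def crosscorrelate_point(field1, field2, row, col, half=3,
                         search_height=11, search_width=11):
    """Find where the neighbourhood of (row, col) in field1 best matches field2.

    A (2*half+1)-sized square neighbourhood from field1 is compared against
    every position inside a search_height x search_width area centred at the
    same (row, col) in field2.  Returns (dy, dx, score).
    """
    patch = field1[row - half:row + half + 1, col - half:col + half + 1]
    patch = patch - patch.mean()
    best_score, best_dy, best_dx = -1.0, 0, 0
    for dy in range(-(search_height // 2), search_height // 2 + 1):
        for dx in range(-(search_width // 2), search_width // 2 + 1):
            r, c = row + dy, col + dx
            if r - half < 0 or c - half < 0:
                continue  # neighbourhood would leave the field on the top/left
            cand = field2[r - half:r + half + 1, c - half:c + half + 1]
            if cand.shape != patch.shape:
                continue  # neighbourhood ran over the bottom/right boundary
            cand = cand - cand.mean()
            denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum())
            score = float((patch * cand).sum() / denom) if denom else -1.0
            if score > best_score:
                best_score, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best_score
```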
Computes correlation score for all positions in a data field.
The correlation score is computed for all points in data field data_field using the full size of the correlation kernel kernel_field.
The points in score correspond to centers of kernel. More precisely, the point ((kxres -1)/2, (kyres -1)/2) in score corresponds to kernel field top left corner coincident with data field top left corner. Points outside the area where the kernel field fits into the data field completely are set to -1 for GWY_CORRELATION_NORMAL.
This function is mostly made obsolete by gwy_data_field_correlation_search() which offers, besides the plain FFT-based correlation, a method equivalent to GWY_CORRELATION_NORMAL as well as several others, all computed efficiently using FFT.
Creates a new correlation iterator.
Performs one iteration of correlation.
An iterator can be created with gwy_data_field_correlate_init(). When iteration ends, either by finishing or being aborted, gwy_data_field_correlate_finalize() must be called to release allocated resources.
Destroys a correlation iterator, freeing all resources.
Performs correlation search of a detail in a larger data field.
There are two basic classes of methods: Covariance (products of kernel and data values are summed) and height difference (squared differences between kernel and data values are summed). For the second class, the sign of the output is inverted. So in both cases higher values mean better match. All methods are implemented efficiently using FFT.
Usually you want to use GWY_CORR_SEARCH_COVARIANCE or GWY_CORR_SEARCH_HEIGHT_DIFF, in which the absolute data offsets play no role (only the differences).
If the detail can also occur with different height scales, use GWY_CORR_SEARCH_COVARIANCE_SCORE or GWY_CORR_SEARCH_HEIGHT_DIFF_SCORE in which the local data variance is normalised. In this case dfield regions with very small (or zero) variance can lead to odd results and spurious maxima. Use regcoeff to suppress them: Score of image details is suppressed if their variance is regcoeff times the mean local variance.
If kernel_weight is non-NULL, it allows specifying masking/weighting of the kernel. The simplest use is masking when searching for a non-rectangular detail. Fill kernel_weight with 1s for important kernel pixels and with 0s for irrelevant pixels. However, you can use arbitrary non-negative weights.
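A minimal FFT-based sketch of the covariance and height-difference methods is shown below. It illustrates the general technique rather than the library's code: means are removed first (consistent with the remark that absolute offsets play no role for these two methods), and kernel weighting and the regularisation coefficient are omitted for brevity:

```python
import numpy as np
from scipy.signal import fftconvolve

def correlation_search(data, kernel, method="covariance"):
    """FFT-based search for `kernel` inside `data` (higher output = better).

    'covariance' sums products of kernel and data values at each placement;
    'height_diff' sums squared differences and inverts the sign, matching
    the convention described above.
    """
    k = kernel - kernel.mean()
    d = data - data.mean()
    # Flipping the kernel turns convolution into correlation.
    cross = fftconvolve(d, k[::-1, ::-1], mode="same")
    if method == "covariance":
        return cross
    if method == "height_diff":
        ones = np.ones_like(k)
        local_sq = fftconvolve(d ** 2, ones, mode="same")  # sum of d^2 under kernel
        return -(local_sq - 2.0 * cross + (k ** 2).sum())
    raise ValueError("unknown method")
```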
A data field to search.
Kernel weight, or NULL. If given, its dimensions must match kernel .
Data field to fill with the score. It will be resampled to match dfield .
Method, determining the type of output to put into target .
Regularisation coefficient, any positive number. Pass something like 0.1 if unsure. You can also pass zero, it means the same as G_MINDOUBLE. | http://gwyddion.net/documentation/libgwyprocess/libgwyprocess-correlation.php |
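As a usage illustration of the correlation search described above, the hedged C sketch below looks for a masked detail in a larger data field using the normalised height-difference score. Only the function and enum names quoted above are taken from the documentation; the include path, argument order and surrounding scaffolding are assumptions that should be checked against your installation.

```c
#include <libprocess/gwyprocess.h>   /* umbrella header; adjust as needed */

/* Search for `kernel` inside `dfield`, writing the match score into `target`.
 * `kernel_weight` holds 1.0 for important kernel pixels and 0.0 for pixels to
 * ignore; regcoeff = 0.1 damps spurious maxima in regions of (near-)zero
 * variance, as recommended above for the *_SCORE methods. */
static void
find_detail(GwyDataField *dfield, GwyDataField *kernel,
            GwyDataField *kernel_weight, GwyDataField *target)
{
    gwy_data_field_correlation_search(dfield, kernel, kernel_weight, target,
                                      GWY_CORR_SEARCH_HEIGHT_DIFF_SCORE,
                                      0.1);
    /* Higher values in `target` mean a better match: the sign of
     * height-difference scores is inverted, as noted above. */
}
```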
The Department of Italian Studies is one of the best resourced in Australia offering programs from absolute language beginners to PhD. At undergraduate level, it offers a wide range of language and culture units of study. The language units cater for students at different entry points and, through innovative teaching methods, aim to develop high levels of linguistic skills. The culture units aim to develop in-depth understanding of some of the most representative Italian authors, thinkers, and literary and intellectual movements from the Middle Ages to the present, within their historical and socio-cultural context; and of contemporary Italy from a socio-linguistic, historical and cultural perspective.
If you wish to major in Italian Studies you are expected to engage with both language acquisition and cultural studies, in order to gain a deep understanding of the close connection between language and culture. All our units of study aim to develop your analytical and critical skills, so as to enable you to pursue your interest in all aspects of Italian language, literature and culture, and equip you with the necessary skills for honours and postgraduate studies. We also strongly encourage you to spend a semester in Italy as part of your undergraduate studies, and to this effect we have developed agreements with several Italian universities.
Requirements for completion
The Italian Studies major and minor are available via the pathways indicated below.
Students will follow the appropriate pathway specified in the unit of study tables, based on their individual language level*. Students completing any of the pathways below will be awarded a major or minor in Italian Studies.
* Appropriate language units are determined either by language level and grade therein achieved in Higher School Certificate (as listed in the pathways linked above) or International Baccalaureate, and/or by one-on-one interviews prior to commencement. If you are unsure of your language level or which pathway is appropriate for you, please contact the Department for advice.
Please note: A ‘gap’ year after Year 12 does not normally affect placement.
Learning outcomes
1. Demonstrate the ability to communicate effectively in Italian to at least Common European Framework of Reference level B2.
2. Demonstrate extensive knowledge of the major issues relating to Italian culture and society.
3. Demonstrate the ability to work collaboratively and openly in cross-cultural and interdisciplinary settings to achieve high quality results.
4. Apply theoretical tools and methodologies developed in Italian studies to new situations, including interdisciplinary contexts.
5. Develop communication and digital literacy skills that are transferable to other disciplines and professional settings.
6. Demonstrate research and enquiry skills that foster an engagement with scholarly debates in the broad area of Italian Studies.
7. Critically analyse different perspectives on Italian society and cultures, and formulate arguments and hypotheses to offer alternative perspectives.
8. Apply ethical standards in academic research and practices while working in diverse groups and across disciplines and cultures.
Advanced coursework
The Bachelor of Advanced Studies in SLC prepares students to actively engage in the complex and culturally diverse contemporary world. Students will utilise linguistic and methodological skills developed in their previous studies to develop their knowledge on institutions, practices and ideas that permeate different cultures in the local and global context. They will be offered opportunities to participate in projects on translation, acculturation and self-reflexivity and to examine textual and social real-world problems related to topics which include translation, migration studies, cultural diversity and social integration.
Requirements and units of study for advanced coursework can be found on the Italian Studies advanced coursework units of study page.
Honours
The honours program consists of seminars on research methodologies and on specific areas of Italian Studies, and a thesis on a topic chosen by the student in consultation with the department.
Honours admission requirements
Admission to honours is via the Bachelor of Advanced Studies and requires the completion of a major in Italian Studies with an average of 70% or above.
Prior to commencing honours, you will need to ensure you have completed all other requirements of the Bachelor of Arts or other bachelor degree, including Open Learning Environment (OLE) units and a second major.
Requirements and units of study for honours can be found on the Italian Studies honours units of study page.
Contacts and further information
More information and current contact details for academic coordinators can be found at: sydney.edu.au/arts/italian. | https://www.sydney.edu.au/handbooks/arts/subject_areas_im/italian_studies.shtml |
Michael Mooleedhar is an award-winning director living and working in Trinidad and Tobago.
'Green Days by the River' (2017) is the first feature film of rising film director Michael Mooleedhar. This probing look at Trinidadian culture is the culmination of his journey into understanding the forces of culture & race.
His previous successes were short films on which he collaborated with his mentor, Professor Patricia Mohammed, gaining acclaim locally and as far afield as New Delhi. 'Coolie Pink & Green' (2009) not only won People's Choice at the Trinidad & Tobago Film Festival but opened the Parvasi Film Festival in New Delhi. 'The Cool Boys' (2012) and 'City on the Hill' (2015) explored inner city struggles in a tasteful, thought-provoking way. 'City on the Hill' went on to win the People's Choice Award at the Trinidad and Tobago Film Festival.
Michael has traveled widely, seeking to understand the world in which he lives and to translate it into film, creating a truly unique voice in world cinema.
The greatness of a man is not in how much wealth he acquires, but in his integrity and his ability to affect those around him positively.
- Bob Marley
"As a son of the rich Trinidad and Tobago soil and through my uniquely Caribbean lens, my ambition is to create boundaryless, story driven cinema of the highest quality, with universal themes that are relatable to audiences around the world.
I want to focus on rich stories and interesting characters. I want my films to impact you, so that you leave learning not only about other cultures, but hopefully yourself." | https://www.michaelmooleedhar.com/bio |
They say a picture is worth a thousand words, but when it comes to fashion photographer Lenne Chai's works, I believe that may be an understatement.
Unafraid to explore topics that may invite differing opinions and clashing of perspectives, she captures the essence of her subjects with artistry and grace. Some of her recent works include dreaming up an 80s lesbian Chinese banquet wedding and creating a new religious cult - The School - for an interactive mixed media installation Salvation Made Simple™. Her pursuit of her own independent projects gives us a glimpse of her creativity beyond the commercial works for big brands such as Canon and Esquire.
Who is this woman behind these visual stories she gifts to the world?
Raised in Singapore and currently based in New York, Lenne speaks earnestly about her recent works and what about fashion photography she prides herself in.
-
Your most recent work was inspired by the 377A Penal Code and previously you created a fictitious religion/cult - these are things that can be quite sensitive for your audiences - what was your thought process for them and what doubts did you face in the process of producing them?
Both of the projects you've mentioned were inspired by extremely personal aspects of my life, and while it terrifies me that they're such sensitive topics, I try to approach these topics in a way that feels truthful (to my experience), respectful, and accurate, and hope for the best.
True stories are the hardest to distill, and require me to reexamine my behaviour and upbringing in a very uncomfortable way.
I hate sharing such private things in public, but it's been rewarding to see how these sensitive and personal topics resonate with people.
Following on from the previous question, your recent work seems to come from an almost journalistic space (cultural appropriation, religion, patriotism, LGBTQA+) - how do you marry that with something commercial and fantasy-driven like fashion?
While there's a lot of great art out there, a lot of highbrow art sometimes doesn't reach an audience because of how abstract or contextual it is. As a result, my favourite artworks are the thought-provoking ones that a viewer can instantly "get". It's like a Venn diagram - I like that intersection of simple and smart. How something looks doesn't correlate to the depth a piece may (or may not) have, and maybe it's a bonus that the genre my photography falls under - visually - is so digestible and familiar.
Hopefully it helps to make it easier for the audience to want to engage with the ideas I'm trying to communicate.
What do you value most as a fashion photographer?
If we're speaking strictly in terms of fashion, then how much value the image has in the context of fashion - is it interesting? Does it capture a style or a movement or a trend in a tasteful and timeless way? Does it sell the clothes or the fantasy? Is it beautiful?
But for myself, my goal is to create thought-provoking or witty images whilst hitting the marks listed above.
Having just moved to New York, how has NY been treating you? How have NY audiences been different from the other places you've been (e.g. Tokyo, LA, SG)?
New York has been kind to me so far! 3 months isn't enough for me to gauge what NY likes yet, but I think my recent work definitely resonates better with Asians, just because of very Singapore-centric topics I've been exploring lately. Someone told me that my work was cute in an Asian way, which made me cackle in a very uncute way.
How would you define your place in the local and global fashion-photography industry now? Where do you want to head (goal-wise) and how far away are you from that?
I am currently pond scum, and I hope it makes it to a low-hanging leaf or branch at some point. Whilst I'm grateful to be doing lots of commercial work, I'd like to shoot for more international titles and high fashion campaigns someday. Let's see if I'm lucky and persistent enough to get there! | https://www.b-side.city/post/lenne-chai
IZANAGI AND IZANAMI – These two parents of most of the rest of the deities in Shinto myths always need to be mentioned in one entry. Before life on Earth existed the two of them stood on Ukihashi, the floating bridge between Earth and Takamagahara, the High Plain of Heaven where the gods lived. From there they stirred the primordial juices here on Earth with a jeweled spear and created the Japanese islands and a shrine still stands on Onokoro, the tiny island that legend held was the first landmass created by the duo.
Their first coupling spawned either one slug-like creature or all of the demons and monsters in Shinto mythology (accounts vary). Beginning with their second mating the woman, Izanami, began giving birth to various landmasses, animals and humans as well as gods and goddesses like the sun goddess Amaterasu, the moon god Tsukuyomi, the storm god Susanowo, the rice god Inari, the dance goddess Uzume and countless others. Their son Sarutahiko they named the guardian god of Ukihashi, the floating bridge, and for a weapon gave him the jeweled spear they had used to stir the primordial broth on Earth.
Their last-born child was Kagatsuchi, the god of fire, and the trauma of his flaming birth proved potent enough to kill even a goddess like Izanami. She became the goddess who ruled over Yomi, the land of the dead, just like the slain god Osiris in Egyptian myths became the ruler over the dead. Longing to be reunited with his wife, Izanagi journeyed to Yomi to visit Izanami. He encountered her in the realm she now ruled over, deep within the Earth. She still had eight deities in her womb when Kagatsuchi’s birth caused her to die and those entities became the eight earthquake goddesses, who roar like thunder from the subterranean land of Yomi.
When Izanami, who was now in the form of a perpetually rotting corpse, failed to trick Izanagi into bringing her back to the world of the living with him, she vowed to drag one thousand living souls down to the world of the dead with her every day. Izanagi countered this by causing one thousand five hundred new souls to be born each day. In the Kojiki it was not until after this encounter and his escape from Yomi that Izanagi alone gave birth to the sun, moon and storm gods from his eyes and nose. I prefer the version of the myth in the Nihongi, in which BOTH he and Izanami are their parents.
FOR MORE SHINTO DEITIES CLICK HERE: https://glitternight.com/category/mythology/
© Edward Wozniak and Balladeer’s Blog 2012. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Edward Wozniak and Balladeer’s Blog with appropriate and specific direction to the original content. | https://glitternight.com/2012/11/14/shinto-gods-izanagi-and-izanami/ |
Dark Matter Search
The new Standard Model of cosmology implies a dark Universe, where non-baryonic Dark Matter (DM) is the dominant form of gravitationally interacting matter. With a fraction of 26% of the total matter-energy density of the Universe (Planck result), DM is playing a key role in the formation and evolution of large scale structures. The non-baryonic matter of the Universe contains a smaller admixture of primordial neutrinos with sub-eV masses (Hot Dark Matter, HDM), but it is dominated by weakly interacting massive particles (WIMPs), which act as Cold Dark Matter (CDM) in structure evolution. The identification of CDM particles would thus be of major importance in our understanding of the Universe both at the largest and smallest scales.
Supersymmetry (SUSY) is a promising theoretical framework to explain the nature of CDM, where the lightest supersymmetric particle may be a neutral particle, the neutralino. Likely to be stable over cosmological time scales and only interacting weakly with ordinary matter, the neutralino is thus a natural particle candidate for CDM. Models of thermal neutralino production after the Big Bang give the correct relic density and a neutralino-matter interaction cross section below even the most stringent limits set by direct DM searches, for example by the EDELWEISS experiment. Other DM candidates entail additional free parameters or fine-tuning to be compatible with present data, but should nevertheless not be excluded a priori. Here, we focus on our experimental programme to detect WIMPs, either directly or indirectly via excess signatures in the flux of cosmic rays originating from either our galaxy or from extragalactic sources.
Challenges
The direct search for neutralino-like CDM is based on the elastic scattering off nuclei within a particle detector. The experimental challenges in the direct detection of WIMPs arise from very low event rates (less than 1 event per kg of target material per year) and from the small energy transfer to the recoiling nucleus (as little as a few keV). In addition, the unknown mass of a CDM particle, as well as different types of interactions, call for a variety of target materials and technologies to cover as much as possible of the unknown parameter space. These requirements translate to the need for dedicated underground facilities and a very high degree of background shielding against ambient gamma and neutron background. Additionally, specific detectors with very low energy threshold (keV or even below), excellent energy resolution and active suppression of electron and gamma background of the order of 10⁵ or better are required.
The indirect DM search is based on the measurement of secondary particles from the annihilation, decay or interaction of DM. Indirect searches are sensitive to all DM particles which produce a signal above the expected astrophysical background. This includes the WIMP, but also DM candidates such as axion-type particles. The secondary particles searched for are energetic photons, neutrinos and cosmic rays detected with gamma-ray, neutrino and charged particle telescopes. The annihilation or decay signal observable with gamma and neutrino observatories is expected to be strongest towards astrophysical regions of DM overdensity such as the Galactic Centre, halos of galaxies, dwarf spheroidal galaxies and massive objects like our Sun. Axion-like particles may be seen in the measurement of the absorption of gamma rays from very distant blazars and GRBs. The major experimental challenge for indirect searches is the detection of the very weak signal expected from DM interaction and the separation of background fluxes of astrophysical origin. | http://matter-universe.helmholtz.de/topic3-challenges-dark-matter.php |
Antarctica is Earth’s southernmost continent, often called the South Pole, and with its total area of about 14 million sq km it is the fifth largest continent in the world, double the size of Australia. About 98% of Antarctica is covered with ice that averages 1.9 km in thickness.
Antarctica, on average, is the coldest, driest, and windiest continent, and has the highest average elevation of all the continents. It is often termed an ice desert; temperatures have been recorded at −89 °C (−129 °F) and it obviously has no permanent residents. However, anywhere between 1,000 and 5,000 people reside throughout the year at the research stations scattered across the continent.
Only cold-adapted organisms survive there, though marine life is very active, including fur seals and various breeds of penguins.
TRAVEL MOTIVATIONS Inspires you to Travel to Antarctica
Antarctica is the coldest and driest part of the world as well as being the most visually stunning. The landscape is dramatic and this southernmost part of the world is home to a wide range of unique wildlife. Despite its frigid temperatures, this enchanting part of the world is inhabited by huge numbers of penguins and elephant seals, and the best place to see them is on the picturesque South Georgia Island, which is home to millions of penguins as well as giant petrels, albatross and other species of animals. The area offers numerous scenic spots for animal lovers to enjoy, such as Right Whale Bay, which is a great place to view herds of elephant seals, while Salisbury Plain boasts an enormous king penguin rookery.
We Feel that the Antarctica is Best for Nature Lovers
People who have a real love for nature and a strong sense of adventure are sure to enjoy exploring Antarctica. In addition to South Georgia Island, this part of the world is home to hundreds of other islands of all shapes and sizes including Petermann Island, Anvers Island, Zavodovski Island and the South Shetland Islands. Visitors who arrange a cruise or another type of boat trip will have plenty of opportunities to admire the stunning scenery of Antarctica in comfort and style. No matter where they go, visitors are sure to be impressed by Antarctica’s wild natural beauty, which is marked by wide seas with clear water, dramatic mountains and islands that are simply teeming with wildlife. This part of the world is also a great place to go hiking in the summer months as well as mountain climbing and trekking.
Our Advice on the Best Time to Visit
People who are planning to travel to Antarctica should bear in mind that this part of the world is only accessible to visitors between November and March, which is marked by the austral summer season. At this time of year coastal temperatures usually rise to around 14°C and there is daylight twenty four hours a day, as opposed to twenty four hours of darkness in the winter months. However, even at this time of year temperatures at inland parts of Antarctica such as the South Pole often drop to below -15°C, while during the winter temperatures can plummet to an extremely chilly -80°C.
Explore the Visual Delights of Antarctica
Visitors who explore the Ross Sea to the south of Australia and New Zealand will be greeted by the magnificent sight of Mount Erebus. This is the southernmost active volcano in the world and rises to a mighty 3,795 metres. This area is also home to the Ross Ice Shelf, which is the biggest ice shelf in the world, and visitors who take a boat trip on the Ross Sea will also have the chance to view some stunning wildlife, while people who visit in the month of April will have the chance to climb to the top of Mount Kirkpatrick. Exploring Antarctica’s dry valleys is another experience that should not be missed and visitors who take a guide and pack plenty of supplies are often given the chance to book an extended trekking trip through this region and discover firsthand how harsh the Antarctic tundra can be.
There are many things to do like camping, snowshoeing, hiking, cross country skiing and many more activities which has become popular these days with the advent of tourists.
It takes courage and an adventurous mind to explore this continent and to be recognized as one of the handful of travelers who have dared to tread this icy continent.
Contact us for more information about the Antartica Packages.
Travel the unknown with Travel Motivations. For enjoying your holidays in Thailand and other destinations of the globe, please fill up this Reservation Form for our customer service agent to contact you at the earliest.
__________________________________________________________________________________________________________________________
As a leading travel agency in Thailand, we offer complete value proposition to our clients for the major tourist attractions in Thailand with highly personalized service. Apart from standard package tours in Thailand, we offer exclusive special interest packages like Golf in Thailand, SPA & Wellness packages in Thailand, Diving Packages in Thailand, Wedding package in Thailand, Corporate events in Thailand and Medical Tourism in Thailand which includes Cosmetic surgery in Thailand, Botox Treatments in Thailand, IVF in Thailand amongst few others. | https://travelmotivations.com/explore-continents/antarctica/ |
Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organisms' activities. Free essay: title a study to investigate the effect of varying light intensities on the rate of photosynthesis class ______ photosynthesis virtual labs tutorial:.
Photosynthesis is the process by which plants, some bacteria, and some protistans a common aquarium plant used in lab experiments about photosynthesis.
Like other plants, elodea absorbs carbon dioxide and releases oxygen during photosynthesis. In this lab, an elodea specimen is submerged in water under a. Producing a psychology essay may be a challenging endeavor for scholars who perform other essential activities too and must attend their part-time jobs.
Photosynthesis is affected by light intensity, water, and temperature. Plants grow more abundantly because the weather is warm. Carbon dioxide given off by.
Free essay: elodea & photosynthesis photosynthesis is the process by which green plants and some other organisms use sunlight to. Free essay: photosynthesis lab report purpose: to research the effect of different wavelengths (colors) of light on plant growth during.
Essay #1 2007 title: the success of photosynthesis in geranium leaves using visible light wavelengths obstructed by black paper, and red, blue, and green. Free rate of photosynthesis papers, essays, and research papers lab report testing the effect of light intensity on the rate of photosynthisis - lab report.
2018. | http://vchomeworkcxqz.du-opfer.info/lab-on-photosynthesis-essay.html |
The New Hampshire lease agreements bind a property owner, the “Landlord”, and a renter, the “Tenant”, to a contract that specifies the payment of rent along with other terms and conditions. The document is to be signed between the parties and a witness is not required although it is recommended. The parties will have to adhere to the State laws in accordance to Chapter 540 (Actions Against Tenants).
Commercial Lease Agreement – For the use of office, retail, or industrial purposes. Primarily space for a business to operate.
Lease with Option to Purchase Agreement – Standard contract for habitable dwelling with a provision added for the buying of the premises.
Month to Month Lease Agreement (RSA 540:2) – All tenancies-at-will may be cancelled upon receipt from one party, landlord or tenant, to the other in the amount of thirty (30) days prior to the next payment date.
Rental Application – Provided by the landlord to the tenant in order to obtain their financial and personal information to perform background, credit, and employment checks.
Room Rental (Roommate) Agreement – An accord between the members of a shared living facility.
Standard Residential Lease Agreement – Fixed term for any type of livable arrangement.
Sublease Agreement – Tenant that decides to rent their living unit to another individual (known as the ‘sub-tenant). Usually, this requires consent from the landlord/agent.
Termination Lease Letter (30 Day Notice) – For the termination of a month to month rental arrangement according to RSA 540:2. A minimum of thirty (30) days is required.
Lead-Based Paint – Law created by the Environmental Protection Agency (EPA) that requires all landowners to inform their tenants of this hazardous paint. Only required if the residence was constructed before 1978.
Move-in Checklist (RSA 540-A:6) – Landlord must state in writing to tenant that a list of all repairs needed on the premises must be given by the tenant to landlord within five (5) days of occupancy.
Security Deposit Receipt (RSA 540-A:6) – Unless the tenant makes this payment via check the landlord must give a receipt stating the amount and financial institution of where the money is being held.
Landlord may access the rental unit at any time, according to RSA 540-A:3, by providing reasonable notice to the tenant under the circumstances.
Maximum (RSA 540-A:6) – Landlord may ask for up to one (1) month’s rent or $100, whichever is greater.
Returning (RSA 540-A:7) – The landlord must send the deposit back to the tenant within thirty (30) days unless the tenant shared the property with the landlord. In that case the funds must be returned within twenty (20) days unless there was a written agreement stating otherwise. | https://eforms.com/rental/nh/ |
ACT FOUNDATION & HACEY HEALTH TEAM UP TO BEGIN CLEAN WATER INITIATIVE
In order to curb the incidence of frequent water-borne diseases in selected communities in the south west, ACT Foundation and Hacey Health Initiative have come together to increase access to potable water in rural and underserved communities. The initiative aims to increase access to clean water through the installation of boreholes and community health training.
To maintain a focus on increasing access to potable water in rural and underserved communities and to enable ideal water, sanitation and hygiene (WASH), Hacey Health Initiative has delivered key outcomes to various communities in south-west states across Nigeria. Hence, various underserved communities have gained access to water, sanitation, and hygiene, which in the long run will create an ideal and sustainable society for all.
“According to World Health Organization (WHO) reports, at least 2 billion people use a drinking water source contaminated with faeces and 844 million people lack access to potable water sources, including 159 million people who are dependent on surface water,” Isaiah Owolabi, Project Director of Hacey Health Initiative, informed.
It has also been discovered that contaminated water and poor hygiene are linked to the transmission of diseases such as cholera, diarrhoea and dysentery. Also, absent, inadequate, or inappropriately managed water and sanitation services expose individuals to health risks and death.
He also added that the incidence of water-borne diseases nevertheless varies greatly among humans. Young children and pregnant women are most vulnerable to waterborne diseases when there is a lack of safe water, basic sanitation and hygiene. Other factors that lead to millions of deaths are improper feeding practices, improper handling and storage within households, and hand washing without soap before food preparation and after defecation.
“The targets were Lagos and Ogun states; fifteen communities were selected based on a survey of communities with the poorest access to potable water,” he added.
The project commenced with advocacy in all the communities, followed with series of trainings focused on Water, Sanitation and hygiene (WASH) among pregnant women, community leaders, nursing mothers and children and ended with installation of boreholes in all the selected communities.
The selected communities had over 500 nursing mothers and pregnant women and 150 community leaders take part in the outreach/WASH trainings. The training was mainly focused on water treatment, maintenance and handwashing to prevent incidences of early childhood diseases. | http://nixxhash.com/act-foundation-hacey-team-up-to-begin-clean-water-initiative/
How do we make sure that healthcare is affordable, of the highest quality and sufficiently accessible? Questions we are able to answer for you involve market forces in healthcare, financial incentives, measuring the effects of policy and the costs and benefits of care.
The cluster Healthcare is specialised in performing market analyses, econometric analyses, effect measurements, cost-benefit analyses.
Sub expertise
In order to evaluate the effect of hospital mergers, we analyse patient flows. This involves evaluating whether patients have access to (sufficient) alternative hospitals within an acceptable travel distance. We are often commissioned to do this by lawyers who supervise the merger. We also perform wider market analyses. For example, we investigated whether the competition law (mededingingswet) impedes cooperation between primary care providers for the Ministry of Health, Welfare and Sport (Volksgezondheid, Welzijn en Sport, VWS).
What are the financial incentives for healthcare providers and insurers under the current funding? What are the effects of financial incentives? How can performance incentives best be designed? We are able to answer these questions for healthcare institutions and the government. For example, we have conducted international comparative research into the incomes of medical specialists.
We are also working on funding systems, such as the risk equalisation system for health insurers.
We regularly perform (mostly quantitative) effect studies. This concerns the effects of policy changes, such as the release of rates, the introduction of a different funding system or changes to the deductible. We have conducted research into the effect of the introduction of the social support law (wet maatschappelijke ondersteuning, WMO) on the non-use of WMO care in a number of municipalities.
For this, we use surveys or advanced econometric methods based on data on a personal level from the CBS.
Do you want to know the added value of the care that you provide? Or do you want to know what the government-proposed regulations will mean for your sector? We are experts in performing cost-benefit analyses in healthcare. We have performed many cost-benefit analyses for branch organisations in healthcare. Examples are the costs and benefits of rehabilitation care, speech therapy, dietetics and occupational therapy. We have also calculated the costs and benefits of a new guideline in hospital care.
We have also written a guide for performing cost-benefit analyses in the social domain, including healthcare. This guide was commissioned by the Ministry of VWS and is obligatory for all cost-benefit analyses performed for the government.
Feel free to contact Wouter Vermeulen via e-mail or phone. He will respond to your questions as soon as possible. | https://www.seo.nl/en/expertises/healthcare/ |
Glossary of terms from social theory
Please see Wikipedia.org for further explanations of these terms.
Cognitive system: the system of social institutions in an organisation which reflects the concepts which are material to production and service delivery and by means of which the world is interpreted.
Cultural postulates: logical premises or statements which express deeply held beliefs about the world within a group or organisation and which are generally common and shared within the group.
Direct control: the exercise of coercive power over individuals by which their conformance is enforced through directives, procedures and measurement.
Distinctive knowledge: knowledge which is owned by an expert and which is particular to that person.
Emerging knowledge: knowledge which is not yet externalised but emerges by virtue of it being brought out and combined in group situations and collaborations.
Externalisation: the process of expressing, by language or other symbolic acts, meanings and intentions to others.
In-group prototype: the conceptual template which describes and prescribes the attributes which are appropriate to signify group membership in specific contexts.
Indirect control: the exercise of coercive power over individuals by which conformance is achieved by aligning the distribution of rewards and indirect pressure, such as group norms, with the objectives of the organisation.
Institution: the perceptible structures of social order and patterning which indicate and determine how people in a given group should and do behave.
Institutional control: the exercise of power through the creation and inculcation of cognitive and normative ways of thinking which cause people to behave in the interest of the group or organisation.
Internalisation: the process of absorbing and understanding the symbols used in communication with others in a group.
Legitimation: the act of making an event, a concept or activity legitimate by attaching it to existing norms and values in society.
Normative system: the system of social institutions which contains values and principles which signify certain actions, methods or responses as the right thing to do.
Objectivation: the process of creating new common understandings of reality within social groups.
Organisational memory: the past experiences and learning within an organisation that can be brought to bear on current problems or tasks.
Proprietary descriptive knowledge: knowledge which is specific to an organisation that describes how things can be done or how they are usually done.
Proprietary prescriptive knowledge: knowledge which is specific to an organisation which prescribes how things must be done.
Regulative system: the system of social institutions which embodies the rules of routine action, procedure, roles and responsibilities.
Reification: the process by which a social concept or idea appears to exist independently of the social groups which create that concept.
Social identity: the sense of belonging and self-hood which is generated by being a member of a specific group.
Socialisation: the process of internalising and absorbing a group’s culture, norms, routines and values in order to participate as a member of that group.
Transactive memory system: a way of explaining how responsibility for knowledge in groups can be shared by using information directories to track down or use the right source of knowledge when knowledge is needed. | http://devguis.com/glossary-of-terms-from-social-theory-web-2-0-knowledge-technologies-and-the-enterprise.html
Social class refers to the grouping of individuals into positions on a stratified social hierarchy. Class is an object of analysis for sociologists, political scientists, anthropologists and social historians. However, there is not a consensus on the best definition of the term "class," and the term has different contextual meanings. In common parlance, the term "social class" is usually synonymous with socioeconomic status, which is one's social position as determined by income, wealth, occupational prestige, and educational attainment.
Common models used to think about social class include Marxist theory; the common three-stratum model, which divides society into the upper, middle, and working class; and structural-functionalism.
Class in Marxist Theory
According to the social theorist Karl Marx, class is a combination of objective and subjective factors. Objectively, a class shares a common relationship to the means of production. Subjectively, the members will necessarily have some perception of their similarity and common interests, called class consciousness. Class consciousness is not simply an awareness of one's own class interest but is also a set of shared views regarding how society should be organized legally, culturally, socially and politically.
In Marxist theory, the class structure of the capitalist mode of production is characterized by two main classes: the bourgeoisie, or the capitalists who own the means of production, and the much larger proletariat (or working class) who must sell their own labor power for wages. For Marxists, class antagonism is rooted in the situation that control over social production necessarily entails control over the class which produces goods—in capitalism this is the domination and exploitation of workers by owners of capital.
Weberian Class
The sociologist Max Weber formulated a three-component theory of stratification that saw political power as an interplay between "class", "status" and "group power". Weber theorized that class position was determined by a person's skills and education, rather than by their relationship to the means of production.
Weber derived many of his key concepts on social stratification by examining the social structure of Germany. He noted that, contrary to Marx's theories, stratification was based on more than simply ownership of capital. Weber examined how many members of the aristocracy lacked economic wealth yet had strong political power. Many wealthy families lacked prestige and power, for example, because they were Jewish. Weber introduced three independent factors that form his theory of stratification hierarchy: class, status, and power: class is a person's economic position in a society; status is a person's prestige, social honor, or popularity in a society; power is a person's ability to get his way despite the resistance of others. While these three factors are often connected, someone can have high status without immense wealth, or wealth without power.
The Common Three-Stratum Model
Contemporary sociological concepts of social class often assume three general categories: a very wealthy and powerful upper class that owns and controls the means of production; a middle class of professional or salaried workers, small business owners, and low-level managers; and a lower class, who rely on hourly wages for their livelihood.
The upper class is the social class composed of those who are wealthy, well-born, or both. They usually wield the greatest political power.
The middle class is the most contested of the three categories, consisting of the broad group of people in contemporary society who fall socioeconomically between the lower class and upper class. One example of the contestation of this term is that In the United States middle class is applied very broadly and includes people who would elsewhere be considered lower class. Middle class workers are sometimes called white-collar workers.
The lower or working class is sometimes separated into those who are employed as wage or hourly workers, and an underclass—those who are long-term unemployed and/or homeless, especially those receiving welfare from the state. Members of the working class are sometimes called blue-collar workers.
Consequences of Social Class
A person's socioeconomic class has wide-ranging effects. It may determine the schools he is able to attend, the jobs open to him, who he may marry, and his treatment by police and the courts. A person's social class has a significant impact on his physical health, his ability to receive adequate medical care and nutrition, and his life expectancy.
Class mobility refers to movement from one class status to another--either upward or downward. Sociologists who measure class in terms of socioeconomic status use statistical data measuring income, education, wealth and other indexes to locate people on a continuum, typically divided into "quintiles" or segments of 20% each. This approach facilitates tracking people over time to measure relative class mobility. For example, the income and education level of parents can be compared to that of their children to show inter-generational class mobility.
Social Class and Living Conditions
In the United States, neighborhoods are stratified by class such that the lower class is often made to live in crime-ridden, decaying areas.
French Estates
In France before the French Revolution, society was divided into three estates: the clergy, nobility, and commoners. In this political cartoon, the Third Estate (commoners) is carrying the other two on its back. | https://oer2go.org/mods/en-boundless/www.boundless.com/sociology/textbooks/boundless-sociology-textbook/global-stratification-and-inequality-8/systems-of-stratification-67/class-399-3427/index.html |
Abstract:
A method for controlling the operation of a plurality of consumer
electronic devices by displaying a plurality of broadcast channel
identifiers each corresponding to a broadcast channel in a display of a
controlling device adapted to command at least channel tuning operations
of the plurality of consumer electronic devices. Input is accepted into
the controlling device that functions to designate one of the plurality
of broadcast channel identifiers and the controlling device uses the
designation of the one of the plurality of broadcast channel identifiers
to cause a transmission of a wireless signal from the controlling device
to a one of the plurality of consumer electronic devices to thereby cause
the one of the plurality of consumer electronic devices to tune to the
broadcast channel corresponding to the designated one of the plurality of
broadcast channel identifiers. A condition associated with at least one
of the controlling device and the designated one of the plurality of
broadcast channel identifiers functions to determine the one of the
plurality of consumer electronic devices to which the wireless signal is
transmitted.
Claims:
1. A method for controlling the operation of a plurality of consumer
electronic devices, comprising: displaying a plurality of broadcast
channel identifiers each corresponding to a broadcast channel in a
display of a controlling device adapted to command at least channel
tuning operations of the plurality of consumer electronic
devices;accepting input into the controlling device that functions to
designate one of the plurality of broadcast channel identifiers; andand
using the designation of the one of the plurality of broadcast channel
identifiers to cause a transmission of a wireless signal from the
controlling device to a one of the plurality of consumer electronic
devices to thereby cause the one of the plurality of consumer electronic
devices to tune to the broadcast channel corresponding to the designated
one of the plurality of broadcast channel identifiers wherein a current
operating mode of the controlling device functions to determine the one
of the plurality of consumer electronic devices to which the wireless
signal is transmitted.
2. The method as recited in claim 1, wherein program schedule information
received via a network comprises the plurality of broadcast channel
identifiers.
3. The method as recited in claim 2, wherein the program schedule
information received via the network defines the broadcast channel
corresponding to each of the plurality of broadcast channel identifiers.
4. The method as recited in claim 1, wherein user input provided to the
controlling device defines the broadcast channel corresponding to each of
the plurality of broadcast channel identifiers.
5. The method as recited in claim 1, wherein the current operating mode is
indicative of a room in which the controlling device is currently being
used.
6. The method as recited in claim 1, wherein the current operating mode is
indicative of a user currently operating the controlling device.
7. The method as recited in claim 1, wherein the wireless signal is an IR
signal.
8. The method as recited in claim 1, wherein the wireless signal is an RF
signal.
9. The method as recited in claim 1, wherein the plurality of broadcast
channel identifiers are provided as a part of program schedule
information that is displayed in a touch screen of the controlling
device.
10. The method as recited in claim 9, wherein the program schedule
information is displayed in a grid of rows and columns in which each of
the plurality of broadcast channel identifiers occupies a row in a first
column in the grid and the programming information occupies further
columns in the grid in the same row as its corresponding one of the
plurality of broadcast channel identifiers with the programming
information being further arranged as a function of time.
11. The method as recited in claim 10, wherein the input designating one
of the plurality of broadcast channel identifiers comprises a user
interacting with the touch screen to select a row in the first column.
12. The method as recited in claim 1, wherein the wireless signal is
constructed according to a user preference.
13. The method as recited in claim 12, wherein the user preference
comprises requesting a transmission of an enter command as a part of the
wireless signal.
14. The method as recited in claim 12, wherein the user preference
comprises specifying a minimum number of digit indicators to be included
as a part of the wireless signal.
15. The method as recited in claim 1, wherein user input designating one
of a plurality of operating modes of the controlling device specifies to
the controlling devices the current operating mode of the controlling
device.
16. The method as recited in claim 5, wherein the one of the plurality of
consumer electronic devices to which the wireless signal is transmitted
is pre-associated with the room in which the controlling device is
currently being used via user input provided to the controlling device.
17. The method as recited in claim 6, wherein the one of the plurality of
consumer electronic devices to which the wireless signal is transmitted
is pre-associated with the user currently operating the controlling
device via user input provided to the controlling device.
18. A method for controlling the operation of a plurality of consumer
broadcast channel identifier wherein a broadcasting source associated
with the designated one of the plurality of broadcast channel identifiers
functions to determine the one of the plurality of consumer electronic
devices to which the wireless signal is transmitted.
19. The method as recited in claim 18, wherein program schedule
information received via a network comprises the plurality of broadcast
channel identifiers.
20. The method as recited in claim 19, wherein the program schedule
information received via the network defines the broadcast source that is
associated with each of the plurality of broadcast channel identifiers.
21. The method as recited in claim 18, wherein user input provided to the
controlling device defines the broadcast source that is associated with
each of the plurality of broadcast channel identifiers.
22. The method as recited in claim 18, wherein the wireless signal is an
IR signal.
23. The method as recited in claim 18, wherein the wireless signal is an
RF signal.
24. The method as recited in claim 18, wherein the plurality of broadcast
channel identifiers are provided as a part of program schedule
information that is displayed in a touch screen of the controlling
device.
25. The method as recited in claim 24, wherein the program schedule
26. The method as recited in claim 25, wherein the input designating the
one of the plurality of broadcast channel identifiers comprises a user
interacting with the touch screen to select a row in the first column.
27. The method as recited in claim 18, wherein the wireless signal is
constructed according to a user preference.
28. The method as recited in claim 27, wherein the user preference
comprises requesting a transmission of an enter command as a part of the
wireless signal.
29. The method as recited in claim 27, wherein the user preference
comprises specifying a minimum number of digit indicators to be included
as a part of the wireless signal.
30. A method for controlling the operation of a plurality of consumer
one of the plurality of broadcast channel identifiers wherein a condition
associated with at least one of the controlling device and the designated
one of the plurality of broadcast channel identifiers functions to
determine the one of the plurality of consumer electronic devices to
which the wireless signal is transmitted.
31. The method as recited in claim 30, wherein the condition is indicative
of a room in which the controlling device is currently being used.
32. The method as recited in claim 30, wherein the condition is indicative
of a user currently operating the controlling device.
33. The method as recited in claim 30, wherein the condition is indicative
of a broadcasting source associated with the designated broadcast channel
identifier.
[0005]In accordance with the description that follows, a system and method
is provided for navigating a program guide and/or for using a program
guide to command operation of an appliance. An understanding of the
objects, advantages, features, properties and relationships of the
invention will be obtained from the following detailed description and
accompanying drawings which set forth illustrative embodiments and which
are indicative of the various ways in which the principles of the
invention may be employed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006]For a better understanding of the various aspects of the invention,
reference may be had to preferred embodiments shown in the attached
drawings in which:
[0007]FIGS. 1-4 illustrate an exemplary program guide and an exemplary
system for navigating within the program guide;
[0013]FIGS. 14-18 illustrate an exemplary graphical user interface method
for configuring the hard keys of FIG. 12 to cause the transmission of
commands to command the operation of one or more appliances;
[0014]FIG. 19 illustrates an exemplary method for causing the transmission
of commands to command the operation of one or more appliances via
interaction with a program guide; and
[0015]FIGS. 20-21 illustrate an exemplary method for configuring device
transmissions in response to interaction with the program guide.
DETAILED DESCRIPTION
[0016]A universal remote control and program guide application are
provided for executing on a portable electronic device 10. By way of
example, representative platforms for the device 10 include, but are not
limited to, devices such as remote controls, lap-top computers, Web
Tablets and/or PDAs manufactured by HP/Compaq (such as the iPAQ brand
PDA), Palm, Visor, Sony, etc. Thus, a preferred underlying platform
includes a processor coupled to a memory system comprising a combination
of ROM memory, non-volatile read/write memory, and RAM memory (a memory
system); a key matrix in the form of physical buttons; an internal clock
and timer; a transmission circuit; a power supply; a touch screen display
to provide visible feedback to and accept input from a consumer; and I/O
circuitry for allowing the device to exchange communications with an
external computer such as server and/or client. Additional input
circuitry, such as a barcode reader, may also be utilized.
[0017]To control the operation of the device 10, the memory system
includes executable instructions that are intended to be executed by the
processor. In this manner, the processor may be programmed to control the
various electronic components within the device 10, e.g., to monitor
power, to cause the transmission of signals, etc. Within the memory
system, the ROM portion of memory is preferably used to store fixed
programming and data that remains unchanged for the life of the product.
The nonvolatile read/write memory, which may be FLASH, EEPROM,
battery-backed up RAM, "Smart Card," memory stick, or the like, is
preferably provided to store consumer entered setup data and parameters,
downloaded data, etc., as necessary. RAM memory may be used by the
processor for working storage as well as to hold data items which, by
virtue of being backed up or duplicated on an external computer (for
example, a client device) are not required to survive loss of battery
power. While the described memory system comprises all three classes of
memory, it will be appreciated that, in general, the memory system can be
comprised of any type of computer-readable media, such as ROM, RAM, SRAM,
FLASH, EEPROM, or the like in combination. Preferably, however, at least
part of the memory system should be non-volatile or battery backed such
that basic setup parameters and operating features will survive loss of
battery power. In addition, such memories may take the form of a chip, a
hard disk, a magnetic disk, and/or an optical disk without limitation.
[0018]For commanding the operation of appliances of different makes,
models, and types, the memory system may also include a command code
library. The command code library is comprised of a plurality of command
codes that may be transmitted from the device 10 under the direction of
application(s) for the purpose of controlling the operation of an
appliance. The memory system may also includes instructions which the
processor uses in connection with the transmission circuit to cause the
command codes to be transmitted in a format recognized by an identified
appliance. While the transmission circuit preferably utilizes infrared
transmissions, it will be appreciated that other forms of wired or
wireless transmissions, such as radio frequency, may also be used.
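Paragraph [0018] above describes a command code library held in the memory system together with instructions for formatting transmissions. Purely as an illustration of one way such a lookup might be organised, and not as the patent's actual implementation, a hypothetical C sketch follows; every type, field and name in it is an assumption.

```c
#include <stddef.h>

/* Hypothetical command-code library entry: maps a (device, function) pair to
 * the raw code and the signal format needed by the transmission circuit. */
typedef enum { FORMAT_IR_PULSE_WIDTH, FORMAT_IR_BIPHASE, FORMAT_RF } SignalFormat;

typedef struct {
    unsigned short device_id;    /* appliance identified during setup          */
    unsigned short function_id;  /* e.g. POWER, DIGIT_0..DIGIT_9, ENTER        */
    unsigned long  code;         /* raw command code payload                   */
    SignalFormat   format;       /* how the transmission circuit should send it */
} CommandCodeEntry;

/* Look up the code for a given appliance and function; returns NULL if the
 * library holds no entry for that combination. */
static const CommandCodeEntry *
lookup_command(const CommandCodeEntry *library, size_t n,
               unsigned short device_id, unsigned short function_id)
{
    for (size_t i = 0; i < n; i++) {
        if (library[i].device_id == device_id &&
            library[i].function_id == function_id)
            return &library[i];
    }
    return NULL;  /* not present in the library */
}
```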
[0019]To identify appliances by type and make (and sometimes model) such
that application(s) of the device 10 are adapted to cause the
transmission of command codes in the format appropriate for such
identified appliances, information may be entered into the device 10.
Since methods for setting up an application to cause the transmissions of
commands to control the operation of specific appliances are well-known,
they will not be described in greater detail herein. Nevertheless, for
additional details pertaining to remote control setup, the reader may
turn to U.S. Pat. Nos. 6,225,938, 4,623,887, 5,872,562, 5,614,906,
4,959,810, 4,774,511, and 4,703,359 which are incorporated herein by
reference in their entirety.
[0020]To cause the device 10 to perform an action, the device 10 is
adapted to be responsive to events, such as a sensed consumer interaction
with one or more keys on the key matrix, a sensed consumer interaction
with the touch screen display, or a sensed signal from an external source
such as a remote computer. In response to an event, appropriate
instructions within the memory system are executed. For example, when a
hard or soft command key associated with a remote control application is
activated on the device 10, the device 10 may read the command code
corresponding to the activated command key from the memory system and
transmit the command code to an appliance in a format recognizable by the
appliance. It will be appreciated that the instructions within the memory
system can be used not only to cause the transmission of command codes to
appliances but also to perform local operations. While not limiting,
local operations that may be performed by the device that are related to
the remote control functionality include favorite channel setup, macro
button setup, command function key relocation, etc. Examples of such
local operations can be found in U.S. Pat. Nos. 5,481,256, 5,959,751,
6,014,092, which are incorporated herein by reference in their entirety.
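By way of illustration only, the event-to-transmission flow just described can be sketched in a few lines of TypeScript. Everything here (the library layout, the transmitIR stand-in, and all identifiers) is an assumption made for the sketch and is not taken from the disclosure or from any actual implementation.

```typescript
// Minimal sketch of the event handling described above: a key event is
// resolved to a command code in the library and handed to the transmission
// circuit. All names and data shapes are illustrative assumptions.

interface CommandCodeLibrary {
  // command codes keyed by appliance, then by logical function name
  [appliance: string]: { [func: string]: number[] };
}

const library: CommandCodeLibrary = {
  "tv-brand-x": {
    powerToggle: [0x20, 0xdf, 0x10, 0xef],
    volumeUp: [0x20, 0xdf, 0x40, 0xbf],
  },
};

// Stand-in for the transmission circuit (e.g. an infrared output driver).
function transmitIR(code: number[]): void {
  console.log(`transmitting ${code.map((b) => b.toString(16)).join(" ")}`);
}

function handleKeyEvent(appliance: string, func: string): void {
  const code = library[appliance]?.[func];
  if (code) {
    transmitIR(code); // sent in the format stored for the identified appliance
  } else {
    console.warn(`no command code stored for ${appliance}/${func}`); // could be a local operation instead
  }
}

handleKeyEvent("tv-brand-x", "volumeUp");
```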
[0021]As discussed, the platform of the device 10 preferably comprises a
general purpose, processor system which is controllable by software. The
software may include routines, programs, objects, components, and/or data
structures that perform particular tasks that can be viewed as an
operating system together with one or more applications. The operating
system, such as the "Windows CE" brand operating system or the like,
provides an underlying set of management and control functions which are
utilized by applications to offer the consumer functions such as a
calendar, address book, spreadsheet, notepad, Internet browsing, etc., as
well as control of appliances. Thus, it is to be understood that
applications in addition to or complementary with the remote-control-like
application can also be supported by the device 10 and, as such, in terms
of the internal software architecture, the remote-control-like
application may be but one of several possible applications which may
co-exist within the device 10.
[0022]In terms of providing operating system functionality, it should also
be understood that the demarcation between the device 10 and a
host/client computer, described in greater detail hereinafter, may vary
considerably from product to product. For example, at one extreme the
device 10 may be nothing more than a slave display and input device in
wireless communication with a computer that performs all computational
functions. At the other extreme, the device 10 may be a fully-functional
computer system in its own right complete with local mass storage. It is
also to be appreciated that a hardware platform similar to that described
above may be used in conjunction with a scaled-down operating system to
provide remote control functionality only, i.e., as a standalone
application. In all cases, however, the principles expressed herein
remain the same.
[0023]To provide a means by which a consumer can interact with the device
10, the device 10 is preferably provided with software that implements a
graphical user interface. The graphical user interface software may also
provide access to additional software, such as a browser application,
that is used to display information that may be received from an external
computer. Such a graphical user interface system is described in pending
U.S. application Ser. Nos. 09/905,396, 60/334,774, and 60/344,020 all of
which are incorporated herein by reference in their entirety.
[0024]For simplifying the process of navigating a downloaded program
guide, which would be comprised of a grid of channels, times, and program
information, the device 10 utilizes a program guide interface that takes
advantage of the touch-screen style display. In particular, the program
guide interface is designed to overcome one of the more annoying aspects
associated with presently known program guides which results when
consumers attempt to step from one channel (or time) to another channel
(or time) that is relatively far away within a program guide. In
particular, to navigate within presently known program guides, the
consumer must repetitively press a navigation key, such as up/down, page
up/down, time +/-, day forward/back, etc. As will be appreciated,
navigation in this manner becomes increasingly tedious and frustrating to
consumers as the number of entries within the program guide expands
(e.g., with the addition of digital cable channels, satellite channels,
etc.).
[0025]To address this problem, the user interface of the device 10
provides a horizontal slider 2 and a vertical slider 3 that, as
illustrated in FIG. 1, allows for ease of movement through channels and
times that are contained within the program guide. When a slider 2/3 is
first touched with a stylus, finger, or the like, (i.e., a first user
input is received) a banner 4 pops up next to the slider 2/3. The banner
4 includes a representation that corresponds to the current position of
the slider as illustrated in FIGS. 2 and 3. As will be appreciated, the
current, relative position of the slider 2/3 within the slider bar is
representative of the guide information currently being displayed
relative to the entirety of information within a given program guide that
is displayable.
[0026]When a slider 2/3 is moved, the information in the banner 4 is
preferably continuously updated to display the relative position of the
slider 2/3 within the slider bar so as to provide an indication of the
guide information that would be displayed relative to the entirety of
information within a given program guide that is displayable should the
slider 2/3 be released. For example, the banner 4 might indicate a
channel corresponding to the current position of the slider 3 (e.g.,
channel program information that would be displayed at the top of the
display as the starting point of the displayed information) or the banner
4 might indicate a time corresponding to the current position of the
slider 2 (e.g., program information for a time period that would be
displayed at a side of the display as the starting point of the displayed
information). It is further preferred that the underlying information
that is displayed not be changed as a slider 2/3 is moved until the
slider 2/3 is released (e.g., the stylus is lifted off the slider as a
second user input) as illustrated in FIG. 4. In this way a consumer that
wishes to change the channel program information being viewed from, for
example, CBS (channel 2) to BBC America (channel 264), need only grab the
slider 3, move the slider 3 vertically until it shows "BBCA 264," and
then release the slider 3. A similar approach applies to the time slider
2, which allows the consumer to move the program guide display horizontally to any
hour in the current day. It will be appreciated that the second user
input that results in the changing of the displayed grid information may
also require acts in addition to or in lieu of the user merely releasing
the slider (e.g., a double tap of the slider, activation of another icon,
etc.).
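As a rough sketch of this interaction pattern, the handlers below update a banner while the slider moves and only change the underlying guide view on release. The channel list, the 0-to-1 slider position convention, and all names are assumptions of this illustration, not details from the disclosure.

```typescript
// Sketch of the slider behaviour described above: the banner tracks the
// slider continuously, but the underlying guide grid is only redrawn when
// the slider is released.

const channels = ["CBS 2", "NBC 4", "ABC 7", "BBCA 264", "TLC 280"];

interface GuideView {
  topChannelIndex: number;
}

const view: GuideView = { topChannelIndex: 0 };

function indexFor(position: number): number {
  // position is the slider's relative location in its bar, 0..1
  return Math.min(channels.length - 1, Math.round(position * (channels.length - 1)));
}

function bannerTextFor(position: number): string {
  return channels[indexFor(position)];
}

function onSliderTouch(position: number): string {
  // First user input: pop up the banner next to the slider.
  return bannerTextFor(position);
}

function onSliderMove(position: number): string {
  // Banner is continuously updated; the grid itself is left unchanged.
  return bannerTextFor(position);
}

function onSliderRelease(position: number): void {
  // Second user input: only now is the displayed grid information changed.
  view.topChannelIndex = indexFor(position);
}

console.log(onSliderTouch(0));   // "CBS 2"
console.log(onSliderMove(0.75)); // "BBCA 264"
onSliderRelease(0.75);           // grid now starts at BBC America
```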
[0027]To accommodate consumers who prefer the old way of navigating
through the information in the program guide, the interface allows the
consumer to tap the arrows 6 at the ends of each slider bar to
move/scroll the information one logical page either vertically or
horizontally. In the illustrated example, a logical page vertically would
comprise 5 rows of channels and one logical page horizontally would
comprise a one hour time period. When moving through the program guide
grid in this manner, the position of the sliders 2/3 should be updated to
reflect the current, relative information being displayed. During this
procedure, it is not necessary for a banner 4 to be displayed.
[0028]To allow the consumer to change the time period for the entirety of
the displayable program guide information, e.g., to change days, the
interface may provide two options. First, if the time slider (e.g.,
horizontal slider 2) is moved all the way to its slider bar limit (e.g.,
the right which is illustrated as corresponding to 11:00 PM--i.e., the
end of the current displayable information) and the arrow 6 on the slider
bar adjacent to the limit is clicked, the guide information rolls over to
the next time period (e.g., 00:00 AM) and the time slider is
automatically repositioned to the start of the slider bar (e.g., the
extreme left hand side). A similar procedure performed in the reverse
direction would be utilized to change the program guide information that
is displayable to an earlier time period.
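A minimal sketch of this roll-over rule follows; the state shape and the 0-to-1 position convention are assumptions of the illustration, not details from the disclosure.

```typescript
// Sketch of the day roll-over described above: when the time slider sits at
// its limit and the adjacent arrow is tapped, the guide advances to the next
// time period and the slider snaps back to the start of its bar.

interface TimeSliderState {
  position: number;  // 0 = start of bar, 1 = limit (e.g. 11:00 PM)
  dayOffset: number; // 0 = today, 1 = tomorrow, ...
}

function onArrowAtLimit(state: TimeSliderState, direction: 1 | -1): TimeSliderState {
  const atForwardLimit = direction === 1 && state.position >= 1;
  const atBackwardLimit = direction === -1 && state.position <= 0;
  if (atForwardLimit || atBackwardLimit) {
    // Roll over to the adjacent time period and reposition the slider.
    return {
      dayOffset: state.dayOffset + direction,
      position: direction === 1 ? 0 : 1,
    };
  }
  // Otherwise the arrow just scrolls one logical page (not shown here).
  return state;
}

console.log(onArrowAtLimit({ position: 1, dayOffset: 0 }, 1)); // { dayOffset: 1, position: 0 }
```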
[0029]Alternatively, the consumer may activate (i.e., touch) a "calendar"
icon 8 which is illustrated at the bottom left of the display to the left
of the date. When the calendar icon 8 is touched, a calendar display 11
can be caused to appear as illustrated in FIG. 5. The calendar display 11
allows two purposes to be served. First, the calendar display 11 allows
the consumer to go directly to any day shown by simply selecting that day
on the calendar. Second, the calendar display 11 can serve as an
indication to the consumer of how many days worth of program guide
information remains when, for example, the consumer needs to dock the
device 10 and/or log onto a Web site in order to download guide
information as described in pending application No. 60/390,286 that is
incorporated herein by reference in its entirety. In the example shown in
FIG. 5, the calendar display 11 informs the consumer that the guide
information currently being displayed is for April 2nd, this
indication being made by providing distinct coloring, shading, etc. to
the date, for example. The calendar display 11 may further inform the
consumer that the consumer last downloaded two weeks' worth of guide data
on March 22nd and has not logged on/synchronized with the guide
database since then, these dates being indicated by being labeled, for
example. Thus, as illustrated, the consumer is informed that they only
have two more days of current information left (April 3 and 4)--which is
indicated by the days following April 4 being labeled and not being
highlighted, for example. The consumer can navigate immediately to gain
access to program guide information, i.e., the programming grid, for
either of the highlighted days, April 3 or April 4 (or, for that matter,
to any of the days already past) by selecting that data on the calendar
page 11. Thus, it will be appreciated that the calendar page 11 functions
as a visual gauge to display the amount of schedule information remaining
and serves as a reminder that the consumer should refill this
information.
[0030]Once the consumer has positioned the program guide to the desired
time/channel information, touching a channel button 13 (e.g., the left
column) can cause the device 10 to instruct an appliance to immediately
switch to that channel (i.e., to send the IR command(s) to switch to that
channel.) Furthermore, touching a program name 15, "Friends" in the
example shown, can cause the display of additional information in a
window 17, for example, regarding that program. When information for a
program that is scheduled to air some time in the future is displayed, a
"Remind me" checkbox 19, or other known GUI element, can also be
presented to the consumer. Selecting this checkbox 19 can be used to
cause a reminder to be automatically entered into a calendar application
supported by the device 10.
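The two guide interactions just described (tune on tap, reminder on checkbox) can be sketched as follows; the transmit and reminder stand-ins and all names are assumptions of this illustration.

```typescript
// Sketch of the guide interactions described above: tapping a channel button
// sends the tune command immediately, and ticking "Remind me" files a
// calendar entry in the device's calendar application.

interface Program {
  name: string;
  channelNumber: number;
  startsAt: Date;
}

function sendTuneCommand(channelNumber: number): void {
  console.log(`IR: tune to channel ${channelNumber}`);
}

function onChannelButton(channelNumber: number): void {
  sendTuneCommand(channelNumber); // switch the appliance immediately
}

const reminders: { title: string; when: Date }[] = [];

function onRemindMeChecked(program: Program): void {
  // Entered automatically into the calendar application.
  reminders.push({ title: `Watch ${program.name}`, when: program.startsAt });
}

onChannelButton(2);
onRemindMeChecked({ name: "Friends", channelNumber: 2, startsAt: new Date("2003-04-03T20:00:00") });
```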
[0031]For improving the visibility of programming information contained
within the program guide, particularly for consumers with imperfect
eyesight, and/or on devices such as high-resolution Web tablets capable
of displaying a large amount of information on a relatively small screen
area, the device 10 can provide a means for accessing an enlarged or
zoomed representation of a portion of the program guide. To this end, as
illustrated in FIGS. 7 and 8, a "zoom" button 12 may be provided as a
soft key at a convenient location within the display, for example, in the
illustrated bottom of the display, adjacent the display as a hard key,
etc. In response to an activation of the "zoom" button 12, e.g., by
touching a softkey with a stylus 20, with a finger, moving a cursor over
the icon and activating/clicking a hard key, etc. as illustrated in FIG.
8, an enlarged display 30 comprising a subgroup of the displayable
program guide information is presented to the user, as illustrated in
FIG. 9.
[0032]The portion of the program guide displayed as the enlarged portion
30 may be used to display a predetermined amount of programming
information to the user (e.g., programming information related to a
predetermined range of channels and/or predetermined time periods) or
simply contain as much programming information as can be fit into the
enlarged display based upon the font size, etc. selected for use in the
zooming application. Furthermore, the specific programming information
that is contained within the enlarged portion 30 of the program guide may
also be predetermined (e.g., based only upon the portion of the program
guide that is visible within the display prior to enlargement--see FIG.
7) or established using user-preferences. For example, the enlarged
portion of the program guide 30 may comprise an enlarged view of
programming information (i.e., programming grid cells) for a
predetermined number of channels (in the illustrated example the number
is 5) commencing from a predetermined starting channel number (in the
example, the starting channel number "72" corresponds to the channel
number that is at a predetermined position--such as at the top of the
un-enlarged guide as illustrated in FIG. 7). Similarly, the enlarged
portion of the program guide 30 may comprise an enlarged view of
programming information for one or more channel listings over a
predetermined time period (in the illustrated example 2 hours) commencing
from a predetermined starting time (in the example, the starting time
corresponds to a predetermined time--such as the time at the left most
portion of the un-enlarged guide as illustrated in FIG. 7). It is also
contemplated that the predetermined time could be a time commencing with
a current time that is maintained within the device 10. As noted, the
enlarged guide portion 30 may also contain programming information that
is consumer-specified, such as programming information pertaining to
consumer specified favorite channels (either commencing at a
consumer-specified channel, including only those specified by a consumer,
those determined to be most selected by a consumer, etc.) and/or
user-specified favorite times.
[0033]For the purpose of demonstrating to the consumer that the device 10
is in zoom mode, i.e., the display is showing an enlarged portion of the
program guide, the appearance of the icon 12 may be changed. By way of
example, the icon 12 can be presented with a line through it to show an
activated condition as illustrated by the icon 32 of FIG. 9. In this
case, the icon in question acts as a toggle to switch in and out of zoom
mode and thus the representation illustrated by 32 in FIG. 9 is used to
indicate that the next activation of this icon will cancel the zoom mode.
It is also contemplated that the color of the icon can be changed, the
icon can be flashed, etc.
[0034]The programming information contained within the enlarged portion 30
of the program guide may also be determined based upon interaction with
the un-enlarged program guide by the consumer. For example, the consumer
may indicate a desire to enter the zoom mode (e.g., by touching the zoom
icon which readies the device for zooming, which readiness may be
indicated to the user by the display of an icon having a changed or
changing appearance) followed by the consumer indicating a location
within the un-enlarged program guide that the consumer wishes to have
enlarged. The indication of the location may be provided by the consumer
using the graphical user interface (e.g., touching a location on the
display with a finger or stylus as illustrated in FIG. 10) to select a
cell or area of cells of interest within the displayed un-enlarged guide,
by moving the scroll bars, etc. Upon receiving the indication, the
software causes appropriate programming information to be displayed in
the enlarged portion 30 of the program guide. In the illustrated example,
the touching of the "Dark Shadow" cell within the un-enlarged program
guide may cause the enlarged portion 30 to present programming
information that commences with channel 74 and time 11:00 am. It is to be
understood that the user may navigate within the un-enlarged program
guide to find channels and/or times of interest before performing the
step of indicating which cell or cells should be enlarged. It will also
be appreciated that this two step process, i.e., indicating a desire to
enlarge the program guide followed by another user interaction with the
device, can result in the display of predetermined information within the
enlarged portion 30 as described above, e.g., favorites, programming
information commencing with the channel and time in the upper left most
corner of the displayed un-enlarged program guide, etc.
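As an illustrative sketch of this two-step zoom, the first input arms zoom mode and the second anchors the enlarged window at the touched cell. The 5-channel by 2-hour window size and all identifiers are assumptions of the sketch.

```typescript
// Sketch of the two-step zoom described above: the first input arms zoom
// mode, the second (touching a cell) picks the anchor for the enlarged view.

interface ZoomWindow {
  startChannel: number;
  startTime: string;
  channelCount: number;
  hours: number;
}

let zoomArmed = false;

function onZoomIconTouched(): void {
  zoomArmed = !zoomArmed; // icon appearance would change to show readiness
}

function onCellTouched(channel: number, time: string): ZoomWindow | null {
  if (!zoomArmed) return null; // normal (un-enlarged) interaction
  // Enlarged portion commences at the touched cell, e.g. 5 channels x 2 hours.
  return { startChannel: channel, startTime: time, channelCount: 5, hours: 2 };
}

onZoomIconTouched();
console.log(onCellTouched(74, "11:00")); // enlarged view anchored at channel 74, 11:00 am
```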
[0035]To change the programming information that is presented within the
enlarged portion 30 of the program guide, the user can exit the zoom
mode, by retouching the icon 32 for example, and then reinitiating the
zoom feature at a different location within the un-enlarged program
guide, for example, when a two-step process is utilized. Alternatively,
the graphical user interface may be used in a manner that indicates to the
device that the user wishes to scroll the program guide within the
enlarged display area 30 thus changing the portion of the program guide
shown there within. This indication can be performed using standard GUI
techniques such as associating scroll bars with the display of the
enlarged portion 30, scrolling as a result of following the movement of a
finger or stylus within the display, etc. In this manner, the consumer
may conveniently navigate within the enlarged display portion 30 just as
a consumer can navigate within the un-enlarged display portion.
[0036]It is to be further understood that the zooming feature may also be
used to present control function icons and/or other aspects of the
graphical user interface in a larger, more prominent manner without
limitation. An example of an enlarged icon is illustrated as icon 12 in
FIG. 11.
[0037]To transmit command codes to an appliance (or perform local
operations), a consumer may activate hard keys 70, for example, at the
bottom of the device 10 in the exemplary platform illustrated in FIGS. 12
and 13. In this illustrated example, four individual buttons 70a and one
5-way rocker button 70b (4 directions plus a "press to select") comprise
the hard keys 70. The remote control application allows commonly-used
functions to be mapped onto the hard keys 70. For example, operations
such as "Controls" (Volume, Channel +/-, mute), "Navigation" (directional
arrows and select), or "Transport" (Play, fast forward, rewind, etc) can
be mapped onto the keys 70. The current operations to be performed in
response to activation of keys 70 can be presented in a display 72, shown
in FIG. 13, for example, by pressing one of the hard keys 70 which is
permanently assigned to the function of displaying key assignments. In
the case of the illustrated platform, the display assignments key 70 is
shown to be the upper one of the two right-hand individual keys (labeled
"keys" in the display 72).
[0038]Referring now to FIG. 14, the operations assigned to the keys 70 can
be changed by the consumer activating a command button, e.g., icon 76
shown at the lower right corner of the exemplary screen shot. Activation
of the command icon 76 can be used to pop-up a menu 78, an example of
which is illustrated in FIG. 15, by which the user may change the
operations mapped to the keys 70. By way of illustration, the first three
items on the menu correspond to the three possible assignments for the
hard keys, e.g., the keys 70 may have operations mapped to them such that
activation of the keys cause the device 10 to transmit command codes to
command "navigation," "control" or "transport" functions of a target
appliance. The fourth menu choice, "Master Control," allows the consumer
to specify a specific target appliance to which any transmitted command
codes are to be sent (i.e., the command codes are formatted so as to be
understood by the target appliance). The default, in the absence of any
user setup, can be to simply have the device 10 transmit command code
signals in a format appropriate for a target appliance that has been
designated for the current device mode of the platform, i.e., the device
mode indicated by the icon 80 at the top of the device mode wheel 82.
Selecting the "Master Control" item of the menu 78 may be used to start a
Master Control Setup Wizard, an example of which is illustrated in FIGS.
16 and 17.
[0039]Turning to FIG. 16, the Master Control Setup Wizard may present to a
consumer one or more drop-down lists 84 by which the consumer can select
the target appliance for any transmitted command signals, e.g., signals
to be used for each of channel changing, volume control, and transport
functions. Preferably, the assignments performed using the Master Control
Setup Wizard are only with respect to the hard keys 70. FIG. 18 shows an
example drop-down list from which the user may select the target device
for signals to command channel changing operations. While not intended to
be limiting, the choices illustrated in FIG. 18 include only devices
which have been set up by the user in connection with configuring the
device mode wheel 82 (as described in U.S. application Ser. No.
60/334,774, which is incorporated herein by reference in its entirety);
these are the appliances the device 10 has been set up to control the
operation of.
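A compact sketch of the key-mapping and Master Control assignments described above follows; the operation-set names come from the text, but the appliance names and data shapes are assumptions of the illustration.

```typescript
// Sketch of the hard-key mapping and Master Control assignments described
// above: one of three operation sets is mapped onto the keys, and each
// function class can be routed to its own target appliance.

type KeyMap = "navigation" | "control" | "transport";

interface MasterControl {
  channelChanging: string; // target appliance for channel commands
  volumeControl: string;
  transport: string;
}

let activeKeyMap: KeyMap = "control";

const masterControl: MasterControl = {
  channelChanging: "cable set-top box",
  volumeControl: "av receiver",
  transport: "pvr",
};

function onMenuSelection(choice: KeyMap | "masterControl"): void {
  if (choice === "masterControl") {
    // Would start the Master Control Setup Wizard (drop-down lists per function).
    console.log("launch Master Control Setup Wizard");
  } else {
    activeKeyMap = choice; // remap the hard keys
  }
}

onMenuSelection("transport");
console.log(activeKeyMap, masterControl.transport); // transport keys now command the PVR
```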
[0040]To transmit commands to tune a target appliance to a specified
channel using a program guide, an example of which is illustrated in FIG.
19, a user need only select or activate a channel button 86 which, in the
illustrated example, is a soft key in the leftmost column of the program
guide grid. As noted previously, activation of a channel button 86 will
cause the device 10 to transmit a command signal, for example using an IR
protocol, to command the target appliance to switch to the selected
channel. The channel number to tune to in response to activation of a
channel button 86 is preferably pre-calculated at the time the guide is
downloaded from a Web site, or the like. In this regard, the guide
information is populated, for example, based on the zip code and service
provider information submitted by the user when registering for the
service as described in U.S. application Ser. No. 60/390,286 that is
incorporated herein by reference in its entirety. Accordingly, in the
illustrated example, activation of the "TLC" channel button in the
downloaded program guide would send a command to cause the target
appliance to tune to channel 280 which is the channel on which the
specified service provider is known to carry TLC content.
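The pre-calculation step described above can be sketched as a simple lookup performed at download time; the lineup data here is invented for the illustration and is not real provider data.

```typescript
// Sketch of the pre-calculation described above: when the guide is
// downloaded, each listed network is resolved to the channel number used by
// the consumer's service provider, so a tap can transmit digits directly.

interface GuideCell {
  network: string;
  channelNumber: number; // resolved at download time
}

const providerLineup: Record<string, number> = { TLC: 280, BBCA: 264, CBS: 2 };

function buildGuideColumn(networks: string[]): GuideCell[] {
  return networks
    .filter((n) => n in providerLineup)
    .map((n) => ({ network: n, channelNumber: providerLineup[n] }));
}

console.log(buildGuideColumn(["TLC", "CBS"]));
// tapping the "TLC" button later simply transmits the digits for 280
```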
[0041]When transmitting a command to cause the appliance to tune to a
specified channel, the default channel changing operation comprises
sending the desired channel digits (a minimum of two, i.e., single digit
numbers are prefixed with a leading zero) as a sequence of IR data
commands in the format of the channel changing device specified in the
Master Control Setup. Accordingly, the actual physical transmission of
the sequence of commands is no different than playing back a
user-programmed macro. The consumer may be provided, however, with the
capability to adjust this default sequence described above if necessary
to suit his particular equipment. Any such adjustment may be performed,
for example, by touching a "setup" command button 88 (e.g., the wrench
icon in the lower left corner of the guide display screen). Activation of
the setup command button 88 can be used to present to the consumer a
pop-up, setup menu 90 as illustrated in FIG. 20. As can be seen from the
menu listings, among the setup choices can be provided a choice to allow
the content of the program guide to be filtered and arranged for display.
Furthermore, selecting a choice labeled "Options" can be used to invoke a
configuration screen illustrated in FIG. 21.
[0042]Using the configuration screen, the consumer may change the master
channel tuning device which may offer the same list of appliances and
would affect the same parameter as the "Master Control Setup" wizard
described earlier in connection with FIGS. 15-18. Additionally, the
configuration screen can be used to allow a consumer to modify how the IR
command sequence is to be constructed, i.e., allow the consumer to vary
the minimum number of digits to be sent and to specify if an "enter"
command is to be transmitted after the final digit is transmitted. (The
"enter" function is mandatory for a few appliance brands, in others it is
optional but often will speed up the channel changing response if used).
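A short sketch of the default digit-sequence construction described above, including the minimum-digit padding and the optional "enter" command, is given below; the command labels are assumptions of the illustration.

```typescript
// Sketch of the default channel-digit sequence described above: the channel
// number is padded to a minimum number of digits and optionally followed by
// an "enter" command, then sent like a played-back macro.

interface ChannelChangeOptions {
  minDigits: number;  // default 2: single digits get a leading zero
  sendEnter: boolean; // mandatory for some brands, optional for others
}

function buildChannelSequence(channel: number, opts: ChannelChangeOptions): string[] {
  const digits = channel.toString().padStart(opts.minDigits, "0").split("");
  const sequence = digits.map((d) => `DIGIT_${d}`);
  if (opts.sendEnter) sequence.push("ENTER");
  return sequence;
}

console.log(buildChannelSequence(2, { minDigits: 2, sendEnter: false }));  // ["DIGIT_0", "DIGIT_2"]
console.log(buildChannelSequence(280, { minDigits: 3, sendEnter: true })); // ["DIGIT_2", "DIGIT_8", "DIGIT_0", "ENTER"]
```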
[0043]It will be appreciated that setup menus can also be provided to
allow a consumer to specify a target device for command(s) transmitted by
interacting with the program guide that need not be tied to the intended
target appliance associated with the hard keys 70. Additionally,
provision may be made for specifying multiple target appliances for use
with the program guide. For example, when the platform is setup in a mode
to command appliances in a first room, a first target appliance can be
associated with the program guide and when the platform is setup in a
mode to command appliances in a second room, a second target appliance
can be specified to be associated with the program guide. Similarly,
different target appliances can be specified to be the intended target
for commands transmitted as a result of interacting with the program
guide for each individual consumer setup to use the platform. Still
further, if the program guide is adapted to show information from
multiple sources, e.g., cable and satellite, different target appliances
can be associated with different listings within the program guide as a
function of the programming source.
[0044]While specific embodiments of the invention have been described in
detail, it will be appreciated by those skilled in the art that various
modifications and alternatives to those details could be developed in
light of the overall teachings of the disclosure. For example, the
programming grid need not be limited to channels being displayed in
horizontal rows and times in columns. Rather, the principles expressed
herein would be useful in connection with any manner for displaying
program information that allows the information displayed to be changed
or scrolled. Accordingly, the particular arrangement disclosed is meant
to be illustrative only and not limiting as to the scope of the invention
which is to be given the full breadth of the appended claims and any
equivalents thereof.
[0045]All of the cited references are incorporated herein by reference in
their entirety.
Alpha Brain Wave Frequency Could Predict Predisposition To Pain
Alpha brain wave frequency can be used as a measure of a person’s vulnerability to developing and experiencing pain, researchers at the University of Birmingham in the UK and University of Maryland in the US have found.
Personal experience of pain is highly variable among individuals, even in instances where the underlying injury is assessed to be identical. Previous research has found some genetic factors influence pain susceptibility, but methods to accurately predict pain level consequent to medical intervention such as chemotherapy or surgery are lacking.
This study aimed to see whether, from the resting brain activity of a healthy individual, it was possible to predict how much pain they would report once prolonged pain had been induced.
Robust Thermal Hyperalgesia
The researchers induced the pain by applying a capsaicin paste – an ingredient found in hot chili peppers – to study participants’ left forearms and then heating the area. Topical capsaicin exposure induces ‘robust thermal hyperalgesia’ – a common symptom in chronic pain.
All 21 participants in the study were induced into a state of prolonged pain for around an hour.
Using an electroencephalogram (EEG) – a non-invasive test used to find problems related to the electrical activity of the brain – the researchers found that across all 21 study participants, those who had a slower frequency of alpha brain waves recorded before the pain reported being in much more pain than those who had a faster frequency of alpha brain waves.
The researchers also recorded the activity of alpha brain waves during the experience of pain; if alpha frequency increased (relative to the no-pain condition), the individuals reported being in less pain than when alpha frequency decreased.
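As a rough illustration of the kind of measure being discussed, the snippet below picks an individual alpha frequency as the peak of a resting EEG power spectrum within an assumed 8-12 Hz band. The band limits, the spectrum values, and the overall pipeline are assumptions of this sketch, not the study's actual methods.

```typescript
// Minimal sketch: find the frequency with the most power inside the alpha
// band of a precomputed resting EEG power spectrum.

function peakAlphaFrequency(
  power: number[],    // power spectral density estimates
  freqStepHz: number, // frequency resolution of the spectrum
  band: [number, number] = [8, 12],
): number {
  let bestFreq = band[0];
  let bestPower = -Infinity;
  for (let i = 0; i < power.length; i++) {
    const f = i * freqStepHz;
    if (f >= band[0] && f <= band[1] && power[i] > bestPower) {
      bestPower = power[i];
      bestFreq = f;
    }
  }
  return bestFreq;
}

// e.g. a toy spectrum with 0.5 Hz resolution; the peak here falls at 9.5 Hz
const spectrum = Array.from({ length: 40 }, (_, i) => Math.exp(-((i * 0.5 - 9.5) ** 2)));
console.log(peakAlphaFrequency(spectrum, 0.5)); // 9.5
```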
Slow Alpha Frequency And Pain
Co-senior author Dr. Ali Mazaheri, of the University of Birmingham’s Center for Human Brain Health, said:
“Here we observe that an individual’s alpha frequency can be used as a measure of an individual’s predisposition to developing pain. This has a direct relevance to understanding what makes an individual prone to chronic pain after a medical intervention, such as surgery or chemotherapy.
Potentially this means we could be able to identify which individuals are more likely to develop pain as a result of a medical procedure and take steps early on in formulating treatment strategies in patients likely to be predisposed to developing chronic pain.”
Dr. David Seminowicz and Andrew Furman, of the University of Maryland in the US, were also authors of the report.
Andrew Furman said:
“Alpha frequency has been found to be slower in individuals who have experienced chronic pain. So the fact we observed that the slowing down of alpha activity as a result of pain correlated with the intensity of an individual’s pain report was not that unexpected. What was very surprising though, was that prior to the pain — that is pain-free alpha frequency — could predict how much pain individuals would experience.”
https://reliawire.com/alpha-wave-pain-assessment/
Valve: Low latency fundamental to virtual reality
'Someone has to step up and change the hardware rules to bring display latency down', says Michael Abrash
Low latency is fundamental to the success of augmented and virtual reality but is not possible with current hardware, claims Valve’s Michael Abrash.
In a blog post, Abrash said that latency was the enemy of virtual registration, assuming that there was accurate and consistent tracking.
He explained that if too much time elapses between when your head turns and when the image is redrawn to account for the new pose, the virtual image drifts far enough that it is perceived to clearly wobble.
Abrash said that without latency at the level research indicated was needed (15ms, or perhaps even as low as 7ms), it was impossible to deliver good experiences through augmented and virtual reality.
He added that even being right 99 per cent of the time was not good enough, as the brain would still register the occasional error with the screen so close to the user.
“The key to this is that virtual objects have to stay in very nearly the same perceived real-world locations as you move; that is, they have to register as being in almost exactly the right position all the time,” said Abrash.
“Being right 99 per cent of the time is no good, because the occasional mis-registration is precisely the sort of thing your visual system is designed to detect, and will stick out like a sore thumb.”
He went on to say that the challenge of achieving low latency required new hardware, as many games generally have latency from mouse movement to screen update of around 50ms or higher, while from his own experience, more than 20ms "is too much for VR and especially AR". Higher latency, he says, also seems to be connected with simulator sickness.
“AR/VR is so much more latency-sensitive than normal games because they’re expected to stay stable with respect to the real world as you move, while with normal games, your eye and brain know they’re looking at a picture,” said Abrash.
“With AR/VR, all the processing power that originally served to detect anomalies that might indicate the approach of a predator or the availability of prey is brought to bear on bringing virtual images that are wrong by more than a tiny bit to your attention. That includes images that shift when you move, rather than staying where they’re supposed to be - and that’s exactly the effect that latency has.”
Abrash added that it was difficult to deliver high enough resolution, an appropriate image size, and technology small enough to be used by consumers, while also keeping the cost low.
He said he hoped that with the upcoming release of the Oculus Rift, which has been praised by a number of developers and is highly anticipated in the virtual reality space, the VR market would take off and the industry would be a step closer to new hardware that finally solves the problems facing the space.
“There is no way to get low enough display latency out of existing hardware that also has high enough resolution, low enough cost, appropriate image size, compact enough form factor and low enough weight, and suitable pixel quality for consumer-scale AR/VR. (It gets even more challenging when you factor in wide FOV for VR, or see-through for AR.)" he said.
“Someone has to step up and change the hardware rules to bring display latency down. It’s eminently doable, and it will happen – the question is when, and by whom. It’s my hope that if the VR market takes off in the wake of the Oculus Rift’s launch, the day when display latency comes down will be near at hand.”
At Patients Know Best (PKB), we want everyone to be able to use the application easily, so we are committed to meeting Web Content Accessibility Guidelines (WCAG). These guidelines define how to make web content more accessible to people with disabilities. Accessibility involves a wide range of disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities.
Disabilities can be permanent, temporary or situational and all need to be considered when it comes to making sure PKB is accessible and inclusive. The following infographic from Microsoft is a helpful representation of what to consider when it comes to inclusive design.
There are many areas where PKB is already very accessible. Most of PKB can already be navigated using just a keyboard and users can listen to most of the website using a screen reader. We have always put focus on readability, avoiding the use of long sentences and jargon so we can help people better understand the system. The “language bar” on each page has 22 available languages and allows patients and their carers to navigate content in their chosen language, and quickly understand where to view or record their data.
Taking accessibility further
Over the last two years, we have been investing time in further education on accessibility for our engineering team and working to ensure accessibility best practices are always a part of our development process. We are moving toward making every page on our application meet the accessibility requirements so it can be navigated by a non-sighted user via the keyboard and a screen reader only. This includes adding meaningful labels to form inputs, highlighting errors and giving updates when changes happen on the page.
We’ve also been making the website text as simple as possible to understand with the help of our user research panel. Details of how to get involved in the User Research Panel are included at the end.
Our most recent updates
Colour blindness (colour vision deficiency, or CVD) affects approximately 1 in 12 men (8%) and 1 in 200 women in the world. To make sure our system is accessible for users with colour blindness, we have recently updated all colours used in PKB to address any colour contrasting issues. The changes applied also give prominence to certain key elements and help guide the eye of the user to the most important features.
We have been enhancing the experience for users of screen readers by adding meaningful labels, landmarks & alerts in the code. Labels are used to describe the purpose of a form field, so if a patient is using a screen reader, it will identify what a field is for.
Landmarks identify the main sections of the page. We have started adding consistent ways to identify these sections so that the users don’t have to read every element on the page to get to the section they are interested in. Alerts will notify screen readers in real-time about changes on the page, for example, if entries have been successfully added/removed. Alerts will also notify users about errors on the page and how to resolve these.
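As a rough sketch of these three techniques (labels, landmarks, and live-region alerts), the TypeScript snippet below applies the corresponding ARIA attributes to page elements. The element ids and function names are assumptions of the illustration, not PKB's actual markup or code.

```typescript
// Sketch of the three techniques mentioned above, expressed as plain DOM
// updates: a label on a form field, a landmark on a main section, and a
// live region that announces changes.

function labelField(input: HTMLInputElement, label: string): void {
  // Screen readers announce the purpose of the form field.
  input.setAttribute("aria-label", label);
}

function markMainLandmark(section: HTMLElement): void {
  // Landmarks let users jump to main sections without reading every element.
  section.setAttribute("role", "main");
}

function announce(message: string): void {
  // A live region notifies screen readers in real time about page changes.
  let region = document.getElementById("status-alerts");
  if (!region) {
    region = document.createElement("div");
    region.id = "status-alerts";
    region.setAttribute("role", "alert");
    region.setAttribute("aria-live", "assertive");
    document.body.appendChild(region);
  }
  region.textContent = message;
}

// Example usage on hypothetical page elements.
const dateInput = document.createElement("input");
labelField(dateInput, "Date of measurement");
const mainSection = document.createElement("section");
markMainLandmark(mainSection);
announce("Entry successfully added");
```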
What next?
We will continue to devote time and resources to address accessibility issues across the application.
Our next big accessibility update will be introducing a new navigation panel. Unlike traditional navigation patterns, the current navigation does not have the primary menu at the top of each page at a consistent location. Instead, the primary navigation sits in the middle of the home page only and is not accessible from the sub-pages. This causes a problem, especially for people who are using a screen-reader, as users would not be able to find the main menu options where they expect them to be. We completely rethought the navigation and built it from scratch. We have been validating the various concepts through research and carrying out user testing along the way.
We have regular user feedback sessions during the redesign of key areas to ensure that patients using our application can find everything they’re looking for with ease and speed. If you would like to take part in any user feedback sessions, or you have accessibility needs and would be interested in testing or providing feedback, please see how to get in touch on our User Research page.
https://blog.patientsknowbest.com/2022/04/08/committing-to-accessibility/
SUMMARY:
I am a senior business technology consultant with over 10 years in finance business operations and technology. I have a proven track record of identifying gaps, implementing successful processes and working with cross-functional teams. As a certified scrum master, I promote the agile mindset among teams through coaching and mentoring.
TECHNICAL SKILLS:
- Jira
- Rally
- MS SharePoint, Teams
- SQL Server
- Bugzilla
- MS Project, Visio
- Salesforce administration
- Office 365
- Business Intelligence
PROFESSIONAL SKILLS:
- Leadership
- Risk Management
- Excellent presentation skills
- Process Improvement
- Team Building
- Human Resources
- Project Management
- Process Development
- Marketing B2B and B2C
- Business Development
- Strategic Planning and Development
- Analysis: SWOT, GAP, RCA (Root Cause Analysis)
PROFESSIONAL EXPERIENCE:
Confidential
Scrum Master
Responsibilities:
- Facilitate sprint planning sessions with the delivery team, technical leadership and the product owner as the servant leader.
- Facilitate any meetings with other development teams and make sure the team understands and meets the requirements for the definition of done.
- Lead teams in planning sessions and guide the team in the right direction without necessarily delegating.
- Identify root causes and mitigate them by removing impediments.
- Conduct team backlog grooming sessions with the delivery team and the product owner
- Attend scrum of scrum meetings and make sure the team adheres to the correct scrum practice
- Continuously manage Jira to promote collaboration and adequate information within the team.
- Gather metrics from Jira and prepare status report for the agile center of excellence.
- Analyze and dissect system requirements and technical specifications to create estimates and plan for business initiatives.
- Test various KPI reporting tools and processes
- Assist business users in defining User Acceptance Testing, test cases and plans
- Run SQL queries to validate data against the reporting application
- Actively participate in walk - through, inspection, review and user group meetings for quality assurance
- Proficient in Scrum and Kanban
Confidential
Business Analyst/Jnr. Scrum Master
Responsibilities:
- Facilitated requirement gathering meetings and ensured that user stories and acceptance criterion were captured.
- Facilitated all scrum events and supported the development team to accomplish the sprint goals
- Supported the team in story estimation and release planning.
- Supported the BSA to elicit requirements from different stakeholders.
- Developed value stream mapping, workflows and process mapping using Microsoft Visio.
- Created story mapping and product roadmap to support project vision and objectives.
- Liaised between the IT department and the executive branch, sourced and implemented new business technology, researched technology solutions to business requirements.
- Evaluated business processes, anticipated requirements, uncovered areas of improvement and developed and implemented solutions.
- Led ongoing reviews of business processes and developed optimization strategies.
- Utilized IT data for business insights, analyzed business needs, ran A/B tests, and informed business decisions.
- Enhanced the quality of IT products and services, strategized business needs and planned for growth.
- Acted as an information source and communicator between business branches.
Confidential
Quality Assurance Manager
Responsibilities:
- Maintained leadership position in Quality Assurance and process improvement, provided and evaluated feedback from various teams.
- Root-caused and remediated Quality Assurance exceptions.
- Championed the use of process mapping in process improvement to identify, analyze and improve process efficiency and quality of work.
- Created, monitored and implemented a streamlined process; cleared a backlog of 6-8 months, bringing it current; recognized with Foreman.
- Improved processes and minimized errors, improving accuracy from 93% to 97.45%. Coached QA Associates, reducing testers’ error oversight rate and moving accuracy from 94% to 96%.
- Partnered with vendors to identify root cause on breakdown of letters going to customers.
- Achieved 100% compliance, increased customer satisfaction and decreased customer complaints.
- Researched and reported significant findings for regulatory violations and fraud exposure.
Confidential
Quality Assurance Specialist
Responsibilities:
- Reviewed and documented testing results, recommended process improvements.
- Created and maintained documentation of processes, reports, applications; increased efficiency and reduced turnaround time by one day, moving from 78% to 98% of targeted goal.
- Led project for development of Manual for Quality Assurance new hires.
- Worked closely with management to analyze business process and identify business control points.
Confidential
QA Specialist
Responsibilities:
- Documented issues, anomalies and prepared executive level reporting of monthly QA results.
- Utilized supporting materials to substantiate facts in documents provided by outside legal firms.
- Analyzed and broke down financial figures for accuracy and reviewed Confidential material in depth, ensuring content accuracy and increasing efficiency by 95%.
- Organized vetting sessions to discuss findings, root cause and remediation plans, and implemented the plan of action.
Confidential
Claims Specialist
Responsibilities:
- Established controls to ensure timely follow-up on claims that were not settled, denied or approved, to maximize reduction in credit losses and minimize denial from the investors.
- Achieved 100% execution, by initiating procedures for processing and monitoring all Confidential insurance claims, contact information and website access.
- Created a loss ID template, increasing accuracy to 95%.
- Achieved 100% pass on Quality Control audits for repurchases of Fannie Mae and Freddie Mac loans as required by Legal Department.
https://www.hireitpeople.com/resume-database/81-project-manager-resumes/188623-scrum-master-resume-282
This is an advanced-level course on communication theory, which mainly introduces the scientific development and theoretical origins of the field, its important theories and related research; current issues for discussion will be chosen according to students' interests during class. The course will be conducted in a lecture format, supplemented by after-class discussions and student presentations.
Advanced Research Methods in Communication (4 credits)
The course covers quantitative, qualitative, and critical approaches; however, emphasis is placed on quantitative methods. Through this course, students gain an in-depth understanding of empirical research methods; they will also have opportunities to conduct empirical research and give critical evaluations.
Media, Communication and Society (4 credits)
This course mainly provides students with the important concepts in theories of mass communication, including the effects, uses, and functions associated with the goods and services of mass media. It also introduces how mass media, in combination with other institutions, affect the organization, design and understanding of messages, and how they influence political thought, cultural beliefs and economic behavior.
Integrated Communication Strategies (4 credits)
This course provides students with information and insights about strategic communication: how messages are created and framed, why we respond to messages the way we do, and how to employ communications strategies to achieve their goals.
Cultural Theory in Communication Studies (4 credits)
This course introduces students to cultural studies and its overlapping relationship with the development of the field of communications. Key readings of Marxist approaches towards cultural studies, and studies of British and US cultures are covered in the first half of the course. In the second half of the course, it will move forward to the discussion of the concept “cultural research traditions”. Topics include cultural and representational politics, issues of identity, resistance, hegemony, and ideology.
Elective Courses
Global Marketing Communication: Special Topic 1 (4 credits)
The main focus of the course is on both the theories and strategies of IMC. Students who will become decision makers in almost any company concerned with consumer/customer communications (advertising, public relations, promotions, Internet marketing, marketing, media and client organizations) will not only learn about message and touch-point integration, paying attention to the effects and measurable results of message dissemination; they will also learn how to put the theories into practice. Today's communication experts have greater responsibility in project planning and strategy setting.
Global Marketing Communication: Special Topic 2 (4 credits)
This course will introduce various emerging media (such as blogs, e-mail and podcasting) and their effects on the market, as well as the use of these media in optimizing the IMC process. Students will also be introduced to, and discuss, the new media issues that arise relating to creativity and ethnicity.
Media, Culture and Communication: Special Topic 1 (4 credits)
This course introduces cultural approaches to media studies, with a focus on major theories and critical analysis of media and popular culture. Topics covered include: cultural theory; aesthetics and taste; representation and ideology; consumer culture; media, culture and identity; gender, race, class in media; fandom and subcultures.
Media, Culture and Communication: Special Topic 2 (4 credits)
This course will provide students with an overview of recent research on how "new media," such as the Internet and mobile phones, influence community social relationships and public and private spaces; this grounding is also essential for evaluating empirical research on social networking and sociological issues.
Strategic Communication: Special Topic 1 (4 credits)
This course guides students to think critically, express their reasoning clearly in both written and oral communications, and understand the role of strategic communications in the historical development of the field of communications. Through case studies and readings, students are exposed to ethical issues that arise in strategic communications and are required to analyze the ethical dilemmas they will encounter in the working world.
Strategic Communication: Special Topic 2 (4 credits)
This course focuses on issues relevant to the planning, development and execution of crisis communications programs. Students will be exposed to the issues faced by public relations practitioners in businesses and organizations in a crisis situation, and will master practical skills for achieving successful communication during a crisis. During the course, the dynamic process and implications of various public relations strategies will be discussed.
Academic Activities (1 credit)
Students are required to participate in at least ten academic activities that are recognized by the Faculty of Humanities and Arts, such as the famous scholars’ seminars held on campus. Students are required to complete and hand in a one-page report for each activity.
Thesis (12 credits)
Students are required to submit and defend a Doctoral Thesis. Generally, the length should be no fewer than 100,000 words, excluding footnotes, endnotes and references. Students should follow the University’s format requirements for the thesis.
https://www.must.edu.mo/en/fa/programme/doctoral/doctoral-degree-programme/course-description
This issue of Blood includes 2 review articles that summarize the recent revision of the World Health Organization (WHO) classification of tumors of hematopoietic and lymphoid tissues:
Daniel A. Arber, Attilio Orazi, Robert Hasserjian, Jürgen Thiele, Michael J. Borowitz, Michelle M. Le Beau, Clara D. Bloomfield, Mario Cazzola, and James W. Vardiman, “The 2016 revision to the World Health Organization classification of myeloid neoplasms and acute leukemia”
Steven H. Swerdlow, Elias Campo, Stefano A. Pileri, Nancy Lee Harris, Harald Stein, Reiner Siebert, Ranjana Advani, Michele Ghielmini, Gilles A. Salles, Andrew D. Zelenetz, and Elaine S. Jaffe, “The 2016 revision of the World Health Organization classification of lymphoid neoplasms”
The “blue book” monograph
The “WHO Classification of Tumours of Haematopoietic and Lymphoid Tissues”1 is one of the “blue book” monographs published by the International Agency for Research on Cancer (IARC; Lyon, France).
Eight years have elapsed since the current fourth edition of the monograph was published in 2008, and remarkable progress has been made in the field in this time period. Despite this, a truly new fifth edition cannot be published for the time being, as there are still other volumes pending in the fourth edition of the WHO tumor monograph series. Therefore, the Editors of the “WHO Classification of Tumours of Haematopoietic and Lymphoid Tissues,”1 with the support of the IARC and the WHO, decided to publish an updated revision of the fourth edition that would incorporate new data from the past 8 years which have important diagnostic, prognostic, and therapeutic implications. Although some provisional entities have been promoted to definite entities and a few provisional entities have been added to the revised WHO classification, no new definite entities were permitted according to IARC guidelines.
A multiparameter consensus classification
As underlined by the Editors of the fourth edition of the monograph, “classification is the language of medicine: diseases must be described, defined and named before they can be diagnosed, treated and studied. A consensus on definitions and terminology is essential for both clinical practice and investigations.”2
The main steps of the classification process are illustrated in Figure 1. In the introduction to the 2008 edition, Harris et al2 have clearly stated that the WHO classification is based on the principles that were adopted by the International Lymphoma Study Group for preparing the revised European-American classification of lymphoid neoplasms (REAL classification).3 In brief, the aim was to define “real” diseases that can be reliably diagnosed using the proposed criteria.
Three aspects have characterized the WHO classification so far2 :
a multiparameter approach to define diseases has been adopted that uses all available information, that is, clinical features, morphology, immunophenotype, and genetic data;
the classification must necessarily rely on building a consensus among as many experts as possible on the definition and nomenclature of hematologic malignancies. In turn, this implies that compromise is essential in order to arrive at a consensus;
while the pathologists take the primary responsibility for developing a classification, involvement of clinicians and geneticists is crucial to ensure its usefulness and acceptance both in daily practice and in basic/clinical investigations.
The 2014 Chicago meeting
On March 31st and April 1st, 2014, 2 Clinical Advisory Committees (CAC) composed of pathologists, hematologists, oncologists, and geneticists from around the world convened in Chicago, IL, to propose revisions to the fourth edition of the classification that had been published in 2008.1 One CAC examined myeloid neoplasms; the other examined lymphoid neoplasms.
The purpose of the CAC meetings was to consider basic and clinical scientific data that had accumulated in the previous 6 years and to identify disease entities that should be modified, eliminated, or added in order to keep the classification useful for both clinical practice and clinical investigations. In preparation for the Chicago meeting, pathologists and CAC co-chairs identified proposals and issues of interest to be discussed. The meeting itself consisted of a series of proposals for modifications to the existing classification, offered by either pathologists, clinicians, or clinical scientists, followed by 1 or more short formal comments from CAC members, and then by an open discussion of the issue until consensus was achieved.
There were ongoing discussions following the CAC meetings that led to refinement of some of the provisional conclusions and to better definition of the most controversial topics.
Toward a closer integration of morphology and genetics
Facing a patient with a suspected hematologic malignancy, there is no question that morphology represents and will continue to represent a fundamental step in the diagnostic process. I belong to a school of hematology in which the hematologist is expected to personally examine the patient’s peripheral blood smear and bone marrow aspirate, and to actively discuss pathology reports. However, although being essential for the diagnostic assessment, morphology is unlikely to provide major breakthroughs in our understanding of hematologic malignancies, which are inevitably associated with advances in molecular genetics. These latter will in turn generate new diagnostic approaches, improved prognostic/predictive models, and hopefully innovative therapeutic approaches according to the principles of precision medicine.4
The different levels of integration of genetic data into a clinicopathological classification of hematologic malignancies are schematically represented in Figure 2. Reducing a complex subject to a scheme inevitably involves oversimplification, and the reader should therefore consider that Figure 2 is just aimed to illustrate a few fundamental concepts; in this scheme, moving from left to right means closer integration of morphology and genetics. The revised WHO classification includes remarkable examples of closer integration of genetic data into the preexisting clinicopathological classification.
With respect to myeloid neoplasm, Arber et al emphasize that many novel molecular findings with diagnostic and/or prognostic importance have been incorporated into the 2016 revision. These include the somatic mutations of CALR, the gene encoding calreticulin, whose detection has considerably improved our diagnostic approach to essential thrombocythemia and primary myelofibrosis, though bone marrow biopsy continues to be of fundamental importance in this process.5,6 Ad hoc studies are now needed to establish whether in myeloproliferative neoplasms, driver mutations in JAK2, CALR, or MPL should be used just as a diagnostic criterion, or may also be used as prognostic/predictive factors or eventually disease-defining genetic lesions, according to the scheme in Figure 2.
Another molecular finding with diagnostic importance that has been incorporated into the 2016 revision of myeloid neoplasms is the CSF3R mutation, which is closely associated with the rare myeloproliferative disorder known as chronic neutrophilic leukemia.7 This condition can now be more easily separated from the myelodysplastic/myeloproliferative disorder known as atypical chronic myeloid leukemia, which is preferentially associated with other mutant genes, namely SETBP1 and ETNK1.8,9 A major change to the 2016 revision of myeloid malignancies is also the addition of a section on myeloid neoplasms with germ line predisposition, including those with germ line mutation in CEBPA, DDX41, RUNX1, ANKRD26, ETV6, or GATA2.
Not always has the explosion of molecular data translated into major revisions of the WHO classification of myeloid neoplasms. For instance, this is the case with the myelodysplastic syndromes (MDS), whose genetic basis is complex with several potential mutant genes.10,11 Although somatic mutations can be detected in up to 90% of patients with MDS, the same mutations can be present in elderly people with age-related clonal hematopoiesis.12 Further study is therefore required in this field to define the clinical significance of specific mutations or mutation combinations. At present, the best genotype/phenotype relationship is the association of the SF3B1 mutation with refractory anemia with ring sideroblasts.13 In the revised classification of MDS, although at least 15% ring sideroblasts are still required in cases lacking a demonstrable SF3B1 mutation, a diagnosis of refractory anemia with ring sideroblasts can be made if ring sideroblasts comprise as few as 5% of nucleated erythroid cells but an SF3B1 mutation is detected. Therefore, the SF3B1 mutation has become a novel diagnostic criterion.
With respect to acute lymphoblastic leukemia/lymphoma, 2 new provisional entities with recurrent genetic abnormalities have been incorporated into the revised classification: (1) B-cell acute lymphoblastic leukemia with translocations involving receptor tyrosine kinases or cytokine receptors (BCR-ABL1–like acute lymphoblastic leukemia)14,15 and (2) B-cell acute lymphoblastic leukemia with intrachromosomal amplification of chromosome 21 (iAMP21).16 In a recent study, BCR-ABL1–like acute lymphoblastic leukemia was found to be characterized by a limited number of activated signaling pathways that are targetable with approved tyrosine kinase inhibitors.17
In their review article on the revision of the WHO classification of lymphoid neoplasms, Swerdlow et al have included in 1 table the highlights of changes, many of which derive from the explosion of new pathological and genetic data concerning the “small B-cell” lymphomas.
Hairy cell leukemia is the paradigmatic example of a major clinical impact of the identification of the genetic basis of disease. In the 2008 WHO monograph, the chapter on hairy cell leukemia reported that “no cytogenetic abnormality is specific for hairy cell leukemia.”18 The identification of the unique BRAF V600E mutation has now provided a remarkable diagnostic tool, as this genetic lesion is found in almost all patients with hairy cell leukemia.19 At the same time, the fact that it can be detected in occasional patients with splenic marginal zone lymphoma underlines the importance of a multiparameter approach to diagnosis. The identification of the unique BRAF V600E mutation also emphasizes the importance of defining the genetic basis of disease for developing innovative precision medicine strategies. In fact, 2 recent clinical trials have shown that the oral BRAF inhibitor vemurafenib is safe and effective in heavily pretreated patients with relapsed or refractory hairy cell leukemia.20
Another remarkable example of genetic lesion of diagnostic importance is the MYD88 L265P mutation,21 which is detectable in the vast majority of patients with Waldenström macroglobulinemia, possibly in all patients using sensitive approaches.22 Combined with morphology, the detection of MYD88 L265P has now become an important diagnostic criterion for lymphoplasmacytic lymphoma, though the mutation is not specific for this lymphoid neoplasm.22
Similarly to what is found in the myelodysplastic syndromes, the situation is more complex with chronic lymphocytic leukemia/small lymphocytic lymphoma. Although there are no recognized disease-defining mutations in this lymphoid neoplasm, molecular investigations have shown a large number of mutations that occur with a relatively low frequency. Some of these mutations, namely those in TP53, NOTCH1, and SF3B1,23,24 have adverse prognostic implications, but further study is needed before they can be integrated into an updated genetic risk profile.
Additional changes in the revised WHO classification of lymphoid neoplasms include a number of provisional entities or diagnostic categories based on their molecular/cytogenetic findings, such as: large B-cell lymphoma with IRF4 rearrangement,25 predominantly diffuse follicular lymphoma with 1p36 deletion,26 Burkitt-like lymphoma with 11q aberration,27 and high-grade B-cell lymphoma with MYC and BCL2 and/or BCL6 rearrangements.28
The value of combining clinical, pathological, and genetic data for defining real diseases
As shown in Figure 2, one of the best examples of disease-defining genetic lesion is the 5q deletion responsible for the MDS with isolated del(5q). The process that led to defining this nosologic entity illustrates the importance of combining clinical, pathological, and genetic data.
The MDS with isolated del(5q) was first defined as a distinct hematologic disorder in 1974/1975 by Van Den Berghe, Sokal et al31,32 with a classical multiparameter approach. In fact, these investigators used a combination of clinical features (macrocytic anemia with slight leukopenia but normal or elevated platelet count), morphologic abnormalities (megakaryocytes with nonlobated and hypolobated nuclei), and cytogenetic data (acquired 5q deletion). In 2006, a clinical trial showed that lenalidomide not only corrects anemia but can also reverse the cytogenetic abnormality in this condition.33 A subsequent study showed that a portion of patients carry a subclonal TP53 mutation, and that this predicts poor response to lenalidomide and disease progression.34 More recent studies have revealed that haploinsufficiency of several genes mapping on the deleted chromosomal region represents the molecular mechanism of disease, and have in particular shown the crucial role of the CSNK1A1 gene both in the biology of the disease and its response to lenalidomide.35,36 The prognostic/predictive significance of the TP53 mutation has now been included into the revised WHO classification, and mutation analysis of TP53 is recommended to help identify an adverse prognostic subgroup in this generally favorable-prognosis MDS.
In conclusion, the current revision is a much needed and significant update of the 2008 WHO classification, and the 2 reviews being published in this issue of Blood represent the efforts of pathologists working closely with clinicians and geneticists. In the next few years, we should continue this collaboration to further improve the integration of clinical features, morphology, and genetics.
INTRODUCTION
============
Sleep is one of the basic needs of man. Sleep patterns change over a lifetime[@r1] so that in the elderly, changes in circadian rhythm and consolidation of sleep lead to sleep disorders and poor sleep quality[@r2]. According to the World Health Organization's (WHO) estimate, the world's elderly population (aged 65 years or older) will be 1.5 billion (16% of the world's population) by 2050. About 71% of this population will be in less developed countries[@r3].
Changes in the quality of sleep and in circadian rhythm with age cause sleep disorders in elderly people. Chronic insomnia, which reflects poor sleep quality, is the most common sleep problem in the elderly[@r4]. Sleep problems in the elderly are due to primary sleep disorders, other mental disorders, general clinical disorders, and social and environmental factors[@r1]^,^[@r2]. Poor sleep quality disturbs emotions, thoughts, and motivation and increases the risk of falling[@r5], depression, dementia[@r6], ischemic exacerbation, and heart attack[@r7]. Therefore, assessing the sleep quality of the elderly is a prerequisite for improving it[@r2]. The Pittsburgh Sleep Quality Index (PSQI) is a well-known and standard instrument for assessing sleep quality, which has been used in many studies around the world[@r8]^,^[@r9]. This tool includes sub-scales for assessing sleep quality[@r10]. It has been used in various studies in Iran, and its reliability and validity have been confirmed for the Iranian population[@r11]^-^[@r13] and for other countries of the world[@r14]^-^[@r17].
With aging, people experience a variety of physical and mental changes, and changes in the quantity and quality of sleep are among them[@r18]. The sleep situation of elderly people differs from that of younger people: they usually have no bed partner, and their sleep is more likely to be controlled by family members[@r1]. Degradation of physical health and of vital organs such as the cardiovascular and pulmonary systems, as well as the prevalence of digestive problems, causes changes in the quality and quantity of sleep in the elderly[@r18]. With regard to the elderly's sleep situation, Landry argued that the PSQI may not be the best tool for assessing sleep quality in the elderly and that the validity of this tool in this age group should be verified[@r19]. Therefore, the usability of this questionnaire for assessing the sleep of the elderly should be verified beforehand. This research is an attempt to standardize and determine the validity and reliability of the PSQI in the elderly population of Iran.
MATERIAL AND METHODS
====================
This is a methodological study with the sample size determined based on cluster sampling. The subjects were selected from the elderly population of Kermanshah province, based on the statistics reported by the Welfare Organization of this province in 1396 (Iranian calendar). For this purpose, eight cities were randomly selected out of the 14 cities of Kermanshah province. In the case of Kermanshah City, according to the health center's statistical data, 50 clusters were randomly selected from different parts of the city. The inclusion criteria were age 60 years or older and willingness to participate in the study; the exclusion criteria were substance abuse or addiction, use of sleeping drugs, and a history of psychiatric disorders.
The questionnaires were mostly completed by the elderly themselves. The author helped in cases where participants needed assistance because of eyesight impairment or illiteracy. Completing the questionnaires took 20 to 30 minutes.
In total, 800 questionnaires were completed by the study population. A number of questionnaires were excluded, and 598 were finally evaluated.
To assess the concurrent validity of the PSQI, the Sleep Health Index, the Epworth Sleepiness Scale, the Insomnia Severity Index, the Global Sleep Assessment Questionnaire (GSAQ), and the Berlin Questionnaire were also administered, and the correlations between the questionnaires were calculated.
To test the test-retest reliability of the PSQI, 10% of participants (60 people) completed the PSQI again after a 4-6 week interval.
Pittsburgh Sleep Quality Index
------------------------------
The questionnaire is designed to measure sleep quality and to identify people with and without sleep problems[@r14]. The scale includes seven sub-scales: Subjective Sleep Quality (SSQ), Sleep Latency (SL), Sleep Duration (SDu), Habitual Sleep Efficiency (HSE), Sleep Disturbances (SD), Use of Sleeping Medication (USM), and Daytime Dysfunction (DD). Responses are graded from 0 to 3, and the range of global scores is 0 to 21. A score above six indicates poor sleep quality. The validity and reliability of this questionnaire have been investigated in Iran (α = 0.83 and correlation coefficient = 0.88)[@r6].
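As a concrete illustration of the scoring just described, the short sketch below sums the seven component scores into the global score and applies the cut-off used here (a global score above six indicates poor sleep quality). The function name and the example values are hypothetical; only the 0-3 component range and the cut-off follow the text.

```python
# Minimal sketch of the PSQI global-score logic described above.
# Component abbreviations follow the list in the text; example values are made up.

COMPONENTS = ["SSQ", "SL", "SDu", "HSE", "SD", "USM", "DD"]

def psqi_global_score(component_scores):
    """Sum the seven component scores (each 0-3) into a global score (0-21)."""
    if set(component_scores) != set(COMPONENTS):
        raise ValueError("Expected exactly the seven PSQI components")
    for name, score in component_scores.items():
        if score not in (0, 1, 2, 3):
            raise ValueError(f"Component {name} must be scored 0-3, got {score}")
    total = sum(component_scores.values())
    quality = "poor" if total > 6 else "good"
    return total, quality

total, quality = psqi_global_score(
    {"SSQ": 1, "SL": 2, "SDu": 1, "HSE": 0, "SD": 2, "USM": 0, "DD": 1}
)
print(total, quality)  # 7 poor
```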
Sleep Health Index
------------------
The Sleep Health Index (SHI) is a 13-item self-report index used to assess environmental and behavioral variables that can cause low-quality sleep. Each question is scored on a five-point scale (always, often, sometimes, rarely, and never). The Cronbach's alpha is 0.66 and the test-retest value is 0.71 (*p*<0.01)[@r20]. Moreover, there is a positive correlation between this index and the Epworth Sleepiness Scale (0.24) (*p*<0.01). Chehri et al.[@r11] reported a Cronbach's alpha of 0.83 for this tool.
Epworth Sleepiness Scale (ESS)
------------------------------
This scale is designed to evaluate daytime sleepiness. Eight different situations are presented, and respondents indicate the likelihood of dozing off or falling asleep in each of them. Each item is scored from zero to three: a score of zero means that dozing off or falling asleep in that situation never happens, and a score of three means that there is a high probability of napping or falling asleep in that situation. A total score of ten or more indicates excessive daytime sleepiness[@r14].
Insomnia Severity index
-----------------------
The Insomnia Severity Index (ISI) is a brief self-report instrument measuring patients' perception of the severity of insomnia. The ISI comprises seven items assessing the perceived severity of difficulty initiating sleep, difficulty staying asleep, early morning awakenings, satisfaction with the current sleep pattern, interference with daily functioning, noticeability of impairment attributed to the sleep problem, and the degree of distress or concern caused by the sleep problem. The questions are designed on a five-point Likert scale (0 = never to 4 = very high). The overall score ranges from zero to 28, with higher scores indicating more severe insomnia. The ISI is a sensitive indicator for measuring the efficacy of insomnia treatment and has been used as a valid and reliable tool in several studies.
The concurrent validity of this tool at the time of its development was reported as r=0.65[@r19].
Bastien et al.[@r21] reported an internal consistency of 0.74, a concurrent validity of 0.65, and item-total correlations of 0.38-0.69. Bastien et al.[@r21] also reported an internal consistency (Cronbach's alpha) of 0.72. This questionnaire was evaluated by Sadeghniiat-Haghighi et al.[@r22] in Iran and its psychometric properties were determined. The researchers showed that the Persian version of the Insomnia Severity Index had acceptable internal consistency, with a Cronbach's alpha of 0.78. They also showed that the Persian version had sufficient discriminative power to distinguish patients from healthy people.
Global Sleep Assessment Questionnaire
-------------------------------------
The GSAQ is an 11-item tool that measures sleep behaviors on a three-point scale, from behaviors that never occur (score 0) to behaviors that always occur (score 2) (reference). The GSAQ score is the total of the 11 responses; higher scores (behaviors that always, or sometimes, occur) represent a higher risk of experiencing sleep disturbance. Test-retest reliability of this questionnaire is in the range of 0.51 to 0.92. Regarding concurrent validity, when compared with the evaluation of a clinical expert, this instrument shows desirable validity in detecting sleep disturbances[@r23].
Berlin questionnaire
--------------------
The Berlin Questionnaire includes 10 items organized into three categories: snoring (questions 1 to 5), daytime sleepiness (questions 6 to 9), and blood pressure and body mass index (question 10). The first and second categories are each scored positive if the patient receives two or more points in that category; the third category covers blood pressure and body mass index. Based on the questionnaire, patients are divided into two groups, at high risk and at low risk of respiratory interruptions or sleep apnea: if the patient scores positive in two or more categories, the patient is considered to be at high risk. The Cronbach's alpha reliability of the BQ categories in the Amra et al.[@r24] study was 0.70 and 0.50 for category 1 and category 2, respectively.
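To make the decision rule concrete, the sketch below reproduces only the category-level logic described above (a category counts as positive at two or more points, and two or more positive categories mean high risk). Item-level scoring is not reproduced, and the function name and arguments are illustrative.

```python
# Illustrative sketch of the Berlin Questionnaire category logic described above.

def berlin_risk(cat1_points, cat2_points, cat3_positive):
    """Classify a respondent as high or low risk for respiratory interruptions/sleep apnea."""
    positives = 0
    if cat1_points >= 2:    # snoring items (questions 1-5)
        positives += 1
    if cat2_points >= 2:    # daytime sleepiness items (questions 6-9)
        positives += 1
    if cat3_positive:       # blood pressure / body mass index (question 10)
        positives += 1
    return "high risk" if positives >= 2 else "low risk"

print(berlin_risk(cat1_points=3, cat2_points=1, cat3_positive=True))  # high risk
```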
Data analysis
-------------
Descriptive statistics such as frequency, percentage, mean, and standard deviation were used to describe the data. Cronbach's alpha was used to determine the internal consistency and reliability of the PSQI, and the Pearson correlation coefficient was used to calculate the correlations between the variables and to assess the validity of the PSQI against the other tools. To investigate the PSQI three-factor structure (perceived sleep quality, sleep efficiency, and daytime sleep disorder), confirmatory factor analysis with maximum likelihood estimation was performed in AMOS software (version 23).
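For readers who want to reproduce the descriptive and reliability computations with open-source tools rather than the commercial packages used here, a minimal sketch is shown below. The data file and column names are hypothetical, and the confirmatory factor analysis itself (three-factor model, maximum likelihood, run in AMOS) is not reproduced.

```python
# Sketch of Cronbach's alpha and a rank correlation, assuming a hypothetical
# CSV with one row per participant and one column per PSQI component.
import pandas as pd
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for a DataFrame of item columns (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("psqi_elderly.csv")  # hypothetical data file
components = ["SSQ", "SL", "SDu", "HSE", "SD", "USM", "DD"]
print("alpha =", round(cronbach_alpha(df[components]), 2))

# Spearman correlation between the PSQI global score and another scale (e.g., SHI)
rho, p = spearmanr(df["PSQI_total"], df["SHI_total"])
print(f"rho = {rho:.3f}, p = {p:.4f}")
```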
Findings
--------
The mean age of the participants was 68.33 years (standard deviation 8.75), with an age range of 60 to 85 years. Among the research units, 53.3% were female and 46.7% were male. In addition, 80.6% were married and 44.8% did not have a high school diploma ([Table 1](#t1){ref-type="table"}). The means and standard deviations of the PSQI sub-scales are listed in [Table 2](#t2){ref-type="table"}.
######
Demographic characteristics of the research units.

| Characteristic | Category | N (%) |
|----------------|----------------|------------|
| Sex | Female | 319 (53.3) |
| | Male | 279 (46.7) |
| Marital status | Single | 9 (1.5) |
| | Married | 482 (80.6) |
| | Widowed | 107 (17.9) |
| Education | Illiterate | 203 (33.9) |
| | Below diploma | 268 (44.8) |
| | Diploma | 85 (14.2) |
| | Academic | 42 (7.0) |
| Job | Housekeeper | 201 (33.6) |
| | Retired | 107 (17.9) |
| | Other | 290 (48.5) |
######
Mean and standard deviation of the PSQI sub-scales in the research units.

| Sub-scale | Mean | SD |
|-----------|------|-------|
| SSQ | 1.31 | 0.823 |
| SL | 1.57 | 0.963 |
| SDu | 0.92 | 1.01 |
| HSE | 0.70 | 1.09 |
| SD | 1.50 | 0.61 |
| USM | 0.59 | 1.06 |
| DD | 1.08 | 0.86 |
| TSQ | 7.69 | 4.06 |

Abbreviations: SSQ = Subjective Sleep Quality, SL = Sleep Latency, SDu = Sleep Duration, HSE = Habitual Sleep Efficiency, SD = Sleep Disturbances, USM = Use of Sleeping Medication, DD = Daytime Dysfunction, TSQ = Total Sleep Quality.
To verify the reliability of the PSQI, test-retest and internal consistency methods were used. Using Cronbach's alpha, the reliability coefficients of the subscales of the questionnaire were in the range 0.73-0.82, so the subscales have the required reliability for measurement. The overall scale had a Cronbach's alpha of 0.81, which indicates appropriate reliability of the test. Based on the test-retest method, the PSQI reliability was 0.87 (*p*-value<0.001).
The Spearman correlation coefficient was used to measure the internal reliability of the subscales. [Table 3](#t3){ref-type="table"} shows the correlations between the Pittsburgh Sleep Quality dimensions and its components; the correlations are significant at the 0.01 level. To evaluate the test-retest reliability of the sleep quality instrument, 10% of the sample (60 people) completed the Sleep Quality Index again after 4-7 weeks. The Spearman correlation coefficient was 0.89, i.e., acceptable reliability, and the reliability of the dimensions varied from 0.76 to 0.84, which is desirable. The correlation of the PSQI with the sleep health questionnaire was used to assess criterion (concurrent) validity; Spearman correlation coefficients between the sleep quality and sleep health subscales were therefore calculated. The results showed a direct and significant correlation between the Sleep Quality Index and the Sleep Health Index (r=0.363, *p*-value=0.001). The correlations and significance levels of the other subscales are presented in [Table 4](#t4){ref-type="table"}.
######
PSQI correlation coefficient matrix.

| | SSQ | SL | SDu | HSE | SD | USM | DD |
|-----|---------|---------|---------|---------|---------|---------|---------|
| SSQ | 1 | | | | | | |
| SL | 0.471** | 1 | | | | | |
| SDu | 0.339** | 0.362** | 1 | | | | |
| HSE | 0.308** | 0.382** | 0.579** | 1 | | | |
| SD | 0.437** | 0.292** | 0.16** | 0.162** | 1 | | |
| USM | 0.295** | 0.213** | 0.136** | 0.131** | 0.306** | 1 | |
| DD | 0.388** | 0.262** | 0.157** | 0.48** | 0.458** | 0.171** | 1 |
| TSQ | 0.714** | 0.676** | 0.613** | 0.607** | 0.598** | 0.484** | 0.544** |

Abbreviations: SSQ = Subjective Sleep Quality, SL = Sleep Latency, SDu = Sleep Duration, HSE = Habitual Sleep Efficiency, SD = Sleep Disturbances, USM = Use of Sleeping Medication, DD = Daytime Dysfunction, TSQ = Total Sleep Quality.
\* significant at p<0.05; \*\* significant at p<0.01.
######
Correlation between the sub-scales of the PSQI and sleep health.

| | Sleep-wake cycle behaviors | Bedroom factors | Effective sleep behaviors | Whole score |
|-----|---------|---------|---------|---------|
| SSQ | 0.174** | 0.144** | 0.384** | 0.345** |
| SL | 0.181** | 0.083* | 0.267** | 0.259** |
| SDu | 0.043 | 0.04 | 0.169** | 0.112** |
| HSE | 0.117** | 0.043 | 0.021 | 0.059 |
| SD | 0.209** | 0.139** | 0.389** | 0.362** |
| USM | 0.186** | 0.15** | 0.2** | 0.258** |
| DD | 0.165** | 0.112** | 0.342** | 0.294** |
| TSQ | 0.229** | 0.138** | 0.38** | 0.363** |

Abbreviations: SSQ = Subjective Sleep Quality, SL = Sleep Latency, SDu = Sleep Duration, HSE = Habitual Sleep Efficiency, SD = Sleep Disturbances, USM = Use of Sleeping Medication, DD = Daytime Dysfunction, TSQ = Total Sleep Quality.
\* significant at p<0.05; \*\* significant at p<0.01.
PSQI had a direct and significant correlation with the total score of insomnia severity index (r=0.625, *p*=0.001), Epworth sleepiness index (r=0.139, *p*=0.001), Berlin Index (r=0.336, *p*=0.001), and the Global Sleep Assessment Index (r=0.634, *p*=0.001) (*p*\<0.01).
[Figure 1](#f1){ref-type="fig"} shows the relationship between the PSQI scale and subscales, in which perceived sleep quality, sleep efficacy, and daytime sleep disorders factors are correlated with sleep quality.
Figure 1. Three-factor model of sleep quality and its subscales in the elderly.
As listed in [Table 5](#t5){ref-type="table"}, the chi-square is 2.66 with 8 degrees of freedom. The chi-square is the most important goodness-of-fit index and measures the difference between the observed and estimated matrices. Because this statistic is very sensitive to sample size, it is divided by the degrees of freedom, and goodness of fit is confirmed if the result is less than five.
######
Elderly sleep quality model fitting indices.

| Model fit index | Rate | Criterion | Interpretation |
|-----------------|-------|-----------|-----------------|
| *X^2^* | 2.66 | <5 | Optimal fitting |
| df | 8 | - | Optimal fitting |
| CFI | 0.98 | >0.9 | Optimal fitting |
| NFI | 0.97 | >0.9 | Optimal fitting |
| GFI | 0.99 | >0.9 | Optimal fitting |
| TLI | 0.96 | >0.9 | Optimal fitting |
| RMSEA | 0.053 | <0.08 | Optimal fitting |
| R^2^ | 0.99 | Near to 1 | Optimal fitting |
Another indicator is the Goodness of Fit Index (GFI), which shows whether the fit of the model is acceptable; values above 0.9 are considered acceptable, and in the proposed model GFI=0.99 indicates a good fit. The Root Mean Square Error of Approximation (RMSEA) was 0.053; since it is less than 0.08, it is acceptable and the research model is supported.
The Comparative Fit Index (CFI) is a comparative index: the closer it is to one, the more acceptable the model. For the Normed Fit Index (NFI), or Bentler-Bonett index, a minimum value of 0.9 represents a good fit of the model.
For the Tucker-Lewis Index (TLI), or non-normed fit index, values range between 0 and 1, and values of about 0.95 or higher indicate a good fit.
For the RMSEA, values less than 0.05 are considered a close fit and values above 0.1 indicate a weak model. Here the RMSEA is 0.053, with a 90% confidence interval from a lower limit of 0.01 to an upper limit of 0.88, i.e., goodness of fit is supported.
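The cut-offs discussed above can be collected into a single check. The sketch below applies the thresholds used in this paper (chi-square/df < 5, CFI/NFI/GFI/TLI > 0.9, RMSEA < 0.08) to the values reported in Table 5; it is an illustration of the interpretation rules, not part of the original analysis.

```python
# Apply the goodness-of-fit cut-offs described in the text to a set of indices.

def evaluate_fit(chi2, df, cfi, nfi, gfi, tli, rmsea):
    return {
        "chi2/df < 5": (chi2 / df) < 5,
        "CFI > 0.9": cfi > 0.9,
        "NFI > 0.9": nfi > 0.9,
        "GFI > 0.9": gfi > 0.9,
        "TLI > 0.9": tli > 0.9,
        "RMSEA < 0.08": rmsea < 0.08,
    }

print(evaluate_fit(chi2=2.66, df=8, cfi=0.98, nfi=0.97, gfi=0.99, tli=0.96, rmsea=0.053))
# Every criterion evaluates to True, i.e. the three-factor model fits well.
```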
DISCUSSION
==========
The PSQI is used to diagnose sleep disorders and assess the quality of sleep. Verifying the validity and reliability of this questionnaire for the target community is necessary to ensure the accuracy of the information obtained. The validity and reliability of the Persian version of the PSQI in the elderly population were determined in this study, and the results showed that the PSQI has acceptable validity and reliability in Iranian elderly people.
One of the objectives of this study was to determine the validity of the PSQI factor structure using confirmatory factor analysis. The results showed that a three-factor model had a good fit.
This three-factor model has also been used in other studies[@r14]^,^[@r15]^,^[@r25]^,^[@r26]. The three factors were examined in light of previous studies and confirmatory factor analysis, and a good fit was ultimately obtained for the three-factor model.
Different models were examined based on a literature review, and a three-factor model with 19 items was approved. The indices obtained for this model were compared with the values reported in previous studies. Cole et al.[@r15] reported a three-factor model and supported its goodness of fit (GFI=0.95; RMSEA=0.06; CFI=0.9). Burkhalter et al.[@r26] also confirmed a three-factor model with 19 items using confirmatory factor analysis. The reported goodness-of-fit indices in the present and the mentioned study are as follows: CFI=0.99; RMSEA=0.06; df=8; χ²=11.85.
Becker and Jesus[@r25] studied the compatibility of the three-factor model of the sleep scale using confirmatory factor analysis. Several models were fitted, and the final three-factor model was approved after deleting the "use of sleeping drugs" item. The reported goodness-of-fit indices in the present and the mentioned study are as follows: CFI=0.98; GFI=0.99; RMSEA=0.46; df=6; χ²=1.21.
The reliability of the PSQI was determined using internal consistency (Cronbach's alpha). The results indicated that the index has high internal consistency.
The Cronbach's alpha coefficient for the total scale was 0.81, and for the subscales of this questionnaire it ranged from 0.73 to 0.82. In the Becker and Jesus[@r25] study, the Cronbach's alpha coefficient was 0.69 with a three-factor model; by deleting the "use of sleeping drugs" item it increased to 0.70, which is still lower than in the present study. None of the items were removed in this study. The range of correlation coefficients varied from 0.12 to 0.52, which was in line with the results of Becker and Jesus[@r25]. In the Sohn et al.[@r27] study, the Cronbach's alpha coefficient for internal consistency was 0.84, i.e., high reliability, which is consistent with our findings.
Salahuddin et al.[@r28] reported a Cronbach's alpha of 0.59, and in the Spira et al.[@r29] study the internal reliability was 0.69; both are lower than in the present study but still indicate a degree of internal consistency broadly in line with our results.
Based on the test-retest method, the Sleep Quality Index had good reliability in the elderly population. In the Tzeng et al.[@r30] study, the reliability of the Taiwanese version of the PSQI for cancer patients was 0.91, which is consistent with the present study.
Another objective of this study was to determine the concurrent validity of the PSQI against other questionnaires. The results showed that the Sleep Quality Index had good criterion validity with the other questionnaires used to assess sleep disorders. Del Rio Joao et al.[@r16] reported correlation coefficients of the seven PSQI components with the total score of the GSAQ of around 0.46. In the Takacs et al.[@r31] study, the correlations between component scores and the global score were high, ranging from 0.59 to 0.88. Moreover, there was a significant correlation between the Pittsburgh Sleep Quality Index and the Epworth Sleepiness Scale, and the results of the Spira et al.[@r29] study likewise showed a significant correlation between the two.
The results of the present paper are in good agreement with the majority of the studies mentioned above. It can be said that the Pittsburgh questionnaire is a standard and widely applicable questionnaire that shows similar characteristics across studies in different populations. Although the results of this study agree well with those of previous studies, cultural factors should be taken into account when using and reviewing the questionnaire.
CONCLUSION
==========
The Persian version of the Pittsburgh Sleep Quality Index has the required validity and reliability in the Iranian elderly population and can be used as a useful tool in relevant research.
**Ethics approval and consent to participate:** In this research, the ethical considerations including the principles of confidentiality of information, obtaining written informed consent for participating in study, publication and having the right to withdraw from the research at any time were observed. This study was approved by research committee (Grant No. 3005693) and ethical committee of
**Consent to publication:** not applicable.
**Availability of data and materials:** the datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
**Competing interests:** the authors declare that they have no conflict of interest about this work.
The season and temperature influence humidity. Don’t worry if you’re unsure how to set your air conditioner’s dry mode temperature to get your home’s humidity to a healthy level. I discovered the following after conducting some research on the subject:
The air conditioner’s dry mode operates when the relative humidity is between 25% and 75%. When using the dry setting, set the temperature to 77 degrees Fahrenheit.
The dry mode may be less expensive than the cooling mode. Even on humid days, it contributes to energy savings. This article describes how AC Dry mode differs from other modes in creating the ideal room ambiance. For more information, continue reading.
The Dry Mode Function:
Many people own air conditioners, but few are aware of all of their capabilities. Understanding your air conditioner's functions will allow you to use it more effectively. Dry mode is frequently confused with cool mode, and comparisons between the two are common. They aren't that dissimilar.
Their functions, however, are distinct. On the remote, a snowflake icon typically represents cool mode, while a water droplet represents dry mode. Dry mode is available on a wide range of split and ducted air conditioners, but it is not standard on all models. The main purpose of the dry mode is to lower the relative humidity in the room.
As humidity rises, the air feels warmer; even when the temperature is low, high humidity can feel oppressive. Because the drying function reduces humidity, it is especially useful during humid periods such as summer, the rainy season, or after heavy rain.
At such times it may not be hot enough to turn on the air conditioner for cooling, but the air is humid and uncomfortable; this is when dry mode helps. Switching to dry mode will not remove all of the humidity from the room, but it will make the room more tolerable for the people in it.
How Does Dry Mode In AC Operate?
An air conditioner's drying stage functions similarly to a large dehumidifier. When the air conditioner is set to dry mode, the fan and internal components of the unit continue to run, but the unit does not blow cold air into the room.
Instead, the air conditioning system filters the indoor air, and water vapor condenses on the evaporator, removing moisture from the air. The unit then returns the drier air to the room.
How Important Is The Dry Mode Setting?
Understanding dry mode allows you to use it when conditions are right, which is especially important for maximizing energy efficiency. The default cooling mode consumes more power and should not be used all the time.
We recommend using cool mode when it's hot and dry, and dry mode when it's humid but not necessarily warm. Using dry mode when conditions call for it can help you save money on your energy bills, improve energy efficiency, and reduce your carbon footprint.
Cool Mode Vs Dry Mode:
The primary function of an air conditioner's dry mode is to remove moisture from the air, while the primary function of cool mode is to remove heat from the air and replace it with cooler air. On a very humid day, switching to the dry setting alone may not make much of a difference.
Dry mode does not directly cool the air but removes excess moisture, which has a cooling effect. In much the same way, cooling the air in cool mode also removes moisture from it: because the air conditioner's coil remains cold, condensation forms, and the water droplets fall into a trough and are drained away.
Air Conditioner’s Dry Mode Vs Dehumidifier:
Because these two things serve the same function, you might think there is no reason to pick one over the other. The difference is straightforward: dry mode is one of several settings on your air conditioner, which extracts moisture from the air through a slow cooling process.
A dehumidifier, in contrast, is a stand-alone appliance that removes excess moisture from the air. The air conditioner's dry mode consumes significantly more power than a dehumidifier. Furthermore, dry mode is likely insufficient if the humidity is extremely high, as in Florida or Louisiana; in that case you might need specialized equipment such as a dehumidifier.
Is Dry Mode More Economical?
Dry mode is less expensive to run than cool mode and is an excellent way to reduce your electric bill on humid days. When using the air conditioner in dry mode, the temperature should be set to 24°C.
On hot days, however, dry mode is ineffective at regulating room temperature; cool mode is the most effective on those days. Raising the thermostat a few degrees while in cooling mode can significantly reduce your utility bills.
Is Dry Mode Healthy?
Indoor air quality is critical to your health. A healthier indoor environment can be created by running your air conditioner in dry mode.
Because many harmful bacteria and viruses thrive in humid environments, high humidity levels can harm your health. Reducing humidity to healthy levels improves health, particularly for children, the elderly, and people suffering from respiratory diseases like asthma.
Air that is too dry, on the other hand, can cause breathing difficulties, dry eyes, and sore throats. For maximum comfort and safety, experts recommend keeping your home's humidity level between 35% and 60%.
Dry mode does not completely remove the moisture from the air; rather, it helps maintain a healthy level of humidity. To avoid drying the air excessively, it is generally recommended not to run the air conditioner in dry mode for more than 1-2 hours at a time.
When To Switch To Dry Or Cool Mode?
Use Dry Mode:
- When your home is damp or sticky but not excessively warm.
- When there is a chance of precipitation or just before a storm
- When you don't want the room to become too cold.
Use Cool Mode:
- If the weather is hot and dry, change to a cool setting.
- On extremely hot days with high relative humidity, when you want to keep your house consistently cool.
Other Air Conditioner Modes:
Besides cool and dry, the remote may offer several other modes. Heat mode is ideal for days when you want to warm the air in your home or keep it at a comfortable temperature.
Fan mode runs only the fan, which is useful if you want more airflow but don't want to change the temperature; windows can remain open while it is active. In auto mode, the fan speed and temperature are adjusted automatically, based on the temperature detected by the unit's sensor.
FAQs
Does Using Dry Mode Save Electricity?
Changing to the dry mode can reduce overall power consumption by up to 30-50%. This is due to the lower speed of the fan unit and, more importantly, the lower frequency of the compressor unit.
Can You Sleep With Dry Mode On?
When activated, dry mode works well, maintaining a fairly constant temperature throughout the day and night without overheating the room.
What Does A Dry Mode Do On A Heat Pump?
In this mode, the heat pump alternates between heating and cooling functions to remove excess moisture from the room while maintaining the set temperature.
Can I Run AC On Dry Mode All Day?
The dry mode cannot eliminate moisture from the environment. Instead, it aids in reducing humidity to a more manageable level. To keep the air from becoming too dry, set the air conditioner’s dry mode to run for no more than 1-2 hours at a time.
Final Thoughts:
When using the dry setting on your air conditioner, set the temperature to 77 degrees Fahrenheit. It’s a good idea to consult an expert if you want recommendations on the most efficient air conditioner for your home or a guide on maintaining your air conditioner.
Disclosure: We may get commissions for purchases made through links in this post.
Treatment of ocular melanoma metastatic to the liver by hepatic arterial chemotherapy.
Details
Serval ID
serval:BIB_819A47F25CA8
Type
Article: article from journal or magazin.
Collection
Publications
Institution
Title
Treatment of ocular melanoma metastatic to the liver by hepatic arterial chemotherapy.
Journal
Journal of Clinical Oncology
ISSN
0732-183X
Publication state
Published
Issued date
1997
Peer-reviewed
Oui
Volume
15
Number
7
Pages
2589-2595
Language
english
Abstract
PURPOSE: Ocular melanoma is characterized by a high rate of liver metastases and is associated with a median survival time of less than 5 months. There is no standard treatment available. Treatment strategies have, without success, relied on the experience with metastatic cutaneous melanoma. The only effective treatment is chemoembolization using cisplatin and polyvinyl sponge, which has never become accepted on a large scale. The objective of the study was to establish prospectively the efficacy and toxicity of hepatic intraarterial fotemustine, a third-generation nitrosourea, in patients with liver metastases from ocular melanoma. PATIENTS AND METHODS: Thirty-one patients underwent laparotomy to place a totally implantable catheter into the hepatic artery and received fotemustine 100 mg/m2 as a 4-hour infusion, first once a week for four weeks and then, after a 5-week rest period, every 3 weeks until progression or toxicity. Cox regression models were used to assess the prognostic role of patient characteristics for survival. RESULTS: Objective responses were observed in 12 of 30 assessable patients (40%; 95% confidence interval, 22% to 59%). The median duration of response was 11 months and the median overall survival time was 14 months. Lactate dehydrogenase (LDH) appeared to be the strongest prognostic factor for survival. Toxicity was minimal and treatment could be administered on an outpatient basis. CONCLUSION: Hepatic arterial chemotherapy with fotemustine produced a high response rate and survival similar to chemoembolization therapy. It involves no major toxicity and preserves quality of life. To assess its effectiveness further, a randomized study comparing hepatic intraarterial versus intravenous chemotherapy is being planned.
Assessing English language learners for special education
For years, research has shown a disproportionate identification of English language learners in special education. Dr. Joe Yoo ‘19, an Ed.D. graduate from the Department of Teaching, Learning and Culture, developed a checklist to help minimize unwarranted special education referrals of English language learners in Houston ISD.
Identifying ELL students with disabilities can be difficult. There is a lack of adequate assessments and most education professionals do not have the experience to effectively assess ELLs for special education.
During Yoo’s Intervention Assistance Team meetings, educators engage in a collaborative, problem-solving process to resolve student problems. School administrators, parents and evaluation professionals also take part. Should students not make sufficient progress with the interventions provided in the IAT meetings, they are referred for special education assessment. Classroom teachers are then asked to collect crucial data to support the referral for testing.
In Yoo's qualitative record of study, results showed that teachers lacked formal documents for collecting that data, felt inadequately prepared to teach ELLs, and were unable to distinguish between language acquisition and learning disabilities.
“We need to equip teachers with a document so when they come to refer a child, they have good, solid information to offer,” said Yoo. “This is especially important for bilingual students because these students have issues with reading, for example. It’s hard to tell whether they are still learning the language or if they have a learning disability.”
Yoo has spent 21 years in the education field. He is currently managing compliance and evaluation in the Office of Special Education Services in Houston ISD. During meetings with school administrators and teachers, he hears reports about students and certain challenges. However, they had no formal documentation.
“As an evaluation team member, I wanted to see more concrete and formal documentation. All they brought to meetings were sticky notes or journals, or they would have anecdotal informal observations,” said Yoo.
Yoo felt formal documentation, such as a checklist, may provide support to the teachers who plan to introduce a student’s case at the IAT meeting and ensure the collection of crucial data to improve the fidelity of the meetings. That is when the IAT checklist was developed.
“My goal was to keep the checklist short so teachers are not overwhelmed with documents. The first column is for the characteristics of a student acquiring a language. The second focuses on characteristics of a student with learning disabilities,” said Yoo. “This helps us determine if the struggles are because the student has attention issues or because the student doesn’t understand the language.”
This checklist was first implemented in January 2021. Every ELL who is suspected of having a learning disability goes through the checklist. Yoo estimates the checklist will be used close to 3,000 times during the school year.
He hopes it helps to provide constructive collaboration with teachers to solve problems and gain sufficient knowledge in how to instruct and help ELLs and prevent unnecessary referrals.
“We need to give students the opportunity to learn the language and we need to exhaust every resource that we have and give them all the interventions before we label them as a student with a disability,” said Yoo.
About the Writer
Ashley is the Communications Manager and responsible for news coverage in the Department of Teaching, Learning and Culture as well as the Department of Educational Psychology.
At Blink, we feel one of the greatest strengths we bring to clients is the extent to which we leverage our internal expertise—and work with clients—in a collaborative way. But effective collaboration is not a slam-dunk. The key is to tap into individual competencies and perspectives in a way that improves the outcome rather than hinders it.
The reason teamwork and brainstorming get so much “play” in the business community is that they have demonstrated potential to produce superior solutions. One of the first exercises I did in business school was a survival exercise where you are placed in a fictional scenario (in this case, a plane crash in the Cascade Mountains). You have a limited number of items you can select (things like a flashlight, skis, a mirror) to enhance your chances of survival. The goal was to select the “right” things based on the experience of military experts. First we all did the exercise individually. Next we were divided into teams and worked on the exercise collaboratively. It wasn’t easy – we had to come to consensus on which items to select. However, almost everyone’s team score on the exercise was higher, in many cases significantly higher, than the individual score.
However, as anyone who has worked on a team that isn’t functioning well knows, teams can also come up with weak “design by committee” solutions that just don’t work. Or one person in the group can dominate, causing others to acquiesce to an inferior solution. Or the team can fall prey to group think, where the team is so consumed with maintaining harmony that it interferes with each individual’s critical thinking.
David Perkins, a professor at the Harvard Graduate School of Education, warns that sometimes group sessions can result in one person’s bad idea tainting and limiting the range of others’ ideas. “The best way to get good ideas is to get people to write them down privately and then bring them in,” he says.
Professor Perkins’s observation is consistent with our experience. People need to spend time thinking through problems on their own first. Coming in cold to a brainstorming session is usually a recipe for wasted effort.
Most user experience professionals have experience conducting user research and usability evaluations – and are therefore aware of how facilitation, if done poorly, can impact the behavior of participants. With group facilitation there is the added challenge of managing the group dynamic.
Our work with clients is highly collaborative. After all, they are the experts in their domain. We are sometimes brought in as an impartial third party to lend a fresh perspective to issues that may be divisive within a group. The client may have even conducted testing or other research on their own, but each party has a different interpretation of the results that supports their position. As a third party, we can come in and ask the “dumb” questions, challenge existing assumptions, and facilitate discussion in a way that helps encourage idea exploration.
Where it’s not possible to bring in a third party, you can use an internal person as facilitator whose role it is to create an open environment for discussion that allows a range of ideas to be heard. It’s much like facilitating a focus group—if people have come prepared with their ideas you can ensure more even participation by going around the table and giving each person an opportunity to be heard. Or, in more free-form discussion, politely move the focus away from someone who is dominating and ask for the thoughts of someone who has been sitting more on the sidelines. Often these quiet types have good listening skills that help them form well-considered opinions.
Sometimes people equate brainstorming with consensus decision-making – meaning that they expect the group will work until they find solutions that the entire group can agree to. However, the process of brainstorming and other collaborations may only be input used by a final decision-maker. In a worst-case scenario, decisions have already been made – and collaborative efforts are in reality window-dressing to make people feel like they were a part of the process. Major misunderstandings can occur if there is a difference between the expected decision-making process and the actual one.
Decision-making ultimately can be the hardest part of any collaborative process. Unless there is clear consensus, someone may leave feeling that their ideas were bypassed. In many non-profit environments, consensus decision making is the norm because group harmony tends to be more highly favored. However, gaining consensus is generally a more lengthy process.
Typically, in a for-profit environment, there is a project owner who will make the final call. But there can be tremendous variation in leadership style. Some styles are more autocratic – where ideas are briefly heard before quickly closing on a decision. Others lean more towards consensus and are hesitant to make a decision where there is no clear prevailing opinion. To avoid misunderstandings and hard feelings, it’s important for the entire group to understand the style of decision-making that will be employed.
Keeping in mind the above principles, there are a couple of different collaborative approaches we use at Blink for interactive design. At a minimum, we conduct a design review, where the lead designer creates the design and then asks others to review it and provide feedback. These sessions often lead to exploring design alternatives – a fresh pair of eyes on a design problem can provide new insights into solving a particular design problem. We also may employ parallel problem solving – where two or more designers tackle the same problem independently and then come together to compare and contrast solutions. This approach is particularly fruitful for larger, more complex design problems.
We also lead collaborative sessions with clients – most commonly when they are having difficulty articulating or agreeing on system requirements. One approach is to get the stakeholders in a room and begin sketching out screen flows on a white board. This allows for immediate feedback and quick re-working. It’s also highly visual, which tends to be easier for people to grasp than more abstract representations of requirements such as use case narratives or flow diagrams.
And, of course, all our client meetings are highly collaborative. We bring one set of expertise (user experience) and the client brings another (expertise in their domain). The key is to create a synthesis of these perspectives to arrive at the best possible solution—which, of course, we will then test with users.
With the right kind of planning and approach, we have seen a very high return on the time invested in collaborative efforts. Not only is it fruitful, but it can be fun, interesting, and engaging for all involved.
Reference: “Brainstorming Works Best if People Scramble for Ideas on their Own.” Wall Street Journal. June 13, 2006, page B1.
Heidi works in Interaction Design and is a Partner at Blink. She divides her leisure time between classical music, cooking, and the Seattle Mariners.
Sequential roles of primary somatosensory cortex and posterior parietal cortex in tactile-visual cross-modal working memory: a single-pulse transcranial magnetic stimulation (spTMS) study.
Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in the sequential neural processes underlying cross-modal working memory. The present study aims to further examine the causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to the bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC) while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. spTMS was delivered 300 ms, 600 ms, or 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment of performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in the neural processes of cross-modal association and working memory.
Renaissance Arts and Artists
The Renaissance was the era in which society and culture significantly changed in Europe. It was the bridge between the Middle Ages and modern history. The Renaissance took place from the 14th to the 16th century. It was a time of rebirth, growth, and creativity, and a time in which humanism increased. During the Renaissance, a new style in painting, architecture, and literature formed. The significant changes in art, architecture, and literature during…...
Artists
Michelangelo
Renaissance Art
A History Of The Renaissance Period History
Words • 1466
Pages • 6
The Renaissance was a period in European history marked by a cultural flowering. The Renaissance is defined as the revival or rebirth of the arts. The home of the Renaissance was Italy, with its position of prominence on the Mediterranean Sea. Italy was the capital of commerce between Europe and Eurasia during this period, from the fourteenth to the sixteenth century. Painters, sculptors, and architects exhibited a similar sense of adventure and the desire for greater knowledge and new solutions. During the Renaissance,…...
Art
History
Music
Painting
Renaissance Art
Renaissance Period
High Renaissance Art
Words • 1452
Pages • 6
Well of Moses, by Claus Sluter, is a well/portal that was placed in the chapel of a monastery. The Well of Moses is in the Chartreuse de Champmol in Dijon, France, and was created between 1395 and 1406. The well is surrounded by Moses, sculpted with horns, and five other biblical prophets: David, Daniel, Isaiah, Jeremiah, and Zachariah. Moses, the most identifiable and most often referenced, carries the tablets containing the Ten Commandments. Adding a higher form of support…...
Renaissance Art
Visual Arts
Renaissance Artifacts
Words • 1028
Pages • 5
OBJECTIVE: The Renaissance world-view can be characterized by its humanistic orientation. The objective of this work is to analyze cultural artifacts from the Renaissance, showing how they reflect the values of the time, and to incorporate the testimony of two experts in the field. INTRODUCTION: The manner in which the values of a civilization’s culture during a given period are expressed in its artifacts has been noted in the work of archaeologists and anthropologists who…...
Art
Renaissance Art
Donatello's David
Words • 351
Pages • 2
The re-creation of David, a biblical hero, comes from three very notable works of art in history. The free-standing sculptures were made by Donatello, Michelangelo, and Bernini. Donatello was an artist and sculptor known for his exploration of human emotion and expression. He was also known for using difficult mediums to make masterpieces. Donatello’s depiction of David appears as a bronze work of art from the Italian Renaissance period. It is a life-sized sculpture of David in the…...
Art
Michelangelo
Renaissance Art
Madonna of the Meadows
Words • 814
Pages • 4
In the history of art, one of the main goals of artists has been to create realistic and innovative art. Every era invented its own way of achieving this effect, and when paintings are compared, there are always pieces that are more realistic and innovative than others. The search for exact representation of the real world has led to very impressive results. The High Renaissance period brought about a great cultural movement in terms of scientific…...
Art
Art History
Artists
Country
Culture
Drawing
Comparing Medieval Art to Renaissance Art
Words • 1209
Pages • 5
Medieval art period: Medieval art covers a large span of time. The period covered over 1000 years of art in Europe, the Middle East, and North Africa. It was characterized by major art movements based on national and regional art, as well as revivals and artists' crafts. Art historians have managed to classify medieval art into major periods and styles, though this classification often involves significant difficulty. The major periods of art in the medieval period…...
Art
Leonardo Da Vinci
Renaissance Art
The Italian Renaissance vs the Renaissance in Northern Europe
Words • 562
Pages • 3
The Renaissance is a time in history that is often discussed and referenced, but rarely defined. Literally meaning “re-birth,” it started in the late 1300s in Italy, particularly in Florence. It encompassed all areas of culture, from art to music to literature to medicine. The Renaissance can also be seen not just as a re-birth of culture but as a revival of culture. After the dark ages, the arts were finally flourishing again. People were interested in science. Many of…...
Italian Renaissance
Painting
Renaissance Art
Renaissance Artemisa Gentileschi
Words • 1011
Pages • 5
My idol of the Renaissance period is the famous woman artist Artemisia Gentileschi. She was born in Rome on July 8, 1593. Her father was a well-known Roman artist named Orazio Gentileschi, and her mother was named Prudentia Monotone, who died when Artemisia was twelve. Most women artists in this post-Renaissance era were limited to portrait paintings and poses. She was the first woman to paint major historical and religious scenes, such as her painting Judith Beheading Holofernes, c.…...
Caravaggio
Renaissance Art
Dürer – Adam And Eve
Words • 658
Pages • 3
Albrecht Durer’s painting Adam and Eve, completed in 1504, exemplifies the Renaissance style for its visual artisanship and religious imagery. The artist’s efforts to render the human body realistically, as well as the use of shadow and perspective, set it apart from the recently-ended medieval era through its evolved use of technique. The work consists of two tall panels with dark backgrounds, placed side by side. The left panel contains Adam, standing on a surface scattered with small stones. Unclothed,…...
Adam and Eve
Michelangelo
Painting
Renaissance Art
Differences between Northern Renaissance Art and Italian Renaissance Art
Words • 550
Pages • 3
There are many differences between Northern Renaissance art and Italian Renaissance art. While Italian Renaissance art tended to show the body in an idealistic way, Northern Renaissance art hid the body. The art was very realistic, but drapery hid the body in a medieval fashion. That marks one major difference between the two: Italian was classical and Northern was medieval. Northern art had an immense number of symbols in it. A good example of Northern art…...
Art
Italian Renaissance
Renaissance Art
Humanism in The Renaissance
Words • 694
Pages • 3
The Renaissance was a great revolution in Europe from the ways of the Middle Ages. This essay is about the different aspects of humanism evident during the Renaissance (the changes in political philosophy, art and religion). Essay Question: What cultural changes during the Renaissance portrayed humanism? Humanism in the Renaissance The Renaissance was a time in which the modern age began, because of humanism. Humanism is a way of life centered on human interest. It was a huge change to…...
Art
Humanism
Renaissance Art
Middle Ages vs Renaissance Art Periods
Words • 922
Pages • 4
When seeking two art periods to compare and contrast, few artistic examples provide a starker depiction of radically changing ideas and mentality than the art of the Middle Ages set against that of the Renaissance. First, art originating from the Middle Ages will be thoroughly analyzed for context. Afterward, art from the Renaissance period will be analyzed next to it for its departures from Middle Age techniques and thinking, before the two are finally systematically compared and…...
Art
Middle Ages
Renaissance Art
Renaissance Period
| https://studymoose.com/renaissance-art
A new report from the Deloitte Center for Sustainable Progress (DCSP) released today during the World Economic Forum’s annual meeting indicates that—if left unchecked—climate change could cost the global economy US$178 trillion over the next 50 years, or a 7.6% cut to global gross domestic product (GDP) in the year 2070 alone. If global warming reaches around 3°C toward the century’s end, the toll on human lives could be significant—disproportionately impacting the most vulnerable and leading to loss of productivity and employment, food and water scarcity, worsening health and well-being, and ushering in an overall lower standard of living globally.
Deloitte’s Global Turning Point Report is based on research conducted by the Deloitte Economics Institute. The report analyzed 15 geographies in Asia Pacific, Europe, and the Americas, and found that if global leaders unite in a systemic net-zero transition, the global economy could see new five-decade gains of US$43 trillion—a boost to global GDP of 3.8% in 2070.
“The time for debate is over. We need swift, bold and widespread action now—across all sectors,” said Deloitte Global CEO Punit Renjen. “Will this require a significant investment from the global business community, from governments, from the non-profit sector? Yes. But inaction is a far costlier choice. The data bears that out. What we have before us is a once-in-a-generation opportunity to re-orient the global economy and create more sustainable, resilient, and equitable long-term growth. In my mind the question is not why we should make this investment, it’s how can we not?”
Transforming the economy for a low-carbon future will require extensive coordination and global collaboration throughout industries and geographies. Governments will need to collaborate closely with the financial services and technology sectors—leading the charge on sustainable progress through global policymaking, greater investment in clean energy systems, and a new mix of green technologies across industries. According to the Deloitte Economic Institute’s research, collectively pivoting from an economy reliant on fossil fuels to an economy primarily powered by renewable energy would spur new sources of growth and job creation. Global cooperation and regulation are vital to setting the stage for a successful transformation.
“It’s important that the global economy evolves to meet the challenges of climate change,” said Dr. Pradeep Philip, Deloitte Economics Institute. “Our analysis shows that a low-carbon future is not only a societal imperative but an economic one. We already have the technologies, business models, and policy approaches to simultaneously combat the climate crisis and unlock significant economic growth, but we need governments, businesses, and communities globally to align on a pathway toward a net-zero future.”
“In order to find new and lasting solutions to these societal challenges, we must model new forms of cooperation and pursue a multi-party, holistic approach. The Turning Point analysis lays a powerful foundation of economic benefit and growth for decision-makers, influencers and participants to work from for individual and shared prosperity,” said Prof. Dr. Bernhard Lorentz, founding chair of the DCSP and Deloitte Global Consulting Sustainability & Climate Strategy leader. | https://marketsherald.com/deloitte-research-reveals-inaction-on-climate-change-could-cost-the-worlds-economy-us178-trillion-by-2070/ |
Climate change is the major threat facing humanity. Human interactions with climate occur at all levels but so far research has focused on governments, industries and on the technological, demographic and economic trends that drive climate change. Factors that influence decisions and behaviour at the individual level have received less attention. But, individual behaviour drives societal change via adoption of technologies and support for policies.
Unless we examine what factors influence mitigation and adaptation behaviours and how climate change will affect human well-being, we will be unable to respond effectively as a society. Too much policy is based on oversimplifications and erroneous assumptions about these factors, for example, the assumption that informing individuals about climate change science is sufficient to affect decisions and behaviours. Ignoring insights from psychological research will handicap progress towards a low-carbon, sustainable future. Climate change will affect well-being in ways that are often overlooked. Natural disasters have direct impacts on mental health and indirect impacts will result from cumulative environmental stresses. Awareness of these impacts encourages public engagement and encourages effective adaptations that minimize negative effects and capitalize on possibilities for more positive changes. People typically underestimate the likelihood of being affected by disaster events and tend to under- rather than overreact.
Community preparedness can be improved by considering these processes in the design of education and messaging; for example, by accompanying risk information with information about the specific personal implications of the risk and about specific actions to address it. The psychological perspective is uniquely placed to understand individual factors in socio-ecological systems and to provide important input towards a multi-level approach integrating the natural sciences, social sciences, and humanities.
Researchers concerned with understanding and responding to climate change acknowledge that multiple disciplinary approaches are necessary, but do not always act on this recognition. It is time to develop effective ways to integrate psychological research into these efforts.
To successfully communicate about risk, change behaviours that contribute to climate change and facilitate adaptation, it is necessary to consider individual capabilities, cognitive processes, biases, values, beliefs, norms, identities and social relationships, and to integrate this understanding into broader understanding of human interactions with a changing climate. | http://tyndall.cc.demo.faelix.net/ideas-and-insights/perceptions-climate-change |
Seattle Commission on Electronic Communications
The Seattle Commission on Electronic Communications' charge was to develop a short-term and long-term vision and direction for the City's television station and its web site in order to increase public awareness, understanding and participation in government, community and cultural affairs. The Commission was also asked to explore areas of structure, finance, programming, marketing, teledemocracy and emerging technologies.
The Commission has 14 volunteer members. It began work in early 2001, and its recommendations were published mid-December. The Commission gathered information for its recommendations from numerous sources, including: guest presenters; research conducted by City staff and consultants; subcommittee work; review of other cities' stations and web sites; and independent reading.
Executive Summary
Recommended Goal
- To be a national leader in using technology to dramatically expand civic engagement and public discourse by transforming TVSea into a multimedia organization that provides compelling content and two-way communication opportunities.
Recommended Mission Statement
- To inform and engage citizens in the governmental, civic and cultural affairs of Seattle through compelling use of television, Internet and other media.
Recommendations
Content & Production
- Create a multimedia resource that provides linkages to public information and opportunities for citizens to interact with their government and each other across all media platforms.
- Improve programming and content, making it engaging and informative for television, Internet and other digital media.
- Enhance City Council meeting coverage by placing meetings in context, providing interactivity with viewers and web users, including online access to briefing materials, using graphics and crawls to increase understanding, and improving production values (lighting, camera angles, etc.).
- Consider new content, such as: weekly council highlights; top 10 questions from citizens; backstage at Bumbershoot; “Day in the Life” programs; and instant feedback.
Branding & Marketing
- Develop a brand (new name, professional style, logo and graphics) that is consistent across television, Internet and other digital media.
- Develop and implement a comprehensive marketing plan to draw new users and viewers.
Technology
- Use integrated technology—e-mail, Internet chat, indexed video on demand, instant polling, wireless services, television, etc.—to promote civic engagement and participation.
- Incorporate new technologies as they emerge.
Partnerships
- Establish partnerships with local television and radio stations, high-tech companies and community and non-profit organizations to leverage operational, content and technical resources.
Finance
- Maintain the current level of support from City funds and the cable franchise fee.
- Use any revenues above projections for 2001 and 2002 to implement improvements in 2002.
- Increase the cable franchise fee in 2003 and 2004 and dedicate the revenue to improving quality and content, expanding interactive services, marketing and creating partnerships.
Governance & Evaluation
- Maintain the TV/democracy portal as a part of City government.
- Restructure the current TVSea organization to create two functional units—content development and engineering/operations—that serve both television and web.
- Establish a citizen review panel to report on the organization’s performance and independence.
- Set measurable goals and conduct regular evaluations to measure and improve performance. | http://www.seattle.gov/scec/ |
A common cause and adversity will often bring very different people together. In 2020, the global LGBTQIA+ community is one of the most diverse minority groups globally, with representation from all countries, ethnicities, genders and faiths. The community comprises many smaller communities, from well-known and established groups like the lesbian, gay, bisexual and transgender communities to many smaller, lesser-known groups. Representation is important, and today, fortunately, more people have the freedom to discover and accept their identities. Many quickly realise that they do not entirely identify with non-traditional identities, resulting in the risk of splintering representation groups. With such a large group of diverse people, not all voices will be heard, a reality that often causes problems because the wants and needs of members differ among the very different groups and factions. However, today 25% of the world’s LGBTQIA+ community still live in countries where their sexuality is deemed illegal and gender expression is not permitted. With the unity of the community so crucial to the global fight for equality, is there a risk of representative groups and associations separating?
Underrepresentation
Over the past few years, there have been consistent rumours within the LGBTQIA+ community that many smaller and lesser-known groups often feel underrepresented: a feeling that many of the well-respected representative groups and associations advocating for and speaking on their behalf do not always highlight or address the issues directly affecting them.
This under-representation is rumoured to have started conversations about creating targeted organisations that concentrate on specific wants and needs, breaking away from more established representative bodies. Though these are just rumours, many of the LGBTQIA+ representative groups and associations have publicly denied any potential splintering. However, there is some truth to the feeling that many are marginalised and their needs ignored within the community.
Outside of the community, people often joke about the ever-growing number of letters that form the name of the wider community. Those same people often do not appreciate the need we all have to be represented and the feeling that our voice is being heard. Whether non-binary or lesbian, we all want acknowledgement and acceptance. Most importantly, we all want our fundamental human rights and protection under the law. Over the years, pioneers and advocates as a collective have fought hard to improve the lives and conditions of community members.
Though LGBTQIA+ people are now more widely accepted in certain parts of society and within specific countries worldwide, many are still being actively persecuted even today, whether through the continual persecution by antiquated regimes or, in recent years, the rise in anti-LGBTQIA+ sentiment across the world. With the constant threat to hard-fought rights and ongoing persecution, it is undeniable that the community still has a long way to go. The global LGBTQIA+ community has clear goals in mind: for everyone to have the freedom to be their true self and to be accepted, not persecuted, worldwide.
Need for unity
Looking closely at the estimates relating to the community, you can see where the minority groups exist. It is estimated that of the world’s population, 1.34% identify as Lesbian & Gay, 1.29% identify as Bisexual, and less than 1% (0.006%) identify as Transgender. The larger groups tend to focus more on sexuality-related rights and laws. Women that identify as lesbian or bisexual will also often fight for gender equality. However, though sexuality is important within many smaller groups, gender expression and anti-discrimination laws are higher on their list of priorities.
It is undeniable that work needs to be carried out to ensure everyone feels represented and that their voices are heard. However, splintering off at this crucial time, with so many still persecuted and the constant risk of progressive laws being reversed, could cause long-term issues. Such a move could also dilute the overall message, the calls for equal treatment, and the collective influence the community has overall.
There are examples of very different people uniting together, working towards one common goal throughout history. One such example is the Indian Independence Movement, whereby various Indian communities living under the rule of the former British Empire worked together. With their own cultures, religions, beliefs, and languages, these very different individuals worked together to achieve independence. Though the groups fractioned off after independence in 1947 to create modern-day India, Pakistan and Bangladesh, for over 90 years, they worked together in unity with one common goal in mind.
The world, as a whole, is still a long way from treating LGBTQIA+ individuals as equal. The community needs are relatively simple to achieve. To give all people their fundamental rights, the freedom to be their true selves, loving whomever they want without fear of persecution. In attaining these collective needs, no one is asked to change their fundamental beliefs or even dilute or lose their heterosexual rights to legal recognition and protection.
Many countries over the years have stepped up and legalised same-sex relationships, introduced anti-discrimination laws, improved equality and gender expression for LGBTQIA+ individuals; however, there is still so much more to do. With 25% of the community living in fear of persecution and anti-LGBTQIA+ sentiment growing, there is a risk that the laws and rights many fought hard to achieve could be reversed.
Common cause
It starts with establishing mandates under which we all collectively agree to fairer representation amongst all groups, including the smaller, lesser-known ones, and with community members themselves showing more empathy and understanding for those different from them, especially when finding common ground. All of this serves the common goal and ambition of ensuring that everyone, wherever they live around the world, has the freedom to be their authentic self without fear.
Like the pioneers that came before us, the advocates fighting for us now, you do not have to be on the front lines. Still, it is crucial that you believe in the cause and that your wants and needs are fairly represented. The clear message to the world is that we are not asking to be treated differently. Only that we are given the same rights and protections under the law like everyone else; that cannot be too much to ask, can it? | https://gayther.com/lgbtq-strong-together/ |
English has emerged as the global language of trade and commerce in the past few decades, affecting many key aspects of business in the modern world. The English language first spread as the result of colonial expansion, and has become the standard for all important official communications in an increasingly large number of countries with a wide variety of native languages. In the modern world, thanks to the Internet, English continues to spread as the major medium through which both small businesses and large corporations do business.
England began to develop overseas colonies as early as the 12th century in Ireland, and soon expanded to the New World in the Americas, creating English-speaking colonies in what would eventually become the United States and Canada. Other key colonies in the British Empire included various parts of India, the African continent, such as South Africa, the Middle East, Australia and Hong Kong. English was the unifying language in many of these areas, and soon became the language of shipping, travel and commerce.
There are 27 member states in the European Union (EU), and 54 in the Commonwealth of Nations. English is one of the main official languages in the EU through which all business is conducted. The Commonwealth is comprised of 52 nations which were formerly British colonies, and two which elected to join for trade reasons (Mozambique and Rwanda). English is the main language for all business transacted by the Commonwealth, which promotes free trade amongst its member states.
English is a global language for doing business. In some industries, such as the airline and shipping industries, English is the official standard language. Therefore, an excellent command of English is required for key jobs, such as air traffic controller or ship captain. In addition, English has emerged as a major language for finance and the stock markets around the world. People wishing to do business globally need to have a good command of spoken English. The ability to clearly write in English is also key, as many forms of business communication, from emails to presentations and marketing to important business contracts, are written in English.
In some industries, a knowledge of business terminology in English is critical for entry into and the success of a business. Workers need to have an understanding and command of detailed vocabulary dealing with specific concepts in order to be able to communicate effectively with other professionals in the business. Examples of specialized businesses requiring a knowledge of English include computing, engineering, science, technology, medicine and law.
English has emerged as one of the major languages for doing business on the Internet. A website written in English can attract many customers and enable even small business owners in remote villages to sell items to people around the world. Well-written product and service descriptions in English are key for attracting new customers and keeping them up to date on any new product offerings.
Since graduating from New York University with her Bachelor of Arts in 1996, Evelyn Trimborn has written both fiction and nonfiction for many websites and blogs on health, diet, nutrition, self-help, and business and finance. Her work has appeared on Amazon and at Healthful-Goddess.com, TreatAcneToday.com, InsiderSecretsCorp.com and Career-Command.com. | https://bizfluent.com/about-6710260-importance-english-business-communication.html |
As the healthcare system shifts towards quality and efficiency, pharmacists can play an integral role, focusing on medication management, medication reconciliation, preventive care, and patient education. According to the report, there are five ways pharmacists could provide care for patients.
The report examined five areas in which pharmacists could enhance coordinated care:
Medication management: Pharmacists can play a role helping patients with chronic diseases have better medication adherence and clinical outcomes.
Medication reconciliation: Pharmacists help detect and reduce medication discrepancies and increase benefits through comprehensive transition of care programs, especially among post-discharge patients with an elevated risk of readmission.
Preventive care services: Pharmacists play a key role in immunization services and identifying vaccine candidates. They also provide screening services, and have great access to the community, which accountable care organizations (ACOs) could leverage for their benefit.
Education and behavior counseling: Pharmacist-provided behavioral counseling improves medication adherence and therapeutic outcomes in patients with chronic conditions, and can play a major role in other types of pharmacist interventions shown to improve outcomes. | https://pharmahub.com.ng/5-ways-pharmacists-can-help-improve-patient-care-2/comment-page-1/ |
Perfect competition provides both allocative efficiency (price equals marginal cost, so output goes to the buyers who value it most) and productive efficiency (output is produced at the lowest possible average cost).
The theory of perfect competition has its roots in late-19th century economic thought. Léon Walras gave the first rigorous definition of perfect competition and derived some of its main results. In the 1950s, the theory was further formalized by Kenneth Arrow and Gérard Debreu.
Real markets are never perfect. Those economists who believe in perfect competition as a useful approximation to real markets may classify them as ranging from close-to-perfect to very imperfect. The real estate market is an example of a very imperfect market. In such markets, the theory of the second best proves that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal.
There is a set of market conditions which are assumed to prevail in the discussion of what perfect competition might be if it were theoretically possible to ever obtain such perfect market conditions. These conditions include a large number of buyers and sellers, none of whom can individually influence the market price; a homogeneous product; perfect information; and free entry into and exit from the market.
In a perfect market the sellers operate at zero economic surplus: sellers make a level of return on investment known as normal profits.
Normal profit is a component of (implicit) costs and not a component of business profit at all. It represents the opportunity cost, as the time that the owner spends running the firm could be spent on running a different firm. The enterprise component of normal profit is thus the profit that a business owner considers necessary to make running the business worth her or his while, i.e., it is comparable to the next best amount the entrepreneur could earn doing another job. Particularly if enterprise is not included as a factor of production, it can also be viewed as a return to capital for investors including the entrepreneur, equivalent to the return the capital owner could have expected (in a safe investment), plus compensation for risk. In other words, the cost of normal profit varies both within and across industries; it is commensurate with the riskiness associated with each type of investment, as per the risk–return spectrum.
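As a rough numeric illustration of the distinction (all figures below are hypothetical), economic profit can be obtained by subtracting these implicit opportunity costs, namely the owner's best alternative salary and the return the capital could have earned elsewhere, from accounting profit:

```python
# Hypothetical figures for a small owner-operated firm over one year.
revenue = 500_000.0
explicit_costs = 380_000.0          # wages, rent, materials actually paid
accounting_profit = revenue - explicit_costs

# Implicit costs that make up "normal profit":
foregone_salary = 90_000.0          # what the owner could earn in the next-best job
capital_invested = 200_000.0
safe_return_rate = 0.05             # return available on a comparable-risk investment
foregone_return = capital_invested * safe_return_rate

normal_profit = foregone_salary + foregone_return
economic_profit = accounting_profit - normal_profit

print(f"Accounting profit: {accounting_profit:,.0f}")  # 120,000
print(f"Normal profit:     {normal_profit:,.0f}")      # 100,000
print(f"Economic profit:   {economic_profit:,.0f}")    # 20,000 (positive, so entry is attracted)
```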
Only normal profits arise in circumstances of perfect competition when long run economic equilibrium is reached; there is no incentive for firms to either enter or leave the industry.
Economic profit does not occur in perfect competition in long run equilibrium; if it did, there would be an incentive for new firms to enter the industry, aided by a lack of barriers to entry until there was no longer any economic profit. As new firms enter the industry, they increase the supply of the product available in the market, and these new firms are forced to charge a lower price to entice consumers to buy the additional supply these new firms are supplying as the firms all compete for customers (See "Persistence" in the Monopoly Profit discussion). Incumbent firms within the industry face losing their existing customers to the new firms entering the industry, and are therefore forced to lower their prices to match the lower prices set by the new firms. New firms will continue to enter the industry until the price of the product is lowered to the point that it is the same as the average cost of producing the product, and all of the economic profit disappears. When this happens, economic agents outside of the industry find no advantage to forming new firms that enter into the industry, the supply of the product stops increasing, and the price charged for the product stabilizes, settling into an equilibrium.
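The entry dynamic described above can be sketched as a toy simulation; the linear demand curve, per-firm output, and cost figures below are invented purely for illustration and are not taken from the text:

```python
# Toy entry dynamics: identical firms enter while economic profit is positive.
# Hypothetical demand: P = 100 - 0.01 * total quantity; each firm supplies
# 50 units at an average (economic) cost of 40 per unit.

def market_price(total_quantity: float) -> float:
    return max(0.0, 100.0 - 0.01 * total_quantity)

FIRM_OUTPUT = 50.0
AVERAGE_COST = 40.0

firms = 10
price = market_price(firms * FIRM_OUTPUT)
for _ in range(200):
    price = market_price(firms * FIRM_OUTPUT)
    economic_profit_per_firm = FIRM_OUTPUT * (price - AVERAGE_COST)
    if economic_profit_per_firm > 1e-6:
        firms += 1          # positive economic profit attracts one more entrant
    else:
        break               # profit exhausted: long-run equilibrium reached

print(firms, round(price, 2))   # price has been driven down to average cost (40.0)
```

In this sketch, entry stops exactly when price has fallen to average cost, which is the long-run equilibrium the paragraph describes.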
The same is true of the long-run equilibria of monopolistically competitive industries and, more generally, of any market which is held to be contestable. Normally, a firm that introduces a differentiated product can initially secure temporary market power for a short while (see "Persistence" in Monopoly Profit). At this stage, the initial price the consumer must pay for the product is high, and the demand for, as well as the availability of, the product in the market will be limited. In the long run, however, when the profitability of the product is well established, and because there are few barriers to entry, the number of firms that produce this product will increase until the available supply of the product eventually becomes relatively large and the price of the product shrinks down to the level of the average cost of producing it. When this finally occurs, all monopoly profit associated with producing and selling the product disappears, and the initial monopoly turns into a competitive industry. In the case of contestable markets, the cycle is often ended with the departure of the former "hit and run" entrants to the market, returning the industry to its previous state, just with a lower price and no economic profit for the incumbent firms.
Profit can, however, occur in competitive and contestable markets in the short run, as firms jostle for market position. Once risk is accounted for, long-lasting economic profit in a competitive market is thus viewed as the result of constant cost-cutting and performance improvement ahead of industry competitors, allowing costs to be below the market-set price.
Economic profit is, however, much more prevalent in uncompetitive markets such as in a perfect monopoly or oligopoly situation. In these scenarios, individual firms have some element of market power: Though monopolists are constrained by consumer demand, they are not price takers, but instead either price-setters or quantity setters. This allows the firm to set a price that is higher than that which would be found in a similar but more competitive industry, allowing them economic profit in both the long and short run.
The existence of economic profits depends on the prevalence of barriers to entry: these stop other firms from entering the industry and sapping away profits, as they would in a more competitive market. In cases where barriers are present but there is more than one firm, firms can collude to limit production, thereby restricting supply in order to ensure the price of the product remains high enough for all of the firms in the industry to achieve an economic profit.
However, some economists, for instance Steve Keen, a professor at the University of Western Sydney, argue that even an infinitesimal amount of market power can allow a firm to produce a profit and that the absence of economic profit in an industry, or even merely that some production occurs at a loss, in and of itself constitutes a barrier to entry.
In a single-goods case, a positive economic profit happens when the firm's average cost is less than the price of the product or service at the profit-maximizing output. The economic profit is equal to the quantity of output multiplied by the difference between the price and the average cost.
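A minimal numeric restatement of that formula, with hypothetical figures:

```python
# Economic profit = quantity * (price - average cost) at the profit-maximizing output.
quantity = 1_000
price = 12.0
average_cost = 10.0          # includes normal profit (implicit costs)

economic_profit = quantity * (price - average_cost)
print(economic_profit)       # 2000.0 -> positive only because price exceeds average cost
```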
Often, governments will try to intervene in uncompetitive markets to make them more competitive. Antitrust (US) or competition (elsewhere) laws were created to prevent powerful firms from using their economic power to artificially create the barriers to entry they need to protect their economic profits. This includes the use of predatory pricing toward smaller competitors. For example, in the United States, Microsoft Corporation was initially convicted of breaking Anti-Trust Law and engaging in anti-competitive behavior in order to form one such barrier in United States v. Microsoft; after a successful appeal on technical grounds, Microsoft agreed to a settlement with the Department of Justice in which they were faced with stringent oversight procedures and explicit requirements designed to prevent this predatory behaviour. With lower barriers, new firms can enter the market again, making the long run equilibrium much more like that of a competitive industry, with no economic profit for firms.
If a government feels it is impractical to have a competitive market – such as in the case of a natural monopoly – it will sometimes try to regulate the existing uncompetitive market by controlling the price firms charge for their product. For example, the old AT&T (regulated) monopoly, which existed before the courts ordered its breakup, had to get government approval to raise its prices. The government examined the monopoly's costs, and determined whether or not the monopoly should be able raise its price and if the government felt that the cost did not justify a higher price, it rejected the monopoly's application for a higher price. Although a regulated firm will not have an economic profit as large as it would in an unregulated situation, it can still make profits well above a competitive firm in a truly competitive market.
In a perfectly competitive market, the demand curve facing a firm is perfectly elastic.
As mentioned above, the perfect competition model, if interpreted as applying also to short-period or very-short-period behaviour, is approximated only by markets of homogeneous products produced and purchased by very many sellers and buyers, usually organized markets for agricultural products or raw materials. In real-world markets, assumptions such as perfect information cannot be verified and are only approximated in organized double-auction markets where most agents wait and observe the behaviour of prices before deciding to exchange (but in the long-period interpretation perfect information is not necessary, the analysis only aims at determining the average around which market prices gravitate, and for gravitation to operate one does not need perfect information).
In the absence of externalities and public goods, perfectly competitive equilibria are Pareto-efficient, i.e. no improvement in the utility of a consumer is possible without a worsening of the utility of some other consumer. This is called the First Theorem of Welfare Economics. The basic reason is that no productive factor with a non-zero marginal product is left unutilized, and the units of each factor are so allocated as to yield the same indirect marginal utility in all uses, a basic efficiency condition (if this indirect marginal utility were higher in one use than in other ones, a Pareto improvement could be achieved by transferring a small amount of the factor to the use where it yields a higher marginal utility).
A simple proof assuming differentiable utility functions and production functions is the following. Let w_j be the 'price' (the rental) of a certain factor j, let MP_j1 and MP_j2 be its marginal product in the production of goods 1 and 2, and let p_1 and p_2 be these goods' prices. In equilibrium these prices must equal the respective marginal costs MC_1 and MC_2; remember that marginal cost equals factor 'price' divided by factor marginal productivity (because increasing the production of a good by one very small unit through an increase of the employment of factor j requires increasing the factor employment by 1/MP_ji and thus increasing the cost by w_j/MP_ji, and through the condition of cost minimization that marginal products must be proportional to factor 'prices' it can be shown that the cost increase is the same if the output increase is obtained by optimally varying all factors). Optimal factor employment by a price-taking firm requires equality of factor rental and factor marginal revenue product, w_j = p_i·MP_ji, so we obtain p_1 = MC_1 = w_j/MP_j1 and p_2 = MC_2 = w_j/MP_j2.
Now choose any consumer purchasing both goods, and measure his utility in such units that in equilibrium his marginal utility of money (the increase in utility due to the last unit of money spent on each good), MU_1/p_1 = MU_2/p_2, is 1. Then p_1 = MU_1 and p_2 = MU_2. The indirect marginal utility of the factor is the increase in the utility of our consumer achieved by an increase in the employment of the factor by one (very small) unit; this increase in utility through allocating the small increase in factor utilization to good 1 is MP_j1·MU_1 = MP_j1·p_1 = w_j, and through allocating it to good 2 it is MP_j2·MU_2 = MP_j2·p_2 = w_j again. With our choice of units the marginal utility of the amount of the factor consumed directly by the optimizing consumer is again w_j, so the amount supplied of the factor too satisfies the condition of optimal allocation.
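As a quick numeric check of this argument (all numbers invented for illustration): pick a factor rental w_j and marginal products in the two goods, derive the price-taking prices p_i = w_j / MP_ji, and confirm that the indirect marginal utility of an extra unit of the factor is the same in both uses.

```python
# Numeric check of the factor-allocation condition, with hypothetical numbers.
w_j = 20.0                 # rental ('price') of factor j
mp_j1, mp_j2 = 4.0, 2.0    # marginal products of factor j in goods 1 and 2

# Price taking implies price = marginal cost = w_j / MP_ji for each good.
p1 = w_j / mp_j1           # 5.0
p2 = w_j / mp_j2           # 10.0

# Choose utility units so the marginal utility of money is 1; then MU_i = p_i.
mu1, mu2 = p1, p2

# Indirect marginal utility of one extra (small) unit of the factor:
via_good_1 = mp_j1 * mu1   # allocating it to good 1
via_good_2 = mp_j2 * mu2   # allocating it to good 2

# Both equal w_j, so no reallocation of the factor can raise utility.
print(via_good_1, via_good_2, w_j)   # 20.0 20.0 20.0
```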
Monopoly violates this optimal allocation condition, because in a monopolized industry market price is above marginal cost, and this means that factors are underutilized in the monopolized industry, they have a higher indirect marginal utility than in their uses in competitive industries. Of course, this theorem is considered irrelevant by economists who do not believe that general equilibrium theory correctly predicts the functioning of market economies; but it is given great importance by neoclassical economists and it is the theoretical reason given by them for combating monopolies and for antitrust legislation.
In contrast to a monopoly or oligopoly, in perfect competition it is impossible for a firm to earn economic profit in the long run, which is to say that a firm cannot make any more money than is necessary to cover its economic costs. In order not to misinterpret this zero-long-run-profits thesis, it must be remembered that the term 'profit' is used in different ways: in the neoclassical sense it means revenue in excess of all costs, including the normal return on the owner's capital and effort (normal profit), whereas in the classical sense it includes the return on the capital advanced, i.e., interest.
Thus, if one leaves aside risk coverage for simplicity, the neoclassical zero-long-run-profit thesis would be re-expressed in classical parlance as profits coinciding with interest in the long period (i.e. the rate of profit tending to coincide with the rate of interest). Profits in the classical meaning do not necessarily disappear in the long period but tend to normal profit. With this terminology, if a firm is earning abnormal profit in the short term, this will act as a trigger for other firms to enter the market. As other firms enter the market, the market supply curve will shift out, causing prices to fall. Existing firms will react to this lower price by adjusting their capital stock downward. This adjustment will cause their marginal cost to shift to the left causing the market supply curve to shift inward. However, the net effect of entry by new firms and adjustment by existing firms will be to shift the supply curve outward. The market price will be driven down until all firms are earning normal profit only.
It is important to note that perfect competition is a sufficient condition for allocative and productive efficiency, but it is not a necessary condition. Laboratory experiments in which participants have significant price setting power and little or no information about their counterparts consistently produce efficient results given the proper trading institutions.
In the short run, a firm operating at a loss [R < TC (revenue less than total cost) or P < ATC (price less than unit cost)] must decide whether to continue to operate or temporarily shut down. The shutdown rule states "in the short run a firm should continue to operate if price exceeds average variable costs". Restated, the rule is that for a firm to continue producing in the short run it must earn sufficient revenue to cover its variable costs. The rationale for the rule is straightforward: by shutting down a firm avoids all variable costs. However, the firm must still pay fixed costs. Because fixed costs must be paid regardless of whether a firm operates, they should not be considered in deciding whether to produce or shut down. Thus in determining whether to shut down, a firm should compare total revenue to total variable costs (VC) rather than total costs (FC + VC). If the revenue the firm is receiving is greater than its total variable cost (R > VC), then the firm is covering all variable costs and there is additional revenue ("contribution"), which can be applied to fixed costs. (The size of the fixed costs is irrelevant, as it is a sunk cost. The same consideration applies whether fixed costs are one dollar or one million dollars.) On the other hand, if VC > R then the firm is not covering its production costs and it should immediately shut down. The rule is conventionally stated in terms of price (average revenue) and average variable costs. The two statements are equivalent (dividing both sides of the inequality TR > TVC by Q gives P > AVC). If the firm decides to operate, it will continue to produce where marginal revenue equals marginal cost, because these conditions ensure not only profit maximization (loss minimization) but also maximum contribution.
Another way to state the rule is that a firm should compare the profits from operating to those realized if it shut down and select the option that produces the greater profit. A firm that is shut down is generating zero revenue and incurring no variable costs. However, the firm still has to pay fixed costs, so its profit equals the negative of its fixed costs, or −FC. An operating firm is generating revenue, incurring variable costs and paying fixed costs. The operating firm's profit is R − VC − FC. The firm should continue to operate if R − VC − FC ≥ −FC, which simplified is R ≥ VC. The difference between revenue, R, and variable costs, VC, is the contribution to fixed costs, and any contribution is better than none. Thus, if R ≥ VC then the firm should operate. If R < VC the firm should shut down.
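A compact sketch of that comparison, with hypothetical revenue and cost figures:

```python
# Short-run shutdown rule: operate if revenue covers variable costs (R >= VC),
# because fixed costs are incurred either way.
def short_run_decision(revenue: float, variable_cost: float, fixed_cost: float) -> str:
    profit_if_operating = revenue - variable_cost - fixed_cost
    profit_if_shut_down = -fixed_cost               # no revenue, but fixed costs remain
    if profit_if_operating >= profit_if_shut_down:  # equivalent to R >= VC
        return f"operate (profit {profit_if_operating:,.0f})"
    return f"shut down (profit {profit_if_shut_down:,.0f})"

print(short_run_decision(revenue=80_000, variable_cost=60_000, fixed_cost=30_000))
# -> operate: R > VC, so the 20,000 contribution offsets part of the fixed costs
print(short_run_decision(revenue=50_000, variable_cost=60_000, fixed_cost=30_000))
# -> shut down: R < VC, so operating would only deepen the loss
```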
A decision to shut down means that the firm is temporarily suspending production. It does not mean that the firm is going out of business (exiting the industry). If market conditions improve, and prices increase, the firm can resume production. Shutting down is a short-run decision. A firm that has shut down is not producing. The firm still retains its capital assets; however, the firm cannot leave the industry or avoid its fixed costs in the short run. Exit is a long-term decision. A firm that has exited an industry has avoided all commitments and freed all capital for use in more profitable enterprises.
However, a firm cannot continue to incur losses indefinitely. In the long run, the firm will have to earn sufficient revenue to cover all its expenses and must decide whether to continue in business or to leave the industry and pursue profits elsewhere. The long-run decision is based on the relationship of the price and long-run average costs. If P ≥ AC then the firm will not exit the industry. If P < AC, then the firm will exit the industry. These comparisons will be made after the firm has made the necessary and feasible long-term adjustments. In the long run a firm operates where marginal revenue equals long-run marginal costs.
The short-run (SR) supply curve for a perfectly competitive firm is the marginal cost (MC) curve at and above the shutdown point. Portions of the marginal cost curve below the shutdown point are not part of the SR supply curve because the firm is not producing any positive quantity in that range. Technically the SR supply curve is a discontinuous function composed of the segment of the MC curve at and above minimum of the average variable cost curve and a segment that runs on the vertical axis from the origin to but not including a point at the height of the minimum average variable cost.
Though there is no actual perfectly competitive market in the real world, a number of approximations exist:
1. A large auction of identical goods with all potential buyers and sellers present. By design, a stock exchange resembles this, not as a complete description (for no market may satisfy all requirements of the model) but as an approximation. The flaw in considering the stock exchange as an example of perfect competition is the fact that large institutional investors (e.g. investment banks) may single-handedly influence the market price. This, of course, violates the condition that "no one seller can influence market price".
2. Horse betting is also quite a close approximation. When placing bets, consumers can just look down the line to see who is offering the best odds, and so no one bookie can offer worse odds than those being offered by the market as a whole, since consumers will just go to another bookie. This makes the bookies price-takers. Furthermore, the product on offer is very homogeneous, with the only differences between individual bets being the pay-off and the horse. Of course, there is not an infinite number of bookies, and some barriers to entry exist, such as a license and the capital required to set up.
The use of the assumption of perfect competition as the foundation of price theory for product markets is often criticized as representing all agents as passive, thus removing the active attempts to increase one's welfare or profits by price undercutting, product design, advertising, innovation, activities that – the critics argue – characterize most industries and markets. These criticisms point to the frequent lack of realism of the assumptions of product homogeneity and impossibility to differentiate it, but apart from this, the accusation of passivity appears correct only for short-period or very-short-period analyses, in long-period analyses the inability of price to diverge from the natural or long-period price is due to active reactions of entry or exit.
Some economists have a different kind of criticism concerning the perfect competition model. They are not criticizing the price-taker assumption because it makes economic agents too "passive", but because it then raises the question of who sets the prices. Indeed, if everyone is a price taker, there is the need for a benevolent planner who gives and sets the prices; in other words, there is a need for a "price maker". Therefore, the perfect competition model is appropriate not for describing a decentralized "market" economy but a centralized one. This in turn means that such a model has more to do with communism than capitalism.
Another frequent criticism is that it is often not true that in the short run differences between supply and demand cause changes in price; especially in manufacturing, the more common behaviour is alteration of production without nearly any alteration of price.
The critics of the assumption of perfect competition in product markets seldom question the basic neoclassical view of the working of market economies for this reason. The Austrian School insists strongly on this criticism, and yet the neoclassical view of the working of market economies as fundamentally efficient, reflecting consumer choices and assigning to each agent his contribution to social welfare, is esteemed to be fundamentally correct. Some non-neoclassical schools, like Post-Keynesians, reject the neoclassical approach to value and distribution, but not because of their rejection of perfect competition as a reasonable approximation to the working of most product markets; the reasons for rejection of the neoclassical 'vision' are different views of the determinants of income distribution and of aggregated demand.
In particular, the rejection of perfect competition does not generally entail the rejection of free competition as characterizing most product markets; indeed it has been argued that competition is stronger nowadays than in 19th century capitalism, owing to the increasing capacity of big conglomerate firms to enter any industry: therefore the classical idea of a tendency toward a uniform rate of return on investment in all industries owing to free entry is even more valid today; and the reason why General Motors, Exxon or Nestlé do not enter the computers or pharmaceutical industries is not insurmountable barriers to entry but rather that the rate of return in the latter industries is already sufficiently in line with the average rate of return elsewhere as not to justify entry. On this few economists, it would seem, would disagree, even among the neoclassical ones. Thus when the issue is normal, or long-period, product prices, differences on the validity of the perfect competition assumption do not appear to imply important differences on the existence or not of a tendency of rates of return toward uniformity as long as entry is possible, and what is found fundamentally lacking in the perfect competition model is the absence of marketing expenses and innovation as causes of costs that do enter normal average cost.
The issue is different with respect to factor markets. Here the acceptance or denial of perfect competition in labour markets does make a big difference to the view of the working of market economies. One must distinguish neoclassical from non-neoclassical economists. For the former, absence of perfect competition in labour markets, e.g. due to the existence of trade unions, impedes the smooth working of competition, which if left free to operate would cause a decrease of wages as long as there were unemployment, and would finally ensure the full employment of labour: labour unemployment is due to absence of perfect competition in labour markets. Most non-neoclassical economists deny that a full flexibility of wages would ensure the full employment of labour and find a stickiness of wages an indispensable component of a market economy, without which the economy would lack the regularity and persistence indispensable to its smooth working. This was, for example, John Maynard Keynes's opinion. | https://db0nus869y26v.cloudfront.net/en/Perfect_competition |
International Workshop on "Advances in Personalized Healthcare Services, Wearable Mobile Monitoring, and Social Media Pervasive Technologies"
KEY DATES
Submission: 10 August (17 August for late papers)
Notifications: 10 September
Workshop: 14-16 November
Scope
Modern mobile healthcare systems, supported by information and communication technologies, provide solutions for improving illness prevention, facilitating chronic disease management, empowering patients, enabling personalization of care, improving the productivity of healthcare provisioning, and improving the utilization of healthcare by enabling the management of diseases outside institutions, as well as encouraging citizens to remain healthy.
Personalized healthcare emphasizes the use of information about an individual patient to select or optimize that patient's preventative and therapeutic care and wellbeing. Modern healthcare solutions emphasize the need to empower citizens to manage their own health and disease, and include smart medical sensors, remote eHealth monitoring, smartphone-enabled data aggregation, medical awareness and analysis, and context-aware assistive living technologies.
In this context we invite researchers, physicians, computer scientists and engineers to present their experience and research work.
Topics covered (but not limited to):
- Mobile and wireless technologies for healthcare delivery, enhanced monitoring and self-management of disease and health
- Mobile applications for health, wellbeing disease management and disease prevention
- Internet of Things for healthcare in smart monitoring and diverse environments (i.e hospitals, home, Assisted living);
- Wireless, mobile and wearable devices for pervasive healthcare
- Patient monitoring in diverse environments (hospitals, nursing, assisted living)
- Wireless access in ubiquitous systems for healthcare professionals
- Interventions and limitations and clinical acceptability of modern technologies with respect to applications for the digital patient in terms of supporting personalised medicine
- Legal and ethical aspects of Wearable, mobile monitoring, outdoor and home based telemedicine/ehealth applications
- Healthcare telemetry and telemedicine, remote diagnosis and patient management
- Service-oriented middleware architecture for medical device connectivity in personal health monitoring applications
- Technologies for the management and support of chronic diseases and the ageing population
- Innovative communication and mobile technologies to support data collection and access, sharing, search and reasoning
- Social media pervasive technologies for healthcare
- Data mining, machine learning and signal processing;
- Security for ehealth and modern healthcare mobile services
- Biometrics, and privacy-preserving mechanisms for individualized mHealth;
The proceedings of the APHS 2016 workshop will be published together with the proceedings of MobiHealth 2016 by Springer and made available through LNICST. Papers must be formatted using the guidelines from the Author's Kit.
Organizers: | http://archive.mobihealth.name/2016/show/aphs |
Provider Compliance Programs
The proposed regulation repeals the current Provider Compliance Program regulatory requirements and replaces them with a new Subpart 521-1, which imposes obligations on “required providers” to adopt and implement effective compliance programs. The proposed Subpart 521-1 defines “required providers,” consistent with existing law, as any entity subject to Article 28 or 36 of the PHL, Article 16 or 31 of the MHL, MMCOs (including managed long-term care plans (MLTCs)) and any other entity for which Medicaid is a substantial portion of its business. The current Provider Compliance Program regulations, unlike the proposed Subpart 521-1, do not apply to MMCOs (including MLTCs).
Additionally, the proposed Subpart 521-1 includes several new requirements that do not appear in existing regulations, including:
- 10-year document retention requirements for MMCOs and a six-year document retention period for all other “required providers.”
- All compliance program requirements expressly apply to the “required provider’s” contractors, agents, subcontractors, and independent contractors.
- A new “risk area” — contractors, subcontractors, agents and independent contractor oversight — must be considered by all “required providers,” and a number of additional “risk areas” must also be considered by MMCOs (including MLTCs).
- Providers that are “required providers” must submit a compliance certification to each MMCO for which they are a participating provider upon execution of the MMCO’s participating provider agreement and annually thereafter (and the submission method shall be described on the MMCO’s website).
- “Required providers” must comply with OMIG’s regulations regarding Medicaid overpayments (see discussion in the last section below).
- The compliance officer’s duties, including his or her reporting structure, are specifically enumerated. (Notably, under the proposed regulations, the compliance officer is no longer required to be an “employee” of the “required provider.”)
- “Required providers” must establish and implement an effective system for the routine monitoring and identification of compliance risks, including the types of audits the provider must undertake and the frequency of such audits.
- “Required providers” must establish and maintain procedures for responding to and addressing compliance issues as they are raised.
MMCO Fraud, Waste and Abuse Programs
The proposed regulations also add a new Subpart 521-2, which, consistent with current law, requires MMCOs (including MLTCs) to implement Medicaid fraud, waste, and abuse programs. Although similar obligations are imposed by existing regulations (i.e., 10 NYCRR 98-1.21 and 11 NYCRR 86.6), these existing regulations either exclude MLTCs or apply only to MLTCs that have an enrolled population of 10,000 or more. The proposed Subpart 521-2, however, applies to all MLTCs regardless of enrollment, and further requires the establishment of a dedicated full-time Special Investigation Unit (with details about staffing, reporting and work plan requirements) if the MMCO has an enrolled population of 1,000 or more.
Some of the more significant requirements in proposed Subpart 521-2 that do not appear in existing regulations, include:
- Audit and investigation requirements, including the scope of such audits and investigations and the general requirements for conducting them.
- Obligations to report cases of fraud, waste and abuse to OMIG in accordance with the MMCO’s contract with the Department of Health.
- Obligation to file a fraud, waste and abuse prevention plan with OMIG (which can be a plan that was prepared pursuant to another state regulation as long as all of the requirements of Subpart 521-2 are included in the submission).
Medicaid Overpayments
The proposed Subpart 521-3 adds provisions, consistent with existing law, that require “persons” to report, explain and return Medicaid overpayments to OMIG. The term “person” includes home care agencies, hospices and MMCOs (including MLTCs and their contractors and participating providers) and virtually any other provider or supplier that is enrolled in the Medicaid program.
Generally, the proposed Subpart 521-3 follows existing law, requiring all “persons” to report and return to OMIG (through its Self-Disclosure Program) all Medicaid overpayments (plus applicable interest) by the later of: the date which is 60 days after the date on which the overpayment was identified; or the date any corresponding cost report is due, if applicable. A “person” has identified an overpayment, according to both existing law and the proposed regulation, when such a person “has or should have through the exercise of reasonable diligence, determined that they received an overpayment and quantified the amount of the overpayment.”
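For readers who want to sanity-check the timing rule, the following is a minimal Python sketch of the "later of" calculation described above; it is illustrative only, not legal guidance, and the function name and dates are hypothetical.

    from datetime import date, timedelta
    from typing import Optional

    def report_and_return_deadline(identified_on: date,
                                   cost_report_due: Optional[date] = None) -> date:
        # "Later of" rule sketched from the text above: 60 days after the
        # overpayment was identified, or the corresponding cost report due
        # date, whichever is later (if a cost report applies at all).
        sixty_days_after = identified_on + timedelta(days=60)
        if cost_report_due is None:
            return sixty_days_after
        return max(sixty_days_after, cost_report_due)

    # Hypothetical dates for illustration only.
    print(report_and_return_deadline(date(2023, 3, 1)))                     # 2023-04-30
    print(report_and_return_deadline(date(2023, 3, 1), date(2023, 6, 30)))  # 2023-06-30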
OMIG’s proposed Part 521 regulation, once published in the State Register, will be subject to a 60-day public comment period. These proposed regulations represent the next phase of OMIG’s efforts to implement the provisions of Chapter 56 of the Laws of 2020, Part QQ, which focuses on provider compliance and greatly increased OMIG’s authority to investigate and sanction instances of non-compliance. For example, Part QQ amended Social Services Law § 145-b to allow OMIG to impose civil monetary penalties of up to $15,000 per day for a failure to grant timely access to facilities and records, as well as making the existence of the compliance programs addressed by the proposed Subpart 521-1 a “condition of payment” from Medicaid. | https://24kyc.com/omig-to-adopt-new-fraud-waste-and-abuse-prevention-requirements-lippes-mathias-llp/ |
The Expanded Use of Research in the Workplace
Nowadays, we live in a society where innovation and information are predominantly valued because of continuous knowledge- and skill-based competition. All professional and academic institutions have an essential role: to promote research that supports economic growth and development while protecting the environment. The type of research used may vary according to the needs of the researcher, provided it yields strong data from which findings and conclusions can be drawn.
According to the University Research Council of Nipissing University, research is broadly defined as any original and systematic investigation undertaken in order to increase knowledge and understanding and to establish facts and principles. Research can also be defined as the systematic investigation of a problem, issue or question which increases knowledge and understanding within a field of specialization. It encompasses the creation of ideas and the generation of knowledge that lead to new and substantially improved insights and/or the development of new materials, devices, products and processes. Research should produce significant results and sound analysis that generate theories and hypotheses and benefit every intellectual attempt to analyze facts and phenomena. This search for individual facts or data requires an open-ended question, and the data are drawn together through experiments, methodologies and surveys. Research should clearly state the way a term, word or phrase is used in the study. Tables and graphs are used to add clarity to the presentation of results, and tables are summarized and simplified so that they can be integrated into the discussion. Although the goal is to secure the best data obtainable through the most refined techniques available, the researcher needs to point out frankly the limitations in sources and procedures so that the data can be judged for reliability and accuracy. Research can also include the systematic identification, location and analysis of documents containing information related to the research problem; the documents gathered provide the findings and conclusions of past investigations, which may relate to the researcher’s own findings and conclusions. The research methodology is very important because it presents the methods of the study, the instruments to be used, the procedures for preparing and administering those instruments, and the treatment of the data. The participation of respondents serves as one of the most significant sources of data in research, so the respondents and the instruments to be used should be clearly explained and presented. Research presents findings and discussion; the presentation of data and analysis is integrated with their interpretation, and opinions or discussion that go beyond the data are to be avoided. In any research conducted, the important findings are highlighted and presented meticulously. The researcher may also include some open thinking in the recommendations, as long as it is relevant to the problems and findings.
There are three types of research: quantitative, qualitative and mixed research. Quantitative research refers to an inquiry into an identified problem, based on testing a theory, measured with numbers, and analyzed using statistical techniques. The basic structural elements of quantitative research are variables, which take on different values or categories, and their opposites, constants, which cannot vary and take only a single value or category (Johnson 2007). The researcher who uses quantitative methods remains distant and independent of what is being researched, and the research should be value free, meaning that the values of the researcher do not interfere with, or become part of, the research. The goal of quantitative research is to produce generalizations that allow the researcher to predict, explain, and understand some phenomenon.
Qualitative research refers to inquiry that has the goal of understanding a social or human problem from various perspectives. In qualitative methods, the researcher considers the process of investigating individuals; the researcher interacts with those he or she studies and actively works to reduce the distance between the researcher and those being researched. Qualitative research aims to discover patterns or theories that explain a phenomenon of interest. The data collected for qualitative research should come from field notes, one-to-one or focus-group interviews, and content or historical analysis; thus the qualitative method is well suited to studying social processes and phenomena. However, the qualitative method can be weak where interviews are relied on heavily, because reliability is low and the study can appear subjective. The researcher is immersed in a particular place or setting for the purpose of gathering detailed data. The qualitative method is less interested in variables; extended accounts of feelings and details are acquired through the experience of the respondents. It does not matter how many respondents talked about each theme, as long as the set of ideas, patterns and practices discussed in the interviews is presented. Success in obtaining good data and information in qualitative work depends heavily on the logical ordering of the interview guide, which can follow a conversational or chronological order. During an interview, the first set of questions should be easy and less sensitive, and similar topics should be grouped together. Open-ended questions can also contribute to strengthening the study. Reliability and validity are conceptualized by qualitative researchers as trustworthiness, rigor and quality in the qualitative paradigm, and this association shapes the manner in which validity and reliability are achieved. To eliminate bias and increase the truthfulness of a proposition about some social phenomenon, qualitative researchers usually use triangulation (Denzin 1978). Triangulation is a validity procedure in which researchers look for convergence among multiple and different sources of information to form themes and better understand a topic by studying it from several angles simultaneously.
The quantitative research method allows the researcher to become acquainted with the problem or concept to be studied, and possibly to generate hypotheses to be tested (Golafshani 2003). Glesne and Peshkin, authors of Becoming Qualitative Researchers: An Introduction, note that the positivist or scientific theory leads us to observe the world as made up of observable, measurable facts (cited in Golafshani 2003). All qualitative researchers search for clarification of social meaning; however, some use qualitative methods to complement or enhance the interpretation of numerical data, in which case the study becomes mixed research. Quantitative and qualitative inquiry represent two reasonable ways to examine leadership; Everet and Louis (1981) clarify this as "inquiry from the outside," often implemented through quantitative studies, and "inquiry from the inside," implemented through qualitative studies (cited in Ospina 2004). Researchers perceive qualitative research as an inductive approach for building theories that must then be tested deductively with the help of quantitative models. The qualitative method is also an approach to inquiry that stands on its own and best allows a researcher to attain 'a glimpse of the world' (Ospina 2004). Mixed research is a common type of research in which quantitative and qualitative research methods and techniques are combined in one study. The researcher may conduct a quantitative experiment and afterwards conduct qualitative interviews with the participants to see how they viewed the experiment and whether they agreed with the results. The main goal of a researcher who conducts mixed research is to follow the fundamental principle of mixed research: quantitative and qualitative methods and procedures should be mixed in such a way that the resulting combination has complementary strengths and non-overlapping weaknesses (Johnson 2007). According to Cristian Mihai Adomnitei, Minister of Education, Research and Youth of Romania, from a business perspective many countries are competing to attract successful businesses and industries, which underlines the significance of research and innovation in economic and social development. Businesses have realized that they cannot rely solely on their own resources but need to work in partnership and develop technologies. Research outcomes can provide additional knowledge, support the maintenance or change of programs or services, offer a means of addressing the needs of a particular program or service and, most importantly, contribute to continuous policy development. According to the University of New South Wales, institutions that provide services strongly emphasize evaluation and quality improvement. Evaluation refers to the process of judging the value of an intervention by systematically gathering information in order to make more informed decisions. Attention has been directed toward the decision-making process; this has been characterized in several ways, but basically the process entails the following steps: (1) recognizing a problem or opportunity, (2) exploring alternative courses of action, (3) evaluating the alternatives, (4) choosing and implementing a course of action, and (5) assessing the results of the decision (Cox III 2007). Many decision makers believe that relevance, accuracy and economy of information should not be over-emphasized in the decision-making process.
The process of decision making also involves making forecasts of future events because of competitive pressures. The importance of research in business lies chiefly in the evaluation and improvement of existing policies and structures, improving the quality and efficiency of the business. Research for business decision making includes the examination of business strategies, company policies, and the products and services offered, to determine whether there is a need for change. Every decision has an impact that may lead to substantial harm to a large number of people. Research is very helpful because it gives a decision maker useful rules or guidelines for assessing the quality and utility of the evidence gathered. However, managers and administrators should not use invalid and unreliable evidence simply because it is easily available.
References
Aalborg University (17 August 2005). The Importance of Research for a Modern University. Retrieved December 12, 2008, from http://www.icetcs.ru.is/luca/slides/importance-of-research.pdf
Congdon, J. D., Dunham, A. E. (1999). Defining the Beginning: The Importance of Research Design. IUCN/SSC Marine Turtle Specialist Group Publication, 4, 1-5.
Denzin, N. K. (1978). The research act: A theoretical introduction to sociological methods. New York: McGraw-Hill.
Golafshani, N. (2003). Understanding Reliability and Validity in Qualitative Research. The Qualitative Report, 8, 597-607.
Humboldt State University (2007). Qualitative Research. Retrieved December 12, 2008, from http://www.humboldt.edu/~mv23/qualresearch.pdf
New York University (2004). Qualitative Research. Retrieved December 12, 2008, from http://wagner.nyu.edu/leadership/publications/files/Qualitative_Research.pdf
Nipissing University (2008). Definition of Research for the Purposes of Application to the University Research Council. Retrieved December 12, 2008, from www.nipissingu.ca/research/downloads/ResearchDEFn-ApplicationstoURC_000.doc
Polster, C. (2007). The nature and implications of the growing importance of research grants to Canadian universities and academics. Springer Science Business Media B.V., 53, 599–622. | https://lagas.org/the-expanded-use-of-research-in-the-workplace/
Each mental event or voluntary motor act is the result of the simultaneous activity of large groups of neurons in several areas of the brain far from each other.
How do these groups of nerve cells manage to instantly and selectively coordinate their electrical activity?
Discoveries by researchers working with the European Human Brain Project at Universidad Autónoma de Madrid (UAM) and Universidad Politécnica de Madrid, together with collaborators from Jülich Research Centre in Germany, now shed new light on the cellular basis of this process.
The team showed how single neurons, through extremely long branched axonal connections to different areas of the brain, can act as “coordinators” by selectively and flexibly combining the activity of different neuronal groups at each moment – similar to the conductor of an orchestra.
The results, which were obtained from mice, were published this week in the Journal of Neuroscience.
Each neuron in the brain has a long, branched extension called an axon, through which it sends electrical signals to thousands of other neurons.
Although they can be hundreds of times thinner than a human hair, axons can be more than a meter long and branch out selectively to reach several points in the brain, and even the spinal cord.
Through its axon, neurons located in areas of the brain distant from each other manage to establish direct contact.
At the contact points, signals pass from one neuron to another through specialized structures called synapses and electrochemical mechanisms mediated by different substances, known as neurotransmitters.
The propagation of signals through the synapses produces simultaneous effects on the neurons that receive contacts from the same axon.
From a functional point of view, synapses can be interpreted as signal filters of variable amplitude and time profile, whose valence (excitation/inhibition) can differ.
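As a loose numerical illustration of this "synapse as signal filter" picture, the Python sketch below convolves an arbitrary spike train with exponential kernels of different amplitude, time constant and valence; none of the numbers correspond to the synapses measured in the study described next.

    import numpy as np

    dt, duration = 0.1, 200.0                 # time step and window, in ms
    t = np.arange(0.0, duration, dt)

    # Toy presynaptic spike train; spike times (ms) are arbitrary.
    spikes = np.zeros_like(t)
    spikes[np.searchsorted(t, [20.0, 60.0, 65.0, 120.0])] = 1.0

    def synaptic_filter(amplitude, tau_ms, sign):
        # Exponential kernel: 'amplitude' sets the gain, 'tau_ms' the time
        # profile, and 'sign' the valence (+1 excitatory, -1 inhibitory).
        kernel_t = np.arange(0.0, 10.0 * tau_ms, dt)
        kernel = sign * amplitude * np.exp(-kernel_t / tau_ms)
        return np.convolve(spikes, kernel)[: len(t)] * dt

    fast_excitatory = synaptic_filter(amplitude=1.0, tau_ms=2.0, sign=+1)
    slow_inhibitory = synaptic_filter(amplitude=0.4, tau_ms=15.0, sign=-1)
    postsynaptic_input = fast_excitatory + slow_inhibitory   # net filtered signal
    print(postsynaptic_input.max(), postsynaptic_input.min())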
The study directed by Prof. Francisco Clascá, from the Department of Anatomy, Histology and Neuroscience of the Faculty of Medicine of the UAM, and carried out on mice, focused on the axons of the neurons that connect the thalamus with the cerebral cortex.
The thalamus is located in the center of the brain and acts as a large communication node between different regions.
The axons of the thalamus neurons innervate all areas of the cerebral cortex in an orderly and selective manner, forming excitatory synapses mediated by the neurotransmitter glutamate.
Many of these axons branch out to selectively innervate two or more areas of the cortex.
Figure: 3D reconstruction of synapses. Image credit: Javier Rodriguez-Moreno et al.
Using advanced techniques of three-dimensional electron microscopy, performed in collaboration with Prof. Joachim Lübke at Forschungszentrum Jülich in Germany, and tagging of individual axons, the researchers were able to measure and compare the structure of synapses formed by branches of the same thalamic axon in two distant brain areas.
The study revealed important differences, directly related to the intensity and frequency with which the synapses can transmit signals, as well as in the type of cells that the axon contacts in each area.
In a study published by the researchers last year, they had already shown that the signals reach the two areas simultaneously, but produce different effects.
The demonstration that a single neuron, through its branched axon is capable of simultaneously producing different effects in separate areas of the cerebral cortex reveals an unsuspected complexity in the brain’s circuits.
These cells could thus act as “coordinators” by selectively and flexibly combining the activity of different neuronal groups at each moment – similar to the conductor of an orchestra. Knowing more about these cells is important to model the computation performed by the large neuronal networks of the brain and to understand their alteration in brain pathologies.
Neural oscillations, or “Brainwaves,” are fluctuations in activity shared among neuronal populations (evident as extracellular voltage fluctuations; Jia and Kohn, 2011) and were first discovered in the late 19th century in animals (Beck, 1890; Coenen et al., 2014).
The first electroencephalogram (EEG) was performed by Berger in the early 20th century, revealing Alpha waves (Berger, 1929), which led to a volley of research into these waves shortly after. Electromagnetic or EEG synchronization between brain areas indicates functional connectivity between those areas (Ivanitsky et al., 1999).
Even though such oscillations are known to be a component of many cognitive functions such as feature binding, neural communication (Fries, 2005), perception (Gray et al., 1989), and information processing (Gupta et al., 2016), it is still debated whether oscillations contribute to these processes or are merely an epiphenomenon (Koepsell et al., 2010). Various frequency bands of oscillations from very slow (<0.1 Hz) to very fast (600 Hz) have been shown to each be correlated to distinct aspects of mental activity (Stookey et al., 1941; Schnitzler and Gross, 2005; Fingelkurts and Fingelkurts, 2010a), and analysis of the EEG can be used to determine one’s level and potentially state of consciousness (Cvetkovic and Cosic, 2011; Fingelkurts et al., 2013).
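As a rough illustration of what "frequency bands" means in practice, the sketch below splits a signal into conventional EEG bands with band-pass filters; the band edges, sampling rate, and the noise standing in for a real EEG trace are all assumptions made for the example, not values taken from the studies cited above.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 250  # assumed sampling rate in Hz

    # Approximate conventional EEG bands (Hz); exact boundaries vary by study.
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 80)}

    def bandpass(signal, low, high, order=4):
        # Zero-phase Butterworth band-pass filter for one frequency band.
        sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, signal)

    # Stand-in "EEG" trace: random noise instead of a real recording.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(30 * fs)

    band_signals = {name: bandpass(eeg, lo, hi) for name, (lo, hi) in bands.items()}
    band_power = {name: float(np.mean(sig ** 2)) for name, sig in band_signals.items()}
    print(band_power)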
Neural oscillations provide a powerful means to encode and transfer information in space and time (Cheong and Levchenko, 2010). They are the most efficient mechanism to transfer such information reciprocally between neural assemblies (Buzsáki and Draguhn, 2004).
They exist at multiple spatial levels from microscopic to macroscopic which can arise from mechanisms within individual neurons as well as interactions between them (Haken, 1996), all of which are a component of the bioelectric structure we describe.
The brainwaves observed on EEG are in fact mesoscopic or macroscopic oscillations (Freeman, 2003). Microscopic oscillations are not as easily detectable. Subthreshold membrane potentials are a major microscopic component of these layers that occur in frequencies observed in an EEG.
Just as action potentials and various types and patterns of synaptic connections serve as a means of information representation, computation, and transmission, subthreshold membrane potential oscillations provide a means for individual neurons to be a part of a collective whole (Fingelkurts et al., 2010).
Such intrinsic single cell oscillations form the basis for frequencies of mesoscopic activity generated by the summed dendritic activity of many neurons within a neural assembly which can be viewed in an EEG (Başar, 2008).
Neuronal assemblies can, in turn, synchronize with other adjacent or distant assemblies to form stronger and more global macroscopic oscillations responsible for the greater neural electromagnetic field (Jirsa and Kelso, 2000).
The emergent characteristic of large-scale bioelectric activity provides a metastable bridge to global coherence needed for an integrated experience (Fingelkurts et al., 2010).
Brains are systems that never reach a truly steady-state, constantly changing in dynamic patterns (Freeman, 2007; Fingelkurts et al., 2009).
A concept of nonlinear dynamics, metastability in regards to the brain describes the local-global harmony of the brain which may be responsible for the emergence of consciousness; distinct functional modules coupled together via neural oscillations while still maintaining their intrinsic, independent behavior (Freeman and Holmes, 2005; Kelso and Tognoli, 2007; Fingelkurts et al., 2013).
There is thus competition in brain regions between the tendency to act autonomously and to cooperate macroscopically with other regions (Bressler and Kelso, 2001; Fingelkurts and Fingelkurts, 2001).
In this metastable mode of functioning, although there is competition between the stability of either tendency, these local and global tendencies can coexist (Kelso and Engstrøm, 2006).
Oscillations may be an optimal metastable mechanism as they provide a low-energy operation for local and distant communication which is lost in action potential signaling in distant axonal connections (Buzsáki and Draguhn, 2004). A relatively large brain with only axonal connections would have severe spatial and metabolic constraints (Knyazev, 2012).
According to the Default Space Theory of Consciousness and other prominent theories on consciousness, consciousness is an emergent phenomenon which arises as the virtual recreation or simulation of the environment and the individual’s relationship to it (Revonsuo, 2006; Fingelkurts et al., 2010; Metzinger, 2013; Jerath et al., 2015a). Metastability, oscillations, and consciousness have been extensively researched as a part of the operational architectonics theory of brain-mind (OA) in an attempt to neurophysiologically explain the integrated experience and mind.
The theory we propose here is in line with the OA argument that the virtual structure of conscious experience corresponds to, or is functionally isomorphic to, the structure or architecture of the brain’s electromagnetic field (Fingelkurts and Fingelkurts, 2001; Fingelkurts et al., 2009).
Functional isomorphism describes two systems as correlating in a way in which functional relations are always preserved regardless of the physical nature of either system (Shapiro, 2000).
For instance, a digital computer can be isomorphic to an analog one if the transitional relations among its physical states mirror those in the analog one (Putnam, 1975).
Thus, whatever the phenomenal constitution of consciousness is at a given time, it will be isomorphic to its neural correlate. OA explains in-depth how any phenomenal state/pattern is reflected appropriately to a neurophysiological state/pattern (Fingelkurts et al., 2007).
A major assumption and basis of this article is a fundamental of OA, that the phenomenal mind is isomorphic to the globally unified electromagnetic field of the brain which consists of a nested hierarchy of oscillatory activity (Fingelkurts et al., 2010).
In this article, we explore a potential implication of an aspect of this architecture that is neglected by most in EEG-based research (Fingelkurts and Fingelkurts, 2010a), that being integrative brain functions arise from the bioelectric architecture of the brain via multiple oscillations phasically superimposed upon one another based on frequency (Başar et al., 2004; Başar, 2006).
This idea that the true composition of the bioelectric structure consists of a concert of multiple superimposed oscillations is most often neglected as EEG analysis is mostly done by taking different frequency bands in isolation (Fingelkurts and Fingelkurts, 2010a).
Thus, the true bioelectric structure of one brain may be vastly different from another while still having identical averaged spectral band results (Fingelkurts and Fingelkurts, 2010a).
OA has described how at the core of the isomorphism between the neurophysiological organization of the brain and the informational organization of the phenomenal mind lies the “operation,” or the bioelectric processes occurring among the (potentially many) neural assemblies of the brain (Fingelkurts and Fingelkurts, 2005).
Complex operations of synchronized bioelectric activity among distributed neural assemblies, termed operational modules by OA, allow for metastability as the neural assemblies can do their own tasks while still be synchronized with greater and more abstract operations (Fingelkurts et al., 2009).
A potentially infinite nested hierarchy of operational modules, which are at the base level composed of basic operations within neural assemblies, may exist as the simplest modules can become synchronized or abstractly unionized with other modules to form a greater and more abstract module, which can be further unionized with other abstract modules all the way to the most macroscopic level of bioelectric activity proposed to be isomorphic to the integrated experience (Fingelkurts and Fingelkurts, 2005, 2006).
While frequency bands are often identified with distinct functions, some authors have discussed how oscillations of different bands may be grouped into intrinsic layers or “wave-sequences” (Steriade, 2006), or at least superimposed upon other spectrally distinct oscillations (Başar et al., 2004).
In this theoretical article, in contrast to the traditional view that the localization of higher and lower frequency activities are spatially distinct (Luo et al., 2014), we describe an organization of bioelectric cortical neurodynamics modeled as hierarchical “layers” of oscillatory frameworks differentiated by frequency which are not spatially distinct, but coexist in the same brain regions.
The lower layers (low frequency) represent more basic and widespread integrative activity, while the higher layers (high frequency) represent more complex and localized activity.
We thus form a further theoretical understanding on the organization of the global bioelectric architecture, referred to as a unified metastable continuum in OA (Fingelkurts et al., 2009), by describing the superimposition of such layers and its role in such a continuum.
Although divisions and dynamics between these layers may be complex in reality, in basic modeling of such architecture, each layer we describe can be thought of as an independent functional component of this continuum.
The higher layers, however, are dependent upon the lower ones to be a part of the global architecture, as they entrain upon them, just as the phenomenal isomorphic counterparts of the higher layers are dependent on the phenomenal isomorphic counterparts of the lower layers.
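A toy numerical picture of "layers superimposed by frequency" (not the authors' model) can be built by summing a slow, an intermediate, and a fast oscillation, with the fast layer's amplitude loosely tied to the slow layer's phase to mimic the entrainment described above; all frequencies and amplitudes are arbitrary.

    import numpy as np

    fs = 1000                      # samples per second (arbitrary choice)
    t = np.arange(0.0, 5.0, 1.0 / fs)

    # Three stand-in "layers" differentiated only by frequency.
    slow = 1.0 * np.sin(2 * np.pi * 0.8 * t)     # slow, widespread layer
    mid = 0.5 * np.sin(2 * np.pi * 6.0 * t)      # intermediate layer

    # Crude "entrainment": the fast layer's amplitude follows the slow
    # layer's phase, so fast activity rides on the slow scaffold.
    fast_envelope = 0.25 * (1.0 + np.cos(2 * np.pi * 0.8 * t)) / 2.0
    fast = fast_envelope * np.sin(2 * np.pi * 40.0 * t)

    # The "observed field" is modeled here as a simple superposition.
    field = slow + mid + fast
    print(field.shape, field[:3])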
The fundamental elements of oscillations we see heavily summated (approximately millions of neurons) on an EEG are the ionic current producing membrane potential activities of individual neurons; the dendritic and postsynaptic potentials (Klein and Thorne, 2006).
The activity of individual neurons consists of relatively simple electrical activity, and can thus be considered nonconscious in contrast to the coordinated conscious and unconscious bioelectric activity of neural assemblies which have a phenomenological ontology (Searle, 1992; Fingelkurts et al., 2010).
The phenomenal unity of human experience indicates that there must be some mechanism(s) to unify processes responsible for the many aspects of experience such as the variety of sensory modalities.
We agree with the metastable view that the synchronized operations of several neural assemblies that are integrated EEG spatial-temporal patterns allow for the global functional unity (Honey et al., 2007; Werner, 2009) needed for the integrated experience (Fingelkurts and Fingelkurts, 2005).
While consciousness has been suggested to be quantized (some states more conscious than others; Oizumi et al., 2014), we focus on the phenomenal qualities and contents of human experience in this article.
This article may be seen as a further development of our opinion article on this layered model in which we introduced three separate but highly interactive oscillatory frameworks (Jerath and Crawford, 2015).
We elucidate an updated model here by detailing each of these layers and exploring their nature in different mental states. In the introductory article, we described a base layer of slow oscillations maintained in part by the Default Mode Network (DMN) and cardiorespiratory activity.
The second layer, built upon the first, is a constraining emotional layer powered by the limbic system.
The highest layer consists of higher frequency activity among the elements of the corticothalamic network which creates higher cognitive and perceptual components of mind (Figure 1).
In this updated version of the model, we have separated the infra-slow oscillations and the Delta oscillations into two layers based on their physiological distinctions and focus on the spectral aspects of these layers rather than anatomical locations.
We heavily strengthen our perspective with supporting research and discuss how breathing plays a role in the organization of neural oscillations.
In addition to describing the spectral, layered hierarchical framework relative to a global bioelectric architecture, we further extend the oscillatory framework from the brain to neural and non-neural elements of the body.
The relationship between neural oscillations of the brain and activity of the body (largely autonomic) has been explored previously by ourselves and other authors. Coordination and communication between the autonomic bodily system and the brain have been demonstrated in several studies (Walker and Walker, 1983; Basar, 2010).
These links reveal the likely existence of bidirectional oscillatory links between organs of the body and the brain which may allow for the maintenance of survival functions such as body temperature (Achimowicz, 1992; Fingelkurts et al., 2011).
There is also a link between neural oscillations and the immune system of the body (Saphier et al., 1990; Rosenkranz et al., 2003).
In addition, support for the idea that respiration acts as an oscillatory scaffold in the brain is growing (Heck et al., 2016, 2017; Varga and Heck, 2017).
Research into this relationship between the brain and body has not explored how this relationship fits into the global architecture.
We suggest the body fits (largely respiratory elements) into this architecture and may act as an underlying coordinator of bioelectric neural activity.
We also suggest the bioelectric structure of the brain in a sense may be projected to or unified with the sensory receptors of the body.
Although the concept of a hierarchy of brain oscillations across space and time has been previously proposed by notable authors (Freeman, 1987; Lakatos et al., 2005; Knyazev, 2012; Buzsáki et al., 2013; Fingelkurts et al., 2014; Fingelkurts and Fingelkurts, 2017a), we model a hierarchy in a novel way based on frequency by contending that these superimposed spectral layers are isomorphic to superimposed aspects of phenomenal consciousness.
Isomorphism among electromagnetic structure and phenomenal structure has been described (Fingelkurts et al., 2009); however, here we describe an isomorphism between the superimposition of electromagnetic “layers” and the superimposition of various components or “layers” of the phenomenal mind.
In reality, the layers we detail in this article have significantly more functionality, more detailed operational processes, and blurred spectral borders; however, modeling an oscillatory spectral hierarchy that distinguishes groups of oscillatory networks, and how they superimpose in relation to the phenomenal mind, may further the understanding of the intrinsic and ubiquitous nature of oscillations in relation to psychology.
Source: | https://debuglies.com/2020/02/20/single-neurons-can-act-as-coordinators-by-selectively-combining-the-activity-of-different-neuronal-groups-at-each-moment/ |
Predictive Multiplicity in Classification [article] (2020, arXiv pre-print)
We introduce formal measures to evaluate the severity of predictive multiplicity and develop integer programming tools to compute them exactly for linear classification problems. ... Prediction problems often admit competing models that perform almost equally well. This effect challenges key assumptions in machine learning when competing models assign conflicting predictions. ... This research is supported in part by the National Science Foundation under Grants No. CAREER CIF-1845852 and by a Google Faculty Award. ... arXiv:1909.06677v4
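The snippet below is not the paper's integer-programming measures; it is only a rough empirical proxy, assuming scikit-learn and a synthetic dataset, that shows how two near-equally accurate classifiers can still assign conflicting predictions, which is the effect the abstract calls predictive multiplicity.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Two competing linear classifiers fit to the same training data.
    model_a = LogisticRegression(C=1.0, max_iter=1000).fit(X_tr, y_tr)
    model_b = LogisticRegression(C=0.01, max_iter=1000).fit(X_tr, y_tr)

    pred_a, pred_b = model_a.predict(X_te), model_b.predict(X_te)
    print("accuracy A:", model_a.score(X_te, y_te))
    print("accuracy B:", model_b.score(X_te, y_te))

    # Rough proxy for multiplicity: the share of test points on which the
    # two competing models assign conflicting predictions.
    print("conflicting predictions:", float(np.mean(pred_a != pred_b)))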
Learning Optimal Solutions via an LSTM-Optimization Framework [article] (2022, arXiv pre-print)
In this study, we present a deep learning-optimization framework to tackle dynamic mixed-integer programs. ... Specifically, we develop a bidirectional Long Short Term Memory (LSTM) framework that can process information forward and backward in time to learn optimal solutions to sequential decision-making problems ... in MPS/NSF under Grant No. ... arXiv:2207.02937v1
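To make the idea of a bidirectional LSTM over a decision horizon concrete, here is a hypothetical PyTorch sketch, not the authors' architecture: it maps a sequence of per-period problem parameters to per-period decision scores, with all dimensions chosen arbitrarily for illustration. In a pipeline like the one the abstract describes, such scores would presumably be trained against optimal solutions produced by a mixed-integer solver.

    import torch
    import torch.nn as nn

    class BiLSTMDecisionSketch(nn.Module):
        # Hypothetical sketch: map a sequence of per-period problem parameters
        # to per-period decision scores, using a bidirectional LSTM so each
        # prediction can draw on both past and future context.
        def __init__(self, n_features, n_decisions, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                                bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_decisions)

        def forward(self, x):          # x: (batch, time, n_features)
            out, _ = self.lstm(x)      # out: (batch, time, 2 * hidden)
            return self.head(out)      # per-period decision scores

    model = BiLSTMDecisionSketch(n_features=8, n_decisions=3)
    scores = model(torch.randn(4, 12, 8))   # 4 instances, 12 periods, 8 features
    print(scores.shape)                     # torch.Size([4, 12, 3])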
Energy management of small-scale PV-battery systems: A systematic review considering practical implementation, computational requirements, quality of input data and battery degradation (2019, Renewable & Sustainable Energy Reviews)
The existing literature in this area focuses on individual aspects of this problem without a detailed, holistic analysis of the results with regards to practicality in implementation. ... Our analysis finds that using a more sophisticated energy management strategy may not necessarily improve the performance and economic viability of the PV-battery system due to the effects of modeling ... Furthermore, even faster, near-optimal solutions can be obtained with policy function approximations (PFA) using machine learning. ... doi:10.1016/j.rser.2019.06.007
A Taxonomy of Railway Track Maintenance Planning and Scheduling: A Review and Research Trends (2021, Reliability Engineering & System Safety)
scheduling Bi-Objective Mixed-Integer Linear Programming, Pareto optimal solutions Lidén ✓ ✓ Possession scheduling Mixed Integer Programming Zhang et al. ✓ ✓ Possession scheduling Integer ... Linear and integer programming The nature of the decision variables in RTMP&S makes linear or nonlinear programming suitable. ... Summary of the literature review Table A1 and Table A2 Table A.1 Summary of the articles with predetermined maintenance policy. ... doi:10.1016/j.ress.2021.107827
Learning Combined Set Covering and Traveling Salesman Problem [article] (2020, arXiv pre-print)
to effectively deal with this problem by providing an opportunity to learn from historical optimal solutions that are derived from the MIP formulation. ... We study a combined Set Covering and Traveling Salesman problem and provide a mixed integer programming formulation to solve the problem. ... learning where to linearize a mixed integer quadratic problem, learning tactical solutions under imperfect information, and learning as a modeling tool. ... arXiv:2007.03203v1
Power System Reliability and Maintenance Evolution: A Critical Review and Future Perspectives (2022, IEEE Access)
Finally, areas requiring further research are identified alongside emerging trends in power system maintenance, to inform industry practice and support further research. ... As societal dependence on power system infrastructure continues to grow, there is an increased need to identify the best practices in the field of power system maintenance planning to ensure the continued ... Under this need, authors use Integer Linear Programming to assess generation adequacy in Taiwan, Trinidad-Tobago, and Kuwait, respectively. ... doi:10.1109/access.2022.3172697
A hybrid machine learning-optimization approach to pricing and train formation problem under demand uncertainty (2022, Recherche opérationnelle)
To this end, we combined an optimization approach with a regression-based machine learning method to provide a reliable and efficient framework for integrated pricing and train formation problem under ... Further, in order to deal with the hybrid uncertainty of demand parameter, a robust fuzzy stochastic programming model is proposed. ... Machine learning models cannot optimize and they are only predicting factors based on some available information. ... doi:10.1051/ro/2022052
A Survey on Adaptive Data Rate Optimization in LoRaWAN: Recent Solutions and Major Challenges (2020, Sensors)
First, we provide an overview of LoRaWAN network performance that has been explored and documented in the literature and then focus on recent solutions for ADR as an optimization approach to improve throughput ... LoRaWAN is built to optimize LPWANs for battery lifetime, capacity, range, and cost. ... scheduling Scalability Machine Learning MATLAB Variable Hysteresis Scalability Integer Linear Programming Scalability NS-3 Scalability Mathematical ... doi:10.3390/s20185044 pmid:32899454
Smart "Predict, then Optimize" | https://scholar.archive.org/search?q=Predicting+Solution+Summaries+to+Integer+Linear+Programs+under+Imperfect+Information+with+Machine+Learning. |
Barley, O. R., Chapman, D. W., Guppy, S. N., & Abbiss, C. R. (2019). Considerations When Assessing Endurance in Combat Sport Athletes. Frontiers in Physiology, 10, 205.
Abstract
Combat sports encompass a range of sports, each involving physical combat between participants. Such sports are unique, with competitive success influenced by a diverse range of physical characteristics. Effectively identifying and evaluating each characteristic is essential for athletes and support staff alike. Previous research investigating the relationship between combat sports performance and measures of strength and power is robust. However, research investigating the relationship between combat sports performance and assessments of endurance is less conclusive. As a physical characteristic, endurance is complex and influenced by multiple factors including mechanical efficiency, maximal aerobic capacity, metabolic thresholds, and anaerobic capacities. To assess the endurance of combat sports athletes, previous research has employed methods ranging from incremental exercise tests to circuits involving sports-specific techniques. These tests range in their ability to discern various physiological attributes or performance characteristics, with varying levels of accuracy and ecological validity. In fact, it is unclear how various physiological attributes influence combat sport endurance performance. Further, the sensitivity of sports-specific skills in performance-based tests is also unclear. When developing or utilizing tests to better understand an athlete's combat sports-specific endurance characteristics, it is important to consider what information the test will and will not provide. Additionally, it is important to determine which combination of performance and physiological assessments will provide the most comprehensive picture. Strengthening the understanding of assessing combat sport-specific endurance as a physiological process and as a performance metric will improve the quality of future research and help support staff effectively monitor their athlete's characteristics.
DOI: 10.3389/fphys.2019.00205
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License. | https://ro.ecu.edu.au/ecuworkspost2013/6128/ |
Becoming a Critic of Your Thinking: Learning the Art of Critical Thinking. There is nothing more practical than sound thinking. No matter what your circumstance or goals, no matter where you are, or what problems you face, you are better off if your thinking is skilled. As a manager, leader, employee, citizen, lover, friend, parent---in every realm and situation of your life, good thinking pays off. Poor thinking, in turn, inevitably causes problems, wastes time and energy, and engenders frustration and pain. Critical thinking is the disciplined art of ensuring that you use the best thinking you are capable of in any set of circumstances.
The general goal of thinking is to “figure out the lay of the land” in any situation we are in. We all have multiple choices to make. What is really going on in this or that situation? Successfully responding to such questions is the daily work of thinking. Ask yourself these--rather unusual--questions: What have you learned about how you think? Here is one format you can use.

Transforming Midterm Evaluations into a Metacognitive Pause. Midterm evaluations often tip toward students’ (unexamined) likes and dislikes. By leveraging the weight of the midterm pause and inviting students to reflect on their development, midterm evaluations can become more learning-centered. Cued by our language, students can become aware of a distinction—that we’re not asking what they like, but what is helping them learn. This opportunity for students to learn about their learning yields valuable insights that not only inform instructors about the effects of our methods, but also ground students in their own learning processes, deepening their confidence in and commitment to their development in the second half of the course.
Last semester, I taught a research-based contemporary poetry course with a steep learning curve—due to our rather difficult, graduate-level texts and students’ lack of prior experience. Repice observed that “This set of questions calls attention to the ways you are learning.” Lee, Virginia S.; Nilson, Linda B. (2016). Classroom Cognitive and Metacognitive Strategies for Teachers, Revised 09.08.10. Cognitive Strategies.
The cognitive strategies that students use influence how they will perform in school, as well as what they will accomplish outside of school. Researchers have found that effective learners and thinkers use more effective strategies for reading, writing, problem solving, and reasoning than ineffective learners and thinkers. Cognitive strategies can be general or specific (Pressley & Woloshyn, 1995). Strategies have been distinguished from skills.
One factor that determines whether students use a strategy is whether students know what the strategy is and how to use it. The role of effective strategies in learning and thinking is emphasized by most theories of learning and development. Ineffective reasoners tend to be biased when evaluating evidence. Teaching Concepts: Cognitive Strategy. Goal Setting | Motivation | Cognitive Strategy | Cooperative Learning | Assessment Cognitive Strategy Excerpted from Chapter 9 of Biehler/Snowman, PSYCHOLOGY APPLIED TO TEACHING, 8/e, Houghton Mifflin Co., 1997. The Nature of Learning Tactics and Strategies (pp. 334-340) Types of Tactics Using Learning Strategies Effectively (pp. 340-343) The Components of a Learning StrategyResearch on Learning Strategy Training: Reciprocal Teaching Suggestions for Teaching in Your Classroom (pp. 348-351) Resources for Further Investigation: Learning Tactics and Strategies (p. 354) The Nature of Learning Tactics and Strategies A learning strategy is a general plan that a learner formulates for achieving a somewhat distant academic goal (like getting an A on your next exam).
A learning tactic is a specific technique (like a memory aid or a form of notetaking) that a learner uses to accomplish an immediate objective (such as to understand the concepts in a textbook chapter and how they relate to one another). Top. Response to Intervention | Math | Math Problem Solving. Solving an advanced math problem independently requires the coordination of a number of complex skills.
The student must have the capacity to reliably implement the specific steps of a particular problem-solving process, or cognitive strategy. At least as important, though, is that the student must also possess the necessary metacognitive skills to analyze the problem, select an appropriate strategy to solve that problem from an array of possible alternatives, and monitor the problem-solving process to ensure that it is carried out correctly.
The following strategies combine both cognitive and metacognitive elements (Montague, 1992; Montague & Dietz, 2009). First, the student is taught a 7-step process for attacking a math word problem (cognitive strategy), beginning with reading the problem. Second, the instructor trains the student to use a three-part self-coaching routine for each of the seven problem-solving steps (metacognitive strategy). Module 4: Metacognitive Strategies for Reading and Writing.
Think of just about every job ad you see, usually they say that "Excellent communication skills" are a highly desired trait. What exactly are "excellent communication skills? " Well, at their core, it is the ability to clearly express your thoughts and ideas both verbally and in writing. Another vital skill that employers need from their employees today is the ability to research, review and draw conclusions from data, reports and other informative sources.
In order to do this, basic reading skills aren't always enough. A student or an employee must be able to read as well as understand and synthesize the information that they come into contact with. In this module we will investigate and try out some methods for active reading and conscious writing that you may not have come into contact with before. Activity 1: Reading Comprehension Strategies.
IV. C. V. Students “enter the higher levels of education with . . . strategies that handicap them in achieving success.” The use of learning strategies is linked to motivation. VII. 1. The tasks that students need to perform vary not only among disciplines but among instructors in the same discipline. 3. 4. VIII. A. 1. 3. 4. 5. A. Metacognative Strategies. Metacognitive Strategies Introduction This site provides a detailed description of metacognitive strategies.
This site also explains why metacognitive strategies are helpful for students who have learning problems, it provides tips on teaching metacognitive strategies, it includes a step-by-step outline for how to develop your own metacognitive strategies, and it also provides a list of field-tested metacognitive strategies categorized by math concept area. You can access any of the listed topics simply by clicking on the appropriate title.
This site complements the instructional video model for Teaching Metacognitive Strategies, which you can access by clicking on the Instructional Strategies site found on the main navigational page. You can also access the video model by clicking on the icon entitled Metacognitive Strategies Video found in the section titled "How Do I Teach Them. " What Are They? [ back to top ] How Do They Positively Impact Students Who Have Learning Problems? Yes! Return. Response to Intervention | Math | Math Problem Solving. Metacognition And Learning: Strategies For Instructional Design. Do you know how to learn? Many people don’t. Specifically, they don’t know how to look inward to examine how they learn and to judge which methods are effective.
That’s where metacognitive strategies come in. They are techniques that help people become more successful learners. Shouldn’t this be a crucial goal of instructional design? Improved metacognition can facilitate both formal and informal learning. But let’s start at the beginning. What is metacognition? Metacognition is often referred to as “thinking about thinking.” The Two Processes of Metacognition. Many theorists organize the skills of metacognition into two complementary processes that make it easier to understand and remember.
Metacognition and Expertise. Many experts cannot explain the skills they use to elicit expert performance. Examples of Metacognition Skills You May Use: Successful learners typically use metacognitive strategies whenever they learn. Metacognitive Strategies: Ask questions.
Lemons, Julie A. Reynolds, Amanda J. Curtin-Soydan, and Ahrash N. EJ1029627. Schoenfeld metacognition. Exam Wrappers. All too often, when students receive back a graded exam, they focus on a single feature – the score they earned. While this focus on “the grade” is understandable, it can lead students to miss out on several learning opportunities that such assessment can provide: identifying their own individual areas of strength and weakness to guide further study;reflecting on the adequacy of their preparation time and the appropriateness of their study strategies; andcharacterizing the nature of their errors to find any recurring patterns that could be addressed.
So, to encourage students to process their graded exams more deeply, several faculty members across the university have devised exam wrappers, short handouts that students complete when an exam is turned back to them. These exam wrappers direct students to review their performance (and the instructor’s feedback) with an eye toward adapting their future learning.
Examples from Mellon College of Science courses: Metacognition-ELI. Teaching Metacognition. This webpage is a summary, written by Carol Ormand, of Marsha Lovett's presentation at the 2008 Educause Learning Initiative conference. Dr. Lovett's slides and a podcast of her presentation can be accessed via the conference website. Teaching Metacognition Improves Learning Metacognition is a critically important, yet often overlooked component of learning. Effective learning involves planning and goal-setting, monitoring one's progress, and adapting as needed. All of these activities are metacognitive in nature. Teaching students that their ability to learn is mutable Teaching planning and goal-setting Giving students ample opportunities to practice monitoring their learning and adapting as necessary Self-Regulated Learning Expert learners consider their learning goals, plan accordingly, and monitor their own learning as they carry out their plans.
Expert learners engage in what we call Self-Regulated Learning. Expert Learners Can Be Made: Accurate self-monitoring is quite difficult. Getting Metacognition Out of the Closet. Metacognition is not a common word. In fact, every time I typed "metacognition" or "metacognitive" in this article it was automatically underlined in red. Microsoft doesn't consider it a word, a sad state of affairs for such an important term. Metacognitive strategies have been linked with successful and meaningful learning. Furthermore, there are a number of things teachers can do to help foster metacognition among their students.
What is Metacognition? Metacognition is often described simplistically as "thinking about your thinking". NCREL, the North Central Regional Educational Laboratory, explains that metacognition consists of three basic elements: developing a plan of action, monitoring the plan, and evaluating the plan. 1) Developing a plan of action: What in my prior knowledge will help me with this particular task? 2) Maintaining/monitoring the plan: Did I understand what I just heard, read or saw? 3) Evaluating the plan: Did my particular strategy produce what I had expected? Fact Sheet: Metacognitive Processes (Teaching Excellence in Adult Literacy, TEAL)
Metacognition is one’s ability to use prior knowledge to plan a strategy for approaching a learning task, take necessary steps to problem solve, reflect on and evaluate results, and modify one’s approach as needed. It helps learners choose the right cognitive tool for the task and plays a critical role in successful learning. What Is Metacognition? Metacognition refers to awareness of one’s own knowledge—what one does and doesn’t know—and one’s ability to understand, control, and manipulate one’s cognitive processes (Meichenbaum, 1985). It includes knowing when and where to use particular strategies for learning and problem solving as well as how and why to use specific strategies.
Cognitive strategies are the basic mental abilities we use to think, study, and learn (e.g., recalling information from memory, analyzing sounds and images, making associations between or comparing/contrasting different pieces of information, and making inferences or interpreting text). | http://www.pearltrees.com/msheldon/metacognition/id14851480
For over five decades, the development of risk classification assessments, corrections-based treatment, and the associated outcome research have been focused on men. Thus, it is no surprise that existing treatment frameworks and correctional policies have been established from a male perspective. Women have also been incarcerated for over five decades, without suitable recognition of the body of literature to guide policy and procedures specifically for their needs. Compared with their male counterparts, justice-involved women have different pathways into, and out of, crime and substance use; they respond to supervision and custody differently, they have a higher prevalence of co-occurring mental health issues and lifelong trauma and abuse, and higher rates of other complex interpersonal and financial disadvantages [1-9].
Parallel statements have been published in dozens of research articles, books, other scholarly works, and policy recommendation reports throughout the 1980s and 1990s; however, little has changed [10-16]. Has it been published in invisible ink? It certainly bears repeating: by 2019, the number of incarcerated women in the United States had grown over seven times higher than in 1980, with over 230,000 women residing in prisons and jails across the country. Moreover, the number of incarcerated women has risen globally by 53% since 2000.
This commentary outlines the evolution from the past state of research and policy guidelines for women to the current literature and research findings on gender-responsive and trauma-responsive models of care for corrections. Recommendations regarding appropriate treatment interventions and corrections-based policies for justice-involved women are also reasserted.
Changing policies and availability of research impacting women
Critical policy changes and harsher sentencing laws for drug-related crimes played a crucial role in the rise in women’s incarceration. Surely, this disturbing increase would have removed the cloak of invisibility and created legislative change, at minimum in the most punitive states, requiring appropriate models of substance use treatment and criminal justice supervision for women. Between 1984 and 1990, policy changes specific to community-based substance use treatment for women occurred in response to public outrage over drug-exposed infants. The federal government set aside 5% of block grant funding to provide special ancillary services for women and pregnant women. Subsequently, throughout the 1990s, solicitations for treatment models for substance-using pregnant and postpartum women were sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA) and the National Institute on Drug Abuse (NIDA).1
This increased availability of specialized treatment programs for women in the community also generated funding for research and dissemination of findings on women-only and women-only versus mixed-gender treatment outcomes. The ancillary services typically included residential care with accommodations for children, individual counseling, family services, pregnancy-related services, supportive case management, transportation, health services, vocational training and aftercare. The findings from research and evaluations during this time showed that services that addressed women’s needs resulted in higher rates of completion, reductions in substance use, increased treatment satisfaction and improved health and well-being [14,22-25].
Gender-responsive treatment committees, needs assessments designed for women, and gender- and trauma-responsive programs for justice-involved women were also developed and became more accessible [11,26-32]. However, their application within the criminal justice system remained sparse, and government block grants for ancillary services in the community were not sustained by the mid-to-late 1990s [21,22]. Naturally, corresponding research on the effectiveness of specialized treatment for women in jail and prison was difficult to generate without extramural funding to establish and evaluate custody-based gender-responsive programs.
1The Residential Women and Children/Pregnant and Postpartum Women Demonstration Program.
Women-centered pathways into and out of the justice system
As the knowledge base on justice-involved women grew, advocacy for appropriate care continued. Substantial differences between women’s and men’s life experiences have led theorists, criminologists, psychologists, and others to posit, for decades, the likelihood of gender-specific paths in the recovery process. A pathways perspective recognizes the specific challenges and strengths in women that arise from social hierarchies [9,12,13,33]. Such hierarchies have created differences across gender and gender roles (e.g., patriarchy and sexism) that speak to the lived realities of women. These complex disadvantages, intersectional inequalities, and differences in social capital continue for women during incarceration.
Additionally, women consistently report a higher prevalence of Adverse Childhood Experiences (ACEs), such as neglect and emotional, physical and sexual abuse [6,35,36]. Justice-involved men also report substantial histories of childhood maltreatment, and ACEs are critical factors negatively impacting women and men [37-41]. However, when compared with men, studies show a stronger correlation for women among types of ACEs, continued victimization into adolescence and adulthood, a more pronounced intergenerational impact, and greater severity of chronic mental and physical health outcomes [5,6,35,42,43]. ACEs are also highly correlated with adolescent pregnancy, homelessness, prostitution, and Interpersonal Violence (IPV) [9,44-46], as well as recidivism and female-perpetrated violence [47-49].
Based on the numerous research results showing that women’s early childhood adversity is correlated with subsequent harmful behaviors, studies also began to explore distinctive factors associated with treatment and criminal justice outcomes for women relative to men. To begin to untangle treatment outcome data, Pelissier et al., assessed commonly analyzed predictors of post-release recidivism among 1,842 men and 473 women who participated in gender-neutral treatment. Among the 32 variables included in the model, only one variable was significantly unique to women (i.e., a history of mental health treatment increased the likelihood of recidivism for women). Thirteen variables were uniquely associated with recidivism for men, but only four were significant for both men and women and in opposite predictive directions. Variables that increased recidivism for men but decreased recidivism for women included disciplinary infractions during incarceration, counseling during supervision, number of monthly collateral contacts, and previous criminality. Interestingly, prior criminality has been a consistent predictor of return to criminal behavior in samples of men and is a risk factor often generalized to women.
Another study compared recidivism risk factors among a large sample of gender-neutral treatment participants (4,386 incarcerated women and 4,164 incarcerated men) and also found that there was a notable lack of predictive factors for women . Of the 11 variables in the models, the strongest predictor of return to prison for both men and women was co-occurring disorders. The single unique predictor for women was previous education, with higher education reducing the likelihood of return to prison. In contrast, previous employment significantly decreased return to prison for men, but not for women. Notably, a much smaller proportion of women reported any employment in the year prior to incarceration compared to men.
Hamilton et al., included women-centered variables in their analytical model and found that the predictive factors of recidivism for 8,815 women were primarily related to social support (e.g., minor children, no child support, legal contact restrictions) and victim/offender characteristics prevalent among women (e.g., IPV and prostitution). Brennan et al., identified eight reliable yet complex pathways to women’s recidivism, linking multiple women-centered factors to previous literature, including sexual/physical abuse, lower social capital, poor relational functioning, and extreme mental health issues. Other studies contend that risk factors that are more prevalent among women are trauma-related factors associated with co-occurring disorders, IPV, involvement with child protective services, homelessness, and dependency on others for financial support [2,9,31,53-59].
Thus, the literature reveals patterns indicating that justice-involved women may be at differential risk for recidivism compared with their male counterparts, given their life realities. At the very least, treatment outcome and recidivism data should be analyzed separately for men and women, with women-centered variables included in the analyses.
Risk and need assessments
It follows that the predictive validity of gender-neutral risk assessments is also not as robust for women as for men [32,52,53,60,61].2 There is evidence showing increased predictive validity for women when assessments are inclusive of women-centered factors. Van Voorhis et al. [31,32] created the Women’s Risk Needs Assessment (WRNA) as a stand-alone needs assessment or as a supplement for gender-neutral tools, such as the Level of Service Inventory-Revised and the Northpointe COMPAS [63,64]. The WRNA and the WRNA Trailer (WRNA-T) account for factors that are empirically more persistent in the lives of justice-involved women and include measures of trauma and abuse, unhealthy relationships, depression, parental stress, safety, financial considerations, anger, housing safety, family support and personal strengths such as self-efficacy [32,65-67].
In their application of WRNA, Salisbury et al., assessed whether the inclusion of measures of women’s needs (as risk factors) contributed to poor prison adjustment and recidivism among 156 women admitted to the department of corrections in a western state. Although different patterns were found across prisons, child abuse and relationships were associated with prison adjustment and victimization, while limited self-efficacy and parental stress were identified as risk factors for women upon release. Patterns were replicated across eight separate prison samples, seven pre-release samples, and six probation samples and resulted in recommendations for women-centered needs assessments for each type of setting [32,67]. Women’s gender-related needs are the pivotal factors to address in guiding assessment, treatment development, and gender-responsive policies to aid in women’s recovery.
2Women are usually administered the same classification risk assessments as men upon entry into prison.
Gender- and trauma-responsive treatment outcomes among justice-involved women
In 2003, the National Institute of Corrections published a groundbreaking report, Gender-Responsive Strategies: Research, Practice and Guiding Principles for Women Offenders. This report documented the need for a new vision that recognized the need to focus on and integrate trauma services into the justice system. Since that time, supporters have been proposing to move corrections forward by adopting the Guiding Principles and other published “blueprints” outlining gender- and trauma-responsive policies and practices.
There is now a growing evidence base, including multiple Randomized Controlled Trials (RCTs), documenting the effectiveness of gender- and trauma-responsive interventions for justice-involved women at various levels of supervision, measuring outcomes beyond abstinence and recidivism and comparing them with gender-neutral or mixed-gender programs, which validates the recommended policies and provision of services [22,25,36,68-84].
With funding from NIDA, Messina et al., conducted an experimental study comparing post-release outcomes of 115 prison-based treatment participants. Women were randomized to a 20-session gender- and trauma-responsive treatment program (i.e., Helping Women Recover, Covington , and 16-session Beyond Trauma, Covington, 2013) or a prison-based therapeutic community model. Helping Women Recover and Beyond Trauma are manualized curricula with a facilitator guide and participant workbook. The gender-responsive treatment group had significantly greater reductions in post-release substance use, remained in voluntary residential aftercare longer (2.6 months vs. 1.8 months, p < .05), and were less likely to have been re-incarcerated within 12 months after parole (31% vs. 45%, p < .05; a 67% reduction in recidivism). While both groups improved on mental health outcomes, the findings show the beneficial effects of treatment components responsive to women’s needs.
The second experimental study, also funded by NIDA, compared women in mixed-gender drug court programs with those in gender- and trauma-responsive drug court programs. Across four outpatient drug courts, the gender- and trauma-responsive intervention group had fewer disciplinary sanctions during the second and most intensive phase of drug court treatment (Gender-responsive group = 0.65 sanctions; Mixed-gender group = 1.2; p < .03) and had fewer sanctions resulting in remand to jail, compared with the mixed-gender control group (Gender-responsive group = 1.9 jail remands; Mixed-gender group = 2.4 jail remands; p < .05).
A series of recent research studies (data collected from 2014-2019) conducted with 1,118 women convicted of serious or violent offenses who participated in brief or intensive interventions designed for women also showed consistent and positive results. The first study included a sample of 39 women in a Security Housing Unit (SHU: used to house residents at the highest risk of committing violent offenses against staff, other residents and the public). The pilot study assessed the efficacy of a six-session manualized intervention designed for women who have experienced trauma associated with ACEs (i.e., Healing Trauma: A Brief Intervention for Women, Covington & Russo). Results demonstrated preliminary support for the effectiveness and feasibility of the brief intervention for women in the highest risk classification. The SHU women exhibited significant improvement across measures of depression, anxiety, Post-Traumatic Stress Disorder (PTSD), aggression, anger and social connectedness from the intervention. Effect sizes were moderate to large, with the largest impact on physical aggression (Cohen’s d = .82).
The Healing Trauma SHU pilot study was replicated with 682 high-need incarcerated women (i.e., those with co-occurring disorders, frequent disciplinary infractions, or conflict with staff/others). Using a peer-facilitated model, the women exhibited improvement on over 90% of the outcomes measured. Significant reductions were found for anxiety, depression, PTSD, psychological distress, aggression, and anger. Significant increases were found in empathy, social connectedness, and emotional regulation. Effect sizes were small to moderate, with the largest impact on depression, PTSD and angry feelings (Cohen’s d = 0.51, 0.41, and 0.42, respectively). Anger expression measures approached significance (p = 0.061; p = 0.051). Moreover, Messina and Schepps found that a greater number of ACEs increased the likelihood of program gains for all mental health and aggression outcomes.
The findings of the pilot studies showed that the Healing Trauma six-session brief intervention had a positive impact on trauma-related outcomes for high-risk/high-need women, and those with the highest incidence of childhood trauma and abuse derived the most benefit. However, these pilot studies were limited to measures of pre- and post-change, without the benefit of a comparison group. Building upon the pilot studies with funding from the National Institute of Justice, Messina and Calhoun conducted an experimental study assessing an intensive 20-session manualized violence intervention (i.e., Beyond Violence, Covington, 2014) among 123 women primarily incarcerated for violent crimes (e.g., murder, attempted murder, manslaughter, assault). Results showed that participants randomized to the Beyond Violence (BV) program had significantly lower mean scores than the control participants on depression (F=4.97), anxiety (F=9.12) and PTSD (F=4.68). Findings also showed that the BV participants had significantly lower mean scores than the control participants on physical aggression (F=6.11), hostility (F=4.23), indirect aggression (F=9.42), and expressive anger (i.e., anger used to manipulate or threaten) (F=7.15). Due to the nature of the crimes and the lengthy sentences, post-release outcomes could not be explored.
In a previous experimental study comparing BV with a 44-session Assaultive Offender Program in a women’s prison in Michigan, Kubiak et al. found similar positive changes in anger and aggression for the BV participants. While both groups experienced improvement in anger and mental health, women randomized to the BV intervention had stronger declines in anxiety (F=5.32) and state anger (i.e., outward expression or control of others) (F=8.84) than women in the gender-neutral anger program. Furthermore, a longitudinal follow-up study showed that the women who participated in the BV program were significantly less likely to recidivate (i.e., arrest or time in jail) than women in the gender-neutral anger program during the first 12 months following their release from prison.
In summary, women with complex problems, histories of ACEs, and serving sentences for property, drug, or violent offenses benefited from various gender- and trauma-responsive interventions when compared to treatment as usual. These curricula evaluated were designed specifically for the primary needs of justice-involved women, addressing the gaps in programs focused on trauma, substance use and violence prevention. The content of the interventions, the method of delivery, and the applicability to the needs of the population are the essential components for enhancing women’s recovery.
Acknowledging the existing literature on the needs and recovery processes of justice-involved women is vital to the implementation of appropriate assessments, treatment services, supervision, policy recommendations and continued research for further advancement. One need only recognize the plethora of available research, RCTs, and meta-analyses [22,24,40,76,77,85]. Although movement is gradual, California has been responsive to this process of change as its female population grew. Beginning in 2020, ten years after the published findings from the RCT on Helping Women Recover, the California Department of Corrections and Rehabilitation began to implement Helping Women Recover and Helping Men Recover as part of their integrated substance use program curricula via a Governor’s mandate. However, there was no evaluation component outlined in the mandate.
Over time, the conclusions regarding corrections-based treatment have shifted from “what works,” later interpreted as “nothing works,” to “some things work, for some people, some of the time” [87-90]. Covington and Bloom suggested an important shift of the field’s central question from “what works” to “what is the work?” The authors state that the work requires a theoretically based model recognizing the psychological development of women and a treatment model that supports gender-responsive programs and policy development. A gender-responsive and trauma-informed approach considers the social issues of gender inequalities and individual factors that impact justice-involved women.3 An interpersonal approach to programming would address substance use, trauma, economic marginality, relationships, and mental health issues through comprehensive, integrated, and culturally relevant services and supervision. Service providers need to be cross-trained in areas of gender-responsiveness and trauma-informed principles, and resources must be allocated for women’s programs and continuous rigorous evaluation.
Although men continue to be the majority of the imprisoned population in the United States, there are still over 230,000 women in prisons and jails across the country. Funding constraints often require that service provision be focused on the larger population of men and those at the highest risk of recidivism. Prison administrators and government officials may feel that rehabilitation programs are not a proper investment for women, who often have short-term sentences. Yet, brief gender- and trauma-responsive interventions have been shown to be feasible and could be effective re-entry services. Ignoring the critical needs of women has long-term consequences and high costs to society given the involvement of social services and the intergenerational cycle of trauma, substance use, and criminal involvement.
In addition, focusing on recidivism as the sole determinant of a predictive model of rehabilitation is antiquated and based on research on men and goals for public safety. Measures of recovery should go beyond criminal activity or abstinence to include reductions in IPV, increased psychological wellbeing, education/employment, financial independence, housing, family reunification, etc. Assessing multiple outcome measures, during confinement and post-release, is necessary to fully determine program effectiveness. Recidivism does not capture the full picture of post-release challenges or successes. Ward and Stewart question whether rehabilitation ends with risk management (i.e., reduced crime for public safety) or if it should incorporate services toward personal enhancement (i.e., improved quality of life/well-being). Rehabilitation and sustainable recovery after release go far beyond involvement with the criminal justice system. It is time that other measures of change are expected and required in peer-reviewed journals seeking to increase the knowledge base on what works for women and men.
Women’s gender-related needs are the pivotal factors to address in guiding assessment, treatment development, and gender-responsive policies to aid in women’s recovery. The recommendation of the Gender-Responsive Theoretical Framework and Guiding Principles for Corrections as a paradigm of care for justice-involved women was essential in 2003 and remains so as we begin 2022.
3Becoming gender- and trauma-responsive are terms which are inclusive of men, women, and gender-diverse populations. Gender identity and histories of trauma are important factors that should be included in treatment opportunities for all justice-involved populations. Men can also benefit from trauma-specific programming, as histories of trauma are not unique to women. The prevalence, type, and impact of lifelong trauma may vary by gender, but that is not an argument against incorporating treatment components that address trauma for both men and women .
Dr. Messina is the president of Envisioning Justice Solutions, Inc. and a research criminologist at the UCLA Integrated Substance Abuse Programs. The author declares that there is no conflict of interest and there was no funding provided for this commentary from any person or organization.
I am grateful to those who contributed decades of scholarly works, dedication to theoretical and program development, research, dissemination of information, committee involvement, and the creation of policy guidelines for criminal-justice-involved women. These criminologists, psychologists, sociologists, LCSWs, nurses, substance use treatment professionals, corrections professionals and relentless thinkers are cited throughout this article. I am especially grateful for the guidance of my mentors, friends, and colleagues Dr. Barbara Bloom, Dr. Stephanie Covington, Velda Dobson-Davis, Dr. Christine Grella, Rochelle Leonard, Rita Marmolejo and Dr. Barbara Owen. I am also forever grateful to the peer facilitators who provided years of program facilitation in prison, as well as care and attention to the program participants. Finally, I wish to acknowledge the bravery of the women with lived experience who volunteered to participate in the programs and the research and for sharing their stories over the years.
Citation: Messina NP (2021) The Evolution of Gender-and Trauma-Responsive Criminal Justice Interventions for Women. J Addict Addictv Disord 8: 070.
Copyright: © 2021 Nena P Messina, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. | https://www.heraldopenaccess.us/openaccess/the-evolution-of-gender-and-trauma-responsive-criminal-justice-interventions-for-women |
During our Boomtown 2040 series, we keep on talking about growth here in Austin. Our population is expected to double by the year 2040, but when did the "boom" start?
Let's start with the earliest official U.S. Census data we could find. We're going all the way back to 1850 when the population of Austin was just 629 people. It didn't take long for more people to start flocking to the then-tiny Texas capital.
Ten years later, the number of people here grew by 455% to almost 3,500 people. That might not seem like a lot compared to the city's population now, but the numbers show Austin has been a rapidly-growing city for a while.
By 1900, the population jumped to more than 22,000. Fifty years later, that number multiplied by six and there were more than 132,000 Texans living in Austin.
Between 1950 and 2000, the population grew by at least 35% every 10 years. The biggest jump happened between 1990 and 2000 when almost 200,000 people moved to Austin. That left the city with about 656,000 people. | |
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a liquid crystal display (LCD) device, and more particularly, to a reflection type LCD device in which light efficiency reflection is improved by a hologram layer.
2. Description of the Related Art
LCD devices have advantages in terms of low power consumption due to a low driving voltage and a simplified structure. Most LCD devices which are currently being used adopt a TN (twisted nematic) type liquid crystal or an STN (super twisted nematic) type liquid crystal. Accordingly, at least one polarizer is necessarily provided in order to control light, and a color filter is provided to realize a full color image. A reflection layer is further provided for a reflection type LCD device.
FIG. 1 is a sectional view illustrating a general reflection type LCD device having the color filter and the reflection layer.
Referring to the drawing, the reflection type color LCD device is comprised of an upper substrate 12 and a lower substrate 26, an upper polarizer 10 disposed on the upper substrate 12, a lower polarizer 28 which can be selectively disposed below the lower substrate 26, and a reflection layer 30 disposed below the lower polarizer 28. A color filter 14 is arranged below the upper substrate 12. A plurality of upper electrodes 16 are formed in strips below the color filter 14 and a plurality of lower electrodes 24 are formed in strips on the lower substrate 26. The upper and lower electrodes 16 and 24 are arranged to cross one another and are covered with upper and lower orientation films 18 and 22, respectively. An LCD layer 20 is formed between the upper and lower orientation films 18 and 22.
In the above-mentioned reflection type color LCD device, input light is partially blocked by the color filter and the upper and lower polarizers 10 and 28 disposed midway in an optical path of the light. In particular, since over 50% of the light inputted to the LCD device is blocked by the polarizers, the amount of light which can be reflected by the reflection layer 30 is reduced. Therefore, to improve brightness of an image in the reflection type LCD device, the efficiency of light use should be increased.
SUMMARY OF THE INVENTION
To solve the above problem, it is an objective of the present invention to provide a reflection type color LCD device having a hologram layer so that image brightness is improved.
Accordingly, to achieve the above objective, there is provided an LCD device comprising: a pair of substrates, disposed apart from each other by a predetermined distance, and having an electrode or a thin film transistor driving device on the opposing sides thereof. An orientation film is formed on each opposing surface of the substrates, and a liquid crystal layer is formed between the orientation films. A reflection layer is disposed at one side of one of the substrates to reflect the light transmitted through the liquid crystal layer toward the outside thereof. A hologram layer having predetermined patterns and light transmissivity which differs according to each of the patterns is provided so that efficiency of use of the light reflected by the reflection layer can be maximized.
It is preferable in the present invention that the LCD device further comprise a polarizer disposed at one side of any one of the substrates to control light inputted to the liquid crystal layer, and a color filter disposed at one side of any one of the substrates to selectively transmit the light passed through the liquid crystal layer.
It is also preferable in the present invention that the hologram layer be formed between the electrode and the orientation film formed on one of the substrates, and that the reflection layer be formed between the electrode and the hologram layer formed on one of the substrates and have the same pattern as that of the electrode.
BRIEF DESCRIPTION OF THE DRAWINGS
The above objective and advantages of the present invention will become more apparent by describing in detail a preferred embodiment thereof with reference to the attached drawings in which:
FIG. 1 is a sectional view illustrating a conventional reflection type color LCD device;
FIG. 2 is a perspective view illustrating a reflection type color LCD device according to a first preferred embodiment of the present invention;
FIG. 3 is a perspective view illustrating a reflection type color LCD device according to a second preferred embodiment of the present invention;
FIG. 4 is a sectional view illustrating a reflection type color LCD device according to a third preferred embodiment of the present invention; and
FIGS. 5A through 5D are sectional views showing a method of forming the hologram layer of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Referring to FIG. 2, the reflection type LCD device according to a preferred embodiment of the present invention is comprised of an upper substrate 102 and a lower substrate 120 arranged to be spaced apart from one another. A color filter 104 is formed below the upper substrate 102. An upper electrode 106 is formed in strips below the color filter 104 and a lower electrode 118 is formed in strips on the lower substrate 120. The upper and lower electrodes 106 and 118 are perpendicular to each other. Also, upper and lower orientation films 108 and 116 are formed below the upper electrode 106 and above the lower electrode 118, respectively. Between the lower orientation film 116 and the lower electrode 118, a reflection layer 114 and a hologram layer 112 are formed in sequence. A liquid crystal layer 110 is formed between the upper and lower orientation films 108 and 116.
The hologram layer 112 is characterized in that efficiency of use of light passing through the hologram layer improves with respect to light of a particular wavelength. The hologram layer 112 is formed into a predetermined pattern to correspond to red, green and blue pixels of the color filter 104 so that light of a different wavelength is allowed to pass therethrough according to the pattern of the hologram layer 112. If the hologram layer 112 can maximize efficiency in transmission of light of a particular wavelength, and totally cut off light of other wavelengths, the use of the color filter 104 can be avoided. A method for forming the hologram layer 112 into a predetermined pattern will be described later.
The reflection layer 114 is preferably formed of aluminum. Also, it is preferable that the reflection layer 114 be formed in strips similar to the shape of the electrode and arrayed corresponding to the lower electrode 118. Furthermore, a polarizer 100 is provided on the upper substrate 102. However, when the liquid crystal layer 110 is dyed with a particular color, there is no need to use the polarizer 100. A method of dyeing is well known in the relevant field.
The operation of the reflection type LCD device structured as above according to a first embodiment of the present invention will now be described.
When power is applied to the upper and lower electrodes 106 and 118, the liquid crystal layer 110 selectively allows light to pass therethrough and then the light passes through the polarizer 100, the upper substrate 102, the color filter 104, the upper electrode 106, the upper orientation film 108, the liquid crystal layer 110, the lower orientation film 116, and the hologram layer 112, in sequence, and finally arrives at the reflection layer 114. The inputted light is then reflected from the reflection layer 114 and proceeds out of the LCD device following the reverse order of the inputted light. Here, as the light passes through the hologram layer 112 during input and output of light, efficiency of light use increases and thus an image of improved brightness can be provided.
FIG. 3 shows a reflection type color LCD device according to a second preferred embodiment of the present invention.
Referring to the drawing, the LCD device includes a lower substrate 216 and an upper substrate 202, and a lower electrode 214, a lower orientation film 212, a liquid crystal layer 210, an upper orientation film 208, an upper electrode 206 and a color filter layer 204 are sequentially formed therebetween. Also, a polarizer 200 is disposed on the upper substrate 202. A hologram layer 218 is formed below the lower substrate 216 and a reflection layer 220 is formed below the hologram layer 218. That is, as shown in the drawing, when the reflection layer 220 is formed below the lower substrate 216, the hologram layer 218 is formed on the reflection layer 220. Of course, the hologram layer 218 has predetermined patterns and can increase, pattern by pattern, the efficiency of use of light of different wavelengths. Light inputted to the LCD device arrives at the reflection layer 220 by the selective penetration operation of the liquid crystal layer 210. The light arriving at the reflection layer 220 is reflected out of the LCD device to display a predetermined image. As the light passes through the hologram layer 218 during input and output of light, the efficiency of light use increases.
FIG. 4 is a sectional view of an LCD device according to a third preferred embodiment of the present invention.
Referring to the drawing, the LCD device according to the third embodiment is an active matrix type LCD device which adopts a thin film transistor (TFT) device. This LCD device includes an upper substrate 302 and a lower substrate 316, and therebetween, a TFT device 324, an insulation layer 322, a reflection layer 320, a hologram layer 318, a lower orientation film 312, a liquid crystal layer 310, an upper orientation film 308, an electrode 306, and a color filter 304 are sequentially formed. A polarizer 300 is arranged on the upper substrate 302. The hologram layer 318 formed between the reflection layer 320 and the lower orientation film 312 is patterned to correspond to that of the color filter 304. Preferably, the material of the reflection layer 320 is aluminum. As described above, the hologram layer 318 increases the efficiency of use of the light reflected by the LCD device and accordingly, brightness of an image can be improved.
As described above, the hologram layer can increase the efficiency of use of the light of a particular wavelength. In the case of a full color reflection type color LCD device, when a hologram layer for increasing the efficiency of light use with respect to a single particular wavelength is used, the effect of the increased light use efficiency is limited to any one pixel of the red, green and blue pixels, without affecting the other pixels. That is, the effect is selectively made on only a single wavelength, not increasing the light use efficiency of all the red, green and blue wavelengths which are selectively transmitted by the color filter.
To compensate for the above defect, the employed hologram layer is patterned. That is, the hologram layer is patterned to selectively produce an effect on the red, green and blue pixels. The patterns of the hologram layer are arrayed to respectively correspond to red, green and blue patterns of the color filter. Accordingly, the hologram pattern to increase the efficiency of light use at the red wavelength corresponds to the red pixel, and the same applies for the green and blue pixels.
FIGS. 5A through 5D show a method for forming the hologram layer. The method can be applied in forming the holographic device of the LCD device shown in FIG. 3.
Referring to FIG. 5A, a transparent electrode 502 in strips and an orientation film 503 are formed on one surface of a substrate 501 in a previous step, and a light-curable resin layer 504 and a reflection layer 505 are sequentially deposited on the other surface thereof. The light- curable resin layer 504 is a layer to be formed into a hologram layer in a step which will be described later, e.g., a photopolymer layer including vinyl monomer.
Referring to FIG. 5B, the light-curable resin layer 504 of the substrate 501 covered by a first mask 511 is exposed to a laser beam of a particular wavelength. The first mask 511 coincides with one of red, green and blue patterns formed on the color filter of the LCD device. For instance, a pattern of an aperture 511a in the mask 511 is formed to be the same as that of the red pixel of the color filter, and a laser beam of a 633 nm wavelength corresponding to red color is radiated therethrough. Since a portion of the light-curable resin layer 504 is light-cured by the radiation of the laser beam, a first hologram portion 521 is formed. The first hologram portion 521 improves efficiency of use of the light having a wavelength for red.
Referring to FIG. 5C, the light-curable resin layer 504 of the substrate 501 covered by a second mask 512 is exposed to a laser beam of a different wavelength. For instance, an aperture 512a is formed in the second mask 512 to be the same as the pattern of the green pixel of the color filter, and the second mask 512 is exposed to a laser beam of a 524 nm wavelength corresponding to green color. Thus, a second hologram portion 522 is formed in the light-curable resin layer 504. The second hologram portion 522 improves efficiency of use of the light having a wavelength for green.
In FIG. 5D, the light-curable resin layer 504 of the substrate 501 covered by a third mask 513 is exposed to a laser beam of a different wavelength. For instance, an aperture 513a is formed in the third mask 513 to be the same as the pattern of the blue pixel of the color filter, and the third mask 513 is exposed to a laser beam of a 442 nm wavelength corresponding to blue color. Thus, a third hologram portion 523 is formed in the light-curable resin layer 504. The third hologram portion 523 improves efficiency of use of the light having a wavelength for blue.
The hologram layer having a predetermined pattern can also be formed by using a light-etching method, not using the masking method described above. According to the light-etching method, the light- curable resin layer formed on the substrate is cured by radiating a laser beam of a 633 nm wavelength corresponding to red color. Next, except for a portion corresponding to the red pixel pattern of the color filter, the entire portion of the light-curable resin layer is etched to form a hologram portion for red color. A light-curable resin layer is then coated on the entire surface of the substrate and a laser beam of a 524 nm wavelength of green color is radiated thereon. A portion of the cured light-curable resin layer corresponding to the green pixel pattern of the color filter is selectively etched. Thus, a hologram portion for green color is formed further to the hologram portion for red color. The same process is applied for forming a hologram portion for blue color using a laser beam of a 442 nm wavelength. Therefore, a hologram layer having hologram portions in coincidence with the pattern of the color filter is formed.
As described above, in the LCD device according to the present invention, since the hologram layer which can maximize efficiency of light use according to each wavelength of red, green and blue colors of the color filter is formed above the reflection layer, efficiency of reflection of light reflected from the reflection layer can be improved and the quality of an image can be improved by increasing brightness of the image.
It is noted that the present invention is not limited to the preferred embodiment described above, and it is apparent that variations and modifications by those skilled in the art can be effected within the spirit and scope of the present invention defined in the appended claims. | |
How we behave and how we communicate is of great importance for the outcome, whether it be a simple task or a larger project. By developing our ability to understand our own behaviour and that of others, we create a better basis for good relations and a positive business culture. Appreciating our differences means that we can avoid unnecessary conflicts and tensions. Instead, we can take advantage of our similarities/differences and use them effectively in work groups, in organisations or as an individual.
More intenz offers the DISC tool, which describes your style of communication and your behaviour patterns. DISC is used worldwide and is based on years of research and development work. The tool can be used to gain insight into a number of different areas, such as:
- improving work climate and commitment
- developing managers and personnel
- resolving conflicts and co-operation problems
- improving communication skills
- recruiting the right personnel
We give you better conditions for good communication and good co-operation. | https://www.moreintenz.se/en/kompetens/DISCeng |
From the start, Jan Vriend has been a musical omnivore who combines a strongly modernistic approach with openness to the interests of and needs for a good musical education. ...
related works
Corona Concerto : for piano and orchestra / Jan Vriend
Genre:
Orchestra
Subgenre: Piano and orchestra
Instruments: 3fl/picc 2ob eh 2cl cl-b cl-cb afg cfg 4h 3tpt 2trb trb-b tb 8perc str
5 etudes-caprices : voor piano, opus 39, najaar 1957 / Léon Orthel
Genre:
Chamber music
Subgenre: Piano
Instruments: pf
Big Booster : Version for recorder and string orchestra / Chiel Meijering
Genre:
Orchestra
Subgenre: Recorder and string orchestra
Instruments: rec-solo str
Elegia e capriccio : per arpa e orchestra / Henri Zagwijn
Genre:
Orchestra
Subgenre: Harp and orchestra
Instruments: 2332 2210 timp str hp-solo
composition
Double Concerto : for piccolo, piano and orchestra / Jan Vriend
Description:
The idea for the concerto came about after a successful first performance of the Sonata for piccolo and piano that took place in The Goods Shed in my hometown of Tetbury in November 2018. What started off as a mere pipedream, well aware that few orchestras, if any, were desperately waiting for an opportunity to premiere such a novel concept, became a genuine ambition for me.
I set to work regardless of practicalities as soon as I didn’t have any other commitments and produced four movements within a year. When I began writing the music another triumphant performance of the sonata had meanwhile taken place in Amsterdam, followed later in the year by a CD recording, launched at the end of 2019. As the sonata’s fourth movement became a particular favourite of the musicians I pondered if a version with a light orchestral accompaniment couldn’t feature as the slow second movement. However, after finishing all four movements I started having second thoughts about it and decided to compose a new one in its place. | https://webshop.donemus.com/action/front/sheetmusic/21099 |
Lincoln Boulevard in Santa Monica is about to undergo a dramatic transformation. The stretch of Lincoln that runs from the I-10 freeway to Santa Monica’s southern border, controlled by CalTrans until 2012 when Santa Monica took it over, is being reimagined as a multimodal, pedestrian-friendly street complete with bus lanes, more crosswalks, parklets, and (some) better bike facilities.
Next Wednesday, September 2, the Santa Monica Planning Commission will take a look at some of the preliminary designs for the Lincoln Neighborhood Corridor (LiNC) plan that have come out of an ongoing community process. People can still weigh in on the plans online here by voting on details like preferred street lighting models, number of medians, locations of crosswalks, and even street plantings.
There’s a lot of promising stuff happening, especially since Lincoln Boulevard has served primarily as a thoroughfare for vehicles. As a result, the streetscape can be pretty hostile and dangerous. Former Planning Commissioner Frank Gruber weighed in on the LiNC here.
Here’s a look at some of what could happen on Lincoln Boulevard. All of the images and renderings are from the city’s LiNC website.
Major improvements are being considered for the street’s built environment. By emphasizing pedestrian-oriented adaptive reuse, the city hopes to turn buildings like this old auto shop…
…into something like this:
Then, there are plans to add in dedicated peak-time bus lanes, medians, better sidewalk lighting, more crosswalks, better street furniture, signage, bulb outs, and parkway landscaping to improve the experience for those who walk Lincoln Boulevard.
The city is also considering improvements to connect Lincoln to Santa Monica’s bike network. Below are two options for creating a safer route for people on bikes crossing Lincoln at Ashland.
There don’t appear to be plans to put bike lanes on the Lincoln itself thought, which is likely due to the prioritization of dedicated bus lanes.
The goals of the LiNC plan are stated on the website and include reducing barriers to pedestrian access and comfort, reducing or eliminating conflicts between vehicles, bicycles, and pedestrians, increasing the number of crossings to promote pedestrian and bicycle movements across the boulevard and into the neighborhoods, improving sidewalks with amenities that would encourage more walking, improving wayfinding and legibility to key destinations like Santa Monica High School, the Beach and Downtown, and improving and diversifying landscaping and tree canopy.
According to city staff, “Many of the streetscape design elements have been evaluated and endorsed by the community. However, several outstanding questions about key corridor improvements remain that will impact the final design.”
City officials want people to weigh in here to help finalize the designs. It’s possible that Lincoln Boulevard in Santa Monica will become a multi-modal street in the near future. For more info, visit lincsm.net.
Santa Monica Next is published thanks to the support of our advertisers: Bike Center, Pocrass and De Los Reyes LLC Personal Injury Attorneys and Los Angeles Bicycle Attorney. | https://www.santamonicanext.org/2015/08/lincoln-blvd-in-santa-monica-could-become-a-shared-street/ |
The efforts of the United States government in the past 15 years have included harnessing the power of health information technology (HIT) to improve legibility, lessen medical errors, keep costs low, and elevate the quality of healthcare. However, user resistance is still a barrier to overcome in order to achieve desired outcomes. Understanding the nature of resistance is key to successfully increasing the adoption of HIT systems. Previous research has shown that perceived threats are a significant antecedent of user resistance; however, their nature and role have remained vastly unexplored. This study uses the psychological reactance theory to explain both the nature and role of perceived threats in HIT-user resistance. The study shows that perceived helplessness over process and perceived dissatisfaction over outcomes are two unique instances of perceived threats. Additionally, the results reveal that resistance to healthcare information systems can manifest as reactance, distrust, scrutiny, or inertia. The theoretical and practical implications of the findings are discussed.
Comments
© 2021
Recommended Citation
Ngafeeson, M. N., & Manga, J. A. (2021). The Nature and Role of Perceived Threats in User Resistance to Healthcare Information Technology: A Psychological Reactance Theory Perspective. International Journal of Healthcare Information Systems and Informatics (IJHISI), 16(3), 21-45. doi:10.4018/IJHISI.20210701.oa2
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License. | https://scholarworks.utrgv.edu/is_fac/61/ |
Variance is a measure of the distribution of observations around the mean. It is often used for descriptive statistics, inferential statistics and hypothesis testing.
For investment modeling, variance is a widely used descriptive statistic, as is its square-root, the standard deviation. Variance is also used heavily in valuation models of individual securities, portfolio choice and the measurement of risk-adjusted portfolio performance.
Variance is calculated by taking the sum of all squared differences between each observation and the mean over the measurement period. After summing those, next divide by the number of observations. Because it is the result of squaring, variance will always be zero or greater.
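Written as a formula (a standard restatement of the steps just described, not quoted from this page), the population variance of N observations x_1, ..., x_N with mean x̄ is:

$$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2, \qquad \sigma = \sqrt{\sigma^2}$$

where σ is the standard deviation discussed below.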
Variance is not easily interpreted because the scale is in units squared, like percent-return-squared. So instead, to interpret, translate to standard deviation by taking the square-root of variance. Its units would then be in percent-return, and that can easily be interpreted.
Synonym: second central moment
For context, variance provides a measure for how far from the average the whole group of obervations are spread out. Imagine a bell-shaped curve and here a higher variance will have a wider distribution. On the other hand, when the variance is low, the distribution will be narrow and tall.
Doc: Besides variance what else could we use on the x-axis for MPT charts? And why?
Wes: Standard deviation, because it's more interpretable and the relationship still holds.
Variance definition for investment modeling (4:22)
The script includes two sections where we visualize and demonstrate the calculation of variance using demeaned returns.
We're sitting right here in Excel, and this is a snippet from our boot camp course.
This is one depiction of variance, from a discussion on portfolio theory.
Think about each dot here as a stock or portfolio. Each has a return and a risk measure. Risk is on the x-axis and return is on the y-axis.
Risk here can be interpreted as either variance or standard deviation. As you will see shortly, they are both related. You go seven steps with the exact same calculation, until the final step.
The key with variance is the calculation, so let's head there now.
Let's walk through a calculation for two stocks, Microsoft and eBay. We have six monthly observations of return for each stock from April to September 2003. Column F is the return on Microsoft, eBay is Column G.
Next we compute the average of each, here 2.38% for Microsoft and 3.98% for eBay. Then move those over to columns H and I.
In column J we take the return minus the average which gives us 3.24%. That's 5.62% minus 2.38%, or 3.24%. For eBay it is 8.91% minus 3.98%, or 4.93%. Carry that formula down for all months. Next, square these in columns L and M.
Next, using the =SUM() function, add up the products for each stock to get 0.0062 for Microsoft and 0.0109 for eBay.
Next, divide by 6 observations to get the variance of 0.0010 for Microsoft and 0.0018 for eBay. Recall, these are in units of returns squared so aren't interpretable. So it is common to use standard deviation, which is the square root of variance.
To get that, use the =SQRT() function or take the variance to the one-half power, as I have done here.
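For readers who want to reproduce the spreadsheet steps outside Excel, here is a minimal Python sketch of the same calculation. The monthly return series below is a made-up placeholder, not the actual Microsoft or eBay data from the example, and the function name is illustrative only.

```python
# Minimal sketch of the seven-step variance calculation described above.
# The monthly returns are placeholder values, not the Microsoft/eBay data.

def population_variance(returns):
    """Mean of the squared deviations from the average (divide by n, as above)."""
    n = len(returns)
    mean = sum(returns) / n                      # compute the average return
    demeaned = [r - mean for r in returns]       # return minus average
    squared = [d * d for d in demeaned]          # square each difference
    return sum(squared) / n                      # sum, then divide by n

returns = [0.0562, -0.0120, 0.0340, 0.0080, -0.0210, 0.0780]  # placeholder monthly returns
variance = population_variance(returns)
std_dev = variance ** 0.5                        # square root, like =SQRT() in Excel
print(f"variance: {variance:.4f}  standard deviation: {std_dev:.4f}")
```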
Still unclear on variance? Leave a question in the comments section on YouTube or check out the Quant 101 Series, specifically Four Essential Stock Risk Measures.
/ factorpad.com / fin / glossary / variance.html
Held on Fridays at 9 a.m., these hands-on training sessions teach users how to master technology.
FORT PIERCE — The Lakewood Park Branch Library’s Technology Training Sessions help you navigate social media and modern TV services this December.
• Dec. 7 – Cutting the Cord: Thinking about ditching your cable provider? In this class we’ll explore some of the options available to you such as Apple TV, Chromecast, Netflix and Prime Video.
• Dec. 14 - Streaming Services: There are a lot of options out there for listening to music and watching video. Join us as we discuss the use of streaming services, music apps and discover which one works best for you.
• Dec. 21 - Social Media: Wondering about Facebook or Instagram? Want to know what Snapchat is all about? Then this class is for you. Join us as we discuss various social media platforms and learn how to create an account for yourself.
• Dec. 28 - Conquer your iPhone: Hands-on class exploring the features and functions of your Apple device. In this class we’ll go over the basics and even delve into some more advanced uses.
Registration is not required, but space is limited.
For details, stop by the library or contact [email protected] or 772-462-6872.
Be sure to have all your devices fully charged and bring your library card and passwords. | https://www.tcpalm.com/story/specialty-publications/your-news/st-lucie-county/reader-submitted/2018/11/27/cut-cord-help-lakewood-park-librarys-december-tech-training/2129751002/ |
Description: Biosecurity is a set of measures to prevent, respond to and recover crops and livestock from pests and diseases that threaten the economy and environment. Comprehensive biosecurity systems help ensure food security and food safety, which is crucial for community health, competitiveness for agricultural export and conservation of natural environments. This unit studies the epidemiologic triangle consisting of the host, disease and the environment in which the disease develops, and the series of measures and practices to detect and prevent entry and spread of pests, diseases and weeds. The potential for future biosecurity mega shocks to the agricultural industry, preparedness for rapid emergency responses to an exotic incursion, and management of invasion of pests and diseases will be discussed.
School: Science
Discipline: Agricultural Science
Student Contribution Band: HECS Band 1, 10cp
Level: Postgraduate Coursework Level 7 subject
Assumed Knowledge
Foundation in chemical and biological sciences, quantitative thinking.
Learning Outcomes
On successful completion of this subject, students should be able to:
- Critically appraise biosecurity systems as applied to global food security.
- Identify diseases, pests and weeds that are the target of surveillance.
- Monitor plants and animals for signs of disease and pest infestation.
- Devise a biosecurity plan that is tailored to the needs of a specific area.
- Create solutions to dynamic complex problems in biosecurity by synthesizing information from a range of relevant data sources.
- Justify inferences and solutions to biosecurity issues to a range of audiences.
Subject Content
1. How farms and farm products are affected by microbes (diazotrophs, mycorrhizae, viruses, bacteria, fungi and nematodes), pests and weeds
2. Key concepts of epidemiology: the study of the distribution (frequency, pattern) and determinants (causes, risk factors) of disease-related states and events
3. Methods for diagnosis such as quantitative PCR as well as different sequencing and sensor technologies
4. The symbiotic relationships of microorganisms and insects with plants and animals and their use in biocontrol
5. Methods of control (cultural, chemical, biological, and genetics to breed resistant varieties) and their relative advantages and disadvantages
6. Data modelling and visualisation together with increased data availability for long-term decision making
7. The relevant legislation and authorities (Biosecurity Australia, AQIS, TGA etc.)
8. The strengths and weaknesses of current biosecurity systems
Assessment
The following table summarises the standard assessment tasks for this subject. Please note this is a guide only. Assessment tasks are regularly updated; where there is a difference, your Learning Guide takes precedence. | https://hbook.westernsydney.edu.au/subject-details/agri7001/
This time we are looking at the crossword clue: Yam, for one.
It's a 12-letter crossword puzzle definition. See the possibilities below.
Possible Answers: ROOT, TUBER.
Random information on the term “ROOT”:
A histogram is a graphical representation of the distribution of numerical data. It is an estimate of the probability distribution of a continuous variable (quantitative variable) and was first introduced by Karl Pearson. It is a kind of bar graph. To construct a histogram, the first step is to “bin” the range of values—that is, divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) must be adjacent, and are often (but are not required to be) of equal size.
If the bins are of equal size, a rectangle is erected over the bin with height proportional to the frequency — the number of cases in each bin. A histogram may also be normalized to display “relative” frequencies. It then shows the proportion of cases that fall into each of several categories, with the sum of the heights equaling 1.
However, bins need not be of equal width; in that case, the erected rectangle is defined to have its area proportional to the frequency of cases in the bin. The vertical axis is then not the frequency but frequency density — the number of cases per unit of the variable on the horizontal axis. Examples of variable bin width are displayed on Census bureau data below. | https://www.crossword-clues.com/clue/yam-for-one-crossword/ |
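To make the binning step concrete, here is a small illustrative sketch (not from the source page) that divides the range of values into equal-width, adjacent bins, counts how many values fall into each, and optionally normalizes the counts into relative frequencies whose heights sum to 1.

```python
# Build a simple equal-width histogram: bin the range, count values per bin,
# and optionally normalize so the heights sum to 1 (relative frequencies).
def histogram(values, num_bins, normalize=False):
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for v in values:
        index = min(int((v - lo) / width), num_bins - 1)  # place the maximum value in the last bin
        counts[index] += 1
    if normalize:
        return [c / len(values) for c in counts]
    return counts

data = [1, 2, 2, 3, 5, 7, 8, 8, 9, 10]
print(histogram(data, num_bins=3))                  # [4, 1, 5]
print(histogram(data, num_bins=3, normalize=True))  # [0.4, 0.1, 0.5]
```

For unequal-width bins, as described above, each count would instead be divided by its bin width to give a frequency density.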
Republican Presidential candidate Rick Santorum left the campaign trail last weekend when his 3-year-old daughter Isabella (Bella) was hospitalized. Bella has a serious genetic disorder called Trisomy 18, and was admitted to a Virginia Children’s Hospital with pneumonia in both lungs.
Bella is the youngest of the seven Santorum children. It was noticed at birth that she was “different” and at five days of age, Santorum and his wife Karen were given the diagnosis of Trisomy 18. They were also told that the life expectancy of children with Trisomy 18 was usually very short and that she would probably not live more than a year. Despite these predictions, the Santorums told doctors that “we were not going to concentrate on her dying, we're going to concentrate on her living, and do everything we can to help her.” Despite Bella's frequent hospitalizations, especially during her first year of life, Santorum remains upbeat:
Some people describe people like Bella as disabled children … I look at the joy, the simplicity and the love that she emits and it's clear to me that we're the disabled ones. Not her … she’s got it right.
Reports from later today indicate that Bella has "turned the corner" and it is hoped that she will be discharged home soon.
We wish her a speedy recovery.
What is a Trisomy?
Let's take a short walk down memory lane and review a little basic genetics:
Meiosis is a special type of cell division necessary for sexual reproduction. The cells produced by meiosis are called gametes which in humans refer to sperm and egg cells.
Meiosis differs from the "life-cycle" cell division of mitosis in two important respects:
- The chromosomes in meiosis undergo recombination, which shuffles the genes and produces a different genetic combination in each gamete, in contrast to mitosis, where each cell keeps the two separate copies of each chromosome (one received from each parent) unchanged.
- The outcome of meiosis is four (genetically unique) haploid cells, compared with the two (genetically identical) diploid cells produced from mitosis.
Meiosis begins with one diploid cell containing two copies of each chromosome (46 total) — one maternal and one paternal — and produces four haploid cells containing one copy of each chromosome (23 chromosomes). Each of the resulting chromosomes in the gamete cells is a unique mixture of maternal and paternal DNA, ensuring that offspring are genetically distinct from either parent.
Nondisjunction is the failure of chromosome pairs to separate properly during meiosis stage 1 or stage 2. It results in a cell with an imbalance of chromosomes, and the cell is said to be aneuploid.
If the chromosome pairs fail to separate properly during cell division, the egg or sperm may have a second copy of one of the chromosomes. If such a gamete results in fertilization and an embryo, the resulting embryo may also have an entire copy of the extra chromosome.
"Full trisomy" means that an entire extra chromosome has been copied. "Partial trisomy" means that there is an extra copy of part of a chromosome.
Trisomies can occur with any chromosome, but often result in miscarriage. For example, Trisomy 16 is the most common trisomy in humans, occurring in more than 1% of pregnancies. This condition, however, usually results in spontaneous miscarriage in the first trimester.
Of those trisomies that survive until birth, the most common is Trisomy 21 or Downs Syndrome. Trisomy 18, also known as Edwards Syndrome, is the next most frequent, followed by Trisomy 13 or Patau Syndrome.
What is Trisomy 18?
Trisomy 18 is a relatively common genetic disease, occurring in 1 out of every 5000 live births. It is three times more common in girls than boys.
Most cases of trisomy 18 are not inherited, but occur as random events during meiosis. This means that parents have done nothing before or during pregnancy to cause this disorder in their child.
Approximately 5% of people with trisomy 18 have an extra copy of chromosome 18 in only some of the body's cells. In these people, the condition is called mosaic trisomy 18. The severity of mosaic trisomy 18 depends on the type and number of cells that have the extra chromosome. The development of individuals with this form of trisomy 18 may range from normal to severely affected.
Very rarely, the long (q) arm of chromosome 18 becomes attached (translocated) to another chromosome during the formation of gametes or very early in embryonic development. Affected people have two copies of chromosome 18, plus extra material from chromosome 18 attached to another chromosome. If only part of the q arm is present in three copies, the physical signs of translocation trisomy 18 may be different from those typically seen in trisomy 18. If the entire q arm is present in three copies, individuals may be as severely affected as if they had three full copies of chromosome 18.
What are the features of Trisomy 18?
The material in the extra chromosome interferes with normal development. These developmental issues are also associated with medical complications that are potentially life-threatening in the early months and years of life. Fifty percent of babies who are carried to term will be stillborn. Only 10% of children with Trisomy 18 survive to their first birthday.
The features of Trisomy 18 include:
- Clenched hands
- Crossed legs (preferred position)
- Feet with a rounded bottom (rocker-bottom feet)
- Low birth weight
- Low-set ears
- Mental deficiency
- Small head (microcephaly)
- Small jaw (micrognathia)
- Underdeveloped fingernails
- Undescended testicle
- Unusual shaped chest (pectus carinatum)
Other signs include eye abnormalities such as coloboma of the iris, umbilical or inguinal hernia, and diastasis recti.
There are often signs of congenital heart disease, the most common being atrial septal defect (ASD), ventricular septal defect (VSD), or patent ductus arteriosus (PDA).
Fidel during a visit to Oran, Algeria, in 1972. Photograph: Prensa Latina/Reuters
His appearances at the United Nations were the highlight of the General Assembly season. Here he is, addressing the UN as president of the Non-Aligned Movement in 1979. Photograph: Prensa Latina/Reuters
Who knew Fidel could play football? Soccer legend Diego Maradona and Fidel play around in La Havana in 2005. Photograph: Canal 13/Reuters
Castro with schoolchildren at the inauguration of a school in Havana in 2013. There are no streets named after him in Cuba, no statues, but children re-enact his march towards Havana in 1959 every year. Photograph: Cubadebate/Reuters
Five years before a botched stomach surgery forced him out of office, Fidel traveled to South Africa and met another legendary revolutionary: With Nelson Mandela in Houghton, Johannesburg, in 2001. Photograph: Chris Kotze/Reuters
Fidel and younger brother Raul, left, attend the 20th anniversary of the Triumph of the Revolution at Revolution Square in Havana in February 1979. Raul Castro has led Cuba for the last eight years and dismantled some of his brother's old style Communist policies. Photograph: Prensa Latina/Reuters
Castro with Jesse Jackson, left, at Havana's Jose Marti airport in June 1984. Reverend Jackson was a rare American public figure to travel to the island. Americans were forbidden to travel to Cuba for over half a century because of the US economic embargo against its tiny Communist neighbour. After President Barack Obama ended the trade embargo and normalised relations, Americans are now free to fly to Cuba, which is just 330 miles by air from Miami. Photograph: Prensa Latina/Reuters
Right to left: Fidel, with his admirers, Bolivian President Evo Morales and Venezuelan President Nicolas Maduro in a van in Havana, August 13, 2015, his 89th birthday. Throughout his tenure at the helm of the Cuban nation, El Commandate tried to kindle the flame of Revolution in Latin America. Photograph: Agencia Boliviana de Informacion/Reuters
Pope Francis, the Argentine born pontiff, with Fidel in Havana in 2015. The Pope is said to have played a key role in mediating between Cuba and its giant neighbour, eventually ending the hostilities between the two nations. Photograph: Alex Castro/AIN/Reuters
Fidel at a cultural gala to celebrate his 90th birthday in Havana, August 13, 2016. Photograph: Miraflores Palace/Reuters
Fidel in glasses with his legendary comrade Ernesto Guevara as Che plays golf at Colina Villareal in Havana. Che was murdered in Bolivia in 1967 while trying to spread revolution. Photograph: Prensa Latina/Reuters
Fidel casts his ballot at a polling station in Havana on February 3, 2013. This was his first extended public appearance since 2010. He had voted from his home in three previous elections since taking ill in 2006 and ceding power to his brother Raul two years later. Photograph: Cubadebate/Reuters
Fidel and Che -- who was Argentinian and a doctor -- at a meeting. Photograph: Prensa Latina/Reuters
Fidel and then Venezuelan leader Hugo Chavez -- one of his biggest admirers -- during a baseball game between their two countries at Barquisimeto's baseball stadium on October 29, 2000. Venezuela's financial largesse from its oil revenues kept Cuba afloat after Havana's old patron, the Soviet Union, collapsed in 1991. Photograph: Reuters
Fidel and Che with Yuri Gagarin, the first man in space, centre. Havana, May 26, 1961. Photograph: Prensa Latina/Reuters
Russian President Vladimir Putin with Fidel in Havana. Despite his mercurial nature, Soviet leaders from Khrushchev to Brezhnev to Gorbachev kept Fidel in good humour, recognising the propaganda value Communist Cuba presented the USSR during the Cold War. Photograph: Cubadebate/Reuters
Fidel and his wife Dalia Soto Del Valle (in the red dress) pose for a photograph with the 'Cuban Five': Ramon Labanino, centre, front; Fernando Gonzalez, left; Gerardo Hernandez, second left; Antonio Guerrero, third right; and Rene Gonzalez, second right. Fidel met with all five Cuban spies who returned home as heroes after serving long prison terms in the United States. Photograph: Cubadebate/Reuters
Three-hour speeches were routine in his heyday. Fidel could go on and on, rhapsodising about the beauty of revolution, even though it was apparent to his people that the Cuban revolution had failed to deliver what it had promised. Even after he was sick and infirm, Fidel still turned up on occasions like the 50th anniversary of the creation of the Committees for the Defence of the Revolution -- Havana, September 28, 2010 -- to deliver his message that the revolution must not die even if one day he did. The CDRs are assigned to every block of houses in Cuba to provide medical assistance if needed. The CDRs are also an Orwellian Big Brother apparatus to spy on the Cuban people and bring dissenters to the Castro regime's harsh notice. Photograph: Desmond Boylan/Reuters
French President Francois Hollande with Fidel in Havana, May 11, 2015. Photograph: Alex Castro/Cubadebate/Reuters
Fidel during a hunting trip in Romania in May 1972. During the Cold War, he was feted like a hero in the USSR dominated Communist bloc, his lustre as the last revolutionary survived the demise of the Soviet Union. Photograph: Prensa Latina/Reuters
Fidel was often seen smoking what else a Cuban cigar. Here he is, lecturing the press in Havana during then US Senator Charles McGovern's visit in May 1975. Photograph: Prensa Latina/Reuters
Mikhail Gorbachev with Fidel in Havana, April 3, 1989. Fidel was sceptical and suspicious of the Soviet leader's policies of perestroika and glasnost, realising correctly that the totalitarian edifice of the USSR would neither survive reformation nor openness. Photograph: Gary Hershorn/Reuters
Fidel in Havana in 2012, dismissing an article published in Cuba's tightly controlled State-run press claiming that he was dead or near death. He accused Cuba's enemies of spreading 'stupidities' about him, particularly a report in a Spanish newspaper that said he had suffered a massive stroke and was in a vegetative state. Photograph: Alex Castro/Cubadebate/Reuters
Fidel and Raul Castro at the closing ceremony of the sixth Cuban Communist Party Congress in Havana in 2011. Photograph: Desmond Boylan/Reuters | |
NBA Draft 2019: What is motivating the new age players?
As we prepare for the 2019 NBA Draft, the buzz and excitement of an expected busy off-season, ignited by the player selections that will fill NBA rosters next year, will reach new heights in a few hours. With all the media attention and public anticipation, how do these fresh-faced superstars stay grounded, focused and motivated as they prepare for their newest and biggest challenge, entering the premier basketball competition, and where do coaches fit into these scenarios?
Let’s look at some of the comments made by players this week before comparing them against the research. Expected #1 draft pick, Duke’s Zion Williamson, dubbed the hottest draft pick since LeBron James, said recently:
“I don’t play basketball for the money; it was the last thing I thought of when I was a little kid….When I was a little kid, I looked at my mom, stepdad and said, ‘I want to be an NBA player,’ just because I love to play the game of basketball like 24/7…..I’m going to still be playing the game I love. I feel like with the circle I have around me, they’ll keep it the same as it’s always been, just probably more people calling my name
After the excitement of the Toronto Raptors' victory and the most Canadian players ever in an NBA Draft, Canadian basketballer Nickeil Alexander-Walker has been quoted as saying:
When you get to this level, motivation comes from within. It shouldn’t really come from other people because if you need other people to motivate you at this point, you should probably choose another profession…..I’m trying to be as humble as possible because I know at any moment it can be taken from you. I’m just trying to enjoy it.
The idea that “talent requires trauma” is supported by Ja Morant’s statement. The mid-major Murray State Racers player discussed how negative comments and emotions have driven him to become a touted top-3 pick:
I really like the negative energy. The ‘he hasn’t played against nobody. He’s too small. He can’t shoot.’ I love, like, negative energy motivates me. It really doesn’t bother me because my dad was my first hater. So if I can take it from him, I can take it from anybody
So, with internal, external, positive and negative feelings all seemingly fuelling players' drive and motivation, how can top coaches such as Steve Kerr, Gregg Popovich and Doc Rivers motivate these new players entering the league? Can they influence or design an environment which enhances players' focus to reach the lofty heights expected, or build resilience to push through the first few years of finding their feet? Or is their drive solely internal, players answering their passions and intrinsic motivations?
Intrinsic motivation leads to greater persistence, improved performance and enhanced well-being in a physical setting, as found in many lines of research, such as Angela Duckworth's work on grit, in which working towards singularly important goals is the hallmark of high achievers in every domain. While passion and intrinsic motivation stem from the innate psychological need for competence and represent the prototype of self-determined behaviour, self-determined extrinsic motivators, that is, extrinsic motivators which have been internally rationalised, become activities carried out because they are important and concordant with one's values (Mageau, 2003); these types of motivation can be seen in Ja Morant's statement. Self-determined forms of motivation also result in optimal behaviour, leading to peak performance and persistence (perseverance in this example) (Deci and Ryan, 2008). These are important factors, remembering that deliberate practice can take supreme effort and concentration, with top performers only able to complete between 3-5 hours per day, a timeline which may be foreign to newly professional athletes.
As Coaches Kerr, Popovich and Rivers most likely understand, the challenge of successful coaching is acknowledging social interactive dilemmas within individual and team goal setting and development, offering suitable scenarios and choices with all members' involvement, and collaboratively dealing with matters as opposed to eradicating them. Past research by Mageau and Vallerand regards the "actions of coaches as (possibly) the most critical motivational influences within the sport setting". Coaching should be recognised as a dynamic educational relationship, where the coach can serve players' goals and development but both sides have an investment of will capital, with human initiative and intentionality dedicated to showing commitment towards goals and relationships. Top NBA coaches could motivate these newly drafted modern-day athletes by offering autonomy-supportive practices and fostering engagement and drive through understanding and supporting individuals' intrinsic motivations. Ultimately, high performance coaching environments need to adopt and offer players the ingredients for genuine motivation: mastery, autonomy and purpose. Amorose found that "the more athletes felt autonomous, competent and have sense of relatedness, the more reasons for participating were self-determined in future" (Amorose, 2007).
So how can the coaches involved find and mix these ingredients and push these players to elite levels? How should our coaches act to help influence or support these young athletes? Offering autonomy, the capacity to decide for oneself and pursue a course of action in one's sporting or working life, and being autonomy-supportive to our young players is key. These actions include acknowledging and providing choice within specific limits and rules, providing a rationale for tasks, limits and rules, inquiring about and recognising others' feelings, allowing opportunities to take initiative and complete independent work, providing non-controlling feedback, avoiding over-control, controlling statements and tangible rewards, and preventing ego involvement from taking place. This will help offer the engagement, purpose and impact these emerging adults are searching for as they start to understand themselves and their position in sport and life in broad, existential terms.
In previous sport-specific research, Pelletier found that changes to people's perceptions of competence and self-determined motivators should increase intrinsic motivation and identification while decreasing introjection, external motivators and amotivation in athletes (Pelletier, 1995). Theories around self-determination such as Basic Psychological Needs Theory (BPNT) address the degree to which people's behaviour in a domain is governed by self-determined motivators (Adie, 2010). The basic needs requiring fulfilment are competence, autonomy and relatedness. Vallerand and Mageau's research has shown that intrinsic motivation and self-determined extrinsic motivators are necessary ingredients for an athlete's optimal functioning (Mageau, 2003). Deci and Ryan's research found that intrinsic motivation is experienced as a consequence of feeling competent and self-determined. Intrinsic motivation leads to greater persistence, improved performance and enhanced well-being in a physical setting.
Developing expert status requires interest and motivation within the sport to increase along with proficiency of skills, which is in line with another popular sport in America: USA Hockey's Ken Mantel and his mantra of "play, love and excel", meaning you must love the sport before mastering it. Offering a playful, athlete-focused atmosphere with games as a significant portion of the practice environment, including activity-based games, skill-based games and game-situational role-based games which encourage peer teaching opportunities and autonomy in playing practices, will tick off many of the autonomy, relatedness and competency boxes. Collective and collaborative development will ensure the newly introduced players are heavily involved in their own development and practice design while rising, or being raised, to the standards of NBA players and teams. This allows the head coaches to focus on the relationship-building and testing elements; our role as coaches is to make players comfortable with being uncomfortable, help them grow, and encourage them to continually think for themselves, make decisions as part of a group and act independently of coaches. We as coaches in highly dynamic sports such as basketball need to develop close and meaningful connections to enable us to offer accurate feedback and enthusiasm on the run. I believe we as coaches need to address our fundamental purposes in the game: understanding the needs of players as individuals, ensuring players' basic skills are addressed and developed, and expanding their imagination and motivation to succeed within the sport.
So again, how should our coaches act to help influence or support these young athletes at a critical stage of development, and can we offer them what they want or need? Offer them exactly what we want in return: passion for learning and application, meaningful communication and relationships, support to choose and make autonomous decisions, and the freedom to follow instinct while we offer honest feedback on performance and support for a growth mindset; all things we must educate, support and train in others. This will allow our newly integrated players to look past media attention and public anticipation and focus on what's important to them and the people closest to them.
In Spermatophyte plants, seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their seeds, including both abiotic vectors, such as the wind, and living (biotic) vectors such as birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus. These modes are typically inferred based on adaptations, such as wings or fleshy fruit. However, this simplified view may ignore complexity in dispersal. Plants can disperse via modes without possessing the typical associated adaptations and plant traits may be multifunctional.
Seed dispersal is likely to have several benefits for different plant species. Seed survival is often higher away from the parent plant. This higher survival may result from the actions of density-dependent seed and seedling predators and pathogens, which often target the high concentrations of seeds beneath adults. Competition with adult plants may also be lower when seeds are transported away from their parent.
Seed dispersal also allows plants to reach specific habitats that are favorable for survival, a hypothesis known as directed dispersal. For example, Ocotea endresiana (Lauraceae) is a tree species from Latin America which is dispersed by several species of birds, including the three-wattled bellbird. Male bellbirds perch on dead trees in order to attract mates, and often defecate seeds beneath these perches where the seeds have a high chance of survival because of high light conditions and escape from fungal pathogens. In the case of fleshy-fruited plants, seed-dispersal in animal guts (endozoochory) often enhances the amount, the speed, and the asynchrony of germination, which can have important plant benefits.
Seeds dispersed by ants (myrmecochory) are not only dispersed short distances but are also buried underground by the ants. These seeds can thus avoid adverse environmental effects such as fire or drought, reach nutrient-rich microsites and survive longer than other seeds. These features are peculiar to myrmecochory, which may thus provide additional benefits not present in other dispersal modes.
Seed dispersal may also allow plants to colonize vacant habitats and even new geographic regions. Dispersal distances and deposition sites depend on the movement range of the disperser, and longer dispersal distances are sometimes accomplished through diplochory, the sequential dispersal by two or more different dispersal mechanisms. In fact, recent evidence suggests that the majority of seed dispersal events involves more than one dispersal phase.
Seed dispersal is sometimes split into autochory (when dispersal is attained using the plant's own means) and allochory (when obtained through external means).
Long-distance seed dispersal (LDD) is a type of spatial dispersal that is currently defined by two forms, proportional and actual distance. A plant's fitness and survival may heavily depend on this method of seed dispersal depending on certain environmental factors. The first form of LDD, proportional distance, measures the percentage of seeds (1% out of total number of seeds produced) that travel the farthest distance out of a 99% probability distribution. The proportional definition of LDD is in actuality a descriptor for more extreme dispersal events. An example of LDD would be that of a plant developing a specific dispersal vector or morphology in order to allow for the dispersal of its seeds over a great distance. The actual or absolute method identifies LDD as a literal distance. It classifies 1 km as the threshold distance for seed dispersal. Here, threshold means the minimum distance a plant can disperse its seeds and have it still count as LDD. There is a second, unmeasurable, form of LDD besides proportional and actual. This is known as the non-standard form. Non-standard LDD is when seed dispersal occurs in an unusual and difficult-to-predict manner. An example would be a rare or unique incident in which a normally-lemur-dependent deciduous tree of Madagascar was to have seeds transported to the coastline of South Africa via attachment to a mermaid purse (egg case) laid by a shark or skate. A driving factor for the evolutionary significance of LDD is that it increases plant fitness by decreasing neighboring plant competition for offspring. However, it is still unclear today as to how specific traits, conditions and trade-offs (particularly within short seed dispersal) affect LDD evolution.
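To make the two definitions above concrete, here is a hedged sketch (not from the article) of how one might flag long-distance dispersal events from a list of measured dispersal distances: the proportional form keeps the farthest-travelling ~1% of seeds (beyond the 99th percentile of the observed distances), while the actual form applies the fixed 1 km threshold.

```python
# Illustrative only: identify LDD events under both definitions.
import statistics

def ldd_events(distances_m, absolute_threshold_m=1000.0):
    # 99th percentile of the observed dispersal-distance distribution
    cutoff_99 = statistics.quantiles(distances_m, n=100, method="inclusive")[98]
    proportional = [d for d in distances_m if d > cutoff_99]        # farthest ~1% of seeds
    actual = [d for d in distances_m if d >= absolute_threshold_m]  # at least 1 km
    return proportional, actual
```

A real dispersal-kernel analysis would fit a probability distribution to the distances rather than use the empirical percentile, but the thresholding logic is the same.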
Autochorous plants disperse their seed without any help from an external vector, as a result this limits plants considerably as to the distance they can disperse their seed. Two other types of autochory not described in detail here are blastochory, where the stem of the plant crawls along the ground to deposit its seed far from the base of the plant, and herpochory (the seed crawls by means of trichomes and changes in humidity).
Barochory or the plant use of gravity for dispersal is a simple means of achieving seed dispersal. The effect of gravity on heavier fruits causes them to fall from the plant when ripe. Fruits exhibiting this type of dispersal include apples, coconuts and passionfruit and those with harder shells (which often roll away from the plant to gain more distance). Gravity dispersal also allows for later transmission by water or animal.
Ballochory is a type of dispersal where the seed is forcefully ejected by explosive dehiscence of the fruit. Often the force that generates the explosion results from turgor pressure within the fruit or due to internal tensions within the fruit. Some examples of plants which disperse their seeds autochorously include: Arceuthobium spp., Cardamine hirsuta, Ecballium spp., Euphorbia heterophylla, Geranium spp., Impatiens spp., Sucrea spp., Raddia spp. and others. An exceptional example of ballochory is Hura crepitans — this plant is commonly called the dynamite tree due to the sound of the fruit exploding. The explosions are powerful enough to throw the seed up to 100 meters.
Witch hazel uses ballistic dispersal without explosive mechanisms by simply squeezing the seeds out at 28 mph.
Allochory refers to any of many types of seed dispersal where a vector or secondary agent is used to disperse seeds. These vectors may include wind, water, animals or others.
Wind dispersal (anemochory) is one of the more primitive means of dispersal. Wind dispersal can take on one of two primary forms: seeds or fruits can float on the breeze or, alternatively, they can flutter to the ground. The classic examples of these dispersal mechanisms, in the temperate northern hemisphere, include dandelions, which have a feathery pappus attached to their fruits (achenes) and can be dispersed long distances, and maples, which have winged fruits (samaras) that flutter to the ground.
An important constraint on wind dispersal is the need for abundant seed production to maximize the likelihood of a seed landing in a site suitable for germination. Some wind-dispersed plants, such as the dandelion, can adjust their morphology in order to increase or decrease the rate of germination. There are also strong evolutionary constraints on this dispersal mechanism. For instance, Cody and Overton (1996) found that species in the Asteraceae on islands tended to have reduced dispersal capabilities (i.e., larger seed mass and smaller pappus) relative to the same species on the mainland. Also, Helonias bullata, a species of perennial herb native to the United States, evolved to utilize wind dispersal as the primary seed dispersal mechanism; however, limited wind in its habitat prevents the seeds from successfully dispersing away from the parent, resulting in clustered populations. Reliance on wind dispersal is common among many weedy or ruderal species. Unusual mechanisms of wind dispersal include tumbleweeds, where the entire plant (except for the roots) is blown by the wind. Physalis fruits, when not fully ripe, may sometimes be dispersed by wind due to the space between the fruit and the covering calyx which acts as an air bladder.
Many aquatic (water dwelling) and some terrestrial (land dwelling) species use hydrochory, or seed dispersal through water. Seeds can travel for extremely long distances, depending on the specific mode of water dispersal; this especially applies to fruits which are waterproof and float on water.
The water lily is an example of such a plant. Water lilies' flowers make a fruit that floats in the water for a while and then drops down to the bottom to take root on the floor of the pond. The seeds of palm trees can also be dispersed by water. If they grow near oceans, the seeds can be transported by ocean currents over long distances, allowing the seeds to be dispersed as far as other continents.
Mangrove trees grow directly out of the water; when their seeds are ripe they fall from the tree and grow roots as soon as they touch any kind of soil. During low tide, they might fall in soil instead of water and start growing right where they fell. If the water level is high, however, they can be carried far away from where they fell. Mangrove trees often make little islands as dirt and detritus collect in their roots, making little bodies of land.
Animals can disperse plant seeds in several ways, all named zoochory. Seeds can be transported on the outside of vertebrate animals (mostly mammals), a process known as epizoochory. Plant species transported externally by animals can have a variety of adaptations for dispersal, including adhesive mucus, and a variety of hooks, spines and barbs. A typical example of an epizoochorous plant is Trifolium angustifolium, a species of Old World clover which adheres to animal fur by means of stiff hairs covering the seed. Epizoochorous plants tend to be herbaceous plants, with many representative species in the families Apiaceae and Asteraceae. However, epizoochory is a relatively rare dispersal syndrome for plants as a whole; the percentage of plant species with seeds adapted for transport on the outside of animals is estimated to be below 5%. Nevertheless, epizoochorous transport can be highly effective if seeds attach to wide-ranging animals. This form of seed dispersal has been implicated in rapid plant migration and the spread of invasive species.
Seed dispersal via ingestion and defecation by vertebrate animals (mostly birds and mammals), or endozoochory, is the dispersal mechanism for most tree species. Endozoochory is generally a coevolved mutualistic relationship in which a plant surrounds seeds with an edible, nutritious fruit as a good food resource for animals that consume it. Such plants may advertise the presence of a food resource by using colour. Birds and mammals are the most important seed dispersers, but a wide variety of other animals, including turtles, fish, and insects (e.g. tree wētā and scree wētā), can transport viable seeds. The exact percentage of tree species dispersed by endozoochory varies between habitats, but can range to over 90% in some tropical rainforests. Seed dispersal by animals in tropical rainforests has received much attention, and this interaction is considered an important force shaping the ecology and evolution of vertebrate and tree populations. In the tropics, large animal seed dispersers (such as tapirs, chimpanzees, black-and-white colobus, toucans and hornbills) may disperse large seeds with few other seed dispersal agents. The extinction of these large frugivores from poaching and habitat loss may have negative effects on the tree populations that depend on them for seed dispersal and reduce genetic diversity. A variation of endozoochory is regurgitation of seeds rather than their passage in faeces after passing through the entire digestive tract. Some seeds can also attach themselves to the feathers and hair of birds and mammals, which for certain plants is the main method of dispersal.
Seed dispersal by ants ( myrmecochory ) is a dispersal mechanism of many shrubs of the southern hemisphere or understorey herbs of the northern hemisphere. Seeds of myrmecochorous plants have a lipid-rich attachment called the elaiosome, which attracts ants. Ants carry such seeds into their colonies, feed the elaiosome to their larvae and discard the otherwise intact seed in an underground chamber. Myrmecochory is thus a coevolved mutualistic relationship between plants and seed-disperser ants. Myrmecochory has independently evolved at least 100 times in flowering plants and is estimated to be present in at least 11 000 species, but likely up to 23 000 or 9% of all species of flowering plants. Myrmecochorous plants are most frequent in the fynbos vegetation of the Cape Floristic Region of South Africa, the kwongan vegetation and other dry habitat types of Australia, dry forests and grasslands of the Mediterranean region and northern temperate forests of western Eurasia and eastern North America, where up to 30–40% of understorey herbs are myrmecochorous. Seed dispersal by ants is a mutualistic relationship and benefits both the ant and the plant.
Seed predators, which include many rodents (such as squirrels) and some birds (such as jays) may also disperse seeds by hoarding the seeds in hidden caches. The seeds in caches are usually well-protected from other seed predators and if left uneaten will grow into new plants. In addition, rodents may also disperse seeds via seed spitting due to the presence of secondary metabolites in ripe fruits. Finally, seeds may be secondarily dispersed from seeds deposited by primary animal dispersers, a process known as diplochory. For example, dung beetles are known to disperse seeds from clumps of feces in the process of collecting dung to feed their larvae.
Other types of zoochory are chiropterochory (by bats), malacochory (by molluscs, mainly terrestrial snails), ornithochory (by birds) and saurochory (by non-bird sauropsids). Zoochory can occur in more than one phase, for example through diploendozoochory, where a primary disperser (an animal that ate a seed) along with the seeds it is carrying is eaten by a predator that then carries the seed further before depositing it.
Dispersal by humans (anthropochory) used to be seen as a form of dispersal by animals. Its most widespread and intense cases account for the planting of much of the land area on the planet, through agriculture. In this case, human societies form a long-term relationship with plant species, and create conditions for their growth.
Recent research points out that human dispersers differ from animal dispersers by having a much higher mobility, based on the technical means of human transport. On the one hand, dispersal by humans also acts on smaller, regional scales and drives the dynamics of existing biological populations. On the other hand, dispersal by humans may act on large geographical scales and lead to the spread of invasive species.
Humans may disperse seeds by many various means and some surprisingly high distances have been repeatedly measured. Examples are: dispersal on human clothes (up to 250 m), on shoes (up to 5 km), or by cars (regularly ~250 m, single cases > 100 km). Seed dispersal by cars can be a form of unintentional transport of seeds by humans, which can reach far distances, greater than other conventional methods of dispersal. Cars that carry soil are able to contain viable seeds; a study by Dunmail J. Hodkinson and Ken Thompson found that the most common seeds carried by vehicle were broadleaf plantain (Plantago major), annual meadow grass (Poa annua), rough meadow grass (Poa trivialis), stinging nettle (Urtica dioica) and wild chamomile (Matricaria discoidea).
Deliberate seed dispersal also occurs as seed bombing. This has risks, as unsuitable provenance may introduce genetically unsuitable plants to new environments.
Seed dispersal has many consequences for the ecology and evolution of plants. Dispersal is necessary for species migrations, and in recent times dispersal ability is an important factor in whether or not a species transported to a new habitat by humans will become an invasive species. Dispersal is also predicted to play a major role in the origin and maintenance of species diversity. For example, myrmecochory increased the rate of diversification more than twofold in plant groups in which it has evolved because myrmecochorous lineages contain more than twice as many species as their non-myrmecochorous sister groups. Dispersal of seeds away from the parent organism has a central role in two major theories for how biodiversity is maintained in natural ecosystems, the Janzen-Connell hypothesis and recruitment limitation. Seed dispersal is essential in allowing forest migration of flowering plants. It can be influenced by the production of different fruit morphs in plants, a phenomenon known as heterocarpy. These fruit morphs are different in size and shape and have different dispersal ranges, which allows seeds to be dispersed for varying distances and adapt to different environments.
In addition, the speed and direction of wind are highly influential in the dispersal process and in turn the deposition patterns of floating seeds in stagnant water bodies. The transportation of seeds is led by the wind direction. This affects colonization on the banks of a river or in wetlands adjacent to streams, depending on the distinct wind directions. The wind dispersal process can also affect connections between water bodies. Essentially, wind plays a larger role in the dispersal of waterborne seeds over short periods of time, days and seasons, but the ecological process becomes balanced over a period of several years. The time period over which the dispersal occurs is essential when considering the consequences of wind on the ecological process.
A seed is an embryonic plant enclosed in a protective outer covering, along with a food reserve. The formation of the seed is part of the process of reproduction in seed plants, the spermatophytes, including the gymnosperm and angiosperm plants.
Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples include most vascular plants engaged in mutualistic interactions with mycorrhizae, flowering plants being pollinated by animals, vascular plants being dispersed by animals, and corals with zooxanthellae, among many others. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the "expense" of the other.
Pollination is the transfer of pollen from an anther of a plant to the stigma of a plant, later enabling fertilisation and the production of seeds, most often by an animal or by wind. Pollinating agents can be animals such as insects, birds, and bats; water; wind; and even plants themselves, when self-pollination occurs within a closed flower. Pollination often occurs within a species. When pollination occurs between species it can produce hybrid offspring in nature and in plant breeding work.
In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species, or of different species. These effects may be short-term, like pollination and predation, or long-term; both often strongly influence the evolution of the species involved. A long-term interaction is called a symbiosis. Symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be indirect, through intermediaries such as shared resources or common enemies. This type of relationship can be shown by net effect based on individual effects on both organisms arising out of relationship.
A frugivore is an animal that thrives mostly on raw fruits or succulent fruit-like produce of plants such as roots, shoots, nuts and seeds. Approximately 20% of mammalian herbivores eat fruit. Frugivores are highly dependent on the abundance and nutritional composition of fruits. Frugivores can benefit or hinder fruit-producing plants by either dispersing or destroying their seeds through digestion. When both the fruit-producing plant and the frugivore benefit by fruit-eating behavior the interaction is a form of mutualism.
Biological dispersal refers to both the movement of individuals from their birth site to their breeding site, as well as the movement from one breeding site to another . Dispersal is also used to describe the movement of propagules such as seeds and spores. Technically, dispersal is defined as any movement that has the potential to lead to gene flow. The act of dispersal involves three phases: departure, transfer, settlement and there are different fitness costs and benefits associated with each of these phases. Through simply moving from one habitat patch to another, the dispersal of an individual has consequences not only for individual fitness, but also for population dynamics, population genetics, and species distribution. Understanding dispersal and the consequences both for evolutionary strategies at a species level, and for processes at an ecosystem level, requires understanding on the type of dispersal, the dispersal range of a given species, and the dispersal mechanisms involved.
Myrmecophytes are plants that live in a mutualistic association with a colony of ants. There are over 100 different genera of myrmecophytes. These plants possess structural adaptations that provide ants with food and/or shelter. These specialized structures include domatia, food bodies, and extrafloral nectaries. In exchange for food and shelter, ants aid the myrmecophyte in pollination, seed dispersal, gathering of essential nutrients, and/or defense. Specifically, domatia adapted to ants may be called myrmecodomatia.
Myrmecochory is seed dispersal by ants, an ecologically significant ant-plant interaction with worldwide distribution. Most myrmecochorous plants produce seeds with elaiosomes, a term encompassing various external appendages or "food bodies" rich in lipids, amino acids, or other nutrients that are attractive to ants. The seed with its attached elaiosome is collectively known as a diaspore. Seed dispersal by ants is typically accomplished when foraging workers carry diaspores back to the ant colony, after which the elaiosome is removed or fed directly to ant larvae. Once the elaiosome is consumed, the seed is usually discarded in underground middens or ejected from the nest. Although diaspores are seldom distributed far from the parent plant, myrmecochores also benefit from this predominantly mutualistic interaction through dispersal to favourable locations for germination, as well as escape from seed predation.
In the flowering plants, an ovary is a part of the female reproductive organ of the flower or gynoecium. Specifically, it is the part of the pistil which holds the ovule(s) and is located above or below or at the point of connection with the base of the petals and sepals. The pistil may be made up of one carpel or of several fused carpels, and therefore the ovary can contain part of one carpel or parts of several fused carpels. Above the ovary is the style and the stigma, which is where the pollen lands and germinates to grow down through the style to the ovary, and, for each individual pollen grain, to fertilize one individual ovule. Some wind pollinated flowers have much reduced and modified ovaries.
Elaiosomes are fleshy structures that are attached to the seeds of many plant species. The elaiosome is rich in lipids and proteins, and may be variously shaped. Many plants have elaiosomes that attract ants, which take the seed to their nest and feed the elaiosome to their larvae. After the larvae have consumed the elaiosome, the ants take the seed to their waste disposal area, which is rich in nutrients from the ant frass and dead bodies, where the seeds germinate. This type of seed dispersal is termed myrmecochory from the Greek "ant" (myrmex) and "circular dance" (khoreíā). This type of symbiotic relationship appears to be mutualistic, more specifically dispersive mutualism according to Ricklefs, R.E. (2001), as the plant benefits because its seeds are dispersed to favorable germination sites, and also because it is planted by the ants.
A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering plants. The biological function of a flower is to facilitate reproduction, usually by providing a mechanism for the union of sperm with eggs. Flowers may facilitate outcrossing resulting from cross-pollination or allow selfing when self-pollination occurs.
Seed predation, often referred to as granivory, is a type of plant-animal interaction in which granivores feed on the seeds of plants as a main or exclusive food source, in many cases leaving the seeds damaged and not viable. Granivores are found across many families of vertebrates as well as invertebrates ; thus, seed predation occurs in virtually all terrestrial ecosystems. Seed predation is commonly divided into two distinctive temporal categories, pre-dispersal and post-dispersal predation, which affect the fitness of the parental plant and the dispersed offspring, respectively. Mitigating pre- and post-dispersal predation may involve different strategies. To counter seed predation, plants have evolved both physical defenses and chemical defenses. However, as plants have evolved seed defenses, seed predators have adapted to plant defenses. Thus, many interesting examples of coevolution arise from this dynamic relationship.
Fruit anatomy is the plant anatomy of the internal structure of fruit. Fruits are the mature ovary or ovaries of one or more flowers. They are found in three main anatomical categories: aggregate fruits, multiple fruits, and simple fruits. Aggregate fruits are formed from a single compound flower and contain many ovaries or fruitlets. Examples include raspberries and blackberries. Multiple fruits are formed from the fused ovaries of multiple flowers or inflorescence. Examples include fig, mulberry, and pineapple.
In botany, a diaspore is a plant dispersal unit consisting of a seed or spore plus any additional tissues that assist dispersal. In some seed plants, the diaspore is a seed and fruit together, or a seed and elaiosome. In a few seed plants, the diaspore is most or all of the plant, and is known as a tumbleweed.
A dispersal vector is an agent of biological dispersal that moves a dispersal unit, or organism, away from its birth population to another location or population in which the individual will reproduce. These dispersal units can range from pollen to seeds to fungi to entire organisms.
Simarouba amara is a species of tree in the family Simaroubaceae, found in the rainforests and savannahs of South and Central America and the Caribbean. It was first described by Aubl. in French Guiana in 1775 and is one of six species of Simarouba. The tree is evergreen, but produces a new set of leaves once a year. It requires relatively high levels of light to grow and grows rapidly in these conditions, but lives for a relatively short time. In Panama, it flowers during the dry season in February and March, whereas in Costa Rica, where there is no dry season it flowers later, between March and July. As the species is dioecious, the trees are either male or female and only produce male or female flowers. The small yellow flowers are thought to be pollinated by insects, the resulting fruits are dispersed by animals including monkeys, birds and fruit-eating bats and the seeds are also dispersed by leaf cutter ants.
Acromyrmex striatus is a species of the leaf-cutter ants found in the Neotropics.
Platypodium elegans, the graceful platypodium, is a large leguminous tree found in the Neotropics that forms part of the forest canopy. It was first described by Julius Rudolph Theodor Vogel in 1837 and is the type species of the genus. The tree has been known to grow up to 30 metres in height and have a trunk with a diameter up to 1 m at breast height. Its trunk has large holes in it, sometimes making it possible to see through the trunk. The holes provide a habitat for giant damselflies and other insects both when alive and once the tree has died and fallen over. It has compound leaves each of which is made up of 10–20 leaflets. Three new chemical compounds have been isolated from the leaves and they form part of the diet of several monkeys and the squirrel Sciurus ingrami. In Panama it flowers from April to June, the flowers contain only four ovules, but normally only one of these reaches maturity forming a winged seed pod around 10 cm long and weighing 2 g. During the dry season around a year after the flowers are fertilised, the seeds are dispersed by the wind and the tree loses it leaves. The seeds are eaten by agoutis and by bruchid beetle larvae. The majority of seedlings are killed by damping off fungi in the first few months of growth, with seedlings that grow nearer the parent trees being more likely to die. The seedlings are relatively unable to survive in deep shade compared to other species in the same habitat. Various epiphytes are known to grow on P. elegans with the cactus Epiphyllum phyllanthus being the most abundant in Panama. Despite having holes in its trunk which should encourage debris and seeds to collect, hemiepiphytes are relatively uncommon, meaning that animals are not attracted to it to feed and then defecate. It has no known uses in traditional medicine and although it can be used for timber, the wood is of poor quality.
Seed dispersal syndromes are morphological characters of seeds correlated to particular seed dispersal agents. Dispersal is the event by which individuals move from the site of their parents to establish in a new area. A seed disperser is the vector by which a seed moves from its parent to the resting place where the individual will establish, for instance an animal. Similar to the term syndrome, a diaspore is a morphological functional unit of a seed for dispersal purposes.
Diplochory, also known as “secondary dispersal”, “indirect dispersal” or "two-phase dispersal", is a seed dispersal mechanism in which a plant's seed is moved sequentially by more than one dispersal mechanism or vector. The significance of the multiple dispersal steps on the plant fitness and population dynamics depends on the type of dispersers involved. In many cases, secondary seed dispersal by invertebrates or rodents moves seeds over a relatively short distance and a large proportion of the seeds may be lost to seed predation within this step. Longer dispersal distances and potentially larger ecological consequences follow from sequential endochory by two different animals, i.e. diploendozoochory: a primary disperser that initially consumes the seed, and a secondary, carnivorous animal that kills and eats the primary consumer along with the seeds in the prey's digestive tract, and then transports the seed further in its own digestive tract. | https://wikimili.com/en/Seed_dispersal |
Description of Initiative:
In August 2012, the Maine Department of Environmental Protection created a new division to undertake a cross-media and multi-program approach to materials management. In addition to merging disparate recycling programs and management responsibilities, the goal of this new division is to create a team that oversees the implementation of a comprehensive, coordinated, and holistic approach to materials management.
Specifically, the Sustainability Division has responsibility for:
- Administering the various product stewardship programs overseen by the department. These programs include: electronic wastes; cell phones; mercury thermostats; mercury-added (fluorescent) lamps; mercury auto switches; dry cell mercuric oxide and rechargeable batteries; and unwanted paint (effective 2015);
- Furnishing technical assistance to residents, municipalities, institutions and businesses on waste reduction, reuse, recycling and composting opportunities. Currently, a major emphasis is being placed on diverting unwanted organics from disposal, with those organics sent for use as animal feed or to composting operations or for anaerobic digestion;
- Directing the chemical management programs which include the priority chemicals in products program and the toxic chemical reduction program;
- Encouraging the restaurant, lodging, and grocery sectors to participate in and become recognized for their sustainability activities through the recently rebranded Environmental Leader Program; and
- Being the department’s resource for greenhouse gas and climate change/climate adaptation issues, with a focus on Maine’s strategy for addressing potential impacts.
Results to Date:
A number of the highlights of the recently created Sustainability Division include:
- Aiding several municipalities and institutions, including medical facilities, in designing and implementing organic waste composting programs;
- Implementing revisions to the priority chemical law, including the creation of the list of chemicals of high concern, and the sales prohibition on infant formula and baby food packaging containing intentionally added Bisphenol A (BPA), effective March 1, 2014;
- Assisting the state’s largest anaerobic digestion operation in identifying and securing organic waste;
- Outreach on adaptation and related challenges and opportunities surrounding severe weather incidents;
- Expanding outreach to municipalities and businesses on recovery and recycling of fluorescent bulbs, mercury thermostats, and other Universal Waste;
- Preparing the 2011 Waste Generation and Disposal Capacity Report and presenting it to the Governor and the Legislature in early 2013;
- Beginning work on the update to the state’s five-year Waste Management & Recycling Plan;
- Certifying or recertifying more than 35 businesses in the Environmental Leader program; and
- Initiating the toxic use reduction reporting requirements and outreach to regulated entities.
Thinking about the future of science might seem futile, since future science by definition consists of knowledge we do not yet have. But it is worth recalling William Gibson’s comment: “The future has arrived — it’s just not evenly distributed yet.” Science is a socially determined activity as much as a purely knowledge-driven one, so it is very probable that new knowledge already exists but remains stuck on the periphery because it does not fit established ways of thinking.
Science as a whole has expanded very rapidly over the last hundred years and is now beset by significant conceptual lags between disciplines. Quantum physics is at the forefront of a potential new paradigm but the wider culture struggles with its implications. Meanwhile, other scientific disciplines continue to operate on the assumptions of classical physics as a matter of convenience.
If the future of science is already partly here, it may not be evenly distributed because its acceptance depends on the reexamination of fundamental assumptions. Arguably this even includes the nature of scientific observation, a basic building block of all science.
Science is based on close observation of physical phenomena, with the aim of understanding their nature and source. Hard sciences such as physics work with phenomena that are considered external to and separate from the observer. The scientist is supposed to be an impassive objective observer – rather like noticing the editing and camera work of a film rather than getting caught up in the story.
The phenomena being observed are considered as part of a reality which is presumed to exist independently of the observer, though the observer may be able to influence it to some degree. Not only are the observer and the observed thought of as separate, but the phenomena being observed are also classically regarded as separate from each other. This conventional mode of scientific observation can be thought of as “externally directed”.
But there is another realm of human experience. In addition to experience of an “outside world” there is experience of an “interior world”. This interior experience consists of phenomena such as thoughts, emotions, memories, dreams and perceptions. These phenomena are the focus of so-called “soft” sciences such as psychology and they are also a major area of interest in philosophy.
Scientists would ideally like to observe this interior world in the same way as the outside world. External phenomena can be detected and measured, and the measurements analysed mathematically. Interior phenomena are not so accessible. It is not possible from outside to access a dream or a memory in the subjective form in which it is experienced. But brain states can be detected and measured, and the neural correlates of mental experience can often be identified. Precisely because they can be detected and measured, brain states are frequently taken as the appropriate scientific observables of interior experience.
A scientist observing someone else’s brain turns interior phenomena into exterior phenomena by approaching that brain as an external object. Inner experience is thought to be produced by the collective activity of neurons and the actual inner experience of the observer becomes little more than a side effect. In this way, scientific reduction is applied, breaking down the interior phenomena into their supposed underlying causes.
But there is a catch. Cognitive activities such as scientific theorising involve an interior experience of comprehension that cannot itself be detected in the neural correlates of theorising when observed from outside the brain. Indeed, all of science depends on interior cognitive phenomena such as understanding that cannot be detected when the brain is observed from the outside. By treating interior phenomena as exterior phenomena we are attempting to understand our own understanding by reducing it to other phenomena in which the understanding can no longer be seen.
The human brain is an information processing system and many neuroscientists take the view that subjective experiences such as understanding will turn out to be the emergent outcome of complex information processing, much as a computer can produce complex outputs from the simple digital components of 0s and 1s. The objection to this line of argument is that subjective experiences require a conscious self that is aware of its own internal states and experiences. However information-rich the computer’s output may be, there will be no understanding unless there is a conscious self to understand it – a position philosopher John Searle argued for in his famous “Chinese room” thought experiment.
Many computer scientists believe that computers themselves will become conscious if they are organized like the brain and if they are powerful enough, a position called “strong AI” by Searle. Even supposing this may happen in the future it would be difficult to establish. If consciousness is necessary for the experience of interior phenomena, and if interior phenomena cannot be directly observed from outside, how would we even know if a computer became conscious? Would it tell us it is conscious, as a human being might? Could we trust such self-reporting as a reliable phenomenon for exterior observation when consciousness itself cannot be observed from the outside?
The enigma of interior experience will almost certainly remain intractable as long as scientists observe interior phenomena from the outside. This is why philosopher David Chalmers famously called consciousness “the hard problem.” Trying to explain the subjective experience of consciousness by deriving it from the contents of its own awareness – externally observed phenomena – seems an inherently implausible move. If consciousness is a bottom-up effect, built up from simpler components, then we have no idea what could possibly constitute a “component” of the seamless subjective quality of conscious experience – other than rather questionable ideas such as that consciousness arises from logical operations. Without this knowledge, the strong AI position involves a circular act of faith in which externally observed phenomena – which are only known through the medium of conscious perception – are held to be the cause of interior conscious experience.
There is a long history of theorizing in philosophy about whether the external physical world or the interior mental world represents the true nature of reality. The prevailing scientific consensus is that the external world of experience is the real world, despite the difficulty that all scientific observation of external phenomena requires an interior phenomenon of meaningful perception. This question is a live topic of debate in the philosophy of physics but with no immediate prospect of agreement. Meanwhile, regrettably, the dominance of external observation is tending to suppress insights arising from interior experience.
Scientific research does not necessarily have to wait for the disagreement about the two types of experience to be resolved. One way to sidestep the impasse would be to think about the outside and inside worlds as merely two different modes of experience. There could be two equally valid directions of scientific observation: “externally directed” observation and “internally directed” observation, without attempting to establish one as better or more real than the other. They would be two alternative ways of seeing, reminiscent of the interior and exterior perspectives proposed by writer Ken Wilber. Inward observation would essentially be a conscious self observing the contents of its own consciousness and taking their felt value and meaning as primary or irreducible elements of observation. The externally observable neural correlates of interior experience would then be simply that – correlates, not causes.
To a considerable extent this is what psychology has done, but psychological phenomena are still persistently regarded as ultimately caused by physical phenomena that can be detected and measured by external observation. The observational bias, even in psychology, is that the “real” world exists outside of conscious observers and psychological phenomena are interpreted as merely an emergent side effect of brain functioning.
Seen through the lens of internally directed observation, consciousness becomes the fundamental property that enables observation itself – it appears as an ontological primitive, a primary feature of reality which cannot be broken down into anything else. Thoughts, memories and dreams become not the product of external material processes but phenomena or qualities intrinsic to consciousness, to be encountered as the basic content of interior experience and observation.
In a binocular combination of interior and exterior observation, exterior observables would not be “more real” than the interior ones. The interior ones would not have to be constructed from the exterior ones. They would simply be two aspects of reality meeting in conscious experience.
If exterior and interior observation were used in tandem this could well lead to breakthroughs for science as a whole. Many aspects of experience that are currently discounted as merely subjective could be revalidated and revalued. The two directions of observation would support complementary hypotheses which could be experimentally evaluated.
Quantum theory, for example, could benefit from interior observation. Quantum effects are weird when viewed from the perspective of external observation because it posits a universe of separate phenomena that exist independently of awareness. Quantum paradoxes might make more sense from the perspective of interior observation, in which existence and awareness are intrinsically related.
Certain experimental findings suggest quantum effects are the result of an interaction between quantum-level reality and consciousness, but this idea has met with considerable resistance. As a line of thought it is blocked by the assumption that externally observed “physical” reality could not be affected by consciousness because it is “more real” than the interior phenomena of conscious experience. Adopting the perspective of interior observation could help us to take puzzling quantum effects at face value. This will not be easy: we have a deep assumption, inherited from an earlier stage of science, that the arrow of causal dependence runs from outer to inner experience. Possibly quantum theory is exactly the frontier where empiricism can only advance if we allow ourselves to treat interior and exterior phenomena on at least an equal footing and begin to ask if just possibly the causal arrow might point the other way.
Internal observation would challenge other widely held assumptions. The perspective of external observation gives rise to the idea that consciousness must be generated from components that are themselves unconscious. This leads to the assumption that only brains, and maybe only human brains, are conscious. But seen through the lens of internal observation, consciousness appears as a primary feature of reality, which leads to a quite different assumption. From this perspective, we should expect to find consciousness everywhere in the universe at all levels. It would become the prerequisite for existence of any kind – after all, without consciousness nothing can register as existing, which is indistinguishable from a situation in which nothing exists.
Furthermore, internal observation could open the door to taking the paranormal more seriously. Such things as psychic and spiritual experiences could be explored without the preliminary requirement of having to explain them in terms of phenomena that can be observed externally. They would be valid interior phenomena to be investigated in parallel, internally and externally, without rejecting them because they are only “in the mind”.
Looking to the future, science appears to be under growing pressure to open up to a broader view of observable reality. Quantum puzzles and other anomalies that now confront science, not to mention the advent of artificial intelligence, place the nature of human understanding and consciousness at the frontier of science. If scientists insist on the perspective of exterior observation alone, this is likely to hinder the further significant advance of science. If in the future scientists adopt something like the mode of interior observation proposed here, a new frontier would open up, and an expanded philosophy of science could lead to advances in theory and breakthroughs in technology. | http://www.hardintibbs.com/blog/some-thoughts-about-the-future-of-scientific-observation |
Multiple linear regression occurs when more than one independent variable is used to predict a dependent variable:
Y = a + b1x1 + b2x2
Where Y is the dependent variable, a is the intercept, b1 and b2 are the coefficients, and x1 and x2 are the independent variables.
Also, note that the model remains linear as long as it is linear in the coefficients: squaring a variable (for example, adding an x squared term) still gives a linear model, but if a coefficient itself is squared, the model is nonlinear.
To build the multiple linear regression model, we'll utilize the NBA's basketball data to predict the average points scored per game; a minimal fitting sketch is shown after the column descriptions below.
The following are the column descriptions of the data:
height: This refers to the height in feet
weight: This refers to the weight ...
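The excerpt stops short of the actual model-fitting code; as a stand-in, the following minimal sketch shows how a multiple linear regression of this kind could be fitted in Python with scikit-learn. The file name nba_stats.csv and the column names height, weight, and avg_points are assumptions for illustration, not the book's actual dataset.

```python
# Minimal sketch: fit a multiple linear regression Y = a + b1*x1 + b2*x2.
# The CSV file and column names below are assumed for illustration only.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("nba_stats.csv")     # hypothetical data file
X = df[["height", "weight"]]          # independent variables x1, x2
y = df["avg_points"]                  # dependent variable Y

model = LinearRegression().fit(X, y)

print("intercept a:", model.intercept_)
print("coefficients b1, b2:", model.coef_)
# Predict average points for a hypothetical 6.5 ft, 220 lb player.
print(model.predict(pd.DataFrame({"height": [6.5], "weight": [220]}))[0])
```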
Binomial logistic regression using SPSS IBM statistics
Binomial logistic regression is simply a logistic regression model that can be used to predict the probability of an outcome falling within a given category. The dependent variable is always a dichotomous variable and the predictors (independent variables) can be either continuous or categorical variables. When there are more than two categories of the outcome variable, it is appropriate to use a multinomial logistic regression model instead. An example is predicting whether a student "passes" or "fails" a college statistics exam based on the time spent revising for it. One can also predict the probability of drug use based on previous behaviors, age, and gender.
This text explains how to run a binomial logistic regression using SPSS Statistics. However, before running the data through the procedure, your data must meet the following assumptions.
Assumptions for a Binomial regression model
1. The dependent variable should be measured on a dichotomous scale - that is, it should be categorical with exactly two levels. Examples of such variables include gender, race, and presence of heart disease (yes or no). Remember that there is also an ordinal regression model, which can be used when the response variable is on an ordered scale.
2. You must have one or more independent variables, each measured on either a continuous scale, an ordered scale, or a categorical scale.
3. The observations should be independent of one another.
4. Your continuous independent variables must be linearly related to the logit transformation of the dependent variable.
The 4th assumption can be checked via SPSS, but the first three assumptions relate to the study design and data collection process.
Case:
In this example, we run an analysis to predict heart disease (the dependent variable), that is, whether an individual has heart disease or not, using maximal aerobic capacity (VO2max), age, weight, and gender. Note that VO2max, age, and weight are continuous variables, while gender is the categorical predictor variable.
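For readers who want to reproduce the analysis outside SPSS, the same model can be sketched in Python with statsmodels. This is an illustrative sketch only: the file heart.csv and the exact column names are assumptions that mirror the variables described above.

```python
# Sketch of the equivalent binomial logistic regression in Python.
# The CSV file and column names are assumed; they mirror the SPSS example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("heart.csv")  # hypothetical data file

# heart_disease coded 0/1; gender is categorical, the others continuous.
result = smf.logit("heart_disease ~ age + weight + VO2max + C(gender)", data=df).fit()

print(result.summary())          # coefficients, Wald z statistics, p-values
print(np.exp(result.params))     # odds ratios, analogous to SPSS Exp(B)
```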
Analysis:
To run the logistic regression model in SPSS, follow the step-by-step solution below.
Step 1: Go to Analyze > Regression > Binary Logistic as shown in the screenshot below.
Step 2: In the logistic regression dialogue box that appears, transfer your dependent variable (in this case, heart_disease) to the Dependent box and move your independent variables to the Covariates box.
The dialogue box shows how the variables should be transferred.
Step 3: Click categorical to define the categorical variables (Gender), and transfer your categorical variables to the categorical covariates as shown below.
Step 4: In the Change Contrast area, check the First option for the reference category and click the Change button as shown below.
Step 5: Click Continue to return to the logistic regression dialogue box, then click the Options button; the dialogue box below is presented.
Step 6: Check the following options: Classification plots, Hosmer-Lemeshow goodness-of-fit, and Casewise listing of residuals in the Statistics and Plots area, and CI for Exp(B). Remember to check "At last step" in the Display area. Your dialogue box after this step should be as shown below.
Last step: Click continue to return to your logistic regression dialogue box and click OK to get your output.
Output and interpretation of the Logistic results
Variance Explained
This is equivalent to the R-squared statistic in the multiple regression model. The Cox & Snell R Square and Nagelkerke R Square values are used to describe the variation explained by the model. Based on the output of the model, the explained variation is between 0.240 and 0.330, and it is up to you to pick the statistic that interests you. Nagelkerke R2 is a modification of Cox & Snell R2, the latter of which cannot achieve a value of 1. For this reason it is generally advisable to report the Nagelkerke statistic.
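For reference, these two pseudo R-squared statistics are usually defined from the likelihood of the null model L(0) and of the fitted model L(M) for n cases, roughly as follows (standard textbook definitions, not values taken from this particular output):

```latex
R^2_{CS} = 1 - \left( \frac{L(0)}{L(M)} \right)^{2/n},
\qquad
R^2_{N} = \frac{R^2_{CS}}{1 - L(0)^{2/n}}
```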
Classification table
The cut value of 0.50 implies that if the predicted probability of the outcome is greater than 0.50, the case is classified as a "yes"; otherwise it is classified as a "no".
Some useful information that the classification table provides includes:
- A. The percentage accuracy in classification (PAC), which reflects the percentage of cases that can be correctly classified as "no" heart disease with the independent variables added (not just the overall model).
- B. Sensitivity, which is the percentage of cases that had the observed characteristic (e.g., "yes" for heart disease) which were correctly predicted by the model (i.e., true positives).
- C. Specificity, which is the percentage of cases that did not have the observed characteristic (e.g., "no" for heart disease) and were also correctly predicted as not having the observed characteristic (i.e., true negatives).
- D. The positive predictive value, which is the percentage of correctly predicted cases "with" the observed characteristic compared to the total number of cases predicted as having the characteristic.
- E. The negative predictive value, which is the percentage of correctly predicted cases "without" the observed characteristic compared to the total number of cases predicted as not having the characteristic.
Variables in the equation table
The table presents the contribution of each variable and its associated statistical significance.
The Wald statistic determines the statistical significance of each independent variable. From these results it can be seen that age (p = .003), gender (p = .021) and VO2max (p = .039) added significantly to the model, whereas weight (p = .799) did not. You can use the information in the "Variables in the Equation" table to predict the probability of an event occurring based on a one-unit change in an independent variable when all other independent variables are kept constant. For example, the table shows that the odds of having heart disease ("yes" category) are 7.026 times greater for males as opposed to females. | https://www.statisticsanswered.com/blog/43/Binomial-logistic-regression-using-SPSS-IBM-statistics
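As a quick arithmetic check, the Exp(B) column is simply e raised to the coefficient B, so an odds ratio of 7.026 corresponds to a coefficient of about 1.95; the snippet below back-calculates this relationship (the 7.026 figure is the one discussed above, the coefficient is derived for illustration).

```python
# Relationship between a logistic regression coefficient B and its odds ratio Exp(B).
import math

odds_ratio = 7.026            # Exp(B) for gender, as reported in the output above
b = math.log(odds_ratio)      # the underlying coefficient B
print(round(b, 3))            # -> 1.95
print(round(math.exp(b), 3))  # -> 7.026, recovering the odds ratio
```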
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of International Application No. PCT/CN2015/082582, filed on Jun. 27, 2015, which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
The present invention relates to the field of wireless communications technologies, and in particular, to a method and an apparatus for determining a signal-to-noise ratio in wireless communication.
BACKGROUND
To improve the development of wireless communications technologies, a Long Term Evolution (LTE) project is set up by the 3rd Generation Partnership Project (3GPP). Multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) are the two most crucial technologies in the LTE project. In an actual LTE application scenario, a neighboring cell may interfere with user equipment (UE), and the interference may severely affect performance of demodulating data by the UE. In addition, in an MU-MIMO (multi-user MIMO) system, a specific scheduling mechanism may be used to select UEs that meet a requirement and group the UEs into a group, and antennas of multiple UEs in one group constitute a virtual multi-antenna array. A base station and multiple UEs in the group may send and receive data on a same time-frequency resource, the group of UEs are referred to as paired UEs, and interference may exist between the paired UEs. Using FIG. 1 as an example, a serving base station 11 may be an eNodeB, and provides a service for multiple UEs, such as UE 121, UE 122, and UE 123, in a serving cell 110. The UE 121 is close to an edge of the cell 110, and is subjected to interference from a cell 130 formed by another base station 13, and this is also called inter-cell interference, that is, a communications link 131 between the another base station 13 and the UE 121 is an interference link of a communications link in between the serving base station 11 and the UE 121. The UE 122 and the UE 123 are paired UEs; when the serving base station 11 performs MU-MIMO transmission to the UE 122 and the UE 123, interference may also exist between the UE 122 and the UE 123, that is, a communications link 112 between the UE 122 and the serving base station 11 and a communications link 113 between the UE 123 and the serving base station 11 interfere with each other. A link may also be considered as a channel.
The LTE project defines a standard receiver for rejecting interference in Release 11, for example, an interference rejection combining (IRC) receiver. However, the capability of rejecting inter-cell interference by the IRC is limited, and the IRC cannot reject interference between UEs well. Therefore, in LTE, a receiver having a stronger capability is defined in Release 12, for example, a symbol level interference cancellation (SLIC) receiver and a maximum likelihood (ML) receiver, to achieve a better interference rejection effect.
In an LTE system, a serving base station 11 may schedule appropriate radio resources, a modulation and coding scheme (MCS), a Precoding Matrix Indicator (PMI), and a Rank Index (RI) for any UE, such as UE 121, according to channel state information (CSI) reported by the UE 121, to ensure normal communication of the UE 121. The UE 121 may calculate the CSI according to a minimum mean square error (MMSE) criterion, for which a received signal-to-noise ratio of the UE 121 needs to be calculated first, that is, a ratio of a valid signal to interference; the CSI is determined based on the signal-to-noise ratio, and the CSI is fed back to the base station 11. However, in a process of calculating the signal-to-noise ratio, the UE 121 does not consider inter-cell interference or interference between UEs; as a result, an obtained signal-to-noise ratio or obtained CSI is not accurate. Particularly, when the SLIC receiver or the ML receiver is used in the UE 121, the signal-to-noise ratio obtained by using the MMSE by means of calculation often cannot reflect an actual channel state of the UE, and therefore, inaccurate CSI is further obtained.
SUMMARY
Embodiments provide a method and an apparatus for determining a signal-to-noise ratio in wireless communication, so as to improve accuracy of a signal-to-noise ratio or CSI obtained by user equipment.
According to a first aspect, an embodiment provides a method for determining a signal-to-noise ratio in wireless communication. The method includes determining an effective signal-to-noise ratio of a received signal of current user equipment in the wireless communication. The method also includes acquiring at least one parameter used to correct the effective signal-to-noise ratio. The method also includes determining, based on a mapping relationship used to correct the effective signal-to-noise ratio, a corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio. Optionally, a minimum mean square error criterion may be used to determine the effective signal-to-noise ratio. Compared with a conventional minimum mean square error criterion algorithm, according to the method for determining a signal-to-noise ratio in wireless communication provided in this embodiment, an effective signal-to-noise ratio can be further corrected based on one or more parameters, and an obtained corrected signal-to-noise ratio more accurately reflects an actual channel state of user equipment.
According to the first aspect, in a first possible implementation manner of the first aspect, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping formula, where the at least one parameter and the effective signal-to-noise ratio are inputs of the mapping formula, and the corrected signal-to-noise ratio is an output of the mapping formula.
According to the first aspect, in a second possible implementation manner of the first aspect, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping table, where the mapping table is used to indicate the corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio. The mapping table includes a series of discrete values, so that the mapping table may be used to simplify complexity caused by calculation using a mapping formula.
According to the first aspect, the first possible implementation manner of the first aspect, or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the at least one parameter includes one or a combination of the following: a parameter indicating a receiver algorithm used by the current user equipment and a parameter of the at least one interference signal.
According to the first aspect, the first possible implementation manner of the first aspect, or the second possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the at least one parameter includes a parameter indicating a receiver algorithm used by the current user equipment.
According to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the at least one parameter further includes parameters of N interference signals, where N is an integer that is greater than or equal to 2; and the determining, based on a mapping relationship used to correct the effective signal-to-noise ratio, a corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio includes: step 1: determining, based on the mapping relationship, a corrected signal-to-noise ratio corresponding to the parameter indicating the receiver algorithm used by the current user equipment, a parameter of an i-th interference signal in the parameters of the N interference signals, and the effective signal-to-noise ratio; and step 2: replacing the effective signal-to-noise ratio with the corrected signal-to-noise ratio, adding 1 to a value of i, and repeating the step 1, until i=N, where i is an integer that is greater than or equal to 1 and that is less than or equal to N, and an initial value of i is 1. Influence caused by multiple interference signals can be eliminated by performing iterative processing multiple times, so that a finally obtained corrected signal-to-noise ratio is more accurate.
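The two-step iteration just described can be sketched in Python as follows; the function correct() is a placeholder standing in for the mapping relationship, which this application leaves unspecified, so the sketch shows only the control flow of steps 1 and 2.

```python
# Sketch of the iterative correction over N interference signals (steps 1 and 2 above).
# correct() is an assumed placeholder for the mapping relationship, not a real API.
def iterative_correction(snr_effective, receiver_algorithm, interference_params, correct):
    """interference_params: list of N parameter sets, one per interference signal."""
    snr = snr_effective
    for params_i in interference_params:                    # i runs from 1 to N
        snr = correct(snr, receiver_algorithm, params_i)    # step 1
        # step 2: the corrected value replaces the effective SNR for the next pass
    return snr
```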
According to the third or the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, any interference signal in the at least one interference signal is caused by a neighboring cell of a serving cell of the current user equipment, or is caused by another user equipment in the serving cell, where the another user equipment is user equipment paired with the current user equipment in the serving cell.
According to the third, the fifth, or the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, a parameter of any interference signal in the at least one interference signal includes one or a combination of the following: a transmission mode of the interference signal, a rank of the interference signal, a data-to-pilot power ratio of the interference signal, and a modulation scheme of the interference signal. In a correction process, reference is made to various parameters about the interference signal, and correction processing is performed based on the parameters, so that a more accurate signal-to-noise ratio can be obtained.
According to the third or the fourth possible implementation manner of the first aspect, in an eighth possible implementation manner of the first aspect, the receiver algorithm is a symbol level interference cancellation algorithm or a maximum likelihood algorithm. In the correction process, reference is made to the receiver algorithm, and for different receiver algorithms, different corrected signal-to-noise ratios can be obtained, so that a calculation result is more accurate.
According to the first aspect or any manner of the first to the eighth possible implementation manners of the first aspect, in a ninth possible implementation manner of the first aspect, the method further includes: determining channel state information based on the corrected signal-to-noise ratio; and reporting the channel state information to a serving station of the current user equipment. According to the method, accuracy of the channel state information obtained based on the corrected signal-to-noise ratio is also further improved, thereby improving accuracy of a channel feedback.
According to the first aspect or any manner of the first to the ninth possible implementation manners of the first aspect, in a tenth possible implementation manner of the first aspect, the wireless communication is Long Term Evolution wireless communication.
According to a second aspect, an embodiment provides an apparatus for determining a signal-to-noise ratio in wireless communication. The apparatus includes an effective signal-to-noise ratio determining unit, configured to determine an effective signal-to-noise ratio of a received signal of current user equipment in the wireless communication. The apparatus also includes a parameter determining unit, configured to acquire at least one parameter used to correct the effective signal-to-noise ratio. The apparatus also includes a correction unit, configured to determine, based on a mapping relationship used to correct the effective signal-to-noise ratio, a corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio. Optionally, the apparatus may be located in the current user equipment. Optionally, a minimum mean square error criterion may be used to determine the effective signal-to-noise ratio.
According to the second aspect, in a first possible implementation manner of the second aspect, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping formula, and the correction unit uses the mapping formula to calculate the corrected signal-to-noise ratio, where the at least one parameter and the effective signal-to-noise ratio are inputs of the mapping formula, and the corrected signal-to-noise ratio is an output of the mapping formula.
According to the second aspect, in a second possible implementation manner of the second aspect, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping table, where the mapping table is used to indicate the corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio, and the correction unit obtains the corrected signal-to-noise ratio by using the mapping table.
According to the second aspect, the first possible implementation manner of the second aspect, or the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the at least one parameter includes one or a combination of the following: a parameter indicating a receiver algorithm used by the current user equipment and a parameter of the at least one interference signal.
According to the second aspect, the first possible implementation manner of the second aspect, or the second possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the at least one parameter includes a parameter indicating a receiver algorithm used by the current user equipment.
According to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the at least one parameter further includes parameters of N interference signals, where N is an integer that is greater than or equal to 2; and the correction unit is specifically configured to perform: step 1: determining, based on the mapping relationship, a corrected signal-to-noise ratio corresponding to the parameter indicating the receiver algorithm used by the current user equipment, a parameter of an i-th interference signal in the parameters of the N interference signals, and the effective signal-to-noise ratio; and step 2: replacing the effective signal-to-noise ratio with the corrected signal-to-noise ratio, adding 1 to a value of i, and repeating the step 1, until i=N, where i is an integer that is greater than or equal to 1 and that is less than or equal to N, and an initial value of i is 1.
According to the third or the fifth possible implementation manner of the second aspect, in a sixth possible implementation manner of the second aspect, any interference signal in the at least one interference signal is caused by a neighboring cell of a serving cell of the current user equipment, or is caused by another user equipment in the serving cell, where the another user equipment is user equipment paired with the current user equipment in the serving cell.
According to the third, the fifth, or the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner of the second aspect, a parameter of any interference signal in the at least one interference signal includes one or a combination of the following: a transmission mode of the interference signal, a rank of the interference signal, a data-to-pilot power ratio of the interference signal, and a modulation scheme of the interference signal.
According to the third or the fourth possible implementation manner of the second aspect, in an eighth possible implementation manner of the second aspect, the receiver algorithm is a symbol level interference cancellation algorithm or a maximum likelihood algorithm.
According to the second aspect or any manner of the first to the eighth possible implementation manners of the second aspect, in a ninth possible implementation manner of the second aspect, the apparatus further includes: a channel state information reporting unit, configured to determine channel state information based on the corrected signal-to-noise ratio; and report the channel state information to the serving station of the current user equipment.
According to the second aspect or any manner of the first to the ninth possible implementation manner of the second aspect, in a tenth possible implementation manner of the second aspect, the wireless communication is Long Term Evolution wireless communication.
According to a third aspect, an embodiment of the present invention provides user equipment for determining a signal-to-noise ratio in wireless communication, including: a memory, configured to store at least one parameter used to correct an effective signal-to-noise ratio; a processor, configured to determine the effective signal-to-noise ratio of a received signal of the user equipment, acquire the at least one parameter from the memory, and determine, based on a mapping relationship used to correct the effective signal-to-noise ratio, a corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio. Optionally, a minimum mean square error criterion may be used to determine the effective signal-to-noise ratio.
According to the third aspect, in a first possible implementation manner of the third aspect, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping formula, and the processor is further configured to calculate the corrected signal-to-noise ratio by using the mapping formula, where the at least one parameter and the effective signal-to-noise ratio are inputs of the mapping formula, and the corrected signal-to-noise ratio is an output of the mapping formula.
According to the third aspect, in a second possible implementation manner of the third aspect, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping table, where the mapping table is used to indicate the corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio; and the processor is further configured to obtain the corrected signal-to-noise ratio by using the mapping table.
According to the third aspect, the first possible implementation manner of the third aspect, or the second possible implementation manner of the third aspect, in a third possible implementation manner of the third aspect, the at least one parameter includes one or a combination of the following: a parameter indicating a receiver algorithm used by the user equipment and a parameter of the at least one interference signal.
According to the third aspect, the first possible implementation manner of the third aspect, or the second possible implementation manner of the third aspect, in a fourth possible implementation manner of the third aspect, the at least one parameter includes a parameter indicating a receiver algorithm used by the user equipment.
According to the fourth possible implementation manner of the third aspect, in a fifth possible implementation manner of the third aspect, the at least one parameter further includes parameters of N interference signals, where N is an integer that is greater than or equal to 2; and the processor is further configured to perform: step 1: determining, based on the mapping relationship, a corrected signal-to-noise ratio corresponding to the parameter indicating the receiver algorithm used by the user equipment, a parameter of an i-th interference signal in the parameters of the N interference signals, and the effective signal-to-noise ratio; and step 2: replacing the effective signal-to-noise ratio with the corrected signal-to-noise ratio, adding 1 to a value of i, and repeating the step 1, until i=N, where i is an integer that is greater than or equal to 1 and that is less than or equal to N, and an initial value of i is 1.
According to the third or the fifth possible implementation manner of the third aspect, in a sixth possible implementation manner of the third aspect, any interference signal in the at least one interference signal is caused by a neighboring cell of a serving cell of the user equipment, or is caused by another user equipment in the serving cell, and the another user equipment is user equipment paired with the user equipment in the serving cell.
According to the third, the fifth, or the sixth possible implementation manner of the third aspect, in a seventh possible implementation manner of the third aspect, a parameter of any interference signal in the at least one interference signal includes one or a combination of the following: a transmission mode of the interference signal, a rank of the interference signal, a data-to-pilot power ratio of the interference signal, and a modulation scheme of the interference signal.
According to the third or the fourth possible implementation manner of the third aspect, in an eighth possible implementation manner of the third aspect, the receiver algorithm is a symbol level interference cancellation algorithm or a maximum likelihood algorithm.
According to the third aspect or any manner of the first to the eighth possible implementation manners of the third aspect, in a ninth possible implementation manner of the third aspect, the processor is further configured to: determine channel state information based on the corrected signal-to-noise ratio; and report the channel state information to a serving station of the user equipment. In a possible implementation manner, the step of reporting the channel state information to a serving station of the user equipment may be implemented by a processing unit in the processor, or the step of reporting the channel state information to a serving station of the user equipment may be implemented by a radio frequency apparatus in the processor.
According to the third aspect or any manner of the first to the ninth possible implementation manners of the third aspect, in a tenth possible implementation manner of the third aspect, the wireless communication is Long Term Evolution wireless communication.
The foregoing implementation manners may be used to correct an effective signal-to-noise ratio to obtain a signal-to-noise ratio with higher accuracy, so that channel state information with higher accuracy is obtained based on the corrected signal-to-noise ratio, thereby improving communication performance of a wireless communications system. The foregoing implementation manners may be used to improve a conventional minimum mean square error criterion algorithm, to achieve a better wireless communication effect.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
The following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some but not all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
In an embodiment, user equipment, that is, UE, is also referred to as a wireless terminal or a user terminal, which may enjoy a wireless access service of a serving station. The serving station is generally a base station, for example, an eNodeB or a NodeB in LTE, or may be an access point for connecting the user equipment to a mobile communications network, for example, a base station controller. When providing the access service for the user equipment, the serving station may form one or more cells, where a cell may cover a range geographically and occupies a carrier or a frequency band in a frequency domain. Specifically, the user equipment and the serving station may implement a communication process by running a wireless communications protocol, where the wireless communications protocol includes, without being limited to, various cellular wireless communications protocols, such as LTE, Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), and Time Division-Synchronous Code Division Multiple Access (TDS-CDMA) or Code Division Multiple Access 2000 (CDMA2000). In the embodiments of the present invention, LTE is a more common application scenario.
When the user equipment communicates with the serving station, to feed back channel state information to the serving station, so as to schedule a resource and allocate a modulation and coding scheme and a precoding scheme based on the channel state information, the user equipment first needs to accurately estimate a signal-to-noise ratio of a received signal of the user equipment, where sometimes the signal-to-noise ratio may also be a signal to interference plus noise ratio (SINR). Therefore, a method, better than a conventional technique, for determining a signal-to-noise ratio in wireless communication is provided in this embodiment of the present invention.
FIG. 2 is a brief schematic diagram of an embodiment of the method, where the method may be executed by the user equipment 30, and can correct a conventional signal-to-noise ratio. Referring to FIG. 3, the user equipment 30 may include a memory 31 and a processor 32. The memory 31 and the processor 32 may be coupled by using a connection cable or a circuit interface 33. The processor 32 in the user equipment 30 may be configured to execute the method for determining a signal-to-noise ratio in this embodiment. Specifically, in S21, the user equipment 30 or the processor 32 may determine an effective signal-to-noise ratio of a received signal of the user equipment 30 in wireless communication based on a minimum mean square error criterion. The minimum mean square error criterion is a conventional technique for calculating an effective signal-to-noise ratio in wireless communication, and the implementation principle of the technique is described in many documents, which is not described in detail in this embodiment.
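The application does not reproduce the MMSE computation itself; for orientation only, one common textbook form of the per-layer post-MMSE signal-to-interference-plus-noise ratio, for a received signal y = Hx + n with noise-plus-interference covariance R, is the following (an assumption for context, not a formula from this application):

```latex
\mathrm{SINR}_{k}^{\mathrm{MMSE}} =
\frac{1}{\left[ \left( \mathbf{I} + \mathbf{H}^{H} \mathbf{R}^{-1} \mathbf{H} \right)^{-1} \right]_{kk}} - 1
```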
In S22, the user equipment 30 acquires at least one parameter used to correct the effective signal-to-noise ratio. The at least one parameter may be a group of parameters, that is, multiple parameters, which are used to correct the effective signal-to-noise ratio to obtain a more accurate signal-to-noise ratio. Specifically, the at least one parameter may be one or a combination of the following: a parameter indicating a receiver algorithm used by the current user equipment 30 and a parameter of at least one interference signal. The receiver algorithm may be a symbol level interference cancellation algorithm or a maximum likelihood algorithm, and certainly, another available receiver algorithm is not excluded. The used algorithm is used to implement good interference rejection in demodulating the received signal. In this embodiment, a parameter indicating a receiver algorithm is used as a reference factor to correct a signal-to-noise ratio. It is noted that the user equipment 30, when using different receiver algorithms, has different interference rejection capabilities, and accuracy of a signal-to-noise ratio obtained by the user equipment 30 can be improved by means of the correction.
As shown in FIG. 3, preferably, the at least one parameter may be stored in a memory 31 of the user equipment 30. The memory 31 may be a random access memory (RAM), a read-only memory (ROM), a flash memory, or the like, or may be an element for temporary or interim storage, such as a buffer, a FIFO (First In First Out), or a register, and a type of the memory 31 is not limited in this embodiment. In an example, the memory 31 may be a register. When performing correction processing, the processor 32 may specifically acquire the at least one parameter from the memory 31. The user equipment 30 updates the memory 31 during working in real time or at intervals according to a working state of the user equipment 30. For example, the processor 32 may learn the receiver algorithm currently used by the user equipment 30 and write a parameter indicating the algorithm into the memory 31, so that the parameter is used when the correction processing is subsequently performed. In addition, the user equipment 30 may receive a parameter of at least one interference signal from a serving base station of the user equipment 30 or another network communications node and write the parameter into the memory 31, so that the parameter is used when the correction processing is subsequently performed.
In an implementation manner, a parameter of an interference signal may include a transmission mode of the interference signal, a rank of the interference signal, a data-to-pilot power ratio of the interference signal, or a modulation scheme of the interference signal. The user equipment 30 uses the parameter of the interference signal as a reference factor to correct a signal-to-noise ratio. It is noted that when transmission modes, ranks, or modulation schemes used by interference signals are different, interference to the user equipment 30 is also different, and accuracy of a signal-to-noise ratio calculated by the user equipment 30 can be improved by means of the correction. As described in the Background, the user equipment 30 may have multiple interference signals, and causes for forming the interference signals may be different. An interference signal may be caused by user equipment paired with the user equipment 30 in a serving cell of the current serving base station of the user equipment 30, or is caused by a neighboring cell of a neighboring base station, and a cause for forming an interference signal is not specifically limited in this embodiment.
In the foregoing implementation manner, the transmission mode of the interference signal may be a MIMO transmission mode of an interference signal from an interference cell, and may include a MIMO transmission mode, such as a transmit diversity, open-loop spatial multiplexing, closed-loop spatial multiplexing, or beamforming. The modulation scheme of the interference signal may include a modulation scheme for an interference signal, such as 16QAM (Quadrature Amplitude Modulation), 64QAM, or QPSK (Quadrature Phase Shift Keying). The data-to-pilot power ratio of the interference signal reflects a ratio of data signal power to pilot power of the interference signal, where the pilot may also be referred to as a reference signal, and may be used to perform channel estimation or measurement. The parameters about the transmission mode of the interference signal, the rank of the interference signal, or the modulation scheme of the interference signal may be obtained by means of estimation performed on the interference signal by the user equipment based on an existing interference estimation solution. That is, before S22, the user equipment 30 or the processor 32 of the user equipment 30 may obtain a parameter of the at least one interference signal by estimating an interference signal or an interference cell. Certainly, another manner for the user equipment 30 to acquire the parameter of the at least one interference signal is not excluded in this embodiment; for example, the user equipment 30 may obtain the parameters from another communications node, for example, a base station or another user equipment. For example, the user equipment 30 may specifically receive the parameter of the at least one interference signal through a physical downlink control channel (PDCCH) of the serving base station. After obtaining the parameters, the user equipment 30 or the processor 32 of the user equipment 30 may write the parameter of the at least one interference signal into the memory 31, so that the processor 32 in the user equipment 30 reads the parameter in the memory 31 in the subsequent step S22.
In S23, the user equipment 30 determines, based on a mapping relationship used to correct the effective signal-to-noise ratio, a corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio. Specifically, the processor 32 may obtain the corrected signal-to-noise ratio by means of calculation based on a mapping formula or by means of looking up a mapping table.
In an implementation manner, the mapping relationship used to correct the effective signal-to-noise ratio is a mapping formula, which may be specifically SNR = f(SNR_no, {Φ}). SNR is the obtained corrected signal-to-noise ratio, SNR_no is the effective signal-to-noise ratio, and {Φ} is a parameter set, including the at least one parameter. f( ) is a mapping function, representing the mapping relationship. The processor 32 may obtain, based on the mapping formula SNR = f(SNR_no, {Φ}), SNR by means of calculation by using SNR_no and {Φ} as input variables. The mapping function f( ) may be preset, and may be stored in the memory 31 or another memory. That is, f( ) may be acquired in an offline manner. In this manner, before the user equipment determines the signal-to-noise ratio, an expression of f( ) is already pre-stored in the user equipment 30, so that implementation complexity is low. Specifically, f( ) may be obtained by a person skilled in the art by means of emulation. Before delivery of the user equipment 30, f( ) used as a parameter in a software code form is stored in the memory 31 or another memory, the processor 32 may acquire f( ) from the memory 31 or the another memory, and perform the correction processing based on f( ) to obtain a corrected signal-to-noise ratio. Alternatively, f( ) may be built in the processor 32, as a hardware circuit, that is, is made in the processor 32 by means of an integrated circuit or another circuit producing technique. When the processor 32 performs the correction processing, the mapping relationship f( ) is already stored in the processor 32, so that the processor 32 may directly calculate the corrected signal-to-noise ratio based on f( ). When the at least one parameter takes a different value, after the mapping of f( ) a value of the corrected signal-to-noise ratio obtained by the user equipment 30 or the processor 32 is different, so that an obtained signal-to-noise ratio is modified and improved according to an actual parameter of a receiver, thereby improving accuracy of the obtained signal-to-noise ratio. Regardless of whether f( ) is pre-stored in the memory 31 or another memory in a software form, or is built in the processor 32, as a hardware circuit, a person skilled in the art may obtain an appropriate function f( ) by means of emulation and verification in development or production qualification of the user equipment 30.
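As an illustration of the software-form option, the mapping could be held as a parameterised function. The shape of f below, a per-algorithm offset plus a small interference-dependent back-off, is purely an assumed example; as the text states, a real f would be obtained offline by emulation.

```python
# Illustrative sketch of correction via a preset mapping function f(SNR_no, {Φ}).
# The offset values are made-up examples, not values defined by this application.
ALGORITHM_OFFSET_DB = {"SLIC": 2.0, "ML": 3.0}

def f(snr_no_db, params):
    """Map the effective SNR (dB) and the parameter set to a corrected SNR (dB)."""
    corrected = snr_no_db + ALGORITHM_OFFSET_DB.get(params["receiver_algorithm"], 0.0)
    if params.get("interference_rank", 1) > 1:   # example interference-dependent term
        corrected -= 1.0
    return corrected

print(f(12.0, {"receiver_algorithm": "SLIC", "interference_rank": 2}))  # -> 13.0
```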
In another implementation manner, the expression of f( ) may be replaced with a search table or mapping table containing multiple discrete values. The mapping table is used to indicate the corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio, and thus replaces f( ) as the representation of the mapping relationship mentioned above in this embodiment. The processor is configured to obtain a corrected signal-to-noise ratio by using one or more parameters as inputs to search the mapping table. The mapping table may be stored in the memory or another memory in software code form and read by the processor from that memory. Alternatively, the mapping table may be built into the processor in logical circuit form; when the processor performs the correction processing, the mapping table is already present in the processor, so that the processor may directly determine a corrected signal-to-noise ratio based on the logical circuit reflecting the mapping table.
FIG. 5 is a schematic diagram of a mapping table 1 according to an embodiment. In table 1, a parameter indicating a receiver algorithm is used as input 1 and takes a series of discrete values, for example, algorithm 1 and algorithm 2. The other input, input 2, is the effective signal-to-noise ratio, which also takes multiple values, respectively representing effective signal-to-noise ratio 1, effective signal-to-noise ratio 2, and so on. In this case, mapping table 1 is equivalent to a two-dimensional search table, that is, a correction result is mapped from two inputs. The processor uses the acquired parameter indicating the receiver algorithm and the obtained effective signal-to-noise ratio as the two inputs, and finds the corresponding correction result in table 1 as the corrected signal-to-noise ratio. For example, algorithm 1 and effective signal-to-noise ratio 1 correspond to correction result 1, and algorithm 2 and effective signal-to-noise ratio 2 correspond to correction result x+1.
As the quantity of parameters introduced to perform the correction increases, the quantity of inputs of the mapping table also increases. In the mapping table 2 shown in FIG. 6, in addition to the effective signal-to-noise ratio and the parameter indicating the receiver algorithm, three further inputs may be introduced, namely a transmission mode of an interference signal, a rank of the interference signal, and a modulation scheme of the interference signal. In this way, table 2 may be considered a five-dimensional search table with five inputs, that is, an effective signal-to-noise ratio and four parameters in a parameter set {Φ}, the four parameters being a parameter indicating a receiver algorithm, a transmission mode of an interference signal, a rank of the interference signal, and a modulation scheme of the interference signal. The processor finds the corresponding correction result in search table 2 as the corrected signal-to-noise ratio by using the five inputs. It should be understood that multiple mapping tables may be built into the user equipment, where the multiple mapping tables may be stored in the memory or another memory in software code form, or may be built into the processor in logical circuit form, and the user equipment or the processor may determine which of the multiple mapping tables is used to determine the corrected signal-to-noise ratio. The quantity of inputs of each mapping table may be two or more, and depends on the parameters included in the parameter set {Φ} related to that mapping table. Besides one or more of the parameters listed in table 2, the parameters may further include another parameter affecting an interference characteristic, which is not limited in this embodiment. A larger quantity of used parameters means that more factors are considered in the correction process, and the signal-to-noise ratio obtained by means of correction is then more accurate. Therefore, as the quantity of inputs of the mapping table increases, the correction effect improves.
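As a rough illustration of the table-based alternative, the Python sketch below models a multi-input mapping table as a dictionary keyed on discrete parameter values and a quantized effective signal-to-noise ratio. The key layout, the 1 dB granularity, and every entry are assumptions made for illustration; an actual table would be generated offline and stored in memory or as a logic circuit, as described above.

    # Illustrative multi-input mapping table keyed on (algorithm, tx_mode, rank,
    # modulation, quantized effective SNR). All entries are made-up placeholders.

    MAPPING_TABLE = {
        ("alg1", "tm2", 1, "qpsk", 10): 10.8,
        ("alg1", "tm4", 2, "64qam", 10): 11.6,
        ("alg2", "tm4", 2, "64qam", 10): 12.1,
    }

    def lookup_corrected_snr(table, algorithm, tx_mode, rank, modulation, snr_eff_db):
        # Quantize the effective SNR to the table's assumed 1 dB grid.
        snr_key = int(round(snr_eff_db))
        key = (algorithm, tx_mode, rank, modulation, snr_key)
        # Fall back to the uncorrected value if the combination is not tabulated.
        return table.get(key, snr_eff_db)

    print(lookup_corrected_snr(MAPPING_TABLE, "alg1", "tm4", 2, "64qam", 10.2))  # 11.6

A hardware realization would replace the dictionary with a logic circuit, but the lookup behaviour it has to reproduce is the same.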
For the user equipment, whether the quantity of interference signals is one or more than one depends on the actual use scenario of the user equipment or the deployment status of the wireless networks around the user equipment. When the quantity of interference signals is more than one, the effective signal-to-noise ratio may be corrected for each interference signal sequentially. Specifically, the user equipment or the processor may have a capability of processing multiple interference signals. The user equipment first calculates an effective signal-to-noise ratio before correction, and then traverses all possible interference signals. If the correction processing needs to be performed on an i-th interference signal, the mapping table or the mapping formula described in the foregoing embodiment is used to calculate an i-th corrected signal-to-noise ratio. The i-th corrected signal-to-noise ratio is used as the input for the next interference signal, that is, the (i+1)-th interference signal, during correction, where i is an integer that is greater than or equal to 1 and less than or equal to N, and the initial value of i is 1. After each iteration, 1 is added to the value of i, until i is equal to N. N is the quantity of interference signals and is an integer greater than or equal to 2. That is, the processor may calculate the effective signal-to-noise ratio in an iterative manner over the parameters of each interference signal, to improve system performance.
A specific iteration process of the foregoing method may be shown in FIG. 7. In S71, the processor corrects the effective signal-to-noise ratio for the i-th interference signal based on the mapping relationship, that is, the processor determines, based on the mapping table or the mapping formula, a corrected signal-to-noise ratio corresponding to a parameter indicating a receiver algorithm used by the current user equipment, a parameter of the i-th interference signal in the parameters of the N interference signals, and the effective signal-to-noise ratio. In S72, it is determined whether i is equal to N. If i is equal to N, in S73, the corrected signal-to-noise ratio is output as the final correction result. If i is less than N, S74 is performed, that is, the effective signal-to-noise ratio is replaced with the corrected signal-to-noise ratio, 1 is added to the value of i, and the process goes back to step S71. Because the correction is performed over the multiple interference signals, the accuracy of the signal-to-noise ratio obtained by means of calculation may be further improved in this embodiment.
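The iterative flow of FIG. 7 can be sketched in a few lines of Python. The helper correct_snr is assumed to be whichever mapping formula or table lookup is in use (for example, the sketches above); the loop simply feeds each corrected value back in as the input for the next interference signal.

    # Sketch of the FIG. 7 iteration: the result for the i-th interference signal
    # becomes the input SNR for the (i+1)-th. correct_snr(snr_db, params) stands
    # in for the table or formula lookup described earlier.

    def correct_for_all_interferers(snr_eff_db, receiver_params,
                                    interferer_params_list, correct_snr):
        snr = snr_eff_db
        for interferer_params in interferer_params_list:      # i = 1 .. N
            params = dict(receiver_params, **interferer_params)
            snr = correct_snr(snr, params)                     # feeds the next pass
        return snr                                             # final corrected SNR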
Optionally, the method for determining a signal-to-noise ratio may further include: in S24, the user equipment or the processor determines channel state information based on the corrected signal-to-noise ratio obtained by means of calculation, and reports the channel state information to a serving base station of the current user equipment. A channel state information reporting step, that is, a channel feedback step, is thus added to the method for determining a signal-to-noise ratio, which is equivalent to providing a channel state information reporting or channel feedback method in this embodiment. In this case, the processor may be further divided into a processing unit configured to determine the channel state information and a radio frequency apparatus (not shown in FIG. 3) configured to report the channel state information. The radio frequency apparatus and the processing unit may be located in the same chip or in different chips. The channel state information in this embodiment may include at least one of a rank indicator (RI), a precoding matrix indicator (PMI), or a channel quality indicator (CQI). The CQI may further include a wideband CQI or a narrowband CQI, and the PMI may likewise include a wideband PMI or a narrowband PMI, which is not limited in this embodiment. Because the channel state information is obtained based on the corrected signal-to-noise ratio, it more accurately reflects the actual state of the channel used by the user equipment in the wireless communication. The serving base station can therefore also schedule more accurately based on the channel state information fed back by the user equipment, so as to improve the data throughput of the user equipment, thereby improving overall performance of the wireless communications system.
In an embodiment shown in FIG. 3, the processor may be specifically a communications processor, a baseband and radio frequency processor, a general-purpose processing unit, or a wireless modem, and may be configured to run any one of the wireless communications protocols, such as LTE, UMTS, or GSM. The processor may be driven to work by the necessary driver software, which may be stored in the memory or another storage unit; the driver software may be the protocol software that runs the foregoing wireless communications protocol. The processor may include one or more chips, or may be implemented by using an integrated circuit or a circuit in another form, for example, a printed circuit, or a combination of the two. An integrated circuit is a circuit manufactured on a semiconductor substrate by using an integrated circuit manufacturing process, and may include at least one of a digital circuit or an analog circuit. A chip includes a large quantity of integrated circuits and a peripheral packaging component.
The following provides an example of a method for obtaining the mapping relationship in this embodiment. A person skilled in the art may obtain the mapping relationship mentioned in this embodiment in the manner shown in FIG. 4 during development or production qualification of the user equipment. A person skilled in the art may construct a conventional receiver emulation program and an improved receiver emulation program in a computer emulation environment. The conventional receiver emulation program is used to emulate a signal-to-noise ratio calculation method of a receiver in the prior art, and the improved receiver emulation program is used to emulate the method provided in this embodiment of the present invention. A group of parameters that is the same as the parameter set {Φ} described in the foregoing embodiment may be configured for the two programs. An initial signal-to-noise ratio of the conventional receiver emulation program, that is, SNR_eff described in the foregoing embodiment, is set to a value at which a receive accuracy ratio, for example, a frame error rate, of the conventional receiver emulation program reaches a preset value, for example, 10%. For the improved receiver emulation program, all possible SNR values are traversed and sequentially configured, and for each SNR value a receive accuracy rate, for example, a frame error rate, of the improved receiver emulation program is calculated. The traversal and iteration continue until the receive accuracy rate of the improved receiver emulation program reaches the preset value; at that point, the SNR value input into the improved receiver emulation program is the corrected value of SNR_eff under that group of parameters {Φ}. That is, for different groups of parameters {Φ}, correspondences between multiple discrete SNR_eff values and multiple discrete corrected SNR values may be obtained in the foregoing manner, so as to form the mapping table described above. As described above, a mapping table reflecting correspondences between multiple discrete values may also be converted into an f( ) function; that is, a person skilled in the art may select an appropriate f( ) function by using a function fitting method to approximate the actual correspondences between the multiple discrete values. Function fitting is already in relatively common use in mathematics and computer science, and is not described in detail herein.
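The offline calibration loop described above can be sketched as follows. The frame_error_rate callable stands in for a link-level emulation of the improved receiver; the toy error-rate model, the sweep range, and the 10% target below are illustrative assumptions, not values taken from the embodiment.

    # Sketch of the offline calibration: for a given parameter set, sweep
    # candidate SNR values for the improved-receiver emulation until its frame
    # error rate matches the target reached by the conventional receiver.

    import numpy as np

    def build_mapping_entry(snr_eff_db, params, frame_error_rate,
                            target_fer=0.10, sweep=np.arange(-5.0, 30.0, 0.25)):
        best_snr, best_gap = None, float("inf")
        for candidate in sweep:
            gap = abs(frame_error_rate(candidate, params) - target_fer)
            if gap < best_gap:
                best_snr, best_gap = candidate, gap
        # best_snr is the corrected value paired with snr_eff_db for this {Φ}.
        return snr_eff_db, tuple(sorted(params.items())), best_snr

    def toy_fer(snr_db, params):
        # Toy stand-in for a link-level emulation program, not a real receiver model.
        return 1.0 / (1.0 + 10 ** ((snr_db - 8.0) / 2.0))

    print(build_mapping_entry(10.0, {"interference_modulation": "64qam"}, toy_fer))

Repeating this for each parameter combination and each discrete SNR_eff value yields the mapping table, which can then be fitted with a function f( ) if the formula form is preferred.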
FIG. 8 is a schematic diagram of an apparatus for determining a signal-to-noise ratio in wireless communication according to an embodiment of the present invention. The apparatus may be located in user equipment and is configured to correct an effective signal-to-noise ratio. The apparatus may include: an effective signal-to-noise ratio determining unit, configured to determine an effective signal-to-noise ratio of a received signal of the current user equipment in the wireless communication based on a minimum mean square error criterion; a parameter determining unit, configured to acquire at least one parameter of the effective signal-to-noise ratio; and a correction unit, configured to determine, based on a mapping relationship used to correct the effective signal-to-noise ratio, a corrected signal-to-noise ratio corresponding to the at least one parameter and the effective signal-to-noise ratio. Optionally, the apparatus may further include: a channel state information reporting unit, configured to determine channel state information based on the corrected signal-to-noise ratio, and report the channel state information to a serving base station of the current user equipment. For the specific steps performed by each unit, reference may be made to the descriptions in the foregoing method embodiment, which are not described in detail herein again.
A person of ordinary skill in the art may understand that all or some of the processes of the methods in the embodiments may be implemented by a computer program instructing relevant hardware, such as a computer processor. The computer program may be stored in a computer readable storage medium. When the program runs, the processes of the methods in the embodiments are performed. The computer readable storage medium may be a magnetic disk, an optical disc, a ROM, a RAM, or the like.
The foregoing are merely exemplary embodiments of the present invention. A person skilled in the art may make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. For example, specific shapes or structures of components in the accompanying drawings in the embodiments of the present invention may be adjusted according to an actual application scenario.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention or the prior art, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a brief schematic diagram of a principle of forming an interference signal in wireless communication according to the prior art;
FIG. 2 is a brief schematic diagram of a method for determining a signal-to-noise ratio in wireless communication according to an embodiment;
FIG. 3 is a brief schematic structural diagram of user equipment for determining a signal-to-noise ratio in wireless communication according to an embodiment;
FIG. 4 is a brief schematic diagram of a method for acquiring a mapping function according to an embodiment;
FIG. 5 is a brief schematic diagram of a mapping table for determining a signal-to-noise ratio in wireless communication according to an embodiment;
FIG. 6 is a brief schematic diagram of another mapping table for determining a signal-to-noise ratio in wireless communication according to an embodiment;
FIG. 7 is a brief schematic flowchart of sequentially correcting effective signal-to-noise ratios for multiple interference signals in an iterative manner according to an embodiment; and
FIG. 8 is a brief schematic diagram of an apparatus for determining a signal-to-noise ratio in wireless communication according to an embodiment.
Editorial opinion by Emily Feek
One of the hottest trends of the last decade, self-care, is undoubtedly going to follow us into 2020. You’ve heard it all before: take time for yourself, relax, run a hot bath and read a book.
But self-care isn’t just sitting back and doing nothing after a busy day. There’s validity to tuning out for a while and watching some TV, but is it self-care? Not really.
Self-care means different things to different people. There are a number of ways to practice self-care, and while taking a long bath might be a great way to unwind for one person, to someone with anxiety and overactive thoughts, it might just be too much time to think.
Regardless of what works for you, we can probably all (or at least mostly) agree that self-care is important to your relationship with yourself.
Often when we talk about relationships, we refer to how we interact with others. But how do we interact with ourselves? We spend more time with ourselves than anyone else does, so shouldn’t that be a priority?
Taking care of ourselves comes down to the same thing all of our other relationships do: putting in effort.
This effort is particularly vital for full-time students with part-time jobs; a paper written by Whitney McLaughlin in the Journal of American College Health addressed the need for resident advisors to practice self-care to avoid burnout.
According to McLaughlin, “achieving wellness requires intentionality in health-related decision-making.” That means our self-care should be, you guessed it, intentional.
If we are making decisions for our health and well-being, that means we shouldn’t just sit idly by and binge-watch “Friends” for the third time, justifying it as self-care. (Trust us, Netflix is doing you and your well-being a favor by ditching “Friends.”)
Our best bet at practicing active and effective self-care is to focus on consistently practicing healthy behaviors — forming good habits, essentially.
Good habits are always going to be good for your health and well-being, and self-care is considered integral to maintaining mental health as well as physical health, according to McLaughlin.
This holds even more true when it comes to managing burnout from work.
When considering our relationship with ourselves, we need to consider that we have many selves; each person will view us differently, and it’s accurate to say we adopt different personas to suit whatever situation we find ourselves in, notably between our work and personal lives.
Bringing your work home with you is a common issue that boils down to a lack of separation between our work and personal selves and a lack of effective self-care, according to an article by Sara Bressi and Elizabeth Vaden published in the Clinical Social Work Journal.
Though the article focused on social workers and the unique demands of that field, the concepts of burnout and bringing home your work are applicable to us all in different ways. Even if our work is not as emotionally demanding as social work can be, we still have to deal with workplace stress and the burden of overworking ourselves.
One notable element of the article is this: self-care should acknowledge the many facets of the self, focusing on balancing the different elements of ourselves and the specific expectations and roles we fill.
Self-care isn’t just about relaxing, but about containing “the impact of the professional self on the personal,” and separating our work stresses from our personal lives when possible, according to Bressi and Vaden.
The best way to manage those stressors is with behavior that promotes well-being, aka intentional self-care habits. Things like active hobbies, exercise, creative activities and socializing with the people we care about, are all viable options for putting distance between yourself and your work, according to Bressi and Vaden.
Sometimes, though, self-care is more than managing day-to-day stress. Sometimes it’s about coping with mental illness and that classic seasonal affective disorder that we’re running into this time of year. That’s okay.
If you want to practice active self-care this year and work on improving yourself, be intentional about it. Do things that you know work for you and make you feel good. Go outside more, spend time with friends or learn new skills like knitting or cooking.
And most importantly, don’t be afraid to utilize the resources we have on campus. When it comes to managing mental health, a lot of us aren’t prepared to go it alone and we shouldn’t have to.
Self-care isn’t just about the things you do by yourself. It’s about the things you do for yourself, and that includes reaching out for help from others. Utilize the Counseling Center and the Wellness Wednesday group meetings if you can, and if you need.
We owe it to ourselves to keep ourselves healthy and happy, and that's what self-care is all about. So the next time you pull up “The Office” and decide to rewatch a season, take a second to ask yourself if it's actually fulfilling or if you're just dodging issues, because escapism only works for so long before everything compounds. Take care of yourselves.
complaints were diagnosed with carotid dissection.
Methods: For this purpose, cranio-cervical CT angiography images of patients admitted to our clinic with suspected carotid artery dissection and diagnosed with ICA dissection were retrospectively reviewed. The characteristics of patients with ICA dissection and of normal patients undergoing cranio-cervical CT angiography were investigated using 3D anatomical programs, specifically the course of the stylohyoid ligament, the distance between the stylohyoid ligament and the carotid artery, and contact between the stylohyoid ligament and the carotid artery. The Sectra and AW Server 2 archives and the XIO 4.80 radiotherapy planning program were used for 3D analysis of the images. The data obtained from the analyses were evaluated using the SPSS statistical program.
Results: Descriptive statistics, the Independent Samples T-Test, Pearson correlation analysis, multiple regression analysis, and graphical programs were used in the statistical evaluations. According to the Independent Samples T-Test results obtained in SPSS, at the 0.05 level (2-tailed), the difference between groups in the distance between the ICA and the stylohyoid ligament (t = 5.950, sig = 0.000) and the difference in patient age (t = 2.226, sig = 0.026) were significant. In addition, the axial (t = 2.938, sig = 0.005) and sagittal (t = 2.307, sig = 0.026) angles differed between the genders. In the Pearson correlation analysis, styloid axial angles (r = -0.316, sig = 0.029), styloid sagittal angles (r = -0.333, sig = 0.020), and age (r = -0.332, sig = 0.026) were found to be correlated with ICA dissection at the 0.05 (2-tailed) level. More importantly, in patients with ICA dissection, a strong correlation was found for the stylohyoid ligament-carotid distance (r = -0.659, sig = 0.000) at the 0.01 (2-tailed) level.
The Republic of Costa Rica became the 52nd country to join the Climate and Clean Air Coalition (CCAC) today, making it the 114th Coalition partner.
Upon joining the Coalition, Costa Rica’s Minister of Environment and Energy, Dr. Edgar Gutiérrez Espeleta, said his country fully endorses the Coalition’s Framework and meaningful action to reduce short-lived climate pollutants.
“We are particularly interested in the CCAC initiatives that promote hydrofluorocarbon (HFC) alternative technology and standards, reduce black carbon from heavy duty diesel vehicles and engines, and mitigate short-lived climate pollutants from the municipal solid waste sector,” Minister Gutiérrez Espeleta said in a written statement.
Costa Rica recently launched plans to achieve low emissions development and reduce the social, environmental, and economic impacts of climate change. Many of these actions will lead to reductions of short-lived climate pollutant emissions.
Canada’s Ambassador to Costa Rica, Michael Gort, welcomed Costa Rica to the Coalition. Canada is current Co-Chair of the CCAC.
“We welcome Costa Rica and feel confident that a country with such a strong environmental reputation can make a significant contribution to the Coalition partnership,” Ambassador Gort said. “This milestone underscores once again Costa Rica’s commitment to sustainability.”
The country is wasting no time moving forward, combining its official membership in the CCAC with a national seminar on sustainable transport. The seminar, supported by the Government of Canada, looks at how electric transport can help fight climate change and improve the quality of life, air, and health in cities.
Gustavo Máñez Gomis, UN Environment’s Climate Change Coordinator for Latin America and the Caribbean, said moving away from polluting vehicles and toward electric mobility reduces urban air pollution, protects the climate, reduces energy bills and improves human health, particularly in countries like Costa Rica which has an almost 100% share in renewables.
“As a sustainability leader in the region, and with the Electric Mobility Bill under preparation, we expect to see Costa Rica leading the way in the transition to electric mobility in Latin America,” Mr Máñez Gomis said.
Helena Molin Valdés, Head of the UN Environment hosted Climate and Clean Air Coalition Secretariat welcomed Costa Rica as a partner.
“Costa Rica is known for its global leadership in protecting the environment and we welcome their commitment to reduce short-lived climate pollutants,” Ms Molin Valdés said. “They are already working with Coalition partners to find ways to reduce emissions from the transport sector and have made energy efficiency and low emissions a national priority. We look forward to their contributions to the work of the Coalition.”
Under its ‘7th National Energy Plan 2015-2030’, Costa Rica has identified priorities to reduce emissions from electricity production and the transport sector. In its ‘National Development Plan 2015-2018’ the country has also set a strategic objective to promote actions against global climate change in order to guarantee security, human safety and the country’s competitiveness.
In 2015, the country submitted its Intended Nationally Determined Contribution (INDC) where it announced actions to increase its resilience to the impact of climate change and strengthen its capacity for low emissions development.
The Climate and Clean Air Coalition is a voluntary global partnership of 52 countries, 17 intergovernmental organizations, and 45 businesses, scientific institutions and civil society organizations committed to catalyzing concrete, substantial action to reduce Short Lived Climate Pollutants (including methane, black carbon and many hydrofluorocarbons). The Coalition has 11 initiatives working to raise awareness, mobilize resources and lead transformative actions in key sectors. Reducing short-lived climate pollutants can provide benefits to health, development, and the environment. These actions must go hand in hand with deep and persistent cuts to carbon dioxide and other long-lived greenhouse gases if we are to achieve the goal of the Paris Agreement and keep global warming to 1.5 degrees Celsius.
The World Architecture Festival wrapped up its 2014 contest earlier this month, bestowing awards upon 33 buildings that are making innovative design a priority across the globe. From a community library in China that doubles as a playground to a Danish Maritime Museum to a hyper modern church in Spain, the seventh annual WAF winners are as diverse as they are stunning.
What started as a 400-project short list spanning 50 countries earlier this year was whittled down to just a few dozen winning designs across 27 different categories. In the wake of the massive competition, which took place in Singapore, we are profiling each and every building that received commendation. Behold: A comprehensive look at the year's best architecture -- and a glimpse into the future of design, whether it includes water-balanced and energy efficient imaginings, or buildings known as "bespoke bookends" and "spirals of knowledge."
Let us know your thoughts on the projects in the comments. Spoiler: if you're from the United States and you're hoping to see what architectural wonders lay in store for your country, you will be disappointed. While nations like Vietnam, Australia and the United Kingdom make numerous appearances on this list, you won't find a single American-based design... or, for that matter, hardly any designs in Africa or South America (except Brazil). Here's to next year, folks.
The winners were grouped into three categories: Future Projects, Completed Buildings, and Special Awards.
All photos courtesy of the World Architecture Festival.
The invention provides a method of increasing the capacity of a transmission line that considers both a safety check and an economic analysis. The method includes: determining the capacity-increase demand of the transmission line; performing a temperature constraint check to obtain the maximum load rate that satisfies the temperature constraint; performing a failure rate constraint check to obtain the maximum load rate that satisfies the failure rate constraint; choosing a transmission line load rate that satisfies both the temperature constraint and the failure rate constraint; and carrying out an economic verification to obtain the final capacity-increase program for the transmission line. The real-time temperature of the line is calculated from the heat balance equation, the short-term failure rate of the line is calculated by establishing a PHM-based short-term failure rate model of the transmission line, and the profit of increasing the capacity of the transmission line is evaluated using three indexes: transmission revenue, line loss cost, and life loss cost. The method of increasing capacity of the transmission line considering safety check and economic analysis belongs to the field of loading capacity evaluation of power transmission and transformation equipment.
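As a loose illustration only, the screening logic in this abstract might be organized along the following lines in Python; the helper callables, the temperature and failure-rate limits, and the toy economic model are hypothetical placeholders rather than anything taken from the patent.

    # Rough sketch: pick the largest load rate that satisfies both the conductor
    # temperature limit and the short-term failure-rate limit, then keep it only
    # if the economic benefit (revenue minus line-loss and life-loss costs) is
    # positive. All models and thresholds below are illustrative assumptions.

    def select_load_rate(candidates, conductor_temp, failure_rate, net_benefit,
                         temp_limit_c=70.0, failure_rate_limit=1e-3):
        feasible = [r for r in candidates
                    if conductor_temp(r) <= temp_limit_c
                    and failure_rate(r) <= failure_rate_limit]
        if not feasible:
            return None
        best = max(feasible)
        return best if net_benefit(best) > 0 else None

    chosen = select_load_rate(
        candidates=[0.8, 0.9, 1.0, 1.1, 1.2],
        conductor_temp=lambda r: 40 + 25 * r,   # toy heat-balance surrogate
        failure_rate=lambda r: 1e-4 * r ** 3,   # toy PHM-style failure model
        net_benefit=lambda r: 100 * r - 90,     # toy revenue-minus-cost model
    )
    print(chosen)  # 1.2 under these assumptions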
Decentralized AI is waiting for a Navigator that performs an “architect role” in coordinating and accelerating the efforts of all Agents and hierarchies, applying the beneficial User Experiences holistically and recursively throughout. Perhaps Sophia already has this in mind as part of their role ;-). I make the following suggestions that might make the job easier:
- plan, organize and share the derivation, expression and maintenance of Scenarios
- in a business context, derive these (for example) from the Client’s Business Rules, Vision and Success Factors
- in a “personal” context, derive these (for example) from social media, expert groups etc.
The goal in each case is to define the “common language” between the abstract (User needs and wants) and the solution in a way that the User could ratify and prioritise them
(Reference: http://www.opengroup.org/public/arch/p4/bus_scen/bus_scen.htm)
- Evolve the Scenarios as the maturity and context of the target User Experience (UE) changes, for example, from:
- ”Get my document reviewed“ to
- “Send a document out for multiple review according to a predetermined list, with appropriate response & progress tracking, and have a mechanism for capturing comments which can be exposed for Audit trail purposes” to
- “Get real-time, decentralized peer review of my Idea, including explicit loops of enhancement and acceptance, with trusted Artefacts generated and validated” to…
- Identify the Scenario Patterns in all industry and personal (UE) contexts that are relevant for any business area and/or opportunity to enhance value
- Leverage “Pattern-Based Systems Engineering” (PBSE) to identify the small-to-large scale regularities
(Reference: https://www.incose.org/docs/default-source/enchantment/140514schindel-intro-to-pbse1f58e68472db67488e78ff000036190a.pdf?sfvrsn=928381c6_2)
- Use Scenario-based Requirements Elicitation techniques to automate and formalise the scope of any Client Proposition or general change to UE
- “Automate” the generation of a bi-directional bridge between the abstract definition of User needs and the Logical-Physical architectural constructs that must result
(Reference: https://enfocussolutions.com/personas-and-scenarios-as-a-requirements-elicitation-technique/)
- Generate the architectures from the Scenarios and Patterns in each specific context:
- Automation of Logical from (example) functional interdependency of Requirements plus their criticality
- Exploration of Physical from (example) an analysis of the Solution Space (with risk mitigation etc.)
- Eliminate the time-consuming manual complexity and unintentional redundancy that can result from a lack of traceability to Scenarios
- Ratify the definition and intended UE value enhancement with the Users / Customers in their “language”
- something that can be universally understood and appreciated, keeps humans “in-the-loop”
- The result must be VERY EASY for Users to visualize how it fits their Vision, maybe presented with Extended Reality (XR) tools etc.
- Apply Solution Value Analysis techniques to make the connection between “abstract” (requirements and logical architecture) and “physical” (solution architecture, product, process)
- quantify how well (or not) a proposal for “physical” Product, Service, Technology, People, Materials meets each need or want in each context:
- establish the extent and nature of any acceleration or convergence in activity (technology-centric or otherwise) that can improve the UE and/or make it more widely available
- “IF (tech x +10% faster) AND (tech x,y converge like this: …) THEN (Application in context z becomes (cost-)effective to improve the value of UE → …)”.
History of Chess
The origin of chess is debated, and there is no real consensus on how the game arose or how it evolved from its earliest forms into the game played today.
Some say that versions of chess and its board date back to Ancient Egypt or Dynastic China, but the most widely supported view is that it first appeared in India around the 6th century, at that time under the name Chaturanga.
Over time it reached Persia, where its name changed to Xatranje and its rules probably changed as well.
It then spread slowly throughout Europe, and it took about 500 years for Xatranje to begin to look more like the chess we know today.
In the year 1475 the game began to be consolidated with its current rules, and its name also changed to chess, but it still took a few hundred years for Europe to play with the fully modern pieces and rules.
In the middle of the nineteenth century, chess tournaments and competitions began to appear, giving rise to a sport that was long dominated by the same players, with World Champions who maintained their reigns for long periods, 20 or 30 years for example.
Chess game
The object of the game of chess is to deliver “checkmate” to the opponent's king. Checkmate occurs when the king is attacked (“in check”) and all of the following are true:
- The king cannot move to any square (all of them are in the attack line of the opponent's pieces);
- No piece can be interposed in front of the king to protect it;
- The piece giving check cannot be captured.
If all of these conditions hold, then “mate” or “checkmate” occurs and the game ends, with the victory going to the player who delivered the mate.
Board and Chess Pieces
The chessboard is composed of alternating white and black squares, eight squares along each side.
The pieces come in the same two colors, and each color corresponds to one player's pieces.
The board should be positioned so that the corner square on the right-hand side of the row closest to each player is a white square.
The game of chess consists of the following pieces:
- Pawn
- Rook
- Knight
- Queen
- Bishop
- King
Each piece has its own fixed starting position on the board.
Please note: The starting squares of the King and Queen depend on their color, following the rule:
White King – black square
Black King – white square
White Queen – white square
Black Queen – black square
The player with the white pieces always moves first.
Chess Moves / Plays
Each piece has its own unique way of moving around the board, which allows a very large number of patterns and strategies and is what makes chess such an interesting game of strategy.
The moves allowed for each piece are:
Pawn
The pawn can only move forward (no other piece has this restriction). It may advance one square, or two squares if it has not yet moved. It is also the only piece that captures differently from the way it moves: it can only capture a piece that stands diagonally in front of it.
Rook
The rook has a fairly simple movement. It can move forward, backward, left or right in a straight line across the entire board, as long as no piece blocks its path.
Knight
The knight has the most unusual movement of all. It moves in an L shape: two squares in one direction and then one square to the side on each move. It is the only piece allowed to jump over other pieces when making its move.
Bishop
The bishop moves much like the rook, except that instead of moving along ranks and files it moves diagonally, and it likewise cannot pass over any piece.
Queen
The queen is considered the most powerful piece in chess because of its versatility: it can make the same movements as both the rook and the bishop.
King
The king can move one square in any direction. Its only restriction is that it cannot move to a square that is attacked (“in check”) by an opponent's piece.
Special moves
There are certain moves that can be made under special circumstances that make this magnificent game even more interesting and competitive. The special plays that exist are:
Pawn Promotion
When one of your pawns reaches the far end of the board (the opponent's first rank), it must be replaced by another piece (a bishop, rook, queen, or knight). The queen is usually chosen, since it is the most powerful piece in the game.
Castling
Castling is a combined move of the king and a rook, which change positions at the same time, in order to shelter the king by moving it away from the center of the board while bringing the rook to a better attacking position.
In this move, the king moves two squares toward the rook, and the rook moves to the square on the other side of the king, passing over it. There are two forms of castling, kingside (short) and queenside (long); queenside castling is made with the rook farther from the king, and kingside castling with the nearer rook.
To make this move, certain conditions must be met:
- The king must not have moved;
- The rook involved must not have moved;
- The path between the rook and the king must be clear;
- The king may not castle while in check, and may not pass through or land on a square that is attacked by an enemy piece.
En Passant
This is a special pawn capture: a pawn standing on its fifth rank may capture an opponent's pawn that has just advanced two squares and landed beside it, as if that pawn had advanced only one square.
For this move there are also special conditions, which are:
- Your pawn must be on its fifth rank, counting from your own side of the board;
- The opponent's pawn must have just advanced two squares and ended up beside yours;
- The capture must be made immediately, on the move directly following your opponent's advance.
Some Chess Rules
Chess also has a number of additional rules. Here we highlight only the main ones that usually come up.
- When a player touches a piece, they must move it to any square that is valid for that piece. Once the piece has been released on a square, the move cannot be taken back, unless the move made was invalid.
- When promoting a pawn, the player may touch a piece that is off the board and exchange it for the pawn, thereby completing the move;
- When castling, the player must first touch the King and then the Rook, thus making their exchange of positions; touching both at the same time is also acceptable. If a player picks up the King intending to castle but castling turns out to be impossible, the King must be moved to another valid square instead.
- Players must not speak during the game, except to offer a draw or to alert the referee to an infraction. In games between amateurs it is common to announce “check”, but among professionals it should not be announced.
Jamaica is known for its white sand beaches, crystal clear waters, and endless days of sun. But perhaps more notably the island is known for its reggae music, which has made Jamaican culture an internationally recognized and marketable product. However, despite its mainstream viability and entertainment value, reggae music originated as a deeply political form of protest and contestation against the colonial and imperialist forces operative in the social context of Jamaican life. Musical ambassadors like The Mighty Diamonds, the Abyssinians, Judy Mowatt, Burning Spear, Jimmy Cliff, Bob Marley, Peter Tosh, Bunny Wailer, Rita Marley, and Marcia Griffiths, along with many others, can be credited with bringing reggae music to the world stage and giving it international recognition. Yet, despite its worldwide popularity, few (outside of its practitioners and followers) truly understand the cultural and political climate in which the music developed. As both a fan of the music and culture, and as a child of Jamaican parentage, I have always been intrigued by the social, political, and cultural context from which the music emerged. My annual visits to the island of Jamaica further informed my interpretation of the history of reggae music and the impact of Rastafari culture on the music as well.
Since the beginning of its rise to international popularity in the 1960's and 1970's, there has been a close association between reggae artists and Rastafari culture, a culture founded on resistance. The Rastafari in Jamaica have inhabited a marginalized place in society. Scorned by society for their beliefs and appearance, they have been, and continue to be, disenfranchised from the means of production and wealth in Jamaica. Perhaps because of this disenfranchisement, Rastafari brethren and sistren have always been at the forefront in contesting the institutionalized racism and classism inherited from the colonial system. The Rastafari in Jamaica were among the first on the island to look to Africa as the source of their ancestry and identity, and were also among the first to use reggae music as a form of protest against the oppressive social conditions on the island. Following in the revolutionary spirit of the Maroons (communities of runaway slaves who fought against British slavers in Jamaica in the early 18th century), the Rastafari sought to distance themselves from the colonial culture of the island both in appearance and in beliefs.
The brutality targeted against the Rastafari in Jamaica was manifest in the police practice of cutting off a Rastafarian's locks, an actual and symbolic act meant to strip the Rastafari of their faith and power. It is little wonder that reggae music, a socially acceptable and viable product, became a means of political resistance through performance. Reggae music's lyrics, imbued with biblical imagery and symbolism, constitute a performative act. In the process of performing the music, reggae artists were actually protesting against the oppressive forces of the Babylon system. For the Rastafari, the Babylon system represents all of the exploitative and oppressive practices in Jamaica (in particular) and Western society (in general). This oppressive system began to come under increasing scrutiny in reggae music. Marley's lyrics, like those of his contemporaries, are indicative of the reggae protest songs popular during the 1960's and 1970's; they illustrate the reggae artists' negative view of the Babylon system, manifest in the oppressive political policy of the Jamaican government at the time. During the early rise of reggae protest songs, the Jamaican government banned many reggae recordings critical of governmental practices, or such songs received very little airplay. This practice of banning or underplaying the reggae protest songs reveals the social force and power of this form of protest, which the Jamaican government sought to control.
Despite (or perhaps because of) the Jamaican government's recognition of the revolutionary power of reggae protest music, by the mid to late 1970's, Rastafari symbolism and reggae music began to be increasingly appropriated by the political parties in Jamaica. This was most notably manifest during the fierce and violent political campaigns of the late 1970's between Michael Manley and opposition leader Edward Seaga, in which Rastafari religious symbolism and reggae music became an integral part of the political landscape. Rastafari symbolism and reggae music became a symbolic representation of both political parties' attempts to show their sympathy with the plight of the disenfranchised and underrepresented populations in Jamaica. This attempt by both parties to show a connection to reggae music and Rastafari culture was evidenced at Bob Marley's 1978 "One Love Peace Concert" in Kingston, Jamaica, at which Manley and Seaga both appeared on stage. Marley, flanked by the two politicians, clasped both of their hands together with his and raised them over his head in a symbolic call for unity and an end to politically inspired violence in Jamaica. Yet, despite this symbolic allegiance, the actual manifestation of political policy to improve the conditions of the majority of the impoverished Jamaican citizens and policies to stop the violence associated with politics in Jamaica were never realized. Although this appropriation of reggae protest music did not result in any structural changes in governmental practices or political policy, it did bring reggae protest music into the national consciousness as a powerful political protest tool. In many ways, the appropriation of reggae protest music by the Jamaican political parties legitimized this performative practice as a viable form of political protest and as a type of nonviolent resistance.
Reggae music and Rastafari culture continue to be revolutionary and accessible forms of performative art/speech. A new generation of reggae artists emerging in the 1990's was able to combine conscious lyrics with hardcore dancehall and hip-hop rhythms, again popularizing this form of musical protest and introducing the music to a new generation of listeners. Current reggae artists like Buju Banton, Anthony B, Sizzla, Capleton, and the Marley Brothers, along with many others, continue to usher reggae music and Rastafari culture into the mainstream consciousness of Western society. Reggae music has contributed to a worldwide youth culture in which the music is used as a commentary and criticism of social and political policies considered unjust by the artists. Reggae musicians continue to use their music as a forum for political protest and as a means for raising consciousness amongst urban youth populations. The rise in popularity of Rastafari symbolism in the late twentieth century into the twenty-first century is evidenced by the ever-broadening social acceptance (in the West) of the dreadlocks hairstyle and reggae music. Further, the inclusion of reggae music as a category in the Grammy awards in 1985 was an acknowledgement of the mainstream acceptance of both the music and the culture from which it originated.
Although reggae music and Rastafari culture continue to work as viable and accessible forms of protest, the recent trend in reggae and dancehall music towards more 'rudeboy', materialistic lyrics undermines the historical and social context out of which earlier reggae protest music originated. Despite the far-reaching possibilities for the continued use of reggae music as a platform for political and social commentary, its inclusion into mainstream popular culture must necessarily shape the form and content of the music and the culture. Rastafari symbolism, likewise, has continued to lose its meaning and significance in the face of its own rise in popularity. Reggae music and Rastafari culture, which historically constituted a performative act of protest and contestation, both in the performance of the music and in the physical embodiment of a Rastafari appearance, have come to be identified in popular culture as simply a genre of music or a style of dress and hairstyle. The diminishment of reggae music and Rastafari culture as forms of protest is evidenced in the fact that as the music and culture become increasingly more popular and mainstream, the revolutionary history of this particular performative practice is slowly being lost. The loss of the historical context out of which the music originated, coupled with the economic incentives to mass produce the music and culture, have moved this performative practice farther away from the realm of revolutionary practice and closer to the realm of performance as product and commodity. However, as long as there is a single reggae artist who continues to honor the struggle and revolutionary spirit out of which this music originated, it will continue to serve as an effective example of performance-as-protest for future generations.
Possible Fifth Force Of Nature Found
Over the years, humans have come up with four forces that can be used to describe every single interaction in the physical world. They are gravity, electromagnetism, the weak nuclear force that causes particle decay, and the strong nuclear force that binds quarks into atoms. Together, these have become the standard model of particle physics. But the existence of dark matter makes this model seem incomplete. Surely there must be another force (or forces) that explain both its existence and the reason for its darkness.
Hungarian scientists from the Atomki Nuclear Research Institute led by Professor Attila Krasznahorkay believe they have found evidence of a fifth force of nature. While monitoring an excited helium atom’s decay, they observed it emitting light, which is not unusual. What is unusual is that the particles split at a precise angle of 115 degrees, as though they were knocked off course by an invisible force.
The scientists dubbed this particle X17, because they calculated its mass at 17 megaelectronvolts (MeV). One electronvolt is the kinetic energy gained by a single electron as it is accelerated through a potential difference of one volt, and so a megaelectronvolt is the energy gained when an electron is accelerated through one million volts.
What Are Those First Four, Again?
Let’s start with the easy one, gravity. It gives objects weight, and keeps things more or less glued in place on Earth. Though gravity is a relatively weak force, it dominates on a large scale and holds entire galaxies together. Gravity helps us work and have fun. Without gravity, there would be no water towers, hydroelectric power plants, or roller coasters.
The electromagnetic force is a two-headed beast that dominates at the human scale. Almost everything we are and do is underpinned by this force that surrounds us like an ethereal soup. Electricity and magnetism are considered a dual force because they work on the same principle — that opposite forces attract and like forces repel.
This force holds atoms together and makes electronics possible. It’s also responsible for visible light itself. Each fundamental force has a carrier particle, and for electromagnetism, that particle is the photon. What we think of as visible light is the result of photons carrying electrostatic force between electrons and protons.
The weak and strong nuclear forces aren’t as easy to grasp because they operate at the subatomic level. The weak nuclear force is responsible for beta decay, where a neutron can turn into a proton plus an electron and anti-neutrino, which is one type of radioactive decay. Weak interactions explain how particles can change by changing the quarks inside them.
The strong nuclear force is the strongest force in nature, but it only dominates at the atomic scale. Imagine a nucleus with multiple protons. All those protons are positively charged, so why don’t they repel each other and rip the nucleus apart? The strong nuclear force is about 130x stronger than the electromagnetic force, so when protons are close enough together, it will dominate. The strong nuclear force holds both the nucleus together as well as the nucleons themselves.
The Force of Change
Suspicion of a fifth force has been around for a while. Atomki researchers observed a similar effect in 2015 when they studied the light emitted during the decay of a beryllium-8 isotope. As it decayed, the constituent electrons and positrons consistently repelled each other at another strange angle — exactly 140 degrees. They dubbed it a “protophobic” force, as in a force that’s afraid of protons. Labs around the world made repeated attempts to prove the discovery a fluke or a mistake, but they all produced the same results as Atomki.
Professor Attila Krasznahorkay and his team published their observations in late October, though the paper has yet to be peer-reviewed. Now, the plan at Atomki is to observe other atoms’ decay. If they can find a third atom that exhibits this strange behavior, we may have to take the standard model back to the drawing board to accommodate this development.
So what happens if science concludes that the X17 particle is evidence of a fifth force of nature? We don't really know for sure. It might offer clues into dark matter, and it might bring us closer to a unified field theory. We're at the edge of known science here, so feel free to speculate wildly in the comments.
In this blog post, we’ll describe the important role played by female scientists in the development of network ecology, focusing on the contributions by two ground-breaking ecologists and also highlighting contributions from a range of other scientists working in this field.
Jennifer Dunne, currently a Professor and the Vice President for Science at Santa Fe Institute in the USA, has a multidisciplinary background with degrees in Philosophy, Ecology and Systematic Biology, Energy and Resources, and a post-doc fellowship in Biological Informatics. Her research includes pioneering work in the analysis, modelling, dynamics, and functions of ecological networks. She was recently named Fellow of The Ecological Society of America for deep and central contributions to the theory of food web analyses, including extension to paleo food webs.
In 1998, together with Neo Martinez, she published her first work on ecological networks as a chapter in Ecological Scale: Theory and Applications. The chapter (Time, space, and beyond: scale issues in food web research) highlighted the importance of considering spatial, temporal, and other relevant scales such as species richness when analysing patterns in food webs.
Her first peer-reviewed articles in this area appeared in 2002 and continue to be influential. They revealed the unique structure of trophic ecological networks and identified aspects of food web structure that drive patterns of community response and robustness to species loss. This pioneering work was a stepping stone for the development of simulations of the consequences of species losses at community and ecosystem levels, and of the consequences of removal of exotic species, identification of keystone species and improvement of nature conservancy and restoration strategies.
Jennifer’s food web work uses data from a wide diversity of aquatic and terrestrial ecosystems from all over the world, and she has published general reviews of marine and freshwater systems. Before her focus on food webs, she also studied the plant ecology of subalpine meadows and Mediterranean shrublands. Together with her collaborators, she has made substantial contributions to the study of the structure and dynamics of ecological networks, by exploring the role of adaptive behaviour on the dynamics of food webs, continuing to develop extinction cascade modelling, and investigating the effect of spatial scale on network patterns.
As well as studying modern ecological networks Jennifer has explored patterns of paleo-ecological networks going back half a billion years to the Cambrian. Most recently she described and analysed a highly resolved feeding interaction dataset for 700 lake and forest taxa from the 48 million-year-old early Eocene Messel Shale. The study suggested that modern trophic organization developed along with the modern Messel biota during an 18 million year interval of dramatic post-extinction change.
Her research team is currently investigating the roles and impacts of pre-industrial humans on food webs and, more generally, the many ways that humans interact with other species all over the world.
Jennifer’s work has inspired – and will certainly continue to inspire – a large number of female and male researchers all over the world.
Jane Memmott is Professor of Ecology at the University of Bristol in the UK. Her early work focused on the community ecology of tropical sandflies. In the first of two post-doc positions, she took her first steps into the world of ecological networks with Charles Godfray, researching tropical host-parasitoid food webs in Costa Rica. She followed this up with a New Zealand-based field study on the ecology of biocontrol.
Jane took up a lectureship post at the University of Bristol in 1996, where she established a group working in the area of community ecology using food webs to examine a wide range of ecological processes. Enlisting the help of keen students, and inspired by Jordano’s 1987 paper – ‘Patterns of Mutualistic Interactions in Pollination and Seed Dispersal: Connectance, Dependence Asymmetries, and Coevolution’ – she published the first plant-pollinator food web showing community-level links among plants and their flower visitors for a nature reserve in Somerset, UK. Jane further developed her work on invasive species with research investigating the impact of non-native invasive species on ecological communities, examining the effect of introduced biocontrol agents on host-parasitoid communities in Hawaii using a food web approach.
She has guided a number of PhD students through a diverse range of studies, educating the next generation of network scientists and encouraging them to explore habitats around the world in the course of their work, including the restoration of UK hay meadows and heathlands; the impact of non-native plants on UK plant-pollinator and plant-herbivore-parasitoid networks; the effects of non-native plants in the Azores; and the non-target effects of introduced biocontrol agents in Australia.
Recognising that combining ecology with raising a family might not be particularly amenable to tropical ecology expeditions, Jane switched to applying network approaches closer to home. She began investigating the effect of organic farming on pest-control networks and then integrating a range of sampling approaches to construct the first “network of networks” for any system. This work combined a range of different ecological network types, including pollinator, seed dispersal and host-parasitoid networks as well as trophic food webs.
Working with leading pollination ecologists Nikolas Waser and Mary Price, Jane developed novel approaches to examine the effect of extinctions on network robustness, utilising unique historical datasets to simulate the effects of species extinctions and climate change on plant-pollinator networks.
The breadth of ecological network studies undertaken in Jane’s group in recent years includes nocturnal pollen transport networks in Scottish pine forests, the comparison of different sampling methods on plant-pollinator network structure, exploring the role of functional diversity structuring salt marsh island networks and comparisons of pollen transfer and flower visitation networks. Recent key projects have examined the role of nectar in historical UK landscapes and plant-pollinator networks in urban environments.
Jane is always the first person to acknowledge the key roles played by PhD students and postdocs in her lab, and over the years she has recruited a community of international researchers from five continents and more than ten different countries, many of whom have gone on to develop their own research using networks to study ecological systems and questions.
We chose to focus on two prominent scientists who have made significant advances in network ecology over a number of years, as well as influencing and mentoring an enormous number of students and early career researchers. However, ecological network research has grown significantly in the past decade. There are a lot of researchers, many of whom are women, making fantastic contributions and developing new areas of research in diverse areas of network ecology.
We now have a better understanding of network data quality and completeness, and more metrics that help us quantify indirect impacts between species. We also have new model-based approaches that help us understand the formation of ecological network patterns and lead to more realistic simulations of the consequences of species loss. Molecular techniques are helping to improve the resolution of ecological network data and, by combining ecological network analyses with phylogenetic information, to explain macro-evolutionary processes.
Important contributions evaluating the effects of global environmental changes on ecological network patterns of a diverse set of ecosystems have been made in recent years, and classical food web analytical methods are increasingly being applied to novel areas of research (e.g. individual based ecological networks and bacteria metabolic networks).
This new research is, in no small part, due to the ground-breaking work of Professors Jennifer Dunne and Jane Memmott. Their examples will certainly serve as an inspiration to the next generation of female and male ecologists, as well as many young people looking to break into STEM fields.
| https://methodsblog.com/2017/03/06/women-in-ecological-network-research/
The Role of Diet in Chronic Pain
In this blog, I usually focus on the muscle and joint pain that’s caused by our chronic muscle tension and our habitual posture and movement; that’s what Clinical Somatics is all about.
However, there’s another common cause of chronic pain that many people are not aware of: chronic systemic inflammation.
We hear about inflammation a lot these days in terms of the role it plays in many chronic disease conditions, including cancer, diabetes, and heart disease. Research and personal stories of people who have eliminated their chronic pain by reducing inflammation in their bodies are now accumulating as well, and the way that many people achieve this is by changing their diet.
In this post we’ll start with a quick lesson on inflammation and why what we eat can increase or decrease inflammation throughout the body. If you want to learn more, check out What is Chronic Inflammation, and Why Is It Killing Us? and 12 Causes of Chronic Inflammation.
Then, for each of the following painful conditions that are caused or exacerbated by inflammation, I’ll summarize and link to research and personal stories showing how changing your diet can oftentimes eliminate these conditions completely:
Migraine headaches
Rheumatoid arthritis
Osteoarthritis
Fibromyalgia
Endometriosis
Lupus
Multiple sclerosis
Neuropathy
Localized vs. Systemic Inflammation
Localized inflammation is the type that I typically talk about in the context of Clinical Somatics. Localized inflammation occurs at the site of an injury or infection. When cells of your body are damaged or attacked, your immune system kicks in to remove the harmful stimuli and begin the healing process. Blood vessels dilate and capillaries become permeable, and the increased blood flow to the affected area makes it swollen, red, and warm. The swelling can press on nerves and cause pain. Another cause of pain is the release of inflammatory mediators; these substances activate nociceptors (pain receptors) and contribute to the pain you feel in localized inflammation.
When localized inflammation is acute (lasting for a short period of time), it’s beneficial because it facilitates the healing process. However, when localized inflammation becomes chronic—like in a joint that is constantly being put under too much strain and pressure—the physical wear-and-tear and the never-ending immune system attack will gradually destroy protective joint tissues and cause dysfunction and deformity of the joint.
If you feel pain and symptoms of inflammation in just one specific part of your body, then chronic, localized inflammation is probably the type of inflammation involved. In order to reduce or eliminate this type of inflammation, you must release your chronic muscle tension and change the habitual posture and movement patterns that are causing the damage to your body.
However, if you feel inflammatory pain throughout your body, chronic systemic inflammation is likely involved. Systemic inflammation occurs when your immune system produces the inflammatory response throughout your body rather than in just one specific area. When systemic inflammation is chronic, research shows that it can cause pain throughout the body, destruction and scarring of tissues, buildup of plaques in arteries, changes in gene expression, cancer, diabetes, dementia, depression, and other dangerous conditions.
Some proven causes of chronic systemic inflammation are viral and bacterial infections, allergies, smoking, obesity, stress, and alcohol intake. And another major cause of systemic inflammation, our diet, affects the vast majority of people in the world to some degree, whether they’re aware of it or not.
Luckily, the effect of our diet on inflammation and chronic health conditions is becoming more widely researched and recognized. It hasn’t hit the tipping point quite yet, in my opinion, but I believe it will very soon simply because so many people are suffering and desperate for a solution.
Why what we eat causes inflammation
The research on the link between diet and chronic pain conditions (and many other health conditions) is narrowing in on the consumption of animal products versus eating a plant-based diet. So, I’m going to briefly discuss why eating animal products causes systemic inflammation, and the growing movement in favor of a plant-based diet.
Meat, poultry, and fish contain substances called endotoxins. Endotoxins are lipopolysaccharides found in the outer membrane of certain bacterial cells, and they are released when the bacterial cell dies or disintegrates. These compounds are classified as “toxins” because they can cause serious health problems—like cancer—in humans, animals, and other organisms.
Endotoxins are present in meat, poultry, and fish no matter how these products are cooked or prepared. When we eat these foods, the endotoxins are absorbed into our system, triggering the immune system response of systemic inflammation. Eating meat, poultry, and fish that are high in fat increases our absorption of endotoxins.
If you want to learn more about this topic, watch this series of short (2-4 minutes each), easy-to-understand videos by Dr. Michael Greger.
First video: The Leaky Gut Theory of Why Animal Products Cause Inflammation
Second video: The Exogenous Endotoxin Theory
Third video: Dead Meat Bacteria Endotoxemia
Meat and other animal products contain or lead to the formation of many other inflammatory substances as well, including nitrosamines, trimethylamine N-oxide (TMAO), heterocyclic amines (HCAs), N-Glycolylneuraminic acid (Neu5Gc), and polycyclic aromatic hydrocarbons (PAHs).
Meat, poultry, and fish aren’t the only inflammation-causing animal products. Eggs cause inflammation for two reasons: they contain high levels of arachidonic acid and cholesterol.
Arachidonic acid is a fatty acid involved in the inflammatory process. While we need a certain amount of arachidonic acid for essential cellular processes, our body makes all that we need; we don’t need to ingest any. The same is true for cholesterol: our body makes all that we need, and excess cholesterol in the bloodstream triggers the inflammatory process.
Just one egg yolk contains 62% of our recommended daily intake of cholesterol, and eggs are associated with increased cancer risk. And incredibly, egg consumption has similar effects on atherosclerosis and life expectancy as a regular smoking habit.
Lastly, there are several reasons why dairy products cause inflammation. Some people don’t produce the enzyme lactase, which is necessary for breaking down the lactose in dairy products. Some people are intolerant of casein and whey, the two proteins found in cow’s milk. And many dairy products contain hormones and antibiotics that are given to cows in order to stimulate their milk production and prevent infection.
Lactose, casein, whey, hormones, and antibiotics can all trigger the inflammatory response. So it’s not surprising that dairy consumption has been linked to an increased risk of cancer and other inflammatory conditions, like arthritis, asthma, acne, type 1 diabetes, and multiple sclerosis.
What is a plant-based diet?
As a result of the research that Dr. Greger discusses in his videos, as well as other research showing the negative effects of eating highly processed foods, more and more healthcare professionals are recommending a whole-food, plant-based (WFPB) diet. Basically, this diet is a slightly relaxed version of a vegan diet (in which you eat zero animal products). In addition, it emphasizes whole, minimally processed foods.
While many people are under the impression that humans are omnivores, it turns out that biologically we’re frugivorous herbivores—we evolved eating fruit, vegetables, nuts, seeds, roots, and legumes. That’s why eating a whole-food, plant-based diet resolves so many health issues; it’s how we’re meant to eat.
The great thing about adopting a WFPB diet is that it allows for the fact that we’re human and that it’s hard to avoid eating animal products and processed foods all of the time. And the reality is that eating a little bit of animal products or processed food once in a while is probably not going to kill us, so we don’t need to avoid them 100% of the time in order to reap the benefits and avoid chronic disease.
If you want to learn more about eating a whole-food, plant-based diet, I recommend looking at Dr. T. Colin Campbell’s Center for Nutrition Studies and Forks Over Knives.
The growing movement toward a plant-based diet
At the 2017 American Medical Association (AMA) Annual Meeting, the AMA passed resolutions calling for hospitals to eliminate processed meat from their menus and to offer more plant-based meals. Hospitals including NYC Health & Hospitals, the University of Florida Health Shands Hospital, and the University of Rochester Highland Hospital have begun to offer and encourage a plant-based diet for their patients. And according to Food Revolution Network, other mainstream health organizations including Kaiser Permanente (the largest healthcare organization in the U.S.), the Dietary Guidelines Advisory Committee, and the American Institute for Cancer Research have all begun recommending a plant-based diet.
The Economist declared that “2019 will be the year veganism goes mainstream.” The article states that 25% of Americans between the ages of 25 and 34 are vegan or vegetarian.
The school district of Los Angeles, America’s second-largest, now offers vegan meals in its schools.
And yes, folks…even McDonald’s has introduced its McVegan burger in Sweden and Finland.
This Food Revolution Network article provides statistics on how veganism is growing around the world. The Chinese government has encouraged the country to reduce their meat consumption by 50%; Google is shifting toward plant-based foods in their employee cafeterias; and professional athletes including Venus Williams, Tom Brady, 11 of the Tennessee Titans, and a number of NBA players are now eating plant-based diets.
If you want to learn more about the negative health effects of eating animal products and the growing movement toward a plant-based diet, I recommend watching the engaging documentaries What the Health and Forks Over Knives (both available with a Netflix or Amazon Prime subscription, and available to rent or buy through other streaming services).
If you’re curious about athletic performance on a plant-based diet, The Game Changers is a must-see. This “revolutionary new documentary about meat, protein, and strength,” produced by James Cameron, Arnold Schwarzenegger, and Jackie Chan, is sure to change public perception of plant-based diets.
How a plant-based diet can reduce and eliminate chronic pain conditions
Following is just a taste of the research and personal stories available online showing how eating a plant-based diet can improve and eliminate some chronic pain conditions. If you have any chronic health condition, I recommend doing your own research to find out if switching to a plant-based diet might help you.
Migraine headaches:
A 2014 study published in the Journal of Headache and Pain compared the efficacy of eating a plant-based diet to a placebo for the treatment of migraine headaches. The study showed significant decreases in number of headaches, headache intensity, duration of headaches, and number and percent of medicated headaches after following a plant-based diet for 8 weeks. Notably, many of the participants refused to resume their baseline diets at the end of the test period, even though the study required that they do so, because they were experiencing relief on a plant-based diet.
Rheumatoid arthritis:
In this 2002 study, 24 rheumatoid arthritis (RA) patients ate a very low-fat, vegan diet. The patients experienced significant improvement in RA symptoms after just 4 weeks on the diet.
This 2001 study found that a vegan, gluten-free diet improved RA symptoms, and that the positive effects may be related to a reduction in the immune system’s response to food antigens.
This 2000 study found improvement in RA symptoms after following a raw vegan diet.
A quick search on the National Center for Biotechnology Information (NCBI, a division of the National Institutes of Health) returns a list of related research.
Theresa Dojaquez was diagnosed with rheumatoid arthritis at age 37. She refused to go on the medications that her doctor recommended, and after doing her own research, decided that eating a plant-based diet might help her. Two months later, she was free from all pain and inflammation. She went on to run her first half marathon. Six months after adopting the diet, her polycystic ovary syndrome, which she’d battled her whole life, also resolved.
Jon Hinds was an elite Ju-jitsu athlete when he was diagnosed with RA. Luckily, a colleague told him it was reversible with a plant-based diet. Two months later, he was completely pain-free.
35-year-old Emily Brandehoff resolved her rheumatoid arthritis, which involved excruciating pain and severe depression, by switching to a plant-based diet.
Osteoarthritis:
This 2015 study found that osteoarthritis patients eating a whole-food, plant-based diet experienced significant reduction in pain after just two weeks, as well as improvements in their energy and physical functioning.
Fibromyalgia:
This 2001 study tested the effects of eating a mostly raw vegetarian diet on fibromyalgia symptoms. After 2 months, 19 of 30 participants experienced improvement of all fibromyalgia symptoms. And after 7 months, these 19 participants had complete resolution of 7 out of 8 health scales measured in the study.
This 2002 study found that eating a raw vegan diet reduced pain and joint stiffness in fibromyalgia sufferers.
Dr. Michael Greger summarizes research about the effects of vegan and vegetarian diets for fibromyalgia sufferers in his video Fibromyalgia vs. Vegetarian & Raw Vegan Diets.
In addition to plant-based diets, gluten-free diets have been shown to improve fibromyalgia symptoms in people who have celiac disease or non-celiac gluten sensitivity. For these people, going on a gluten-free diet is shown to have “remarkable clinical improvement.”
Read the story of Melayna Evans, who used a plant-based diet to cure her fibromyalgia, as well as her high blood pressure, diabetes, high cholesterol, and sleep apnea.
And read Cheryl Lambert’s story of how she overcame 4 years of fibromyalgia and constant physical pain by going on a plant-based diet.
Endometriosis:
While I couldn’t find any clinical studies on the effectiveness of a plant-based diet for endometriosis symptoms, I found many personal stories:
Christine Krebs had suffered from endometriosis pain for about 20 years, debilitating back pain for 10 years, and also had recurring chronic pain in her hands, feet, legs, and neck. Within three days of going on a whole-food vegan diet, her pain was completely gone.
Katherine Lawrence was diagnosed with Stage 4 endometriosis, ovarian and uterine cysts, and advanced reproductive disease. Her doctor told her that her only option was to get a hysterectomy. But after 6 weeks on a whole-food, plant-based diet, 95% of her endometriosis was gone, her reproductive problems resolved, and the fibrotic cysts she’d had in her breasts since puberty had completely disappeared.
Jessica Murnane couldn’t get out of bed most days because of her Stage 4 endometriosis. But within weeks of going on a plant-based diet, her pain began to fade, and her depression lifted soon after.
Paula recovered from many years of endometriosis pain by going on a vegan diet. Within 8 months her pain was completely gone, and within 2 years she was able to naturally conceive after previously being told she would not be able to.
Lupus:
In this 2019 case study of two lupus sufferers, both experienced improvement in joint pain, kidney function, and energy level after eating a raw, whole-food, plant-based diet for 6 weeks.
Jami Heymann was diagnosed with lupus after experiencing crippling joint pain and partial seizures. Immediately after being diagnosed, she went home and researched how people had recovered from autoimmune diseases. The consistent answer was a whole-food, vegan diet, so she made the switch immediately. Jami’s pain, hair loss, and extreme fatigue gradually decreased over the next few months until they disappeared completely. She has stopped having seizures as well.
At age 16, Brooke Goldner, M.D. was diagnosed with lupus after suffering from debilitating arthritis and migraine headaches. At the same time, she found out that she had stage IV kidney failure—her immune system was destroying her kidneys. She underwent 2 years of chemotherapy in order to stabilize her condition, but had by no means recovered. Luckily, 12 years after her initial diagnosis, she met her future husband who helped her switch to a vegan diet. Dr. Goldner has now been disease-free for 10 years and has given birth to two healthy children.
Joyce Hale suffered from daily seizures and neuropathy episodes caused by lupus. Shortly after switching to a whole-food, plant-based diet, her seizures and neuropathy began to lessen. Five years later, she was able to stop all of her lupus medications completely, and she finally feels free from chronic illness.
Multiple sclerosis:
In 1948, neurologist Roy Swank began testing the effects of a diet very low in saturated animal fats on multiple sclerosis (MS). Thirty-four years after beginning his study, 95% of the participants were without progression of the disease. His work has been called the “most effective treatment of multiple sclerosis ever reported in the peer review literature.”
Corinne Nijjer was diagnosed with fibromyalgia at age 22 and multiple sclerosis at age 24. She continued to have relapses of MS about every six months, and 4 years later, she woke up one day unable to feel anything below her waist. She had been hearing about Roy Swank’s research, and she knew at this moment that she had to make the switch to a low-fat, whole-food, plant-based diet. Corinne quickly felt improvement in her chronic pain and other symptoms. Ten years later, she is completely free from symptoms and relapses.
In 1995, Dr. Saray Stancic woke up from a brief nap during an overnight shift at the hospital to find both her legs numb and heavy. She was diagnosed with MS after an emergency MRI. By 2003 she could only walk with a cane and was dependent on nearly 12 prescription drugs. In 2003, she learned about the benefits of a whole-food, plant-based diet for MS patients, and decided to taper off her medications and make the switch to the WFPB diet. Her neurological symptoms gradually improved and she felt stronger and more energetic. In 2010, she ran a marathon. Dr. Stancic is so passionate about the WFPB diet that she left her infectious disease practice to focus solely on lifestyle medicine.
Neuropathy:
In this study, 21 patients suffering from systemic distal polyneuropathy (SDPN) with adult-onset (Type II) diabetes mellitus (AODM) were put on a low fat, high fiber, whole-food, vegetarian diet. Remarkably, 17 of the 21 patients experienced complete pain relief after just 4-16 days. Four years later, 71% of the patients had remained on the diet, and nearly all of them continued to experience relief or had further improvement.
This 2015 study also found a reduction in neuropathic pain in type 2 diabetes patients after following a low-fat, plant-based diet for 20 weeks.
Always remember: Just because you’re told that you will suffer from a condition and be on medications forever doesn’t mean it’s true. We did not evolve to inevitably get one disease or another; our bodies simply can’t deal with the toxins in our food and our environment, and the stress in our daily lives. Keep an open mind, do your research, and don’t give up hope! | https://somaticmovementcenter.com/role-diet-chronic-pain/ |
In June, archaeologists began excavating a Viking ship in a farm field in eastern Norway. The 1,000- to 1,200-year-old ship was probably the tomb of a local king or jarl, and it once lay beneath a monumental burial mound. A 2018 ground radar survey of a site called Gjellestad, on the fertile coastal plain of Vikiletta, revealed the buried ship.
The Norwegian Institute for Cultural Heritage Research, or NIKU, announced the discovery of the ship in 2018 and said earlier in 2020 that excavations would begin in the summer to save the ship from a wood-eating fungus. The recent study by NIKU archaeologist Lars Gustavsen and his colleagues is the first academic publication of the research findings and includes the previously announced ship burial in Gjellestad, as well as the other ancient graves and buildings. In the recently published article, the radar images reveal the ghosts of an ancient landscape around the royal tomb: farms, a banquet hall and centuries of burial mounds.
Taken together, the buried structures suggest that over the course of several centuries, from at least 500 BCE to 1000 CE, a common coastal farming settlement somehow evolved into a major seat of power on the eve of the Viking Age.
A ghost map from the past
In 2018, NIKU archaeologists criss-crossed the fields of Gjellestad with ground-penetrating radar mounted on the front of an all-terrain vehicle. They revealed a forgotten Iron Age world beneath the crops and pastures. In the radar images, a dozen ghostly rings mark the loose soil that fills the trenches that once surrounded burial mounds. Post holes and wall foundations trace the faint outline of at least three former farmhouses, along with a larger building that could have been an Iron Age banquet hall.
There, the local landowner is said to have held parties, political rallies and some religious gatherings (though others would have taken place outside). A proper banquet hall was not something most farms or small communities would have had; only wealthy, powerful landowners could have built one or would have had any reason to. The hall is said to have marked Gjellestad as an important meeting point for religious events and business, as well as a center of political power for the entire region.
Radar images show postholes that once contained wide, hefty girders arranged in two parallel rows in the center of a 38-meter-long building, with two large rooms in the center. That’s unusually large for a farm, but just right for a banquet hall. And just over a gate from the hall, to the east, were four large burial mounds, including the ship’s grave of Gjellestad.
Ties to the land were very important in Scandinavian culture. People thought it was very important to stay connected with, for example, the land where their ancestors were buried. All the construction in Gjellestad would have been a powerful statement about the ruling family’s hold on their land and their power in the tumultuous centuries leading up to the Viking Age.
Local farm makes it big
The banquet hall’s layout, particularly the way the walls curve outwards slightly, suggests that it may date to somewhere between about 500 and 1100 CE. It’s impossible to be more accurate without actually excavating artifacts to radiocarbon date, but based on comparisons to other sites, the largest burial mounds at the site, including the ship’s grave, likely date from the same broad time frame.
By this time the community in Gjellestad was probably centuries old, starting out as a more ordinary farming community with a fairly typical burial mound nearby. The 2018 radar survey revealed the ring moat footprints of nine small mounds (about 7 to 11 meters wide) in Gjellestad, and archaeologists already knew of dozens of other mounds about a kilometer from the site.
These mounds would likely contain the dead ancestors of those who lived and farmed nearby. One mound near Gjellestad, called M8, may actually belong to a woman; its long oval shape resembles women’s graves from other Iron Age mound cemeteries in Norway. Radar images show features that may be the actual graves buried in the center of the former mounds.
And the images are detailed enough to reveal literal layers of history beneath the fields of eastern Norway. Gustavsen and his colleagues were able to see that people in Gjellestad had built their large burial mounds overlapping the sides of smaller mounds. That suggests the smaller mounds came first.
“This could be a result of chance or practical circumstances,” Gustavsen told Ars. “Another interpretation is that it is a way of associating with an existing cemetery, or perhaps as a more powerful statement where an incoming elite wants to settle in the landscape and do this by placing their burial mounds on existing ones.”
Again, it’s impossible to say for sure how old the mounds are without excavating them, but the larger ones probably date from the centuries just before and during the Viking Age, 500 to 1100 CE, based on comparisons to other sites. The smaller mounds may be centuries older. At least two of the farms may be the same age as the smaller mounds based on their layout.
Work in progress
Excavating the Gjellestad ship will likely take about another month, Gustavsen said. The Gjellestad ship offers archaeologists their first chance to excavate and study a Scandinavian ship burial in more than a century. It is one of only four ship graves in Scandinavia, including the one spotted by a GPR aerial survey in western Norway last year. Only about 19 meters remain of the ship’s hull, but in ‘life’ it was probably 22 meters from stem to stern – a real seagoing vessel of the kind that would eventually take the Vikings from the coasts of Greenland to Constantinople.
Meanwhile, Gustavsen hopes to do more ground-penetrating radar surveys of the landscape around Gjellestad to try and understand more about how the burial mounds, farms and banquet hall fit into the larger world of Iron Age Norway.
“What will happen to the site and this particular field in the future is not clear,” Gustavsen told Ars. These discoveries happened because in 2017 a local farmer applied for a permit to dig a drainage ditch in one of their fields. “The landowner was positive about the process and has been informed and involved from the start,” says Gustavsen. “Currently, the landowner is compensated for lost income, but of course that cannot go on forever.”
The people who work the Vikiletta plain today know that they walk on top of the houses, halls, ritual places and tombs of centuries past. Most of the burial mounds and standing stones that once dotted the gently rolling landscape disappeared under 19th-century plows, but modern farmers occasionally turn up artifacts in their fields, and crops usually grow taller and greener over buried ditches.
Archaeologists excavating the Gjellestad ship work practically in the shadow of one of the largest burial mounds in Scandinavia, known as the Jell Mound, likely the resting place of an Iron Age ruler. Like much of the region’s ancient landscape, it had faded into the background of modern life. “It might have been a bit forgotten – it was something you passed on the highway on your way to Sweden,” Gustavsen told Ars. “Hopefully people will eventually see these sites as valuable assets that can enrich a place.”
Antiquity, 2020 DOI: 10.1584/aqy.2020.39 (About DOIs). | https://techilink.com/this-farm-field-was-once-a-mighty-stronghold-in-iron-age-norway-techilink/ |
Why has economic inequality in the United States increased since the late 1970s?
The last explanation suggests that U.S. government policies created an institutional framework that led to increasing inequality. Since the late 1970s, deregulation, de-unionization, tax changes, federal monetary policies, the shareholder revolution, and other policies reduced wages and employment.
How has income inequality changed in the US since the 1970s?
From 1970 to 2018, the share of aggregate income going to middle-class households fell from 62% to 43%. Over the same period, the share held by upper-income households increased from 29% to 48%. The share flowing to lower-income households inched down from 10% in 1970 to 9% in 2018.
How has income inequality changed over time in the US?
Income inequality has fluctuated considerably since measurements began around 1915, declining between peaks in the 1920s and 2007 (CBO data) or 2012 (Piketty, Saez, Zucman data). Inequality increased steadily from around the late 1970s onward, with a small reduction through 2016 followed by a further increase.
Why is income inequality increasing in the developed world?
We address empirically the factors affecting the dynamics of income inequality among industrialized economies. We find that democratization, the interaction of technology and education, and changes in the relative power of labor unions affect inequality dynamics robustly.
Is income inequality rising around the world?
No general trend to higher inequality: It’s a mistake to think that inequality is rising everywhere. Over the last 25 years, inequality has gone up in many countries and has fallen in many others. Conversely, advanced industrial economies show lower levels of inequality, but rises in most, though not all, instances.
Why does inequality still exist?
Social inequality refers to disparities in the distribution of economic assets and income, as well as in the overall quality and luxury of each person’s existence within a society, while economic inequality is caused by the unequal accumulation of wealth; social inequality exists because of the lack of wealth in …
Why is inequality so high in America?
The US consistently exhibits higher rates of income inequality than most developed nations, arguably due to the nation’s relatively less regulated markets. Immigration may also play a role: relatively high levels of immigration of less-skilled workers since 1965 may have reduced wages for American-born high school dropouts.
What is the problem with income inequality?
Effects of income inequality, researchers have found, include higher rates of health and social problems, lower levels of social goods, lower population-wide satisfaction and happiness, and even a lower level of economic growth when human capital is neglected for high-end consumption.
Does inequality exist today?
Over the last 30 years, wage inequality in the United States has increased substantially, with the overall level of inequality now approaching the extreme level that prevailed prior to the Great Depression.
Which country has the highest income inequality?
GINI index (World Bank estimate) – country ranking: 1. South Africa (63.00); 2. Namibia (59.10); 3. Suriname (57.60); 4. Zambia (…)
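The GINI index cited above is a single number, expressed here on a 0–100 scale, where 0 means perfect equality and 100 means maximal inequality. As a rough illustration only (the incomes below are made up, and real World Bank estimates are built from household survey data with many corrections), here is one common way to compute a Gini coefficient from a list of incomes:

```python
# Hypothetical incomes; real GINI estimates come from household surveys.
def gini(incomes):
    """Gini coefficient via the standard sorted-income formula."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    cum = 0
    for i, x in enumerate(xs, start=1):
        cum += (2 * i - n - 1) * x
    return cum / (n * total)

sample = [12_000, 18_000, 25_000, 40_000, 105_000]
print(round(gini(sample) * 100, 1))  # scaled to 0-100 like the World Bank index
```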
How do you overcome inequality?
Three ways to overcome inequality: educate middle- and high-schoolers about the American history of segregation; sell homes in formerly restricted areas to people of color for mid-20th-century prices; and create policies that bring low-income housing into higher-income neighborhoods.
Why should we stop inequality?
Inequality drives status competition, which drives personal debt and consumerism. More equal societies promote the common good – they recycle more, spend more on foreign aid, score higher on the Global Peace Index. Business leaders in more equal countries rate international environmental agreements more highly.
How can we solve education inequality?
Possible solutions to educational inequality: access to early learning; improved K-12 schools; more family mealtimes; reinforced learning at home; data-driven instruction; longer school days and years; respect for school rules; and small-group tutoring.
How can health inequalities be reduced?
Prevention can help to reduce health inequalities. For this to happen, prevention needs to be at least as effective in groups of the population with the worst health. Cost-effective health improvement: Preventing people taking up smoking (primary prevention) avoids smoking-related illness.
Are health inequalities avoidable?
Disparities in health are avoidable to the extent that they stem from identifiable policy options exercised by governments, such as tax policy, regulation of business and labour, welfare benefits and health care funding. It follows that health inequalities are, in principle, amenable to policy interventions.
What are health inequalities examples?
Health inequalities can therefore involve differences in: health status, for example life expectancy and prevalence of health conditions; access to care, for example availability of treatments; and quality and experience of care, for example levels of patient satisfaction.
What are some inequalities in healthcare?
Causes of health care inequality include: disparities in care (low-income neighborhoods may not have nearby access to the best hospitals, doctors’ offices, and medical technology); the rising cost of health care; lack of access to health insurance; the way poor health can itself create poverty; and age.
What is the difference between health inequalities and inequities in health care?
Absent from the definition of health inequality is any moral judgment on whether observed differences are fair or just. In contrast, a health inequity, or health disparity, is a specific type of health inequality that denotes an unjust difference in health.
What are the root causes of health inequities?
Health inequity arises from root causes that could be organized in two clusters: The unequal allocation of power and resources—including goods, services, and societal attention—which manifests itself in unequal social, economic, and environmental conditions, also called the determinants of health.
What are the main determinants of health?
Health is influenced by many factors, which may generally be organized into five broad categories known as determinants of health: genetics, behavior, environmental and physical influences, medical care and social factors. These five categories are interconnected. | https://thecrucibleonscreen.com/why-has-economic-inequality-in-the-united-states-increased-since-the-late-1970s/ |
A socialist economic system that features social ownership but is based on the process of capital accumulation and the use of capital markets to allocate capital goods between socially owned enterprises falls under the subcategory of market socialism. As a matter of fact, the public sector flourishes in a planned economy. In its broadest usage, the term is tantamount to the subject of this article: the entirety of a given culture or stage of human development. For example, while America is a capitalist nation, our government still regulates, or attempts to regulate, fair trade, government programs, moral business, monopolies, etc.
American traditions support the family farm. That is, who is to enjoy the benefits of the goods and services and how is the total product to be distributed among individuals and groups in the society? A mixed economic system protects private property and allows a level of economic freedom in the use of capital, but also allows for governments to interfere in economic activities in order to achieve social aims. It depends on how they are set up. It allows the federal government to safeguard its people and its market. Fundamentally, this meant that socialism would operate under different economic dynamics than those of capitalism and the price system. Proper Protection is provided to Weaker Sections of the Society Specially Workers and Labourers: In the initial stage of Industrial Revolution, the producers or capitalists, ruthlessly exploited the working class. That promotes the innovation that's a hallmark of a market economy.
With the result that none of them fulfills the objectives of national planning and hence it is one of the major factor causing the failure of a long-term planning in India. The main responsibility of the government in this system is to ensure rapid economic growth without allowing concentration of economic power in the few hands. Private firms tend to be more efficient than government controlled firms because they have a profit incentive to cut costs and be innovative. However, usually progressive taxes and means-tested benefits to reduce inequality and provide a safety net. In consumer goods industries price mechanism is generally followed. The study of economic systems includes how these various agencies and institutions are linked to one another, how information flows between them and the social relations within the system including and the structure of management. Theoretically, it may refer to an economic system that combines one of three characteristics: public and private ownership of industry, market-based allocation with economic planning, or free markets with state interventionism.
Command economies retain their supporters. Economic actors include households, work gangs and , firms, and. Types of Mixed Economy : The mixed economy may be classified in two categories: Capitalistic Mixed Economy : In this type of economy, ownership of various factors of production remains under private control. Globalization makes it difficult to avoid. Most goods and services are privately-owned. Besides, government can also fake over these services in the public interest.
Each economy has its strengths and weaknesses, its sub-economies and tendencies, and, of course, a troubled history. Employees vie with each other for the highest-paying jobs. Economic agents with decision-making powers can enter into with one another. For example, the continue their traditional economy. Below we examine each system in turn and give ample attention to the attributes listed above. These questions have no real answer; it is subjective, and therefore only a relatively small portion of the population will, at any given time, agree with the state of a mixed economy.
However, whenever and wherever demand is necessary, government takes actions so that basic idea of economic growth is not hampered. Trade protection, subsidies, targeted tax credits, fiscal stimulus and public-private partnerships are common examples of government intervention in mixed economies. At the beginning of the 20th century, more than half of Americans lived in farming communities. Second, it allows the free market and the laws of supply and demand to determine prices. If you want to know how the global economy works and the role you play in it, check out this Market Economy And Politics: Arguably the biggest advantage to a market economy at least, outside of economic benefits is the separation of the market and the government.
There is little need for trade since they all consume and produce the same things. Both institutions play vital roles in an economy. However, there is an increasingly small population of nomadic peoples, and while their economies are certainly traditional, they often interact with other economies in order to sell, trade, barter, etc. That lowers prices to a level where only the remain. Market Economic System A market economy is very similar to a free market. Second, it rewards the most efficient producers with the highest profit. The difficulties may follow as under: 1.
There are certain elements of a traditional economy that those in more advanced economies, such as Mixed, would like to see return to prominence. Stigtitz has defined the concept in a much more simple manner. Many economists argue that advantages are best utilized by using the as a temporary solution, allowing the free to make long term. Government forces allocation through involuntary taxes, laws, restrictions, and regulations. However, this system is again sub-divided into two parts: i Liberal Socialistic Mixed Economy : Under this system, the government interferes to bring about timely changes in market forces so that the pace of rapid economic growth remains uninterrupted. The theoretical basis for market economies was developed by classical economists, such as , David Ricardo, and Jean-Baptiste Say in the late 19th and early 20th centuries.
As a result, no knowledge gap exists, and producers can respond to changing consumer demands much more efficiently. These nomadic hunter-gatherers compete with other groups for scarce. As such, an economic system is a type of. The government encourages both the sectors to develop simultaneously. Governments can pursue policies to provide macroeconomic stability, e. Pros and Cons Mixed pros and cons differ from person to person. Where should there be more government regulation? Traditional Economic System A traditional economic system is the best place to start because it is, quite literally, the most traditional and ancient type of economy in the world.
To Maintain balance between Public and Private Sector: The Government of India has made it clear that in future, the Government has no programme of nationalisation of any industry and therefore has decided to maintain a balance between the two sectors. Vast portions of the world still function under a traditional economic system. Wealth will be produced and distributed in its natural form of useful things, of objects that can serve to satisfy some human need or other. Not being produced for sale on a market, items of wealth will not acquire an exchange-value in addition to their use-value. The other measures of control over private sector which it generally uses are appropriate monetary and fiscal policies. However, they are often said to have market economies because they allow market forces to drive the vast majority of activities, typically engaging in government intervention only to the extent it is needed to provide stability. | http://spitfirephoto.com/define-mixed-economy-in-economics.html |
12. Promote Sustainable Consumption
Ensure sustainable consumption and production of goods nationwide.
12.1
Implement the 10-year framework of programs on sustainable consumption and production.
12.2
By 2030, achieve the sustainable management and efficient use of natural resources.
12.3
By 2030, halve per capita global food waste at the retail and consumer levels and reduce food losses along production and supply chains, including post-harvest losses.
12.4
By 2020, achieve the environmentally sound management of chemicals and all wastes throughout their life cycle, in accordance with agreed international frameworks, and significantly reduce their release to air, water and soil in order to minimize their adverse impacts on human health and the environment.
12.5
By 2030, substantially reduce waste generation through prevention, reduction, recycling and reuse
12.6
Encourage companies, especially large and transnational companies, to adopt sustainable practices and to integrate sustainability information into their reporting cycle.
12.7
Promote public procurement practices that are sustainable, in accordance with national policies and priorities.
12.8
By 2030, ensure that people everywhere have the relevant information and awareness for sustainable development and lifestyles in harmony with nature.
12.a
Support developing countries to strengthen their scientific and technological capacity to move towards more sustainable patterns of consumption and production.
12.b
Develop and implement tools to monitor sustainable development impacts for sustainable tourism that creates jobs and promotes local culture and products.
12.c
Rationalize inefficient fossil-fuel subsidies that encourage wasteful consumption by removing market distortions. | https://inuni.org/responsible-consumption-and-production/ |
One graduate from MESA's Bay Area Farmer Training Program (BAFTP) will be offered a two-year paid position to manage an organic, no-till vegetable experiment located on the Oxford Tract in Berkeley, California. This "agroecology trainee" will learn about experimental design, innovative farming techniques, and crop planning suitable for East Bay climatic conditions and targeted to local markets. Over the two years of these experiments, we will measure crop yields, labor requirements, and input costs (e.g. compost, cover crop seeds, and water) to evaluate the economic costs and benefits of these production methods. The agroecology trainee will also take a leading role in facilitating dialog and trainings with urban farmers in the East Bay.
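As a rough illustration of how yield, labor, and input measurements can feed into an economic comparison of production methods, here is a small partial-budget sketch. All numbers, method names, and record structures are hypothetical placeholders, not data or accounting from the Oxford Tract experiment.

```python
# Hypothetical partial-budget comparison of two production methods.
methods = {
    "no-till + compost": {
        "yield_kg": 950, "price_per_kg": 4.50,          # measured yield, assumed price
        "labor_hours": 120, "wage_per_hour": 18.00,
        "inputs": {"compost": 300.0, "cover crop seed": 80.0, "water": 150.0},
    },
    "conventional tillage": {
        "yield_kg": 900, "price_per_kg": 4.50,
        "labor_hours": 95, "wage_per_hour": 18.00,
        "inputs": {"fertilizer": 220.0, "water": 170.0},
    },
}

def net_return(m):
    revenue = m["yield_kg"] * m["price_per_kg"]
    labor_cost = m["labor_hours"] * m["wage_per_hour"]
    input_cost = sum(m["inputs"].values())
    return revenue - labor_cost - input_cost

for name, m in methods.items():
    print(f"{name}: net return ${net_return(m):,.2f}")
```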
This position is supported by the Foundation for Food and Agriculture Research (FFAR) through its Seeding Solutions grant program, which calls for bold, innovative, and potentially transformative research proposals. This grant supports the Urban Food Systems Challenge Area, which aims to enhance our ability to feed urban populations through urban and peri-urban agriculture, augmenting the capabilities of our current food system.
FFAR, a nonprofit established in the 2014 Farm Bill with bipartisan congressional support, awarded a $295,000 Seeding Solutions grant to the Berkeley Food Institute at the University of California, Berkeley (UC Berkeley) to improve the ecological resilience and economic viability of urban and peri-urban farming systems and improve urban food distribution systems to reduce waste and meet the fresh produce needs of low-income consumers. MESA partnered with researchers at UC Berkeley to reach a 1:1 match of funds.
MESA is honored to work alongside researchers at UC Berkeley through the Seeding Solutions grant to improve the sustainability and resilience of urban farms by building soil health, conserving water, and promoting beneficial insects. Researchers will also evaluate the effectiveness of existing urban and peri-urban food access and food distribution methods for meeting the food needs of urban low-income, food-insecure communities. Research will take place in the San Francisco East Bay Region of Northern California and findings will be applicable to other urban communities throughout the United States.