NIST SP 800-123 suggests the following basic steps to secure an operating system: install and patch the operating system, then harden and configure it to adequately address the identified security needs of the system by removing unnecessary services, applications, and protocols.
How can I secure my operating system?
Operating System Minimization
Remove nonessential applications to reduce possible system vulnerabilities. Restrict local services to those required for operation. Implement buffer overflow protection; you may need third-party software to do this.
What are the five steps that can be used to ensure the security of an OS?
5 steps for securing your computer
- Keep your operating system and applications updated. Set up your computer for automatic software updates to your operating system (OS). …
- Use antivirus software. …
- Install FREE WiscVPN to secure your wireless connection. …
- Protect your NetID, password and MFA-Duo credentials. …
- Use a firewall.
What is designing secure operating system?
Secure Operating System and Software Architecture builds upon the secure hardware described in the previous section, providing a secure interface between hardware and the applications (and users) which access the hardware. Kernels have two basic designs: monolithic and microkernel.
What is the aim of system security planning?
The objective of system security planning is to improve protection of information system resources. All federal systems have some level of sensitivity and require protection as part of good management practice. The protection of a system must be documented in a system security plan.
Which is the most secure operating system and why?
1. OpenBSD. By default, this is the most secure general purpose operating system out there.
What is operating system hardening?
Operating system hardening involves patching and implementing advanced security measures to secure a server’s operating system (OS). One of the best ways to achieve a hardened state for the operating system is to have updates, patches, and service packs installed automatically.
What is the first step in securing an operating system?
Securing an operating system initially would generally include the following steps: Patch and upgrade the operating system, Remove or disable unnecessary services and applications, Configure operating system user authentication, Configure resource controls, Install and configure additional security controls, Perform …
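On a typical Debian-family Linux host, the first of those steps might be sketched as follows (a rough illustration only; the service shown is an example, and package and service names vary by distribution):

```shell
# Patch and upgrade the operating system.
sudo apt-get update && sudo apt-get -y upgrade

# Review enabled services, then remove or disable unnecessary ones.
systemctl list-unit-files --state=enabled
sudo systemctl disable --now cups.service   # example: disable printing if unused

# Keep security patches applied automatically going forward.
sudo apt-get install -y unattended-upgrades
```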
What are the main steps in virus protection?
Basic steps to protect yourself from viruses
- Use your antivirus correctly and make sure to update it regularly.
- Install a reliable firewall, important if you are outside of ITQB. …
- Make regular back-up copies of your system files.
- Update software applications with manufacturers patches.
Which of the following is an example of a trusted OS?
Examples of certified trusted operating systems are: Apple Mac OS X 10.6 (Rated EAL 3+) HP-UX 11i v3 (Rated EAL 4+) Some Linux distributions (Rated up to EAL 4+)
What is security planning procedures?
A security plan is a documented, systematic set of policies and procedures to achieve security goals that protect BSAT from theft, loss, or release. Plans may also include agreements or arrangements with extra-entity organizations, such as local law enforcement.
What is security planning process?
Security planning considers how security risk management practices are designed, implemented, monitored, reviewed and continually improved. Entities must develop a security plan that sets out how they will manage their security risks and how security aligns with their priorities and objectives.
What are the four objectives of planning for security?
The Four Objectives of Security: Confidentiality, Integrity, Availability, and Nonrepudiation.
Paul Fieldhouse notes that “patterns of meal-taking vary widely and are a part of cultural learning;” while he says this in regard to the practice of learning the signs and practices of a participant's own culture, it should be noted that outside observers can also come to understand a culture through the meal (63). Indeed, plenty of anthropological research has been performed connecting food to practices of social communication and to its function in psychological, emotional, and nutritional needs. There is, however, a lack of research on how the event of eating itself is a sphere of protected and idealized interests and ideas of a community. While food ideology and culture take into account the methods of eating, agriculture, preparation, class, and so on that determine biological and social stimuli, this paper will focus solely on the act of consumption, the meal, and its use as a keyword within cultural studies. The meal structure itself is demonstrative of the cultural practices and ideas that a society has, both conscious and unconscious. E.N. Anderson states that food is “a way of showing the world many things about the eater,” and thus “[takes] on a role second only to language as a form of communication” (124). Consider the round social table setting, one of equality, versus a head-of-table arrangement; manners utilized in order to create social classes; how the meal in its many forms communicates the values held by different socio-economic and racial groups. This paper will dissect the keyword “meal” in cultural studies and its role as a communicator of social ideas, dogmas, and realities, as related through the eating event, and its importance as an approach to understanding various communities.
My breakdown of the meal will consist of analyzing five aspects: food (the actual food in place and its importance in cultural ideology), manners (the symbolic rules of order and civilization), placement (the arrangement of the participants and the space the meal will take place in), group (the participants of the meal), and dialogue (what discussion takes place between the participants). I will then analyze these five aspects of the meal and how they relate to cultural studies, discussing how they are influenced by cultural standards to create various types of meals. The primary focus will be how the various concepts of what a meal is (the communal meal, the solitary meal, the absence of the meal, for example) work as communicators of heritage and cultural identity. Furthermore, I plan to ascertain how some types of meals have replaced others and in doing so have created a loss of cultural identity and of the passing on of traditions in action and thought; indeed, in some cases a descent into a negative cultural sphere encouraging the rejection of the meal as a societal norm. My research will consist of reading into cultural philosophy, theory of food ideology, and the anthropology and sociology of food, with particular focus on the theories of Paul Fieldhouse, E.N. Anderson, and Carlo Petrini.
To say it differently, how does the meal convey ideas of culture, heritage, and identity? Indeed, in America we seem to glorify the meal-on-the-go mentality. Eating in the car or in the office on your own as you pound out a report is praised! You are an example of the pull-up-your-bootstraps, economically driven, succeed-at-all-costs, capitalist mentality: working hard to make it to the top! Or does the absence of a meal define our culture? Sure, Food Network has created a new wave of foodie culture, but Vogue is still kicking its ass as a whole. Not eating; the absence of the meal? Is that a signifier of the dominating culture? Beauty versus Nutrition. Of course, this is assuming the absence of a meal isn't due to poverty, in which case the meal may be demonstrative of a shoddy welfare system, or of a societal idea that the poor should be ignored, that they have only themselves to blame, and that if they would just get with the bootstrap program mentioned before then they could eat little (but still more) alone in an office and be rich, successful, and powerful. And then there is the family meal, the business meal, and so on.
I say each of these is made up of the dissected parts: manners, placement/location, food, dialogue, and participants. This could be subdivided further, I'm sure, but let's stay as basic as possible for the purpose of brevity and my personal sanity.
Anyways, I'm going to be using the blog as a space to jot down some additional thoughts for the next 20 months or so. I am also putting it out there for people to weigh in: Do you have other ideas? Resources I should look into? Books, articles, websites?
Ah, academia. My only hope is I can parlay all of this into a book and become the next Paul Fieldhouse or Michael Pollan.
*And no, this is not my graduate thesis, this is for an 8 page paper I'm working on but plan to try and get published somewhere.
10 comments:
- Anonymous, March 12, 2009 at 10:07 AM
I love this idea... I can't wait to see how you work up 'The Church Potluck'---that could be a thesis unto itself.
- Anonymous, March 12, 2009 at 1:24 PM
Might consider the fact that the nation as a whole is grossly overweight... absence of a meal? Not very often. Or is it WHAT we eat (consider Supersize Me)? Good luck, whichever way it goes.
- Anonymous, March 12, 2009 at 4:42 PM
@anonymous but i think that the overweight demographic has a much different meal culture than the underweight. it is what we eat, but i think how we serve, time, set and value a meal has tremendous influence. communal eating likely breeds healthier eating than solitary.
You wrote this at 1:00am? I can't understand it at 9:00pm! So basically it boils down to pretty much you are what you eat, or how you eat.
Have you read:
The Rituals of Dinner (book disappeared so don't remember the writer), and Much Depends on Dinner by Margaret Visser, 1986; books by Sidney Mintz might be interesting; and try for an English translation of Brillat-Savarin, and Claude Levi-Strauss, if you haven't got those. The French know a lot about these things. If some more pops up in my mind I'll post again. Interesting stuff you're producing!
This is a fascinating concept.
I wonder how our table arrangement fits in; we have a little rectangular table that is a hand-me-down, and seating tends to rotate around the table depending on where the kids feel like sitting or where their favorite (of our many mismatched) chair is. I'm not sure there is a pattern for cultural mores there.
Are you going to include the push for family dinners on the radio ads? I think it's interesting that studies show that eating a meal together can affect a child's choice to use drugs or start other destructive behaviors.
This will be fun to watch!
This looks to be a promising analysis of food and culture.
I look forward to the results of your research...
An excellent book on the subject if you are interested in the connection between food and celebration: "In Tune with the World: A Theory of Festivity" by Josef Pieper.
Good luck!
Further reading suggestion: the Economist (online version) has an article on how cooking food has influenced the evolution of mankind. Fascinating, as surely your paper will be too!
Just wanted to say I just found your blog! Love it... I have fun exploring it!
you might also want to look at "Food and Culture" by Pamela Goyan Kittler and Kathryn P. Sucher. It's a great textbook-style book where they talk about different foods and cultures of various countries... and how they adapted to the United States. what customs were kept, what was left out, etc. it really does a nice job describing how religion, family, region, health beliefs, etc. all tie into food. good luck!
Global investment manager Schroders and Singapore’s sovereign wealth fund, GIC, announced today the publication of “A Framework for Avoided Emissions Analysis,” a new research paper outlining a holistic approach for investors to identify and assess emissions reduction investment opportunities beyond the confines of companies’ value chain emissions.
According to the paper, while climate change is emerging as a defining investment theme for the coming decades, with major opportunities arising from the pursuit of companies’ and governments’ decarbonization commitments, most conventional emissions reduction measures used by investors focus primarily on Scope 1, 2, and 3 emissions, without considering companies’ broader emissions impact.
The new framework launched by Schroders and GIC aims to enable investors to measure and integrate Avoided Emissions into investment and portfolio analysis, capturing a broader understanding of a company’s contribution to emissions reduction, such as a company’s development of solutions to drive economy-wide reductions and substitute high-carbon activities with low-carbon alternatives.
Rachel Teo, co-author and Head of Futures Unit and Senior Vice President, Economics & Investment Strategy at GIC, said:
“Building robust tools and models to integrate climate-related risks and opportunities into investment processes is a key focus of our climate research work at GIC. The Avoided Emissions framework enables long-term investors like us to better identify and potentially align our portfolio with the opportunities presented by the low carbon transition.”
According to Schroders and GIC, the framework utilizes a systematic value chain approach to capture the contribution of industries to avoided emissions, focusing on investability and scalability. For the paper, the companies examined 19 carbon-avoiding activities and industries, quantifying emissions savings per dollar of revenue.
Andy Howard, lead author and Global Head of Sustainable Investments, Schroders, said:
”The framework is based on a proprietary systematic value chain approach to capture the contribution of a broad set of industries and activities to Avoided Emissions. We believe that this innovative framework, with its emphasis on investability and scalability, presents a significant advancement from common approaches to carbon footprint and exposure analysis. It underpins our longstanding focus on and work in understanding the investment implications from the low carbon transition.”
Applying the framework to a focused portfolio of companies identified as accelerating the low carbon transition and comparing to the MSCI ACWI Investable Market Index (IMI) stock universe, the analysis found that the Avoided Emissions portfolio saw significantly stronger revenue growth relative to the broader index, despite having undifferentiated Scope 1-3 emissions reductions, highlighting the opportunity identified by the more holistic approach.
Kevin Bong, Director, Economics & Investment Strategy, GIC, commented:
“Investors need to fully consider the causes and effects of climate change on our portfolios, and prepare and participate in the multi-decade carbon transition that will likely entail a rewiring of the modern economy.
“Avoided Emissions introduce a new and important dimension to a growing set of metrics that investors and policymakers need to make better decisions.”
Click here to access the research paper.
The post Schroders, GIC Launch Framework to Identify Decarbonization Investment Opportunities Overlooked by Current Methods appeared first on ESG Today.
Effect size is "a measure of the strength of a phenomenon or a sample-based estimate of that quantity" [Wikipedia].
Interpretation of two odds ratios derived from two different cohen's ds, comparability for meta-analysis?
Why is my p-value correlated to difference between means in two sample tests?
Can eta squared be used for comparing effect size of a categorical (>2 categories) and continuous variable?
Is there a formula to convert t-value from a regression parameter to an effect size (d)?
I've seen formulas to convert independent or dependent samples t-tests into effect sizes, but what about converting the t-value from a regression parameter to an effect size?
Possible to extend Gelman & Carlin's “Beyond Power Calculations” when significance not obtained?
How can you meta-analyse continuous and binary repeated measures outcome data?
Why is information gain rarely used as a reported effect size?
Why don't thresholds of effect sizes between $r$ and $R^2$ or $f$ and $f^2$ directly correspond?
What is the variance of the standardized mean difference of two independent studies?
Can the r-scale value (for Bayes Factor) be directly based on Cohen's d?
Are effect sizes calculated for prospective/ retrospective cohort studies that look at odds ratios?
Could someone please explain the difference between standardized mean difference and adjusted mean difference? I want a common rubric for reporting effect size.
How to determine sample size for Chi-squared test?
How to calculate the effect size for a t-test?
How do I get a measure of effect size for each term in a multilevel model?
I have an assignment (doing regression) and one of the tasks is "Check the magnitude of the effects using confidence intervals.". How can I check this and how to report it in a paper? Using SPSS.
How am I able to compute the effect size and variance for the given data?
Is there a commonly accepted effect size parameter for pairs of Bernoulli processes?
Is it possible to determine significance with a defined alpha (.05) from an effect size estimate with a 95% CI?
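Several of the questions above concern converting a t statistic into a standardized effect size. A minimal sketch of the standard textbook conversions (function names are my own; the d formulas assume an independent-samples design with df = n1 + n2 - 2):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d from raw group summaries, using the pooled SD."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def t_to_d(t, n1, n2):
    """Cohen's d recovered from an independent-samples t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def t_to_r(t, df):
    """Partial correlation r from any t statistic (e.g. a regression coefficient's t)."""
    return t / math.sqrt(t**2 + df)
```

For a regression parameter, a common (if approximate) route is t to r via `t_to_r`, then, if a d-type effect size is required, d = 2r / sqrt(1 - r^2).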
I'm a novice, been playing for almost a year. Since I discovered sympathetic vibration, I've relied on it to check my intonation with a great success and some nice compliments from my teacher on my intonation improvement.
Now, I've been using Warchal Timbre for a few months now. And something about these strings make the sympathetic vibration really noticeable, really loud and easy to hear, which I like a lot.
What I am asking is: what characteristic of strings (low/high tension? material?) can bring out more sympathetic vibration? Can you recommend a set of strings for loud/noticeable sympathetic vibration?
While some strings may be more responsive to the generation of sympathetic vibration, the process has more to do with your bowing and the vibrational characteristics of your instrument than with the strings alone.
I noticed a huge change a long time ago when I spent way too much money on my instrument having the plates tuned. After that was done the instrument was much more resonant and responsive and the sympathetic vibrations increased.
Looking back, it would have cost a whole lot less money to simply find a violin that was more resonant from the start. However, I have to say it was interesting albeit expensive to upgrade the "Family Fiddle" which came from my wife's great-grandfather.
Good strings will resonate - I prefer Dominants, but they all will ring when you hit the notes right and the instrument vibrates just right.
Edited: November 8, 2018, 5:52 PM · "Sympathetic resonance or sympathetic vibration is a harmonic phenomenon wherein a formerly passive string or vibratory body responds to external vibrations to which it has a harmonic likeness. The classic example is demonstrated with two similar tuning-forks of which one is mounted on a wooden box."
In other words, an instrument present in the same room and tuned to the same pitch will resonate if you play the same frequency as one of its open strings. That is why some dealers casually place 6 cellos in the room where you try a violin 20% above your budget.
So, if the violin cannot have purely sympathetic resonance with itself, except for the 4 open strings (a viola d'amore can), your question boils down to: louder, meaning more power = more air pushed toward the listener.
Edited: November 8, 2018, 6:17 PM · I think George has it when he says to work on your tone production. I know that when my tone gets rough, my intonation suffers, and when I spent a lot of time with one of my teachers working on my bowing technique, my intonation improved dramatically.
There's a sort of madness that descends upon many beginners who want to immediately sound better than they do, and they begin to ask about changing their instrument setup. "My violin/bow/strings sound harsh/nasal/trebly, what should I replace them with?" The answer, I think in almost every case, is to leave the fiddle/bow/strings alone and practice more to get better. To be patient and improve and keep working on technique, that is. You probably don't need better (whatever that means, given your particular instrument) strings, you probably need better technique, which will come in time.
Yes, a better instrument will make a better sound, but a year in, you should save your money and work on your bowing. That is not the answer most people want, I admit.
November 8, 2018, 6:48 PM · Treat the strings real nice. Buy them gifts and tell funny stories. That's sure to do the trick.
November 8, 2018, 8:38 PM · @Scott: that is totally alright. I'm not asking for string recommendations so that I sound better, though. I'm simply curious what characteristic of strings would bring out more sympathetic vibrations. Is that tension, gauge, material? Of course, bad intonation will not bring out any sympathetic vibration no matter how fancy your strings are.
November 8, 2018, 9:41 PM · This doesn't sound like the desire for sympathetic vibrations, but rather the desire for more resonance and overtones.
I find gut to be unmatched in overtones, personally.
November 9, 2018, 6:35 AM · Daniel, I am also a novice, I've also been using Warchal Timbres recently and I'm also as curious as you about equipment tweaks. My ears still need much training but the overtones generated with Timbres were an eye opener, even on my intermediate level violin (a Sima Traian.) I thought I was going to be an Obligato addict forever but happy that my curiosity got the better of me. I have a set of Warchal Brilliant Vintage on deck to see what a relatively lower tension string from the same manufacturer sounds like. Now time to get back to bowing practice for me!
November 9, 2018, 6:43 AM · I find sympathetic resonance a bit annoying.
On a guitar it's transient and adds colour. On a violin it just sounds like ringing. I think I'm using Tonicas - they came ready supplied.
It's useful for intonating some notes, but I wonder about those higher overtones such as the "major third" and the "minor 7th". They are both flat and not true notes.
November 9, 2018, 11:01 AM · Daniel, yes I clearly misread your post, sorry. I think Lydia might be right that you're looking for overtones more than sympathetic vibrations. | https://www.violinist.com/discussion/thread.cfm?page=2224 |
Precision Livestock Project welcomes farmers and further education students to IBERS.
Farmers and further education students from across Wales have visited IBERS (Institute of Biological, Environmental and Rural Sciences) to learn about ongoing precision agriculture technology research at Coleg Cambria Llysfasi and Aberystwyth University.
These events were organised as part of the ‘PreciseAg’ (Precision livestock farming for a sustainable Welsh agricultural industry) project which is run by IBERS and Cambria with funding from HEFCW and support from Farming Connect.
The project is undertaking essential, early stage development, of technologies and models which will aid livestock production throughout the UK and beyond.
The Precise Ag initiative began in 2018 and is focusing on factors affecting on-farm productivity. These include, improving parasite control strategies; developing technologies to assist in early prediction of lambing and the risk of associated complications; using tools to assess individual dairy cow feed intake; and using sensors to assess health in young dairy calves.
Open days were hosted by the project at Aberystwyth University’s Gogerddan Farm and Llysfasi farm to demonstrate and discuss how Precision Agriculture Technology can aid farmers to manage their farms and improve animal husbandry and welfare. The day included presentations from IBERS research staff who discussed different aspects of the PreciseAg project and the developments happening on University Farms.
Land-based further education college students from across Wales were also welcomed to IBERS for a two-day Precision Livestock Farming taster course. Students from Coleg Cambria Llysfasi, Coleg Meirion-Dwyfor Glynllifon, Coleg Gwent, Newtown College, and Coleg Sir Gar attended.
During the event, students got a chance to learn about Precision Livestock Farming research, including demonstrations of precision agriculture technology such as using accelerometers and GPS to monitor and manage livestock health and were given a chance to process and analyse data from these technologies. Students also got a taste of laboratory molecular work aiming to map Liver Fluke infection risk areas in fields, and had a chance to tour the University Farms with particular focus on recent investments in precision livestock technologies.
Dr Hefin Williams, who is the Agriculture degree scheme coordinator at IBERS, and leads the PreciseAg team said: "These events were a crucial opportunity for us to share some of the findings from our Precision Livestock Agriculture project to the farming industry. We believe that Precision Technologies will play a crucial role in allowing agriculture to adapt to many challenges in the future and we are working hard to make sure that farmers are aware of the technologies’ capabilities."
Iain Clarke, Head of Llysfasi, said: "Our partnership with Aberystwyth University has been a joined up approach in delivering scientific research and science onto farm, a crucial component supporting the Agricultural industry. Our role as a college is to engage with key projects and partnerships to ensure that our students and the Agricultural industry in Wales and further afield are benefiting from the latest precision technologies, expertise and adaption of science to ensure they are equipped for the many challenges facing the future."
Audited Financials – The highest level of assurance for the financial statements. The CPA conducts the audit in accordance with generally accepted auditing standards and provides reasonable assurance that the financial statements are not materially misstated. An audit also includes examining and testing evidence that supports the amounts and disclosures in the financial statements; assessing the accounting principles used and significant estimates made by the association; and evaluating the overall financial statement presentation.
Reviewed Financials – A review consists principally of inquiries of the association’s management and the application of analytical procedures to the financial data obtained from the business. The purpose of a review is to obtain a basis for expressing limited assurance that no material modifications are necessary in order for the financial statements to be in conformity with generally accepted accounting principles (GAAP). The limited procedures applied in a review make the report somewhat more reliable than a compilation, but substantially less than an audit.
Compiled Financials – A compilation involves placing financial data obtained from the association into a financial statement format. The CPA does not express an opinion or any other form of assurance on the financial statements.
Anxiety disorders are common comorbidities associated with traumatic brain injury (TBI) [1-3]. As defined by the Centers for Disease Control and Prevention, anxiety disorders have an aggregate 12-month prevalence of 10% among the general population. Specified anxiety disorders such as generalized anxiety, panic, and post-traumatic stress disorders have 12-month prevalence rates of 0.9%, 2-3%, and 3.5%, respectively.
In contrast, the aggregate incidence of anxiety disorders has been reported to be up to seven times greater among TBI survivors. One meta-analysis revealed a prevalence of anxiety disorders ranging from 11-70% in TBI survivors. Similarly, a study reported that 38% of 100 TBI patients met DSM-IV criteria for specified anxiety disorders between 6 months and 5.5 years post-injury, with generalized anxiety disorder, panic disorder, and post-traumatic stress disorder occurring at rates of 17%, 6%, and 14%, respectively. Additional evidence demonstrated prevalence rates for generalized anxiety disorder, panic disorder, and post-traumatic stress disorder of 9.1%, 9.2%, and 14.1%, while others reported that 23.8% (19/80) of TBI outpatients met DSM-IV diagnostic criteria for generalized anxiety disorder. TBI patients with specified or unspecified anxiety often report symptoms including persistent worry, tension, and fearfulness. The nature of anxiety symptoms presents significant barriers to participation in neurological rehabilitation and overall recovery following TBI. Given the prevalence of anxiety disorders in this population, research is needed to better understand the impact of this concomitant condition on the course of recovery following a TBI.
The study objectives included: (1) describing prevalence and severity of anxiety among a group of chronically injured TBI adults, (2) examining the impact of anxiety on functional outcomes achieved in post-hospital residential rehabilitation, and (3) evaluating the effectiveness of post-hospital residential rehabilitation programs for improving anxiety symptoms following TBI.
The study sample was selected retrospectively from 1,385 neurologically impaired individuals consecutively discharged from 32 post-hospital residential rehabilitation programs in 15 states from 2011 to 2016. From this population, a sample of 950 individuals met the study inclusion criteria: diagnosed with a traumatic brain injury, 18 years of age or older, and admitted to and discharged from active residential neurorehabilitation. The extent and nature of their disability prevented these participants from living independently. Due to the chronicity of this population, Glasgow Coma Scale (GCS) scores at the scene of injury or upon admission to the trauma center were largely unavailable for review. Therefore, the severity of disability for each participant was determined upon admission to the program by assessing the level of impairment with MPAI-4 Total T-scores. This score provides an indication of disability severity relative to a reference group of neurologically impaired persons.
The mean length of stay for the entire sample was 5.5 months. The mean chronicity of injury (i.e., onset of injury to admission) was 25.91 months. The mean age of the total sample was 41.8 years. Detailed demographic characteristics of the sample, including MPAI-4 Total T-scores at admission, are presented in Table 1.
Table 1. Sample demographics and injury related variables (n=950).
The MPAI-4 anxiety rating at admission was used to assign participants to one of four groups: severe anxiety, 17% (n=164); moderate anxiety, 28% (n=269); mild anxiety, 25% (n=234); and no anxiety, 30% (n=283). The anxiety scale is presented in Table 2, and the characteristics of participants by group are presented in Table 3.
Table 2. MPAI-4 Anxiety rating scale.
Description: Tense, nervous, fearful, phobic; symptoms of post-traumatic stress disorder such as nightmares or flashbacks of stressful events.
1 Mild: no interference with activities. May use medication.
Infrequent or mild symptoms of tension or anxiety, but these do not interfere with activities and usually do not require further evaluation or treatment. Symptoms do not create a significant disruption in interpersonal or other activities and may appear to be appropriate reactions to significant life stress.
2 Mild Problem: interferes with activities 5-24% of the time.
Mild anxiety that interferes with some but not the majority of activities. At this level, individuals usually appropriately receive a psychiatric diagnosis, such as Adjustment Disorder with Anxiety, PTSD, Anxiety Disorder NOS, or a specific phobia. At this level, anxiety most often interferes only with social or interpersonal activities.
3 Moderate Problem: interferes with activities 25-75% of the time.
Anxiety is sufficiently severe to interfere with many activities, including vocational activities. As at level 2, individuals at this level usually appropriately receive a psychiatric diagnosis.
4 Severe Problem: interferes with activities more than 75% of the time.
Anxiety is disabling. Examples at this severe level are provided by those who are unable to work or attend school because of anxiety or unable to leave the house because of severe agoraphobia.
Table 3. Demographics for Anxiety Groups.
Group differences in length of stay: F(3,946) = 0.08, p = .97, n.s.
Group differences in time since injury: F(3,946) = 0.79, p = .50, n.s.
Group differences in age: F(3,946) = 0.52, p = .67, n.s.
One-way ANOVAs were conducted to determine whether the anxiety groups differed on age, length of stay in the program, and onset-of-injury-to-admission interval. These analyses revealed no significant differences among groups on these factors (Table 3).
The Mayo-Portland Adaptability Inventory, Version 4 (MPAI-4) provides a useful tool for understanding the impact of anxiety on functional outcome following a TBI. The MPAI-4 comprises 29 items assessing the cognitive, physical, and behavioral sequelae of neurologic injury. Raw scores are converted to T-scores within three subscales: Abilities (physical, communication, and cognitive skills), Adjustment (emotional and neurobehavioral skills), and Participation (initiation of activities, social contact, leisure skills, basic and advanced activities of daily living, home skills, paid and unpaid productive activity, and money management). As such, Participation provides a good measure of performance (i.e., applied skill use) and of the final outcome aim, namely societal participation. The dependent measure used to evaluate functional outcome was the MPAI-4 Participation Index score. Participation scores were converted to T-scores to compare outcomes among the four study groups. Higher scores indicate greater disability (inverse interpretation).
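As an aside, the T-score metric used throughout can be illustrated with the generic T-score formula (a sketch only: the MPAI-4 manual uses its own normative conversion tables, and the reference mean and SD below are invented for illustration):

```python
def t_score(raw, ref_mean, ref_sd):
    """Convert a raw score to a T-score (mean 50, SD 10) relative to a reference group.

    Illustrative only: the MPAI-4 publishes normative conversion tables rather than
    applying this formula to arbitrary reference statistics.
    """
    return 50.0 + 10.0 * (raw - ref_mean) / ref_sd

# A raw score at the reference-group mean maps to T = 50; higher T-scores
# indicate greater disability under the MPAI-4's inverse interpretation.
print(t_score(30, 30, 8))  # 50.0
print(t_score(38, 30, 8))  # 60.0
```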
The MPAI-4 is widely used in post-hospital rehabilitation and has demonstrated reliability as well as concurrent and predictive validity. Rasch analysis of the MPAI-4 has revealed strong item reliability, demonstrating the independence of the MPAI-4 anxiety scale from other MPAI-4 measures. Given this strong internal reliability, the MPAI-4 anxiety scale provides a useful means of measuring the impact of anxiety symptoms on neurologic rehabilitation outcomes following TBI.
Post-hospital residential rehabilitation programs are designed to teach generalization of functional skills for return home and greater independence. Admission to these programs commonly occurs 6-12 months or more after injury onset. Each participant was admitted to a residential neurorehabilitation program and received physical therapy, occupational therapy, speech therapy, recreation, counseling (based on need), neuropsychological consultation, case management, and medical management provided by nursing staff and physicians specializing in physical medicine and rehabilitation. Psychiatric and/or neurologic consultation was also provided based on the individual needs of participants.
Participants were evaluated upon admission by each program's multidisciplinary treatment team members. Once individual discipline assessments were completed, each participant was evaluated within two weeks of admission using the MPAI-4 by treatment team consensus. Discharge MPAI-4s were completed in a similar fashion by the treatment team within the final week of the participant's stay. The results of all evaluations were compiled into a national database and combined with participant demographic data. To reduce team scoring bias (i.e., to maintain reliability), monthly training with the MPAI-4 was provided by experts external to the treating team. To ensure construct validity and item reliability, separate Rasch analyses were conducted on admission and discharge MPAI-4s.
Rasch analysis was conducted to determine the reliability and construct validity of the MPAI-4 as a measure of disability following brain injury. A repeated-measures multivariate analysis of covariance (RM MANCOVA) was performed to evaluate change scores on the Abilities, Adjustment, and Participation Indices from admission to discharge. Conventional post-hoc analyses were performed to further clarify results for each of the study groups (i.e., levels of anxiety) at admission and discharge and the significance of improvement from admission to discharge. Analyses were performed using SPSS version 22. Rasch analysis was completed using Winsteps version 3.81.
The Rasch model compares the expected with the actual values of an item. More specifically, this analysis has been used to demonstrate two important concepts with measures such as the MPAI-4: item fit and person fit. According to prior research, this analysis "has been used to evaluate how items contributing to a measure represent the underlying construct, and how well the items provide a range of indicators that reliably differentiate among people rated with the measure". In the current study, Rasch Infit and Outfit statistics were used to demonstrate that each item makes a unique contribution to the level of disability for individuals evaluated in the post-hospital setting. Infit values nearest 1.0 indicate minimal distortion, and values between 0.5 and 1.5 are considered productive for measurement use. Other key measures are Person and Item Reliability and Person and Item Separation. Person Reliability indicates how well a measure's items distinguish among individuals (i.e., those possessing a lot or a little of the measured construct). Item Reliability refers to whether test items relate to each other in a consistent way in describing a disparate group of individuals. A coefficient of .80 or greater is considered acceptable for Person Reliability, while a coefficient of at least .90 is optimal for Item Reliability. Separation values indicate "the extent to which items distinguish among people (Person Separation) and are distinct from each other (Item Separation)" (p. 483). According to that research, a separation value of 2.00 or greater indicates acceptable statistical distinction.
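The fit thresholds quoted above can be expressed as a simple screening rule (a sketch; the 0.5-1.5 "productive" range and the 2.00 separation cutoff are the values cited in the text, not universal constants):

```python
def item_fit_ok(infit_mnsq, outfit_mnsq):
    """True if both mean-square fit statistics fall in the productive 0.5-1.5 range."""
    return all(0.5 <= v <= 1.5 for v in (infit_mnsq, outfit_mnsq))

def separation_ok(separation):
    """True if a person/item separation value meets the 2.00 cutoff for distinction."""
    return separation >= 2.0

print(item_fit_ok(0.98, 1.10))  # True: close to 1.0, minimal distortion
print(item_fit_ok(1.72, 1.05))  # False: infit suggests noise; inspect the item
```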
The results of the Rasch analyses demonstrate that the anxiety item is in fact a unique contributor to overall disability and reliably differentiates persons with different levels of anxiety, which in turn affect rehabilitation outcome. These findings are shown in Table 4, which presents the anxiety item along with additional MPAI-4 items describing symptoms associated with anxiety in the Diagnostic and Statistical Manual of Mental Disorders.
Table 4. Rasch indicators of reliability and separation for MPAI-4 Admission and Discharge, and select items.
The results clearly distinguished the anxiety item from other items within the MPAI-4 based on Infit and Outfit values at admission and discharge. In addition, item and person reliability, along with person separation and internal consistency, reached acceptable levels comparable to the MPAI-4 analysis conducted previously. Based on these Rasch findings, the anxiety item provides a unique contribution to the measure of disability and distinguishes between those with and without anxiety that has a functional impact on rehabilitation outcome.
The first goal of the research was to demonstrate the prevalence of anxiety within this post-hospital traumatic brain injury group. As shown in Table 3, 164 (17%) of the study participants were rated as severely anxious, meaning that anxiety symptoms precluded meaningful participation in routine productive activities. Another 269 participants (28%) were rated as moderately anxious; for these participants, anxiety was assessed to interfere with performance of everyday activities 25-75% of the time. Taken together, 45% of participants experienced anxiety sufficiently severe to merit a psychiatric diagnosis. Another 234 (25%) exhibited mild anxiety symptoms interfering with activities up to 24% of the time. Only 283 (30%) of the study population were considered free of debilitating anxiety symptoms.
To better understand how the anxiety groups differed at admission with regard to disability, a between-groups one-way ANOVA was conducted on admission MPAI-4 Participation T-scores (the performance measure). This analysis revealed a significant main effect of anxiety group, F(3,946) = 31.31, p < .0005. The results of post-hoc LSD paired comparisons are presented in Table 5.
Table 5. Post-hoc LSD paired comparison mean differences on MPAI-4 Admission Participation T-Scores by anxiety groups.
These findings show that the severely anxious group differed significantly from each of the other three anxiety groups, p < .001. The moderately anxious group differed from the no-anxiety group, p < .001, and from the mildly anxious group, p < .01. The mildly anxious group did not differ significantly from the no-anxiety group. The impact of anxiety on performance at admission is shown in Figure 1.
Figure 1. Impact of Anxiety on performance at Admission.
Note: Higher scores indicate greater disability at the time of admission; T-score of 50 = moderate impairment.
Figure 1 illustrates that as anxiety level increases there is a concomitant increase in Participation T-scores, showing how anxiety magnifies disability at admission and presents challenges to recovery.
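An admission analysis of this form can be reproduced with standard statistical tools. The sketch below runs a one-way ANOVA on synthetic Participation T-scores for four hypothetical anxiety groups (group sizes match the study, but the scores themselves are simulated for illustration, not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated admission Participation T-scores; group means loosely echo the
# reported pattern (higher anxiety level -> higher T-score -> greater disability).
groups = {
    "none":     rng.normal(50, 8, 283),
    "mild":     rng.normal(51, 8, 234),
    "moderate": rng.normal(54, 8, 269),
    "severe":   rng.normal(58, 8, 164),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F(3, 946) = {f_stat:.2f}, p = {p_value:.3g}")
```

With group means this far apart relative to the spread, the synthetic F statistic is large and the p-value far below .05, mirroring the shape (not the exact values) of the reported result.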
A repeated-measures (2x4) mixed-design ANOVA was performed to examine mean differences among anxiety groups from admission to discharge on MPAI-4 Participation T-scores. This analysis revealed a significant main effect of pre-post testing, Pillai's Trace = .49, F(1,946) = 892, p < .0005, partial eta² = .49. Each of the anxiety groups showed significant functional improvement from admission to discharge as measured by Participation T-scores. Table 6 presents the means and standard deviations for these scores at admission and discharge for the four anxiety groups.
Table 6. Means and Standard Deviations for MPAI-4 Participation T-scores at Admission and Discharge by Anxiety group.
This analysis also revealed a significant effect of anxiety group, F(3,946) = 27.19, p < .0005, partial eta² = .08. Given this significant effect, post-hoc LSD comparisons were conducted on all possible pairwise contrasts. The severe anxiety group differed significantly from each of the other three groups. The moderate group differed significantly from the no-anxiety group but not from the mild group. The mild group differed significantly from the no-anxiety group. Table 7 presents the mean differences and significance levels for each group comparison.
Table 7. LSD pairwise comparisons for mean differences between anxiety groups on MPAI-4 Participation T-scores, admission to discharge.
Taken together, these findings demonstrate the adverse impact that anxiety has on functional outcome but also reveal that even those in the most severely anxious group achieved significant reduction in disability and improved functional performance. Admission to discharge improvement by group is illustrated in Figure 2.
Figure 2. Admission to discharge Participation T-scores by anxiety group*.
Note: Higher scores indicate greater disability; T-score of 50 = moderate impairment.
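The partial eta-squared effect sizes reported above can be recovered from the F statistics and their degrees of freedom via the standard identity partial η² = (F × df1) / (F × df1 + df2). A short check (using the values reported in the text; small discrepancies would only reflect rounding):

```python
def partial_eta_squared(f_stat, df1, df2):
    """Partial eta-squared computed from an F statistic and its degrees of freedom."""
    return (f_stat * df1) / (f_stat * df1 + df2)

# Pre-post effect reported in the text: F(1, 946) = 892, partial eta^2 = .49
print(round(partial_eta_squared(892, 1, 946), 2))    # 0.49

# Anxiety-group effect reported in the text: F(3, 946) = 27.19, partial eta^2 = .08
print(round(partial_eta_squared(27.19, 3, 946), 2))  # 0.08
```

Both reported effect sizes are reproduced exactly, which is a useful internal-consistency check on the statistics as printed.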
The third objective of this study was to determine the effectiveness of post-hospital brain injury rehabilitation in reducing symptoms of anxiety as part of the therapeutic milieu. An examination of MPAI-4 anxiety ratings at admission and discharge for the 164 subjects in the severely anxious group revealed that ratings declined from 4.0 (severe anxiety) to an average of 2.7 (mildly to moderately anxious). A paired-samples t-test revealed this difference to be statistically significant, t(163) = 13.05, p < .0005. Of the 164 individuals rated as severely anxious, 120 (73%) improved: 59 (49% of those who improved) improved one level to moderately anxious, 38 (32%) improved two levels to mildly anxious, and 23 (19%) improved three levels to not anxious.
Similarly, a paired-samples t-test conducted on the moderately anxious group showed a significant reduction in anxiety ratings from 3.0 at admission to a mean of 2.07 at discharge, t(268) = 16.89, p < .0005. Of the 269 individuals rated as moderately anxious, 102 (38%) improved one level to mildly anxious and 55 (20%) improved two levels to not anxious, while 8 (3%) worsened to the severely anxious category.
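Paired comparisons of this kind can be sketched as follows (the admission and discharge ratings below are simulated around the reported means, so the exact statistics will differ from the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 164  # size of the severe-anxiety group

admission = np.full(n, 4.0)  # all rated level 4 (severe) at admission
# Simulated discharge ratings centered near the reported mean of 2.7,
# clipped to the 1-4 range shown in the rating scale (Table 2).
discharge = np.clip(np.round(rng.normal(2.7, 1.0, n)), 1, 4)

t_stat, p_value = stats.ttest_rel(admission, discharge)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.3g}")
```

Because every subject starts at the ceiling rating, any systematic decline at discharge yields a large positive t statistic, matching the direction of the reported effect.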
Next, follow-up analyses were performed to determine whether the reduction in anxiety resulted in improved functional outcomes. The 164 severely anxious subjects were divided into two groups based on discharge MPAI-4 anxiety ratings. One group comprised the 61 individuals who improved to ratings of mildly anxious or not anxious; the second comprised the 103 who remained moderately (n=59) or severely (n=44) anxious. An independent-samples t-test examined group differences in functional performance as measured by discharge MPAI-4 Participation T-scores. This analysis revealed significant group differences, t(162) = 4.5, p < .0005, with mean discharge Participation T-scores of 56.55 (moderate-severe group) versus 48.44 (mild-not anxious group). On average, those experiencing no more than mild symptoms of anxiety demonstrated greater independence in performing everyday functional tasks than those experiencing moderate to severe symptoms. Of note, mean Participation T-score differences between these groups were not statistically significant at admission, t(162) = 1.27, n.s., again indicating the adverse impact of anxiety on functional outcomes: when anxiety abated, functional independence significantly improved.

It is also important to note that it is common for persons recovering from TBI to experience increased anxiety as they gain awareness of their circumstances and the extent of their injury. An examination of the 234 mildly anxious subjects revealed that 16 (7%) experienced a worsening of symptoms: at discharge, 11 (5%) had increased to ratings of moderate anxiety and 5 (2%) were rated severely anxious. Similarly, of the 283 subjects in the not-anxious group, at discharge 20 (7%) were rated mildly anxious, 7 (2%) moderately anxious, and 3 (1%) severely anxious.
Consistent with prior research, the MPAI-4 was found to be psychometrically sound and a reasonable measure for evaluating the impact of disability and the performance of skills following traumatic brain injury. Further, the current study supported the previously established statistical properties of the MPAI-4, demonstrating that the instrument measures independent and unique factors contributing to brain injury recovery. Anxiety was one facet measured and evaluated with this instrument in the current study.
Also consistent with previous research, this study found a high rate of anxiety among a chronic group of TBI survivors. Prior research estimated that 20-70% of TBI survivors experience symptoms of anxiety during recovery and restoration of function at the post-hospital level. Of the 950 current study participants, 45% experienced anxiety symptoms sufficient to merit a psychiatric diagnosis upon admission to the program. The study also evaluated a more chronic group, averaging approximately 26 months from time of injury to measurement at admission.
Rasch analyses conducted on the MPAI-4 rating scales demonstrated that the prevalence of anxiety was not inflated by injury-induced cognitive and neurobehavioral impairments whose symptoms may mirror anxiety. The impact of anxiety on overall function was clear: as anxiety ratings increased, so did the level of overall disability. Those in the moderate and severe anxiety groups experienced higher levels of disability and poorer functional performance than the mild and not-anxious groups, as measured by MPAI-4 Participation T-scores at both admission and discharge. Those in the severely anxious group were significantly more disabled than those in each of the other three groups.
An additional important finding of this study was that participants in each of the four anxiety groups realized improved functional independence following completion of post-hospital residential rehabilitation programs, despite averaging more than 2 years from time of injury to admission. In each group, MPAI-4 T-scores were significantly lower (indicating less disability) at discharge than at admission, a result achieved even in the severely anxious group, suggesting the effect of a multidisciplinary post-hospital approach. This finding is also consistent with prior research indicating that after approximately 8 months of recovery, improvement is based on intervention rather than spontaneous recovery effects.
Aside from reduced disability and improved performance, the programs were also effective in reducing the impact of anxiety symptoms. The percentage of persons with moderate to severe anxiety ratings dropped from 45% at admission to 24% at discharge, a rate much lower than the prevalence reported in the literature for chronic TBI survivors. Nonetheless, those who remained anxious had poorer outcomes than those who improved by discharge. The analysis of the severe anxiety group revealed that 103 of the 164 participants continued to experience moderate to severe symptoms of anxiety at discharge; these participants experienced far worse outcomes than the 61 participants in the severe group who improved to the mildly anxious or not-anxious category. There were no significant differences between these two sub-groups on the same measure at admission, suggesting that anxiety was an important contributor to the difference in outcome performance. The 61 participants in the severe anxiety group who improved to at least the mild level by discharge averaged a discharge Participation T-score of 48.54, equivalent to the mean discharge Participation T-score of 48.25 achieved by the mild anxiety group. This finding illustrates the impact of anxiety on functional outcomes and emphasizes the importance of treating anxiety to maximize the effectiveness of post-hospital residential rehabilitation programs.
The finding that anxiety has an adverse impact on outcome following TBI is not surprising. Success in post-hospital brain injury rehabilitation requires active participation in rigorous therapies, and symptoms of anxiety (e.g., worry, fear, irritability, difficulty concentrating, and fatigue) present significant barriers to such participation. Therefore, if not addressed, anxiety may negatively alter rehabilitative outcomes. It is encouraging that many participants who began rehabilitation with moderate to severe anxiety ratings showed significant improvement in anxiety by discharge, and that this improvement was associated with greater functional independence.
The researchers were unable to determine the percentage of anxious participants who were anxious premorbidly. For example, those in the severe anxiety group whose high levels of anxiety persisted (i.e., no positive change in symptom presentation) may have had a more chronic premorbid anxiety history than those whose anxiety lessened. In addition, the researchers were unable to determine the impact of treatment complexity (e.g., counseling, medication, or both) on anxiety levels at discharge. Finally, this study concluded as each subject was discharged from the respective programs; long-term follow-up may be indicated to evaluate the durability of the reductions in anxiety and disability post-discharge.
Anxiety is considered a public health problem across world populations, with negative effects on physical well-being and level of functioning even for those without neurological injury. The current study demonstrated the negative impact of anxiety on neurological rehabilitation outcomes. Remediation of the primary and residual effects of anxiety during rehabilitation significantly improved outcomes, leading to reduced disability with a positive societal impact.
Three of the authors are employed by NeuroRestorative, which provided financial support for this research. The authors' employers were not aware of the study's findings at the time of submission. The fourth author is a student at the Florida State University College of Medicine and has no financial disclosures.
Chaudhury S, Biswas PS, Kumar S (2013) Psychiatric sequelae of traumatic brain injury. Medical J Dr. DY Patil University 6: 222-228.
Malec JF, Lezak MD (2008) The Mayo-Portland Adaptability Inventory (MPAI-4) for adults, children and adolescents. Manual 1-84.
Dixon TP (1989) Systems of care for the head-injured. Phys Med Rehab: State of the Art Reviews 3: 169-181.
Bond T, Fox CM (2001) Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Erlbaum.
Linacre JM (2002) What do Infit and Outfit, mean-square and standardized mean? Rasch Measurement Transactions 16: 878.
American Psychiatric Association (2013) Diagnostic and Statistical Manual of Mental Disorders 5th Edition (DSM 5). Arlington, VA: American Psychiatric Publishing.
© 2017 Horn GJ. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Slow-slip earthquakes may occur at somewhat regular intervals and could be dissipating some of the energy stored in subduction zones where tectonic plates meet, new research suggests.
The findings could give researchers insight into larger quakes and the formation of tsunamis.
Two tectonic plates meet off the eastern coast of Japan, the Pacific Plate and the Eurasian Plate, in a subduction zone where the Pacific Plate slides beneath the Eurasian Plate.
Map of the area at the subduction zone where the bore holes are located. Small map gives general location and pull out map shows detail of the layout. (Credit: Demian Saffer/Penn State)
“This area is the shallowest part of the plate boundary system,” says Demian Saffer, professor of geosciences at Penn State. “If this region near the ocean trench slips in an earthquake, it has the potential to generate a large tsunami.”
This type of earthquake zone forms the “ring of fire” that surrounds the Pacific Ocean, because once the end of the plate that is subducting—sliding underneath—reaches the proper depth, it triggers melting and forms volcanoes.
Mt. St. Helens in the American Cascade Mountains is one of these volcanoes, as is Mt. Fuji, about 60 miles southwest of Tokyo. Subduction zones are often also associated with large earthquakes.
Reducing risk?
The researchers focused their study on slow earthquakes, slip events that happen over days or weeks. Recent research by other groups has shown that these slow earthquakes are an important part of the overall patterns of fault slip and earthquake occurrence at the tectonic plate boundaries and can explain where some of the energy built up on a fault or in a subduction zone goes.
“These valuable results are important for understanding the risk of a tsunami,” says James Allan, program director in the National Science Foundation’s Division of Ocean Sciences, which supports the Integrated Ocean Drilling Program, now the International Ocean Discovery Program (IODP).
“Such tidal waves can affect the lives of hundreds of thousands of people and result in billions of dollars in damages, as happened in Southeast Asia in 2004. The research underscores the importance of scientific drillship-based studies, and of collecting oceanographic and geologic data over long periods of time,” Allan says.
In 2009 and 2010, the IODP NanTroSEIZE project drilled two boreholes in the Nankai Trough offshore southwest of Honshu, Japan, and in 2010 researchers installed monitoring instruments in the holes that are part of a network including sensors on the seafloor.
The two boreholes were 6.6 miles apart, straddling the shallow boundary of slip in the last major earthquake in this area, which occurred in 1944 and measured magnitude 8.1. The accompanying tsunami that hit Tokyo was 26 feet in height.
“Until we had these data, no one knew if zero percent or a hundred percent of the energy in the shallow subduction zone was dissipated by slow earthquakes,” says Saffer. “We have found that somewhere around 50 percent of the energy is released in slow earthquakes. The other 50 percent could be taken up in permanent shortening of the upper plate or be stored for the next 100- or 150-year earthquake.
“We still don’t know which is the case, but it makes a big difference for tsunami hazards. The slow slip could reduce tsunami risk by periodically relieving stress, but it is probably more complicated than just acting as a shock absorber,” Saffer says.
The researchers found a series of slow slip events on the plate interface seaward of recurring magnitude 8 earthquake areas east of Japan. These slow earthquakes lasted days to weeks, some triggered by other unconnected earthquakes in the area and some happening spontaneously.
According to the researchers, this family of slow earthquakes occurred every 12 to 18 months.
‘Caution is required’
“The area where these slow earthquakes take place is uncompacted, which is why it has been thought that these shallow areas near the trench act like a shock absorber, stopping deeper earthquakes from reaching the surface,” says Saffer.
“Instead we have discovered slow earthquakes of magnitude 5 or 6 in the region that last from days to weeks.”
These earthquakes typically go unnoticed because they are so slow and very far offshore.
The researchers also note that because earthquakes that occur at a distance from this subduction zone, without any direct connection, can trigger the slow earthquakes, the area is much more sensitive than previously thought. The slow earthquakes are triggered by the shaking, not by any direct strain on the area.
“The question now is whether it releases stress when these slow earthquakes occur,” says Saffer. “Some caution is required in simply concluding that the slow events reduce hazard, because our results also show the outer part of the subduction area can store strain. Furthermore, are the slow earthquakes doing anything to load deeper parts of the area that do cause big earthquakes? We don’t know.”
Additional researchers who were part of the project are from the MARUM-Center for Marine Environmental Sciences; GNS Sciences, New Zealand; University of Texas Institute of Geophysics; Japan Agency for Marine-Earth Science and Technology; Department of Earth and Planetary Science, University of Tokyo; Pacific Geoscience Centre, Geological Survey of Canada; and IODP Expedition 365 shipboard scientists.
The study appears in the journal Science. The National Science Foundation and the IODP funded this work.
The pivotal point of rebellions, protests, and celebrations, Plaza de Mayo is an iconic landmark of Buenos Aires and a significant part of its political history. The city square is located in the financial district of Buenos Aires and is surrounded by numerous monuments and buildings of national interest, such as the Cabildo, the Metropolitan Cathedral, the Casa Rosada (the seat of Argentina's President), the headquarters of the Bank of the Argentine Nation, and the Buenos Aires City Hall. At the centre of the square is the Pirámide de Mayo, the national monument built to commemorate the first anniversary of Argentine independence.
Teatro Colon is an opera house in Buenos Aires widely renowned for its acoustics. It counts among the top ten opera houses in the world and has hosted great performances by international stars. Built in 1908 on the site of an earlier opera house, Teatro Colon has seen over a hundred years of name and fame.
Designed by Peró and Torres Armengol and originally a theatre, El Ateneo Grand Splendid is a grand bookstore. Retaining the lavishness of the theatre it used to be, the bookstore has stunning architecture and an equally classy collection of books to arouse your curiosity. Described by The Guardian as the second most beautiful bookstore in the world, it is one of the premier attractions of Buenos Aires and receives millions of visitors annually.
A beautiful green oasis quite unlike the rest of Argentina, offering peace and tranquillity, is the Japanese Garden in Buenos Aires. The place perfectly captures graceful Japanese culture with its lush greenery, a Japanese tea house, and a temple. It also has a greenhouse showcasing a beautiful collection of bonsai trees. But the main attraction of the garden is the tranquil lake surrounded by Japanese flora, which can be crossed by quaint bridges that add colour to the serene setting.
The Museo Nacional de Bellas Artes is the National Museum of Fine Arts in Buenos Aires, housing significant works by European and Argentine artists. The ground floor showcases its international collection of paintings from the Middle Ages to the twentieth century. The first floor exhibits works by renowned 20th-century Argentine painters, while the second floor showcases great specimens of photography and has two sculpture terraces.
Declared a national monument, the Buenos Aires Botanical Garden is a picturesque garden spread over an area of around 7 hectares, harbouring over 5,500 species of plants, shrubs, and trees. It also comprises five greenhouses and several monuments and sculptures. The Botanical Garden has three distinct sections, Roman, French, and Oriental, portraying different gardening styles. In addition, there is an abundant collection of flora from Argentina and the Americas.
The city of tango and juicy steaks, Buenos Aires is a magnificent place to explore. The capital of Argentina and its most populous city, with an eclectic mix of European heritage and Latin American architecture, offers a quaint and majestic sight. Added to this is a dynamic food culture that will leave you with an unforgettable experience.
Seductive, enrapturing and addictive like Tango, the city is hard to leave once you get to know it. Its spirited energy ensures you have a memorable urban adventure.
Tickets to all the prominent attractions of the city are available both online and offline. Buying tickets online is recommended to avoid long lines at the ticket counters and save time.
Owing to its efficient transportation network, Buenos Aires is easy to explore. The Buenos Aires Subte is the underground subway system that provides quick and easy transportation. Public buses, also known as 'bondis' or 'colectivos', have routes covering the entire city and provide cheaper transportation. To travel by bus or metro you need to purchase the SUBE travel card. Bikes are a great option for exploring the city, which has 130 km of cycle lanes. Taxis are also good for private transportation, or you can even hire a car.
Argentine locals like to keep it casual but have a great sense of dress. Anything tidy and casual works for daytime exploration, while evenings call for dressing up a little. Summers are extremely hot and sweaty, so bring lighter fabrics. Shoes should be comfortable yet stylish enough to match the standards of Buenos Aires. Winters are quite cold and wet, so pack accordingly.
January and February are the summer months in Buenos Aires and the peak tourist season, with large crowds and soaring fares. If you wish to avoid the crowds, the best times to visit are fall (April to June) and spring (September to December), when the weather is pleasant and crowds are comparatively smaller. July and August are cold and wet.
There is no need to book a guide, as this site provides all the necessary details for the tour. If you are visiting Buenos Aires for the first time, however, plan your complete trip in advance to get the best experience of the city and avoid any chaos. You can also opt for a guided tour of the city.
Buenos Aires is served by three airports. Ezeiza International Airport is the main airport, receiving numerous international and domestic flights. Aeroparque Jorge Newbery is a smaller airport handling domestic flights and flights from neighbouring countries. El Palomar Airport serves low-cost domestic flights. There are also efficient coach services between Buenos Aires and neighbouring cities. Ferries run between Buenos Aires and Colonia del Sacramento and Montevideo in Uruguay, and many cruise lines call at the city's cruise terminal.
Please note that Argentina lies in the southern hemisphere and thus experiences a reversed season cycle. Also, beware of pickpockets on the streets and keep your valuables secure.
The scientific work of Charles-Augustin de Coulomb endures as a significant contribution to explaining phenomena in nature and to generating crucial technological developments in everyday activities. We used Coulomb’s law to calculate the changes generated in the electrostatic interactions of residue 614 of the SARS-CoV-2 Spike protein when the D614G mutation occurs. We carried out a physical analysis of the transformation and its biological implications for the structural stability of the whole molecule, finding that greater electronegativity at the mutation site favors the open state of Spike, which manifests as a greater efficiency in binding to the human ACE2 receptor.
Keywords
Spike, Coulomb’s Law, Electrostatic Interactions
1. Introduction
Electrostatic interactions are essential for biomolecule function, e.g., changes in these interactions on residues can influence phosphorylation and dephosphorylation, thereby inducing significant structural effects, such as protein denaturation and thus disruption of metabolic processes in living organisms (Zhou & Pang, 2018). In fact, to fully understand their activity, it is necessary to comprehend the relationship between their structure and energy, where the most relevant factors are associated with the electrostatics interactions (Warshel & Russell, 1984).
In proteins, electrostatic forces have a central role in their structure and stability, as well as enzymatic activity, ligand binding, and allosteric control (Hanoian et al., 2015; Laskowski et al., 2009; Nakamura, 1996). Protein electrostatics is composed of intramolecular interactions and interactions with the medium. Such electrostatic properties are mainly due to the different types of amino acids that can be combined to make a protein (each one characterized by unique physicochemical properties) interacting with each other through salt bridges, hydrogen bonds, and charge-dipole interactions to stabilize the molecule. On the other hand, electrostatic interactions are long-range and highly dependent on the environment and the ions surrounding the biomolecule (Bremer et al., 2004; Nakamura, 1996).
The recognition and binding of a protein with other biomolecules (ligand, receptor, or antibody) depends on different chemical and physical factors, mainly electrostatic energy—whose contribution is the result of Coulomb interactions between the molecules— (Voet et al., 2013). The electric field in the active site of a protein regulates its catalytic activity and determines the relative binding orientations; in addition, the surface of the proteins and the interface generated after the interaction have many polar and charged residues (Sheinerman et al., 2000). For example, antibodies with a predominantly positive surface charge interact electrostatically with anionic cell membranes or tissues as long as the ionic strength and pH of the protein solution is adequate (Boswell et al., 2010; Nguyen et al., 2021); this is known as electrostatic complementarity: regions of positive electrostatic potential in one biomolecule interact with regions of negative electrostatic potential in the other. Complementarity has been observed in many biological complexes; in proteins, the influence of charged amino acids in the structure generates charged surfaces that promote interactions with other biomolecules, and a single change in their sequence—for example, a mutation—could modify the electrostatic interactions by increasing or decreasing the attraction of the complex (Goher et al., 2022; Ishikawa et al., 2021).
The SARS-CoV-2 virus is responsible for causing the COVID-19 disease that has led to one of the most severe health emergencies in recent years, and as a result, the virus has been the subject of extensive research and open access information and therefore represents an excellent model for teaching. SARS-CoV-2 is made up of five structural proteins, of which the Spike protein is fundamental since it interacts with the human receptor—Angiotensin Converting Enzyme 2 (ACE2)—to begin the infection process. The D614G mutation in Spike quickly became a dominant variant of SARS-CoV-2 worldwide (Fernández, 2020; Gobeil et al., 2021); hence, structures with and without the mutation are available in each conformational state that distinguishes the Spike protein (open or 1-RBD up and closed or 3-RBD down) (Gobeil et al., 2021; Walls et al., 2020). Since mutation D614G is present in the predominant variants worldwide, it has been suggested that it could be a candidate for antigen design and future vaccines, mainly because G614 Spike is prone to maintain the open state (Koenig & Schmidt, 2021). However, D614G is not considered to affect current vaccines, in fact, by promoting the open state, the mutation increases the exposure of neutralizing epitopes, and it has been shown that G614 variant is therefore better neutralized (Plante et al., 2021; Weissman et al., 2021).
In this work, we use open-source technological and scientific resources to calculate the electrostatic interactions of residue 614 of the SARS-CoV-2 Spike protein with and without the D614G mutation. These computational tools are essential in various areas of biology and are fundamental for the study of biomolecules where the calculation of electrostatic interactions is a necessary procedure to identify the regions of chemical affinity in different viral proteins (Pena-Negrete et al., 2020; Osorio-González et al., 2021). We compare the open and close Spike conformational states to highlight the biophysical implications of Coulomb’s law.
2. Materials and Methods
Coulomb’s law is also known as the law of electrostatics; it was published for the first time in 1785 (Coulomb, 1785), and it justified—with an experimental foundation—the nature of the forces of attraction or repulsion between two charges $q_1$ and $q_2$ separated by a distance $r$:

$$\mathbf{F} = \frac{1}{4\pi\varepsilon_0}\frac{q_1 q_2}{r^2}\,\hat{e}$$

where $\varepsilon_0$ is the electric permittivity of vacuum and $\hat{e}$ is a unit vector in the direction of $\mathbf{F}$. Here the charge $q_1$ is taken to be mobile and the charge $q_2$ fixed.
Coulomb’s law has applications in many areas of knowledge; however, this paper will highlight its usefulness in describing the biological problem related to a mutation of the SARS-CoV-2 virus. This mutation allows a change in electrostatic interactions that surprisingly makes the virus more efficient for binding to the human ACE2 receptor.
As a starting point, we use the differential relationship between force and potential energy $V$ given by

$$F = -\frac{dV}{dr}$$
Likewise, using Coulomb’s law, it can easily be verified that the electric force has an associated potential energy:

$$V = \frac{1}{4\pi\varepsilon_0}\frac{q_1 q_2}{r}$$
The superposition principle applies to a set of $n$ individual charges, so that the total potential energy is:

$$V = \frac{1}{4\pi\varepsilon_0}\sum_{\text{pairs } i,j}\frac{q_i q_j}{r_{ij}}$$
We used this last expression to calculate the difference in the electrostatic interactions of the atoms that make up residues D614 and G614 of the SARS-CoV-2 Spike protein. The Cartesian coordinates of these residues were taken from the Protein Data Bank using the identification codes 6VXX, 6VYB, 7KDL, and 7KDK. We used the charges defined by the GROMOS 53a force field (Oostenbrink et al., 2004), and the distances between the atoms of each residue were calculated with the PyMOL 2.3.4 molecular viewer. The results are presented in the next section.
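The pairwise sum in the last expression can be evaluated directly from atomic coordinates and partial charges. The sketch below is a minimal illustration in SI units; the point charges and positions are invented examples, not values taken from the PDB structures or the GROMOS force field:

```python
import itertools
import math

# Vacuum permittivity in SI units; the prefactor 1/(4*pi*eps0) is
# Coulomb's constant k ≈ 8.988e9 N·m²/C².
EPS0 = 8.8541878128e-12
K = 1.0 / (4.0 * math.pi * EPS0)

def coulomb_energy(atoms):
    """Total electrostatic potential energy of a set of point charges.

    `atoms` is a list of (charge_in_coulombs, (x, y, z)) tuples with
    coordinates in metres; the sum runs over all unordered pairs i < j.
    """
    total = 0.0
    for (q1, p1), (q2, p2) in itertools.combinations(atoms, 2):
        r = math.dist(p1, p2)  # Euclidean distance between the two atoms
        total += K * q1 * q2 / r
    return total

# Two opposite unit charges 1 m apart: V = -k ≈ -8.988e9 J.
print(coulomb_energy([(1.0, (0, 0, 0)), (-1.0, (1, 0, 0))]))
```

In practice the charges would be the force-field partial charges of the residue atoms and the positions would come from the PDB files; the loop over unordered pairs mirrors the sum over pairs $i,j$ in the superposition formula.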
3. Results
When the SARS-CoV-2 virion approaches the human receptor, conformational changes occur in the Spike protein that allow the opening of an RBD—that is, the transition from the closed to the open state—for recognition and binding with ACE2 (Takeda, 2022). D614 is a surface residue located in the Spike subunit (S1) that contains the receptor-binding domain (RBD), and although it does not interact directly with ACE2, the D614G mutation disrupts critical hydrogen bonds, causing a change in the observed balance between the open and closed states of Spike (Gobeil et al., 2021). By applying Coulomb’s law to all non-covalently bonded pairs of atoms in residue 614, it was shown that the D614 residue in the closed state (6VXX) is more electronegative, and therefore more attractive, than in the open state (6VYB) (Table 1); this attraction is necessary to maintain the conformation and the balance between the open and closed states.
On the other hand, the electrostatic interactions between the open and closed states of G614 did not show differences (Table 1), which could be because the mutation favors the open state of Spike, so that the conformational change from the closed to the open state does not imply an energetic change (Gobeil et al., 2021; Weissman et al., 2021; Yurkovetskiy et al., 2020). A recent investigation found that the Spike models with G and D lead to a great number of contacts that are similar among the three protomers that make up the protein; however, in the D model the contacts can be described as symmetrical, a symmetry that is lost when Spike switches to the open state, while in the model with the D614G mutation Spike maintains the symmetry in the number of persistent contacts when switching to the open state, which implies that the energy of the protomers in the different conformational states is similar (Mansbach et al., 2022), as can be seen when comparing the distances in D614 and G614 in both states (Figure 1). This allows us to understand the impact of a mutation in a single
Table 1. Electrostatic interactions of residue 614 of the Spike protein in the open and closed models, with and without mutation.
D614 open state (PDB ID: 6VYB), G614 open state (PDB ID: 7KDL), D614 closed state (PDB ID: 6VXX), and G614 closed state (PDB ID: 7KDK).
Figure 1. The distances between atoms are shown in residue 614 of the different Spike models. (a) Open state without mutation (PDB ID: 6VYB), (b) Closed state without mutation (PDB ID: 6VXX), (c) Open state with D614G mutation (PDB ID: 7KDL), and (d) Closed state with D614G mutation (PDB ID: 7KDK).
residue that can facilitate conformational changes towards the open state in Spike, thus improving viral infectivity.
Both the open and closed states are more electronegative in the D614 form of Spike than with G614, which is 58.7% and 62.5% less electronegative in the open and closed states, respectively. This difference could be related to the fact that the Spike structure with D614 is less stable than the G614 variant; in general, Spike with G614 naturally favors a prefusion state (1RBD-up) and presents the 3RBD-down and 1RBD-up conformations with better stability (Zhang et al., 2021).
4. Conclusion
In the present work, we verify that Coulomb’s law allows us to explain simply and directly why a single mutation of the SARS-CoV-2 virus enables high effectiveness in binding to human receptors and that the electrostatic nature of biomolecules is essential for biological interactions. Specifically, changes in electrostatic potential make it possible to understand the biophysical and structural consequences of the D614G mutation in Spike. The biophysical analysis of said transformation shows that under this situation, a greater electronegativity prevails in the vicinity of residue 614, and this confers more structural stability to the whole molecule in its open state, which is manifested as a greater efficiency in binding to the human receptor ACE2 and, consequently, facilitates the spread of COVID-19.
From a scientific and academic perspective, it is gratifying that such a complex biological process can be modeled with a physical law proposed almost 240 years ago, and that its simple form makes it possible to understand a phenomenon that history will record as the pandemic of the 21st century.
Acknowledgements
The authors wish to express gratitude to the Center for Research, Technology Transfer, and Business Innovation (CITTIES) for supporting the development of new teaching techniques.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
References
Copyright © 2023 by authors and Scientific Research Publishing Inc.
This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.
Shawnee, Kansas, the small city where FLY Bard Travels is located, has a population of about 66,000 residents.
Johnson County, where the city of Shawnee is located, is shown in red on the Kansas map.
The most recently known address of FLY Bard Travels is 5830 Theden St, Shawnee, Kansas 66218.
Before visiting FLY Bard Travels, be sure to verify its address and hours of operation. This Kansas-based organization may have ceased operations or relocated, and hours can sometimes vary. So a quick phone call can often save you time and aggravation.
Visit this Travel Agents directory page to find travel agents and travel agencies throughout the USA.
Visit this search results page to find more information about FLY Bard Travels in Shawnee, Kansas.
DIAC takes a holistic perspective on the design sector and emphasizes the positive impact that can be achieved through cross-disciplinary collaboration among designers from various disciplines. The board members of DIAC include senior representatives of the professional associations representing architects, landscape architects, industrial, interior, graphic and fashion designers in Ontario.
ACIDO is an association of accredited Industrial Designers in Ontario formed to develop and promote the profession. It is a community that facilitates the exchange of ideas and best practices, the advancement of our shared values and growth opportunities for the Industrial Design workforce.
Founded in 1972, Interior Designers of Canada (IDC) is the national advocacy association for the interior design profession. As the national advocacy body, IDC represents more than 5,000 members including fully qualified interior designers, Intern members, students, educators and retired members.
The Ontario Association of Architects is a self-regulating organization governed by the Architects Act, a statute of the Government of Ontario.
The OALA Mission is to promote, improve and advance the profession of Landscape Architecture and maintain standards of professional practice and conduct consistent with the need to serve and to protect the public interest.
The Association of Registered Graphic Designers is a non-profit, professional Association that represents over 3,800 design practitioners, including firm owners, freelancers, managers, educators and students.
The TSA is a volunteer-based organization of architects, other professionals, and community members interested in architecture and urban issues.
Economic Development & Culture strives to make Toronto a place where business and culture thrive. The division’s objective is to advance Toronto’s prosperity, opportunity and liveability.
A student has mastered the exam material in Czech to 98%, in Math to 86%, and in Economics to 71%. What is the probability that he will fail Math but succeed in the other two subjects?
Correct result:

p = 0.98 × (1 − 0.86) × 0.71 ≈ 0.0974
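Assuming the three exam outcomes are independent, the answer follows directly from multiplying the probabilities of the three required events; the short check below reproduces the computation:

```python
# Probability the student fails Math but passes Czech and Economics,
# assuming the three exam outcomes are independent.
p_czech = 0.98   # P(pass Czech)
p_math = 0.86    # P(pass Math)
p_econ = 0.71    # P(pass Economics)

p = p_czech * (1 - p_math) * p_econ
print(round(p, 4))  # 0.0974
```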
Related math problems and questions:
- Alarm systems
What is the probability that at least one alarm system will signal the theft of a motor vehicle when the efficiency of the first system is 90% and of the independent second system 80%?
- Final exam
There are 5 learners in a class that has written a final exam. Aleta scored 55%, Vera scored 36%, and Sibusiso scored 88%. If Thoko scored 71% and the class average was 63%, what was David's score as a percentage?
- Records
Records indicate 90% error-free. If 8 records are randomly selected, what is the probability that at least 2 records have no errors?
- The university
At a certain university, 25% of students are in the business faculty. Of the students in the business faculty, 66% are males. However, only 52% of all students at the university are male. a. What is the probability that a student selected at random in the
- Six questions test
There are six questions in the test. There are 3 answers to each - only one is correct. In order for a student to take the exam, at least four questions must be answered correctly. Alan didn't learn at all, so he circled the answers only by guessing. What
- Test scores
Jo's test scores on the first four 100-point exams are as follows: 96, 90, 76, and 88. If all exams are worth the same percent, what is the minimum test score necessary on his last exam to earn an A grade in the class (90% or better)?
- Birthday paradox
How large must the group of people be so that the probability that two people have a birthday on the same day of the year is greater than 90%?
- Test
The teacher prepared a test with ten questions. The student has the option to choose one correct answer from the four (A, B, C, D). The student did not get a written exam at all. What is the probability that: a) He answers half correctly. b) He answers al
- Geography tests
On three 150-point geography tests, you earned grades of 88%, 94%, and 90%. The final test is worth 250 points. What percent do you need on the final to earn 93% of the total points on all tests?
- Double probability
The probability of success of the planned action is 60%. What is the probability that success will be achieved at least once if this action is repeated twice?
- Class - boys and girls
In the class are 60% boys and 40% girls. Long hair has 10% boys and 80% girls. a) What is the probability that a randomly chosen person has long hair? b) The selected person has long hair. What is the probability that it is a girl?
- Two doctors
Doctor A will determine the correct diagnosis with a probability of 93% and doctor B with a probability of 79%. Calculate the probability of proper diagnosis if both doctors diagnose the patient.
- Acids
A 70% acid was made from the same acid of two different concentrations. The ratio of the weaker acid to the stronger acid is 2:1. What was the concentration of the weaker acid, if the stronger one had a 91% concentration?
- Sick days
In Canada, there are typically 261 working days per year. If there is a 4.9% chance that an employee takes a sick day, what is the probability an employee will use 17 OR MORE sick days in a year?
- Std-deviation
Calculate the standard deviation for the data set: 63, 65, 68, 69, 69, 72, 75, 76, 77, 79, 79, 80, 82, 83, 84, 88, 90
- Seeds
The germination of seeds of a certain species of carrot is 96%. What is the probability that at least 25 seeds out of 30 will germinate?
- Shooters
In an army regiment there are six shooters. The first shooter hits the target with a probability of 49%, the next ones with 75%, 41%, 20%, 34%, and 63%. Calculate the probability of hitting the target when all shoot at once.
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF EMBODIMENTS
The invention relates to a clinical decision support system.
In healthcare, issues of increasing importance are reducing the expenses, and increasing the quality, safety and efficiency of care. Additionally, there is a widening knowledge gap between the care provided in top research clinical sites and standard care sites, which may result in differences in treatments and outcomes. In this context, there is a need to bring the latest therapy options to as many hospitals as possible.
Electronic health record (EHR) systems are currently being widely implemented to help manage patient records, increase the ability of analysts to assess quality of healthcare, and reduce patient suffering due to medical errors. Clinical decision support tools help leverage the value of the data collected in EHR systems. Such tools may allow doctors to use the data in the patient file and combine it with clinical knowledge to make the best available patient-specific decisions. Moreover, clinical recommendations or specific advice provided by clinical experts to their colleagues form another means of knowledge dissemination. Also, patients may ask for a second opinion. Such consultations can take place face-to-face or via messages.
US 2008082358 A1 discloses a method comprising: receiving user-provided clinical information during a first clinical decision support event associated with a patient; comparing the user-provided clinical information with the patient against one or more rules for initiating one or more clinical decision support events; generating a user interface for the second clinical decision support event, including user-provided clinical information from the first clinical decision support event and stored clinical information; providing clinical advice based on further user-provided clinical information, the user-provided clinical information from the first clinical decision support event, and the stored clinical information.
US 2012101845 A1 discloses a method comprising: selecting a patient condition for management based on at least one evidence-based clinical practice guideline; reviewing, using a processor, evidence-based studies and clinical practice guidelines to form a starting point for medical support; reviewing an existing workflow; creating a modified workflow, associated decision support; developing a guideline-assisted medical support process; and providing the guideline-assisted medical support process to a user for usage and review.
It would be advantageous to provide an improved clinical decision support system. To better address this concern, a first aspect of the invention provides a system comprising
at least one clinical guideline comprising a plurality of nodes, wherein a node is associated with a set of clinical preconditions and a clinical recommendation, and wherein the node is further associated with a pair of a clinical question and a corresponding clinical answer, the pair forming an extension to the clinical guideline;
a node unit for determining a relevant node of the plurality of nodes, based on a condition of a specific patient and the set of clinical preconditions of the relevant node;
a presenting unit for presenting at least a part of the pair of the question and/or the corresponding clinical answer associated with the relevant node.
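To make the node structure concrete, the sketch below models a guideline node and a node unit that matches a patient's recorded conditions against node preconditions. All class names, fields, and the example conditions and recommendations are illustrative assumptions, not part of the described system:

```python
from dataclasses import dataclass, field

@dataclass
class GuidelineNode:
    """A guideline node: preconditions, a recommendation, and any
    question/answer pairs that extend the guideline at this node."""
    preconditions: frozenset          # e.g. {"hypertension", "diabetes"}
    recommendation: str
    qa_pairs: list = field(default_factory=list)  # (question, answer) tuples

def relevant_nodes(nodes, patient_conditions):
    """Return the nodes whose preconditions are all met by the patient."""
    return [n for n in nodes if n.preconditions <= set(patient_conditions)]

# Hypothetical two-node guideline for illustration only.
nodes = [
    GuidelineNode(frozenset({"hypertension"}), "Start ACE inhibitor"),
    GuidelineNode(frozenset({"hypertension", "diabetes"}),
                  "Prefer ACE inhibitor; monitor renal function"),
]
hits = relevant_nodes(nodes, {"hypertension", "diabetes"})
print([n.recommendation for n in hits])  # both nodes match this patient
```

The presenting unit could then show the `qa_pairs` attached to each matching node alongside the standard recommendation.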
Since the number of clinical experts in most clinical domains is small, their time is an expensive resource which should be managed efficiently. The system provides a way to reuse recommendations by experts, i.e. clinical answers, when they become generally relevant to different patients. It will be understood that the clinical answer may take the form of any kind of clinical recommendation that corresponds to the clinical question. The clinical question may for example include information relating to a condition of a patient and/or an indication of the desired clinical information.
Clinical experts may collect the data they provide during second opinion or consultation encounters: the question, the patient data, the answer, and clinical evidence that supports the answer. This information may be stored in the patient record of the patient to whom the question relates.
However, advantageously, by associating the questions and their answers to a relevant node of a clinical guideline in accordance with the invention, relevant clinical answers may be found more easily for any patient, with reduced or eliminated waiting time.
Extending guidelines with information obtained from new expert recommendations that are collected during clinical practice enables the system to provide the most accurate recommendation for a case based both on standard guidelines, on specific expert recommendations (e.g. in more complex cases), and on the most recently available knowledge. For example, the guidelines may be extended on-the-fly as new recommendations become available.
Of a direct benefit for a healthcare organization is to use the knowledge provided by the experts it hires to improve the care provided by all the clinicians in the organization (including education of the young physicians). This may be achieved by augmenting the clinical guidelines in use at the organization with relevant knowledge out of expert recommendations. The augmented guidelines can also be used by other healthcare organizations to improve their standard of care. For example, a community hospital could increase its quality of care and reduce the gap compared to a top academic center by using a clinical decision support system that incorporates clinical answers to clinical questions that were answered in the past. The system may provide a mechanism or a formally established channel for transferring best practices to clinicians of a particular specialty.
The system may comprise:
a question unit for receiving an input clinical question in respect of a patient; and
a matching unit for matching the question against a corpus of existing questions previously answered, to find a matching question that is similar to the input clinical question according to a predetermined similarity measure;
wherein the presenting unit is arranged for, if a matching question is found, presenting the clinical answer corresponding to the matching question from the corpus of existing questions.
Thus, if a treating physician has a clinical question relating to a particular patient, for example a question regarding the treatment options for the patient, the question may be asked by the physician by providing the question to the question unit. However, the question does not need to be forwarded to a human expert, in case the answer to a similar question is already available at a relevant node in the guideline. This way, less work is needed, and/or the answer to the question may be obtained more quickly. More than one similarity measure may be evaluated. The similarity measure may be based on information extracted from several sources, such as questions, answers, or patient data.
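As an illustration of the matching unit, the sketch below uses a simple bag-of-words Jaccard overlap as the predetermined similarity measure; the tokenization, the threshold value, and the example corpus are assumptions for illustration only:

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two questions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_question(new_q, corpus, threshold=0.5):
    """Return the answer of the most similar previously answered
    question, or None so the question can be routed to an expert."""
    best, best_score = None, 0.0
    for q, answer in corpus:
        s = jaccard(new_q, q)
        if s > best_score:
            best, best_score = answer, s
    return best if best_score >= threshold else None

# Hypothetical single-entry corpus of previously answered questions.
corpus = [("what is the recommended dose of drug x for renal impairment",
           "Reduce the dose by 50% and monitor creatinine.")]
print(match_question(
    "recommended dose of drug x in renal impairment", corpus))
```

A production system would likely use a richer measure (e.g. one that also weighs patient data extracted from the EHR), but the routing logic is the same: a score above the threshold reuses the stored answer, anything below it triggers the expert workflow.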
The system may further comprise an adding unit for, if no matching question is found, retrieving a clinical answer corresponding to the clinical question from an expert, and adding the question and the clinical answer retrieved from the expert to the corpus of questions. If a question is asked for which no matching question/answer pair is found, then the question may be forwarded for processing by a human expert. This way, in principle any question can be answered.
The adding unit may be arranged for associating the pair added to the corpus of questions with the relevant node in view of the condition of the patient to which the clinical question relates. This is a useful way to find the relevant node in the guideline with which a question/answer pair may be associated.
The adding unit may be arranged for adding a new node to the plurality of nodes based on the clinical answer retrieved from the expert, wherein the new node is indicative of a set of clinical preconditions extracted from the clinical question and/or the clinical answer. This may be useful to further integrate the knowledge represented by the clinical question and/or answer into the guideline. For example, when a recommendation (question-answer pair) prescribes the evaluation of a new patient condition or the collection of additional patient information, this may be consolidated in the guidelines by adding a new node.
The system may comprise a feedback unit for, if a matching question is found, determining whether the clinical answer corresponding to the matching question is appropriate, and if the clinical answer is found to be inappropriate for the patient to which the clinical question relates, triggering the adding unit to retrieve a clinical answer corresponding to the question from an expert, and add the question and the corresponding clinical answer retrieved from the expert to the corpus of questions. Even if a new question matches an existing question/answer pair, it is possible that the existing answer does not provide a sufficient answer to the new question. For example, in such a case, the physician who asked the new question may provide such feedback, so that the question is forwarded to an expert.
The system may comprise a first alert unit for generating an alert if no matching question is found, to indicate to the expert that a clinical answer corresponding to the clinical question is requested; and/or a second alert unit for generating an alert directed to a user who inputted the input clinical question, in response to the adding unit retrieving the clinical answer from the expert. This way, users of the system get alerted when an action is expected from them.
The system may comprise
a condition unit for extracting information relating to a clinical condition of a patient from a clinical question and/or a corresponding clinical answer and/or a patient health record;
a question-node unit for determining a particular node of the plurality of nodes, wherein the extracted clinical condition of the patient matches the set of clinical preconditions of the particular node according to a set of predetermined matching criteria;
an associating unit for associating the clinical question and the corresponding clinical answer with the particular node.
This allows an existing question/answer pair to be analyzed and matched to a node of the guideline, so that the guideline can be extended by associating the question/answer pair to that node of the guideline. This way, for example, a bulk of existing question/answer pairs may be added to the guideline.
The system may comprise a question normalizing unit for normalizing a question by generating a formal representation of the question based on a predetermined terminology and syntax. This allows new and old questions to be matched with each other with greater reliability and/or efficiency, because the standardized representation makes it easier to compare the questions. Other processing operations may also be implemented in a streamlined way once the questions have been normalized to a standardized format.
The system may comprise a health record unit for retrieving information relating to the condition of the specific patient from an electronic health record. This information may be used, for example, to enrich the question with more patient-specific information. This additional information may be used to find similar questions, to provide the human expert with more information, and/or to be able to find a node with which to associate a question/answer pair with more accuracy.
In another aspect, the invention provides a workstation comprising a system set forth herein.
In another aspect, the invention provides a method of providing clinical decision support, comprising associating a set of clinical preconditions and a clinical recommendation with a node of a clinical guideline including a plurality of nodes; associating a pair of a clinical question and a corresponding clinical answer with the node, wherein the pair forms an extension of the clinical guideline; determining a relevant node of the plurality of nodes, based on a condition of a specific patient and the set of clinical preconditions of the relevant node; and presenting at least a part of the pair of the question and/or the corresponding answer associated with the relevant node.
In another aspect, the invention provides a computer program product comprising instructions for causing a processor system to perform a method set forth herein.
It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful.
Modifications and variations of the workstation, the method, and/or the computer program product, which correspond to the described modifications and variations of the system, can be carried out by a person skilled in the art on the basis of the present description.
FIG. 1 depicts, schematically, a small fragment of a clinical guidelines decision tree for early breast cancer. Guidelines are used to support clinicians in patient management and are elaborated based on various types of evidence (clinical trials, state-of-the-art practice, expert opinion). Several computerized guideline systems exist that allow the clinician to navigate through the decision tree. A possible implementation of a clinical guideline is by means of a decision tree or graph that can also be represented as a Resource Description Framework (RDF) graph. Other implementations using other formalisms are also possible.
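Such a decision-tree guideline can be sketched as a small data structure. The following is a minimal illustration only; the field names, node identifiers, and precondition encoding are assumptions for the sake of the example, not part of any guideline standard.

```python
from dataclasses import dataclass, field

@dataclass
class GuidelineNode:
    """One node of a clinical guideline decision tree (illustrative sketch)."""
    node_id: str
    preconditions: dict            # e.g. {"tumor_size_mm": ("<=", 20)}
    recommendation: str            # the clinical recommendation for this node
    children: list = field(default_factory=list)

# A tiny two-level fragment, loosely in the spirit of FIG. 1.
root = GuidelineNode("101", {"diagnosis": ("==", "early breast cancer")},
                     "stage the tumor")
child = GuidelineNode("103", {"tumor_size_mm": ("<=", 20)},
                      "consider breast-conserving surgery")
root.children.append(child)
```

An equivalent structure could just as well be stored as an RDF graph, as the text notes; the dataclass form is merely the most compact way to show the node contents.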
In the drawing, nodes are represented by rectangles. The details of the nodes, such as a specification of clinical preconditions and a clinical recommendation, have not been indicated in the drawings. Each node, such as 101, 102, 103, represents a set of clinical preconditions on a patient, as well as a recommendation as to how to proceed. For example, nodes 101 and 102 indicate that, depending on the patient condition, node 107, 108, 103, or 109 is relevant. Node 103, for example, may recommend a particular treatment or test.
However, sometimes the details of the patient's condition may not match entirely with the information in the clinical guidelines, or the treating physician may have further questions that he or she is not able to answer based on the guidelines. In such a case the treating physician may formulate a question directed to a clinical expert.
The decision graph or tree may be extended by associating with each node, where available, a list of links to relevant clinical consultation questions and the corresponding expert recommendations (clinical answers). The length of the list of such expert recommendations for each node depends on the number of distinct questions submitted for expert review that are related to that node. These questions can usually be linked to uncertainties concerning the decision in particular nodes of the guidelines: exceptions, variations in treatments, adverse events, patients with co-morbidities, etc. The decision graph/tree can also be extended with new nodes based on new data items and conditions provided by the expert recommendations.
By addressing these complex cases by making use of the recommendations of the experts, the system may provide more detailed and up-to-date knowledge, while improving the efficiency of care and of the consultation process.
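The per-node list of consultation question/answer pairs described above can be sketched as a simple mapping from node identifiers to pairs. The function and variable names below are illustrative assumptions, not terms from the specification.

```python
# Each node id maps to a list of (question, expert answer) extensions.
qa_extensions = {}

def attach_qa(node_id, question, answer):
    """Associate a consultation question/answer pair with a guideline node."""
    qa_extensions.setdefault(node_id, []).append((question, answer))

attach_qa("103", "Is regimen X safe with comorbidity Y?",
          "Reduce dose; monitor liver function weekly.")
attach_qa("103", "How to handle adverse event Z?",
          "Pause treatment and reassess after two weeks.")

# The length of a node's list reflects how many distinct questions
# were submitted for expert review in relation to that node.
```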
FIG. 2 illustrates aspects of a clinical decision support system. The system may be implemented on many different hardware platforms, such as a distributed computer system or a standalone workstation.
The system may comprise a clinical guideline 1. More than one clinical guideline may be present in the system. The techniques disclosed herein may be applied to all guidelines in the system, or to only one or a subset of the guidelines that are present in the system. For clarity, only one clinical guideline 1 is shown in FIG. 2. The system may select a relevant guideline automatically, or enable manual selection of a guideline to use. The guideline 1 may comprise a plurality of nodes 101, 102, 109, as described in more detail hereinabove with reference to FIG. 1. For reasons of clarity, FIG. 2 only illustrates one node 3 of the plurality of nodes of the guideline 1. A node 3 may be associated with a set of clinical preconditions and a clinical recommendation. The clinical preconditions of a node may follow in part or entirely from the position of the node in the tree. Other representations of nodes, not in a tree, are also possible. For example, a statistical or otherwise computational classification system may be used to determine the relevancy of a particular recommendation (node). Consequently, the scope is not limited to a clinical guideline that is organized in the form of a decision tree.
At least one of the nodes 3 may be associated with a pair 4 of a clinical question 5 and a corresponding clinical answer 6. The clinical answer 6 may have the form of a clinical recommendation corresponding to the clinical question 5. The pair 4 forms an extension to the clinical guideline 1, because the pair 4 is associated with a node 3 of the clinical guideline.
The system may comprise a node unit 7 arranged for determining a relevant node 3 of the plurality of nodes, based on a condition of a specific patient and the set of clinical preconditions of the relevant node. Features of this node unit 7 may be implemented in a way known in the art per se.
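One way such a node unit could operate is to test each node's preconditions against the known patient condition and keep the nodes for which all preconditions hold. This is a sketch under the simplifying assumption that preconditions are (attribute, operator, value) triples; the encoding is illustrative.

```python
import operator

OPS = {"==": operator.eq, "<=": operator.le, ">=": operator.ge}

def precondition_holds(patient, attr, op, value):
    """True if the patient data contains attr and the comparison holds."""
    return attr in patient and OPS[op](patient[attr], value)

def find_relevant_nodes(nodes, patient):
    """Return ids of the nodes whose every precondition holds for the patient."""
    relevant = []
    for node_id, preconditions in nodes.items():
        if all(precondition_holds(patient, a, op, v)
               for a, op, v in preconditions):
            relevant.append(node_id)
    return relevant

nodes = {
    "103": [("tumor_size_mm", "<=", 20)],
    "107": [("tumor_size_mm", ">=", 50)],
}
patient = {"tumor_size_mm": 15}
relevant = find_relevant_nodes(nodes, patient)
```

As the text notes, a statistical classifier could replace this rule evaluation; the exhaustive check above is only the simplest concrete instance.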
The system may further comprise a presenting unit 8 arranged for presenting at least a part of the pair 4 of the question 5 and/or the corresponding clinical answer 6 associated with the relevant node 3. For example, the presenting unit 8 may be arranged for presenting the recommendation of the relevant node 3 for a patient, and a list of question/answer pairs that are associated with the relevant node 3. Such a list may be displayed automatically, for example, or triggered by a user input. The presenting unit 8 may be arranged for displaying or otherwise presenting the clinical answer 6 in response to a user selecting a pair 4 for presentation. The presenting unit 8 may also be arranged for automatically presenting any available pairs associated with the relevant node 3.
The system may further comprise an automatic relevant pair determination unit (not shown), arranged for matching the information in the pairs with the information known about the patient. The presenting unit 8 may be arranged for automatically presenting at least part of a question/answer pair when it matches the information known about the patient according to a set of matching criteria.
The system may comprise a question unit 9 for receiving an input clinical question in respect of a patient. This question may be input by a user 14, for example. This question unit 9 may be arranged, for example, to accept a question after the relevant node 3 has been established and optionally the recommendation of the node 3 has been presented.
The system may comprise a matching unit 10 for matching the question against a collection of existing questions previously answered, in dependence on the relevant node 3. For example, the matching unit 10 may evaluate the questions associated with the relevant node 3 and/or nodes that are closely related to the relevant node 3 according to predetermined criteria. The matching unit 10 seeks a matching question 4 that is similar to the input clinical question according to one or a plurality of predetermined similarity measures. The presenting unit 8 may be arranged for, if a matching question 5 is found, presenting the clinical answer 6 corresponding to the matching question 5 from the collection of existing questions.
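The "predetermined similarity measures" are not specified in the text; one simple stand-in is Jaccard overlap between the concept sets of two normalized questions. The threshold value and the example concept sets below are illustrative assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity between two concept sets (0.0 .. 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(new_concepts, existing, threshold=0.5):
    """Return the id of the most similar existing question, or None."""
    best_id, best_score = None, threshold
    for qid, concepts in existing.items():
        score = jaccard(new_concepts, concepts)
        if score >= best_score:
            best_id, best_score = qid, score
    return best_id

existing = {
    "q1": {"tamoxifen", "adverse_event", "hot_flashes"},
    "q2": {"mastectomy", "reconstruction"},
}
match = best_match({"tamoxifen", "adverse_event"}, existing)
```

Restricting `existing` to the questions attached to the relevant node and its neighbors, as the text suggests, keeps this comparison cheap.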
It is noted that the collection of existing pairs of questions and answers may be stored in a separate data structure, such as a table or a database, wherein associative links are created between pairs and nodes. Alternatively, the collection of existing pairs may be integrated into the guideline 1 by embedding the information of each pair into a node of the clinical guideline. Other arrangements are also possible.
The system may comprise an adding unit 11 arranged for, if no matching question is found, retrieving a clinical answer corresponding to the clinical question from an expert 12. This may be performed in many different ways, for example by sending a message to an expert through a messaging system or by setting a flag in the system that causes a user interface to generate a signal indicative of an open question. Such a signal may be noted by a personnel member who may then forward the question to an appropriate expert. The adding unit 11 may be arranged for adding the question and the clinical answer retrieved from the expert to the collection of questions. The adding unit 11 may be arranged for associating the pair 4 added to the collection of questions with the relevant node 3 in view of the condition of the patient to which the clinical question relates.
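The flag-and-forward behavior of such an adding unit can be sketched as a pending-question queue, which is one of the many possible implementations the text mentions (message-based routing would work equally well). The names are illustrative.

```python
pending_questions = []   # open questions awaiting an expert answer
answered = {}            # question -> expert answer

def submit_to_expert(question):
    """Flag a question as open so it can be forwarded to an expert."""
    pending_questions.append(question)

def record_expert_answer(question, answer):
    """Store the expert's answer and clear the open-question flag."""
    answered[question] = answer
    pending_questions.remove(question)

submit_to_expert("Is regimen X safe with comorbidity Y?")
record_expert_answer("Is regimen X safe with comorbidity Y?",
                     "Reduce dose; monitor liver function weekly.")
```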
The system may comprise a feedback unit 13 arranged for, if a matching question 4 is found, determining whether the clinical answer 5 corresponding to the matching question 4 is appropriate. For example, the user may believe that the answer is not relevant, not accurate, outdated, or otherwise inappropriate to answer the specific question of the user. In case the clinical answer is found to be inappropriate for the patient to which the clinical question relates, the feedback unit may trigger the adding unit 11 to retrieve a clinical answer corresponding to the question from an expert, and add the question and the corresponding clinical answer retrieved from the expert to the collection of questions.
The system may comprise a first alert unit 15 arranged for generating an alert if no matching question is found, to indicate that a clinical answer corresponding to the clinical question is requested. Alternatively or additionally, the system may comprise a second alert unit 16 arranged for generating an alert directed to a user 14′ who inputted the input clinical question, in response to the adding unit retrieving the clinical answer from the expert.
The system may comprise a condition unit 18 arranged for extracting information relating to a clinical condition of a patient from a clinical question 20 and/or a corresponding clinical answer 21. For example, this information may be obtained from the standardized representation of the question and/or answer. The system may comprise a question-node unit 17 arranged for determining a particular node 3 of the plurality of nodes. The particular node 3 is selected such that the extracted clinical condition of the patient matches the set of clinical preconditions of the particular node 3 according to a set of predetermined matching criteria. The system may further comprise an associating unit 19 arranged for associating the clinical question 21 and the corresponding clinical answer 22 with the particular node 3. For example, a link is created or the question and answer are embedded into the clinical guideline at the particular node 3.
The system may comprise a question normalizing unit 23 arranged for normalizing a question 20 by generating a formal representation of the question based on a predetermined terminology and syntax. For example, the question normalizing unit 23 may use natural language processing to replace synonyms with a standard term and to translate natural language text into structured form.
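At its simplest, such a normalizing unit might lower-case the text and map known synonym phrases to canonical terms. The tiny synonym table below is an illustrative assumption; a real system would draw its canonical terms from a domain ontology.

```python
import re

# Illustrative synonym -> canonical-term table (a real system would use
# a standardized clinical terminology instead of this hand-made mapping).
CANONICAL = {
    "heart attack": "myocardial_infarction",
    "mi": "myocardial_infarction",
    "high blood pressure": "hypertension",
}

def normalize(question):
    """Return a sorted list of canonical terms detected in the question."""
    text = question.lower()
    found = set()
    for phrase, term in CANONICAL.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text):
            found.add(term)
    return sorted(found)

terms = normalize("History of heart attack and high blood pressure?")
```

The resulting canonical term list is exactly the kind of standardized representation that makes question-to-question comparison cheap and reliable.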
The system may comprise a health record unit 24 arranged for retrieving information relating to the condition of the specific patient from an electronic health record. This information may be used to find the most relevant node 3, for example, or to add additional information to a question.
FIG. 3 illustrates a method of handling an extension of a clinical decision support system, wherein a clinical guideline 1 of the clinical decision support system comprises a plurality of nodes, wherein a node 3 is associated with a set of clinical preconditions and a clinical recommendation, wherein at least one of the nodes is associated with a pair 4 of a clinical question 5 and a corresponding clinical answer 6, the pair 4 forming an extension of the clinical guideline 1. The method comprises determining 301 a relevant node of the plurality of nodes, based on a condition of a specific patient and the set of clinical preconditions of the relevant node. The method further comprises presenting 302 at least a part of the pair of the question and/or the corresponding answer associated with the relevant node. The method may be extended or modified based on the description of the system. The method may be implemented as a computer program.
The current computerized clinical guidelines typically are an implementation of the paper guidelines and are usually updated with a significant delay (on average 1-2 years) compared to the latest available knowledge. Known guidelines typically focus on the generic patient and are not applicable in all difficult, complex or non-standard cases.
Additionally, even in a top healthcare center, experts are few and their time is a scarce and expensive resource. It is also relevant for healthcare organizations to become able to reuse the knowledge and data in their systems to reduce costs, avoid medical errors, or improve efficiency. A method and a system that provides an efficient dissemination of the expert knowledge through the augmented clinical guidelines to all healthcare professionals in the organization would be helpful in this respect.
Expert knowledge may be captured in the clinical recommendation/advice process in the context of clinical consultations. In current practice, this is focused on individual cases and there is no link to the guidelines to identify missing information and no possibility to reuse that expert recommendation for other patients.
The content of the questions may be formalized in a way that supports automatic evaluation and that makes it easier to link questions to specific recommendation documents based on their semantic content.
However, questions may be provided in free text form. Extracting meaning from free text is a computer science problem often referred to as Natural Language Processing (NLP). The use of free text in the healthcare domain is frequent, and extracting the semantics from such text is a technology that may be used to provide more intelligent clinical decision support systems. While NLP techniques make it possible to detect concepts and, in particular cases, their relationships, comparing a large number of free text narratives (such as the questions for clinical consultations) is a large computational task when those narratives are described in natural language. To improve the quality of the guidelines or the clinical workflow, the expert recommendations may be linked to the nodes in the guidelines where they are relevant, as disclosed herein. For example, an expert recommendation concerning the handling of an adverse event for a particular treatment would become an extension of the node in the guidelines recommending that treatment. A recommendation describing how to interpret a borderline value of a particular test may be linked to both the node in the guidelines that suggests that test and to the node(s) that indicate the patient stratification based on that test.
Free text narratives, such as those represented by questions for clinical expert recommendations and corresponding clinical documents providing an expert recommendation in reply to the question with respect to diagnosis or treatment in a patient case, may be linked to relevant nodes in the clinical guidelines graph.
In a particular example, a system may comprise a domain ontology that defines a standardized terminology used in a knowledge domain. The system may further comprise a clinical expert recommendations system comprising a repository of clinical questions represented for example as an (RDF) graph, a repository of clinical documents/recommendations, and a subset of the terminology containing the concepts relevant for the guidelines and the clinical recommendations. The clinical documents/recommendations may be associated with a timestamp of each document and authorship information: electronic signature or name of the expert who provided the recommendation.
The system may also comprise a repository of relevant patient data that was used to provide the recommendations. The system may comprise an NLP pipeline arranged for processing a question entered by the user and converting it into a set of canonical, or standardized, terms (out of the domain ontology) and patterns (e.g. chosen sub-sentences, regular expressions, etc.). The system may comprise a matcher used to match clinical consultation records to the nodes in the guidelines that they could extend. The system may comprise a computerized clinical guidelines system that is arranged for being extended by including links to clinical questions and expert recommendations in response to the questions. The system may also comprise a visualization module enabling the browsing of the extended guidelines.
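The repository entries described above (a recommendation document plus its timestamp, authorship, and link to the originating question and patient data) can be sketched as simple records. The field names and example values are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RecommendationRecord:
    """One expert recommendation with the provenance fields named above."""
    question_id: str
    document: str              # free text of the expert recommendation
    author: str                # name (or e-signature id) of the expert
    timestamp: datetime        # when the recommendation was given
    patient_data_ref: str      # link to the patient data that was used

rec = RecommendationRecord(
    question_id="q1",
    document="Reduce dose; monitor liver function weekly.",
    author="Dr. Example",                     # illustrative name
    timestamp=datetime(2015, 6, 1, tzinfo=timezone.utc),
    patient_data_ref="ehr://patient/123",     # illustrative reference
)
```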
In the following, an example of a method of using the system is described. From the questions in the available recommendations database, any redundant/non-informative parts may be removed. A semantic graph may be made of a question and/or corresponding recommendation by extracting the relevant set of concepts and patterns present in the narrative, building the relations among them, and identifying the instances. Synonyms may be detected and replaced with the canonical terms. This graph containing canonical terms and defined patterns is a conceptual representation of the information need of the user. The documents may be retrieved from the EHR or from a separate repository. The system may extract the relevant information and store it in a suitable format in a repository controlled by the system.
A new question introduced by the user may be processed through the NLP pipeline as described above, and then it may be compared to the existing questions. If a suitable existing question is found, the corresponding answer/recommendation may be linked to the new question. If a matching existing question is not found, the question may be added to the corresponding repository and submitted to the expert for feedback.
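The handling of a new question described above can be sketched end to end: normalize, compare against existing questions, and either reuse the linked recommendation or escalate to the expert. The helper names and threshold are illustrative assumptions.

```python
def handle_new_question(question_concepts, existing, answers, expert_inbox,
                        threshold=0.5):
    """Reuse an existing recommendation if a similar question exists,
    otherwise add the question to the expert's inbox and return None."""
    best_id, best_score = None, 0.0
    for qid, concepts in existing.items():
        union = question_concepts | concepts
        score = len(question_concepts & concepts) / len(union) if union else 0.0
        if score > best_score:
            best_id, best_score = qid, score
    if best_id is not None and best_score >= threshold:
        return answers[best_id]          # link the existing recommendation
    expert_inbox.append(question_concepts)
    return None                          # awaiting expert feedback

existing = {"q1": {"tamoxifen", "adverse_event"}}
answers = {"q1": "Pause treatment and reassess."}
inbox = []
reused = handle_new_question({"tamoxifen", "adverse_event"}, existing,
                             answers, inbox)
```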
The relevant nodes in the clinical guidelines, to which the extension in the form of the question and/or corresponding recommendation is linked, may be computed in dependence on the available patient data and the semantic content of the patient data.
When a recommendation is provided by the expert, in answer to the question/request, the document holding the recommendation may be added to the repository of recommendations together with a time stamp, authorship information, and a link to the question that initiated the recommendation and the corresponding patient data. If the recommendations are stored directly in the EHR, the data can also be fed in a repository that is connected to the clinical guideline system.
In an example implementation, at deployment the repositories are built as follows. First, one or more databases of questions and recommendations are built based on retrospective data (all qualifying past recommendations stored in a legacy consultation system or in the EHR). Each question may be associated with a patient file. Therefore, selected patient data may be used to provide a context for the query. Based on the workflow, this data may be considered sufficient to allow the expert to provide a recommendation. Hereinafter, this patient data may be referred to as the patient record, although it may in fact be a structured subset of the complete patient record in the EHR. The linked question may be stored, for example, as free text or as a semantic graph.
Both the structured patient data and the clinical question may be annotated with concepts from the domain terminology. This metadata may be stored and used to find relevant nodes in the guidelines. The NLP pipeline may be used to extract concepts out of free text data.
In the simplest case, just by restructuring the available data, each patient record and associated question can link to exactly one expert recommendation. In a more complex implementation an additional step may compare questions and recommendations, group similar questions and corresponding recommendations together and eliminate redundant questions. The expert recommendations may be stored as free text or in another representation, including a structured data format. The authorship and timestamp metadata may also be stored.
Next, each qualifying node of the guidelines may be annotated with the same chosen domain terminology. Some guidelines systems may already make use of standard terminologies. In other cases, the nodes may be annotated with a representation using such standard terminologies.
Next, a matcher may be run on the semantic metadata extracted from expert recommendations (the concepts in the pairs of (patient record, question)) and on the nodes of the guidelines. The guidelines nodes and the recommendation records for which a matching measure is above a desired threshold are linked, i.e. an identifier of (pointer to) the recommendation record may be added to the list of extensions associated with the node.
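The matching step can be sketched as concept-overlap scoring between a record's concept set and each node's terminology annotation, linking whenever the score clears the threshold. The matching measure and threshold value here are illustrative stand-ins, not the specification's own definitions.

```python
def overlap(record_concepts, node_concepts):
    """Fraction of the record's concepts also present in the node annotation."""
    record_concepts = set(record_concepts)
    if not record_concepts:
        return 0.0
    return len(record_concepts & set(node_concepts)) / len(record_concepts)

def link_records_to_nodes(records, node_annotations, threshold=0.5):
    """Return {node_id: [record_id, ...]} for matches above the threshold."""
    links = {}
    for rec_id, rec_concepts in records.items():
        for node_id, node_concepts in node_annotations.items():
            if overlap(rec_concepts, node_concepts) >= threshold:
                links.setdefault(node_id, []).append(rec_id)
    return links

records = {"r1": {"tamoxifen", "adverse_event"}}
node_annotations = {"103": {"tamoxifen", "endocrine_therapy"},
                    "107": {"chemotherapy"}}
links = link_records_to_nodes(records, node_annotations)
```

Re-running the matcher with an adjusted threshold, as the next paragraph describes, only requires changing the `threshold` argument.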
An additional step of manual verification and/or editing can be performed, for example at the end, to verify the result of the matching and improve the accuracy and relevance of the lists of extensions. When desired, the thresholds of the matcher can be changed and the algorithm re-run.
New questions from the consultation process may first be passed through the NLP pipeline and/or annotated with concepts from the domain terminology. Alternatively, the questions are entered in a structured and/or standardized way. They may then be compared with the semantic metadata of existing questions. When a match is found, the corresponding record and document are presented to the user as a recommendation.
A validation/user feedback mechanism can also be included, in which the clinical user confirms or rejects the system suggestion (enabling evaluation and learning). If the suggestion is rejected, the new question is added to the repository and submitted to the expert to provide a recommendation.
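The confirm/reject feedback mechanism can be sketched as follows; when the user rejects a suggestion, the question is re-queued for the expert, enabling the evaluation-and-learning loop the text describes. The names are illustrative.

```python
expert_queue = []

def handle_feedback(question, suggestion, accepted):
    """Record user feedback; rejected suggestions go back to the expert."""
    if accepted:
        return suggestion          # the reused recommendation stands
    expert_queue.append(question)  # re-submit for a fresh recommendation
    return None

kept = handle_feedback("How to handle adverse event Z?",
                       "Pause treatment.", accepted=True)
rejected = handle_feedback("How to handle adverse event Z?",
                           "Pause treatment.", accepted=False)
```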
The visualization module may enable the users to browse through the extended guidelines together with the relevant context information; e.g. for each expert recommendation included, the user can check who the expert providing the advice was and what the relevant patient data was (to decide whether the recommendation is indeed relevant for the user's own case).
It will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing step of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. 
These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a storage medium, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a flash drive or a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be elucidated hereinafter with reference to the drawings.
FIG. 1 shows a diagram of a part of a clinical guideline.
FIG. 2 shows a diagram of a clinical decision support system.
FIG. 3 shows a flowchart illustrating aspects of a method of building an extension of a clinical decision support system.
It is estimated that there are over 550,000 people in the United States experiencing homelessness on a given night. In Connecticut, the estimated homeless population was more than 14,000 people, including 2,500 children back in 2012. It is likely that after the upcoming 2020 census, we will see those numbers change. Homeless men, women, and children come from every race, every level of education, and represent one of our most vulnerable populations. We will list as many resources as we can on this page to serve this community; please contact us if you know of any outreach program not listed here, thank you.
Please be aware that organizations listed here offer a variety of services to the homeless. Groups that offer such things as mental health care, medical care, addiction support and recovery services will not be cross-referenced in other categories. Outreach programs tend to offer these services only to their target population, so listing them elsewhere would cause confusion. Homeless shelters can be found on our Housing & Shelter page.
DMHAS Operated Homeless Outreach: All state operated Local Mental Health Authorities (LMHAs) have collaborative Outreach and Engagement programs. Homeless Outreach Teams consist of individuals with expertise in substance abuse, mental health, financial assistance programs, housing and vocational services. Many of these teams work in tandem with the PATH funded staff. The teams reach homeless individuals under bridges, in cars, shelters, bus stations, and in encampments. They offer a variety of support services and a safe environment to assist individuals through the transition from homelessness.
Brian’s Angels: is dedicated to helping the Bristol Connecticut homeless with love and caring. We supply items such as toiletries, non-perishable foods, clothing (such as socks, thermal sets, underwear, shoes, coats), temporary shelter, and supplies. We try to “fill a hole” for items that are not in the area agencies and for items outside their budgets.
Location: Bristol, CT
Catholic Charities’ Homeless Outreach: is the only program in the greater Danbury area that meets homeless individuals where they are and provides services on the spot. Engaging with clients at shelters, soup kitchens, on the streets, in the woods, and under bridges, the Homeless Outreach Team effectively provides homeless individuals and families with access to benefits, food, clothing, medical, mental health, substance abuse, and housing services.
Serving: Bethel • Bridgeport • Brookfield • Danbury • Darien • Easton • Fairfield • Greenwich • Monroe • New Canaan • New Fairfield • Newtown • Norwalk • Redding • Ridgefield • Shelton • Sherman • Stamford • Stratford • Trumbull • Westport • Weston • Wilton
Community Health Center, Inc.: At our core, we are a community based health organization. Though we have a national reach, we understand that health care takes place where our patients live. Our clinical teams offer Medical, Dental, and Behavioral Health.
We are dedicated to providing comprehensive and respectful care to our Lesbian, Gay, Bisexual, and Transgender community. We welcome all sexual orientations and gender identities. We know that those in the LGBT+ community face many barriers to accessing compassionate, informed care. Our providers are trained to provide exceptional care to serve the specialized needs of this community. Visit their LGBT+ page HERE
If you do not have insurance, or you’re underinsured (your insurance doesn’t cover everything you need it to), CHC offers a Sliding Fee Discount program, which provides reduced rates for those who qualify. If you qualify, visit fees will be based on a sliding scale using income and household size.
Location: Various around the state, different locations offer different services. For their complete service map click HERE
Eddy Shelter (The Connection, Inc.): The Eddy Shelter provides emergency shelter and rapid re-housing support to homeless adults while identifying and planning for long-term housing. Participants must use 211 to access emergency shelter beds.
Housing Services
Location: Middletown, CT
ImmaCare Inc. strives to eliminate homelessness in the Hartford region, while building a more vibrant community, by creating safe and affordable housing options and increasing the skills, income and hope of those who struggle with a housing crisis.
Location: Hartford, CT
Liberty Community Services: Our mission is to end homelessness in Greater New Haven. We offer services to those experiencing homelessness who are living with HIV/AIDS, mental illness and/or addiction. What we do:
Eviction Prevention, Rental Assistance, and Diversion – Outreach & Engagement – Rapid Re-Housing – Supportive Housing – Employment & Income
Outreach and Engagement (The Connection, Inc.): Helps homeless adults with individualized treatment services and assists in transitioning to appropriate and secure housing.
This application is related to U.S. patent application Ser. No. 11/296,077, filed Dec. 7, 2005, and entitled “Wireless Controller Device”, which is incorporated by reference in its entirety.
The field of the present invention is applications for operation on a wireless remote device. More particularly, the present invention relates to a wireless remote device configured to operate as a storage device in a file management system.
Wireless devices are widely used today, and their use is becoming more widespread as additional applications and devices become available. Also, the network infrastructures have improved wireless coverage, as well as communication quality and speeds. For example, a wireless mobile handset has improved to the point where the mobile handset may have a wide range of entertainment applications, as well as its essential communication capabilities. With these extended capabilities, the wireless handset has become the most widely deployed and most conveniently available remote wireless device. Many users consider their wireless handset to be an essential partner, both in business and in their personal lives. As such, these users almost always have access to their wireless handsets, and are comfortable carrying and using the wireless handset in almost any environment. The wireless handset may take the form of a traditional wireless phone, or may be included with a personal data assistant, gaming device, or music player, for example.
The widespread use of mobile handsets permits users to work remotely while still maintaining communication with a home office, co-workers, or clients. In some cases, these mobile handsets store data files, which users rely on to make decisions and to capture information. For example, a mobile phone may have a data file that has a list of available products, and includes current pricing and delivery information. The user will use this information to quote prices and delivery to clients, and may further use the handset to take orders for available stock. Several salespeople may be taking orders for the same stock, and since the file is not updated, it is possible that the same stock may be sold to multiple customers. Thus, the static file is prone to providing inaccurate pricing and delivery information. Accordingly, it has not proven satisfactory to maintain such data files on a wireless handset. Instead, companies rely on a central system, where a central server maintains a current database of inventory. Then, as each salesperson sells stock, the database is updated. Unfortunately, this requires an active communication to the server, which is not always possible. For example, wireless service may not be available in some geographic areas, and may be lost inside buildings. In these cases, the salesperson is not able to provide any information as to price, delivery, stock availability, or transact the business, as no communication may be established to the central server.
Further, the proliferation of mobile devices has exacerbated problems of securely backing up data files. More and more data is being generated and modified on mobile devices, and this information is difficult to assimilate into the overall backup processes. This problem is particularly difficult, as the nature of mobile devices subjects them to theft, loss, and destruction. In this way, data on mobile wireless devices is at substantial risk for loss, while being particularly dependent on human process for backup. For example, most mobile devices are backed up by having a user “dock” the wireless device to a desktop computer, which transfers the mobile data to the computer's storage devices. The data may then be backed up using the computer's normal backup procedures. For many users, backup is done sporadically, at best, which leaves the mobile's data at risk of permanent loss.
What is needed, then, is a device and system that integrates a wireless remote device into an effective and efficient file management system.
Briefly, the present invention provides a system and method for managing logical documents using a wireless mobile device. The wireless mobile device, which may be a wireless handset, connects to the management system through a wireless communication network such as a public telecommunications provider network. The network has other devices, such as computers, servers, data appliances, or other wireless devices. Selected logical documents from the network devices are associated with the wireless mobile device, and the selected logical documents are targeted to be stored, copied, distributed, or backed up to the wireless mobile device. In a similar manner, logical documents originating on the wireless mobile device may be targeted to be stored, copied, distributed, or backed up on selected network devices. A logical document may be, for example, an XML document, a file, a set of files, a disk drive, or the files on a device.
In one specific example, the logical document management system enables a wireless mobile device to be a logical disk drive for another network device, or for a network device to be a logical disk drive for the wireless mobile device. This enables a secure and efficient method to transfer files between network devices and a wireless mobile handset, for example. This is particularly desirable as the communication between devices uses the typical wireless communication network, so is not limited to physical proximity or physical connection between devices. In another example, the wireless mobile device cooperates with other network devices to provide a redundant backup process, with files distributed among the several devices. In yet another example, the logical document management system provides for distribution and backup of files to multiple devices on the network. The system is also able to provide each device a selectable level of access to its instance of the file, and provides for weighted and automated synchronization of the files.
Advantageously, the logical document management system enables a wireless handset device to be an integral and functioning asset in a file backup system. The system provides for flexible distribution of files among devices on the network, and automatically provides sufficient redundancy to support disaster recovery. The system may be configured to recognize when an instance of a file has been changed, and update other instances of that file according to flexible synchronization rules. In a simple example, the logical document management system may be configured to enable a wireless mobile handset to act as a logical disk drive for a computer system.
Referring now to FIG. 1A, a logical document management system 10 is illustrated. Logical document management system 10 is able to securely, conveniently, and seamlessly synchronize and backup data files between multiple storage devices, multiple networks, and multiple mobile devices. After initial setup and configuration, logical document management system 10 acts to automatically protect a user's or company's data, while enabling sophisticated and intelligent access to data, irrespective of which device or user needs the data. As illustrated, logical document management system 10 has mobile device 12, mobile device 14, and mobile device 16 communicating on a public wireless communication system 20. For example, the over the air communication network 20 could be a CDMA, WCDMA, UMTS, GSM, EDGE, PHS, or other public communication system. In other examples, the network may be a proprietary, commercial, government, or military communication network. The design and deployment of wireless communication networks and mobile devices is well known, so will not be described in detail. In another example, the over the air communication network 20 may be a local area, campus, or wide area radio network. This more limited arrangement may enable advanced synchronization and backup processes within a limited commercial, governmental, industrial, or military environment.
The logical document management system operates to seamlessly synchronize, propagate, and back up logical documents. Logical documents provide descriptions for locations of data files, and may be as simple as a single file descriptor, or as complex as an XML description document. Other examples of logical documents are directories, network resources, device drives, or even all files stored on a particular device. The use of a logical document enables a single descriptor to conveniently bring together and organize multiple data files, irrespective of the physical location of the data files. For example, a single XML document may include file links to files on a local drive, to files on network drives, and to data assets accessible using a URL descriptor.
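The notion of a logical document as a single descriptor that gathers data files from many locations can be sketched with a small XML example. The element names, attributes, and paths below are illustrative assumptions, not a format defined by this application:

```python
import xml.etree.ElementTree as ET

# Hypothetical logical-document descriptor: one XML file that gathers
# data files from a local drive, a network share, and a URL.
LOGICAL_DOC = """<logicalDocument name="sales-kit">
  <file href="file:///home/user/pricing.xls"/>
  <file href="//fileserver/share/inventory.csv"/>
  <file href="https://example.com/assets/catalog.pdf"/>
</logicalDocument>"""

def file_locations(xml_text):
    """Return the locations referenced by a logical document."""
    root = ET.fromstring(xml_text)
    return [f.attrib["href"] for f in root.findall("file")]

locations = file_locations(LOGICAL_DOC)
```

A management process would then fetch, distribute, or back up each referenced location, wherever it physically resides.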
Each mobile device 12, 14, and 16 may be, for example, a wireless mobile handset, a personal data assistant, a portable music player, or other wireless portable device. For discussion purposes, the mobile devices will be generally referred to as wireless mobile handsets. The use of wireless mobile handsets has redefined communication and electronic device proliferation. For many, the wireless handset has become the center of communication and entertainment, with this trend continuing for the foreseeable future. Since the wireless phone is central to modern life, users tend to carry their phone with them at most times, whether on business or on personal time. Also, the functionality of mobile devices has allowed mobile wireless handsets to view, use, and generate more data. For example, mobile devices now routinely work on larger text documents, image files, audio files, spreadsheet files, and other data information.
Often, a user will have multiple wireless mobile devices, as well as a business desktop computer, and a home desktop computer. In a similar manner, a business may have many users with one or more wireless devices, as well as an existing computer network. The logical document management system 10 is able to be deployed in these environments, and seamlessly synchronize data between devices, and confidently backup and protect data. With our increasingly mobile society, the reliance on mobile devices to view and generate digital information is increasing. The logical document management system 10 advantageously assures that data generated at the mobile device is properly and timely distributed to those that need the information, while also assuring that the mobile generated data is properly maintained and backed up. In another important advantage, the logical document management system 10 also enables the mobile devices 12, 14, and 16 to access required information, irrespective of the location where that information was generated. In this way, the mobile device becomes a safe, secure, and convenient data device.
The logical document management system 10 typically includes a more static computer 18 which has substantially more memory and processing horsepower than the mobile devices. Often, computer 18 is a personal computer or a local network device configured to operate office applications, and store significant amounts of data. Typically, computer 18 has a network interface 21 for communicating to wide area networks. The network interface may be, for example, the Internet, or a wideband wireless modem. Either way, the computer 18 is able to communicate to the mobile devices through the over the air communication network 20. This communication may be in the form of a TCP/IP protocol, or may use other messaging systems, such as SMS, EMS, or MMS. It will be appreciated other communication protocols and standards may be used, and others may be developed in the future. Optionally, the logical document management system 10 may also have a server 22. The server 22 may be a local resource of the network or may be at a remote facility. Also, the server may be operated by the same person or company that is operating the computer and mobile devices, or may be a contracted third-party server house.
FIG. 1A describes one aspect of logical document management system 10, while FIGS. 1B and 1C show other useful operations. In FIG. 1A, logical document management system 10 is illustrated with computer 18 primarily responsible for the generation of data information, while the mobile devices and server 22 are used for backup of the data. Further, the illustrated example uses logical documents that describe file locations. Even though the simple file descriptors are illustrated, it will be understood that a more complex logical document representation may be used. Storage assets on the mobile device may be used to provide redundant and distributed backup for important files or other logical documents. Also, files or other logical documents may be distributed to enable certain mobile devices to have local access to required data. In some cases, to provide for redundancy, the files may also be stored on the server. Computer 18 employs the concept of a storage unit 23. A storage unit may be, for example, a single file, a set of files, or data files on a particular device or resource. In another example, the storage unit may be a logical document or other logical selection of files. In one specific example, a logical set storage unit could be an XML file with external entities distributed over the Internet or other file resources that together form a complete document.
Generally, the storage sets 23 for computer 18 define the complete set of digital information that computer 18 needs to maintain. For example, files 25 would be defined as some of the storage unit files within storage units 23. During the process of initializing the logical document management system 10, particular storage units 23 were associated with a particular mobile device, multiple mobile devices, or the server 22. These associations provide instructions as to where storage units 23 are to be distributed and maintained. For example, file A has been associated with mobile device 14, mobile device 16, and server 22. Accordingly, when backup or synchronization is requested, file A will be distributed or synchronized on the associated devices only. Logical document management system 10 also has been configured during initialization to set configuration instructions as to when backups are to occur, how many past versions to keep, and how to manage and synchronize data. These associations and configuration information are stored in a file 27, which also may be distributed and stored throughout the backup network.
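The association table described above can be sketched as a plain mapping from storage units to the devices that must hold them; inverting the table yields each device's backup list. The device names and helper function below are hypothetical, though the file-to-device assignments mirror the example in FIG. 1A:

```python
# Storage-unit associations: each file is mapped to the devices that
# must hold a copy (names are illustrative stand-ins for 12/14/16/22).
associations = {
    "file_A": {"mobile_14", "mobile_16", "server_22"},
    "file_B": {"mobile_14", "server_22"},
    "file_C": {"mobile_12", "mobile_16", "server_22"},
    "file_D": {"mobile_12", "server_22"},
}

def backup_plan(associations):
    """Invert the association table: which files must each device store?"""
    plan = {}
    for unit, devices in associations.items():
        for device in devices:
            plan.setdefault(device, set()).add(unit)
    return plan

plan = backup_plan(associations)
```

At backup or synchronization time, each device would be sent only the storage units its entry in the inverted plan calls for.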
As illustrated in FIG. 1A, the logical document management system 10 is able to backup files 25 and 27 to selected wireless mobile devices through the over the air communication network 20. For example, files C and D from computer 18 are backed up on to mobile device 12 as shown in block 32. In a similar manner, files A and B are stored on mobile device 14 as shown by block 34, while files A and C are stored on mobile device 16 as shown by block 36. Server 22 has all files as shown by block 41, as well as a more complete file list and history file as shown in block 43, which may be useful for more robust tracking of changes and past versions.
By using a wireless phone as a network storage device, the network logical document management system 10 enables the wireless phone to act as a storage extension or backup device to other devices on the network. As illustrated in FIG. 1B, a mobile device such as mobile device 12 may generate a file E, which also may be a logical document description. File E may then be distributed on to other devices, such as mobile device 14, server 22, and computer 18. In this way, digital information generated on a single mobile device may be timely and seamlessly backed up, as well as synchronized to other devices for ease of access to the information. As illustrated in FIG. 1B, the storage sets 23 define the complete set of digital information that all the network devices need to maintain. The storage set information is typically collected and stored in one device, for example computer 18, although a more distributed approach may be used.
As described thus far, files or other logical documents may be backed up to any other device in the file management system 10 or 50, irrespective of which device generated the data. More particularly, any particular file or storage unit may be associated with one or more devices, and those devices will be used to back up the defined file or storage unit. Once the file has been stored on that device, that device may then be able to use the data file. For example, the device may read and display the information, and depending upon access control, may be able to change or otherwise amend the data file.
In some cases, additional backup security may be obtained by using logical document management system 70 illustrated in FIG. 1C. In FIG. 1C, the mobile devices 12, 14, and 16, as well as computer 18, each operate a security or encryption process that enables each device to securely transmit data files, but yet are able to locally use the data files according to their access control list. Accordingly, even though the data files are communicated in an encrypted form, they become available and decrypted for use in the local device. However, server 22 does not have access to the security process, and therefore stored files 41 are stored in an encrypted manner so that the files may not be accessed or changed without access to the security process. For example, the operators of server 22 may not have access to the security encryption keys necessary to decrypt the data files. By storing files in an encrypted form, data may more confidently be stored on servers under the control of a third-party, since the third-party is not able to access or use the data.
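The property that the third-party server only ever holds ciphertext can be illustrated with a deliberately simple stream cipher built from the standard library. This is a toy for illustration only; a real deployment would use a vetted cipher such as AES-GCM, and the key shown here stands in for a key held only on the devices:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream derived from SHA-256 (illustration only, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice restores the data."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# The third-party server stores only the ciphertext; without the
# device-held key it can neither read nor usefully modify the file.
key = b"device-local secret"
stored_on_server = encrypt(key, b"quarterly sales figures")
```

Only a device that holds the key can recover the plaintext, which is the arrangement FIG. 1C describes for server 22.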
The descriptions of the logical document management systems 10, 50, and 70 have focused on the power of enabling wireless mobile devices to be used for backup purposes, and for the ease of distribution of data in a file network that has wireless mobile devices. The ability to confidently backup and maintain files using wireless devices is a powerful feature, but the logical document management system 10 may readily be adapted to enable more advanced logical document management controls. The logical document management system also has powerful synchronization features which allow intelligent and adaptive proliferation of data throughout the logical document system. For example, multiple wireless mobile devices may be distributed a copy of a particular data file, and each device may have access rights that enable that device to edit its instance of the particular data file. Since the copies of the file are changeable on several devices, it is likely that the content in one file will become out of sync with the content of the data in other files. Accordingly, the network management system provides for weighted merging of changes, with the merge rules defined during configuration. Further, the network management system provides sophisticated notification processes for notifying users or devices that files have been updated to reflect others' changes, or that changes made in the device have been preempted by another higher priority change. By providing for such timely and controlled file synchronization, a user may confidently use information knowing it is current and accurate.
The logical document management system enables a set of devices to cooperate to distribute, synchronize, and backup logical documents. For example, a set of computers, servers, wireless handsets, and notebook devices are used to operate a data network that allows any authorized device to access needed data, irrespective of where or when it was generated. Further, the logical document management system automatically provides for distribution and synchronization of files, and assures that files are sufficiently redundant to support disaster recovery. In particular, the logical document management system provides for:
a) the systematic discovery of devices so that active devices may be automatically connected to the logical document system;
b) the secure transmission of data between devices;
c) the distribution of data files only to selected and authorized devices;
d) the synchronization of data files so that changes made in the file of one device are promptly updated in other instances of the file on other devices; and
e) the redundancy of files among devices to provide backup of files.
When a backup copy of a file or other logical document is made from one device to another, the storage of the backup file may be adjusted according to the type of file protection desired. For example, the backup file may be made on the second device in an “opaque” way. This means that the primary device encrypts the file and stores the file on the second device, but the second device does not have the ability to decrypt the data file. In this way, the backup file is only usable as a backup file to the primary device, and cannot be used by any other device. This may be accomplished, for example, by encrypting the data file to the primary device's public key, and storing the encrypted file on the second device. When the data file is retrieved by the primary device, it is able to decrypt the file using its private key. Since the private key is known only to the primary device, the encrypted data file is of no use to any other device. The second device may be a wireless handset, a computer, a server, or a server farm operated by a third party, for example.
In another example, the data file may be made on a second device in a “translucent” way. This means that the primary device encrypts the file and stores the file on the second device, and the second device has the ability to decrypt the data file. In this way, the backup file is usable as a backup file for the primary device, and also may be used by the second device. Additional rights may be specified as to the level of rights the second device has to the file. For example, the second device may have only the ability to read the file, or may be given edit capability as well. A translucent data file may be accomplished, for example, by encrypting the data file to the primary device's private key, and storing the encrypted file on the second device. When the data file is used by the second device, the second device can decrypt the file using the primary device's public key. The secondary device may be a wireless handset, a computer, a server, or a server farm operated by a third party, for example.
In another example, the data file may be made on a second device in a “transparent” way. This means that the primary device does not encrypt the file and stores the file on the second device. In this way, the backup file is usable as a backup file for the primary device, and also may be used by the second device. Additional rights may be specified as to the level of rights the second device has to the file. For example, the second device may have only the ability to read the file, or may be given edit capability as well. Since a transparent file has no encryption security, it is the least secure type of storage, but also uses the least processing power. The secondary device may be a wireless handset, a computer, a server, or a server farm operated by a third party, for example.
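The three storage modes described above can be summarized as a small policy sketch. The enum and the capability check are illustrative assumptions, not an API defined by this application:

```python
from enum import Enum

class Mode(Enum):
    OPAQUE = "opaque"            # encrypted to the primary's public key
    TRANSLUCENT = "translucent"  # encrypted, but decryptable by the second device
    TRANSPARENT = "transparent"  # stored with no encryption at all

def second_device_can_read(mode: Mode, has_primary_public_key: bool) -> bool:
    """Illustrative policy: can the second device use its backup copy?"""
    if mode is Mode.OPAQUE:
        return False  # only the primary's private key can decrypt
    if mode is Mode.TRANSLUCENT:
        # decryptable with the primary's (shareable) public key
        return has_primary_public_key
    return True  # transparent: plaintext, readable by anyone holding it
```

Further checks (read-only versus edit rights) would layer on top of this, as the access-list discussion above suggests.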
Referring now to FIG. 2, a system 100 for logical document management is illustrated. System 100 operates on a network system, such as logical document management system 50 discussed with reference to FIG. 1B. Method 100 has three general processes. First, the method 100 has a setup phase 101 which initializes and configures the overall network. Second, the system 100 has a normal operation phase 102, which allows files or other logical documents to be automatically and timely synchronized, as well as to provide for secure backups. Finally, method 100 has a disaster recovery phase 103, which is used in response to a catastrophic or fatal failure on one or more devices. As part of setup process 101, the particular devices in the file management system are selected as shown in block 105. These devices may be, for example, computers 107, personal data assistants 109, wireless handsets 111, notebook computers 113, or other network devices 115. Also, the particular desired storage units are selected as shown in block 118. These storage units may be, for example, files, multiple file sets, directories, devices, network resources, or logically defined file arrangements such as an XML file definition. These storage units may be on the devices selected in 105, or may include other storage units not represented in the devices. The storage units are then associated with particular devices as shown in block 120. In this way, the logical document management system is made aware of which files and storage units are to be stored on which device or sets of devices. Each of these devices may have different access rights to the associated file as shown in block 122, which may be set in an access list. For example, a file or logical document may be stored on a computer with full read, write, and delete rights, while that file may be stored on a first mobile device with read and write access, and on another mobile device with only read access rights. In this way, access rights may be defined according to storage unit, device, or network requirements.
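The setup-phase selections (devices, storage units, associations, and the per-device access list) might be captured in a configuration record along these lines. This is a sketch; the record layout and names are assumed, not specified by the application:

```python
# Hypothetical setup record: devices, storage units, their associations,
# and per-device access rights, mirroring the computer/mobile example above.
config = {
    "devices": ["computer_18", "mobile_12", "mobile_14"],
    "storage_units": ["file_A"],
    "associations": {"file_A": ["computer_18", "mobile_12", "mobile_14"]},
    "access_rights": {
        ("file_A", "computer_18"): {"read", "write", "delete"},
        ("file_A", "mobile_12"): {"read", "write"},
        ("file_A", "mobile_14"): {"read"},
    },
}

def may(device: str, unit: str, action: str) -> bool:
    """Check a device's rights to its instance of a storage unit."""
    return action in config["access_rights"].get((unit, device), set())
```

Each device would consult such an access list before allowing its local instance of the file to be read, changed, or deleted.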
Once the associations and configuration have been completed, the network logical document system may be initially operated to create a baseline distribution of files. This baseline is used to create support files, configuration files, and association files, as well as to initially distribute the storage units to their appropriate associated devices. With such a baseline set, incremental operations become more efficient during normal operation.
With setup complete and a baseline set, the process 100 moves to normal operation 102. In normal operation, selected devices may move into and out of the network system. For example, some devices may be powered on or powered off at various times, and some devices, such as mobile phones, may move in and out of a wireless service area. Accordingly, as devices are powered on or moved into the network area, a device must be discovered and authenticated as shown in block 127. Generally, the process of discovery enables a mobile device to be recognized as an intended member of the network. Once the mobile device has been discovered, additional processes are used to authenticate the device, as well as to establish secure and efficient communication. A more complete discussion of the discovery and authentication processes is provided with reference to FIGS. 6-8, and in copending U.S. patent application Ser. No. 11/296,077, filed Dec. 7, 2005, and entitled “Wireless Controller Device”, which is incorporated by reference in its entirety.
During normal operation, files or other logical documents may need to be synchronized as shown in block 129. Although the logical document management process 100 may be used simply as a backup mechanism, additional desirable features may be enabled for synchronizing files. Synchronization generally refers to the process of propagating changes in one instance of a file to other instances of the file throughout the network. Since it is possible that multiple instances of the file or other logical documents may be changed between synchronization times, synchronization may be accomplished according to a set of automated rules 133. These automated rules may set, for example, the relative weight to apply to a changed file. In a specific example, assume that a financial file has been distributed to a large number of mobile devices, and the network is set to synchronize the financial file every five minutes. In one of these five-minute periods between synchronizations, the file is changed both by a mobile device and by an order entry computer system. At the next synchronization time, the network process will recognize that the financial file has been changed by two different devices. Accordingly, the network will refer to its automated rules, which may define that the order entry server is given preference over any change by a mobile device. In this way, the change made by the order entry server would be distributed to all instances of the financial file.
A change notification rule 131 may be used to provide notification that a change was either accepted or not accepted. In the specific example above, the mobile handset whose file change was rejected may be sent a notification that its previous entry has been ignored. It will be appreciated that a large number of automated rules 133 and change notification rules 131 may be used consistent with the normal operation 102. It will also be appreciated that synchronization does not have to be done on all files, but may be done on a subset of files within the network. It will also be understood that default synchronization time periods may be used for all selected files, or that synchronization periods may be defined by file or file type. In this way, critical files may be synchronized relatively often, while less important files are synchronized less frequently. As illustrated in method 100, the synchronization rules generally provide for a real-time propagation of changes to files.
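The weight-and-notification behavior described above can be sketched as follows. This is a simplified illustration, not the patent's implementation; the function and device names are assumptions:

```python
# Sketch: resolve concurrent changes by device weight and produce
# notifications for the devices whose changes were not accepted.
def synchronize(changes, weights):
    """changes: {device: new_content}; weights: {device: numeric weight}.
    Returns (winning_content, notifications)."""
    winner = max(changes, key=lambda d: weights[d])
    notices = [f"{d}: change not accepted" for d in changes if d != winner]
    return changes[winner], notices

# The financial-file example: the order entry server outweighs the mobile.
content, notices = synchronize(
    {"order-entry-server": "qty=120", "mobile-1": "qty=95"},
    {"order-entry-server": 10, "mobile-1": 1},
)
```

Here `content` carries the order entry server's change, and `notices` holds the rejection message destined for the mobile handset.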
The method 100 also allows for a more batch-oriented propagation of files or other logical documents in the form of backup processes. Generally, a backup may take the form of an incremental backup 135 or a full backup 138. An incremental backup typically analyzes a file for changes made since the last incremental or full backup, and stores only the changes. In this way, an incremental backup provides a complete record of all changes made to all files, but does so with lowered file and transmission requirements. However, an incremental backup is somewhat less secure than a full backup, so it is typically supplemented with full backups. A full backup 138 completely backs up each file defined in the storage units, and then acts as a new baseline for future incremental backups. Since a full backup requires significant transfers of data and bandwidth, the backup may be timed such that it is done during off hours, and devices may be staggered during the backup period.
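The incremental-versus-full distinction can be sketched with content hashes. This is an illustrative simplification (in-memory files, assumed interfaces), not the patent's mechanism:

```python
# Sketch: a full backup records every file and becomes the baseline;
# an incremental backup stores only files changed since that baseline.
import hashlib

def full_backup(files):
    """Back up every file; returns the new baseline {name: (digest, data)}."""
    return {n: (hashlib.sha256(d).hexdigest(), d) for n, d in files.items()}

def incremental_backup(files, baseline):
    """Store only files whose content differs from the baseline."""
    changed = {}
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if baseline.get(name, (None,))[0] != digest:
            changed[name] = (digest, data)
    return changed

base = full_backup({"a.txt": b"v1", "b.txt": b"v1"})
delta = incremental_backup({"a.txt": b"v2", "b.txt": b"v1"}, base)
```

Only `a.txt` lands in the incremental set, which is why incremental backups consume far fewer file and transmission resources.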
During normal operation, a user may also desire to recover a specific older version of a particular file as shown in block 140. For example, a file or other logical document may have been changed by someone, and a particular user would like to go back to a version prior to when the change was made. Accordingly, the file management system may be constructed to hold past versions for all or selected files. The level of version retention is set during setup and configuration. By allowing devices to recover specific versions, a user is relieved from the manual process of retaining a record of older files.
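The bounded version retention described above can be sketched with a fixed-depth history. The class and retention depth are illustrative assumptions, not from the patent:

```python
# Sketch: keep up to `depth` past versions per file so a user can roll
# back without manually retaining old copies.
from collections import deque

class VersionStore:
    def __init__(self, depth):
        self._hist = {}
        self._depth = depth

    def save(self, name, data):
        # deque(maxlen=...) silently discards the oldest retained version.
        self._hist.setdefault(name, deque(maxlen=self._depth)).append(data)

    def recover(self, name, versions_back=0):
        """versions_back=0 is the newest retained version."""
        return self._hist[name][-1 - versions_back]

store = VersionStore(depth=3)
for v in (b"v1", b"v2", b"v3", b"v4"):
    store.save("report.doc", v)
```

With a depth of three, saving four versions leaves `v2`, `v3`, and `v4` recoverable; the level of retention corresponds to the configuration choice described in the text.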
In the unfortunate occurrence of a device or network disaster, the network process 100 is able to easily perform disaster recovery 103. Disaster recovery generally refers to the ability of a network to rebuild or reclaim data information with no or minimal data loss. Accordingly, the system is able to do a full integrity check as shown in block 142, and is able to restore a full file set or storage unit set as shown in block 144. In performing the disaster recovery, the network intelligently decides whether to take data files from mobile devices, from the computer 148, or from the repository 149. By comparing files between devices, integrity is assured, and with redundant and distributed backup, the full file set may be reclaimed or reconstructed.
Referring now to FIG. 3, a method 175 for selecting storage units is illustrated. Method 175 is shown selecting particular files to protect and synchronize as shown in block 177. The network system may have default settings for the identification of files, file sets, logical devices, resources, and devices. These defaults may provide for basic synchronization, backup, and security, without user intervention or decision. However, other users may desire more sophisticated synchronization and backup arrangements, and therefore provide for additional or alternative protection and synchronization rules. In defining which files to protect, a user may use a local system, such as a computer system, as shown in block 179. Also, the user may make selections using an authenticated mobile device, as shown in block 181.
Once the mobile device has been discovered and authenticated, it may be given access into the file structure of other devices in the file management system. In this way, the mobile device may make selections of those accessible storage units, and associate those storage units with particular devices. Storage units may be selected to include local files, local directories, the entire local disk, network drives, network directories, network files, devices, or other types of logical associations. The method also includes identifying whether or not to track versions as shown in block 183, and, if versions are tracked, how many levels to maintain, as shown in block 185. For example, some files may have versions maintained for a few changes, while some files may have changes tracked for every change ever made. In this way, the reconstruction capability for an individual file may be set on a storage-unit-by-storage-unit basis.
Referring now to FIG. 4, a configuration method 200 is illustrated. In configuration method 200, the specific protection for storage units is defined as shown in block 202. Generally, the configuration includes defining the synchronization rules and priority as shown in block 210, setting incremental and full backup options as shown in block 204, and making specific associations between storage units and available devices as shown in block 206. The configuration of synchronization rules may include how often to perform a merge as shown in block 210. For example, some files or other logical documents may not need synchronization due to their static nature, while other files may require synchronization routinely or very often. The process 200 allows for synchronization to be set on a file-by-file or storage-unit-by-storage-unit basis, thereby allowing network resources to be conserved, while having the flexibility to support application-specific requirements. Since multiple files may have been changed between merge periods, the synchronization rules also include the ability to define a weight for each storage unit or device as shown in block 212. This weight will be used to determine which of conflicting changes will be incorporated, and may define how the unincorporated information will be handled. For example, the unincorporated material may simply be discarded, or may be included in the file as a comment or footnote. In a similar manner, the synchronization rules may include merge notice rules 214. These merge notice rules define when devices or users are notified that a file has been changed. In some cases, if a merged file has discarded changes, the user may also be notified that a previous change has not been accepted into the system.
The synchronization rules enable a defined subset of the storage units to be synchronized in a nearly real-time manner. For a more complete backup of all file systems, an incremental backup may be performed as shown in block 204. An incremental backup typically is a backup of all files, but captures only the changes made since the last incremental or full backup. In this way, an incremental backup has far less data that needs to be stored or transferred, thereby conserving network resources. Although an incremental backup is more efficient, a full backup provides additional robustness to the backup system. Accordingly, a full backup may be done, also as shown in block 204. The frequency of incremental backups may be set, as well as the frequency of full backups. In one example, an incremental backup may be done on a daily basis, while a full backup may be done each weekend. Preferably, full backups are done at off-peak periods, and devices are backed up in a staggered manner to reduce network traffic.
In configuring the system, the selected files or storage units are associated with one or more devices. These selected devices are where instances of the file or storage units are stored. Depending on the access rights for the device, this file may be merely present as a backup file, or may be usable by the associated device. Again, depending upon access rights, the local device may be able to read, write, or delete the file. A storage unit may be selected to be stored on a single mobile device, on multiple mobile devices, in a repository server, on a network resource, on a third-party server device, or on a third-party encrypted device, as shown in block 206. As shown in block 208, each device may be set to track a set or maximum number of versions. In this way, a file may be configured to have all previous versions tracked, but for a particular device, the number of versions is reduced due to limited storage or bandwidth considerations. In this way, the storage, distribution, and synchronization requirements may be finely adjusted to application needs.
Referring now to FIG. 5, a method 225 for synchronization is illustrated. Method 225 has synchronization rules 226 that have been defined during configuration of the logical document management system. These synchronization rules may include rules related to how often synchronization is to be performed, the weight to apply to changes made at a particular device, the actions to take when files are merged, any notices to be sent to devices or users, and information regarding storage or file-set arrangements. Based on rules 226, the logical document management system will synchronize files from time to time as shown in block 227. Synchronization may be performed periodically or at other predetermined times, or may be done according to dynamic application requirements. During synchronization, the system identifies files that have been changed as shown in block 228. For a changed file, the system will determine whether any other device has changed another instance or copy of that file as shown in block 229. If the identified file has been changed on only one device as shown in block 230, then the change can be updated for all instances of the file as shown in block 231. If configured to do so, prior versions of the file may be maintained to support rolling back to a prior version.
In some cases, two devices may have made changes to their respective instances of the data file as shown in block 233. Often, the devices may be assigned a weight, and the relative weights of the devices may be compared as shown in block 234. For example, a computer operated at the corporate offices may be given a higher priority than a mobile device operated by a salesperson. In this way, changes made by the corporate office will take priority over any changes made by a salesperson. In such a case, the changes made by the corporate computer may be used to update all instances of the file throughout the management system as shown in block 236, and the changes made by the salesperson may be discarded, or inserted as a comment in the updated document. The detailed actions taken during an update process may be set during configuration. It will be appreciated that many alternative actions may be used consistent with this disclosure. In some cases the devices making changes to their respective files may have an equal weight as shown in block 235. In such a case, the system management must provide for conflict resolution. Typically, the system will request a user or administrator input to manually resolve a conflict 246, although automated processes may be provided as well. For example, an automated process may include both changes in a document as a footnote or comment. The system may also provide notifications as shown in block 237, which may be used to inform users and devices that changes have been made or ignored. As before, additional prior version information may be stored to accommodate rollback to an earlier version as shown in block 238.
In a more unusual circumstance, more than two instances of a file may be changed by three or more devices as shown in block 239. Typically, such multiple changes are undesirable, and would suggest that the synchronization rate be increased to avoid such situations. In this regard, the management system may adaptively increase the synchronization rate for that file. In a similar manner, if a file seldom has a change, its synchronization rate may be reduced. As with the case with two changes, the files may be updated according to the highest weight of the changing device as shown in block 240. However, sometimes there may be no clear update instruction as shown in block 241. This typically will occur when two or more of the devices are operating with the same weight, so a conflict resolution must be made. Conflict resolution often may require user instructions as shown in block 232. In other cases, an automated resolution process may be used. In one example, the three or more changes are updated or merged according to a pair-wise update as shown in block 243. In this way, the files having the lowest weights are first compared, and the result of that update or merge is then compared to files with a higher weight. It will be appreciated that other types of merge or update comparisons may be made consistent with this disclosure. As with other conditions, the system may save version information to facilitate rollback to prior versions as shown in block 244.
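The pair-wise update mentioned above can be sketched as a fold from the lowest-weight change upward. The merge policy shown is a stand-in assumption; the patent leaves the comparison details open:

```python
# Sketch: merge three or more conflicting changes pair-wise, lowest
# weight first, so higher-weight changes are applied last.
def pairwise_update(changes, weights, merge):
    """changes: {device: content}; merge(lower, higher) -> merged content."""
    ordered = sorted(changes, key=lambda d: weights[d])
    result = changes[ordered[0]]
    for device in ordered[1:]:
        result = merge(result, changes[device])
    return result

# Toy merge policy: the higher-weight content wins, and the superseded
# content is preserved inline as a comment (one option the text suggests).
merged = pairwise_update(
    {"mobile-1": "A", "mobile-2": "B", "server": "C"},
    {"mobile-1": 1, "mobile-2": 2, "server": 9},
    merge=lambda low, high: f"{high} /* superseded: {low} */",
)
```

The highest-weight device's content leads the result, with the lower-weight changes retained as nested comments rather than silently discarded.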
Referring now to FIG. 6, a logical document management system 250 is illustrated. For illustrative purposes, the logical document management system will be discussed with reference to a network device, such as a computer, that establishes communication with a wireless mobile device. Logical document management system 250 has preamble activities 251, which are performed prior to normal operation; initialization steps 252, which are performed to discover and authenticate the network device and mobile devices; and normal operation processes 253, which are used to maintain, synchronize, and back up files. Preamble activities 251 are used to register the network device and mobile devices with a trusted server so that future discovery and authentication processes may be done in a secure and trusted environment. As shown in block 254, a public-key/private-key pair is established for the network device. A public-key/private-key pair is useful in establishing asymmetrical secured communication. A handle is also defined for each network device, which enables simplified identification of the network device. For example, the handle for a computer may be the name of the computer on its network, or may be the name of its primary user. In another example, a handle may be the e-mail address for the primary user of a computer, or may be another easy-to-remember name for the computer. In this way, the trusted server has handle and key information for each available network device. Each mobile also registers with the trusted server as shown in block 255. Each mobile also has a public-key/private-key pair, and registers its public key with the trusted server. Mobile devices typically are identified by their mobile identification number (MIN), which is often referred to as their phone number. For data-enabled devices, the mobile device may be identified by its TCP/IP address. In this way, the public-key and address information for each mobile device is also preregistered with the trusted server.
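The registration step above amounts to a handle-to-public-key directory at the trusted server. A minimal sketch, with faked key strings standing in for real asymmetric key material and hypothetical handles:

```python
# Sketch: the trusted key server maps each device handle (computer name,
# e-mail address, or mobile MIN) to its registered public key.
class TrustedServer:
    def __init__(self):
        self._keys = {}

    def register(self, handle, public_key):
        self._keys[handle] = public_key

    def lookup(self, handle):
        # Raises KeyError for unregistered handles.
        return self._keys[handle]

server = TrustedServer()
server.register("alice@example.com", "pk-network-device")  # network device handle
server.register("+15551230000", "pk-mobile")               # mobile MIN as handle
```

During later discovery, either side presents only a handle, and the counterpart fetches the matching public key from the server rather than trusting a key sent over the air.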
During the initialization process 252, the preregistered network devices and mobiles are associated for a particular file management session. As shown in block 256, this association may be predefined, or may be dynamically set during initialization. In one example, a network has a particular set of mobile devices which hold selected data files, and upon initialization, the network attempts to establish a trusted communication with each of the authorized mobiles. In another example, the set of wireless mobile devices may not be preauthorized, but may be discovered upon initialization. In this way, mobile devices may be placed in a state to be discovered, and the network may be placed in a state to receive requests from mobile devices. In a typical example, the network is made operational and operates the file management system. A mobile device makes a request to join the network. The network is in a state where it is able to receive the mobile's request, and then proceeds to further authenticate the mobile device. For example, the process may move to the authentication step as shown in block 257. The network and the mobile use asymmetric cryptography to authenticate each other. In the process, a time-limited session key is also communicated between the network and the mobile to allow for more efficient communication. After authentication, data in the session is encrypted using the session key. It will be appreciated that the asymmetrical private-key/public-key messaging protocols consume valuable mobile processing power, and therefore a more efficient symmetrical security system may be desirable. In this way, after secure and trusted communication is established, the network and mobile communicate securely via symmetric encryption using the session key.
As shown in block 258, the network selects the storage units or files that are to be maintained by the file management system. These storage units may be individual files, sets of files, directories, disks, all files on a device, or some logical file arrangement. The list of storage units may be continually updated as new files are generated, new devices are added, files are deleted, or devices are removed from the file management system. It will also be appreciated that the storage units may include files or data not on network devices. For example, the storage unit may be a URL that links to a data set on a remote Internet server, or may be a logical document description.
With the network devices defined and the storage units selected, the process moves to associating the devices with the storage units, as shown in block 259. In this way, particular storage units or files are assigned to be maintained on a particular one or set of network devices. Also, the system allows a configuration to be set to control the synchronization and backup processes. For example, the configuration may set how often synchronization, incremental backups, and full backups are to be performed. The configuration may also set how many past versions of a file to maintain, as well as set access control for files. The initialization process is then completed, and a baseline set of files is propagated to the appropriate devices.
The process is now ready for normal operation 253. In normal operation, the process may allow for file-set or other logical document synchronization 260. This enables a realtime updating of files, so that consistent and accurate data is available to the selected devices. For purposes of this discussion, the term “realtime” is not used in its strict engineering sense, but to indicate that files are automatically updated from time to time at a rate sufficient to support application needs. In some cases, this may require resynchronization periods measured in minutes, while for other files, the resynchronization period may be much longer. During configuration, a set of synchronization rules was defined that sets synchronization timing, merge priority and weights, and the actions to be taken when conflicts exist. During operation, these rules are automatically and systematically applied, and may be set to adapt to current application requirements.
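The per-file resynchronization periods and the adaptation to current requirements can be sketched as follows. The halving/backoff policy and all bounds are illustrative assumptions, not values from the patent:

```python
# Sketch: per-file sync periods that tighten when a file keeps producing
# conflicts and relax when it stays quiet.
class SyncSchedule:
    def __init__(self, default_period=300.0):   # seconds (five minutes)
        self._period = {}
        self._default = default_period

    def period(self, name):
        return self._period.get(name, self._default)

    def record_sync(self, name, conflicts):
        """Halve the period on conflict; back off by 1.5x when quiet."""
        p = self.period(name)
        p = max(60.0, p / 2) if conflicts else min(3600.0, p * 1.5)
        self._period[name] = p

sched = SyncSchedule()
sched.record_sync("finance.dat", conflicts=2)   # contested: sync sooner
sched.record_sync("readme.txt", conflicts=0)    # quiet: sync later
```

Critical, frequently contested files converge on short periods, while static files drift toward the hourly ceiling, conserving network resources as the text describes.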
The system also automatically and systematically performs backup functions 261. Backup may be done incrementally, which stores changes from a previous baseline backup. Incremental backups may be performed relatively often, as they consume relatively little network, file, and communication resources. However, incremental backups may become unwieldy as the difference between the baseline backup and the current file set becomes substantial. Therefore, a full backup of all files may be done, which also provides a better level of file protection and a simplified restore process. During configuration, the types and timing of the backup process were defined.
During normal use, a user may desire to restore a file or set of past files that were inadvertently deleted. In another example, a user may desire to go back to an earlier version of a document, or to track who has made changes in a document. Provided a file has been configured to keep past versions, a user may restore past versions of a file. The management system may keep some level of past versions on one device or devices, and a more complete history on another device, such as a repository server.
Referring now to FIG. 7, a system for performing preamble activities for a logical document management system is illustrated. Method 275 has a network device operating on the file management system. In one example, the network device may be a desktop computer system or a computer server. The network device has communication capability such that it may establish communication with a trusted server, such as a key server. The network device generates a private-key and public-key pair as shown in block 277. The network device also has a handle, which may be a name, e-mail address, or other easy identification value or indicator. The network device registers its public key, handle, or name with the trusted key server as shown in block 283. In a similar manner, a mobile wireless device generates a private- and public-key pair and registers its public key and handle with the trusted server, as shown in blocks 279 and 285. For a mobile, the handle typically will be its mobile identification number, although in other cases it may be its TCP/IP address. Also, the mobile may register its preferred discovery method with the trusted server. For example, some mobile devices may more efficiently respond to an SMS, MMS, or EMS message, while other mobile devices may respond more efficiently to TCP/IP communications.
More specifically, a mobile device may be configured to operate a small process which acts to determine when a network device desires to establish a trusted communication. This small process may monitor for an SMS/MMS/EMS message, and more particularly may monitor for an SMS/MMS/EMS message with a particular code, value, or message. In this way, a network device, either alone or in cooperation with a trusted server, may send a predefined SMS/MMS/EMS message to a mobile, and the mobile may thereby be aware that a network device is trying to establish communication. In another example, a mobile may have TCP/IP-enabled communication, and may therefore identify a particular port for receiving requests from network devices. When a request is received on this specific port address, the mobile device becomes aware that the network device desires to establish trusted communication. It will be appreciated that some mobile devices have both SMS/MMS/EMS and TCP/IP communication capability, and the decision on which to enable may be made on application-specific requirements. In another example, the mobile may register both types of discovery methods, and the target may attempt both methods in establishing communication.
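The TCP/IP discovery path above can be sketched with standard sockets: the mobile's small process listens on a dedicated port, and any request arriving there signals that a network device wants to establish trusted communication. The port, message format, and handle are illustrative assumptions:

```python
# Sketch: mobile-side listener and network-device-side contact, both on
# loopback for demonstration. A real mobile would listen on its own address.
import socket
import threading

def mobile_wait_for_contact(ready, result):
    """Listen on a discovery port; record the first request received."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))          # OS assigns a free port
        srv.listen(1)
        ready["port"] = srv.getsockname()[1]
        ready["event"].set()                # announce the port is open
        conn, _ = srv.accept()
        with conn:
            result["request"] = conn.recv(1024)

ready = {"event": threading.Event()}
result = {}
t = threading.Thread(target=mobile_wait_for_contact, args=(ready, result))
t.start()
ready["event"].wait()

# Network device side: contact the mobile's registered discovery port.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", ready["port"]))
    cli.sendall(b"DISCOVER:handle=alice@example.com")
t.join()
```

On receiving the request, the mobile would start its local client process and proceed to the authentication exchange rather than trusting the message itself.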
During initialization, the network device may also define particular access limits for a specific mobile, a set of mobiles, or all mobile devices. For example, if the network device enables a mobile device to access its file structure, mobile devices may be restricted to particular files, or particular folder structures within the file system. In another example, the access rights may be established for each mobile individually, or may be established for sets of mobile devices. Also, it will be appreciated that the access rights are only the predefined access rights, and may be changed as specific communications are established between mobile and target devices.
Referring now to FIG. 8, a method 300 of discovering and authenticating is illustrated. Method 300 has a mobile device that is prepared to be discovered as shown in block 302. In this way, the mobile device may have registered its mobile identification number and public key as shown in block 306. After registration, the mobile device monitors its SMS messages, or its TCP/IP ports, for contact by an appropriate network device. If such a request is made, then the mobile starts a local client process and continues to establish trusted communication. In another example, as shown in block 310, the mobile device may generate a request to connect to a specific network device. For example, a user may walk into a room and desire to have his or her mobile phone become a disk device for a computer system. The mobile user may be invited to send a message to the computer, and thereby begin the establishment of trusted communication.
The network device is also prepared for discovery as shown in block 304. In one example, the network device has a set of predefined mobile devices that are authorized to control it. In this way, the network device may simply recall the mobile addresses as shown in block 312. In other cases, the network device may receive requests for communication, and thereby need to request a specific mobile handle (MIN or address) as shown in block 314. Finally, the network device may have made itself available to receive requests, and thereby wait for requests from mobile devices as shown in block 316. Irrespective of which process is used to obtain the mobile address information, the network device cooperates with the key server to obtain the mobile public key 318. The mobile public key, which has been prestored by the mobile device, is associated with the address for the mobile device. In this way, the network device is able to retrieve the public key for the mobile device. The network device then encrypts the target IP address and the network device handle, first using the network device private key, and then the resulting message is encrypted to the mobile public key 320. This twice-encrypted message is then transmitted wirelessly to the mobile device. The mobile device, using its private key, decrypts the message 321. Upon decrypting with its private key, the mobile obtains the handle for the network device 322. The mobile is then able to communicate with the trusted server to obtain the public key for the network device 323. Using the public key of the network device, the mobile further decrypts the message and obtains the network device address as shown in block 324.
Upon confirming messages and addresses, the mobile confidently trusts the origination of the network device message. Accordingly, the mobile generates a session key as shown in block 325. The session key is intended for symmetrical communication encryption, which is more efficient than asymmetrical encryption. The session key is encrypted by the mobile using its private key, and then encrypted to the network device's public key as shown in block 327. The twice-encrypted session key is then wirelessly communicated to the network device, also as shown in block 327. The network device then decrypts the message using its private key and then the mobile public key as shown in block 329. Provided the decryption process completes successfully, the network device has authenticated the mobile as a trusted communication partner. It has also obtained the session key as shown in block 331. The network device and mobile may then proceed with symmetrical communication encryption as shown in block 333. The process illustrated with FIG. 8 is used to establish a trusted communication between a network device and a mobile. Further, the process described with reference to FIG. 8 also enables network devices and mobile devices to preregister with a trusted third party, and then, upon application needs, establish control relationships between mobile devices and network devices.
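The final step above — switching from asymmetric authentication to efficient symmetric encryption under the shared session key — can be illustrated with a toy cipher. The XOR keystream below is for demonstration only and is not secure; a real system would use a vetted algorithm such as AES, and the asymmetric handshake itself is out of scope here:

```python
# Toy illustration: once both sides hold the session key, payloads are
# protected symmetrically, which is much cheaper for the mobile than
# repeated public-key operations.
import hashlib
import secrets

def keystream_xor(key, data):
    """Toy symmetric cipher: XOR data with a SHA-256 counter keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

session_key = secrets.token_bytes(32)   # agreed during authentication
ciphertext = keystream_xor(session_key, b"sync: finance.dat v7")
plaintext = keystream_xor(session_key, ciphertext)  # XOR is its own inverse
```

Applying the same function twice recovers the plaintext, mirroring how either side can decrypt session traffic once the time-limited key is shared.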
Referring now to FIG. 9, a logical document management system 350 is illustrated. Logical document management 350 has mobile devices 351, 352, and 353 communicating through an over the air communication network 360. Communication network 360 may be, for example, a public wireless phone or data system, or a government, industrial, or military communication system. Typically, each mobile device will be a device such as a wireless handset or personal data assistant, although other mobile devices may be used. The over the air communication network 360 also connects to a repository server 362. File management system 350 shows a backup and synchronization system using only wireless mobile devices. For example, mobile device 351 has generated file A, which is backed up to file area 357 on mobile device 352 and to file area 359 on mobile device 353. In a similar manner, mobile device 352 has generated file C, which is backed up to file area 355 on mobile device 351 and to file area 359 on mobile device 353. It will be understood that the files may be simple files, or represent data for more sophisticated logical document descriptions.
The storage unit list and associations are stored on mobile device 351 and on mobile device 353. Since the file or logical document list is essential for backup and recovery, a copy of the list is stored on a repository 362. This repository may be another mobile device, or may be a personal computer or other network device. As with other logical document management systems previously discussed, the logical document management system 350 may be configured for real-time synchronization of files, may set access rights to files on an individual device basis, may be used for incremental backups, and may provide incremental or full backups. Since the file management system 350 allows a mobile device to view file structures and storage units for other mobile devices, any one of the mobile devices may be used to select storage units, associate devices and storage units, and set configurations. In another example, a computer system may be used for configuration purposes, and then the file lists imported to the devices. Advantageously, system 350 enables a set of mobile devices to perform near real-time synchronization and seamlessly provide backup and security functions.
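The storage unit list described above can be pictured as a small table associating each file with its originating device and the peer file areas that hold its backup copies. A minimal sketch (all device and area names are hypothetical, echoing the FIG. 9 example):

```python
# Hypothetical sketch of a storage-unit list: each file is associated
# with the device that generated it and the file areas on peer devices
# that hold its backup copies.

storage_units = {
    "file_A": {"origin": "mobile_351",
               "backups": ["area_357@mobile_352", "area_359@mobile_353"]},
    "file_C": {"origin": "mobile_352",
               "backups": ["area_355@mobile_351", "area_359@mobile_353"]},
}

def backup_targets(file_name):
    """Return the peer file areas holding backup copies of file_name."""
    return storage_units[file_name]["backups"]

def devices_holding(file_name):
    """Every device with a copy: the origin plus each backup device."""
    entry = storage_units[file_name]
    peers = [target.split("@")[1] for target in entry["backups"]]
    return [entry["origin"]] + peers

print(devices_holding("file_A"))  # ['mobile_351', 'mobile_352', 'mobile_353']
```

A copy of such a list would live on more than one device (and on the repository), since it is itself essential for recovery.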
Referring now to FIG. 10, a logical document management system 375 is illustrated. Logical document management system 375 has a mobile device 377, which may be in the form of a wireless handset. Wireless handset 377 communicates using an over the air communication network 380. This over the air communication network may be, for example, a public voice or data communication network, or may be a proprietary commercial, military, or government communication system. A computer 379 also communicates with the over the air communication network, typically through an Internet or other wide area network connection. In operation, the mobile device 377 and computer 379 perform a discovery and authentication process. Once discovery and authentication have occurred, the mobile device 377 appears as a storage device for computer 379, or the computer may show as a storage device available to mobile device 377. In this way, data transfers may be made in a comfortable and known way.
In a specific example, the mobile device 377 may appear as a disk drive to computer 379. In this way, an operator at computer 379 may store data on to a mobile device 377, or read files or other information from the mobile device 377. In this arrangement, the network management system 375 operates to enable a mobile device to appear as a standard storage device to a computer system. In a similar manner, the computer 379 may be viewed as a disk drive or network drive for mobile device 377. In this way, data stored on device 379 is presented to the user of mobile device 377 in the usual and comfortable file structure used by the mobile device 377. Mobile file manager 375 also includes the automated file synchronization and backup processes previously discussed. In this way, files generated on the mobile device 377 may be automatically backed up and synchronized with files on computer 379, and files generated on computer 379 may be synchronized and backed up with files on device 377.
While particular preferred and alternative embodiments of the present invention have been disclosed, it will be appreciated that many modifications and extensions of the above described technology may be implemented using the teaching of this invention. All such modifications and extensions are intended to be included within the true spirit and scope of the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention can be better understood with reference to the following figures. The components within the figures are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views. It will also be understood that certain components and details may not appear in the figures to assist in more clearly describing the invention.
FIGS. 1A, 1B, and 1C are block diagrams of wireless logical document manager systems in accordance with the present invention.
FIG. 2 is a flow diagram of a wireless logical document manager system in accordance with the present invention.
FIG. 3 is a flow diagram of file selection for a wireless logical document manager system in accordance with the present invention.
FIG. 4 is a flow diagram of configuration settings for a wireless logical document manager system in accordance with the present invention.
FIG. 5 is a flow diagram of a method for synchronizing files for a wireless logical document manager system in accordance with the present invention.
FIG. 6 is a flow diagram of a wireless logical document management system in accordance with the present invention.
FIG. 7 is a flow diagram of initializing a wireless logical document management system in accordance with the present invention.
FIG. 8 is a flow diagram of a wireless logical document management system in accordance with the present invention.
FIG. 9 is a block diagram of a wireless logical document manager system in accordance with the present invention.
FIG. 10 is a block diagram of a wireless logical document manager system in accordance with the present invention.
Lesson Planet

Jackson Pollock- Mini Biography
Abstract expressionist Jackson Pollock is the subject of a short, mini-biography that looks at his work, other artists who influenced him, and his legacy.

Jackson Pollock and Mixing Colors
Students discuss how Jackson Pollock always wanted to be an artist. They are told that Jackson Pollock called his paintings action paintings because it took a lot of action to make them, and because they have a lot of eye movement and...

Could Anyone Make a Jackson Pollock Painting?
Is abstract expressionism art? Or could your cat have created such works? The paintings of Jackson Pollock are used to answer these questions.

Paint Like a Pollock
Jackson Pollock's art appeals to young children. It's whimsical, fun, and accessible to them. Here is an art instructional activity which invites young artists to paint like Jackson Pollock. After viewing many of his works, they spread a...

National Gallery of Art: Jackson Pollock
Students paint a picture in the style of Jackson Pollock. In this painting lesson, students learn about Jackson Pollock and the method that he used to create his famous pieces of art. Students use objects other than paintbrushes to...

Splash & Splatter
Students research the work of Jackson Pollock, a 20th century painter. They experiment with splatter-painting methods and discover the meaning of field or action painting, terms used to describe Pollock's abstract work. Students apply...

Chance Art: Pollock, Cage and Cunningham
Students clearly identify commonalities and differences between dance and other disciplines with regard to fundamental concepts such as materials, elements, and ways of communicating meaning.

The Life and Work of Jackson Pollock
By discussing Jackson Pollock's painting techniques, students can engage in an exploration of what makes art, art.

The New York School: Action and Abstraction
High schoolers examine the influences and similarities between the New York School poets and Abstract Expressionist artists. They analyze paintings and poems, and write original poetry.

Dripping Paint (Action Painting)
Students observe the lines and shapes that make up an "action" painting. They explore the work of Jackson Pollock as they explore Abstract Expressionism and Action Painting. (This lesson is best done outside.)

Jackson Pollack and Mixing Colors
Learners explore the artwork of Jackson Pollock and his place in art history. They create a painting using different types of balls. Students mix colors and create a picture by mixing colors.

Revolutions in Painting
Young scholars analyze Abstract Expressionist artists and their art. In this art analysis lesson, students view the works by Jackson Pollock and Helen Frankenthaler and analyze the pieces. Young scholars complete image based discussion...

Jackson Pollack
Young scholars examine and explore the life of Jackson Pollack and his art. They research his artwork and why he painted the things he did. They practice painting using the techniques he used.

Dripping Paint [Action Painting]
Young scholars create examples of American Abstract Expressionism after studying the art of Jackson Pollock in this Art lesson for all levels. It is suggested to work with small groups of students if this lesson is done with a younger...

Action Painting on a SMARTboard
Students learn about Jackson Pollock, an artist famous for "action painting," and his works in order to become familiar with his artistic style. For this art lesson, students view Pollock's pieces, then watch a slideshow and video...

Paint With Expression!
Students identify the moods expressed in the work of several famous artists. They express an emotion in a painting through the use of color, line, and texture. Students analyze and reflect on the emotions expressed in their work and the...

Famous Artist of the Month
Feature one famous artist a month with a series of portraits, biographies, and examples of their gallery. With masters such as Auguste Rodin, Francisco Goya, and Michelangelo, the resource provides opportunities every month for kids to...

Fantasy Creature
Middle schoolers, in groups, create three-dimensional sculptures from found objects. They paint their sculptures and write essays that reflect on the collaborative creative project.

For Public Display
Students compare three works of art to understand how juxtaposition can express a point of view. They brainstorm topics of interest to them and their respective communities that could act as a springboard for curating individual exhibits...

Learning About Artists
Students explore various artists and practice creating their style of painting. In this art history lesson, students discover various artists, such as Van Gogh and Michelangelo, and the painting style each utilizes. Students paint a picture...

Identify the Element of Line
Students explore the element of "line." In this beginning art lesson, students listen to the book Harold and the Purple Crayon, then describe the types of lines Harold drew. Students identify straight lines, jagged lines, curvy lines,...

Painting Without a Brush
Learners demonstrate how to finger paint. In this finger painting lesson, students use various painting tools and their fingers to create a unique art piece.

Helen Frankenthaler Biography
Students examine the abstract art of Helen Frankenthaler. For this art analysis lesson, students complete a criticism of the aesthetics of the art, analyze the color use in the art, and research the history of abstract art.

Abstraction-Critique of Art
Students apply a four step critique process as they observe and make personal decisions about abstract artworks.
Hermeneutics and the theory of intertextuality as a method of researching modernism in Serbian poetry of the second half of the 20th century on the example of the poetic cycle Deset soneta nerođenoj kćeri by Ivan V. Lalić

The aim of this paper is to show that hermeneutics and the theory of intertextuality provide useful tools for research into post-war Serbian modernism in poetry. The argumentation takes two steps. Firstly, the use of these methods is legitimized in the context of the general characteristics of the literature under consideration. Secondly, the functioning of such tools is presented on the example of a particular text, which in this case is Deset soneta nerođenoj kćeri ('Ten Sonnets for the Unborn Daughter'), a poetic cycle by Ivan V. Lalić. The kind of poetry researched in this paper is strongly influenced by the ideas of T. S. Eliot, who said that an original literary work is the result of the author's awareness of his place in tradition. For this reason such poetry contains numerous cultural and literary references which are an inevitable context of its interpretation. A hermeneutic and intertextual approach helps to expose such references and make them part of the process of understanding the poetry in question.
Q:
Function in a class requires inputs from another function in the class
I am pretty new to Python and am trying to make an option pricing class with three functions: call, put and graph. The call and put functions work fine, but I can't figure out the graph function. I want the p.append to get the values from the call function, holding all the variables constant except for S0, which is equal to i.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

class Option():
    def __init__(self, S0, K, T, r, sigma, start, stop, N):
        self.S0 = S0
        self.K = K
        self.T = T
        self.r = r
        self.sigma = sigma
        self.start = start
        self.stop = stop
        self.N = N

    def call(self):
        d1 = (np.log(self.S0/self.K) + \
              (self.r + 0.5*self.sigma**2)*self.T)/(self.sigma*np.sqrt(self.T))
        d2 = d1 - self.sigma*np.sqrt(self.T)
        price = (self.S0 * norm.cdf(d1, 0.0, 1.0) - \
                 self.K * np.exp(-self.r * self.T) * norm.cdf(d2, 0.0, 1.0))
        return price

    def put(self):
        d1 = (np.log(self.S0/self.K) + \
              (self.r + 0.5*self.sigma**2)*self.T)/(self.sigma*np.sqrt(self.T))
        d2 = d1 - self.sigma*np.sqrt(self.T)
        price = (self.K * np.exp(-self.r * self.T) * norm.cdf(-d2, 0.0, 1.0) - \
                 self.S0 * norm.cdf(-d1, 0.0, 1.0))
        return price

    def graphCall(self):
        S = np.linspace(self.start, self.stop, self.N)
        p = []
        for i in S:
            p.append()
        plt.plot(S, p)

x = Option(100, 50, 3, 0.05, 0.40, 100, 200, 500)
print(x.call())
x.graphCall()
A:
You could decide to use self.S0 as the default value for calls to call and put, but allow for other arguments as well.
def call(self, s=None):
    if s is None:
        s = self.S0
    d1 = (np.log(s/self.K) + \
          (self.r + 0.5*self.sigma**2)*self.T)/(self.sigma*np.sqrt(self.T))
    d2 = d1 - self.sigma*np.sqrt(self.T)
    price = (s * norm.cdf(d1, 0.0, 1.0) - \
             self.K * np.exp(-self.r * self.T) * norm.cdf(d2, 0.0, 1.0))
    return price

def put(self, s=None):
    if s is None:
        s = self.S0
    d1 = (np.log(s/self.K) + \
          (self.r + 0.5*self.sigma**2)*self.T)/(self.sigma*np.sqrt(self.T))
    d2 = d1 - self.sigma*np.sqrt(self.T)
    price = (self.K * np.exp(-self.r * self.T) * norm.cdf(-d2, 0.0, 1.0) - \
             s * norm.cdf(-d1, 0.0, 1.0))
    return price

def graphCall(self):
    S = np.linspace(self.start, self.stop, self.N)
    plt.plot(S, self.call(S))
    plt.show()

x = Option(100, 50, 3, 0.05, 0.40, 100, 200, 500)
print(x.call())
x.graphCall()
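Independent of the class above, the call and put formulas can be sanity-checked against put-call parity, C - P = S0 - K*exp(-r*T), which must hold for any shared inputs. A stdlib-only sketch, using math.erf in place of scipy's norm.cdf:

```python
import math

# Standard-normal CDF via the error function (no scipy needed):
# N(x) = 0.5 * (1 + erf(x / sqrt(2)))
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S0, K, T, r, sigma):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_put(S0, K, T, r, sigma):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

# Put-call parity check with the question's example inputs.
S0, K, T, r, sigma = 100, 50, 3, 0.05, 0.40
lhs = bs_call(S0, K, T, r, sigma) - bs_put(S0, K, T, r, sigma)
rhs = S0 - K * math.exp(-r * T)
assert abs(lhs - rhs) < 1e-9
```

Parity holds exactly here because N(x) + N(-x) = 1, so the CDF terms collapse regardless of the inputs.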
| |
BACKGROUND OF THE INVENTION

The present invention relates to a device and a method for correcting a blood pressure measured at a measuring position and, in particular, to determining the arm altitude to improve the measuring results in peripheral blood pressure measurements of a human being.

The measured blood pressure of a human being, or of an animal, depends among other things on the measuring position. Due to gravity, the blood pressure in a foot, for example, naturally is higher than in a head region, at least when standing upright. In order to avoid such errors caused by gravity, a blood pressure measurement is typically performed at the same altitude as the heart. However, the blood pressure measurement may also be performed at different positions of the body, as long as the error resulting as a consequence of the deviating altitude relative to the heart is compensated. When measuring the blood pressure at a wrist, for example, as is the case in commercially available apparatuses, the altitude of the hand above the heart is an essential factor influencing the measured blood pressure. If the measured blood pressure is to be freed of the error caused by the "incorrect measuring altitude", the altitude of the measuring arrangement above the heart must be determined as simply as possible.

SUMMARY

According to an embodiment, a device for altitude correction of a blood pressure measured at a measuring position of a living being may have: a transmitter for emitting a signal from close to a measuring position; at least three receivers for receiving the signal, wherein the receivers may be mounted to positions at different altitudes of the living being; and an evaluating unit for correcting the blood pressure measured on the basis of run time or phase differences of the signals received at the at least three receivers.

According to another embodiment, a method for altitude correction of a blood pressure measured at a measuring position of a living being may have the steps of: transmitting a signal by a transmitter from close to a measuring position; receiving the signal by at least three receivers, wherein the at least three receivers are mounted to positions at different altitudes of the living being; and correcting the blood pressure measured on the basis of run time or phase differences of the signal received at the at least three receivers.

DETAILED DESCRIPTION OF THE INVENTION
The present invention is based on the finding that a device for determining an altitude of a measuring apparatus relative to a reference point can be provided when the measuring apparatus comprises a transmitter, or a transmitter is located near the measuring apparatus, which emits a signal, for example in the form of an electromagnetic wave of constant frequency, and the signal is received by at least three receivers. The three receivers transmit a receive signal to an evaluating unit, so that a run time or phase difference of the signals received at two of the three receivers can be determined in the evaluating unit. The run time or phase difference in turn determines a path difference between the paths which the signal has traveled to the first one of the two receivers and to the second one of the two receivers. Similarly, another path difference can be determined during propagation to a third one of the three receivers and the second or first one of the three receivers.
The receive signals of the three receivers may exemplarily be electrical alternating signals the phases of which are in a fixed relation to the phases of the waves (or signals) received by the receivers. In this case, phase differences of the electrical alternating signals can be determined easily by means of a phase discriminator and the evaluating unit can determine a path length difference or path differences from it using the frequency and propagation speed. It is also possible for the phase information of the wave received to be detected and passed on to the evaluating unit in another way (like, for example, digitally).
If the three receivers are located at the body at different altitudes, advantageously along a vertical line (i.e. advantageously perpendicular), the relative altitude of the transmitter relative to the three receivers can be determined using the path differences (i.e. the run time or phase differences). Assuming the three receivers to be located in a predetermined or known distance to the heart, an altitude deviation of the transmitter relative to the heart can finally be determined from it. The altitude deviation determined may then be used to determine a correction value by which the blood pressure measurement is corrected. If the transmitter is, according to embodiments of the present invention, located as close to the measuring apparatus as possible, the most precise determination possible of the altitude deviation may be performed.
Thus, the present invention describes a device for correcting a blood pressure measured at a measuring position of a living being (for example, a human being), the device comprising a transmitter, at least three receivers and an evaluating unit. The transmitter emits a signal from close to the measuring position and the three receivers receive the signal, wherein the three receivers may be mounted to positions at different altitudes of the living being. The evaluating unit is configured to perform correction of the blood pressure measured on the basis of run time or phase differences of the signals arriving at the three receivers.
In embodiments, the measuring arrangement comprises a transmitter mounted to a wrist and three receivers mounted to the torso of, for example, a human being, such as at the shoulder and waist. The transmitter emits an electromagnetic wave which is detected by the three receivers. Due to the relative position of the transmitter to the three receivers, differences in path length express themselves as phase differences at the receivers. These phase differences can be detected and measured by phase discriminators. Using the known propagation speed (the speed of light for an electromagnetic wave), the phase differences are converted into fractions of a wavelength by which the path lengths differ from one another.
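The conversion just described is simply ΔL = (Δφ / 2π) · λ. A minimal sketch, assuming the phase difference stays below one full cycle (the function name is illustrative):

```python
import math

def path_difference(delta_phi, wavelength):
    """Path-length difference corresponding to a measured phase
    difference delta_phi (radians), assuming less than one full cycle."""
    return (delta_phi / (2.0 * math.pi)) * wavelength

# A half-cycle phase difference at a 1 m wavelength is a 0.5 m path difference.
print(path_difference(math.pi, 1.0))  # 0.5
```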
The frequency of the electromagnetic wave here may be selected such that there will never be several complete wave trains in one region to be detected (like, for example, double an arm's length), since otherwise the phase angle or phase angle difference cannot be determined unambiguously and thus, the position or altitude deviation cannot be determined unambiguously. Double an arm's length, for example, here refers to the two extreme values where the blood pressure measuring apparatus or the transmitter is located at the wrist and the hand can be stretched out perpendicularly downwards on the one hand and perpendicularly upwards on the other hand. In order to ensure unambiguous measurements, a lower limit for the wavelength used (=minimum wavelength) of the electromagnetic wave should be taken into account.
On the other hand, the wavelength of the electromagnetic wave used should not be selected to be too large (or the frequency too small), in order for the phase angles at at least two of the three receivers to differ to an extent allowing unambiguous detection of the phase difference. Thus, the wavelength is below a maximum wavelength. With very large wavelengths, the phase of the electromagnetic wave received at the three receivers will only differ marginally, so that this marginal difference might be below a measuring threshold (measuring tolerance, like, for example, an amplitude value differing by less than five percent). Since a perpendicular positioning of the three receivers along the body in itself already represents a source of error, it is advantageous to select the wavelength range of the electromagnetic wave such that the receivers can detect the phase differences easily; in other words, such that the phase difference detected (or the respective difference in length) differs considerably (for example by more than 20 percent) from the error resulting from deviations from an ideal perpendicular orientation of the receivers. Exemplarily, the wavelength may be in a range between 10 cm and 200 cm or in a range between 40 cm and 120 cm.
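Taken together, the two constraints above amount to a simple range check on the wavelength: it must lie within the stated window and exceed the largest possible path difference so that the phase angle is unambiguous. A sketch (the function name and bounds handling are illustrative; the 10-200 cm window is taken from the text):

```python
def wavelength_ok(wavelength_cm, max_path_diff_cm, lo=10.0, hi=200.0):
    """True if the wavelength lies in the 10-200 cm window from the text
    and exceeds the largest possible path difference (e.g. twice an
    arm's length), so the measured phase angle is unambiguous."""
    return lo <= wavelength_cm <= hi and wavelength_cm > max_path_diff_cm

print(wavelength_ok(120.0, 80.0))  # True
print(wavelength_ok(60.0, 80.0))   # False: phase would be ambiguous
```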
In further embodiments, the transmitter is integrated directly in the blood pressure measuring apparatus. Additionally, it is possible to determine the distance between the three receivers such that the transmitter is brought close to one of the three receivers for being calibrated, so that the phase differences of the signals received at the other ones of the three receivers are a direct measure of the distance (or altitude difference) of the other receivers to the one of the three receivers. Alternatively, the distance of the three receivers may also be measured by means of other conventional procedures.
In other embodiments, the device comprises further receivers so that the measuring precision can be increased by means of averaging. Since determining the altitude using four receivers is already over-determined, the fourth and/or any further receiver may be used for determining an error rate. With four or more receivers, the error rate determined may also be used for performing optimization with regard to the wavelength of the electromagnetic waves in order to achieve an optimum wavelength range, which, for example, allows unambiguous measurement with minimal errors, by altering the wavelength of the transmitter in this way. Additionally, it is possible, when using pulsed signals, to perform run time measurements of the pulses and thus determine path differences directly from run time differences. Both single distances and double distances (there and back) using signals reflected at the receivers may be employed for this.
The present invention is advantageous in that it offers a simple, effective and inexpensive way of performing altitude correction of blood pressure measurements using simple standard components. The inventive device and the inventive method may be employed flexibly, and dynamic adjustment of the measured blood pressure value can be performed even with a momentary change in the relative altitude (for example, relative to the heart).
Other elements, features, steps, characteristics and advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments with reference to the attached drawings.
Before the present invention is detailed with reference to the drawings, it is pointed out that like elements in the figures are provided with the same or similar reference numerals and that a repeated description of these elements is omitted.
FIG. 1 shows a schematic illustration of an embodiment of the present invention. A transmitter 105 emits an electromagnetic wave 110 of a predetermined wavelength. A first part of the electromagnetic wave 110a reaches a first receiver 115a, a second part of the electromagnetic wave 110b reaches a second receiver 115b and a third part of the electromagnetic wave 110c reaches a third receiver 115c. The first receiver 115a detects the first part of the electromagnetic wave 110a and communicates a first receive signal 117a to an evaluating unit. The second receiver 115b detects the second part of the electromagnetic wave 110b and communicates a second receive signal 117b to the evaluating unit 120. The third receiver 115c detects the third part of the electromagnetic wave 110c and communicates a third receive signal 117c to the evaluating unit 120. The first, second and third receive signals 117a, 117b and 117c exemplarily contain information on the phase of the electromagnetic wave 110, or the phase of the first part of the electromagnetic wave 110a at the time of receiving by the first receiver 115a. Similarly, the second receive signal 117b contains phase information of the second part of the electromagnetic wave 110b and the third receive signal 117c contains phase information regarding the third part of the electromagnetic wave 110c (like, for example, a phase value at the time of reception). If the transmitter uses pulsed signals, alternatively the three receive signals 117a, 117b and 117c may also contain information on the time of receiving the pulsed signals by the first, second and third receivers 115a, 115b and 115c.
The evaluating unit 120 compares the first receive signal 117a, the second receive signal 117b and the third receive signal 117c. Exemplarily, the evaluating unit 120 may determine a phase difference of the first part of the electromagnetic wave 110a compared to the second part of the electromagnetic wave 110b at the time of receiving the first part of the electromagnetic wave 110a by the first receiver 115a and the second part of the electromagnetic wave 110b by the second receiver 115b. From this phase difference, the evaluating unit 120 can determine a length difference between the lengths the first part or the second part of the electromagnetic wave 110a or 110b have traveled. Similarly, the evaluating unit 120 can find out a phase difference between the second part of the electromagnetic wave 110b and the third part of the electromagnetic wave 110c at the time of receiving by the second receiver 115b and the third receiver 115c and determine a length difference from it. Alternatively, when using a pulsed transmitter, the time differences when receiving can be converted to length differences.
From the length differences determined therefrom (further details will be described in FIG. 2) and from the altitude differences between the first, second and third receivers 115a, 115b and 115c, the evaluating unit can determine an altitude deviation 125 of the transmitter 105 relative to one of the three receivers 115a, 115b and 115c. The altitude deviation 125 may then be used to correct a measured blood pressure correspondingly.
FIG. 2 shows an illustration of the geometrical quantities for determining the altitude deviation 125, which subsequently will be referred to as H. The three receivers 115a, 115b and 115c in this illustration are arranged perpendicularly one above the other, the third receiver 115c being located on a basic line at a distance D from a coordinate origin O. At a distance t2, the second receiver 115b is located perpendicularly above the third receiver 115c, and the first receiver 115a is located at a distance t1 perpendicularly above the second receiver 115b. The coordinate origin O is selected such that the transmitter 105 is arranged perpendicularly above the coordinate origin O. The transmitter 105 in FIG. 2 is referred to as S, the first receiver 115a is characterized by E1, the second receiver 115b by E2 and the third receiver 115c by E3. The first receiver 115a is at a radial distance r from the transmitter 105, the second receiver 115b is at a second radial distance r+L1 from the transmitter 105 and the third receiver 115c is at a third radial distance r+L2 from the transmitter 105.
The electromagnetic wave 110 transmitted by the transmitter 105 has a wavelength λ and, at the time of receiving by the first receiver 115a, a phase value of π-Δφ (angles will subsequently be indicated in circular measure). Since the distance of the transmitter 105 to the second receiver 115b is greater by the first length L1 than the distance of the first receiver 115a to the transmitter 105, the phase value of the electromagnetic wave 110b received has changed correspondingly at the second receiver 115b. Similarly, the phase value of the third part of the electromagnetic wave 110c, at the time of receiving by the third receiver 115c, has shifted correspondingly as a consequence of the distance of the third receiver 115c from the transmitter 105 being increased by the third path length L2.
The position of the point S (transmitter 105) relative to points E1 (first receiver 115a), E2 (second receiver 115b) and E3 (third receiver 115c) can be determined using trigonometric laws and, using these, the (relative) altitude H of the point S (exemplarily the wrist) relative to E1 (exemplarily the shoulder) can be determined. Determining the position of the point S takes place in R2 (i.e. within a plane) with a precision of two possible positions (left and right, in mirror symmetry relative to the perpendicular straight between E1 and E3). Considered from the space R3, this means that determining the position takes place with a precision of a circular path with a central point on the straight E1E3. The position along the circle, however, remains undetermined; only the position of the circle in R3 is determined. Since, however, only the altitude H is of interest for the application aimed at, the resolution of the position which may be achieved is sufficient.
In detail, a system of equations made of three equations and three unknown quantities is solved for determining the relative altitude H (like, for example, of the wrist above or below the altitude of E1). As can be taken from FIG. 2, the following variables are used here:

- let the distance E1E2 be t1,
- let the distance E2E3 be t2,
- let the distance SE1 be r, and
- let D be the perpendicular distance from S to the straight E1E2.
Using this definition, the following relations result:

r² = H² + D², (1)

(r + L1)² = (t1 − H)² + D², (2)

(r + L2)² = (t1 + t2 − H)² + D². (3)

By substituting D² = r² − H² from (1) in (2) and (3), the following equations result:

0 = t1² − 2·H·t1 − 2·r·L1 − L1², (4)

0 = (t1 + t2)² − 2·(t1 + t2)·H − 2·r·L2 − L2². (5)

Solving (5) for r and substituting in (4) will have the following result:
<math overflow="scroll"><mtable><mtr><mtd><mrow><mi>H</mi><mo>=</mo><mfrac><mrow><msup><mrow><msub><mi>L</mi><mn>1</mn></msub><mo></mo><mrow><mo>(</mo><mrow><msub><mi>t</mi><mn>1</mn></msub><mo>+</mo><msub><mi>t</mi><mn>2</mn></msub></mrow><mo>)</mo></mrow></mrow><mn>2</mn></msup><mo>-</mo><mrow><msub><mi>L</mi><mn>2</mn></msub><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>t</mi><mn>1</mn><mn>2</mn></msubsup><mo>-</mo><msubsup><mi>L</mi><mn>1</mn><mn>2</mn></msubsup></mrow><mo>)</mo></mrow></mrow><mo>-</mo><mrow><msub><mi>L</mi><mn>1</mn></msub><mo>·</mo><msubsup><mi>L</mi><mn>2</mn><mn>2</mn></msubsup></mrow></mrow><mrow><mn>2</mn><mo>·</mo><mrow><mo>(</mo><mrow><mrow><mrow><mo>(</mo><mrow><msub><mi>t</mi><mn>1</mn></msub><mo>+</mo><msub><mi>t</mi><mn>2</mn></msub></mrow><mo>)</mo></mrow><mo>·</mo><msub><mi>L</mi><mn>1</mn></msub></mrow><mo>-</mo><mrow><msub><mi>t</mi><mn>1</mn></msub><mo>·</mo><msub><mi>L</mi><mn>2</mn></msub></mrow></mrow><mo>)</mo></mrow></mrow></mfrac></mrow></mtd><mtd><mrow><mo>(</mo><mn>6</mn><mo>)</mo></mrow></mtd></mtr></mtable></math>
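Equation (6) can be checked numerically with a short sketch (Python; the function name and the example dimensions are illustrative, not part of the patent): length differences are generated from the geometry of equations (1) to (3), and equation (6) recovers the altitude H.

```python
import math

def altitude_deviation(l1, l2, t1, t2):
    """Equation (6): relative altitude H of the point S with respect to E1,
    from the length differences L1, L2 and the receiver spacings t1, t2."""
    num = l1 * (t1 + t2) ** 2 - l2 * (t1 ** 2 - l1 ** 2) - l1 * l2 ** 2
    den = 2.0 * ((t1 + t2) * l1 - t1 * l2)
    return num / den

# Forward model from equations (1)-(3): choose H, D, t1, t2, then derive
# the radial distance r and the length differences L1 and L2.
H, D, t1, t2 = 0.25, 0.40, 0.20, 0.20             # metres (arbitrary values)
r = math.sqrt(H**2 + D**2)                        # eq. (1)
l1 = math.sqrt((t1 - H)**2 + D**2) - r            # eq. (2)
l2 = math.sqrt((t1 + t2 - H)**2 + D**2) - r       # eq. (3)

# Equation (6) recovers the altitude H chosen above.
assert abs(altitude_deviation(l1, l2, t1, t2) - H) < 1e-9
```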
The first receiver 115a and the second receiver 115b in this embodiment communicate the receive signals 117a, 117b to the evaluating unit 120 comprising a first phase discriminator 122 which in turn determines a first phase difference Δφ1 between the phase of the first part of the electromagnetic wave 110 detected by the first receiver 115a and the phase of the second part of the electromagnetic wave 110 detected by the second receiver 115b. In the same manner, the second receiver 115b and the third receiver 115c transmit the receive signals 117b, 117c to a second phase discriminator 124 determining a second phase difference Δφ2 between the phases of the electromagnetic wave 110 detected at the second receiver 115b and the electromagnetic wave 110 detected at the third receiver 115c. The first phase difference Δφ1 determined at the first phase discriminator 122 and the second phase difference Δφ2 detected at the second phase discriminator 124 can be converted to the first length difference L1 and the second length difference L2 using the following relations:
<math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><msub><mi>L</mi><mn>1</mn></msub><mo>=</mo><mrow><mfrac><mrow><mi>Δ</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex" /></mstyle><mo></mo><msub><mi>ϕ</mi><mn>1</mn></msub></mrow><mrow><mn>2</mn><mo></mo><mstyle><mspace width="0.3em" height="0.3ex" /></mstyle><mo></mo><mi>π</mi></mrow></mfrac><mo></mo><mi>λ</mi></mrow></mrow><mo>,</mo></mrow></mtd><mtd><mrow><mo>(</mo><mn>7</mn><mo>)</mo></mrow></mtd></mtr><mtr><mtd><mrow><msub><mi>L</mi><mn>2</mn></msub><mo>=</mo><mrow><mfrac><mrow><mi>Δ</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex" /></mstyle><mo></mo><msub><mi>ϕ</mi><mn>2</mn></msub></mrow><mrow><mn>2</mn><mo></mo><mstyle><mspace width="0.3em" height="0.3ex" /></mstyle><mo></mo><mi>π</mi></mrow></mfrac><mo></mo><mi>λ</mi></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>8</mn><mo>)</mo></mrow></mtd></mtr></mtable></math>
wherein, as has been mentioned, angle measurement is performed in circular measure, i.e. the phase φ is periodic in 2π.
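Relations (7) and (8) can be sketched numerically as follows; the wavelength value is an arbitrary assumption chosen for illustration.

```python
import math

WAVELENGTH = 0.70  # metres; illustrative value, not taken from the text

def phase_to_length(delta_phi, wavelength=WAVELENGTH):
    """Equations (7)/(8): convert a phase difference in radians to a
    path-length difference, L = delta_phi / (2*pi) * wavelength."""
    return delta_phi / (2.0 * math.pi) * wavelength

# A quarter-wavelength path difference shows up as a pi/2 phase shift.
assert abs(phase_to_length(math.pi / 2) - WAVELENGTH / 4) < 1e-12

# Since the phase is only known modulo 2*pi, the conversion is unambiguous
# only while the length difference stays below one wavelength (cf. FIG. 3b).
```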
Thus, in the arrangement as is illustrated in FIG. 2, two measurements are performed: one measurement for determining the first phase difference Δφ1, which in turn establishes the first length difference L1, and a second measurement determining the second phase difference Δφ2, which in turn establishes the second length difference L2. Since the relative position of the receivers Ei and thus the quantities t1 and t2 are known, wherein for example a certain receiver (like, for example, the second receiver E2) may be mounted close to the heart, it is possible to determine the relative position of the point S to the certain receiver.
When optionally adding further receivers and, consequently, performing further measurements, in addition to equations (1) to (3) a fourth equation would be added and thus the system of equations would be over-determined (four equations for three unknown quantities D, r and H; the ti are assumed to be known or are measured); however, the further measurement could serve as a test measurement for determining an error rate, for example as a consequence of non-ideally perpendicularly oriented receivers Ei (i counts the number of receivers, like, for example, i = 1, 2, 3, 4) or as a consequence of an unfavorably selected wavelength λ of the transmitter 105. Determining the error rate here can take place such that three different ones of the four (or more) receivers are selected to determine different altitude deviations Hi, so that scattering (exemplarily expressed by standard deviation) represents a measure of the error rate. Thus, both the geometrical arrangement (orientation of the receivers) and the wavelength λ selected could be optimized.
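The scattering-based error measure described above can be sketched as follows; `altitude_estimates` is a hypothetical helper and the receiver heights are arbitrary example values. Each consecutive triple of receivers is evaluated with equation (6); in an ideal, noise-free geometry all triples agree, so their standard deviation is numerically zero.

```python
import math
import statistics

def eq6(l1, l2, t1, t2):
    # Equation (6) applied to one receiver triple.
    num = l1 * (t1 + t2) ** 2 - l2 * (t1 ** 2 - l1 ** 2) - l1 * l2 ** 2
    return num / (2.0 * ((t1 + t2) * l1 - t1 * l2))

def altitude_estimates(heights, radial_distances):
    """Apply eq. (6) to every consecutive receiver triple and express each
    altitude estimate relative to the topmost receiver E1."""
    out = []
    for i in range(len(heights) - 2):
        t1 = heights[i] - heights[i + 1]
        t2 = heights[i + 1] - heights[i + 2]
        l1 = radial_distances[i + 1] - radial_distances[i]
        l2 = radial_distances[i + 2] - radial_distances[i]
        out.append(eq6(l1, l2, t1, t2) + (heights[0] - heights[i]))
    return out

# Four ideally aligned receivers and an exactly known transmitter position:
heights = [0.6, 0.4, 0.2, 0.0]                    # metres, E1 topmost
D, zS = 0.3, 0.35                                 # transmitter S at (D, zS)
dists = [math.hypot(D, z - zS) for z in heights]

estimates = altitude_estimates(heights, dists)
assert statistics.pstdev(estimates) < 1e-9        # scattering ~ error rate
```

With measurement noise on the radial distances, the same standard deviation grows and can serve as the error measure the text describes.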
FIG. 3a shows an illustration where the three receivers, the first receiver E1, the second receiver E2 and the third receiver E3, are represented along a direction so that the different distances to the point S manifest themselves in different phase values φ for the electromagnetic wave 110 emitted by the transmitter S. A phase φ=E1 thus corresponds to a first phase value which the electromagnetic wave 110 has when received by the first receiver 115a, the phase φ=E2 corresponds to a second phase value which the electromagnetic wave 110 has at the time of receiving by the second receiver 115b and the phase φ=E3 corresponds to a third phase value which the electromagnetic wave 110 has at the time of receiving by the third receiver 115c. Accordingly, the first phase difference Δφ1=E1−E2 in accordance with equation (7) corresponds to the first length difference L1. In the same way, the second phase difference Δφ2=E2−E3 corresponds to the second length difference L2, in accordance with the above equation (8).
As can be seen in FIG. 3a, the electromagnetic wave 110 varies between two maximum values represented by broken lines and the wavelength here is selected such that the three receivers (E1, E2, E3) are within one period of the wave 110 and the amplitude of the electromagnetic wave 110 changes considerably between the first receiver 115a and the third receiver 115c. The selection of the wavelength λ of the electromagnetic wave 110 as it is shown in FIG. 3a thus corresponds to the criteria mentioned before that there cannot be several wave periods between the receivers; the wavelength λ is both above the minimum and below the maximum wavelengths.
FIG. 3b in contrast shows another wave 110′ comprising a considerably shorter wavelength (in comparison to the distance of the receivers E1 and E2) so that in this case one complete period of the further wave 110′ lies between the first receiver 115a and the second receiver 115b. The evaluating unit examining or determining the phase difference of the further wave 110′ at point E1 to point E2 cannot differentiate between point E2 and point E2′. Consequently, an unambiguous determination of the length difference L1 based on this phase difference is not possible. The same would apply if the wavelength λ were selected to be so great that the amplitude between point E1 and point E2 only changed marginally so that, within an error tolerance, the length difference L1 cannot be determined.
Determining the phase differences Δφ1 and Δφ2 thus corresponds to determining run time differences of the electromagnetic wave 110 from the transmitter 105 to the receivers 115a, 115b and 115c and, alternatively, could also take place using time measurement, like, for example, using the pulsed signals mentioned before. However, it is of advantage in the embodiment shown in FIG. 2 that no time synchronization is necessitated and the transmitter 105 can continuously transmit an electromagnetic wave 110.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
FIG. 1 shows a schematic illustration of an embodiment of the present invention;

FIG. 2 shows an illustration of the geometrical quantities for determining an altitude deviation;

FIG. 3a shows an illustration for optimizing a wavelength of the electromagnetic wave; and

FIG. 3b shows an illustration of an unfavorably selected wavelength of the electromagnetic wave.
Q:
Gearman callback with nested jobs
I have a Gearman job that runs and itself executes more jobs, which in turn may execute more jobs. I would like some kind of callback when all nested jobs have completed. I could easily do this, but my implementation would tie up workers (spinning until the children are complete), which I do not want to do.
Is there a workaround? There is no concept of "groups" in Gearman AFAIK, so I can't add jobs to a group and have something fire once that group has completed.
A:
As you say, there's nothing built-in to Gearman to handle this. If you don't want to tie up a worker (and letting that worker add tasks and track their completion for you), you'll have to do out-of-band status tracking.
A way to do this is to keep a group identifier in memcached, and increment the number of finished subtasks when a task finishes, and increment the number of total tasks when you add a new one for the same group. You can then poll memcached to see the current state of execution (tasks finished vs tasks total).
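A sketch of this out-of-band tracking (Python): `CounterStore` is a self-contained, in-memory stand-in for memcached's atomic `incr`, and all other names are illustrative rather than part of any Gearman client API.

```python
class CounterStore:
    """Minimal in-memory stand-in for memcached atomic counters."""
    def __init__(self):
        self._data = {}

    def incr(self, key, delta=1):
        self._data[key] = self._data.get(key, 0) + delta
        return self._data[key]

    def get(self, key):
        return self._data.get(key, 0)

store = CounterStore()

def add_job(group_id):
    # Call whenever any job (including a nested one) is submitted for a group.
    store.incr(f"{group_id}:total")

def finish_job(group_id):
    # Call from the worker when a job completes.
    store.incr(f"{group_id}:done")

def group_finished(group_id):
    # Poll out-of-band; fire your callback once this returns True.
    total = store.get(f"{group_id}:total")
    return total > 0 and store.get(f"{group_id}:done") == total

# A parent job that spawned two children:
add_job("g1"); add_job("g1"); add_job("g1")
finish_job("g1")                        # parent finished first
assert not group_finished("g1")
finish_job("g1"); finish_job("g1")      # children finish later
assert group_finished("g1")
```

Because a child increments the group's total before it runs, nested jobs simply grow the total, and the group only reads as finished once the done counter catches up with it.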
Seminar paper from the year 2005 in the subject Speech Science / Linguistics, grade: 1 (sehr gut), University of Marburg, course: Psycholinguistics, 28 entries in the bibliography, language: English, abstract: Introduction How do children acquire language? As Susan H. Foster-Cohen put it in her book An Introduction to Child Language Development, most parents would reply either that they taught their children how to speak or that their children learned language 'from hearing it and from being spoken to' (Foster-Cohen 1999: 95). This statement brings along further questions: Are children really dependent on input from their environment? If they are, when do they need to get what amount of input? And, more specifically, what sort of input do they need? There is a huge number of different theories regarding children's first language acquisition, and the most important ones will be depicted in my term paper. At first, we will get a general overview of the different phases or stages a child goes through during language acquisition. Then, we will see some strange or 'secret' phenomena, which bring along the question whether children only learn language by imitation as stated above by several parents, or if there might be an innate knowledge about how language could look. We will then differentiate between the empiricist and rationalist positions that were represented by Locke and Descartes in the 17th/18th century. These positions have been examined and developed since then and will lead us to take a closer look at more modern theories. Piaget's constructivist theory as well as Chomsky's innateness hypothesis will be depicted and discussed in my term paper. Finally, we will see an example that demonstrates the important problem of the time limit for language acquisition. We will finally discuss whether this problem is contradictory to Chomsky's innateness hypothesis.
The following ISBNs are associated with this title:
ISBN-10: 3638359956
ISBN-13: 9783638359955
This portal is available for chapter members, presidencies and coordinators. Members can register for access to the portal by simply signing up for their chapter.
Additional chapters are coming soon.
I authorize The Academy for Creating Enterprise the right to edit, alter, copy, exhibit, publish, distribute and make use of any and all pictures or videos in and/or for legal promotional materials including, but not limited to, newsletters, flyers, posters, brochures, advertisements, fundraising letters, annual reports, press kits and submissions to journalists, websites, social networking sites and other print and digital communications, without payment or any other consideration.
Marine Extension and Georgia Sea Grant is seeking a public relations specialist II to join our communications team. We seek an applicant with a demonstrated interest in the environment and who has shown skills in creating content about related topics for print and/or multimedia. The public relations specialist II will work with the communications staff to promote Marine Extension and Georgia Sea Grant’s research, education and outreach initiatives through various communications platforms, including the production of content used for print and electronic communications. Duties include, but are not limited to, developing promotional materials (flyers, web graphics, brochures, etc.), designing Marine Extension and Georgia Sea Grant’s electronic newsletter, and developing story maps, consumer products and videos. Applicants should have experience with social media, graphic design and, preferably, video production. The position is based in Brunswick, Ga., with some travel required.
Duties will include the following, depending on the candidate’s interests and experience:
- Manage the day-to-day operations of the program’s graphic design needs, creating print and electronic materials requested by staff.
- Assist with content creation and maintenance of communications platforms including Marine Extension and Georgia Sea Grant monthly newsletter, website and social media.
- Coordinate the creation of digital content (e.g., videos, website, blogs, social media and podcasts).
- Assist with promoting public programs by updating email lists, posting events to online event calendars, writing event releases.
- Participate in Marine Extension and Georgia Sea Grant events to document (photo/video) program projects and accomplishments for use in print and electronic communications.
- Engage in writing and editing to communicate the program’s activities and impacts in print and digital formats.
- Perform other related duties as required.
MINIMUM QUALIFICATIONS
- Bachelor’s degree in digital media, communications or related field and 3-4 years of relevant experience with print and electronic communications OR equivalent combination of experience and training.
- Excellent interpersonal/communications skills.
- Excellent writing and editorial skills.
- Good skills in organization, prioritization and time management.
- Experience with visual communication principles and web design.
PREFERRED QUALIFICATIONS
- Demonstrated experience in graphic design
- Demonstrated experience in writing for lay audiences
- Demonstrated experience in developing and/or maintaining a website
- Demonstrated experience in videography and photography
- Demonstrated experience in Adobe Creative Suite software
TO APPLY
The direct link to the posting is: https://www.ugajobsearch.com/postings/17672
Posting number is S00465P
Interested candidates should submit a cover letter, resume, three graphic design or writing samples that demonstrate how you meet the qualifications for this position, and contact information, including phone numbers, for three professional references. All applications received by March 16, 2018, will be assured consideration.
Background
More than one in five patients presenting to the emergency department (ED) with (suspected) infection or sepsis deteriorate within 72 h from admission. Surprisingly little is known about vital signs in relation to deterioration, especially in the ED. The aim of our study was to determine whether repeated vital sign measurements in the ED can differentiate between patients who will deteriorate within 72 h and patients who will not deteriorate.
Methods
We performed a prospective observational study in patients presenting with (suspected) infection or sepsis to the ED of our tertiary care teaching hospital. Vital signs (heart rate, mean arterial pressure (MAP), respiratory rate and body temperature) were measured in 30-min intervals during the first 3 h in the ED. Primary outcome was patient deterioration within 72 h from admission, defined as the development of acute kidney injury, liver failure, respiratory failure, intensive care unit admission or in-hospital mortality. We performed a logistic regression analysis using a base model including age, gender and comorbidities. Thereafter, we performed separate logistic regression analyses for each vital sign using the value at admission, the change over time and its variability. For each analysis, the odds ratios (OR) and area under the receiver operator curve (AUC) were calculated.
Results
In total 106 (29.5%) of the 359 patients deteriorated within 72 h from admission. Within this timeframe, 18.3% of the patients with infection and 32.9% of the patients with sepsis at ED presentation deteriorated. Associated with deterioration were: age (OR: 1.02), history of diabetes (OR: 1.90), heart rate (OR: 1.01), MAP (OR: 0.96) and respiratory rate (OR: 1.05) at admission, changes over time of MAP (OR: 1.04) and respiratory rate (OR: 1.44) as well as the variability of the MAP (OR: 1.06). Repeated measurements of heart rate and body temperature were not associated with deterioration.
Conclusions
Repeated vital sign measurements in the ED are better at identifying patients at risk for deterioration within 72 h from admission than single vital sign measurements at ED admission.
More than one in five patients presenting to the emergency department (ED) with (suspected) infection or sepsis deteriorate within 72 h from admission, despite treatment [1]. Recent advances in research have improved our understanding of the pathophysiology of sepsis [2]. The adoption of surviving sepsis campaign (SSC) guidelines, increased awareness and early goal-directed therapy dramatically reduced sepsis-related mortality over the past two decades [3, 4]. However, one of the main challenges for the physician in the ED remains to determine the risk of deterioration for the individual patient [2]. The numerous sepsis-related biomarkers lack sensitivity and specificity for deterioration and are not readily available in the ED [5–7]. Despite the relative ease of measurement, surprisingly little is known about vital signs in relation to clinical outcomes, especially in the ED setting [8–11]. There is limited evidence that oxygen saturation and consciousness level at ED arrival are associated with mortality, and that heart rate and Glasgow coma scale (GCS) are associated with intensive care unit (ICU) admission [9, 11]. For all other vital signs, insufficient evidence is available [9, 11]. The few available studies mostly studied vital signs used in triage systems or vital signs obtained at the time of ED admission [9, 12]. Almost one third of the medical patients who arrive at the ED with normal vital signs show signs of deterioration in vital signs within 24 h [13]. Our pilot study in the ED showed that vital signs change significantly during the patient’s stay in the ED [7]. However, surprisingly little is known on how to monitor and identify deteriorating patients in the emergency department [13]. The latest SSC guidelines recommend a thorough re-evaluation of routinely measured vital signs as a parameter for response to treatment [4].
Therefore, the aim of the current study was to determine whether repeated vital sign measurements during the patient’s stay in the ED can distinguish between patients who will deteriorate within 72 h from admission and patients who will not.
Study design and setting
This study is a predefined prospective observational study, part of the Sepsis Clinical Pathway Database (SCPD) project in our emergency department (ED). The SCPD project is a prospective cohort study of medical patients presenting to the ED with fever and/or suspected infection or sepsis. Data was collected in the ED of the University Medical Center Groningen in The Netherlands, an academic tertiary care teaching hospital with over 30,000 ED visits annually.
This study was carried out in accordance with the Declaration of Helsinki, the Dutch Agreement on Medical Treatment Act and the Dutch Personal Data Protection Act. The Institutional Review Board of the University Medical Center Groningen ruled that the Dutch Medical Research Involving Human Subjects Act is not applicable for this study and granted a waiver (METc 2015/164). All participants provided written informed consent.
Study population
Data was collected between March 2016 and February 2017. Consecutive medical patients visiting the ED between 8 a.m. and 23 p.m. were screened for eligibility. Inclusion criteria were: (1) age of 18 years or older, (2) fever (> = 38 °C) or suspected infection or sepsis, (3) able to provide written informed consent. The clinical suspicion of infection or sepsis was judged by the coordinating internist acute medicine on duty. He/she handles all medical patient announcements from general practitioners or the emergency medical services (EMS), and medical patients that enter the ED without previous announcement. The judgement was based on information provided over the phone during the announcement, information obtained at triage and immediately after ED admission of the patient. Only patients with at least three repeated vital sign measurements during their first 3 h in the ED were included in the final analysis.
Data collection
The data collected in the SCPD project includes socio-demographic information, patient history, prescription drug usage, comorbidity, treatment parameters, results from routine blood analysis, questionnaires about activities of daily living, follow-up during the patient’s stay in the hospital and registration of various endpoints. The data was collected by trained members of our research staff during the patient’s stay in the ED and combined with data from the patient’s medical record for follow-up during the patient’s stay in the hospital.
For the current study, next to the data collected for all patients included in the SCPD project, we repeatedly measured vital signs in 30-min intervals during the patient’s stay in de ED. These vital signs included heart rate, respiratory rate and blood pressure, measured using a Philips MP30 or MX550 bed-side patient monitor (Philips IntelliVue System with Multi-Measurement Module; Philips, Eindhoven, The Netherlands). Furthermore, the body temperature was measured using an electronic tympanic ear thermometer (Genius 2; Mountainside Medical Equipment, Marcy, New York, USA).
All patients received treatment for infection or sepsis as per our hospital’s standardized protocol at the treating physician’s discretion. This protocol included intravenous antibiotics, fluid resuscitation and oxygen supplementation [7]. The protocol did not change during the inclusion period and was not influenced by the patient’s participation in the study. For patients arriving at the ED with EMS and (suspected) sepsis, treatment with fluid resuscitation and supplementary oxygen was started in the ambulance by EMS personnel according to the nationwide EMS guidelines for sepsis in The Netherlands [14]. The average time from EMS dispatch call to ED arrival is 40 min in The Netherlands, but actual dispatch times in this study were not measured [14]. Pre-hospital start of treatment was not influenced by the patient’s participation in the study.
Endpoints and definitions
The primary endpoint was patient deterioration within 72 h from ED admission. We defined patient deterioration as the development of organ dysfunction, ICU admission or death during the patient’s stay in the hospital. For organ dysfunction, we distinguished between acute kidney failure (AKI), liver failure and respiratory failure. AKI was defined using the Kidney Disease Improving Global Outcomes (KDIGO) criteria as an increase in serum creatinine by 26.5 μmol/L (0.3 mg/dL) within 48 h or 1.5 times the baseline (known or presumed to have occurred within the prior 7 days) [15]. Liver failure was defined as total bilirubin level > 34.2 μmol/L (2.0 mg/dL) and either alkaline phosphatase or a transaminase level above twice the normal limit [16]. Respiratory failure was defined as the need for mechanical ventilation, or either hypoxemia (PaO2 < 8.0 kPa) or hypercapnia (PaCO2 > 6.5 kPa) in the arterial blood gas analysis, or a peripheral oxygen saturation < 90% when breathing ambient air or < 95% with at least 2 L/min of oxygen supplementation [17]. In-hospital mortality was defined as all-cause mortality during the patient’s stay in the hospital. The Sepsis-2 criteria (2001 international sepsis definitions conference) were used to define sepsis, severe sepsis or septic shock, i.e. two or more systemic inflammatory response syndrome criteria and suspected/confirmed infection [18].
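As an illustration only, the KDIGO-based AKI criterion above could be operationalised roughly as follows; the function name and sample values are invented, and the 7-day baseline comparison is simplified to a single supplied baseline value.

```python
# Thresholds follow the KDIGO criterion quoted in the text: a rise in serum
# creatinine of >= 26.5 umol/L (0.3 mg/dL) within 48 h, or a peak of
# >= 1.5x the baseline value.

def has_aki(creatinine_umol_l, baseline_umol_l):
    """creatinine_umol_l: serum creatinine values within a 48 h window."""
    peak = max(creatinine_umol_l)
    rise_within_48h = peak - min(creatinine_umol_l)
    return rise_within_48h >= 26.5 or peak >= 1.5 * baseline_umol_l

assert has_aki([80, 90, 110], baseline_umol_l=80)    # rise of 30 umol/L
assert has_aki([130], baseline_umol_l=80)            # about 1.6x baseline
assert not has_aki([80, 85], baseline_umol_l=80)
```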
Statistical analysis
Continuous data were reported as median with interquartile range (IQR) and analysed using the Mann-Whitney U test. Categorical data were summarized as counts with percentages and analysed using the Chi-square test.
For each vital sign and for each patient, we used the repeated measurements to estimate the linear change and variability over time. Linear change over time was estimated using individual linear regression analysis separately for each vital sign (heart rate, respiratory rate, mean arterial pressure and temperature) with the time of the measurement (in minutes) as independent variable. The resulting regression estimates for time, indicate the linear change per minute for each patient and each vital sign. The variability of each vital sign was calculated as the difference between the highest and lowest value during the first 3 h in the ED.
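The two per-patient summaries described above (linear change per minute and variability) can be sketched as follows; the vital sign series is synthetic, not study data.

```python
def linear_change_per_minute(times_min, values):
    """Slope of an ordinary least-squares fit of values against time (min),
    i.e. the per-patient linear change used in the analysis."""
    n = len(times_min)
    mt = sum(times_min) / n
    mv = sum(values) / n
    sxy = sum((t - mt) * (v - mv) for t, v in zip(times_min, values))
    sxx = sum((t - mt) ** 2 for t in times_min)
    return sxy / sxx

def variability(values):
    """Highest minus lowest value during the first 3 h in the ED."""
    return max(values) - min(values)

# Heart rate sampled in 30-min intervals, rising 0.1 beats/min per minute:
times = [0, 30, 60, 90, 120, 150, 180]
heart_rate = [90 + 0.1 * t for t in times]

assert abs(linear_change_per_minute(times, heart_rate) - 0.1) < 1e-9
assert abs(variability(heart_rate) - 18.0) < 1e-9
```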
To analyse the added value of the linear change and variability over time of each vital sign as predictors for patient deterioration within 72 h, we performed multiple logistic regression analysis. First, we constructed a base model containing age, gender and comorbidity. The added value of each vital sign to the base model was assessed using the following logistic regression analyses: (1) base model + vital sign value at admission, (2) base model + vital sign value at admission + change of the vital sign during the first 3 h in the ED and (3) base model + vital sign value at admission + variability of the vital sign during the first 3 h in the ED. For each model, the area under the receiver operator curve (AUC) was calculated using the predicted probabilities.
All statistical analyses were performed using IBM SPSS Statistics for Windows V.23.0 (IBM Corp, Armonk, New York, USA). A two-tailed p-value of < 0.05 was considered significant.
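As an illustration of the final step, the AUC can be computed directly from predicted probabilities using the rank-based (Mann-Whitney) formulation; this is a generic sketch, not the SPSS procedure used in the study, and the example values are made up.

```python
# Rank-based AUC: the probability that a randomly chosen deteriorating
# patient receives a higher predicted probability than a randomly chosen
# non-deteriorating patient (ties count half).

def auc(probabilities, outcomes):
    pos = [p for p, y in zip(probabilities, outcomes) if y == 1]
    neg = [p for p, y in zip(probabilities, outcomes) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

predicted = [0.9, 0.8, 0.4, 0.3, 0.2]
deteriorated = [1, 0, 1, 0, 0]
assert abs(auc(predicted, deteriorated) - 5 / 6) < 1e-12  # 5 of 6 pairs won
```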
Patient characteristics
During the study period 366 patients met the inclusion criteria (Fig. 1). Seven patients were excluded because they had fewer than three repeated vital sign measurements in the emergency department (ED) during the first 3 h from admission. The remaining 359 patients were included in the final analysis. Of the 359 patients, 106 (29.5%) patients deteriorated within 72 h from admission (Table 1). Patients with cardiac disease (p = 0.004), COPD (p = 0.047) or diabetes (p = 0.002) deteriorated more often compared to patients without these comorbidities. Malignancy (28.4%) and organ transplant (26.7%) were the most frequent comorbidities (Table 2).
Fig. 1
Flow chart of patient recruitment. Consecutive adult medical patients visiting the emergency department of the University Medical Center Groningen between March 2016 and February 2017 were screened for eligibility
Patient deterioration
Signs of organ failure were observed in 21.2% of the patients at ED admission (Table 3). An additional 6.1% of the patients deteriorated in the first 24 h after admission. The increase in respiratory failure (+ 4.2%) was the largest contributor to this deterioration. In the first 48 h after admission, 3.1% of the patients deteriorated to multiple organ failure. Most deterioration took place within the first 72 h from admission (+ 8.3%), with only a small increase (+ 1.7%) during the rest of the hospitalization.
Table 3
Patient deterioration outcomes in different timeframes during the patient’s stay in-hospital and divided by infection and sepsis on emergency department presentation

| Timeframe | Acute kidney injury | Liver failure | Respiratory failure | Single organ failure | Multiple organ failure | ICU admission | In-hospital mortality | Deteriorated |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total (N = 359, 100.0%) | | | | | | | | |
| At ED admission | 45 (12.5%) | 21 (5.8%) | 14 (3.9%) | 72 (20.1%) | 4 (1.1%) | – | – | 76 (21.2%) |
| 24 h after ED admission | 51 (14.2%) | 22 (6.1%) | 29 (8.1%) | 82 (22.8%) | 10 (2.8%) | 16 (4.5%) | 1 (0.3%) | 98 (27.3%) |
| 48 h after ED admission | 57 (15.9%) | 23 (6.4%) | 33 (9.2%) | 83 (23.1%) | 15 (4.2%) | 18 (5.0%) | 1 (0.3%) | 102 (28.4%) |
| 72 h after ED admission | 60 (16.7%) | 23 (6.4%) | 35 (9.7%) | 87 (24.2%) | 15 (4.2%)x | 18 (5.0%) | 3 (0.8%) | 106 (29.5%) |
| Until hospital discharge | 70 (19.5%) | 26 (7.2%) | 43 (12.0%) | 87 (24.2%) | 24 (6.7%)xx | 22 (6.1%) | 12 (3.3%) | 112 (31.2%) |
| Infection (N = 82, 22.8%) | | | | | | | | |
| At ED admission | 6 (7.3%) | 4 (4.9%) | 3 (3.7%) | 11 (13.4%) | 1 (1.2%) | – | – | 12 (14.6%) |
| 24 h after ED admission | 7 (8.5%) | 4 (4.9%) | 4 (4.9%) | 11 (13.4%) | 2 (2.4%) | 2 (2.4%) | 0 (0.0%) | 15 (18.3%) |
| 48 h after ED admission | 7 (8.5%) | 4 (4.9%) | 5 (6.1%) | 12 (14.6%) | | | | |
| 72 h after ED admission | | | | | | | | |
| Until hospital discharge | 10 (12.2%) | 6 (7.3%) | 6 (7.3%) | 14 (17.1%) | 4 (4.9%) | 3 (6.7%) | 1 (1.2%) | 18 (22.0%) |
| Sepsis (N = 277, 77.2%) | | | | | | | | |
| At ED admission | 39 (14.1%) | 17 (6.1%) | 11 (4.0%) | 61 (22.0%) | 3 (1.1%) | – | – | 64 (23.1%) |
| 24 h after ED admission | 44 (15.9%) | 18 (6.5%) | 25 (9.0%) | 71 (25.6%) | 8 (2.9%) | 14 (5.1%) | 1 (0.4%) | 83 (30.0%) |
| 48 h after ED admission | 50 (18.1%) | 19 (6.9%) | 28 (10.1%) | 71 (25.6%) | 13 (4.7%) | 16 (5.8%) | 1 (0.4%) | 87 (31.4%) |
| 72 h after ED admission | 53 (19.1%) | 19 (6.9%) | 30 (10.8%) | 75 (27.1%) | 13 (4.7%)x | 16 (5.8%) | 3 (1.1%) | 91 (32.9%) |
| Until hospital discharge | 60 (21.7%) | 20 (7.2%) | 37 (13.4%) | 73 (26.4%) | 20 (7.2%)xx | 19 (6.9%) | 11 (4.0%) | 94 (33.9%) |

ED: emergency department; x of which one patient with all three organ systems failing; xx of which four patients with all three organ systems failing
In the patients who presented with infection, 14.6% had signs of organ failure at ED admission (Table 3). An additional 3.7% of the patients with infection deteriorated in the first 24 h after admission. Two patients (2.4%) required ICU admission and one patient (1.2%) developed multiple organ failure. In the remainder of the first 72 h, no additional patients deteriorated.
Of the patients with sepsis, 23.1% had signs of organ failure at ED admission (Table 3). Most of them had AKI (14.1%). In the first 24 h after admission, an additional 6.9% of the patients with sepsis deteriorated, mostly due to respiratory failure (+ 5.0%). An additional 1.8% of the patients deteriorated to multiple organ failure, and after 48 h another 1.8% of the patients had developed multiple organ failure. After 72 h, one patient had multiple organ failure in all three organ systems. During the rest of the hospitalization, only an additional 1% of the patients deteriorated. In the remainder of this article, we use the first 72 h of admission as the timeframe for patient deterioration.
Age and diabetes associated with higher risk of deterioration
The logistic regression base model for patient deterioration, including age, gender and comorbidities, yielded an AUC of 0.679 (Table 4). A higher age (odds ratio (OR): 1.02 per year) and a history of diabetes (OR: 1.90) were associated with a higher risk of patient deterioration. Gender and comorbidities other than diabetes were not independent predictors of deterioration.
Table 4
Logistic regression models for deterioration within 72 h from admission based on repeated vital sign measurements with a 30-min interval during the first 3 h of ED admission
a Missing observations, or observations that were constant within the measured time period, are excluded from the regression model; b the AUC of the base model including only patients with a respiratory rate at admission was 0.638
Receiver operating characteristic (ROC) curves of the logistic regression models for patient deterioration using various repeated vital sign measurements in 30-min intervals during the first three hours of the patient’s stay in the emergency department. The base model includes age, gender and comorbidities. Model M1 contains the base model combined with the value of the vital sign at admission, model M2 contains model M1 combined with the change of the vital sign over time, and model M3 contains model M1 combined with the variability of the vital sign. A) the ROC curve for the base model combined with heart rate (HR). B) the ROC curve for the base model combined with mean arterial pressure (MAP). C) the ROC curve for the base model combined with respiratory rate (RR). * Base model only including patients with respiratory rate at admission (AUC 0.638). D) the ROC curve for the base model combined with body temperature (BT)
In addition to the vital signs at ED admission, the change and the variability of the repeated vital sign measurements in the first 3 h in the ED were entered into the base model (Table 4, Fig. 2). An increase in MAP over time was associated with a lower risk of deterioration (OR: 0.873 per unit increase; model MAP-M2; AUC 0.758). An increase in respiratory rate over time was associated with a higher risk of deterioration (OR: 1.441 per unit increase; model RR-M2; AUC 0.686). The changes in heart rate and temperature were not independently associated with deterioration.
In addition, a higher variability in MAP (i.e. a higher range), together with the MAP at ED admission and its change over time, was significantly associated with a higher risk of deterioration (OR: 1.06 per mmHg; model MAP-M3; AUC 0.800; Table 4, Fig. 2). The variability of the other vital signs was not associated with the risk of deterioration.
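To make the three model ingredients concrete, the sketch below derives, for a single vital sign, the admission value (used in model M1), a change over time (M2) and a variability (M3) from a series of 30-min readings. The least-squares slope and the range are our illustrative choices, not necessarily the paper's exact statistical definitions, and the function name is hypothetical:

```python
# Hypothetical sketch: per-patient features for one vital sign measured
# repeatedly at a 30-min interval over the first 3 h in the ED.
def vital_features(values, interval_min=30):
    """Return (admission value, change per interval, variability).

    Assumes at least two measurements. `values` holds successive readings
    of one vital sign (e.g. MAP in mmHg).
    """
    t = [i * interval_min for i in range(len(values))]  # minutes from admission
    n = len(values)
    admission = values[0]                      # value at ED admission (M1)
    mean_t = sum(t) / n
    mean_v = sum(values) / n
    # least-squares slope (units per minute) of value against time
    slope = (sum((ti - mean_t) * (vi - mean_v) for ti, vi in zip(t, values))
             / sum((ti - mean_t) ** 2 for ti in t))
    change = slope * interval_min              # change per 30-min step (M2)
    variability = max(values) - min(values)    # range over the period (M3)
    return admission, change, variability

# Example: MAP readings drifting downward over the first 3 h
adm, chg, var = vital_features([85, 82, 80, 78, 75, 74, 72])
```

In this example the negative `chg` mirrors the finding that a *decrease* in MAP over time goes with a higher risk of deterioration, and `var` captures the range entered into model M3.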
The aim of our study was to determine whether repeated vital sign measurements in the ED can identify patients with sepsis or infection who will deteriorate within 72 h. We found that an increase in MAP over time was associated with a lower risk of deterioration, and that a higher variability of the MAP or an increase in respiratory rate over time, in combination with their respective values at ED admission, was associated with patient deterioration. Inclusion of repeated MAP measurements resulted in the largest AUC (0.800), whereas repeated respiratory rate measurements only slightly improved the predictive capabilities of the logistic regression model over the base model. Repeated measurements of heart rate and body temperature were not associated with patient deterioration.
Our results indicate that changes and variability of the MAP are associated with patient deterioration in ED patients with infection or sepsis. This suggests that keeping a close eye on the MAP during the patient’s stay in the ED is important. Our study shows that this applies not only to patients with septic shock (only 1.9% of our population), as recommended by the Surviving Sepsis Campaign (SSC) guidelines, but to all patients with sepsis or infection [4].
Apart from our earlier pilot study, little is known about repeated vital sign measurements in patients with infection or sepsis during their stay in the ED in relation to clinical outcomes, patient deterioration and (early) signs of organ failure. Our pilot study showed that vital signs changed significantly during the patient’s stay in the ED, but did not analyse patient deterioration [7]. Henriksen et al. retrospectively found a deterioration of vital signs from the normal to the abnormal range within 4–13 h after arrival in 31% of patients in the general ED population, leading to a four times higher 30-day mortality risk [13]. The available studies on vital signs in the ED mostly use only single measurements, mainly at triage [9, 12]. Furthermore, these were often retrospective studies, in contrast to our study. Finally, they often included the general ED population and thus a more heterogeneous population. The endpoints and cut-off values differ from study to study: most studies used mortality endpoints, several studies had ICU admission as an endpoint, and only a few studies included organ failure [8, 11, 13, 19–22]. The single measurements, heterogeneous patient populations and different endpoints make a direct comparison of those results with our study’s results impossible. Coslovsky et al. aimed to develop a prediction model for in-hospital mortality using a model with age, prolonged capillary refill, blood pressure, mechanical ventilation, oxygen saturation index, GCS and the APACHE II diagnostic category in a cohort that contained 15% patients with infection, among which 7.3% with sepsis. Their model had an AUC of 0.92, although it should be noted that their model was based on a heterogeneous patient population, single measurements and a combination of multiple vital signs [23]. Yamamoto et al. found an association between low body temperature (< 36 °C) at ED admission and a higher 30-day in-hospital mortality risk in patients with suspected sepsis [24].
In our study, we did not find an association between body temperature and deterioration. Furthermore, it should be noted that the in-hospital mortality in our study (3.3%) is much lower than in the study by Yamamoto et al. (9.6%). In summary, the available studies did not specifically investigate ED patients with infection or sepsis, mostly used single vital sign measurements (at triage) and primarily had mortality or ICU admission endpoints.
Early warning scores (EWS), like the national early warning score (NEWS) and its many variants and related scores, are increasingly being used throughout healthcare. These EWS commonly contain a combination of various vital sign parameters, supplemented with laboratory values or other items, where each item is scored at certain thresholds. Early warning scores are mostly used as ‘track-and-trigger’ systems to trigger the nurse to call the physician or a rapid response team, or to predict a high risk of mortality or ICU admission [25, 26]. The many different EWS and the patient populations in which they have been validated make it difficult to compare their performance. However, a recent review by Nannan Panday et al. showed that the NEWS score was the best predictor of mortality or ICU admission in the general ED population, and the modified early warning score was the best in patients with suspected infection or sepsis [25]. Their performance (AUC) was in the same range as we found for our repeated blood pressure measurements (MAP). However, it should be noted that we used repeated measurements of only a single vital sign and had a composite outcome of patient deterioration, which included signs of organ dysfunction. Another recent study, by Kivipuro et al., showed that the NEWS score was significantly higher before ICU admission when a patient was transferred from the ward to the ICU, compared to the NEWS score of the same patient at the ED [27]. In our hospital, modified early warning scores (MEWS) are taken at admission to the ward and thereafter three times per day. Deterioration of the MEWS score triggers an early response team. Further research is needed to clarify whether repeated vital sign measurements in combination with repeated early warning scores are useful in the detection of patient deterioration in patients with sepsis or infection.
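For readers unfamiliar with track-and-trigger scores, the sketch below shows their general shape: each vital sign is scored against threshold bands and the points are summed, with a high total triggering escalation. The bands, parameters and trigger level here are simplified placeholders, not the official NEWS or MEWS values:

```python
# Generic threshold-band scoring, as used by early warning scores (EWS).
# All bands and the trigger level below are illustrative placeholders only.

def band_score(value, bands):
    """Score one vital sign; bands: list of (lower_incl, upper_excl, points)."""
    for lo, hi, points in bands:
        if lo <= value < hi:
            return points
    return 3  # outside every listed band: maximal concern for this parameter

RESP_BANDS = [(12, 21, 0), (9, 12, 1), (21, 25, 2)]                  # breaths/min
HR_BANDS = [(51, 91, 0), (41, 51, 1), (91, 111, 1), (111, 131, 2)]   # beats/min
SBP_BANDS = [(111, 220, 0), (101, 111, 1), (91, 101, 2)]             # mmHg

def ews_total(resp_rate, heart_rate, systolic_bp):
    """Sum the per-parameter points into a single track-and-trigger score."""
    return (band_score(resp_rate, RESP_BANDS)
            + band_score(heart_rate, HR_BANDS)
            + band_score(systolic_bp, SBP_BANDS))

TRIGGER = 5  # a total at or above this would alert a rapid response team
```

A patient with normal vitals scores 0, while a tachypnoeic, tachycardic, hypotensive patient accumulates points across parameters and crosses the trigger threshold.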
We have shown that almost 30% of the patients presenting to the ED with suspected infection or sepsis deteriorated within 72 h of admission and over 28% of the patients showed signs of (multiple) organ failure despite treatment. Our results show that 18.3% of the patients with infection, 32.9% of the patients with sepsis and, in total, 29.5% of the patients deteriorated within 72 h (Table 3). Glickman et al. showed that almost 23% of patients with uncomplicated sepsis progress to severe sepsis or septic shock within 72 h from admission [1]. Although a direct comparison cannot be made because of a different population and different endpoints, these results clearly show that a large proportion of the patients with infection deteriorate in the first days in the hospital and develop (severe) sepsis. Therefore, we question whether the introduction of the recent Sepsis-3 definitions, in which infection or uncomplicated sepsis are no longer part of the sepsis severity spectrum, will lead to better patient care [28]. We would like to emphasise that it is important to properly monitor and treat all patients with infection or sepsis in the ED. Since sepsis-related mortality has decreased dramatically over the past two decades, we believe that early detection or prevention of organ failure is where the future focus of infection/sepsis research should be, since there is a lot to gain [29].
The 30-min measurement interval in the current study was chosen arbitrarily, since there is no standard for how often vital signs should be measured in the ED and little research has been conducted on this topic. Descriptive studies in the general ED population have shown that the time between two measurements is between 67 and 130 min and that a higher illness severity results in more frequent measurements [10, 30]. We believe that these measurement intervals are not representative for patients with infection or sepsis; however, there are no specific guidelines on how often vital signs should be measured in these patients [13]. The 30-min measurement interval in our study was much more frequent than the median intervals reported by Johnson and Lambe [10, 30]. A higher measurement frequency might provide even more information about deterioration, although it might also increase the burden on the patient and staff. Therefore, we recommend continuous measurement of vital signs on a beat-to-beat level, preferably automated with the use of bedside patient monitors or wearable devices [3]. Our next step, as a follow-up of this study, is to shorten the measurement interval to a beat-to-beat interval with heart rate variability (HRV) measured using bedside patient monitors in the SepsiVit study [3]. As we have shown, a substantial number of patients deteriorate in the first days from admission. In the currently running SepsiVit study, we will extend the measurements beyond the boundaries of the ED towards the nursing wards during the first 48 h of hospitalization. During this period, we will investigate whether the combination of HRV with monitoring on the nursing wards can provide an early warning of patient deterioration. Such an early warning could provide a possible opportunity for intervention in the future.
Strengths and limitations
To the best of our knowledge, this is the first study that prospectively investigated the relation between repeated vital sign measurements and patient deterioration in ED patients with infection or sepsis. We not only used the common mortality and ICU admission endpoints, but also included signs of organ failure in our composite patient deterioration endpoint. Vital signs can be easily measured with equipment readily available in every ED. The repeated vital sign measurements in our study were obtained specifically by a trained member of our research staff, which minimized the amount of missing data. However, in spite of the prospective study design, 92 (25%) respiratory rate measurements were not recorded at triage by the triage nurse. It is well known that respiratory rate is the most frequently missing vital sign; unfortunately, our study is no exception [31]. These missing respiratory rate measurements at triage limit the power of our logistic regression models that include respiratory rate (RR-Mx; Table 4).
Another limitation of our study is that it is a single-centre study in an academic tertiary care teaching hospital. This may limit the generalizability to other patient populations, especially since our population contains a high number of patients with a history of organ transplantation (Table 2). However, a history of organ transplantation was not independently associated with patient deterioration in our models (Table 4). Therefore, we believe that this specific patient population did not have a substantial influence on our results. We did not design the study to analyse combinations of multiple vital signs in our models, since we were interested in identifying which repeated vital sign measurements are helpful in predicting patient deterioration, not in finding the best combination of vital signs. We acknowledge that a combination of repeated vital signs may provide even more information in future studies, perhaps in combination with repeated early warning scores.
Clinical implications
We have shown that more than one in four patients presenting to the ED with suspected infection or sepsis deteriorated within 72 h of admission and showed signs of organ failure. These were not exclusively patients with sepsis at admission: one in five patients who presented to the ED with infection only also deteriorated. Although the organ failure generally did not result in mortality, it may be preventable or treatable. Our results show that repeated vital sign measurement (especially of blood pressure) in the ED is a predictor of patient deterioration and might enable a reduction of organ failure-related morbidity. It is thus important to reassess patients in the ED frequently, including measurement of vital signs, as is done on the wards with early warning scores [29]. Although it is known that patient deterioration is often preceded by changes in vital signs several hours before the event, these signs are frequently missed on general wards [25, 27, 32]. At this moment, we are conducting a subsequent study (the SepsiVit study) with 48 h of continuous vital sign measurements in the ED and on the general wards to test the hypothesis that high-frequency repeated vital sign measurement on the general ward is better at predicting patient deterioration than the currently used systems [3]. Until this information from the SepsiVit study becomes available, we assess patients in the ED frequently, including repeated vital sign measurements.
Repeated measurements of vital signs in the ED are better at identifying patients at risk of deterioration within 72 h from admission than single vital sign measurements at ED admission. Repeated measurements of MAP and respiratory rate are associated with patient deterioration. Since almost one third of patients presenting to the ED with infection or sepsis deteriorate within 72 h, repeated vital sign measurements may be an important way to ensure early identification of deterioration.
Acknowledgements
The authors thank the nurses and physicians in our emergency department for their assistance during the acquisition of the data. We thank the members of the Sepsis Research Team in our emergency department for their efforts in collecting the data.
Availability of data and material
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Funding
This study is funded by the emergency department of the University Medical Center Groningen. VMQ received a MD-PhD scholarship from the University of Groningen, University Medical Center Groningen for his PhD research.
Authors’ contributions
VMQ drafted the study design, assisted with data acquisition, carried out data analysis and drafted the manuscript. MvM participated in the study design, assisted with data interpretation and critically revised the manuscript. TJO participated in the study design, assisted with data acquisition, and critically revised the manuscript. JMV carried out data analysis, assisted with data interpretation and critically revised the manuscript. JJML participated in the study design, assisted with data interpretation and critically revised the manuscript. JCtM participated in the study design, assisted with data interpretation, critically revised the manuscript and has given final approval of the version to be published.
Ethics approval and consent to participate
This study was carried out in accordance to the Declaration of Helsinki, the Dutch Agreement on Medical Treatment Act and the Dutch Personal Data Protection Act. The Institutional Review Board of the University Medical Center Groningen ruled that the Dutch Medical Research Involving Human Subjects Act is not applicable for this study and granted a waiver (METc 2015/164). All participants provided written informed consent.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
University Health Network (UHN) is looking for an experienced professional to fill the key role of Senior Organization Development Advisor, providing solutions and consultative services to internal clients ranging from technical professionals to senior leadership to make a difference in patients’ lives. Supporting effective leadership practices and increasing capacity is our objective. You will apply expertise in the implementation, co-design, delivery and evaluation of:
- Talent management solutions including selection, training, coaching, performance management, and succession planning to guide the progression of leaders based on UHN’s leadership strategy.
- Engagement surveys and action planning, including vendor management; planning, communications and project management of surveys and reports; researching best practices; co-designing solutions; and supporting clients to action plan based on results to increase engagement.
Join us in an environment where the appetite for skills and knowledge is high.
- University degree required; relevant post-graduate degree desirable
- A minimum of 10 years of leadership talent management experience
- 5+ years of experience with all aspects of managing a team of direct-report staff
- Strong project planning skills to manage multiple demands in parallel
- Experience with change management practices to support organization initiatives, and to increase leadership capability to effectively lead those change initiatives
- Demonstrated experience applying adult learning, self-awareness and change principles to the design of experiential learning that converts to changed habits and leadership practices
- Expert facilitation and problem-solving skills at all levels of leadership
- Superior interpersonal skills; able to gain trust and partner with peers and leaders on difficult tasks and conversations
- Able to use resourceful approaches to arrive at cost-effective, simple solutions
You will join a team with diverse experience that is seeking a peer who can demonstrate, with specific examples and achievements:
- experience delivering talent management solutions, including a variety of solutions delivered over time, with specific results to demonstrate effectiveness
- experience planning and delivering employee engagement surveys through all phases, including outcomes over time and effectiveness
- practical experience and lessons learned managing staff as a credible people leader
- experience planning and coaching others through change
Interested applicants should apply online at the University Health Network Careers Site. While we will not be conducting in-person interviews during the COVID-19 pandemic, screening and online interviews are still in progress. Thank you for passing this posting on! | https://www.todn.org/jobs
A business model is, in essence, an approximation to reality in pretty much the same way a scientific model is. Models are built with partial knowledge and try to predict the future; they are based on hypotheses and have a certain degree of uncertainty.
Apparently, entrepreneurs who treat their business models with a more scientific approach show slightly better performance at pivoting to new ideas; that is, they identify the problems with their original plan earlier. Surprisingly, there's no clear evidence that they stop their process earlier.
Having a scientific approach to validating ideas means that one should establish thresholds to define success or failure early (avoiding the post hoc ergo propter hoc fallacy). It also helps identify biases in sample selection (for instance, asking only friends) and helps plan the proper experiments (experiments for idea validation).
An interesting idea is that since most startups fail, there's a large number of false positive observations in the process.
Note
On the other hand, false negatives would be ruled out, since those companies would automatically not exist; the authors, however, do not comment on that.
It should also be remembered that the scientific method carries its own biases and limitations (for example: citations make science conservative), which may permeate the business validation process as well.
An interesting derivation from the model is the opportunity for learning that the scientific method enables. A properly conducted experiment that returns negative results is still valuable from the perspective of pivoting into a new direction. This idea correlates with the observation that scientific entrepreneurs pivot more quickly.
Backlinks
These are the other notes that link to this one. | https://notes.aquiles.me/literature/202201061619_treating_a_business_model_as_a_scientific_model/ |
DENVER — A proposed bill that would ban declawing cats was unanimously passed at a Denver City Council meeting in Colorado on Monday.
WUSA reported that the ordinance is effective immediately.
Declawing occurs in a procedure known as onychectomy. In the operation, the animal's claws and most or all of the last bone of each front toe are removed, and nerves, tendons and ligaments are severed.
“Onychectomy is an amputation and should be regarded as a major surgery,” according to the American Veterinary Medical Association. The Humane Society of the United States says the effects of the surgery can include death of tissue, paw pain, infection, back pain, lameness and death.
According to The Denver Post, practicing veterinarian Casara Andre said declawing can be performed in a way that prevents pain for the pet, although she opposes the surgery as a routine operation.
“A decision to declaw a cat is affected by many human and animal factors,” Andre said at a public hearing Nov. 6. “The well-being of the animal and their human family is best defended by providing owners with education about alternatives to declawing, appropriate training for family cats, and well-informed discussions between that pet owner and their veterinary medicine provider.”
Kirsten Butler, a veterinary technician, said she no longer participates in the procedures.
“Having run anesthesia on declaw procedures, I can tell you it is an awkward and disheartening feeling to keep something alive while it is mutilated in front of you,” she said at the hour-long hearing.
Eight cities in California, including Los Angeles and San Francisco, have passed bans on declawing. Australia, Japan, Brazil, Israel and multiple countries in Europe also have similar bans.
Published: Saturday, April 21, 2018 @ 11:45 AM
DECATUR, Ga. — A stray dog who lingered around a former Publix grocery store in metro Atlanta for a year has finally found a loving home. The story behind the adoption is heartwarming.
The animal shelter PAWS Atlanta posted Publix's story Friday on its Facebook page. Shelter staff said a woman came rushing to the office Thursday as it was closing, asking if they had Publix.
Publix was a dog that shelter staff and concerned residents had been trying to rescue for a year. In December, shelter staff were able to rescue the dog and house him at the shelter. The woman who visited the facility Thursday evening knew the dog well; she called him Buddy. She said the dog would visit her shop and she would feed him, and the two had developed a close bond. When the dog disappeared in December, she had feared the worst.
The woman was overjoyed to learn that PAWS Atlanta had Publix, after a friend alerted her when she saw the photo of the dog on the shelter's Instagram feed.
Shelter staff said Publix was thrilled to see the woman who had been so kind to him, and that they'd never seen Publix so happy or animated.
Published: Friday, March 23, 2018 @ 11:25 AM
Updated: Friday, March 23, 2018 @ 11:25 AM
— Owning a dog can be extremely rewarding, but if you're a pet parent who lives in the heart of a city or in an apartment, you might face a few extra challenges.
From a lack of yard space to nearby neighbors who can easily hear your dog barking, you may need to make some adjustments for the good of your lifestyle and your neighbors.
Try these seven hacks for a safe, happy city or apartment life with your pooch:
1. Choose the right breed.
If you haven't yet become a pet parent, choose a dog with your living situation in mind, according to this Pets Best Insurance blog. A puppy may be more rambunctious and need more bathroom breaks than an older dog. And while you might assume that larger dogs won't work well in the city or in an apartment, that's not necessarily true. Depending on the breed, they may bark less and be less energetic than smaller dogs.
2. Prepare for potty trips.
If you live several stories up in an apartment building, potty trips outside can be more of a hassle. You may have to improvise by using some training pads or trying a dog potty with real or synthetic grass, according to the experts at Bella’s House and Pet Sitting. Disposable and permanent versions are available, and you can place them inside or outside on a balcony.
3. Help your dog adapt.
If you have a new dog or one that's used to a different living environment, he or she may need time to adjust to city or apartment living. Introduce your pet slowly to the sounds of traffic, neighbors, and other animals, giving him or her extra attention and time to feel safe.
4. Help your pooch get plenty of exercise.
Your dog will require plenty of exercise and will need to be walked at least two to three times a day. For outdoor playtime in some wide-open spaces, try one of Atlanta's best dog parks, where you and your dog can socialize.
5. Protect your dog's paws.
As the summer sun heats up Atlanta's asphalt and concrete, it can be dangerous for your dog's paws. If you're taking your dog for a walk in hot weather, check the pavement for heat by putting the back of your hand on it for at least seven seconds. If it's too hot, stick to grassy surfaces, wait until a cooler part of the day, or invest in some dog booties.
6. Use and swap out toys.
Leave your dog some toys to play with to keep him or her from getting bored and destructive when you're not home. A few Kong toys – which have hollow centers to put treats inside – can help provide some stimulation and entertainment while you're gone. And it never hurts to swap out an old toy and add a new one to the mix now and then to keep your dog interested.
7. Get some help.
Published: Wednesday, March 21, 2018 @ 8:34 AM
FORT PIERCE, Fla. — Perry Martin probably can’t stop wondering about his cat.
"T2 was reunited with his dad after being missing for 14 YEARS! He went missing in 2004 during hurricane season and..." Posted by Humane Society of the Treasure Coast on Tuesday, March 13, 2018
In 2004, the orange tabby Thomas 2, or simply just “T2,” disappeared.
It happened when the Fort Pierce man moved into a friend’s house in Stuart after Hurricane Jeanne stormed through the area, according to TCPalm.
The retired K-9 officer grieved, but then came to terms with the idea that his cat had moved on to other ventures, or to that great catnap in the sky.
That all changed on March 9 with a phone call.
“Someone said, 'What if we told you T2 was alive?' I figured it was a mistake," Martin told TCPalm. "It was too crazy to believe."
Worn and weary, the fiery feline was found wandering the streets of Palm City.
He was brought into the shelter, where a scan of his skinny shoulder detected a microchip, which eventually led him back to Martin.
Next thing you know, the tabby, now 18 years old, is back snuggling on his owner’s lap.
The cat is content, but Martin’s questioning persists.
"Could you imagine if he could talk for just 15 minutes to tell us what he's been through?" Martin told TCPalm. "He'd probably say, 'Why did you keep the door shut, Dad?'"
Read more at TCPalm.
Published: Tuesday, March 06, 2018 @ 11:23 AM
GALESBURG, Ill. — The dogs awaiting adoption at one Illinois animal shelter no longer have to sleep on a cold floor.
The Knox County Humane Society posted a Facebook video Monday of their adoptable dogs lounging comfortably in donated chairs. Goober, Mickey, Tango and Buster Brown are seen making themselves at home on the chairs until they find their forever home.
Dr. Ruby Edet appointed Manager, Anti-Racism & Anti-Oppression & System Change Somerset West Community Health Centre
Nigerian-born Dr. Ruby Edet has been appointed to a one-year pilot position as Manager, Anti-Racism & Anti-Oppression & System Change, with the Somerset West Community Health Centre. An announcement by Executive Director Naini Cloutier indicated that the new appointee will be responsible for the ongoing development, service delivery, and evaluation of the Ottawa Newcomer Health Centre, Mental Health and Counselling, the Black Mental Health Coalition, ACB HIV Prevention Programs (including Anonymous HIV Testing), and other anti-racism community initiatives. According to her, this position is structured to support the centre’s programs by advancing its commitment to anti-racism, anti-oppression, and systemic change. The incumbent will collaborate with staff, managers, volunteers, communities, and partners to develop SWCHC programs that reduce or eliminate inequities, oppression, and racism, resulting in improved health outcomes. She will also support leadership, program development, and stronger partnerships with respect to health and mental health for immigrants, refugees, and racialized communities.
Dr. Ruby Edet holds a bachelor’s degree in Medicine and Surgery (MBBS) from the University of Jos in Nigeria, West Africa. She also holds numerous certifications from the University of Washington, including epidemiology, conducting research, project management in global health, and leadership and management in health, among others.
She has over 10 years of experience working with key stakeholders across the international public health landscape, as well as significant experience working with marginalized communities in Ottawa.
Ruby Edet is passionate about improving health and wellbeing, and ensuring better health care access for marginalized communities, including the African, Caribbean & Black (ACB) communities of Ottawa.
Q-methodology (also known as Q-sort) is the systematic study of participant viewpoints. Q-methodology is used to investigate the perspectives of participants who represent different stances on an issue, by having participants rank and sort a series of statements.
Q-sort is a mixed methodology. It uses the qualitative judgements of the researcher in defining the problem, developing statements to investigate the perspectives of participants (some of the statements may be developed after interviewing key informants), and selecting participants. It uses quantitative methods of analysis. It can be very helpful in unearthing perspectives without requiring participants to articulate these clearly themselves. It is a useful complement to a range of other objective evaluation measures. For example, Q-methodology can be used to examine teachers’ perspectives on teaching as part of an evaluation of a school district. Other evaluation measures can include test scores, attendance, and completion.
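As a rough illustration of the quantitative side of Q-sort, the sketch below correlates the forced-distribution rankings of hypothetical participants; the statements, rankings, and `pearson` helper are illustrative assumptions, not taken from any published Q study.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length rankings."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical Q-sorts: each participant ranks 6 statements on a forced
# scale from -2 (most disagree) to +2 (most agree).
participant_a = [ 2,  1,  0, -1, -2,  0]
participant_b = [ 2,  0,  1, -2, -1,  0]   # similar viewpoint to A
participant_c = [-2, -1,  0,  2,  1,  0]   # opposing viewpoint

# Q methodology correlates *people*, not variables: a high correlation
# between two sorts suggests a shared perspective (a candidate "factor").
print(round(pearson(participant_a, participant_b), 2))  # 0.8
print(round(pearson(participant_a, participant_c), 2))  # -0.9
```

In a full Q study these person-by-person correlations would then be factor-analyzed to identify clusters of shared viewpoints.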
Operant Subjectivity: This journal from the International Society for the Scientific Study of Subjectivity aims to provide the latest research and opinion on Q Methodology in order to foster a greater understanding of subjectivity.
Q methodology: A sneak preview: This paper, authored by Job van Exel and Gjalt de Graaf, aims to provide a detailed overview of the basics of Q methodology for those who need easily accessible information on this research strategy.
These journals have been assigned a ranking (A*, A, B, C) for use with the ARC's ERA exercise. The ranking, and the suitability of the journal for a mathematics FoR classification code, has been determined by a working group consisting of academic mathematicians (members of AustMS) and statisticians (members of SSAI).
Please note that these are not necessarily the final journal FoRs and ranks to be used for the ARC's ERA 2010 evaluation.
For the journals you publish in, you may wish to check the spreadsheet field headed Rank, or look for this information within one or more of the web-pages linked-to below.
These pages include hyperlinks to some journals' websites, and to Ulrich's web (account required), for easy confirmation of details regarding each journal's ISSN and other data. Any errors in this factual data should be reported directly to Ulrich's, or using the Feedback link below.
Some journals were not able to be ranked appropriately on the mathematics list because inclusion would have meant that a purely mathematical journal had to be downgraded.
In such cases we have assumed that the ARC will allow mathematical articles in these journals to be claimed as mathematics publications with an appropriate FoR code nominated by the author. This rule of thumb has been used, for instance, to omit all of the IEEE journals from the 0102 and 0103 lists.
The following journals may have had rankings from an earlier ERA process, but are not included in the current ERA for any of a number of reasons; e.g. the journal has not published since 2001, it only started publishing in 2009, the name has changed, an English translation is available under a different ERA-ID, the journal is not peer-reviewed, etc.
As advised by the ARC, submissions of journals for the ERA 2010 round closed on 2 November 2009.
New submissions are no longer possible.
The final list should appear shortly on the ARC website: http://www.arc.gov.au/era/era_journal_list.htm.
The ARC expects to reinstate its new journal form on that same page, once the ERA 2010 list has been posted. This will enable submissions to be made for future ERA rounds.
Attempts to submit new journal data using the Feedback link below are not sufficient to get information to the ARC. Such attempts will not be acknowledged, though any corrections to factual information, such as ISSN codes and starting dates, will be accepted this way for future reference.
1. Field of the Invention
This invention relates to disk drive suspensions, and more particularly to wireless suspensions in which the flexure comprises a laminate of plastic film, trace conductors, and a metal layer. In a specific aspect, the invention relates to improvements in the design and manufacture of suspension flexures to have capacitance and thus impedance values controlled separately for each trace and each pair of traces to have constant values throughout their lengths, as desired, and the same or different absolute impedance values for optimizing read and write circuit pairs, and to accommodate mechanical limitations of the flexure design, as needed for optimum flexure performance.
The invention achieves control of impedance by increasing or decreasing capacitance (to correspondingly decrease or increase impedance) of each conductive trace of a pair or pair of conductive traces through modification of trace width, and/or spacing from adjacent traces, or by changes in effective length by varying the length of a trace between defined places on the flexure, while not altering the distance between those defined places.
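The inverse relationship between capacitance and impedance invoked here follows from the standard lossless transmission-line formula Z0 = sqrt(L/C), which is general background rather than language from the specification. A minimal sketch, with purely illustrative per-unit-length values:

```python
import math

def z0(L_per_m, C_per_m):
    """Characteristic impedance of a lossless line: Z0 = sqrt(L/C)."""
    return math.sqrt(L_per_m / C_per_m)

# Purely illustrative per-unit-length values, not from any real flexure:
L = 350e-9    # inductance, 350 nH/m
C = 100e-12   # capacitance, 100 pF/m

base = z0(L, C)
wider = z0(L, 1.5 * C)   # e.g. widening a trace raises C per unit length
print(round(base, 1), round(wider, 1))  # 59.2 48.3
assert wider < base      # more capacitance -> lower impedance
```

This is why the modifications described below (trace width, spacing, effective length) work through capacitance: raising C at a locus lowers Z0 there, and vice versa.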
2. Description of the Related Art
Existing trace flexure designs have layouts that feature conductive trace paths generally as straight as possible, with any change in path direction being curved with as large a radius as possible. Generally, with multiple conductive traces routed side by side, the spacing between the traces will be at least locally uniform; that is, in any given portion of the trace flexure, the spacing between adjacent conductive traces and the trace width will not vary much. If, however, the traces go to different destinations on the layout after being routed alongside each other, such as occurs when the traces are divided to reach both sides of a slider along the outriggers, the spacing, and with it the impedance, will change locally.
A currently typical spacing for 0.0016 inch wide and 5 to 20 micron thick conductive traces in a wireless flexure is a 40 micron space laterally between adjacent traces (1 micron equals 1 micrometer, or about 40 microinches). The trace thickness is determined by the flexure laminate, and is assumed to be fixed by the laminate supplier for purposes of this invention.
This width and spacing combination results in a characteristic impedance referred to as Z0, for the device and for each pair of conductive traces considered. Presently used trace flexures have a characteristic impedance of 30 or so ohms.
The putative "characteristic impedance" is not found at all points along the trace or pair of traces. Rather, many variations from that impedance are found. This occurs, for example, in a disk drive suspension flexure because Z0 changes with differences in trace cross section or trace spacing arising from diverging trace pairs, and changes in routing and bends in traces around mechanical features of the flexure, such as tooling holes or weld points. It is, however, desirable that the flexure have a Z0 that is constant along the length of the traces and trace pairs throughout the flexure, and in the case of read-write circuits, constant at about 110 ohms for the write lines and at about 60 ohms for the read lines.
As the operating frequency of the disk drive increases, the importance of adhering to the constant and correct impedance becomes ever more important. Any change in impedance Z0 causes a reflection of the signal being sent along that path; the reflection represents wasted signal and an increase in noise. Both wasted signal (lost signal strength) and increased noise effectively decrease the signal to noise ratio (SNR) and thus decrease this important measure of the quality of an electronic device.
Historically, trace flexure layouts have, to the good, had generally large-radius curves, but also uniform trace widths and spacing, all in accordance with the design rules of printed circuit boards (PCBs), the technology preceding flex circuits and wireless flexures. But PCBs were a different animal from flexures, since there device placement was at the whim of the PCB designer and mechanical constraints, so important in flexures, were not really a factor. Because the PCB designer could place the devices to be connected anywhere he wanted to, constant width and spacing of conductors was a design that was both correct and easy.
In making this invention it has been recognized that the simple default design taken from PCB technology does not apply to the very different case of trace flexures in which the end points of the traces are immutably fixed by exigencies of disk drive suspension technology, and their intermediate lengths are necessarily detoured around fixed obstacles implicated in suspension design, such as tooling holes and weld points.
Time Domain Reflectometry (TDR) analysis computes Z0 as a function of time as the energy passes down the trace length. TDR shows that the Z0 for an apparently uniform trace varies dramatically along the length of the trace in previously known flexure designs, and even the nominal Z0 value about which the variation takes place is less than specifications require.
It is, accordingly, an object of this invention to provide a wireless flexure for a disk drive suspension having a constant characteristic impedance Z0 over the length of the flexure traces regardless of the presence of mechanical obstacles, curves in layout, or other factors that have caused unwanted variations in impedance along and between conductive traces and trace pairs in the flexures. It is another object to provide an improved flexure design in which the conductive traces are customized along their length to meet situations that might limit the constancy of the impedance of the trace. It is a still further object to vary locally the width and spacing of conductive traces to offset locally unwanted variations and lack of constancy in impedance imposed by the flexure design or application. Yet another object is to increase the effective length of one or more traces to increase capacitance relative to one or more adjacent traces and thus limit inversely changes in impedance constancy. It is yet another object to vary the length and configuration of one or more traces relative to another trace or traces between fixed points to change to a desired value the impedance of the one relative to the other, e.g. to increase the impedance of a write line over that of an adjacent read line.
These and other objects of the invention to become apparent hereinafter are realized in a controlled impedance trace flexure for a disk drive suspension, the trace flexure comprising a laminate of a metal layer, an insulative film layer and one or more pairs of conductive traces comprising paired trace members that extend together differentially in a pattern between two fixed points such that there tend to be unwanted local variations in the respective impedances of the paired members over their extent and therefore a lack of constancy in conductive trace impedances, the paired members being locally modified in their relative spacing, length and/or width in capacitance-varying relation sufficiently to offset the impedance variations, whereby the paired members are controlled to a constant impedance.
In this and like embodiments, typically, the paired members at a predetermined locus tend to unwanted variations in their respective impedances, and the paired members are locally differentiated in width at the predetermined locus to locally vary their capacitance against the impedance variations, or the paired members are locally differently spaced at the predetermined locus to locally vary their capacitance against the impedance variations.
Alternatively, the paired members are made locally of different effective lengths within the predetermined locus to locally vary their capacitance against the unwanted impedance variations.
In a further embodiment, the invention provides a controlled impedance trace flexure for a disk drive suspension, the trace flexure comprising a laminate of a metal layer, an insulative film layer and one or more pairs of conductive traces comprising paired trace members that extend together over a predetermined distance between two fixed points and in which there tends to be at a given locus an unwanted local variation in the impedances of the respective paired members, one of the paired members being reversely turned within the given locus to increase its effective length relative to the other paired member in capacitance-increasing relation sufficiently to make uniform the impedance of the one conductive trace with the other conductive trace, whereby the paired members are kept at a uniform impedance within the locus.
In this and like embodiments, typically, the one paired member is sinuous within the locus and has a predetermined period from peak to peak, the predetermined period being smaller than the wavelength of the signals carried by the one paired member; e.g., for a signal frequency of about 1 GHz, the paired member period is less than about 1 inch.
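The extra effective length gained from a sinuous routing can be estimated as the arc length of a sinusoid between the two fixed points. The geometry below (span, amplitude, period) is assumed purely for illustration and is not taken from the specification:

```python
import math

def sinuous_length(span, amplitude, period, n=20000):
    """Numerically integrate the arc length of y = A*sin(2*pi*x/period)
    across a straight span, approximating a serpentine trace routing."""
    dx = span / n
    total = 0.0
    for i in range(n):
        x = i * dx
        slope = amplitude * (2 * math.pi / period) * math.cos(2 * math.pi * x / period)
        total += math.hypot(dx, slope * dx)
    return total

span = 10.0  # distance between the two fixed points (units arbitrary, assumed)
straight = sinuous_length(span, amplitude=0.0, period=1.0)  # degenerate: straight trace
serpent = sinuous_length(span, amplitude=0.2, period=1.0)   # assumed serpentine geometry
print(round(straight, 2), round(serpent, 2))
assert serpent > straight  # reverse turns add effective length
```

The longer routed length raises the trace's capacitance relative to its straight neighbor, which is the mechanism the embodiment uses to equalize or differentiate impedance.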
In a further embodiment, the invention provides a controlled impedance trace flexure for a disk drive suspension, the trace flexure comprising a laminate of a metal layer, an insulative film layer and one or more pairs of conductive traces comprising paired trace members that extend together over a predetermined distance between two fixed points and in which there tends to be an unwanted variation between the total impedances of the respective paired members, one of the paired members being reversely turned within the predetermined distance to increase its effective length relative to the other paired member in capacitance-increasing relation sufficiently to make uniform the impedance of the one conductive trace with the other conductive trace, whereby the paired members are kept at a uniform impedance.
In this and like embodiments, the one paired member is sinuous and has a predetermined period from peak to peak, the predetermined period being smaller than the wavelength of the signals carried by the one paired member; e.g., for a signal frequency of about 1 GHz, the paired member period is less than about 1 inch.
In a further embodiment, the invention provides a controlled impedance trace flexure having two or more pairs of conductive traces comprising members that extend together over a predetermined distance between two fixed points, one pair of conductive trace members being reversely turned within the predetermined distance to increase its effective length relative to the other pair in capacitance-increasing relation relative to the other pair, whereby the one pair has a relatively higher impedance desired in a write circuit and the other pair has a relatively lower impedance desired in a read circuit.
In this and like embodiments, the one pair has an impedance of about 110 ohms, and the other pair has an impedance of about 60 ohms.
In addition, there can be provided in this and like embodiments a further conductive trace disposed between the pairs of conductive traces, the further conductive trace being connected to electrical ground, whereby the one pair is electrically isolated from the other pair.
In yet another embodiment, the invention provides a controlled impedance trace flexure for a disk drive suspension having two or more pairs of conductive traces each comprising paired members that extend together differentially in a pattern over a predetermined distance between two fixed points such that there tend to be unwanted local variations in the respective impedances of the paired members over their extent, the paired members being locally modified in their length through a sinuous shaping of the members within the predetermined distance sufficiently to offset the impedance variations, whereby the paired members of the pair of conductive traces are controlled to a constant impedance with each other.
In this and like embodiments, typically, the one paired member is sinuous within the locus and has a predetermined period from peak to peak, the predetermined period being smaller than the wavelength of the signals carried by the one paired member (the signal frequency being about 1 GHz, and the paired member period being less than about 1 inch), and there optionally being a further conductive trace disposed between the pairs of conductive traces, the further conductive trace being connected to electrical ground, whereby the one pair is electrically isolated from the other pair.
In a still further embodiment, the invention provides a controlled impedance trace flexure for a disk drive suspension, the trace flexure comprising a laminate of a metal layer, an insulative film layer and two or more pairs of conductive traces each comprising paired members that extend together over a predetermined distance between two fixed points, the pairs of conductive traces being differentially sinuous to have different effective lengths over the predetermined distance, whereby the conductive trace pairs have different impedances from one another.
In this and like embodiments, typically, each paired member within the pairs is parallel with the other member in the pair, and each sinuous pair has a predetermined period from peak to peak, the predetermined period being smaller than the wavelength of the signals carried by the pairs, typically with a signal frequency of about 1 GHz and a pair period of less than about 1 inch.
Additionally, there can be provided a further conductive trace disposed between the pairs of conductive traces, the further conductive trace being connected to electrical ground, whereby one pair is electrically isolated from the other pair.
In its method aspects, the invention provides a method of controlling impedance in a conductive trace flexure comprising a laminate of a metal layer, an insulative film layer and one or more pairs of conductive traces that are differentially routed over their lengths, including varying the width, spacing, and/or effective length of one of the conductive traces relative to the other conductive trace until the desired impedance is achieved.
A further invention method includes controlling impedance in a conductive trace flexure comprising a laminate of a metal layer, an insulative film layer and pairs of generally parallel, spaced conductive traces that are to have different impedances over their lengths, by making the higher impedance trace sinuous over at least a portion of its length to increase its effective length and its impedance thereby.
A further invention method includes controlling impedance in a conductive trace flexure comprising a laminate of a metal layer, an insulative film layer and multiple pairs of generally parallel, spaced conductive traces in which certain pairs are to have higher impedances over their lengths, by making the higher impedance pairs sinuous over at least a portion of their lengths to increase their effective length and make their impedance higher thereby.
These formulas relate lengths and areas of particular circles or triangles. On the next page you’ll find identities. The identities don’t refer to particular geometric figures but hold for all angles.
You can easily find both the length of an arc and the area of a sector for an angle θ in a circle of radius r.
Length of an arc. The length of the arc is just the radius r times the angle θ where the angle is measured in radians. To convert from degrees to radians, multiply the number of degrees by π/180.
Area of a sector. The area of the sector is half the square of the radius times the angle, where, again, the angle is measured in radians.
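The two formulas above can be sketched directly; the helper names are illustrative:

```python
import math

def arc_length(radius, angle_deg):
    """s = r * θ, with θ in radians (degrees times π/180)."""
    return radius * math.radians(angle_deg)

def sector_area(radius, angle_deg):
    """A = (1/2) * r^2 * θ, with θ in radians."""
    return 0.5 * radius ** 2 * math.radians(angle_deg)

# A 90-degree slice of a circle of radius 2: both come out to π.
print(round(arc_length(2, 90), 4))   # 3.1416
print(round(sector_area(2, 90), 4))  # 3.1416
```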
The most important formulas for trigonometry are those for a right triangle. If θ is one of the acute angles in a triangle, then the sine of theta is the ratio of the opposite side to the hypotenuse, the cosine is the ratio of the adjacent side to the hypotenuse, and the tangent is the ratio of the opposite side to the adjacent side.
These three formulas are collectively known by the mnemonic SohCahToa. Besides these, there’s the all-important Pythagorean formula that says that the square of the hypotenuse is equal to the sum of the squares of the other two sides.
If you know two of the three sides, you can find the third side and both acute angles.
If you know one acute angle and one of the three sides, you can find the other acute angle and the other two sides.
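For instance, the two-known-legs case can be sketched as follows (the function name is illustrative):

```python
import math

def solve_right_triangle(opposite, adjacent):
    """Given the two legs, recover the hypotenuse and both acute angles."""
    hypotenuse = math.hypot(opposite, adjacent)            # Pythagorean formula
    theta = math.degrees(math.atan2(opposite, adjacent))   # Toa: tan θ = opp/adj
    return hypotenuse, theta, 90 - theta                   # acute angles sum to 90

hyp, a1, a2 = solve_right_triangle(3, 4)
print(hyp, round(a1, 2), round(a2, 2))  # 5.0 36.87 53.13
```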
These formulas work for any triangle whether acute, obtuse, or right. We’ll use the standard notation where the three vertices of the triangle are denoted with the uppercase letters A, B, and C, while the three sides opposite them are respectively denoted with lowercase letters a, b, and c.
There are two important formulas for oblique triangles. They’re called the law of cosines and the law of sines.
The law of cosines generalizes the Pythagorean formula to all triangles. It says that c², the square of one side of the triangle, is equal to a² + b², the sum of the squares of the other two sides, minus 2ab cos C, twice their product times the cosine of the opposite angle. When the angle C is right, it becomes the Pythagorean formula.
The law of sines says that the ratio of the sine of one angle to the opposite side is the same ratio for all three angles.
If you know two angles and a side, you can find the third angle and the other two sides.
If you know two sides and the included angle, you can find the third side and both other angles.
If you know two sides and the angle opposite one of them, there are two possibilities for the angle opposite the other (one acute and one obtuse), and for both possibilities you can determine the remaining angle and the remaining side.
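The two laws can be sketched as follows; the helper names are illustrative:

```python
import math

def third_side(a, b, C_deg):
    """Law of cosines: c^2 = a^2 + b^2 - 2ab cos C."""
    C = math.radians(C_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))

def opposite_angle(a, A_deg, b):
    """Law of sines: sin(B)/b = sin(A)/a (returns the acute solution)."""
    return math.degrees(math.asin(b * math.sin(math.radians(A_deg)) / a))

# SAS: sides 5 and 7 with a 60-degree included angle.
print(round(third_side(5, 7, 60), 3))      # 6.245
# The C = 90 case reduces to the Pythagorean formula.
print(round(third_side(3, 4, 90), 6))      # 5.0
# Equal sides sit opposite equal angles.
print(round(opposite_angle(2, 30, 2), 1))  # 30.0
```

Note that `opposite_angle` returns only the acute solution; in the ambiguous SSA case described above, the obtuse possibility is its supplement, 180 minus the returned angle.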
There are three different useful formulas for the area of a triangle, and which one you use depends on what information you have.
Half the base times the height. This is the usual one to use since it’s simplest and you usually have that information. Choose any side to call the base b. Then if h is the distance from the opposite vertex to b, then the area is half of bh.
Heron’s formula. This is useful when you know the three sides a, b, and c of the triangle, and all you want to know is the area. Let s be half their sum, called the semiperimeter. Then the area is the square root of the product of s, s − a, s − b, and s − c.
Side-angle-side formula. Use this when you know two sides, a and b, and the included angle, C. The area is half the product of the two sides times the sine of the included angle.
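All three formulas agree whenever they all apply; a quick sketch checking them against the 3-4-5 right triangle (the helper names are illustrative):

```python
import math

def area_base_height(b, h):
    """Half the base times the height."""
    return 0.5 * b * h

def area_heron(a, b, c):
    """Heron's formula from the three sides."""
    s = (a + b + c) / 2                     # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_sas(a, b, C_deg):
    """Side-angle-side: half the product of two sides times sin of the included angle."""
    return 0.5 * a * b * math.sin(math.radians(C_deg))

# The 3-4-5 right triangle: the legs serve as base and height, and C = 90 degrees.
print(area_base_height(3, 4))  # 6.0
print(area_heron(3, 4, 5))     # 6.0
print(area_sas(3, 4, 90))      # 6.0
```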
A Different Kind of Real-World Evidence: The Key Role That Patient-Provider Dialogue Research Can Play
Contributed by:
Katy Hewett, Associate Director, Research, Ogilvy Health
NOTE: The content below contains the first few paragraphs of the printed article and the titles of the sidebars and boxes, if applicable.
All roads in healthcare lead to and from the medical visit. Well, maybe not all roads, but nearly all. The medical visit is central to the overarching healthcare structure and it is driven by the dialogue between patient and provider.
The healthcare data and research landscape has been changing significantly over the past few years. Increasingly, real-world data (RWD) and real-world evidence (RWE) are being used to augment traditional clinical trial research for a variety of purposes. The Food and Drug Administration’s evolving guidance on RWD and RWE has demonstrated growing support for their use and has bolstered focus and investment. Healthcare stakeholders, including pharmaceutical companies, payers, regulators, providers, and patients, are all increasingly looking to RWE to inform decision-making, support the development and approval of more treatment options, and demonstrate the true value of treatments.
One type of RWE that has been increasing is observational research. Before the rise of RWD and RWE, observational research had always been used to collect real-world data and produce real-world insights — this is the very definition. Observational research is continuing to be used in many of the same ways that other types of RWE are used. Increasingly, patient-provider dialogue observational research is being applied in RWE contexts. This research provides a real-world view into the critical medical visit dialogue.
What is Observational Research?
Observational research focuses on systematically observing, recording, and analyzing behavior and lived experiences in a natural setting. It can be qualitative, quantitative, or a mix of the two.
Qualitative research is exploratory and descriptive — it approaches the study of human behavior by considering the participant’s interpretation and assuming that reality is ever-changing and co-constructed. Quantitative research is designed to test a specific hypothesis and the data are collected numerically — it approaches the study of human behavior from the perspective that there are widespread social phenomena that are static and measurable. They are important complements to one another.
Observational research can also be either prospective or retrospective. Prospective research collects data for the purpose of the study. Retrospective research examines data that were collected in the past, usually originally for another purpose. Observational research also encompasses various types of research, including naturalistic studies, participant observation, case studies, and archival research, or content analysis. The most common type of observational research discussed in the context of RWE thus far has been retrospective quantitative content analysis of claims and electronic health record (EHR) data. While this type of RWE is growing and proving to be informative and beneficial, other types of observational research can also play an important role, including prospective qualitative approaches.
Why is Provider-Patient Dialogue Important, But Rarely Examined?
The patient-provider dialogue that occurs during the medical visit impacts everything else in the patient’s healthcare journey. This dialogue determines:
Whether the provider fully understands the patient’s symptoms, functional and quality of life impacts, and concerns
Whether the provider prescribes treatment, which one, and how he or she communicates that recommendation
Whether the patient understands the purpose of the treatment and how it should be taken
Whether the patient understands the goals of treatment and has appropriate expectations
How satisfied the patient is with the provider and the treatment
Ultimately, the patient’s health outcomes
Clearly, such critical aspects of the healthcare journey have implications for all aspects of the healthcare industry and examining them can provide important insights. For example, lack of provider understanding of symptoms and functional and quality of life impacts can negatively affect diagnosis and prescribing, leading to patients not receiving treatment they need and poorer associated health outcomes. Also, weak provider recommendations, lack of patient understanding of the need for treatment and how it should be taken, lack of clear goal-setting, and patient misunderstanding of expectations can lead to patient nonpersistence and nonadherence, which can also lead to poorer associated health outcomes. When providers and patients do not communicate successfully during the medical visit and leave with disconnects, it can negatively affect not only patients and providers, but also pharmaceutical companies, manufacturers, payers, and other healthcare stakeholders.
Being at the heart of the healthcare journey and industry overall means that the medical visit dialogue is also rightfully sensitive and private, protected by Health Insurance Portability and Accountability Act (HIPAA) regulations, which makes access restricted.1 Only specific types of research are appropriate.
How Can We Examine the Provider-Patient Dialogue?
Any research designed to examine provider-patient dialogue should follow HIPAA regulations and be approved by an Institutional Review Board (IRB), thus guaranteed to protect the welfare, rights, and privacy of the participants. IRB-approved research can be published in peer-reviewed journals and shared in posters and presentations at medical conferences.
Communication is complicated. Dialogue is a nuanced collaborative activity, during which the participants co-create meaning and understanding. It is full of messages, the actual words spoken, but also meta-messages, the true meanings behind what is said.2,3,4 Extracting insights from dialogue requires trained analysis of these dynamics — a keyword search of the transcript does not suffice.
Patient-provider dialogue research uses a theoretical framework called interactional sociolinguistics, which is grounded in anthropology, sociology, and linguistics. It is a form of observational research that enables the detailed examination of dialogue, often in the context of a specific cultural interaction, such as a medical visit. Due to the nature of the analysis, as with most other qualitative observational approaches, the appropriate and validated sample size for this type of research is small compared with quantitative observational research. Just as “big data” has its place, so does “small data.”2, 3, 4
A key aspect of the interactional sociolinguistic research framework is not only to observe and analyze dialogue, but also to interview the participants separately immediately after the interaction to measure what they intended, understood, and took away from the discussion. Often these interviews can illuminate areas of breakdown and miscommunication — gaps in understanding, issues not discussed, and misalignment on takeaways.2, 3, 4 By immediately interviewing patients and providers, their thoughts before and during the visit and the nuances of the visit discussion are still top of mind. In addition, follow-up patient interviews weeks to months after the visit can reveal what they actually did, which is critical data when striving to understand adherence and outcomes.
Why Should Patient-Provider Dialogue Research be Used as RWE?
Patient-provider dialogue research can help to provide understanding and demonstrate the burden of disease that patients face, their unmet needs with the current standard of care, barriers to treatment prescription during the office visit, and misunderstandings and disconnects between providers and patients that often lead to underdiagnosis and suboptimal satisfaction and outcomes. This research can be used as RWE to support the need for, and value of, a product. It is currently most effective when used by pharmaceutical companies to communicate to payers and providers to support product value propositions and reimbursement decisions.
Patient-provider dialogue research aligns with the patient-centric approach that the healthcare industry has been moving toward. The office visit and the dialogue with the provider are central to the patient experience of healthcare. Successful patient-provider communication is critical to ensuring patients receive optimal care and related health outcomes. Employing patient-provider dialogue research as RWE and looking closely at these dialogues enables healthcare stakeholders to make more informed decisions, ultimately benefiting patients and others.
What Role Does Patient-Provider Dialogue Research Play Compared With Other Forms of RWE?
A primary concern with using clinical data that was originally collected for other purposes, such as EHR data and medical claims data, is data privacy. Health data are sensitive and, with common data breaches and increased ethical considerations, the healthcare industry is having to think critically about the ways that data are used. Regulations are catching up to the upsurge in available personal data with the European Union General Data Protection Regulation (GDPR) having gone into effect in May of 2018 and the California Consumer Privacy Act (CCPA) having been signed into law in June of 2018.1 Data privacy and HIPAA compliance are built into IRB-approved observational patient-provider dialogue research. To receive IRB approval and eventually be published in peer-reviewed journals, the research must adhere to appropriate consenting processes and data protections.
Another concern with using clinical data that were collected for other purposes is that they were formatted and organized for those purposes. When used as RWE, this type of data must be reformatted and re-organized to fit this new purpose, and there are inevitable gaps. Patient-provider dialogue data are rich and complex, but by using the validated interactional sociolinguistic research framework, they can be coded and analyzed systematically, producing high-quality research-level data that can be trusted.
From a patient perspective, claims data and EHR data are not fully representative of their authentic healthcare experiences — these sources of data represent the provider’s point of view and the final prescribing decisions. Observation and analysis of patient-provider dialogue captures real patient experiences, and post-visit interviews capture both provider and patient perspectives. During these interviews, patients are able to convey the full experience of their condition, any functional and quality of life impacts, their understanding of their condition and treatment, and the next steps that they actually plan to take — much of which is often not fully discussed during visits. When compared with interviews with the providers, misunderstandings and misalignment become clear.
Conclusion
Quantitative observational data have their place, but patient-provider dialogue research is a worthwhile complement demonstrating nuances and datapoints that cannot otherwise be accessed. No other type of research is positioned to observe and analyze the critical visit dialogue and understand and demonstrate disconnects. Patient-provider dialogue research is fundamentally designed to protect participant privacy, produce high-quality and organized data, and explore authentic patient and provider experiences and perspectives. This research can be used as RWE to communicate the need for and value of a product, especially when supporting product value propositions and reimbursement decisions.
References:
1. Mulligan SP, Freeman WC, Linebaugh CD. Data Protection Law: An Overview. Congressional Research Service. March 25, 2019. Available at: https://fas.org/sgp/crs/misc/R45631.pdf. Accessed February 9, 2020.
2. Gumperz JJ. On interactional sociolinguistic method. In: Talk, Work and Institutional Order: Discourse in Medical, Mediation and Management Settings. Berlin, Germany: Mouton de Gruyter; 1999:453-471.
3. Hamilton HE. Symptoms and signs in particular: the influence of the medical concern on the shape of physician-patient talk. Commun Med. 2004;1(1):59-70.
4. Tannen D. Interactional sociolinguistics. In: Bright W, ed. Oxford International Encyclopedia of Linguistics. Vol 4. Oxford and New York: Oxford University Press; 1992:9-11.
Ogilvy Health makes brands matter by keeping our audiences’ health, healthcare and wellness needs at the center of every touchpoint.
For more information, visit ogilvyhealth.com.
Vacancy title:
Commercial Analyst (Manager)
Jobs at:Movit
Deadline of this Job:
22nd November 2019
Summary
Date Posted: Monday, November 11, 2019 , Base Salary: Not Disclosed
JOB DETAILS:
Reporting to the Executive Chairman, the commercial analyst will act as the business expert responsible for supporting the business decision-making process through data analysis, competitor scans, and performance insights in order to improve efficiency and performance.
Job Responsibilities:
• Conduct an analysis of the business' commercial performance, highlighting opportunities and potential risks.
• Coordinate the formulation of the business operating plans and budgets and monitor progress towards their achievement.
• Identify and draw attention to important business trends and opportunities to maximize return while minimizing risk.
• Conduct financial analysis of the commercial monthly performance; update the Chief Commercial Officer on performance highlighting opportunities and potential risks.
• Support the commercial business in analyzing variance reports and variations from stated strategies at Category, Brand, Product and Stock Keeping Unit level.
• Monitor Category, brand, product, Stock Keeping Unit and Channel profitability and provide insights to the Commercial Department on opportunities for scaling in terms of price, volume and positioning.
• Provide support to the Category team in computation of marketing spend effectiveness and return on marketing investment (ROMI).
• Identify cost saving opportunities and assist the commercial team to implement them.
• Participate in financial modelling and develop user-friendly, data-driven reports.
• Participate in the process of developing project performance reports and ensure real-time reporting and integration in the finance system reporting.
• Monitor the trends and developments in the sector, industry and the fields of cosmetics and other household items.
• Participate in the analysis of pricing to ensure an optimal price of products is achieved.
• Drive effective cost and investment management
Job Skills: Not Specified
Job Qualifications: Not Specified
Job Education Requirements: Not Specified
Job Experience Requirements: Not Specified
Job application procedure
If you believe you can clearly demonstrate your ability to meet the relevant criteria for the role, please submit your application, including copies of your academic and professional certificates, testimonials and your detailed curriculum vitae and contacts of three (3) professional referees familiar with your qualifications and work experience by close of business 22 November 2019.
For the full details about this position, and how to apply;
Kindly log onto our e-recruitment platform via https://www2.deloitte.com/ke/en/careers/executive-search-recruitment.html
Email or hard copy applications will not be accepted
A gigantic black hole, 12 billion times more massive than our sun, has been discovered at the heart of the brightest quasar in the early universe.
Above: An artist’s impression of a quasar with a supermassive black hole in the distant universe. Image credit Zhaoyu Li/NASA/JPL-Caltech/Misti Mountain Observatory
A supermassive black hole (SDSS J0100+2802) sits at the centre of a quasar, a powerful galactic radiation source 12.8 billion light years from Earth, and is 12 billion times more massive than our sun.
SDSS J0100+2802 formed 900 million years after the Big Bang, and astronomers cannot explain how it could have formed so early.
The discovery of this ultraluminous quasar also presents a major puzzle for the theory of black hole growth in the early universe, according to Xiaohui Fan, Regents’ Professor of Astronomy at the University of Arizona’s Steward Observatory, who co-authored the study.
Xiaohui Fan said:
“How can a quasar so luminous, and a black hole so massive, form so early in the history of the universe, at an era soon after the earliest stars and galaxies have just emerged? And what is the relationship between this monster black hole and its surrounding environment, including its host galaxy?
This ultraluminous quasar with its supermassive black hole provides a unique laboratory to the study of the mass assembly and galaxy formation around the most massive black holes in the early universe.”
The quasar dates from a time close to the end of an important cosmic event that astronomers refer to as the “epoch of reionization”: the cosmic dawn when light from the earliest generations of galaxies and quasars is thought to have ended the “cosmic dark ages” and transformed the universe into what we see today.
This page covers the following topics:
1. Distance
2. Displacement
3. Area under velocity-time graphs
4. Work done and distance
Distance is defined as how far in total an object has travelled. Distance is a scalar quantity, meaning that it is measured only by magnitude and its direction is insignificant. It is usually measured in metres (m). Kilometres (km) can be used for greater distances, and centimetres (cm) and millimetres (mm) can be used for shorter distances.
Displacement is defined as the total change in position of an object. Displacement is a vector quantity, meaning that it is measured by both magnitude and direction. It is usually measured in metres (m). The same units (m, km, cm, mm) as distance are used to measure displacement.
The motion of an object travelling in a straight line can be modelled using a velocity-time graph, where velocity is the y-axis and time is the x-axis. The area under the graph represents the displacement of the object, whereas the gradient of the graph is the acceleration. When the lines of the graph are straight, simple geometry can be used to calculate the area under the graph and thus the displacement of the object, whereas when the lines of the graph are curved, the technique of counting squares can be used instead.
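The area-under-the-graph idea can also be carried out numerically. The sketch below (Python, written for illustration rather than taken from this page) applies the trapezium rule to sampled velocity values; the rectangle and triangle cases mirror the simple-geometry approach described above.

```python
def displacement_from_velocity(times, velocities):
    """Estimate displacement (m) as the area under a velocity-time
    graph, using the trapezium rule on sampled points."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * (velocities[i] + velocities[i - 1]) * dt
    return area

# Constant 3 m/s for 4 s: a rectangle of area 12 m.
print(displacement_from_velocity([0, 4], [3, 3]))    # 12.0
# Uniform acceleration from 0 to 10 m/s over 5 s: a triangle of area 25 m.
print(displacement_from_velocity([0, 5], [0, 10]))   # 25.0
```

With more sample points, the same function approximates the area under a curved velocity-time graph, which is what the counting-squares technique estimates by hand.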
Work is done when a force causes a body to move. It can be calculated using W = force × distance travelled in direction of force, and has units joules (J). The distance covered by an object can be calculated when the work done on the object and the force being applied to it are known.
1
The distance between Mary's house and the bakery is 1.5 km. What will be the total distance travelled if Mary walks to the bakery from her house and back?
total distance = 1.5 km + 1.5 km = 3 km
3 km
2
A car takes the route given in the diagram. Explain whether the distance or the magnitude of the displacement will be greater from start to end, and give them in terms of A, B, C, D, E and F.
The total distance travelled by the car will be the sum of all the small components of the journey, whereas the displacement magnitude will only give the direct distance between the start and the end to represent the overall change in position of the car, thus the distance will be greater than the displacement.
distance = A + B + C + D + E + F
displacement magnitude = A + C + E
distance = A + B + C + D + E + F > displacement magnitude = A + C + E
3
William goes on a walk of distance 2 km. Is William's displacement also equal to 2 km?
There is not enough information to deduce whether William's displacement is 2 km, since the route of William's walk is unknown. Displacement would be equal to 2 km only if William's walk was a straight line walk of 2 km.
not enough information
4
A man is pushing a shopping trolley with a horizontal force of 80 N. He pushes the trolley forward at a constant velocity, doing 160 J of work. Calculate the distance the trolley is pushed through.
d = W/F
d = 160 J/80 N = 2 m
2 m
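The rearrangement used in this answer, d = W/F, can be expressed as a small sketch (Python; the 160 J and 80 N values are the ones from the question, and the function name is illustrative):

```python
def distance_from_work(work_j, force_n):
    """Rearrange W = F x d to d = W / F: distance (m) travelled
    in the direction of a constant force."""
    return work_j / force_n

print(distance_from_work(160, 80))  # 2.0, matching the trolley example
```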
5
Calculate the displacement of the object whose motion is modelled by the given velocity-time graph in the first 5 seconds.
Displacement between t = 0 and t = 5 s is area under the graph in the given interval.
The total number of squares under the graph is 27, thus s = 27 m.
This is an approximate value, as counting squares gives only an estimate.
Choosing a Fulfilling Career
Are you stuck trying to figure it out? There are many factors you need to consider for choosing a fulfilling career.
Do you not feel satisfied on your job? Does everyday feel like a drag? If the answer is yes, then it is about time for you to consider a change of careers.
Let’s face it, if you’re not choosing a fulfilling career from the start, you may never excel at it as you will continue to seek for better opportunities.
The fact of the matter is that every day of your employment needs to feel like a challenge, the tackling of which should garner feelings of pride and satisfaction.
People who have mundane, unenjoyable jobs have at least one necessity of life taken care of: livelihood.
Regardless of how bad things might get, they won’t have to sleep on an empty stomach.
On the other hand are those individuals who are unable to find a job that appeals to them.
The decision that such individuals have to make, therefore, is whether to keep looking for a fulfilling career that appeals to them, or to settle for something that provides them with basic livelihood.
Selecting a fulfilling career may not come easy, as there are so many options available to you.
And it may feel at times as if these career options are just not clear, perhaps you don’t see yourself there, or perhaps the career is so long it would take you years to get there.
Regardless of the end of the spectrum you might lie on, you are not alone.
Actually, almost two-thirds of the people would choose another career if they could.
When you think about it after taking everything into perspective, you realize that the lack of job satisfaction for our millennial generation is nothing short of an epidemic.
People often assume that people with lousy jobs are better than people with no jobs at all. After all, having basic livelihood is better than having no livelihood, right?
However, according to a survey conducted by GALLUP, this does not appear to be the case at all.
Research suggests that emotionally disconnected employees rate their lives more poorly as compared to the unemployed folks.
According to the statistics, only forty two percent of the actively disengaged workers are thriving in their individual lives.
On the other hand, when it comes to the unemployed individuals, the same percentage goes up to forty eight.
However, the biggest take away from this survey relates to the engaged employees, seventy one percent of whom are thriving.
This means that the quality of one’s life significantly improves when their work is meaningful and fulfilling.
The fact of the matter is that there are fulfilling careers out there that can not only give you livelihood but enhance the quality of life that you lead as well.
Regardless of what you might like, there are fulfilling careers out there, and you need to have your head in the right place if you are to find the right one for you.
Bearing this in mind, here are some of the factors that make jobs fulfilling:
A majority of the individuals go into their jobs believing that money matters the most.
They take the first big offer that comes their way, without considering if their work will be meaningful or not.
These are the kinds of people that you often find complaining the most about jobs. Everything just goes downhill when they realize that money is not as important as they initially considered it to be.
If you think about it, the value of money ends once you are able to pay all the bills.
Ask experienced professionals about what gives them job satisfaction and they’ll rarely give money the top place on the list.
Instead, you will find these people to lay much more stress on having a meaning and purpose about their lives. A meaningful job offers a level of satisfaction unmatched by the accumulation of wealth.
People believe that being the top dog in a profession brings with it the feelings of happiness and satisfaction. However, the fact of the matter is that it is not how things work. Why?
Well, it is because it is not your status on the job that generates meaning but the respect that you are able to muster out of it.
This means that if you wish to do meaningful work, you should not be in an unending struggle for status; rather you should be focusing more on the nature and relationships of your work.
For instance, it is natural for the head of a hospital to have the highest pay; but it’s the doctors and the nurses who’ll find their job descriptions to be the most meaningful and respectful.
At times, people are so busy with the compensation that they receive for their jobs that they find themselves involved in an unending desire for status, without realizing that it is relationships that bring respect and not their status.
Have you ever wondered why the most meaningful professions are often the ones that guarantee the good of mankind? For instance, consider the example of a firefighter.
If a firefighter is able to pay the bills, there’s a high probability of him never complaining about his job.
Similarly, there is a reason why nurses love the work that they do so much; considering how they are able to directly influence the lives of numerous individuals for the better on a daily basis.
To put it in a nutshell, people who are involved in work that benefits the society often display higher levels of job satisfaction; provided that their basic necessities are taken care of.
This means that if you wish to be happy in your work, you need to find a profession that focuses on others, rather than on yourself and blind pursuit of money.
However, the question arises: what does this mean for individuals who are not at all involved in professions that benefit mankind or Mother Earth?
Is leaving their professions behind for something completely different the only mean available to them for finding meaning in life? Well, that is not at all the case!
Regardless of the walk of life you might consider, the fact is that everyone craves to be the best at everything that they do.
The same logic applies to choosing a fulfilling career as well. Doing what you have the potential to be the best at does wonders for making it meaningful. Why?
Well, it is because the pursuit for greatness appears to be in born in a majority of us.
But how do you go about doing that? After all, how are you supposed to know if you will be good at doing at a job or not? Well, the fact of the matter is that you can never be sure of whether you will like a job or not but you can be sure of your skill sets and talents.
This is why you should only pursue careers that are in line with your talent pool. Doing what you are good at increases your chances of being the top dog. This enhances the satisfaction and meaning that you get out of your employment.
It is all good to do what you are good at, but the question remains: what if you do not like doing what you are good at? This is the reason why it is so important to do what you love, considering how many benefits it has to offer. This is where choosing a fulfilling career comes into play.
From reduced depression to lower anxiety, pursuing what you love offers not only numerous psychological benefits; it offers a deeper sense of satisfaction and meaning as well.
However, the fact remains that things are not quite as simple as they seem. After all, if it was this easy, everybody would be doing it.
We only find a handful of individuals to be pursuing their passions; the rest are as far away from their passions as the earth is far away from the sun. People often argue that their passions are quite difficult to make a living at.
What you should know is that it is only difficult if you are not good at doing what you love. This means that if you combine your passion with hard work, you will be able to develop the required skills set for success.
To put it into a nutshell: nothing will give you more satisfaction and sense of meaning than a job utilizing your talents and passions.
Time can be your greatest friend or your worst enemy. You will either have too much time on your hands or not enough. Both scenarios pose serious questions when it comes to finding a fulfilling profession.
If you find yourself with too little time on your hands, it means that your job is too stressful. If you were good at it, you would not need to spend too much time on it on a daily basis.
On the other hand, if you find yourself with too much time on your hands, know that you could be doing so much more. Once such a thought is in your head, you will not find satisfaction at all.
This is the reason why you need to find a job where you can lose the sense of time and space. The sign of a meaningful career is that it will engross you in such a manner that you will lose track of time.
For instance, take the example of a surgeon. A surgery can last for up to several hours.
Without feeling the crunch of time at all, the surgeon is focused, determined and strong. Yet the surgeon realizes the length of time that has passed once the surgery is done.
As a rule of thumb, if you feel that the time on your job passes a bit too fast, know that you have found your “higher calling”.
Regardless of whether you speak of human beings or animals, the fact is that the pursuit of autonomy is common in all species.
After all, there is nothing better than doing what you want to do, right? However, just as a lion can be tamed to perform in a circus; human beings are often conditioned to follow a certain set of orders from their superiors at work.
Bearing this in mind, you have to say that there has to be some difference between a man in an office and a lion in a circus.
This is why having autonomy on job is important, for you can follow orders on job only for so long. However, this doesn’t create room to forego the authority of your superiors at work.
What this means, instead, is that it is incredibly important for a fulfilling career to not have you asking your supervisors on every matter that you encounter, no matter how rudimentary it might be.
Autonomy is an essential requirement of growth. The more you grow, the more satisfaction you will draw from your career. Choosing a fulfilling career can be a fun adventure if you position it into a positive direction.
All of us wish to have the best possible careers, right? Well, the only way for us to have such careers is if we let go of some our expectations.
Sure it is fine and dandy to wish for a career that has everything that you have ever wanted; but the fact remains that you will never find anything in life to be perfect.
Why is such the case? Why is it impossible to have the perfect job? Well, the reason is simple: it is a job.
You need to understand that being on job is not like being on a bed of roses; regardless of how passionate and talented you might be. You will not find your job to always be interesting and challenging, simply because that is not what the intention of the employer is.
The employers are not there to give meaning to your life; they hire you solely because there is some work that needs to be done. It is your job to draw meaning and satisfaction from the work you were hired to do.
Surely the work environment and what you are doing play a significant role in deciding satisfaction; but the fact remains that your level of satisfaction depends a whole lot upon your appetite.
Speaking of job hunting, everything ultimately boils down to how you never really know what you are getting into beforehand.
Too many a times we see individuals disappointed by their jobs. These jobs were not what they had in mind when they were applying for them.
The basic problem with job hunting is that we don’t really know much about what we’re getting ourselves into. This means that our experience will be either better or worse than our expectations.
For instance, if a person was to become a police officer solely based on how police officers appear to work on TV shows, they will certainly be in for a big disappointment.
You certainly cannot try a different job each month and choose the one that feels like the best. The only respite you have is to get in touch with some of the professionals doing the job that you’re considering.
Tell them about your expectations and inquire if they are reasonable or not. If you feel as if the position is nothing like what you had imagined, there is a high probability of it not being emotionally fulfilling.
Such an approach will allow you to get an exposure without any formal commitment. You will be able to assess your options accordingly!
The average person spends around 80,000 hours of their life working. Going by this statistic, it would not be wrong to suggest that career is the most important aspect of a person’s life.
This is the reason why it is so important to choose a career that feels fulfilling and emotionally rewarding. After all, people will remember you mostly for the work that you did and how good you were at it!
Hey, I’m Italo Campilii. I’m determined to help you simplify your processes so that you may earn time to do what’s important to you. Are you ready?
He is a motivational speaker and author who inspires through thinking from the inside out. A life & business strategist who believes in giving to truly help those who feel stuck in their life or business. Italo breathes new life by brainstorming new strategies that allow organizations to flourish and individuals to find their inner core. Italo helps trace the line between your passion and vision to turn it into action. The entire system is focused on simplifying your processes, so that you may invest your valuable time into meaningful priorities of life that truly matter.
If you want help with strategy, automation & 180 Degree Breakthrough, let’s talk. | https://mentorme.com/choosing-a-fulfilling-career/ |
Declining balance is an accelerated depreciation method that multiplies the book value of an asset with a constant depreciation rate to determine the annual depreciation expense. The particular type is the double-declining balance method, also known as the 200% declining balance method.
To calculate depreciation costs, we must determine the straight-line rate, which is 100% divided by the number of years of the useful life of the asset. For example, if the useful life of an asset is ten years, the straight-line rate will be 10% (100/10).
Next, we must determine the acceleration factor, say 200% (double-declining balance method), which is multiplied by the straight-line rate. The depreciation rate of 20% (200% x 10%) is then applied to the net book value of assets to determine depreciation costs.
- The useful life = 10 years
- Straight-line rate = 100%/10 = 10%
- Acceleration factor = 200% or 2
- Depreciation rate = 2 x 10% = 20%
To calculate depreciation expense, we can use the following declining balance formula:
Depreciation expense = Depreciation rate x Book value of the asset at the start of the year
Assuming the cost of the asset is Rp100, the depreciation expense in the first year is Rp20 = Rp100 x 20%. The book value of the asset drops to Rp80 = Rp100 - Rp20 at the end of the first year. Therefore, the depreciation expense in the second year is Rp16 = (Rp100 - Rp20) x 20%.
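The arithmetic above can be generalized into a year-by-year schedule. A minimal sketch (Python, written to mirror the Rp100, ten-year, 200% example; the function name is illustrative, not from any accounting library):

```python
def declining_balance_schedule(cost, useful_life_years, factor=2.0):
    """Year-by-year depreciation expense under the declining balance
    method: rate = factor / useful life, applied each year to the
    remaining book value."""
    rate = factor / useful_life_years
    book_value = cost
    expenses = []
    for _ in range(useful_life_years):
        expense = book_value * rate
        expenses.append(expense)
        book_value -= expense
    return expenses

schedule = declining_balance_schedule(100, 10)
print(round(schedule[0], 2))  # 20.0 (Rp20 in year one)
print(round(schedule[1], 2))  # 16.0 (Rp16 in year two)
```

Note that under this method the book value never reaches zero by itself; in practice firms switch to straight-line depreciation or stop at a salvage value.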
Consequences
The declining balance method offers more significant tax benefits in the initial year of use of the asset. Compared to the straight-line method, this method recognizes a higher depreciation expense at the beginning of the asset’s useful life. Therefore, profit before tax will be smaller, and as a consequence, the tax deduction is also lower. | https://penpoin.com/declining-balance/ |
Deforestation has led to a host of associated environmental problems including accelerated rates of biodiversity loss, desertification and soil loss. It has also created a fuel wood crisis, a significant social problem, since 80% of Africans rely on wood or charcoal fuel to meet their cooking and heating needs.
The Opportunities
The implementation of forest management creates economic growth opportunities within rural communities. Through job creation, technical training, increased access to markets, improved infrastructure, and value-adding processing, integrated forestry businesses generate a significant economic multiplier effect, thus improving the livelihoods of those directly and indirectly involved.
Responsible forest management has the potential to reduce urbanisation, increase the availability of fuelwood, protect water quality, enhance carbon sequestration, and reduce the pressures on biodiversity. MTO Group’s strategic intention to produce in Africa for Africa is sound as the demand for forest products in Africa keeps growing with demographics and economic growth. | http://www.mto.co.za/why-africa/ |
September 4th, 2011
Top 10 Psychological Syndromes
Psychological illness can range from commonly known irrational fears to post-traumatic stress disorder. Here are the top 10 psychological syndromes that have an emotional impact on millions of people all over the world today.
1. Panic Attacks. Also called Social Anxiety Disorder. When an individual is afflicted with this kind of disorder, they become incredibly self-conscious around others and will typically experience anxiety attacks in social surroundings. This is known to bring about undue stress or an inability to function normally in certain aspects of daily life.
2. Bipolar Syndrome. Known medically as bipolar affective disorder, and sometimes as manic-depressive syndrome, this condition involves mood dysfunction marked by one or more episodes of abnormally elevated mood, referred to as mania (or hypomania when mild). Sufferers also experience depressive episodes, or mixed states in which mania and depression occur simultaneously.
3. Obsessive Compulsive Syndrome. This psychological illness is characterized by intrusive thoughts (obsessions) that produce distress or anxiety, by repetitive behaviors (compulsions), or by a combination of the two. It can take many forms, from compulsive hoarding to persistent hand washing to obsessive preoccupation with religious rituals or intense urges.
4. Schizophrenia. An individual is said to suffer from schizophrenia when there are irregularities in their perception or expression of the world around them. The condition can strongly affect the senses of touch, smell, taste, sight and hearing. In more severe cases it can trigger strange hallucinations, delusions or disorganized speech, accompanied by considerable social deterioration. Severity varies from person to person: some experience brief psychotic episodes and then carry on with their lives, while others suffer continually and need constant support from those around them.
5. Clinical Depression. Also called major depressive syndrome or unipolar depression, this is a serious form of psychological illness. An individual is considered clinically depressed when, for no apparent reason, they lose interest in daily activities that they would otherwise have considered normal.
6. Borderline Personality Syndrome. Also known as BPD, this is a long-lasting disturbance of personality marked by complexity and variability of emotional behavior. Sufferers experience unusual levels of emotional instability, a divided sense of self, and turbulent social relationships. In many cases the individual struggles with their identity and doubts their own self-worth.
7. Post Traumatic Disorder. Commonly known as PTSD, this condition is most often encountered in military personnel who have recently returned from combat. It results from exposure to overwhelming trauma that was difficult to cope with. Beyond military exposure, the disorder can also arise after surviving any kind of accident, natural catastrophe or assault. People with PTSD often relive the trauma in recurring episodes set off by particular triggers.
8. Acute Stress Disorder. Known clinically as mental distress or psychological shock, this is a reaction to a horrific event. It is often confused with circulatory shock, which is a different condition.
9. Body Dysmorphic Syndrome. Unlike other psychological disorders, BDS occurs when an individual is constantly preoccupied with their body. Sufferers often spend long periods in front of a mirror, worrying that something is badly wrong with their appearance.
10. Post Partum Depression. Often abbreviated PPD, this is a form of clinical depression that primarily affects women shortly after childbirth. Symptoms vary from one person to another and can include fatigue, anxiety, irritability, insomnia, unexplained crying, loss of sexual interest, and loss of appetite.
Psychological and mental disorders are simply part of human life. A person affected by clinical depression or postpartum depression is not "losing their mind", and the drastic measures sometimes inflicted on sufferers, such as isolation or chaining, are uncalled for. It is possible to overcome the disorders above if accurate treatment is provided, but such treatment should only be given by professional healthcare personnel.
Work in Accra is led by Pathways consortium members from the University of Ghana in close collaborations with other partner institutions. We also work with partners from local and national government agencies and civil society organisations to understand inequalities and to identify areas of contemporary and emerging policy interest.
There is little data on how poorer and wealthier households are distributed in Accra, which makes it difficult to understand how policies influence inequalities. Pathways research is characterising poverty and inequality in Greater Accra Metropolitan Area (GAMA), and identifying the social and policy factors that contribute to these inequalities. Given the dearth of income data at this geography, we estimate measures of household consumption poverty and inequality using a small area estimation procedure. We then employ spatial regression models to identify the spatial determinants of poverty and inequality within Accra.
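The small area estimation procedure itself is beyond a short sketch, but the kind of inequality summary such work feeds into can be illustrated with a Gini coefficient computed over hypothetical household consumption values (the function and figures below are illustrative, not the Pathways methodology):

```python
def gini(values):
    """Gini coefficient via the sorted-index (mean-difference) formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted
    ascending and i running from 1 to n."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # 0.0  (perfect equality)
print(gini([0, 0, 0, 10]))  # 0.75 (one household holds everything)
```

A value of 0 indicates perfect equality and values near 1 indicate extreme concentration; mapping such a statistic over small areas is what reveals spatial patterns of inequality.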
Read more about the work of Pathways Poverty and Inequality working group.
Although mortality has declined and life expectancy has improved in Ghana, little is known about variations in mortality throughout Accra. We are applying demographic methods to summary birth history data from over 700,000 women recorded in the most recent Ghanaian census to understand inequalities in child mortality and enable policies that reduce inequalities. To achieve this, we are quantifying child mortality rates for over 400 neighbourhoods that comprise GAMA. Together with the work on social and economic inequality and poverty (see section on social and economic inequality and poverty) this also enables us to understand variation in child mortality across the city’s socioeconomic groups.
We are also using death registration data to characterise mortality beyond childhood, and in relation to specific causes of death. To do this, we are first evaluating the completeness of death registration, as a whole and for sociodemographic subgroups, using demographic methods. Our overall aim is to inform policy regarding measures to improve health outcomes and death reporting.
Read more about the work of the Pathways Health Outcomes working group.
Air pollution has emerged as a major policy issue in Accra and other growing cities in Africa and is receiving attention from government and civil society. Yet the data needed to inform policies, in terms of pollution levels and sources and their spatial and temporal distributions, are extremely limited. Noise is also receiving attention, with even larger data gaps. We designed a measurement campaign to characterise air and noise pollution and their sources at high resolution within GAMA. We deployed low-power and lightweight air and noise pollution monitoring devices, audio recorders, and time-lapse cameras in a combination of fixed and (weekly) rotating sites, in a first-of-its-kind campaign in sub-Saharan Africa. By combining measurements, audio, and images with state-of-the-art statistical and computer vision methods, and process-based emissions modelling, we are able to capture highly resolved temporal and spatial variations in pollution levels, identify their potential sources in space and time, and select and model the total and inequality impacts of policies that aim to reduce pollution.
Read more about the work of the Pathways Measurement and Modelling working group and the Big Data working group.
Uninterrupted access to clean drinking water is essential for health. We are investigating water supply and consumption patterns, as well as institutional and policy aspects of water supply in GAMA. The emphasis has been on how people and neighbourhoods of different socioeconomic status obtain their drinking water, and on its quality. The goal is to understand the inequalities in urban water services and to suggest policies for equitable improvements in this sector.
Read more about the work of the Pathways Water and Sanitation, and Waste Management working group.
The environment people live in affects their health in a number of ways and there are wide variations in the arrangement and quality of the neighbourhood environment in Accra. We are studying the neighbourhood environment in a thorough and multi-dimensional way. We are using high-resolution satellite images with deep learning computer vision techniques to identify clusters within the city that are visually similar, and probing factors like greenery, water, and density and character of buildings and roads that drive these similarities.
We are also looking at flood risk, which is particularly acute in unplanned, informal urban settlements and is experienced disproportionately by the poor. We are carrying out a modelling study to investigate the impact of urban flood risk management decisions on social inequality, particularly in the context of informality. We are focusing on how to reduce the vulnerability of informal residents as opposed to approaches such as forced evictions and relocations, which may worsen inequality and ultimately increase flood risk.
We are also focusing on neighbourhood characteristics and how they affect children’s safety and play activities in neighbourhoods of varying socioeconomic status. The work covers both the details of recreational facilities, green spaces, playgrounds and informal play spaces in neighbourhoods and schools, and children’s perceptions and experiences of outdoor play spaces and activities. We will use this information to evaluate and formulate policies, and to put measures in place that improve and encourage play, so as to equitably enhance the development, health and wellbeing of children.
Read more about the work of the Pathways Housing and Neighbourhood working group, Big Data working group and Water and Sanitation, and Waste Management working group.
Accra is expanding and where people live, work and use basic services is constantly changing. We are investigating how availability, spatial organisation and quality of the transport network influences accessibility to education and healthcare in GAMA. To achieve this, we are combining administrative and opensource data on roads, public transportation and locations of schools and health facilities. This will allow us to understand how specific transport policies and infrastructures will affect people’s access to services.
Read more about the work of the Pathways Transport and Mobility working group.
Related publications
Characterisation of urban environment and activity across space and time using street images and deep learning in Accra
Scientific Reports, 12, iss. 1, pp. 20470, 2022.
Spatial heterogeneity in drinking water sources in the Greater Accra Metropolitan Area (GAMA), Ghana
Population and Environment, 44, iss. 1-2, pp. 46-76, 2022.
Spatial modelling and inequalities of environmental noise in Accra, Ghana
Environmental Research, 214, iss. 2, pp. 113932, 2022.
Sachet water in Ghana: A spatiotemporal analysis of the recent upward trend in consumption and its relationship with changing household characteristics, 2010–2017
PLOS One, 17, iss. 5, pp. e0265167, 2022.
Neighbourhood, built environment and children’s outdoor play spaces in urban Ghana: Review of policies and challenges
Landscape and Urban Planning, 218, pp. 104288, 2022.
---
abstract: 'The Harary index of a graph $G$ is a recently introduced topological index, defined on the reciprocal distance matrix as $H(G)=\sum_{u,v \in V(G)}\frac{1}{d(u,v)}$, where $d(u,v)$ is the length of a shortest path between the distinct vertices $u$ and $v$. We present a partial ordering of starlike trees based on the Harary index and describe the trees with the second maximal and the second minimal Harary index. We investigate the Harary index of trees with $k$ pendent vertices and determine the extremal trees with maximal Harary index. We also characterize the extremal trees with maximal Harary index with respect to the number of vertices of degree two, matching number, independence number, radius and diameter, and the extremal trees with minimal Harary index and given maximum degree. We conclude that in all presented classes, the trees with maximal Harary index are exactly the trees with minimal Wiener index, and vice versa.'
author:
- |
Aleksandar Ilić\
[Faculty of Sciences and Mathematics, University of Niš]{}\
[Višegradska 33, 18000 Niš, Serbia]{}\
[e-mail: [ [email protected]]{}]{}\
Guihai Yu\
[Department of Mathematics, Shandong Institute of Business and Technology]{}\
[191 Binhaizhong Road, Yantai, Shandong, P.R. China, 264005.]{}\
[e-mail: [[email protected] ]{}]{}\
Lihua Feng\
[Department of Mathematics, Central South University]{}\
[Railway Campus, Changsha, Hunan, P. R. China, 410075.]{}\
[e-mail: [[email protected] ]{}]{}\
title: ' [**The Harary index of trees**]{} [^1]'
---
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{question}[theorem]{Question}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{example}[theorem]{Example}
[**AMS Classifications:**]{} 92E10, 05C12.
Introduction
============
In theoretical chemistry molecular structure descriptors (also called topological indices) are used for modeling physico-chemical, pharmacologic, toxicologic, biological and other properties of chemical compounds [@todeschini; @ToCo1]. There exist several types of such indices, especially those based on graph-theoretical distances.
In 1993 Plavšić et al. in [@plavsic] and Ivanciuc et al. in [@ivanciuc1] independently introduced a new topological index, which was named Harary index in honor of Frank Harary on the occasion of his 70th birthday. This topological index is derived from the reciprocal distance matrix and has a number of interesting chemical-physics properties [@ivanciuc3]. The Harary index and its related molecular descriptors have shown some success in structure-property correlations [@devillers; @diudea; @diudea1; @gutman; @ivanciuc2; @lucic; @zhou0]. Its modification has also been proposed [@lucic2] and their use in combination with other molecular descriptors improves the correlations [@todeschini; @trinajstic]. It is of interest to study spectra and polynomials of these matrices [@GuKlYaYe06; @Zhou4].
In this paper, let $G$ be a simple connected (molecular) graph with vertex set $V(G)$. The Harary index is defined as the half-sum of the elements in the reciprocal distance matrix (also called the Harary matrix [@janezic]), $$H(G)=\sum_{u,v \in V(G)}\frac{1}{d(u,v)},$$ where $d(u,v)$ is the distance between $u$ and $v$ in $G$ and the sum goes over all the pairs of vertices. The Wiener index, defined as $$W (G) = \sum_{u, v \in V (G)} d (u, v),$$ is considered as one of the most used topological indices with high correlation with many physical and chemical properties of molecular compounds. The majority of chemical applications of the Wiener index deal with acyclic organic molecules. For recent results and applications of Wiener index see [@DoEn01].
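As an illustrative aside (not part of the paper), the definition of $H(G)$ translates directly into a short computation for trees: obtain all shortest-path distances by breadth-first search and sum their reciprocals, halving at the end since each unordered pair is seen twice.

```python
from collections import deque
from fractions import Fraction

def bfs_distances(adj, source):
    """Shortest-path distances from source in an unweighted graph,
    given as an adjacency list {vertex: [neighbours]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def harary_index(adj):
    """H(G) = sum over unordered pairs {u, v} of 1/d(u, v), exactly."""
    total = Fraction(0)
    for u in adj:
        d = bfs_distances(adj, u)
        total += sum(Fraction(1, d[v]) for v in adj if v != u)
    return total / 2  # each pair was counted twice

# Star S_4: center 0 joined to 1, 2, 3.  H(S_n) = (n+2)(n-1)/4 = 9/2 for n = 4.
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(harary_index(star4))  # 9/2
```

Using exact `Fraction` arithmetic avoids floating-point noise when comparing against closed-form values such as the bounds in the theorem below.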
Up to now, many results were obtained concerning the Harary index of a graph. Gutman [@gutman] supported the use of Harary index as a measure of branching in alkanes, by showing
Let $T$ be a tree on $n$ vertices. Then $$1+n \sum_{k=2}^{n-1} \frac 1 k \leq H(T) \leq \frac{(n+2)(n-1)}{4}.$$ The right equality holds if and only if $T\cong S_{n}$, while the left equality holds if and only if $T\cong P_{n}$.
In this paper, we further refine this relation by introducing a long chain of inequalities and obtain the trees with the second maximum and the second minimum Harary index. In [@zhou1] Zhou, Cai and Trinajstić presented some lower and upper bounds for the Harary index of connected graphs, triangle-free and quadrangle-free graphs, and gave Nordhaus-Gaddum-type inequalities. In [@das] the authors obtained some lower and upper bounds for the Harary index of graphs in terms of the diameter and the number of connected components. Zhou, Du and Trinajstić [@zhou2] discussed the Harary index of landscape graphs, which have found applications in ecology [@UrKe02]. Feng and Ilić [@IlFe10] established sharp upper bounds for the Zagreb indices, a sharp upper bound for the Harary index and a sharp lower bound for the hyper-Wiener index of graphs with a given matching number.
In this paper, we analyze relations between the extremal trees with maximum and minimum Harary and Wiener indices. We investigate the Harary index of $n$-vertex trees with a given number of pendent vertices and determine the extremal trees with maximal $H (G)$. Furthermore, we derive a partial ordering of starlike trees based on majorization inequalities of the pendent path lengths. We characterize the extremal trees with maximal Harary index in terms of the number of vertices of degree two, the matching number, independence number, radius and diameter. Finally, we characterize the extremal trees with minimal Harary index with respect to the maximum vertex degree. All these results are compared with those for the ordinary Wiener index. We conclude the paper by posing a conjecture regarding the extremal tree with maximum Harary index among $n$-vertex trees with fixed maximum degree.
Preliminaries
=============
For any two vertices $u$ and $v$ in $G$, the distance between $u$ and $v$, denoted by $d_{G}(u,v)$, is the number of edges in a shortest path joining $u$ and $v$. The eccentricity $\varepsilon (v)$ of a vertex $v$ is the maximum distance from $v$ to any other vertex. The vertices of minimum eccentricity form the center. A tree has exactly one center vertex or two adjacent center vertices; in the latter case one speaks of a bicenter. The diameter $d (G)$ of a graph $G$ is the maximum eccentricity over all vertices, while the radius $r (G)$ is the minimum eccentricity over all $v \in V (G)$. For a vertex $u$ in $G$, the degree of $u$ is denoted by $deg(u)$.
Two distinct edges in a graph $G$ are [*independent*]{} if they are not incident with a common vertex in $G$. A set of pairwise independent edges in $G$ is called a [*matching*]{} in $G$, while a matching of maximum cardinality is a [*maximum matching*]{} in $G$. The [*matching number*]{} $\beta(G)$ of $G$ is the cardinality of a maximum matching of $G$. It is well known that $\beta(G)
\leq \frac{n}{2}$, with equality if and only if $G$ has a perfect matching. The [*independence number*]{} of $G$, denoted by $\alpha(G)$, is the size of a maximum independent set of $G$.
Let $P_n$ and $S_n$ denote the path and the star on $n$ vertices. A [*starlike tree*]{} is a tree with exactly one vertex of degree at least 3. We denote by $S
(n_{1},n_{2},\ldots,n_{k})$ the starlike tree of order $n$ having a branching vertex $v$ and $$S (n_{1},n_{2},\ldots,n_{k})-v=P_{n_1}\cup P_{n_2}\cup \ldots \cup P_{n_k},$$ where $n_1\geq n_2\geq \ldots\geq n_k \geq 1$. Clearly, the numbers $n_1, n_2,
\ldots, n_k$ determine the starlike tree up to isomorphism and $n = n_1 + n_2 + \ldots + n_k + 1$. The starlike tree $BS_{n, k} \cong S (n_{1},n_{2},\ldots,n_{k})$ is [*balanced*]{} if all paths have almost equal lengths, i.e., $|n_i - n_j| \leqslant 1$ for every $1
\leqslant i < j \leqslant k$.
Denote by $\Delta (T)$ the maximum vertex degree of a tree $T$. The path $P_n$ is the unique tree with $\Delta = 2$; while the star $S_n$ is the unique tree with $\Delta = n-1$. Therefore, we can assume that $3 \leq \Delta \leq n - 2$. The broom $B_{n, \Delta}$ is a tree consisting of a star $S_{\Delta + 1}$ and a path of length $n - \Delta - 1$ attached to an arbitrary pendent vertex of the star.
If $\frac{n-1}{2} < m \leq n-1$, then $A_{n,m}$ is the tree obtained from $S_{m+1}$ by adding a pendent edge to each of $n-m-1$ of the pendent vertices of $S_{m+1}$. We call $A_{n,m}$ a spur (see Fig. 1). Clearly, $A_{n,m}$ has $n$ vertices and $m$ pendent vertices; the matching number and the independence number of $A_{n,m}$ are $n-m$ and $m$, respectively. Note that if $m > \frac{n-1}{2}$, then $A_{n,m}\cong BS_{n,m}$.
Next, we give some lemmas which are very useful in the following.
\[de-delta\] Let $v$ be a vertex of a tree $T$ with $deg(v) = m + 1$. Suppose that $P_1, P_2, \ldots, P_m$ are pendent paths incident with $v$, with starting vertices $v_1, v_2, \ldots, v_m$ and lengths $n_i \geqslant 1$ $(i = 1, 2, \ldots, m)$, respectively. Let $w$ be the neighbor of $v$ distinct from $v_1, v_2, \ldots, v_m$. Let $T' = \delta (T, v)$ be the tree obtained from $T$ by removing the edges $v v_1, v v_2, \ldots, v v_{m - 1}$ and adding the edges $w v_1, w v_2, \ldots, w v_{m - 1}$. We say that $T'$ is a $\delta$-transform of $T$.
This transformation preserves the number of pendent vertices in a tree $T$.
[Figure \[fig-delta\]: the $\delta$-transformation $T' = \delta(T, v)$.]
\[1\] Let $T$ be a tree rooted at the center vertex $u$ with at least two vertices of degree 3. Let $v \in \{z| \ deg(z) \geq 3, z \neq u\}$ be a vertex with the largest distance $d (u, v)$ from the center vertex. Then for the $\delta$-transformation tree $T' = \delta (T, v)$, it holds $$H (T') > H (T).$$
We follow the symbols in Definition \[de-delta\]. Let $G$ be the component of $T-wv$ containing the vertex $w$ (as shown in Fig. 3). Let $R=\{P_1, P_2, \ldots, P_{m-1}\}$. After $\delta$-transformation, the distances between vertices from $G$ and $R$ decreased by $1$, while the distances between vertices from $R$ and $Q = P_m \cup \{v\}$ increased by $1$. By direct calculation, we have $$H (T') - H (T) = \hspace{-0.3cm} \sum_{x \in G, y \in R} \frac{1}{d (x, y) - 1} - \frac{1}{d (x,
y)} \ + \sum_{x \in Q, y \in R} \frac{1}{d (x, y) + 1} - \frac{1}{d (x, y)}.$$
According to the assumption, there is an induced path $P = w w_1 w_2
\ldots w_k$ in $G$, with length at least $\max \{n_1, n_2, \ldots,
n_m\}$. For each path $P_i$, $1 \leq i \leq m - 1$, it follows $$\begin{aligned}
D_i &=& \hspace{-0.3cm} \sum_{x \in G, y \in P_i} \left ( \frac{1}{d (x, y) - 1} - \frac{1}{d (x,
y)} \right )+
\sum_{x \in Q, y \in P_i} \left ( \frac{1}{d (x, y) + 1} - \frac{1}{d (x, y)}\right ) \\
&>& \hspace{-0.3cm} \sum_{x \in P, y \in P_i} \left ( \frac{1}{d (x, y) - 1} - \frac{1}{d (x, y)}
\right ) -
\sum_{x \in Q, y \in P_i} \left (\frac{1}{d (x, y)} - \frac{1}{d (x, y) + 1} \right)\\
&\geq& 0.\end{aligned}$$
Hence $$H (T') - H (T) = \sum_{i = 1}^{m - 1} D_i > 0.$$ Since $T$ contains at least two vertices of degree at least 3, we have strict inequality. Therefore, if we move pendent paths $P_i$ towards the center vertex $u$ of $T$ along the path $P$, the Harary index increases.
\[2\][@gutman] Let $G$ be a connected graph and $v \in V(G)$. Suppose that $P=v_{0}v_{1}\ldots v_{k}$ and $Q=u_{0}u_{1}\ldots u_{m}$ are two paths of lengths $k$ and $m$ $(k\geq m\geq1)$, respectively. Let $G_{k,m}$ be the graph obtained from $G$, $P$ and $Q$ by identifying $v$ with both $v_{0}$ and $u_{0}$. Then $$H(G_{k,m})>H(G_{k+1,m-1}).$$
By Lemma \[1\] and Lemma \[2\], we can get the following
\[4\] Let $G_{0}$ be a connected graph and $u\in V(G_{0})$. Assume that $G_{1}$ is the graph obtained from $G_0$ by attaching a tree $T$ ($T
\not \cong P_{k}$ and $T \not \cong S_{k}$) of order $k$ to $u$; $G_{2}$ is the graph obtained from $G_0$ by identifying $u$ with an endvertex of a path $P_{k}$; $G_{3}$ is the graph obtained from $G_0$ by identifying $u$ with the center of a star $S_{k}$. Then $$H(G_{2})<H(G_{1})< H(G_{3}).$$
Let $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$ be two integer arrays of length $n$. We say that $x$ majorizes $y$ and write $x \succ y$ if the elements of these arrays satisfy following conditions:
1. $x_1 \geqslant x_2 \geqslant \ldots \geqslant x_n$ and $y_1 \geqslant y_2 \geqslant \ldots \geqslant
y_n$,
2. $x_1 + x_2 + \ldots + x_k \geqslant y_1 + y_2 + \ldots + y_k$, for every $1 \leqslant k < n$,
3. $x_1 + x_2 + \ldots + x_n = y_1 + y_2 + \ldots + y_n$.
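The three conditions above can be checked mechanically; the following sketch (a hypothetical helper, not from the paper) tests whether $x$ majorizes $y$:

```python
def majorizes(x, y):
    """Return True iff x majorizes y: equal length, equal total sum,
    and every prefix sum of x (sorted non-increasingly) dominates the
    corresponding prefix sum of y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if len(xs) != len(ys) or sum(xs) != sum(ys):
        return False
    px = py = 0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px < py:
            return False
    return True

# (4, 1, 1) majorizes (2, 2, 2): prefix sums 4 >= 2, 5 >= 4, 6 == 6.
print(majorizes([4, 1, 1], [2, 2, 2]))  # True
print(majorizes([2, 2, 2], [4, 1, 1]))  # False
```

In the notation of the theorem below, the balanced length array is majorized by every other array with the same sum, which is why the balanced starlike tree is extremal.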
Let $p=(p_1, p_2, \ldots, p_k)$ and $q=(q_1, q_2, \ldots, q_k)$ be two arrays of length $k \geqslant 2$, such that $p \prec q$ and $n = p_1 + p_2 + \ldots + p_k = q_1 + q_2 + \ldots + q_k$. Then $$\label{eq:starlike} H (S (p_1, p_2, \ldots, p_k)) \geq H (S (q_1, q_2, \ldots, q_k)).$$
We proceed by induction on the array length $k$. For $k = 2$, we can directly apply the transformation from Lemma \[2\] to the tree $S (q_1, q_2)$ several times, in order to obtain $S (p_1, p_2)$. Assume that the inequality (\[eq:starlike\]) holds for all lengths less than $k$. If there exists an index $1 \leqslant m < k$ such that $p_1 + p_2 + \ldots + p_m = q_1 + q_2 + \ldots + q_m$, we can apply the induction hypothesis to the two parts $S (q_1, q_2, \ldots, q_m) \cup S (q_{m + 1}, q_{m + 2}, \ldots, q_k)$ and obtain $S (p_1, p_2, \ldots, p_m) \cup S (p_{m + 1}, p_{m + 2}, \ldots, p_k)$.
Otherwise, we have strict inequalities $p_1 + p_2 + \ldots + p_m < q_1 + q_2 + \ldots
+ q_m$ for all indices $1 \leqslant m < k$. We can transform tree $S (q_1, q_2, \ldots, q_k)$ into $$S (q_1, q_2, \ldots, q_{s - 1}, q_{s} - 1, q_{s + 1}, \ldots, q_{r - 1}, q_{r} + 1, q_{r + 1}, \ldots, q_k),$$ where $s$ is the largest index such that $q_1 = q_2 = \ldots = q_s$ and $r$ is the smallest index such that $q_r = q_{r + 1} = \ldots = q_k$. The condition $p \prec q$ is preserved, and we can continue until the array $q$ transforms into $p$, while at every step we increase the Harary index.
\[cor:order\] Let $T = S (n_1, n_2, \ldots, n_k) $ be a starlike tree with $n$ vertices and $k$ pendent paths. Then $$H (B_{n, k}) \leq H (T) \leq H (BS_{n, k}) .$$ The left equality holds if and only if $T \cong B_{n,k}$ and the right equality holds if and only if $T \cong BS_{n,k}$.
Main results
============
Trees with given number of pendent vertices
-------------------------------------------
Let ${\cal{T}}_{n,k}$ $(2\leq k\leq n-1)$ be the set of trees on $n$ vertices with $k$ pendent vertices. If $k=2$, the tree is just the path $P_n$; if $k=n-1$, it is the star $S_n$. Therefore, we can assume that $3 \leq k \leq n-2$ in the sequel. It was proved in [@IlIl09] that among $n$-vertex trees with a given number $k$ of pendent vertices or a given number $q$ of vertices of degree two, $BS_{n, k}$ and $BS_{n, n - 1 - q}$, respectively, have minimal Wiener index.
\[5\] Of all the trees on $n$ vertices with $k$ $(3\leq k\leq n-2)$ pendent vertices, $BS_{n,k}$ is the unique tree having maximal Harary index.
Suppose $T\in {\cal{T}}_{n,k}$ has the maximal Harary index, rooted at the center vertex. Let $S_{T} = \{ v\in V(T) : deg(v)\geq 3 \}$.
If $|S_{T}| = 1$, then by Corollary \[cor:order\], it follows that $BS_{n,k}$ is the unique tree that maximizes the Harary index. If $|S_{T}|\geq 2$, then $T$ contains at least two vertices of degree at least 3, with only pendent paths attached below them. We can consider $T$ as a tree rooted at the center vertex, and choose the vertex $v$ of degree at least 3 that is furthest from the center. After applying the $\delta$-transformation, we increase $H (T)$ while keeping the number of pendent vertices fixed, which is a contradiction.
\[55\] Among trees with fixed number $q$ $(0\leq q\leq n-1)$ of vertices of degree two, $BS_{n,n-1-q}$ is the unique tree having maximal Harary index.
The proof is similar to that of Theorem \[5\]. Consider $\delta$-transformed tree $T' = \delta (T, v)$. The vertex $v$ has degree greater than two, while the vertex $w$ has degree greater than or equal to two. Among vertices on the pendent paths $P_1, P_2, \ldots, P_m$, there are $$S = (n_1 - 1) + (n_2 - 1) + \ldots + (n_m - 1) = \sum_{i = 1}^m n_i - m$$ vertices of degree two.
If $w$ has degree two in $T$, then after $\delta$-transformation, $v$ will have degree two, and the number of vertices of degree two in $T'$ remains the same as in $T$. Otherwise, assume that $deg (w) > 2$. We can apply one transformation from Lemma \[2\], and get new tree $T''$ with $m +
1$ pendent paths attached at vertex $w$ with lengths $n_1, n_2,
\ldots, n_m, 1$. This way we increased Harary index, while the number of vertices with degree two in trees $T$ and $T''$ are the same.
By repeated application of these transformations, we conclude that a starlike tree has maximal Harary index among trees with $q$ vertices of degree two. The number of pendent paths is exactly $k = \sum_{i = 1}^m n_i - q = n - 1 - q$, and by Corollary \[cor:order\] it follows that the balanced starlike tree $BS_{n, n - 1 - q}$ is the unique tree that maximizes the Harary index.
By Lemma \[2\], we have the following chain of inequalities $$\label{eq:order-balanced} H(P_{n}) = H(BS_{n,2}) < H(BS_{n,3}) < \ldots < H(BS_{n,n-1}) = H
(S_{n}).$$
Notice that Lemma 2.2 from [@das], $H (P_n) \leq \frac{(n + 2)(n - 1)}{4} = H (S_n)$, follows directly from these inequalities.
Trees with given matching or independence number
------------------------------------------------
Du and Zhou in [@DuZh09] proved that among $n$-vertex trees with given matching number $m$, the spur $A_{n, m}$ minimizes the Wiener index.
\[6\] For arbitrary $\frac{n-1}{2} \leq m \leq n-1$, there holds $$H(A_{n,m})=\frac{1}{24} \left( 3 n^2+2 m n+m^2-9 m+19 n -22 \right).
$$
There are four types of vertices in the tree $A_{n, m}$. Denote by $D' (v)$ the sum of reciprocal distances from $v$ to all other vertices.
- For the center vertex, $D' (v) = \frac{m}{1} + \frac{n - m -
1}{2}$;
- For each pendent vertex attached to the center vertex, $D' (v) = \frac{1}{1} + \frac{m - 1}{2} + \frac{n - m - 1}{3}$;
- For each vertex of degree $2$, different from the center vertex, $D' (v) = \frac{2}{1} + \frac{m - 1}{2} + \frac{n - m -
2}{3}$;
- For each pendent vertex not attached to the center vertex, $D' (v) = \frac{1}{1} + \frac{1}{2} + \frac{m - 1}{3} + \frac{n - m - 2}{4}$.
After summing the above contributions and dividing by two (each pair of vertices is counted twice), we get $$\begin{aligned}
H(A_{n,m}) &=& \frac{1}{2} \Big( 1 \cdot \frac{n + m - 1}{2} + (2m - n + 1) \cdot \frac{2n + m + 1}{6} \\
&& + \ (n - m - 1) \cdot \frac{2n + m + 5}{6} + (n - m - 1) \cdot \frac{3n + m + 8}{12} \Big) \\
&=& \frac{1}{24} \left( 3 n^2 + 2 m n + m^2 - 9 m + 19 n - 22 \right).\end{aligned}$$ This proves the result.
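The closed form can also be checked numerically (an illustrative aside, not part of the paper) by constructing the spur $A_{n,m}$ explicitly and summing reciprocal BFS distances with exact rational arithmetic:

```python
from collections import deque
from fractions import Fraction

def harary(adj):
    """Exact Harary index of a connected graph given as an adjacency list."""
    total = Fraction(0)
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(Fraction(1, dist[v]) for v in adj if v != s)
    return total / 2  # each unordered pair was counted twice

def spur(n, m):
    """A_{n,m}: star S_{m+1} (center 0, leaves 1..m) with a pendent edge
    added to n-m-1 of its leaves; requires m > (n-1)/2."""
    adj = {0: list(range(1, m + 1))}
    for i in range(1, m + 1):
        adj[i] = [0]
    for j in range(n - m - 1):           # extra pendent vertices m+1 .. n-1
        leaf, extra = 1 + j, m + 1 + j
        adj[leaf].append(extra)
        adj[extra] = [leaf]
    return adj

n, m = 9, 6
closed_form = Fraction(3*n*n + 2*m*n + m*m - 9*m + 19*n - 22, 24)
assert harary(spur(n, m)) == closed_form
print(closed_form)  # 241/12
```

The exact equality for several $(n, m)$ pairs gives a quick sanity check on the coefficient bookkeeping in the derivation.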
\[7\] Let $T$ be a tree on $n$ vertices with matching number $\beta$. Then $$H(T)\leq \frac{1}{24} \left( 6 n^2 - 4 \beta n + \beta^2 + 9 \beta +
10 n -22 \right),
$$ with equality holding if and only if $T\cong
A_{n,n-\beta}$.
Suppose that $T$ has $k$ pendent vertices, then $$k\leq \beta+n-2\beta=n-\beta,$$ and by Theorem \[5\], we have $H(T)\leq H(BS_{n,k})$. Using equation (\[eq:order-balanced\]), it follows $H(BS_{n,k}) \leq
H(BS_{n,n-\beta}) = H(A_{n,n-\beta})$ since $n-\beta \geq
\frac{n}{2}$. Finally, $H(T)\leq H(A_{n,n-\beta})$, with equality holding if and only if $T\cong A_{n,n-\beta}$. By Lemma \[6\], we obtain the explicit relation for $H(A_{n,n-\beta})$.
By Theorem \[7\], we have the following corollary.
\[8\] Let $T$ be a tree of order $n$ with a perfect matching. Then $$H(T) \leq \frac{1}{96}(17n^2 + 58n -88),$$ with equality holding if and only if $T\cong A_{n,\frac{n}{2}}$.
\[9\] Let $T$ be a tree on $n$ vertices with independence number $\alpha$. Then $$H(T)\leq \frac{1}{24} \left( 3 n^2+2 \alpha n+\alpha^2-9 \alpha+19
n-22 \right),$$ with equality holding if and only if $T\cong A_{n,\alpha}$.
Since all pendent vertices form an independent set, it follows that $k\leq \alpha$. Every tree is a bipartite graph, so $\alpha \geq \lceil\frac{n}{2}\rceil$. By Theorem \[5\], we have $H(T)\leq H(BS_{n,k})$ and $H(BS_{n,k})\leq H(BS_{n,\alpha})=H(A_{n,\alpha})$. Therefore, $H(T)\leq H(A_{n,\alpha})$, with equality holding if and only if $T\cong A_{n,\alpha}$.
Trees with given diameter or radius
-----------------------------------
Let $C_{n,d}(p_1, p_2, \ldots, p_{d-1})$ be a caterpillar on $n$ vertices obtained from a path $P_{d+1}=v_0v_1\ldots v_{d-1}v_d$ by attaching $p_i \geq 0$ pendant vertices to $v_i$, $1\leq i \leq d-1$, where $n=d+1+\sum_{i=1}^{d-1}p_i$. Denote $$C_{n,d,i}=C_{n,d}(\underbrace{0, \ldots, 0}_{i-1}, n-d-1, 0, \ldots, 0).$$ Obviously, $C_{n,d,i}=C_{n,d,d-i}$. In [@IlIlSt09] and [@LiPa08] it was shown that the caterpillar $C_{n,d, \lfloor d / 2\rfloor}$ has minimal Wiener index among trees with fixed diameter $d$.
Among trees on $n$ vertices with diameter $d$, $C_{n,d, \lfloor d / 2\rfloor}$ is the unique tree having maximal Harary index.
Let $T$ be an $n$-vertex tree with diameter $d$ having maximal Harary index, and let $P_{d+1}=v_0v_1\ldots v_d$ be a path of length $d$ in $T$. By Proposition \[4\], all trees attached to the path $P_{d+1}$ must be stars, which implies that $T \cong C_{n,d}(p_1, p_2, \ldots , p_{d-1})$. Applying Lemma \[1\] at those vertices of degree at least $3$ in $C_{n,d}(p_1, p_2, \ldots , p_{d-1})$, we get that $T \cong C_{n,d,i}$ for some $1 \leq i \leq d - 1$. By Lemma \[2\], the maximal Harary index is achieved uniquely for $i=\lfloor\frac d2\rfloor$. This completes the proof.
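This maximality can be checked by brute force for small cases. The sketch below (illustrative, not from the paper) builds $C_{n,d,i}$ and verifies that the Harary index over $1 \le i \le d-1$ peaks at $i = \lfloor d/2 \rfloor$:

```python
from collections import deque
from fractions import Fraction

def harary(adj):
    # Sum of reciprocal distances over all unordered pairs, via BFS.
    total = Fraction(0)
    for s in range(len(adj)):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(Fraction(1, d) for v, d in dist.items() if v > s)
    return total

def caterpillar(n, d, i):
    """C_{n,d,i}: path v_0...v_d (vertices 0..d) with n-d-1 pendants attached to v_i."""
    adj = [[] for _ in range(n)]
    for u in range(d):                    # the diameter path
        adj[u].append(u + 1)
        adj[u + 1].append(u)
    for p in range(d + 1, n):             # pendant vertices at v_i
        adj[i].append(p)
        adj[p].append(i)
    return adj

n, d = 12, 6
values = {i: harary(caterpillar(n, d, i)) for i in range(1, d)}
assert max(values.values()) == values[d // 2]   # maximum attained at i = floor(d/2)
```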
For a tree $T$ with radius $r(T)$, it holds that $d(T) = 2 r(T)$ or $d(T) = 2r(T) - 1$. Using the transformation from Lemma \[2\] applied at a center vertex, it follows that $H (C_{n, 2r, r}) < H (C_{n, 2r-1, r-1})$.
Let $T$ be a tree on $n$ vertices with radius $r \geq 2$. Then $$H (T) \leq H (C_{n, 2r-1, r-1}),$$ with equality if and only if $T \cong C_{n, 2r-1, r-1}$.
If $d>2$, we can apply the transformation from Lemma \[2\] at the center vertex in $C_{n,d, \lfloor d/2\rfloor}$ to obtain $C_{n,d-1, \lfloor (d-1)/2 \rfloor}$. Thus, $$H(P_{n}) = H(C_{n,n-1,\lfloor (n-1)/2 \rfloor}) < \ldots < H(C_{n,3,1})< H(C_{n,2,1}) = H (S_{n}).$$
Also, it follows that $C_{n,3,1}$ has the second maximal Harary index among trees on $n$ vertices.
Trees with given maximum vertex degree
--------------------------------------
Chemical trees (trees with maximum vertex degree at most four) provide the graph representations of alkanes [@GuPo86]. It is therefore a natural problem to study trees with bounded maximum degree. It was proven in [@DoEn01] that among $n$-vertex trees with the maximum degree $\Delta$, the broom $B_{n, \Delta}$ has maximal Wiener index.
\[77\] For arbitrary $2 \leq \Delta \leq n - 1$, it holds that $$H(B_{n,\Delta}) = n \cdot H_{n - \Delta} - n + \Delta +
\frac{(\Delta - 1)(\Delta - 2)}{4} + \frac{\Delta - 1}{n - \Delta +
1},$$ where $H_k=1+\frac 12+\ldots+ \frac 1k$ is the $k$-th harmonic number.
The Harary index of $B_{n,\Delta}$ can be calculated as the sum of three parts: the Harary index of $P_{n - \Delta + 1}$, the sum of reciprocal distances between the $\Delta - 1$ pendant vertices and the vertices of the long path, and the sum of reciprocal distances among the pendant vertices. $$\begin{aligned}
H(B_{n,\Delta}) &=& H (P_{n - \Delta + 1}) + \frac{ \binom{\Delta - 1}{2} }{2} + (\Delta - 1) H_{n - \Delta + 1} \\
&=& n \cdot H_{n - \Delta} - n + \Delta + \frac{(\Delta - 1)(\Delta
- 2)}{4} + \frac{\Delta - 1}{n - \Delta + 1}.\end{aligned}$$
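The identity can be verified numerically. The following sketch (an illustration, not from the paper) builds the broom $B_{n,\Delta}$ — a path on $n-\Delta+1$ vertices with $\Delta-1$ extra pendant vertices attached at one end — and checks it against the closed form:

```python
from collections import deque
from fractions import Fraction

def harary(adj):
    # Sum of reciprocal distances over all unordered pairs, via BFS.
    total = Fraction(0)
    for s in range(len(adj)):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(Fraction(1, d) for v, d in dist.items() if v > s)
    return total

def broom(n, delta):
    """B_{n,delta}: path 0..n-delta plus delta-1 pendants at vertex 0 (degree delta)."""
    adj = [[] for _ in range(n)]
    for u in range(n - delta):
        adj[u].append(u + 1)
        adj[u + 1].append(u)
    for p in range(n - delta + 1, n):
        adj[0].append(p)
        adj[p].append(0)
    return adj

def H(k):
    # k-th harmonic number, exact
    return sum(Fraction(1, j) for j in range(1, k + 1))

def broom_closed(n, d):
    return n * H(n - d) - n + d + Fraction((d - 1) * (d - 2), 4) + Fraction(d - 1, n - d + 1)

for n, d in [(6, 3), (9, 4), (10, 2)]:
    assert harary(broom(n, d)) == broom_closed(n, d)
```

For example, $H(B_{6,3}) = 9$, and $B_{n,2} \cong P_n$ recovers the Harary index of the path.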
Let $T$ be a tree on $n$ vertices with the maximum degree $\Delta$. Then $H (T)\geq H (B_{n,\Delta})$. The equality holds if and only if $T \cong B_{n,\Delta}$.
Fix a vertex $v$ of degree $\Delta$ as a root and let $T_1, T_2, \ldots, T_{\Delta}$ be the trees attached at $v$. By Proposition \[4\], replacing each subtree $T_i$, $1 \leq i\leq \Delta$, by a path on the same number of vertices does not increase the Harary index, and by Lemma \[2\] the Harary index of the resulting starlike tree is minimized for $T \cong B_{n,\Delta}$. This implies the result.
If $\Delta>2$, we can apply the transformation from Lemma \[2\] at the vertex of degree $\Delta$ in $B_{n, \Delta}$ and obtain $B_{n, \Delta-1}$. Thus, $$H(S_{n}) = H(B_{n,n-1}) > H(B_{n,n-2}) > \ldots > H(B_{n,3}) >
H(B_{n,2})= H (P_{n}).$$
Also, it follows that $B_{n, 3}$ has the second minimum Harary index among trees on $n$ vertices.
Concluding remarks
==================
In this paper, we presented a partial ordering of starlike trees based on the Harary index and derived the trees with the second maximal and the second minimal Harary index. We characterized the extremal trees with maximal Harary index among trees with a fixed number of pendant vertices, number of vertices of degree two, matching number, independence number, radius, and diameter. In addition, we characterized the extremal trees with minimal Harary index and given maximum degree. We observed that in all of the presented classes, the trees with maximum values of the Harary index are exactly the trees with the minimal Wiener index $W (G)$, and vice versa.

The complete $\Delta$-ary tree is defined as follows. Start with the root having $\Delta$ children. Every vertex different from the root, which is not in one of the last two levels, has exactly $\Delta - 1$ children. In the last level, while not all nodes have to exist, the nodes that do exist fill the level consecutively. Thus, at most one vertex on the second-to-last level has its degree different from $\Delta$ and $1$.
In [@GuFMG07] the authors proposed these trees to be called *Volkmann trees*, as they represent alkanes with minimal Wiener index [@FiHo02]. The computer search among trees with up to 24 vertices reveals that the complete $\Delta$-ary trees attain the maximum values of $H (G)$ among the trees with the maximum vertex degree $\Delta$.
For any $\Delta \geqslant 2$, the complete $\Delta$-ary tree has maximum value of $H (G)$ among trees on $n$ vertices with maximum degree $\Delta$.
It would be interesting for further research to consider the extremal unicyclic and bicyclic graphs with respect to Harary index, and compare the results with those for the Wiener index.
[100]{}
K. C. Das, B. Zhou, N. Trinajstić, [*J. Math. Chem.*]{} 2010, 46, 1377.
J. Devillers, A.T. Balaban (eds), [*Topological indices and related descriptors in QSAR and QSPR*]{}, Gordon and Breach, Amsterdam, 1999.
M. V. Diudea, [*J. Chem. Inf. Comput. Sci.*]{} 1997, 37, 292.
M. V. Diudea, T. Ivanciuc, S. Nikolić, N. Trinajstić, [*MATCH Commun. Math. Comput. Chem.*]{} 1997, 35, 41.
A. Dobrynin, R. Entringer, I. Gutman, [*Acta Appl. Math.*]{} 2001, 66, 211.
Z. Du, B. Zhou, [*MATCH Commun. Math. Comput. Chem.*]{} 2010, 63, 101.
L. Feng, A. Ilić, [*Appl. Math. Lett.*]{} 2010, 23, 943.
M. Fischermann, A. Hoffmann, D. Rautenbach, L. Székely, L. Volkmann, [*Discrete Appl. Math.*]{} 2002, 122, 127.
X. Guo, D. J. Klein, W. Yan, Y. N. Yeh, [*Int. J. Quantum Chem.*]{} 2006, 106, 1756.
I. Gutman, [*Indian J. Chem.*]{} 1997, 36A, 128.
I. Gutman, O. E. Polansky, [*Mathematical Concepts in Organic Chemistry*]{}, Springer–Verlag, Berlin, 1986.
I. Gutman, B. Furtula, V. Marković, B. Glišić, [*Z. Naturforsch.*]{} 2007, 62A, 495.
A. Ilić, M. Ilić, [*Linear Algebra Appl.*]{} 2009, 431, 2195.
A. Ilić, A. Ilić, D. Stevanović, [*MATCH Commun. Math. Comput. Chem.*]{} 2010, 63, 91.
O. Ivanciuc, T.S. Balaban, A. T. Balaban, [*J. Math. Chem.*]{} 1993, 12, 309.
O. Ivanciuc, T. Ivanciuc, A. T. Balaban, [*J. Chem. Inf. Comput. Sci.*]{} 1998, 38, 395.
O. Ivanciuc, [*J. Chem. Inf. Comput. Sci.*]{} 2000, 40, 1412.
D. Janežić, A. Miličević, S. Nikolić, N. Trinajstić, [*Graph Theoretical Matrices in Chemistry, Mathematical Chemistry Monographs No. 3*]{}, University of Kragujevac, Kragujevac, 2007.
H. Liu, X. F. Pan, [*MATCH Commun. Math. Comput. Chem.*]{} 2008, 60, 85.
B. Lučić, I. Lukovits, S. Nikolić, N. Trinajstić, [*J. Chem. Inf. Comput. Sci.*]{} 2001, 41, 527.
B. Lučić, A. Miličević, S. Nikolić, N. Trinajstić, [*Croat. Chem. Acta*]{} 2002, 75, 847.
D. Plavšić, S. Nikolić, N. Trinajstić, Z. Mihalić, [*J. Math. Chem.*]{} 1993, 12, 235.
R. Todeschini, V. Consonni, [*Handbook of molecular descriptors*]{}, Wiley-VCH, Weinheim, 2000, pp. 209–212.
R. Todeschini, V. Consonni, [*Molecular Descriptors for Chemoinformatics*]{}, Wiley-VCH, Weinheim, 2009, pp. 371–375.
N. Trinajstić, S. Nikolić, S. C. Basak, I. Lukovits, [*SAR QSAR Environ. Res.*]{}, 2001, 12, 31.
D. Urban, T. Keitt, [*Ecology*]{} 2001, 82, 1205.
B. Zhou, [*Int. J. Quantum Chem.*]{} 2006, 107, 875.
B. Zhou, N. Trinajstić, [*Int. J. Quantum Chem.*]{} 2008, 108, 858.
B. Zhou, X. Cai, N. Trinajstić, [*J. Math. Chem.*]{} 2008, 44, 611.
B. Zhou, Z. Du, N. Trinajstić, [*Int. J. Chem. Model.*]{} 2008, 1, 35.
[^1]: Supported by the Research Grant 144007 of Serbian Ministry of Science, the Postdoctoral Science Foundation of Central South University, China Postdoctoral Science Foundation, (No. 70901048, 10871205) and NSFSD (No. Y2008A04, BS2010SF017).
Movie Play-By-Play
One of the most remarkable examples of cell communication is the fight or flight response.
When a threat occurs, cells communicate rapidly to elicit physiological responses that help the
body handle extraordinary situations. The movie depicts just some of the communication and
responses involved in the fight or flight response. Below is a detailed guide to events taking
place in the movie.
Movie Time
Event
0:16
An environmental signal travels into the brain. In response,
the amygdala, a primitive structure in the brain, fires off
a nerve impulse to the hypothalamus (not shown). The
hypothalamus sends a chemical signal to another part of the
brain called the pituitary gland.
Simultaneously, nerve impulses travel from the hypothalamus
along the spinal cord to the adrenal gland (atop the kidneys).
Both the chemical signal (ACTH) and the nerve impulse
initiated in the hypothalamus travel to the adrenal gland.
0:49
In the adrenal gland, the nerve impulse signals chromaffin
cells to release epinephrine (blue molecules, also known as
adrenaline) into the bloodstream. Epinephrine will travel to
many different cell types throughout the body.
0:54
The ACTH (green) previously secreted by the pituitary gland
travels through the bloodstream to cells in another area of the
adrenal gland.
1:01 - 1:35
The Cortisol Production Signaling Cascade:
1:01
On the surface of an adrenal cell, the signaling
molecule ACTH (green, not drawn to scale) docks on an
MC2-R receptor (yellow), causing it to change shape. The
activated receptor stimulates a G protein, which activates
adenylyl cyclase to produce the second messenger cAMP.
cAMP activates Protein Kinase A (PKA), causing it to
release its catalytic subunits (only one is shown here
for simplicity). The catalytic PKA subunit travels to the
mitochondrial membrane and switches on a protein called
steroidogenic acute regulatory protein (StAR, not shown).
1:11
StAR is responsible for mediating the complicated task of
importing cholesterol (yellow) into the mitochondrion.
1:13
Inside the mitochondrion, enzymes convert the cholesterol
into 17-OH-pregnenolone. 17-OH-pregnenolone is released
from the mitochondrion and sent to the endoplasmic
reticulum, where it is converted into 11-deoxycortisol.
1:25
This compound is then sent back to the mitochondrion
where it is finally transformed into the final product,
cortisol. Cortisol leaves the adrenal cell by freely
crossing the cell membrane, and it enters the bloodstream.
1:35
Cortisol will travel through the bloodstream to several
cell types. It will initiate signaling cascades in these cells
resulting in an increase in blood pressure, an increase in
blood sugar levels, and suppression of the immune system
(not shown).
1:42
A view of epinephrine (blue) that was released earlier by
the adrenal gland. From here, the epinephrine will travel
to several cell types, eliciting different responses.
1:45 - 2:20
The Glycogenolysis Signaling Cascade:
1:45
On the surface of a liver cell, epinephrine (blue, not
drawn to scale) binds to an alpha-1 adrenergic receptor
(yellow), causing it to change shape.
1:47
Inside the liver cell, the conformational change of
the alpha-1 adrenergic receptor causes the G protein
complex to become activated and uncoupled. The G
protein (red, left) binds to phospholipase-C (center),
causing it to produce and release the signaling molecule
IP3 (pink, right).
1:58
IP3 binds to receptors on the surface of the endoplasmic
reticulum (ER, green), stimulating the release of calcium
ions (red spheres). The calcium ions activate enzymes that
break down stored glycogen into glucose (glycogenolysis).
The newly formed glucose is transported out of the
liver cell and enters the bloodstream, where it provides
an immediate source of energy for muscle cells
(not shown).
2:28
Simultaneously, epinephrine (blue) travels through the
bloodstream to other cell types.
2:42
In the skin, epinephrine binds to a receptor on an
erector pili smooth muscle cell. This causes a signaling
cascade (similar to the glycogenolysis signaling cascade,
above) that contracts the muscle, raising the hair on the
surface of the skin.
2:56
On the surface of sweat glands, epinephrine binds
to Alpha-1 adrenergic receptors, triggering a signaling
cascade that contracts the gland, squeezing sweat to the
skin's surface.
3:15
In the lungs, epinephrine sets off a signaling cascade
(similar to the cortisol signaling cascade, described
above) that relaxes muscle cells surrounding the
bronchioles to enable increased respiration.
3:26
Epinephrine can have opposite effects (contraction, or
relaxation) depending on the type of signaling machinery
present in the cell. Docking on alpha-1 adrenergic
receptors on the erector pili muscle causes contraction,
while docking on beta-2 adrenergic receptors on
bronchiole muscle cells causes relaxation.
3:51
In the heart, epinephrine acts on the pacemaker cells,
stimulating them to beat faster. As a result, energy and
messenger molecules are circulated throughout the body
at a faster rate.
Located in the heart of North Carolina, between the mountains and the coast, Alamance County is home to nine municipalities, including Burlington, Mebane, Graham, Elon and Saxapahaw, and countless things to see, do and explore.
With a mix of charming small towns, wide-open spaces, and enough historical monuments, mill villages, museums, breweries, stock car races and hiking trails to fill even the most ambitious itineraries, Alamance County is worth a stop. Here are nine places to check out on your next visit.
1. HIKE THE HAW
Lace up your hiking boots and head out on the Haw River Trail. Over 17 miles of the trails follow the flowing Haw River in Alamance County, passing through some of the prettiest natural landscapes in the region. The Haw River corridor is also home to a “paddle trail” that allows you to explore the pristine wilderness from the water.
2. STEP BACK IN TIME
The Alamance County Museum in Burlington is in a 1790s house that highlights the life and influence of textile pioneer Edwin Michael Holt. Check out exhibits showcasing military artifacts, antiques, quilts and pottery. The African American Cultural Arts and History Center tells the stories of local African American history.
3. FIND YOUR INNER CHILD
With hands-on exhibits where kids (and kids at heart) can build bridges, paint on walls, perform in puppet shows and clean giant teeth, the Children’s Museum of Alamance County in Graham is a must-see stop for all ages. The award-winning museum encourages exploration, interaction and creativity.
4. DRIVE THE CIVIL WAR TRAIL
Soldiers crossed through Alamance County in April 1865. On the self-guided tour, visit sites like the Alamance Battleground, the Tribal Center in Mebane and the Cane Creek Meeting House. During the tour, which takes about three hours if you stop at every location, learn how the war impacted local communities and see interesting artifacts of life from the time.
5. RIDE THE CAROUSEL
Head to Burlington City Park to ride one of the 46 hand-carved animals that make up the 1910 carousel. It’s one of just 14 remaining intact Dentzel Menagerie Carousels in the world and is one of the most popular attractions in the park. It is in the process of being restored, so check to see if it’s reopened before planning your visit.
6. CELEBRATE THE SEASON
Alamance County hosts a full calendar of seasonal and special events. Festivals like the Mebane Dogwood Festival, Lil’ John’s Mountain Music Festival, Uncle Eli’s Quilting Party and the Yee Haw! River Paddle attract crowds and offer a great reason to visit the area. Each festival is filled with music, food and fun for all ages.
7. EXPERIENCE THE ARTS
Housed in an 1871 Queen Anne mansion in Graham, the Alamance Arts Council operates a gallery featuring the work of local and regional artists. View current exhibits, shop for unique gifts and explore the gardens for a glimpse into the local arts community.
8. SOAK UP THE SCENE
The towns of Burlington, Elon, Graham and Mebane are the heart of the region. Walk through the vibrant downtown districts for shopping and local eats. Explore downtown museums, shop locally owned boutiques, and peruse antiques shops and galleries for interesting, unique items. While in the area, stop for lunch or coffee at any diner or cafe. These small towns are the perfect starting points for exploring the region.
9. ENJOY GOOD EATS
Alamance County is home to breweries, wineries and farm-to-table restaurants. Order a pint from Haw River Farmhouse Ales, Bright Penny Brewing or Burlington Beer Works. Or sip local vintages at Grove Winery, Iron Gate Vineyards or Wolf Wines. Need a bite to eat? Sample dishes made with the freshest ingredients at The Root or the Mark at Elon.
The Saxapahaw General Store is also worth a stop to shop. The traditional general store stocks everything from gas and snacks to kombucha and filet mignon. It’s billed as “your local five-star gas station.”
For more things to see and do in Alamance County, check out visitalamance.com.
Q1. What are some of the main factors that lead to racial inequality in health?
Racial inequality in health care has been a vexing issue for policymakers and health care systems in the United States. According to Valdez (2016), ethnic minority groups in the U.S., including African Americans and Hispanic Americans, continue to receive lower-quality care than white Americans despite continuing societal progress on racial discrimination and inequality. Bias from healthcare providers is one of the key driving factors of racial disparities in healthcare. Valdez (2016) maintains that the causes of racial inequality are complex and multifaceted, but there is broad agreement that health care providers play a central role in healthcare racial inequalities. Healthcare providers deliver substandard care to ethnic minority groups even when access to insurance is equal for minority groups and whites. For instance, health indicators highlighted by Williams, Lawrence & Davis (2019) show higher morbidity and mortality rates among Black Americans than among white persons.
Socioeconomic inequalities also contribute to racial disparities in health. Williams, Lawrence & Davis (2019) claim that racial disparities in health care are undeniably related, to some degree, to socioeconomic inequity. The authors reference statistics indicating that ethnic minorities are more likely to work in lower-paying jobs that do not offer comprehensive health insurance. Furthermore, minority groups in the U.S. experience high unemployment levels, which inhibits their access to high-quality health care. However, Valdez (2016) notes that racial health disparities are more complex than limited access to health care. They recommend that health care providers acquire a certain level of understanding regarding citizens' primary health concerns regardless of race. Citizens also need to understand primary health concerns, including identifying symptoms of specific illnesses and taking precautionary measures such as eating a healthy diet and exercising regularly.
Q2. How does racism contribute to illness and health disparities?
Racial health disparities are immense and prevalent in the United States. Valdez (2016) maintains that African Americans have a higher death rate than whites in almost all of the fifteen leading causes of death, including cancer, heart disease, stroke, diabetes, and hypertension. Other data indicate that over 100,000 African Americans die prematurely each year due to racial health disparities. Moreover, Williams, Lawrence & Davis (2019) note that racism and discrimination are deeply rooted in society's social, economic, and political systems. These disparities result in unequal treatment of minority groups in all aspects of life, including health. Therefore, racism contributes to illness and health disparities in that members of minority groups often receive low-quality healthcare and are unlikely to receive adequate preventive health care services.
Racism also leaves minority groups with worse outcomes for health conditions that require high-quality care, such as cancer, diabetes, heart disease, and hypertension, among others (Williams, Lawrence & Davis, 2019). Furthermore, social factors affecting ethnic minority groups in the U.S., such as unemployment and unstable housing, lead to poor health. Additionally, Williams, Lawrence & Davis (2019) maintain that the life expectancy of ethnic minority groups in the U.S. is often shorter than that of their white counterparts by about a decade. This is because people of color face higher risks of diseases such as stroke, cancer, diabetes, and mental illness, which stem partly from the pressure and trauma of being ignored, oppressed, and targeted for violence and other forms of abuse.
References
Valdez, Z. (2016). Beyond black and white: A reader on contemporary race relations
Williams, D. R., Lawrence, J. A., & Davis, B. A. (2019). Racism and health: evidence and needed research. Annual Review of Public Health, 40, 105-125.
An April 16 column by psychologist John Rosemond (“Fictional ADHD debate continues”) caused us distress on behalf of the youth that we serve at Child Guidance & Family Solutions. In it, the author not only asserts that Attention Deficit Hyperactivity Disorder (ADHD) “does not exist.” He also insists that there is no “proof” for the existence of other mental illnesses of youth.
In addition, Rosemond asserts that psychotropic medications used to treat these disorders “do not outperform placebos in clinical trials,” a statement that is scientifically untrue.
Such inaccurate statements are important contributors to the stigma of mental illness in children, and thus should be refuted. They create negative attitudes toward mental disorders, which can result in feelings of guilt and shame in parents of children with mental illness, and thereby reduce the likelihood that they will seek treatment.
Research on mental illness in children has generated statistics that cannot be ignored. According to the National Institute of Mental Health, 20 percent of youth ages 13 to 18 live with a mental health condition: 11 percent with a mood disorder, 10 percent with a behavior disorder, and 8 percent with an anxiety disorder.
Furthermore, the research shows that 50 percent of all lifetime cases of mental illness begin by age 14, and 75 percent by age 24.
Despite these compelling numbers just 20 percent of youth with mental illness ever seek treatment during childhood or adolescence. As a result, the other 80 percent suffer silently into adulthood, developing increasing levels of disability that are progressively difficult to reverse.
This accounts, in part, for the fact that (according to the Centers for Disease Control and Prevention) the U.S. suicide rate has increased 24 percent during the past 15 years; suicide is the third-leading cause of death for people ages 15 to 24 and the second-leading cause for those ages 15 to 34.
Closer to home, the last time Summit County Public Health and the Alcohol, Drug Addiction and Mental Health Services Board administered the Youth Risk Behavior Survey to students they were surprised to discover that 9.7 percent of county middle school students reported that they had attempted suicide one or more times during the 12 months before the survey.
Child Guidance & Family Solutions is acutely aware of these facts, and thus for over 75 years our mission has been to combat the stigma of mental illness in children and youth, and treat children (and adults) afflicted by those illnesses. The stigma can be addressed in a number of ways, but education to dispel the myths of mental illness is a core strategy.
To that end, the two important messages that we convey to the community every day are that mental illness is real and it is treatable.
Those who question the reality of mental illness are ignoring the robust research showing its genetic and neurobiological underpinnings. Advances in neuroscience have provided clear evidence that mental illness is caused by a disruption in the usual functioning of the brain, the most complex and least understood organ in the human body.
Recent advances in brain scanning technology, though, are demonstrating how brain function is disrupted in mental illness, and how that is reversed with treatment. While we do not yet have a full understanding of how and why these disruptions occur, that does not invalidate the diagnosis of mental illness, nor does it mean that we shouldn’t treat it.
It is also important to know that treatments for mental illness are at least as effective as the treatments for many physical illnesses, such as heart disease. Thanks to advances in research there are more treatment approaches for specific mental illnesses, referred to as “evidence-based treatments” (EBTs), which have proved to be effective in rigorous research studies.
Child Guidance & Family Solutions currently provides EBTs for treatment of a variety of mental illnesses of children and adults (including — but not limited to — anxiety, depression, ADHD and first episode psychosis), as well as EBTs designed to strengthen parent management skills, improve children’s school readiness and reduce behavior problems.
We are constantly seeking to add newly developed EBTs to our array of services as they become available.
Our children’s health is a critical component of our community’s success, as there is no health without mental health. To deny the importance of this foundation of health and happiness for all children and youth in our community is both short-sighted and foolish. Investing in our children is an investment in the well-being of the entire community.
Talbott is the president and chief executive of Child Guidance & Family Solutions. Dr. Jewell is vice president and medical director of the organization.
Under this relationship, leaders identify the specific talents of each of their employees, motivate them and coach them towards utilizing their talents effectively. Leaders are also responsible for building trust between them and their subordinates. Leaders involve guiding a group of people toward achieving the best result in and a company. The leadership of a company mainly involves creating a vision for the company. It involves modeling the vision, forming teams, influencing them and aligning people to achieve the set goals.
Sharing decision making with your employees will let them gain respect for the leader and become more determined. This style will build strength between you and your employees. Laissez-faire: this style is used when the leader is lazy or distracted; it is more of a "do what you want" style. It can be used when the team is highly capable and motivated and does not need close monitoring or supervision. This style can cause failure when the leader expects the group to make decisions among themselves while they are unsure about what they need to achieve and how they need to accomplish the task.
I could find it hard to accept other staff members' values and beliefs, and feel mine are the right ones. 2.1. Constructive feedback gives people the chance to develop within their role if it is needed. If the feedback is good then the person will feel good and confident, but they could take offence at constructive feedback. In my setting we have regular supervision with the team leader, who will give constructive and positive feedback.
Also, even if the other person replies verbally, his or her body language may show that they have not really understood or agreed. Observation helps us understand the effectiveness of communication. Aiv Explain why it is important to find out about an individual’s: a) Communication and language needs b) Wishes and preferences People usually feel satisfied when they communicate well with individuals. Good communication enables individuals’ needs to be met and for care and support workers to feel they are not just doing the job but doing it in a way that allows individuals to have choice and control over their lives. Good communication will enable you to build strong professional relationships based on trust.
This excellent service can be done by those that care about our world and the people that inhabit it. Service can be serving others in some helpful way. The main idea of service is that the individual isn’t the most important and other people could use help and good advice. Being a good leader means that the leader knows how to serve the people he is working for. Leadership is another trait that can’t be easily developed, but only with thorough diligence and dedication.
When the time comes and I address things that need to be fixed, I do not receive negative feedback from my employees. They all understand the issue at hand and try to fix it, based on their roles and tasks. Overall, I believe that one can be a successful leader if they can be a successful manager. Both of these titles work best hand-in-hand. A leader is someone that leads a team toward the main goal.
This is not an effective way to get the best result from a team, but it has some advantages in situations where there is pressure to get the task done, like in the armed forces. This leadership style may use threats or intimidation to make sure that subordinates do what the leader requires. This could bring down subordinates by ignoring their knowledge and input. The leader monitors the work and each individual's performance; which is good to make sure that everyone is working. People are motivated by being rewarded; this brings encouragement helping the person who has done well do a better job and help them achieve more in the job.
Best practices for supervisors. Axia College of University of Phoenix Judith Hein For a supervisor to be able to effectively manage those employees under them, that supervisor needs to know some of the best practices in the areas of communication. Orientation and training, improving productivity, performance appraisals, resolving conflicts and improving relations among employees are examples of said practices. These practices will help to minimize problems within the company in each of these areas. A supervisor needs to be able to keep these practices in mind when working daily with their employees because these are common problem areas that can be displayed on a day by day basis.
By continuing to build and train teams to be effective ultimately can produce positive results for any organization. Along with this, proper motivation and supervision is required. In some instances, teams are a direct reflection of their management and supervisors, though there are some instances in which they are not. Managers and supervisors that lack the skills to effectively train and guide their team will only set the team up for possible failure. Managers should have direct influence in the way teams operate and function.
Discuss appropriate nursing interventions based on this scenario. Would the RN support the continued use of honey or would the RN emphasize the need to only use the Silvadene, which was ordered by the health care provider, for wound care? What else would the nurse consider? What evidence would support the RN’s decision?
The following articles may provide information to help you create an evidence-based response to this issue.
• Belcher, J. (2012). A review of medical-grade honey in wound care. British Journal of Nursing, 21(15), S4-S9.
• Connor-Ballard, P.A. (2009). Understanding and managing burn pain: Part 1. American Journal of Nursing, 109(4), 48-56.
• Connor-Ballard, P.A. (2009). Understanding and managing burn pain: Part 2. American Journal of Nursing, 109(5), 48-56.
In your responses to your peers' posts, provide constructive and insightful comments that go beyond simple agreement or disagreement.
Compose your work using a word processor and save it as plain text or an .rtf file on your computer. When you're ready to make your initial posting, please click on the "Create Thread" button and copy/paste the text from your document into the message field. Be sure to check your work and correct any spelling or grammatical errors before you post it.
You are required to post your initial response to the discussion question by Wednesday at 11:59 PM (EST) of the week it is due. Your initial post cannot be a response to another student’s post. Students who do not submit their response to the discussion questions as noted above will have 10 points deducted from their discussion question grade for that week.
This activity will be assessed according to the AD Nursing: Discussion Question Rubric
| https://essaychimp.com/2017/10/16/registered-nurse-39/
- Demonstrates accountability for professional development that improves the quality of professional practice and the quality of patient care.
- Makes recommendations for the improvement of clinical care and the health of the workplace and welcomes and participates in change initiatives.
- Leads by investing and building healthy relationships among colleagues and other disciplines.
- Shows the ability to set priorities and demonstrates an understanding of shared governance and begins participating at the unit level.
- Begins to serve as an engaged member of a team supporting colleagues in service to patients and families and may participate in task forces or other initiatives.
- Demonstrates a basic knowledge of research, how it affects practice and who/what resources are available to assist with evidence based practice by asking questions, demonstrating interest, participating in unit-based journal clubs.
- Clinical practice demonstrates knowledge of how quality and innovation impact patient satisfaction, safety, and clinical quality outcomes.
- Identifies opportunities for improvement in the clinical area.
- Demonstrates the ability to communicate clearly and effectively with all members of the health care team.
- Begins to demonstrate awareness of cultural diversity, horizontal violence and impairment in the health professions.
- Cares for patients and self by supporting safety in the workplace.
- Actively engages in clinical development and/or RN residency program for new hires.
- Requests opportunities to learn safe, accountable and autonomous practice from more experienced nurses.
- Seeks, accepts and utilizes performance feedback from peers, preceptors and unit/department Leaders as a learning opportunity and to improve practice.
- Demonstrates enthusiasm for continuous learning and identifies and creates a plan for the continuation of learning and development.
- Identifies patient and family needs for education and provides basic education to support the episode of care.
- Seeks professional development and involvement through membership in a professional nursing organization and/or reading professional literature on a regular basis.
- Applies basic nursing knowledge and skills within the framework of Relationship Based Care, using the nursing process to meet the clinical, psychosocial and spiritual needs of the patient and family.
- Communicates effectively, both verbally and in documentation.
- Demonstrates critical thinking in the identification of clinical, social, safety, spiritual issues within the episode of care.
- Learns to incorporate the goals of national professional organizations as well as those of the business unit and health system to improve patient safety, quality and satisfaction.
- Formulates a plan of care and daily goals that consider individual patient needs.
- Demonstrates initiative and seeks formal and informal opportunities to improve clinical practice.
- Seeks guidance and asks questions to continuously improve nursing practice.
- Creates a caring and compassionate patient-focused experience by building healing relationships with patient, families and colleagues.
- Identifies ethical situations within patient care or within the workplace and seeks assistance.
- Professionally accepts assignments that gradually increase patient load and complexity.
Qualifications
- Minimum of six months' experience preferred.
- BSN preferred.
- Ability to establish and maintain positive, caring relationships with executives, managers, physicians, non-physician providers, ancillary and support staff, other departments, and patients/families.
- Ability to work productively and effectively within a complex environment, handle multiple/changing priorities and specialized equipment.
- Good clinical judgment with critical thinking, analytical and problem solving abilities required as related to various aspects of patient care.
- Critical thinking skills necessary to exercise and to lead others in application of the nursing process.
- Mobility and visual manual dexterity.
- Physical stamina for frequent walking, standing, lifting and positioning of patients.
Licensure, Certifications, and Clearances:
UPMC-approved national certification preferred. Current Pennsylvania licensure as a Registered Professional Nurse. CPR required based on AHA standards that include both a didactic and skills demonstration component within 30 days of hire. ACLS within 1 year of hire or transfer into department. NIH within 90 days of hire or transfer into department.
Advanced Cardiac Life Support (ACLS)
Basic Life Support (BLS) OR Cardiopulmonary Resuscitation (CPR)
NIH Stroke Scale (NIH)
Registered Nurse (RN)
Act 34
OAPSA
UPMC is an Equal Opportunity Employer/Disability/Veteran
Total Rewards
More than just competitive pay and benefits, UPMC's Total Rewards package cares for you in all areas of life, because we believe that you're at your best when receiving the support you need: professional, personal, financial, and more.
Our Values
At UPMC, we’re driven by shared values that guide our work and keep us accountable to one another. Our Values of Quality & Safety, Dignity & Respect, Caring & Listening, Responsibility & Integrity, Excellence & Innovation play a vital role in creating a cohesive, positive experience for our employees, patients, health plan members, and community. Ready to join us? Apply today. | https://careers.upmc.com/jobs/6083903-nurse-opportunities-full-time-part-time-and-casual-available |
Autoionization resonances are intensively investigated both experimentally and theoretically. A lot of attention has recently been devoted to theoretical studies of strong-field ionization to an autoionization resonance. (For an extensive list of references, see [1].) However, in attempting to approach physically realizable conditions by including a finite width of the phase-fluctuating laser light, inhomogeneous broadening, spontaneous emission, and the existence of other atomic levels and continua, all these papers remain unrealistic in one important respect: they all assume a sudden switching-on of the strong laser signal, which otherwise remains time-independent.
| https://rd.springer.com/chapter/10.1007/978-1-4757-0605-5_26
It’s the 100th anniversary of Flag Day, celebrated each June 14 since the 1880s but not officially recognized as a holiday until President Woodrow Wilson proclaimed it such on May 30, 1916.
The American flag is a well-known icon worldwide, and is often depicted in paintings.
In the 1950s, American artist Jasper Johns began painting familiar objects such as flags, maps and numbers, in unique ways to inspire a reexamination of iconic images. “I wanted to make people see something new. I am interested in the idea of sight, in the use of the eye,” Johns explained. “I am interested in how we see and why we see the way we do.”
As an optometrist, I am also interested in how and why we see the way we do. Let’s examine the optics behind one of the artist’s iconic paintings.
Jasper Johns’ 1968 painting “Flags” depicts two flags in radically different color schemes from the traditional red, white and blue of the American flag.
Look closely at the painting. First, stare intently at the top flag, the one in green, orange and black. Focus your gaze on the white dot in the middle of the image, and keep staring at it for a solid 60 seconds. Then, close your eyes briefly and lower your gaze to focus immediately on the black dot at the center of the gray flag painted below it.
What do you see?
Magically, as you lower your focus to the bottom flag, a familiar red, white and blue color scheme will appear to float faintly atop the gray flag. (If it doesn’t work the first time, try again, focusing even more intently on the green, orange and black flag.)
Why does this happen?
The optical illusion occurs because, after staring intently at a group of colors for an extended period of time, the color receptors in our eyes that recognize those specific colors become fatigued. More precisely, cone cells sensitive to the color differences between red/green and blue/orange are overstimulated and get tired. When we first look away, our eyes briefly see the exact opposite of those colors because different, fresh visual receptors are stimulated.
On the color wheel, the colors directly opposite of green and orange are red and blue. Thus, Jasper Johns’ optical illusion works perfectly to recreate the colors of our American flag. It’s an interesting exercise, well-executed by Jasper Johns’ painting from nearly five decades ago. | https://www.ridgefieldvisioncenter.com/a-flag-day-optical-illusion/ |
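The complementary-pair logic behind the afterimage can be sketched in a few lines of code. This is an illustrative toy, not vision science: it simply encodes the opposite-color pairs the article names (red/green, blue/orange, black/white) as a lookup table and applies it to the colors of Johns' top flag.

```python
# Complementary color pairs as described in the article: staring fatigues
# the receptors for one member of each pair, so the afterimage shows the other.
OPPONENT = {
    "green": "red", "red": "green",
    "orange": "blue", "blue": "orange",
    "black": "white", "white": "black",
}

# Colors of the top flag in Jasper Johns' "Flags" (1968):
# green stripes, an orange canton, black stars.
top_flag = {"stripes": "green", "canton": "orange", "stars": "black"}

# Predicted afterimage: the familiar red, white and blue.
afterimage = {part: OPPONENT[color] for part, color in top_flag.items()}
print(afterimage)  # {'stripes': 'red', 'canton': 'blue', 'stars': 'white'}
```

Note that the pairing is its own inverse: staring at the real red, white and blue flag would, by the same logic, produce a green, orange and black afterimage.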
Is another future possible? So-called 'late modernity' is marked by the escalating rise and proliferation of uncertainties and unforeseen events brought about by the interplay between, and patterning of, social–natural, techno–scientific and political–economic developments. The future has indeed become problematic. The questions of how heterogeneous actors engage futures, what intellectual and practical strategies they put into play, and what the implications of such strategies are have become key concerns of recent social and cultural research addressing a diverse range of fields of practice and experience. Exploring questions of speculation, possibilities and futures in contemporary societies, Speculative Research responds to the pressing need not only to critically account for the role of calculative logics and rationalities in managing societal futures, but to develop alternative approaches and sensibilities that take futures seriously as possibilities and that demand new habits and practices of attention, invention and experimentation.
Table of Contents
Introduction
1. The Lure of Possible Futures: On Speculative Research, (Martin Savransky, Alex Wilkie & Marsha Rosengarten)
Part 1: Speculative Propositions
Section Introduction, (Martin Savransky, Marsha Rosengarten, Alex Wilkie)
2. The Wager of an Unfinished Present: Notes on Speculative Pragmatism, (Martin Savransky)
3. Speculative Research, Temporality and Politics, (Rosalyn Diprose)
4. Situated Speculation as a Constraint on Thought, (Michael Halewood)
Part 2: Speculative Lures
Section Introduction, (Marsha Rosengarten, Martin Savransky, Alex Wilkie)
5. Pluralities of Action, a Lure for Speculative Thought, (Marsha Rosengarten)
6. Doing Speculation to Curtail Speculation, (Alex Wilkie & Mike Michael)
7. Retrocasting: Speculating about the Origins of Money, (Joe Deville)
Part 3: Speculative Techniques
Section Introduction, (Alex Wilkie, Marsha Rosengarten, Martin Savransky)
8. Sociology’s Archive: Mass-Observation as a Site of Speculative Research, (Lisa Adkins)
9. Developing Speculative Methods to Explore Speculative Shipping: Mail Art, Futurity and Empiricism, (Rebecca Coleman)
10. Creating Idiotic Speculators: Disaster Cosmopolitics in the Sandbox, (Michael Guggenheim, Bernd Kräftner & Judith Kröll)
11.'Too Sweet to Kill' – A Contribution to the Art of Cosmopolitics, (Michael Schillmeier & Yvonne Lee Schultz)
Part 4: Speculative Implications
Section Introduction, (Martin Savransky, Alex Wilkie, Marsha Rosengarten)
12. On Isabelle Stengers’ ‘Cosmopolitics’: A Speculative Adventure, (Vikki Bell)
13. Aesthetic Experience, Speculative Thought, and Civilized Life, (Michael L. Thomas)
14. The Lure of the Possible: On the Function of Speculative, (Didier Debaise)
Afterword
15. Postscript, (Monica Greco)
View More
Editor(s)
Biography
Alex Wilkie is a sociologist and a senior lecturer at the Department of Design Goldsmiths, University of London. His research interests combine aspects of social theory, science and technology studies with design research that bears on theoretical, methodological and substantive areas including, but not limited to: energy-demand reduction, design practice and design studios, healthcare and information technologies, human-computer interaction design, inventive and creative practices, user involvement and participation in design, practice-based design research, process theory and speculative thought. Alex is a director of the Centre for Invention and Social Process (CISP), alongside Michael Guggenheim and Marsha Rosengarten, and convenes the Ph.D. programme in Design at Goldsmiths. He has recently co-edited Studio Studies: Operations, Topologies and Displacements with Ignacio Farias (Routledge, 2015) and he is preparing the edited collection Inventing the Social with Michael Guggenheim and Noortje Marres (Mattering Press). Alex is also a founding editor of Demonstrations, the journal for experiments in social studies of technology.
Martin Savransky is a lecturer at the Department of Sociology, Goldsmiths, University of London, where he teaches philosophy, social theory and methodology of social science. He works at the intersection of process philosophy, the philosophy and methodology of the social sciences, and the politics of knowledge. He has published widely on the ethics and politics of social inquiry, postcolonial ontologies, and social theory. He is the author of The Adventure of Relevance: An Ethics of Social Inquiry (Palgrave Macmillan, 2016).
Marsha Rosengarten is Professor in Sociology, Director of the Unit of Play and Co-Director of the Centre for the Study of Invention and Social Process, Department of Sociology, Goldsmiths, University of London. She is the author of HIV Interventions: Biomedicine and the Traffic in Information and Flesh and co-author with Mike Michael of Innovation and Biomedicine: Ethics, Evidence and Expectation in HIV. Recent articles focus on biomedical research within the field of HIV, Ebola and Tuberculosis drawing from feminist and process oriented approaches. Her work offers alternative ways of conceiving intervention, bioethics, randomized controlled trials and, hence, the nature of scientific evidence. In 2009 she co-founded the international Association for the Social Sciences and Humanities in HIV (ASSHH). Although mostly known for her empirically oriented work on HIV and direct engagement with the biomedical field, in 2013 as Director of the Unit of Play in collaboration with Martin Savransky, Jennifer Gabrys and Alex Wilkie she initiated an intellectual project on speculation. The project has since involved various seminars and workshops and public presentations which, to date, have resulted in the manuscript Speculative Research.
Reviews
"In this remarkable and innovative collection of essays, the authors give renewed value, meaning and, above all, empirical relevance to the practice of speculation. Speculation is rescued from the hands of the speculators!"
Andrew Barry, Chair of Human Geography, University College London.
"This beautifully written collection of essays represents an exciting exploration of the contemporary importance of making speculation centre stage. The book is a landmark in the philosophy and methodology of social science. It does not just illuminate the value of process philosophy – it also provides methodological and practical approaches to doing socially significant research. It is a must read for anyone that wants to take the turn to ontology and affect seriously."
Joanna Latimer, Professor of and Chair in Sociology, Science and Technology. University of York.
"Speculative Research is a truly unique collection that offers much needed inspiration for thinking beyond present conditions and the futures they seem to make impossible. It invites us to engage with a generative tradition of speculative thought that has yet to fulfil its radical practical potential. The stimulating contributions to this volume offer remarkable examples of what thinking speculatively can mean in encounters with specific research fields and problems – faithful to the empirical but not bounded by it, an adventurous yet careful inquiry. In composing this volume, Wilkie, Savransky and Rosengarten have achieved both a generous prolongation and innovative experimentation with speculative thought."
Maria Puig de la Bellacasa, Associate Professor of Science, Technology and Organisation, University of Leicester.
"Speculative Research is a remarkably prescient book that opens up new vistas of experimental thought and practice for contemporary social and cultural research. In reclaiming the question of the speculative from its more recent and notorious variants, this collection crystalizes how the possibilities of more–than–human futures can be engaged with empirical and conceptual assiduousness without relinquishing the challenges and risks of what is to come and what is possible to the logics of the probable. As the editors and contributors insist, developing a speculative sensitivity involves the care for and acceptance of knowledge practices that are part of the cultivation of new futures."
Antoine Hennion, Professor & Director of Research, Centre de Sociologie de l’Innovation, Mines ParisTech, Paris.
"Redeeming speculation against its negative connotations, this exciting book exhibits the multiple potentials of speculative social research. Engaging in a struggle against the deadening effects of probability and inevitability, it opens up for thinking and making alternative futures, inducing readers to come along for the ride."
Casper Bruun Jensen, Associate Professor, Department of Anthropology, Osaka University. | https://www.routledge.com/Speculative-Research-The-Lure-of-Possible-Futures/Wilkie-Savransky-Rosengarten/p/book/9780367895129 |
Rije (リジェ) is a guard captain at the village of Xandria in Ys V: Kefin, The Lost City of Sand. She seems to be working with Dorman for mysterious reasons...
History
Rije is actually the last member of the royalty of the ancient city of Kefin. When the seal on Kefin weakened three years before the events of the game, she escaped along with some of her soldiers. She manipulated Dorman, tempting him with the power of alchemy that lies within Kefin, so that he would start collecting the six crystals, which act as keys to unsealing the ancient city.
In Games
Ys V: Kefin, The Lost City of Sand
Adol first encounters Rije when he arrives at the village of Xandria. Just as soldiers demand that Adol be inspected due to an increase in robberies, Rije shows up and orders them to let him go. While she initially appears favorable to Adol, it becomes increasingly apparent that she is plotting something insidious with Dorman as Adol finds more crystals.
Later on, she shows her true colors after her partner Dorman is defeated. Calling Dorman useless, she holds Niena hostage and demands that Adol hand over all the crystals. After Adol complies, she reveals her identity as royalty of Kefin and breaks the seal, causing the ancient city to appear once again. She then teleports to Kefin with Niena. In order to rescue Niena and prevent the city of Kefin from harming the rest of the world with its alchemy, Adol travels to Kefin.
After aiding the resistance against Kefin's oppressive regime, Adol sets off to destroy the Philosopher's Stone, the source of Kefin's alchemy. On his way to the core of Kefin, where the Philosopher's Stone is kept, Adol encounters Rije again. Rije states that she has captured the Evil gang, Niena, and Stan, and will use them as sacrifices for the Philosopher's Stone. Adol chases after her and reaches the core, where he rescues his allies. There, Adol confronts Rije and Jabir, the alchemist who has been controlling Kefin from the shadows. In the ensuing battle, Adol emerges the victor and destroys the Philosopher's Stone. Without its source of power, Kefin starts to disappear. Adol and his allies escape with the citizens of Kefin, but Rije, with her goal completely thwarted, accepts her fate and remains in the crumbling city.
Trivia
- In the original concept, Rije was a man. Also, "he" wasn't royalty of Kefin but a loyal retainer of the King of Kefin, a character who didn't appear in the game. In the end, "he" would have been defeated by Adol in a battle. | https://isu.fandom.com/wiki/Rije
Unisense can make oxygen microsensors with a tip size as small as 3-5 µm. This enables the insertion of the sensors into soft living tissue like the brain in vivo. For instance, Offenhauser et al. (2005) used small oxygen microsensors to study the coupling of neural activity and localized oxygen consumption in rat cerebral cortex tissue.
The authors found a relationship between the cerebral blood flow and the tissue oxygen tension, suggesting the presence of a tissue oxygen reserve.
Functional neuroimaging signals are generated, in part, by increases in cerebral blood flow (CBF) evoked by mediators, such as nitric oxide and arachidonic acid derivatives that are released in response to increased neurotransmission. However, it is unknown whether the vascular and metabolic responses within a given brain area differ when local neuronal activity is evoked by an activity in the distinct neuronal networks. In this study we assessed, for the first time, the differences in neuronal responses and changes in CBF and oxygen consumption that are evoked after the activation of two different inputs to a single cortical area. We show that, for a given level of glutamatergic synaptic activity, corticocortical and thalamocortical inputs evoked activity in pyramidal cells and different classes of interneurons, and produced different changes in oxygen consumption and CBF. Furthermore, increases in stimulation intensities either turned off or activated additional classes of inhibitory interneurons immunoreactive for different vasoactive molecules, which may contribute to increases in CBF. Our data imply that for a given cortical area, the amplitude of vascular signals will depend critically on the type of input, and that a positive blood oxygen level-dependent (BOLD) signal may be a consequence of the activation of both pyramidal cells and inhibitory interneurons.
In the awake brain, the global metabolic rate of oxygen consumption is largely constant, while variations exist between regions dependent on the ongoing activity. This suggests that control mechanisms related to activity, that is, neuronal signaling, may redistribute metabolism in favor of active networks. This study examined the influence of gamma-aminobutyric acid (GABA) tone on local increases in cerebellar metabolic rate of oxygen (CeMR(O(2))) evoked by stimulation of the excitatory, glutamatergic climbing fiber-Purkinje cell synapse in rat cerebellum. In this network, the postsynaptic depolarization produced by synaptic excitation is preserved despite variations in GABAergic tone. Climbing fiber stimulation induced frequency-dependent increases in synaptic activity and CeMR(O(2)) under control conditions. Topical application of the GABA(A) receptor agonist muscimol blocked the increase in CeMR(O(2)) evoked by synaptic excitation concomitant with attenuation of cerebellar blood flow (CeBF) responses. The effect was reversed by the GABA(A) receptor antagonist bicuculline, which also reversed the effect of muscimol on synaptic activity and CeBF. Climbing fiber stimulation during bicuculline application alone produced a delayed undershoot in CeBF concomitant with a prolonged rise in CeMR(O(2)). The findings are consistent with the hypothesis that activity-dependent rises in CeBF and CeMR(O(2)) are controlled by a common feed-forward pathway and provide evidence for modification of cerebral blood flow and CMR(O(2)) by GABA. | https://www.unisense.com/app_brain_oxygen_measurements/ |
Review: 2013-05-25, nice and close fight with some spectacular exchanges in the last part of the bout: Carl Froch vs Mikkel Kessler II gets three stars. Carl Froch (30-2-0, 22 KOs) entered as the No.2 super middleweight in the world while Mikkel Kessler (46-2-0, 35 KOs) entered as the No.4. In their first fight Kessler defeated Froch by unanimous decision (October 24, 2010); this second fight is valid for the IBF and WBA Super World super middleweight titles. Watch the video!
Review: 2010-04-24, even though, for us, it was not a candidate for fight of the year 2010, the first fight between Mikkel Kessler and Carl Froch can be considered a classic; it came very close to getting four stars thanks to a great performance by both boxers, some tough exchanges and an exciting ending! The undefeated Carl Froch entered the fight with a record of 26-0-0 (20 knockouts) while Mikkel Kessler had a record of 42-2-0 (32 knockouts). Kessler vs Froch was valid for the WBC super middleweight title (Froch's third defense) and was also part of the Super Six World Boxing Classic, the super middleweight tournament won by Andre Ward. The rematch Froch vs Kessler II is scheduled to take place May 25, 2013 at the O2 Arena in London (their first fight was held in Denmark, Kessler's homeland). Watch the video!
Review: 2012-12-08, nice fight with three knockdowns between Mikkel Kessler and Brian Magee: three stars. Mikkel Kessler (45-2-0) entered as the No.5 super middleweight in the world while Brian Magee (36-4-1) entered as the No.16. Kessler vs Magee is valid for the WBA World super middleweight title. Watch the video!
Review: 2012-05-19, with both fighters going to the canvas and thanks to one of the best knockouts of 2012, the fight between Mikkel Kessler and Allan Green gets four stars. Mikkel Kessler (44-2-0) entered as the No.6 light heavyweight in the world while Allan Green (31-3-0) entered as the No.19. Kessler vs Green is valid for the vacant WBC Silver light heavyweight title. Watch the video!
Review: 2011-06-04, Kessler returns to battle after more than a year against Mehdi Bouadla (former IBF International super middleweight champion) and the fight is exciting, with four knockdowns: four stars. Mikkel Kessler entered the fight with a record of 44-2-0 while Mehdi Bouadla had a record of 22-3-0. Kessler vs Bouadla is valid for the vacant WBO European super middleweight title. Watch the video!
Review: 2007-11-03, Joe Calzaghe vs Mikkel Kessler was a very exciting fight (between two undefeated boxers) valid for the WBA, WBC and WBO super middleweight titles. Calzaghe (43-0-0) had won his WBO title in 1997 (against Chris Eubank) and had already defended it twenty times, but he was 35 years old, and Kessler (39-0-0), seven years younger, had the potential to inflict the first defeat on the "Italian Dragon"... but in front of more than 50,000 spectators, Calzaghe threw more than 1,000 punches in twelve rounds (against Kessler's 585). Watch the video! | https://www.allthebestfights.com/tag/mikkel-kessler-fight-videos/
Mission & Goals
Our mission is to connect the residents of Kennett Township to our natural beauty and to each other through sidewalks and trails that promote health, safety and a sense of community. We are developing and promoting a trail and sidewalk plan to create a network which links major open spaces, parks, public facilities, and neighborhoods in the Township and beyond.
Our goal is to provide increased walkability and recreational opportunities throughout the Township, via sidewalks and trails.
Understanding that trail development can be a lengthy and complicated process, we coordinate with township officials and private organizations to promote a regional approach to trails and sidewalks.
Kennett Greenway
This 10-12 mile Greenway backbone is planned to connect Kennett area residents and visitors to the area’s major nature preserves and parks, and to the many amenities of highly walkable Historic Kennett Square. The Greenway will also connect to existing and planned neighborhood trail systems in Kennett Township, and nearby trails in New Garden Township, New Castle County, DE and beyond. Portions of the corridor are completed, but large sections are awaiting easement decisions and construction planning.
Goals for Sidewalks
Our long-range goal is to connect Kennett Township population centers to the Borough via sidewalks. Walking on a safe, connected network of sidewalks to restaurants, shopping, schools, athletic fields, etc. enhances the quality of life for everyone.
Why Establish Trails & Sidewalks?
Trails and sidewalks positively impact communities not only by providing recreation and transportation opportunities, but also by influencing economic and community development.
Benefits
Some of the many benefits include:
Making communities better places to live by preserving and creating open spaces
Linking neighborhoods with shopping, work and school
Encouraging physical fitness and healthy lifestyles
Functioning as a buffer between the ‘built’ and the natural environment
At the University of California, Berkeley, we are committed to creating a community that fosters equity of experience and opportunity, and ensures that students, faculty, and staff of all backgrounds feel safe, welcome and included. Our culture of openness, freedom and belonging make it a special place for students, faculty and staff.
The University of California, Berkeley, is one of the world's leading institutions of higher education, distinguished by its combination of internationally recognized academic and research excellence; the transformative opportunity it provides to a large and diverse student body; its public mission and commitment to equity and social justice; and its roots in the California experience, animated by such values as innovation, questioning the status quo, and respect for the environment and nature. Since its founding in 1868, Berkeley has fueled a perpetual renaissance, generating unparalleled intellectual, economic and social value in California, the United States and the world.
We are looking for equity-minded applicants who represent the full diversity of California and who demonstrate a sensitivity to and understanding of the diverse academic, socioeconomic, cultural, disability, gender identity, sexual orientation, and ethnic backgrounds present in our community. When you join the team at Berkeley, you can expect to be part of an inclusive, innovative and equity-focused community that approaches higher education as a matter of social justice that requires broad collaboration among faculty, staff, students and community partners. In deciding whether to apply for a position at Berkeley, you are strongly encouraged to consider whether your values align with our Guiding Values and Principles, our Principles of Community, and our Strategic Plan.
At UC Berkeley, we believe that learning is a fundamental part of working, and our goal is for everyone on the Berkeley campus to feel supported and equipped to realize their full potential. We actively support this by providing all of our staff employees with at least 80 hours (10 days) of paid time per year to engage in professional development activities. To find out more about how you can grow your career at UC Berkeley, visit grow.berkeley.edu.
Departmental Overview
Working within the Office of the Dean of Students and under the supervision of the Assistant Dean of Students, this position exists to coordinate the campus' response to students experiencing varying degrees of distress. The Center for Support and Intervention is a unit that provides institutional responses to care for students and the overall Cal community through the framework of the Student of Concern Committee. This position will serve as a Case Manager in the Office of the Dean of Students, coordinating with Counseling and Psychological Services, Residential Life, Center for Student Conduct, the Basic Needs Center, Financial Aid, UC Police Department (UCPD), Legal Counsel and other administrators as appropriate to address the needs of students who are having trouble in areas that may include academics, mental health, basic needs, discipline, family relationships, and social adjustment. This will be done through assessments, consultations, interventions, referrals, and follow-up services. The incumbent fields calls, emails and referrals regarding distressed students and responds to student of concern cases. The position is non-counseling and non-therapeutic.
Application Review Date
The First Review Date for this job is: September 26, 2022
Responsibilities
General Case Management
- Provide case management and organization for students of concern cases. Serve as a strategist with students experiencing distress. Connect students with the appropriate resources on and off campus. Identify and document the network of campus and community services to meet specific needs related to academic stress, legal issues, mental health services, financial support agencies, food services, etc. Serve as a point of contact for campus community members who are seeking consultation and advice about our services for students who seem to be experiencing distress. Manage inquiries and care reports of students of concern and advise staff and faculty on how to manage complex student issues. Gather initial information, determine behavioral interventions, and develop and communicate recommendations. Investigate each case by applying professional expertise on behavioral and psychological risk factors. Establish assessment and evaluation procedures for case management. Exercise impeccable judgment regarding when to inform and consult with the Assistant Dean of Students. The Center for Support & Intervention serves as the campus expert for high-risk student behavioral interventions. Communicate sensitive and confidential matters regarding complex cases within the guidelines of FERPA.
Respondent Services
- Serve as a highly knowledgeable resource to understand and assist students in navigating administrative processes such as Student Conduct and Title IX investigations. Responsible for understanding the laws, legislation, and policy that may affect students and the campus. Demonstrate skills and ability to work with students in crisis. Collaboratively assist in navigating logistical challenges of interim suspensions or other restrictions of privileges. Provide referrals to legal counsel, including assistance with understanding and complying with protection orders.
Database Management & Research
- Manage, track, and maintain student records in Symplicity Advocate Care Module. Review student record data and provide insight and recommendations based on trends and patterns. Research best practices and national trends in non-clinical case management. Manage and identify areas of growth and learning opportunities for the Center for Support & Intervention.
Outreach & Awareness
- Design, develop, and deliver specialized training to the campus community regarding students of concern and the Center for Support & Intervention. Coordinate with various campus offices to ensure coherent integration and education of campus and community resources for individual students who have challenges with academic, mental and psychological health, conduct, financial, and social issues. Track and evaluate outreach to campus constituencies and identify areas of further training or heightened attention.
Professional Development & Other Duties as Assigned
- Identify national trends in relation to working with distressed students. Keep abreast of current literature and developments in the field of student affairs, mental health, case management. Keep abreast of federal, state, and UC Office of the President policy/ procedural changes that will affect the scope or practice of case management. Utilize campus and community resources to increase knowledge of mental health issues and health education. Participate in research; departmental and campus committees; programs; and projects as assigned.
Required Qualifications
Case Management Skill Set
- Advanced knowledge of advising and counseling techniques. Knowledge of, and/or ability to learn, common University-specific computer application programs, and knowledge of University and departmental principles and procedures involved in risk assessment and evaluating risks as to likelihood and consequences. Advanced skills in judgment and decision-making, problem solving, identifying measures of system performance, and the actions to improve performance. Strong project management, problem identification, and reasoning skills. Demonstrated ability to create training curriculum for community members working with distressed students. Knowledge of, and/or ability to learn, University and community resources to assist students in meeting mental/physical health, financial, academic, and other basic needs. Broad knowledge of, and/or ability to learn, physical and mental health care and services, crisis management/prevention, and educational outreach to students, staff, faculty, and parents. Working knowledge of, and/or ability to learn, campus behavioral intervention teams/crisis management teams.
Leadership and Program Management Skill Sets
- Advanced knowledge and/or ability to learn of student affairs/student life. Experience developing innovative ideas to solve problems. Direct experience in working with traditional/ non-traditional college aged students and college campuses. Demonstrated knowledge of FERPA, Clery Act, Title IX, ADA, and other laws and relevant policies. Advanced experience working with students in crisis and creating behavioral intervention plans, and/or case management.
Communications Skill Sets
- Advanced skill to present and convey information to students, staff, faculty, and parents in a way that each group would receive it best. Ability to identify problems, use sound judgment and reasoning to make crucial decisions autonomously. Excellent interpersonal skills including both oral and written communication, including experience conducting presentations to large and small groups. Must be able to maintain confidentiality and privacy within the bounds of Family Educational Rights and Privacy Act (FERPA).
Education/Training
- Bachelor's degree or equivalent experience and at least two years of experience in student support roles.
Preferred Qualifications
Education/Training:
- Master's degree from an accredited school in Student Affairs, Higher Education, Social Work, Counseling, or related education or mental health fields, with 2+ years post-Master's experience.
Salary & Benefits
- This is a 100% full-time (40 hrs a week) exempt career position, which is paid monthly and eligible for full UC Benefits. Annual salary is commensurate with experience within the range of $71,000.00 - $76,000.00. For information on the comprehensive benefits package offered by the University visit: https://ucnet.universityofcalifornia.edu/compensation-and-benefits/index.html
How to Apply
Please submit your cover letter and resume when applying.
Other Information
This position is eligible for 40% remote work.
Conviction History Background
This is a designated position requiring fingerprinting and a background check due to the nature of the job responsibilities. Berkeley does hire people with conviction histories and reviews information received in the context of the job responsibilities. The University reserves the right to make employment contingent upon successful completion of the background check.
Equal Employment Opportunity
The University of California is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status. For more information about your rights as an applicant see:
https://www.eeoc.gov/sites/default/files/migrated_files/employers/poster_screen_reader_optimized.pdf
For the complete University of California nondiscrimination and affirmative action policy see:
http://policy.ucop.edu/doc/4000376/NondiscrimAffirmAct
To apply, visit https://careerspub.universityofcalifornia.edu/psp/ucb/EMPLOYEE/HRMS/c/HRS_HRAM.HRS_APP_SCHJOB.GBL?Page=HRS_APP_JBPST&Action=U&FOCUS=Applicant&SiteId=21&JobOpeningId=42362&PostingSeq=1
Assyria has rightly been called the world's first military power, with an autocratic king as its supreme ruler [Archibald, pp. 90: http://books.google.com/books?id=ZBQpNEB8k8EC&printsec=frontcover&dq=isbn:1402174071#PPA90,M1], whilst in Babylonia the priesthood was the highest authority. The Assyrian dynasties were founded by successful generals; in Babylonia it was the priests whom a revolution raised to the throne. The Babylonian king remained a priest to the last, under the control of a powerful hierarchy; the Assyrian king was the autocratic general of an army, at whose side stood in early days a feudal nobility, aided from the reign of Tiglath-Pileser III onwards by an elaborate bureaucracy. His palace was more sumptuous than the temples of the gods, from which it was quite separate. The people were soldiers and little else; even the sailor belonged to the state. Hence the sudden collapse of Assyria when drained of its fighting population in the age of Ashurbanipal.
Social life in Babylonia and Assyria
The priesthood of Babylonia was divided into a great number of classes, including a medicinal class. It had a counterpart in the military aristocracy of Assyria. The army was raised by conscription; the concept of a standing army seems to have been first organized in Assyria. Successive improvements were introduced into it by the kings of the second Assyrian empire; chariots were replaced by cavalry; Tiglath-Pileser III gave the riders saddles and high boots, and Sennacherib created a corps of slingers. Tents, baggage-carts and battering-rams were carried on the march, and the "tartan" or commander-in-chief ranked next to the king.
In both countries, there was a large body of slaves; above them came the agriculturists and commercial classes, who were comparatively few in Assyria. The scribes, on the other hand, were a more important class in Assyria than in Babylonia. Both countries had their artisans, money-lenders, poets and musicians.
The houses of the people contained little furniture; chairs, tables and couches were used, and Ashurbanipal is represented as reclining on his couch at a meal while his wife sits on a chair beside him.
After death, the body was usually partially cremated, along with the objects that had been buried with it. The cemetery adjoined the city of the living, and was laid out in streets through which ran rivulets of "pure" water. Many of the tombs, built of crude brick, were provided with gardens, and there were shelves or altars with offerings to the dead. As the older tombs decayed, a fresh city of tombs arose on their ruins. It is remarkable that thus far, no cemetery older than the Seleucid or Parthian period has been found in Assyria.
References
*Archibald Henry Sayce, "Social Life among the Assyrians and Babylonians", ISBN 1402174071
Notes
See also
* History of Babylonia and Assyria:
** Sumer
** History of Sumer
** Akkadian Empire
** Gutian period
** 3rd dynasty of Ur ("Sumerian Renaissance")
** Babylonia
** Assyria
** Kings of Babylon
** Kings of Assyria
* Geography of Babylonia and Assyria
* Assyro-Babylonian culture
** Mesopotamian mythology
** Babylonian and Assyrian religion
** Babylonian law
** Babylonian literature
** Art and architecture of Babylonia and Assyria
** Social life in Babylonia and Assyria
** Cuneiform script
* Ancient Orient
* Mesopotamia
* Assyriology
** Classical authorities of Babylonia and Assyria
** Modern discovery of Babylonia and Assyria
** Chronology of the Ancient Orient
** Chronology of Babylonia and Assyria
** Chronological systems of Babylonia and Assyria
External links
* [http://encyclopaedic.net/american-encyclopedia/the-assyrians-and-babylonians.html American Encyclopedia - The Assyrians and Babylonians]
* [http://www.gutenberg.org/files/16653/16653-h/16653-h.htm Myths of Babylonia and Assyria] by Donald A. MacKenzie, available at Project Gutenberg
*http://www.newadvent.org/cathen/02007c.htm
*http://www.newadvent.org/cathen/02179b.htm
Department of Psychology, University of Alberta, Edmonton, AB, Canada
Background/Objectives: Physical function indicators, including gait velocity, stride time and step length, are linked to neural and cognitive function, morbidity and mortality. Whereas cross-sectional associations are well documented, far less is known about long-term patterns of cognitive change as related to objective indicators of mobility-related physical function.
Methods: Using data from the Victoria Longitudinal Study, a long-term investigation of biological and health aspects of aging and cognition, we examined three aspects of cognition-physical function linkages in 121 older adults. First, we examined a simple marker of physical function (3 m timed-walk) as a predictor of cross-sectional differences and up to 25-year change for four indicators of cognitive function. Second, we tested associations between two markers of gait function derived from the GAITRite system (velocity and stride-time variability) and differences and change in cognition. Finally, we evaluated how increasing cognitive load during GAITRite assessment influenced the associations between gait and cognition.
Results: The simple timed-walk measure, commonly used in clinical and research settings, was a minor predictor of change in cognitive function. In contrast, the objectively measured indicator of walking speed significantly moderated long-term cognitive change. Under increasing cognitive load, the moderating influence of velocity on cognitive change increased, with increasing variability in stride time also emerging as a predictor of age-related cognitive decline.
Conclusion: These findings: (a) underscore the utility of gait as a proxy for biological vitality and for indexing long-term cognitive change; and (b) inform potential mechanisms underlying age-related linkages in physical and cognitive function.
Cross-sectional studies have linked slowing gait velocity to poorer cognitive function in elderly samples (Martin et al., 2013), as well as to age-related pathological outcomes including mild cognitive impairment (MCI) and dementia (Hausdorff and Buchman, 2013). Cognitive processes including executive function, attention, and processing speed share the strongest associations with gait, with observed gait-cognition associations perhaps reflecting a common neural substrate: mediation by frontal brain circuits sensitive to aging pathologies including vascular and neurodegenerative diseases (Parihar et al., 2013). Similarly, increasing gait variability (within-person fluctuations in gait characteristics across steps on a computerized walkway) has been linked to lower levels of physical activity and mobility for older adults (Brach et al., 2010), increased risk of falling (Callisaya et al., 2011), and ultimately to cognitive impairment (Hausdorff and Buchman, 2013). In fact, gait disturbances and falls often serve as an index event for facilitating early detection of cognitive impairment or diagnosing dementia (Axer et al., 2010). Finally, evidence from dual-task studies underscores the important association between gait and cognitive function (Killane et al., 2014). Requiring participants to engage in a cognitive task (counting backwards by sevens from 100) while simultaneously walking negatively impacts both gait and cognition (Yogev-Seligmann et al., 2008). This dual-task cost increases with age, perhaps reflecting concerns that older adults have about falls. Such concerns may lead to increased focus on the act of walking itself, requiring attentional processes and top-down cognitive control that impairs the self-organizing dynamics of the motor system (Lövdén et al., 2008). With increasing age, sensorimotor functions including gait require an increasing number of cognitive resources.
Thus, increasing cognitive task difficulty may result in poorer motor function due to cross-domain resource competition, particularly for older adults with diminished cognitive resources (Schaefer et al., 2006).
Despite the burgeoning research interest in gait-related physical function and cognition in aging, the associations across long-term change periods remain poorly understood (Mielke et al., 2013). Recent evidence suggests that slowing gait speed precedes cognitive declines during the prodromal phase of dementia (Hausdorff and Buchman, 2013). However, few studies have examined longitudinal gait-cognition associations, particularly for community-dwelling elderly (Clouston et al., 2013). Using multi-wave data from the Victoria Longitudinal Study (VLS), we examined gait-cognition linkages guided by three research objectives. The first research objective tested whether individual differences for a simple measure of gait speed, time required to walk a distance of 3 m, predicts age differences and change in cognitive function spanning numerous waves of assessment. Extending previous cross-sectional research (Clouston et al., 2013), the second objective assessed individual differences for GAITRite-derived measures (normalized velocity, stride-time variability) as predictors of age differences and change in cognitive function. A related question addressed whether cognitive function was similarly predicted by the simple timed walk vs. the GAITRite derived measure of walking speed. The expectation is that the GAITRite system will yield a more precise measurement for use in research contexts. The third research objective tested the moderating impact of increasing cognitive load on gait performance (walk-only condition vs. 7-letter words spelled backwards) and its corresponding association with differences and change in cognitive function. The overloading of the CNS by additional task demands may not only influence the impact on various gait indicators, but gait-cognition associations as well (Rosano et al., 2007; Lövdén et al., 2008). 
Dual-task studies examining how cognitive performance is further influenced by walking and performing a task at the same time have demonstrated that higher cognitive load may hamper motor control due to cross-domain resource competition (Schaefer et al., 2006). We expected that such resource competition would also increase the predictive association of gait on long-term cognitive change.
Materials and Methods
Participants
This study uses data from the VLS, a long-term project examining biological, health and neurocognitive aspects of aging. All data collection procedures are in full compliance with prevailing institutional research board ethics guidelines. At intake, VLS participants are community-dwelling adults, aged 55–85 years, with no serious health conditions (baseline exclusionary criteria include dementia diagnosis, as well as serious cardiovascular or cerebrovascular conditions). The present study was based upon data from VLS Samples 1 (initiated in 1986) and 2 (initiated in 1992), as these provided the largest number of retest waves and the overall longest duration of archival data. The research design of the VLS calls for retest intervals of approximately 3 years, with the present study sample spanning up to eight waves and as many as 25 years of longitudinal follow-up. The GAITRite® computerized walkway was first administered in the VLS protocol for Sample 1 Wave 8 (S1W8) and Sample 2 Wave 6 (S2W6), thereby facilitating the investigation of concurrent gait function with retrospective cognitive change.
Across the combined samples, gait assessment was completed by 121 participants (78 women, 43 men) aged 75–97 years (Mage = 84.92 years; SD = 4.74). Despite the advanced age of the S1W8 and S2W6 returnees, this group exhibited select profiles of education (M = 14.99 years, SD = 3.04) and health (self-reported health relative to perfect, M = 0.98, SD = 0.83; and relative to same-aged peers, M = 0.69, SD = 0.76) on a 5-point scale (ranging from 0 = very good to 4 = very poor). Education and health background characteristics for the original S1W1 (n = 484) and S2W1 (n = 530) samples, respectively, were comparable: years of education (M = 13.42, SD = 3.09; M = 14.81, SD = 3.15), self-reported health relative to perfect (M = 0.83, SD = 0.76; M = 0.77, SD = 0.73), and self-reported health relative to same-aged peers (M = 0.63, SD = 0.71; M = 0.58, SD = 0.70). Using analysis of variance, we contrasted group differences in the baseline education and health measures for the current sample (those who returned for all waves of testing; n = 127) vs. those who attrited at any point during the study after baseline assessment (n = 887). As expected, at baseline, the final-sample returnees had more years of education (M = 14.88 vs. 14.05, F(1,1012) = 7.47, p < 0.01) and reported themselves to be in better health relative to perfect (M = 0.63 vs. 0.82, F(1,1012) = 7.44, p < 0.01), but not relative to their peers (n.s.). These group differences notwithstanding, very high levels of education and self-reported health (very good to good range) were reported for both the returnees as well as those who dropped out of the study.
Measures
Cognitive Function
Participants completed measures of perceptual speed (Digit Symbol Substitution), episodic memory (word recall), incidental memory (incidental recall of the digit symbol coding key), and semantic memory (vocabulary). Assessments of episodic and semantic memory were available across all waves for both samples, with up to 6 waves of data available for perceptual speed and incidental memory for Sample 1 (W3–W8) and Sample 2 (W1–W5).
Perceptual speed
The Digit Symbol Substitution test from the Wechsler Adult Intelligence Scale (Wechsler, 1958) was administered to index perceptual processing speed. Participants were given 90 s to transcribe as many symbols as possible into the empty boxes based on the digit–symbol associations specified in the coding key.
Episodic memory
The word recall test was based on six categorized lists of common English nouns from established norms (Howard, 1980). Each word list consisted of six words from five taxonomic categories typed on a single page in unblocked order. Participants were given 2 min to study each list and 5 min for free recall.
Incidental memory
Following completion of the digit symbol test, participants were presented with a coding key containing only the nine unique symbols. The incidental test of memory required participants to recall the number corresponding to each of the nine unique symbols based on the associations specified in the original coding key. Participants were given 90 s for recall.
Semantic memory
English vocabulary was indexed by a 54-item recognition measure adapted from the Kit of Factor Referenced Cognitive Tests (Ekstrom et al., 1976). Participants were given 15 min to complete the test.
Gait and Mobility
Timed Walk
A basic measure of walking speed was assessed reflecting the time (in seconds) required to walk a distance of 3 m, recorded using a handheld stopwatch. Participants began walking from a stationary position behind a clearly demarcated line, and proceeded in one direction until they walked beyond a second line. Participants did not decelerate at the 3 m marker, as ample space (>1.5 m) was available beyond this point.
GAITRite Computerized Walkway
The indicators of gait speed and variability were derived from a 4.88 m GAITRite® instrumented walkway (GAITRite; CIR Systems, Sparta, NJ, USA). Each sensor pad has an active area of 60 cm square and contains 2304 sensors arranged in a 121.9 × 121.9 cm grid pattern. Sensors are activated under pressure at footfall and deactivated at toe-off, enabling capture of the relative arrangement of footfalls as a function of time. Gait data from the pressure-activated sensors were sampled at 120 Hz and transferred to a computer for subsequent processing using GAITRite Platinum software (CIR Systems Inc, 2010).
Participants walked at their normal pace on the instrumented walkway, while wearing their own comfortable shoes, in a well-lit environment. No practice passes on the gait walkway were allowed prior to commencing testing. Participants completed two complete back-and-forth circuits (four total passes) of the mat for each condition; walking commenced 1.5 m prior to the mat and concluded 1.5 m beyond the mat to permit sufficient time for acceleration and deceleration. Data from the four passes for each cognitive condition were concatenated.
Gait speed
Normalized velocity from the GAITRite walkway was computed by dividing total distance traveled by the ambulation time (indexed in centimeters per second), and by then standardizing this estimate through division by the average leg length (yielding a normalized estimate in units of leg length per second) for each participant to control for individual differences in height.
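As a rough sketch, the normalization described above could be computed as follows; the function and parameter names are illustrative, not part of the GAITRite software or the VLS protocol:

```python
def normalized_velocity(distance_cm, ambulation_time_s, leg_length_cm):
    """Normalized gait velocity as described above: raw velocity (cm/s)
    divided by leg length, yielding leg lengths per second."""
    velocity_cm_s = distance_cm / ambulation_time_s  # raw walking speed
    return velocity_cm_s / leg_length_cm             # control for stature

# Illustrative values: 488 cm walkway covered in 5.0 s, 80 cm leg length
print(round(normalized_velocity(488.0, 5.0, 80.0), 2))  # 1.22
```

Dividing by leg length makes velocities comparable across participants of different heights, since taller walkers cover more ground per stride at the same effort.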
Gait speed variability
This variability estimate from the GAITRite walkway was calculated on the basis of stride time, reflecting the time elapsed (in seconds) between the initial contacts of two consecutive footfalls of the same foot. Intraindividual variability in stride time was computed as the coefficient of variation (CV: the within-person standard deviation divided by the within-person mean) to control for mean differences as a potential confound.
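For illustration, the stride-time CV could be computed like this (the stride-time values are hypothetical, not VLS data):

```python
from statistics import mean, stdev

def stride_time_cv(stride_times):
    """Coefficient of variation of stride time: within-person SD divided
    by within-person mean, controlling for mean-level differences."""
    return stdev(stride_times) / mean(stride_times)

# Hypothetical stride times (s) concatenated across walkway passes
times = [1.08, 1.12, 1.10, 1.15, 1.09]
print(round(100 * stride_time_cv(times), 1))  # CV as a percentage: 2.5
```

Using the CV rather than the raw standard deviation means a slow but steady walker is not penalized simply for having longer average stride times.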
Cognitive load condition
Participants completed two separate walking conditions on the GAITRite walkway: a walk-only (no load) condition, as well as a walk condition performed while simultaneously completing a cognitive task (load condition). For each condition, participants completed two complete back-and-forth circuits (four total passes) of the mat. The cognitive task in the load condition required participants to spell 7-letter words backwards that were equated for a grade 7 to 9 reading level. The no load condition always preceded the load condition.
Statistical Analyses
We used HLM 6.08 software (Scientific Software International, 2004) to fit linear mixed models to test the research objectives. To examine whether each of the cognitive constructs exhibited significant longitudinal changes, within-person (Level 1) models were fit for linear change as a function of time (years) in study (see equation 1). Cognitive performance for a given individual (i) at a given time (j) was modeled as a function of that individual’s performance at baseline testing (intercept centered at a representative value in the sample for Wave 1), plus his/her average individual rate of change per each additional year in the study (the slope), plus an error term.
For measures exhibiting significant change, we further examined how cognitive change spanning the up to 25-year period was related to concurrent markers of physical function (see Level 2 predictors for equation 1) including timed walk (research objective one) as well as markers of gait speed and variability (research objective two). To estimate
Level 1: Cognition_ij = β0i + β1i(Time in Study_ij) + e_ij    (1)
Level 2: β0i = γ00 + γ01(Gait) + γ02(Age) + u0i
         β1i = γ10 + γ11(Gait) + γ12(Age) + u1i
average effects for the entire sample, the Level 1 individual growth parameters for intercept (β0i) and slope (β1i) served as the to-be-predicted outcomes in the Level 2 between-person equations. Our focus concerned whether individual differences in the gait indicators were associated with cross-sectional (intercept) differences in cognitive performance or with individual differences in trajectories of cognitive change (slope). In the Level 2 equation for the intercept, a given individual’s predicted cognitive performance for each measure (β0i) was modeled as a function of cognitive performance at grand-mean-centered values of age at time of gait testing and gait (γ00), the average difference in performance for a 1-unit increase in gait (γ01) and age (γ02), plus a random effect (u0i) that estimates the variability about that sample mean holding age and gait constant. Similarly, in the Level 2 equation for the slope, we modeled the predicted linear rate of change in cognitive performance for each individual (β1i) as a function of the average cognitive change at centered values of gait and age (γ10), the average difference in cognitive change per unit increase in gait (γ11) and age (γ12), plus a random effect (u1i) reflecting variance in cognitive performance slopes independent of the other predictors. For all Level 2 models, chronological age at time of gait testing was entered as a covariate to adjust for between-person age effects. Parameters were estimated using full information maximum likelihood.
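The logic of equation (1) can be illustrated with a small simulation. The authors fit these models with HLM 6.08; the sketch below instead uses the classic two-stage "slopes-as-outcomes" approximation (per-person OLS growth curves at Level 1, then a between-person regression of the slopes at Level 2) rather than full maximum-likelihood estimation, and all variable names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, waves = 100, 6
time = np.arange(waves) * 4.0                # years in study: 0, 4, ..., 20

# Hypothetical person-level predictors, grand-mean centered as in the text
gait = rng.normal(0.0, 1.0, n_persons)       # e.g. normalized gait velocity
age = rng.normal(0.0, 5.0, n_persons)        # age at gait testing

# Simulate Level-1 trajectories: random intercepts (u0i) and slopes (u1i),
# with gait moderating the slope (the gamma11 term; true value 0.05)
true_g10, true_g11 = -0.20, 0.05
intercepts = 50.0 + rng.normal(0.0, 1.0, n_persons)
slopes = true_g10 + true_g11 * gait + rng.normal(0.0, 0.05, n_persons)
data = intercepts[:, None] + slopes[:, None] * time \
    + rng.normal(0.0, 1.0, (n_persons, waves))

# Stage 1 (Level 1): per-person OLS growth curves
b1 = np.array([np.polyfit(time, y, 1)[0] for y in data])

# Stage 2 (Level 2): regress the person-specific slopes on gait and age
X = np.column_stack([np.ones(n_persons), gait, age])
g10, g11, g12 = np.linalg.lstsq(X, b1, rcond=None)[0]
print(round(g11, 3))  # recovered gait-by-time moderation, near the true 0.05
```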
Results
We fit linear mixed models to document up to 25-year change separately for each of the four cognitive indicators. Significant age-related declines were observed for all cognitive outcomes under consideration (see Figure 1). Each additional year in the study was associated with significant cognitive declines in perceptual speed, episodic memory, incidental recall and semantic memory (see Table 1). Between-person differences in age at time of gait testing significantly moderated intercepts for digit symbol (increasing age linked to fewer correct responses) and vocabulary (increasing age associated with more words recognized); between-person age differences also moderated two slope terms, with each additional year older associated with fewer vocabulary terms and words recalled (p < 0.01).
Table 1. Change in cognitive performance spanning as many as 25 years of assessment.
To address our first research objective, we examined the basic, concurrently assessed (S1W8 and S2W6) timed-walk indicator as a between-person predictor of differences and change in cognitive function. For modeling purposes, concurrent timed walk was centered at the grand mean (M = 9.29 s; SD = 3.27). This simple measure of timed walk was not significantly associated with individual differences in cognitive function for any of the outcome measures (all two-tailed p’s > 0.10 for intercepts) independent of age at testing. Significant moderating effects were observed for up to 25-year change in two cognitive outcomes; each additional second increase (slowing) in timed walk above the grand mean was associated with a further 0.037 (SE = 0.013, p < 0.01) unit decline in digit symbol performance accuracy, as well as a further 0.011 (SE = 0.006, p < 0.05) unit decline in number of words successfully recalled.
Our second research objective examined the two GAITRite indices as predictors of individual differences and multi-year change in cognitive function for the walk-only (no load) condition. Differences in concurrent markers of gait speed and variability did not moderate cross-sectional differences in cognitive function; there were no significant effects of gait on intercept estimates for cognitive performance, controlling for individual differences in age. However, individual differences in the GAITRite assessment of gait velocity and stride time variability significantly moderated age-related change in cognitive function (see Table 2). Per additional year in the study, a slower normalized gait velocity was associated with faster cognitive decline for word recall, digit symbol accuracy (one-tailed p-value), and vocabulary, with increased stride time variability linked to faster cognitive decline for vocabulary. The moderating influence of normalized velocity on age-related cognitive change is plotted in Figure 2 for select values (simple effects) of gait speed. Across cognitive outcomes, the significant interactions (see γ11 slopes in Table 2) indicate that cognitive declines across time are more pronounced for slower gait velocities.
TABLE 2
Table 2. Change in cognitive performance for the no load condition as a function of gait speed and variability.
For our third research objective, we examined the impact of increasing cognitive load on gait-cognition associations. We replicated all aforementioned analyses conducted for the no load (walk-only) condition, and then compared key parameters of interest to assess the impact of adding cognitive load while walking on the GAITRite walkway. Similar to the no load condition findings, cross-sectional estimates of cognitive function were not associated with concurrent gait velocity; a 1 SD increase in stride-time variability was significantly related to lower intercept values in cognitive performance for word recall (2.21 fewer words recalled, p < 0.05) and vocabulary (4.01 fewer words recognized, p < 0.05). Even under increased cognitive load, the impact of gait function on individual differences in cognitive performance was modest. In contrast, gait uniformly and significantly moderated cognitive change (see Table 3). Consistent with the no-load effects of gait on change slopes (γ11) reported in Table 2, a slower gait velocity (relative to the sample average walking speed) was associated with accelerated cognitive decline for all four cognitive measures. Moreover, the magnitude of these slope estimates for the time × gait velocity interaction (γ11 estimates in Table 3) showed uniform increases under cognitive load: word recall (from 0.151 to 0.190), digit symbol correct (from 0.244 to 0.399), digit symbol incidental recall (from 0.059 to 0.109), and vocabulary (from 0.103 to 0.146).
TABLE 3
Table 3. Change in cognitive performance for the load condition as a function of select markers of gait speed and variability.
Finally, in contrast to the single significant effect observed for the no load condition, under cognitive load increasing stride-time variability was consistently associated with further increases in cognitive decline (see Table 3). Each unit increase in gait variability above the sample average was associated with increased decline for word recall (64% increase in decline relative to the sample average), digit symbol correct (67% increase), digit symbol incidental recall (145% increase), and even for the knowledge-based vocabulary measure (141% increase).
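The percentage figures above can be recovered from the slope parameters: the interaction coefficient (γ11) expressed relative to the sample-average rate of change (γ10). A sketch with hypothetical parameter values (not the actual Table 3 estimates):

```python
def pct_increase_in_decline(gamma10, gamma11):
    """Extra decline per unit increase in the gait predictor, expressed as a
    percentage of the sample-average rate of change (gamma10)."""
    return 100.0 * abs(gamma11) / abs(gamma10)

# Hypothetical values: average slope -0.25 units/year, interaction -0.16
print(round(pct_increase_in_decline(-0.25, -0.16)))  # 64
```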
Discussion
Across the up to eight waves and 25-year follow-up, significant age-related decline in cognitive function was observed, despite the select survival of the sample. Having demonstrated significant age-related cognitive decline and variance in change, we evaluated gait-related physical function predictors of this change. The first research objective tested the basic 3 m timed-walk variable as a predictor of age-related differences and change in cognitive function. Slower walking speed on the basic timed-walk variable shared a modest association with two indicators of cognitive decline (digit symbol accuracy and word recall).
In the second objective, we further tested select markers of gait velocity and variability from the GAITRite system as predictors of differences and change in cognition, and directly compared predictive patterns from the GAITRite system to the simple timed-walk task. GAITRite indicators of diminished gait velocity and increased gait variability were linked to accelerated cognitive decline across the preceding 25 years. Among the implications, the findings are consistent with claims that select gait indicators provide a snapshot of the integrity of various bodily systems (6), and that gait may prove useful as a proxy for biological vitality and its associated underpinnings for successful cognitive aging (2, 3). Further, we extended findings by Youdas et al. (2006) by directly comparing results from the simple timed measure of gait to those from the GAITRite system. Relative to the pattern of moderating effects observed for the simple timed-walk measure, the GAITRite index of normalized velocity from the no load condition yielded significant effects for the same cognitive outcomes, and also significantly moderated cognitive change for one additional cognitive outcome (vocabulary). Consistent with conclusions drawn by Youdas et al. (2006), these patterns support claims that data from computerized walkways provide a more nuanced, precise, and reliable assessment of gait that may in turn improve sensitivity for detecting various age-related or clinical outcomes (cognitive impairment, dementia, falls, death).
A third objective of the present study was to assess gait under two distinct walking conditions (a walk-only condition vs. a condition that required simultaneous performance of a cognitive task), and to test how such increases in cognitive load during GAITRite assessment influenced associations between gait and cognitive change. Increasing cognitive load impacted gait-cognition associations in several ways. First, increasing load exerted an additional negative impact on gait-cognition associations, consistent with patterns reported for dual-task research designs (Toulotte et al., 2006). Slower gait velocity was linked to cognitive decline in the no-load condition for three of four cognitive outcomes, with the magnitude of these declines amplified under increasing cognitive load. Further, whereas stride-time variability predicted decline for a single cognitive outcome (vocabulary) in the no-load condition, significant associations were observed for all cognitive outcomes under increased load. With regard to predicting cognitive decline, the pattern of increased associations for gait speed and the emerging importance of gait variability under increased cognitive load is consistent with explanations based on resource competition (Schaefer et al., 2006). Specifically, dual-task studies have examined how performance is influenced by walking and performing a cognitive task simultaneously. Among the explanations, increased cognitive load may hamper basic motor control due to cross-domain (physical function vs. cognitive) resource competition (Woollacott and Shumway-Cook, 2002; Schaefer et al., 2006). Attentional control is of central importance for gait and the rhythmic stepping mechanism (Lindenberger et al., 2000; Woollacott and Shumway-Cook, 2002; Dubost et al., 2006).
With increasing age, available cognitive resources (particularly executive processes and attentional control) decline, and any further demands placed on these limited resources are likely to negatively impact gait performance (Woollacott and Shumway-Cook, 2002; Dubost et al., 2006) and subsequently increase the magnitude of gait-cognition associations (Killane et al., 2014). Previous research has documented such links between increased cognitive demands and corresponding impacts on age-graded impairments in physical function; the impact is greatest for older adults because physical functions become less automatic and more effortful with age, placing greater demands on diminishing cognitive resources (Lövdén et al., 2008). Such explanations are consistent with our findings.
In future research, it will be important to directly explore potential mechanisms underlying why better gait function is protective against cognitive decline. One promising avenue of study could involve cognitive reserve—a concept employed to help explain the vast individual differences in susceptibility to normative and pathological age-related cognitive decline (Stern, 2012). Recent research has explored the critical question as to why some individuals are more cognitively resilient than others, despite the presence of underlying brain changes. Although cognitive reserve is often indexed using a single indicator (years of education), multiple-indicator methods have also been employed (Opdebeeck et al., 2016). In one recent study by Grotz et al. (2017), cognitive reserve was operationalized as a multifaceted construct weighted by key indicators from across the lifespan including years of education, last occupation, and current participation in leisure activities. This weighted index of cognitive reserve more fully attenuated the age-cognition association, relative to education alone; based on this pattern, the weighted index was deemed to be the better proxy of cognitive reserve. Notably, Grotz et al. (2017) suggested that a negative load index, composed of risk indicators such as chronic health conditions or body mass index, could further identify factors that negatively influence levels of cognitive reserve. Of direct relevance to the present study, diminished gait function may represent one such risk indicator with implications for reserve. In future studies, identifying key underlying indicators of cognitive reserve will be invaluable for identifying those who may be protected against various conditions from MCI (Franzmeier et al., 2016) to depression (Freret et al., 2015) with increasing age.
Among the study limitations, selective longitudinal survival likely influenced the results. Simple comparisons showed that the survivors reported slightly more years of education and better absolute self-reported health at baseline relative to their non-surviving cohort members (although levels were high for both groups). Further, despite the possible survival-related advantages, significant gait-cognition associations were observed for all cognitive outcomes, including for measures of incidental recall and semantic memory that are typically resistant to age-related influences. On balance, the observed patterns likely underestimate the true magnitude of associations between gait and cognitive decline. The sample size for the concurrent gait predictors and associated analytic constraints represent additional study limitations. Given the recent addition of the GAITRite assessment to the VLS protocol, our analyses were restricted to the examination of concurrent gait differences in relation to retrospective (prior) cognitive decline across as many as 25 years. We acknowledge the retrospective limitation of our design, but note that evidence of decline from a prior level of functioning represents important clinical detail, especially among higher functioning individuals, as well as the successful use of such designs in previous research (MacDonald et al., 2004). Despite the modest sample size for gait, the large number of assessments (n = 6–8) increased the statistical power to detect cognitive change (Rast and Hofer, 2014), and helped to offset this limitation.
In sum, findings from the present study represent a conservative first step for studying prospective gait-cognitive change relations. Contrasting prediction patterns for timed walk vs. normalized velocity from the first two research objectives extends recent findings underscoring the importance of objective gait assessment for use in clinical settings (Youdas et al., 2006) to research contexts as well, with important implications for reliable gait measurement and prospective identification of those at risk of various age-related outcomes. Findings from the second research objective extend gait-cognition associations longitudinally, thus addressing concerns that previously reported cross-sectional links are a spurious byproduct of between-subject age confounds. Patterns from the third research objective exploring the impact of dual-tasking on the moderating influence of gait on long-term cognitive change are consistent with conclusions drawn from cross-sectional studies. Dual tasking augments the moderating influence of gait on cognitive change, a finding consistent with greater attentional demands for maintaining gait and balance with increasing age (Lindenberger et al., 2000; Lövdén et al., 2008). Walking itself places ever-increasing demands on available cognitive processes; age-related constraints on such cognitive resources (particularly attentional processes) result in resource competition impacting motor control and coordination for even basic assessments of physical function (Woollacott and Shumway-Cook, 2002). 
Further, the increasing importance of gait variability as a predictor of cognitive change under dual-tasking conditions is consistent with recent findings that document increases in age-cognitive variability associations for an interference (but not control) condition of an executively demanding task, with the degree of attenuation of the age-cognitive variability association most pronounced after partialing estimates of dopamine binding for select regions in the cingulo-fronto-parietal (dorsal attention) network (MacDonald et al., 2012). Collectively, these findings underscore the importance of attentional processes as modulators of age-related increases in variability for the domains of physical function as well as cognition, and suggest that common mechanisms (e.g., age-related losses in dopamine binding potential) may underlie increases in both gait and cognitive variability (MacDonald et al., 2012; Rosso et al., 2013), as well as the predictive importance of variability indicators in their respective literatures. Looking to the future, research designs that incorporate various domains (cognition, gait, neural function) and indicators (mean, variability) will be best positioned for investigating the complex interrelations among aging, CNS, and physical and cognitive function, and their contributions to numerous age-related processes and outcomes—spanning successful aging to frailty and death (Amboni et al., 2003; Rosso et al., 2013).
Ethics Statement
This research was conducted under full, active and continuous human ethics approval from prevailing Institutional Review Boards. The Human Research Ethics Board (HREB) of the University of Victoria approved this study. Written informed consent was obtained from all participants.
Author Contributions
SWSM and RAD developed the theoretical research focus; SWSM analyzed the data and wrote the manuscript. SH, JAL, CAD, DWRH, PWHB, TVL, RC and RAD contributed to theoretical discussions and refinements, as well as provided detailed suggestions for revision of manuscript drafts.
Funding
This research was supported by a grant from the National Institutes of Health/National Institute on Aging (R01 AG008235) to RAD, who also acknowledges support from the Canada Research Chairs program. SWSM was supported by grants from the National Institutes of Health/National Institute on Aging (R21 AG045575) and the Natural Sciences and Engineering Research Council of Canada (418676-2012).
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
DNA Replication, also known as Semi-Conservative Replication, is the process by which DNA is essentially “doubled”. It is an important process that takes place within the dividing cell.
In this article, we shall look briefly at the structure of DNA, at the precise steps involved in replicating DNA (initiation, elongation and termination), and the clinical consequences that can occur when this goes wrong.
DNA Structure
DNA is made up of millions of nucleotides; these are molecules composed of a deoxyribose sugar, with a phosphate and a base (or nucleobase) attached to it. These nucleotides are attached to each other in strands via phosphodiester bonds, to form a ‘sugar-phosphate backbone’. The bond formed is between the third carbon atom on the deoxyribose sugar of one nucleotide (henceforth known as the 3’) and the fifth carbon atom of another sugar on the next nucleotide (known as the 5’).
There are two strands in total, running in opposite (antiparallel) directions to each other. These are attached to each other throughout the length of the strand through the bases on each nucleotide. There are 4 different bases associated with DNA: Cytosine, Guanine, Adenine, and Thymine. In normal DNA strands, Cytosine binds to Guanine, and Adenine binds to Thymine. The two strands together form a double helix.
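The base-pairing and antiparallel rules above are easy to express in code. A minimal sketch (standard Watson-Crick pairing; the function name is illustrative):

```python
# Watson-Crick pairing: Cytosine <-> Guanine, Adenine <-> Thymine
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Return the partner of a 5'->3' strand, itself read 5'->3'.

    The reversal reflects the fact that the two strands run antiparallel.
    """
    return "".join(PAIR[base] for base in reversed(strand))

print(complementary_strand("ATGCCG"))  # CGGCAT
```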
Stages of DNA replication
DNA replication can be thought of in three stages; Initiation, Elongation, Termination
1.Initiation
DNA synthesis is initiated at particular points within the DNA strand known as ‘origins’, which are specific DNA sequences. These origins are targeted by initiator proteins, which go on to recruit more proteins that aid the replication process, forming a replication complex around the DNA origin. There are multiple origin sites; when replication begins, the Y-shaped structures where the DNA is unwound at these sites are referred to as Replication Forks.
Within the replication complex is the enzyme DNA Helicase, which unwinds the double helix and exposes each of the two strands so that they can be used as templates for replication. It does this by using the energy released from ATP hydrolysis to break the hydrogen bonds between the nucleobases, thereby separating the two strands.
DNA can only be extended by the addition of a free nucleotide triphosphate to the 3’-end of a chain. Because the two strands of the double helix run antiparallel, yet DNA synthesis proceeds in only one direction, the two new strands grow in very different ways (covered below in Elongation).
DNA Primase is another enzyme that is important in DNA replication. It synthesises a small RNA primer, which acts as a ‘kick-starter’ for DNA Polymerase. DNA Polymerase is the enzyme that is ultimately responsible for the creation and expansion of the new strands of DNA.
2.Elongation
Once the DNA Polymerase has attached to the original, unzipped strands of DNA (i.e. the template strands), it is able to start synthesising new DNA to match the templates. This enzyme can only extend a primer by adding free nucleotides to the 3’-end of the growing strand, which creates a difficulty: along one of the templates, the new strand would have to grow from its 5’-end, which DNA Polymerase cannot do.
One of the templates is read in a 3’ to 5’ direction, which means that the new strand will be formed in a 5’ to 3’ direction (as the two strands are antiparallel to each other). This newly formed strand is referred to as the Leading Strand. Along this strand, DNA Primase only needs to synthesise an RNA primer once, at the beginning, to help initiate DNA Polymerase to continue extending the new DNA strand. This is because DNA Polymerase is able to extend the new DNA strand normally, by adding new nucleotides to the 3’ end of the new strand (how DNA Polymerase usually works).
However, the other template strand is antiparallel, and is therefore read in a 5’ to 3’ direction, meaning the new DNA strand being formed would run in a 3’ to 5’ direction. This is an issue, as DNA Polymerase cannot extend in this direction. To counteract this, DNA Primase synthesises a new RNA primer approximately every 200 nucleotides along the template, each of which gives DNA Polymerase a fresh 3’-end to extend from. Because synthesis must repeatedly wait for new primers, it is delayed relative to the other strand, and this new strand is thus called the Lagging Strand.
The leading strand is one complete strand, while the lagging strand is not. It is instead made up of multiple ‘mini-strands’, known as Okazaki fragments. These fragments arise because new primers must repeatedly be synthesised, producing multiple short strands, as opposed to the single initial primer used with the leading strand.
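The contrast in priming between the two strands can be sketched numerically. The only figure taken from the text is the roughly 200-nucleotide primer spacing on the lagging strand; the function name and example lengths are illustrative:

```python
def primers_needed(template_length, primer_spacing=200, leading=False):
    """Count the RNA primers needed to copy a template of the given length.

    The leading strand needs a single primer; the lagging strand needs one
    per Okazaki fragment, i.e. one per ~primer_spacing nucleotides.
    """
    if leading:
        return 1
    # Ceiling division: a shorter final fragment still needs its own primer
    return -(-template_length // primer_spacing)

print(primers_needed(1000, leading=True))  # leading strand: 1 primer
print(primers_needed(1000))                # lagging strand: 5 Okazaki fragments
```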
3.Termination
The process of expanding the new DNA strands continues until there is either no more DNA template left to replicate (i.e. at the end of the chromosome), or two replication forks meet and subsequently terminate. The meeting of two replication forks is not regulated and happens randomly along the course of the chromosome.
Once DNA synthesis has finished, it is important that the newly synthesised strands are joined and stabilized. With regards to the lagging strand, two enzymes are needed to achieve this: RNase H removes the RNA primer at the beginning of each Okazaki fragment, and DNA Ligase joins the fragments together, creating one complete strand.
Now with two new strands being finally finished, the DNA has been successfully replicated, and will just need other intrinsic cell systems to ‘proof-read’ the new DNA to check for any errors in replication, and for the new single strands to be stabilized.
Clinical Relevance – Sickle Cell Anaemia
Sickle Cell Anaemia is an autosomal recessive condition caused by a single base substitution, in which only one base is changed for another. In some cases this results in a ‘silent mutation’, in which the encoded protein is unaffected; however, in diseases such as Sickle Cell Anaemia it results in the gene coding for an altered protein.
In this case an adenine base is swapped for a thymine base in one of the genes coding for haemoglobin; this results in glutamic acid being replaced by valine. When this is translated into a polypeptide chain, the properties the protein possesses are radically changed, as glutamic acid is hydrophilic whereas valine is hydrophobic. This hydrophobic region gives haemoglobin an abnormal structure that can cause blockages of capillaries, leading to ischaemia and potentially necrosis of tissues and organs – this is known as a vaso-occlusive crisis.
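The substitution described above can be traced at the codon level: in the β-globin gene, the coding-strand codon GAG (glutamic acid) becomes GTG (valine) when the middle adenine is replaced by thymine. The helper function and minimal codon table below are illustrative:

```python
# Minimal codon table covering only the two codons needed here
CODON_TO_AMINO_ACID = {"GAG": "Glutamic acid", "GTG": "Valine"}

def substitute(codon, position, new_base):
    """Apply a single-base substitution to a codon (position is 0-based)."""
    bases = list(codon)
    bases[position] = new_base
    return "".join(bases)

normal = "GAG"                       # coding-strand codon in beta-globin
mutant = substitute(normal, 1, "T")  # adenine -> thymine, as in sickle cell
print(CODON_TO_AMINO_ACID[normal], "->", CODON_TO_AMINO_ACID[mutant])
# Glutamic acid -> Valine
```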
These crises are typically managed with a variety of pain medication, including opioids and NSAIDs depending on the severity. Red blood cell transfusions may be required in emergencies, for example if the blockage occurs in the lungs.
Hello! We're glad you're here.
The City of Zephyrhills Comprehensive Plan is getting an update and your input is needed! The City aims to collect thousands of ideas from residents, business owners, workers, and other community stakeholders for use in creating a Citywide Vision and Goals and Strategies to help make the vision come to life. Using words and illustrations, the vision will paint a picture of what the community wants Zephyrhills to look like in the future and set a defined direction for PlanZephyrhills2035.
A significant piece of the plan update is engaging with people from all parts of Zephyrhills to hear about community needs and ideas for making Zephyrhills an even better place to live, work, and play. This summer, city planners hosted seven Community Conversations in different parts of the city. Approximately 100 participants shared their thoughts about Zephyrhills' quality of life, issues and challenges, and aspirations for the future. The Community Conversations focused on three questions that can be found in the "Tell us! Survey" tab below. The survey is currently open. We'd love to hear from you!
About PlanZephyrhills2035
The City of Zephyrhills is updating the City's adopted 2025 Comprehensive Plan. When the planning process is completed later this year, the updated plan will have a new name and planning timeframe. Put together, we get PlanZephyrhills2035.
The State's Community Planning Act requires every city and county in Florida to prepare and adopt a comprehensive plan. A comprehensive plan includes goals, objectives, and policies that are based on data, analysis, and public input to guide growth and development as well as conservation of valued community assets and natural resources. The goals, objectives, and policies set the foundation for land development regulations, infrastructure investments, capital improvements budgeting, and other government programs. The comprehensive plan considers population growth and related needs for public facilities and services including water, sewer, roads, bike paths, sidewalks, parks, and more.
Community input is very important to updating the comprehensive plan and there will be several opportunities for residents, business owners, and others who are part of our southeast Pasco community to participate. With so much happening in our city, county, region, and world, this is a great moment to get involved and help shape our community's bright future.
Frequently Asked Questions
PlanZephyrhills 2035 is a comprehensive planning initiative of the City of Zephyrhills. The initiative will result in a new comprehensive plan to guide city decision making through the year 2035.
The City reviews applicable City plans, plans of other governments, and state laws to learn about specific comprehensive plan requirements. To understand how the City has changed over the past decade and might change in the future, we look at current conditions and trends information such as demographics, land use patterns, and infrastructure availability/performance. We also engage with residents and other stakeholders to learn about community needs, issues, and opportunities that could be addressed in the comprehensive plan goals, objectives, and policies.
Posted November 29, 2020 at 9:03 am (#8836) by Lisa Julian (Guest)
Unresolved Tensions
Bolivia Past and Present
by John Crabtree, Laurence Whitehead
- Format: paperback, 288 pages
- Author: John Crabtree, Laurence Whitehead
- Release date: September 20, 2008
- Language: english
- Publisher: University of Pittsburgh Press
- ISBN: 9780822960065 (0822960060)
About The Book
The landslide election of Evo Morales in December 2005 pointed toward a process of accelerated change in Bolivia, forging a path away from globalization and the neoliberal paradigm in favor of greater national control and state intervention. This in turn shifted the power relations of Bolivia’s internal politics-beginning with greater inclusion of the indigenous population-and altered the nation’s foreign relations. Unresolved Tensions engages this realignment from a variety of analytical perspectives, using the Morales election as a lens through which to reassess Bolivia’s contemporary political reality and its relation to a set of deeper historical issues.
This volume brings together an expert group of commentators and participants from within the Bolivian political arena to offer diverse perspectives and competing views on issues of ethnicity, regionalism, state-society relations, constitutional reform, economic development, and globalization. In this way, the contributors seek to reassess Bolivia’s past, present, and future, consider the ways in which the nation’s historical developments flow from these deeper currents, and assess the opportunities and challenges that arise within the new political context.
Over the past weekend, poor meteorological conditions contributed to a severe accumulation of air pollution in Beijing and hundreds of other cities in northern and eastern China. The extreme pollution was the worst in recent memory, yielding hundreds of media reports both within China and around the world. In addition, tens of millions took to Twitter and China’s Twitter-like service, Weibo, to post their thoughts and complaints. Unfortunately, such extreme pollution accumulation episodes are common in Beijing. And even when meteorological conditions are good, average pollution levels in Beijing are still unacceptably high. Until China takes critical steps towards reducing emissions, poor average air quality and occasional “crazy bad” episodes will continue.
Vehicles are a critical source to control. Vehicles are typically by far the largest source of human exposure to air pollution in densely packed urban areas. Plus, the contribution of vehicle emissions to air pollution in China is increasing as the population of motor vehicles in Beijing and around China continues to grow rapidly. Implementing stringent controls to mitigate the pollution impacts of China’s motor vehicles must be a key priority in parallel with controlling other sources such as factories and power plants.
Moreover, such action needs to take place at the national level. Because vehicles (especially trucks and long-distance buses) travel in and out of cities, only stringent national-level regulations can ensure that all vehicles are controlled effectively no matter where they travel. In recent years, Beijing has made a series of impressive steps towards controlling pollution within its own boundaries (e.g. the most stringent vehicle standards in the country, the cleanest fuel quality standards in the country, scrapping >500,000 old polluting vehicles over the past two years, and more). However, the city still struggles to improve air quality because perhaps 34%-70% of Beijing’s pollution is regional, coming from dirty vehicles and industrial sources polluting in the surrounding provinces.
Short-term actions to control motor vehicle emissions in China. Two simple steps could make a huge and near-term difference in improving air quality in Beijing and throughout China, while simultaneously demonstrating the new Chinese government’s commitment to reducing pollution emissions:
- Immediately issue new fuel quality standards with supporting fiscal policies to reduce nationwide diesel sulfur levels to below 10 parts-per-million (ppm). Because high sulfur levels in fuel can poison advanced emission control technologies, improving fuel quality, especially reducing sulfur levels, is a critical prerequisite to introducing more stringent vehicle tailpipe emission standards. In 2011, China’s State Council announced that preferential fiscal policies would be utilized to encourage the supply of higher quality fuels nationwide. However, these fiscal policies have not yet been issued; a new fuel quality standard is stalled in the review phase; and nationwide diesel sulfur levels continue to stagnate at unacceptable levels of 350 ppm or higher. Resolving the fuel quality issue is a critical step to facilitate continued progress in vehicle emission control in China.
- Ensure that the China IV truck and bus emission standards are implemented this year without further delays. China’s next stage nationwide tailpipe emission standard for trucks and buses, called “China IV” (equivalent to “Euro IV”), aims to cut emissions of PM and NOx from diesel vehicles by 80% and 30%, respectively. However, because these vehicles need to be fueled with higher quality fuel which is not yet supplied nationwide, MEP has twice delayed the introduction of these standards across China. The current implementation date is July 1, 2013. In parallel with resolving the fuel quality issue, China should commit to introducing these standards without any additional delays.
Medium and long-term action to control emissions from motor vehicles. The aforementioned short-term, immediate actions will not be enough to solve China’s long-term motor vehicle emissions problems. Last month, China laid out a number of impressive medium-to-long-term regional air quality improvement actions in its “12th Five-Year Plan for Air Pollution Prevention and Control in Key Regions.” While the plans outlined in the document represent significant progress, the plan does not call for the most important step China can make towards long-term control of vehicle emissions: establishing a clear nationwide timeline for the introduction of global best-practice “China VI/Euro VI” vehicle tailpipe emission standards. Only with the introduction of these standards will diesel trucks and buses – some of the most polluting vehicles on the road – be required to install particulate filters to reduce >99% of PM2.5 particle emissions. Filters belong on cars, not people.
Monitoring and reporting: a recent success story, but only the first steps. In the fall of 2011, public outcry over a series of heavy pollution episodes in Beijing was fanned by social media into enormous public pressure on Chinese authorities to respond. They did. In February 2012, China’s Ministry of Environmental Protection (MEP) issued two major new regulations: a revision to the ambient air quality standards to include PM2.5, and a new definition of China’s Air Quality Index. By the end of the year, China had completed and begun operating a network of real-time PM2.5 monitors in 74 cities throughout the country. The Chinese government deserves praise for these important steps towards air pollution data transparency.
However, air quality monitoring and reporting are only first steps. Now the challenge is how to achieve rapid and significant emissions reductions in order to improve urban air quality. The above three steps – rapid improvement of fuel quality, introduction of China IV standards for trucks and buses this year, and establishment of a clear timeline for early introduction of China VI – will make huge differences in reducing toxic air pollution in Beijing and throughout China. In fall 2011, the public debate led directly to new standards on air quality monitoring and reporting. It’s time now to turn the current pressure towards the more fundamental issues of how and when to cut emissions themselves.
Membership is open to anyone who believes in the mission and purposes of Parent Teacher Association. Individual members may belong to any number of PTAs and pay dues in each. Every person who joins a local PTA® automatically becomes a member of both the state and national PTAs.
Together we are a powerful voice for children. With your help, we can continue to work toward PTA's goal of a quality education and nurturing environment for every child.
PTA Mission: “To make every child’s potential a reality by engaging and empowering families and communities to advocate for all children.”
PTA Values:
• Collaboration: We work in partnership with a wide array of individuals and organizations to accomplish our agreed-upon goals.
• Commitment: We are dedicated to promoting children’s health, well-being, and educational success through strong parent, family, and community involvement.
• Accountability: We acknowledge our obligations. We deliver on our promises.
• Respect: We value our colleagues and ourselves. We expect the same high quality of effort and thought from ourselves as we do from others.
• Inclusivity: We invite the stranger and welcome the newcomer. We value and seek input from as wide a spectrum of viewpoints and experiences as possible.
• Integrity: We act consistently with our beliefs. When we err, we acknowledge the mistake and seek to make amends.
National PTA® Diversity and Inclusion Policy
PTAs everywhere must understand and embrace the uniqueness of all individuals, appreciating that each contributes a diversity of views, experiences, cultural heritage/traditions, skills/abilities, values and preferences. When PTAs respect differences yet acknowledge shared commonalities uniting their communities, and then develop meaningful priorities based upon their knowledge, they genuinely represent their communities. When PTAs represent their communities, they gain strength and effectiveness through increased volunteer and resource support.
The recognition of diversity within organizations is valuing differences and similarities in people through actions and accountability. These differences and similarities include age, ethnicity, language and culture, economic status, educational background, gender, geographic location, marital status, mental ability, national origin, organizational position and tenure, parental status, physical ability, political philosophy, race, religion, sexual orientation, and work experience.
Therefore PTAs at every level must:
Download the full Diversity and Inclusion Policy.
Types of PTAs
Because of its connections to the state and national PTAs, the local PTA is a valuable resource to its school community with:
1. Access to programs to benefit children, youth, and their families
2. Recognition and size to influence the formulation of laws, policies, and practices—educational or legislative.
COVID-19 Update: March 10/2020 – Italy is on Lockdown
Italy on Nation Wide Lockdown
In Italy on Monday night, the country was put on lockdown. This decision comes as the number of COVID-19 cases jumps upward.
The Centre for Disease Prevention and Control reports, “Italy today expanded its COVID-19 lockdown to include the whole country, affecting about 60 million people, as the World Health Organization (WHO) today said the threat of a pandemic from the COVID-19 virus is very real, signaling a tone of increased urgency.
“Italy’s announcement marks the first time a whole country has been placed on lockdown and comes on the heels of 1,797 new cases today and quickly rising numbers in other European countries. Over the weekend, Italian officials had announced a lockdown for Lombardy region and 14 provinces in other regions.”
This move by Italy comes in contravention of the recommendations of the World Health Organization.
Should you cancel travel plans due to concerns over COVID-19? Across Canada and globally the situation appears extremely varied.
In Canada the risk remains low according to officials. However many national and international events globally are being cancelled. Is this a massive over-reaction? Or is it safety first?
The World Health Organization statement on International Travel
“Affected areas” are considered those countries, provinces, territories or cities experiencing ongoing transmission of COVID-19, in contrast to areas reporting only imported cases. As of 27 February 2020, although China, particularly the Province of Hubei, has experienced sustained local transmission and has reported by far the largest number of confirmed cases since the beginning of the outbreak, lately the situation in China showed a significant decrease in cases. At the same time, an increasing number of countries, other than China, have reported cases, including through local transmission of COVID-19. As the epidemic evolves, it will be expected that many areas may detect imported cases and local transmission of COVID-19. WHO is publishing daily situation reports on the evolution of the outbreak.
The outbreaks reported so far have occurred primarily within clusters of cases exposed through close-contacts, within families or special gathering events. COVID-19 is primarily transmitted through droplets from, and close contact with, infected individuals. Control measures that focus on prevention, particularly through regular hand washing and cough hygiene, and on active surveillance for the early detection and isolation of cases, the rapid identification and close monitoring of persons in contacts with cases, and the rapid access to clinical care, particularly for severe cases, are effective to contain most outbreaks of COVID-19.
Recommendations for International Travel
WHO continues to advise against the application of travel or trade restrictions to countries experiencing COVID-19 outbreaks.
In general, evidence shows that restricting the movement of people and goods during public health emergencies is ineffective in most situations and may divert resources from other interventions. Furthermore, restrictions may interrupt needed aid and technical support, may disrupt businesses, and may have negative social and economic effects on the affected countries. However, in certain circumstances, measures that restrict the movement of people may prove temporarily useful, such as in settings with few international connections and limited response capacities.
Travel measures that significantly interfere with international traffic may only be justified at the beginning of an outbreak, as they may allow countries to gain time, even if only a few days, to rapidly implement effective preparedness measures. Such restrictions must be based on a careful risk assessment, be proportionate to the public health risk, be short in duration, and be reconsidered regularly as the situation evolves.
Travel bans to affected areas or denial of entry to passengers coming from affected areas are usually not effective in preventing the importation of cases but may have a significant economic and social impact. Since WHO declaration of a public health emergency of international concern in relation to COVID-19, and as of 27 February, 38 countries have reported to WHO additional health measures that significantly interfere with international traffic in relation to travel to and from China or other countries, ranging from denial of entry of passengers, visa restrictions or quarantine for returning travellers. Several countries that denied entry of travellers or who have suspended the flights to and from China or other affected countries, are now reporting cases of COVID-19.
Temperature screening alone, at exit or entry, is not an effective way to stop international spread, since infected individuals may be in incubation period, may not express apparent symptoms early on in the course of the disease, or may dissimulate fever through the use of antipyretics; in addition, such measures require substantial investments for what may yield little benefit. It is more effective to provide prevention recommendation messages to travellers and to collect health declarations at arrival, with travellers’ contact details, to allow for a proper risk assessment and a possible contact tracing of incoming travellers.
Recommendations for international travellers
It is prudent for travellers who are sick to delay or avoid travel to affected areas, in particular for elderly travellers and people with chronic diseases or underlying health conditions.
General recommendations for personal hygiene, cough etiquette and keeping a distance of at least one metre from persons showing symptoms remain particularly important for all travellers. These include:
Perform hand hygiene frequently, particularly after contact with respiratory secretions. Hand hygiene includes either cleaning hands with soap and water or with an alcohol-based hand rub. Alcohol-based hand rubs are preferred if hands are not visibly soiled; wash hands with soap and water when they are visibly soiled;
Cover your nose and mouth with a flexed elbow or paper tissue when coughing or sneezing and disposing immediately of the tissue and performing hand hygiene;
Refrain from touching mouth and nose;
A medical mask is not required if exhibiting no symptoms, as there is no evidence that wearing a mask – of any type – protects non-sick persons. However, in some cultures, masks may be commonly worn. If masks are to be worn, it is critical to follow best practices on how to wear, remove and dispose of them and on hand hygiene after removal (see Advice on the use of masks)
Travellers returning from affected areas should self-monitor for symptoms for 14 days and follow national protocols of receiving countries. Some countries may require returning travellers to enter quarantine. If symptoms occur, such as fever, or cough or difficulty breathing, travellers are advised to contact local health care providers, preferably by phone, and inform them of their symptoms and their travel history. For travellers identified at points of entry, it is recommended to follow WHO advice for the management of travellers at points of entry. Guidance on treatment of sick passengers on board of airplanes is available on ICAO and IATA websites. Key considerations for planning of large mass gathering events are also available on WHO’s website. Operational considerations for managing COVID-19 cases on board of ships has also been published.
General recommendations to all countries
Countries should intensify surveillance for unusual outbreaks of influenza-like illness and severe pneumonia and monitor carefully the evolution of COVID-19 outbreaks, reinforcing epidemiological surveillance. Countries should continue to enhance awareness through effective risk communication concerning COVID-19 to the general public, health professionals, and policy makers, and to avoid actions that promote stigma or discrimination. Countries should share with WHO all relevant information needed to assess and manage COVID-19 in a timely manner, as required by the International Health Regulations (2005).
Countries are reminded of the purpose of the International Health Regulations to prevent, protect against, control and provide a public health response to the international spread of disease in ways that are commensurate with and restricted to public health risks, and which avoid unnecessary interference with international traffic and trade. Countries implementing additional health measures which significantly interfere with international traffic are required to provide to WHO, within 48 hours of implementation, the public health rationale and relevant scientific information for the measures implemented. WHO shall share this information with other States Parties. Significant interference generally means refusal of entry or departure of international travellers, baggage, cargo, containers, conveyances, goods, and the like, or their delay, for more than 24 hours.
WHO continues to engage with its Member States, as well as with international organizations and industries, to enable implementation of travel-related health measures that are commensurate with the public health risks, are effective and are implemented in ways which avoid unnecessary restrictions of international traffic during the COVID-19 outbreak.
NetNewsledger.com or NNL offers news, information, opinions and positive ideas for Thunder Bay, Ontario, Northwestern Ontario and the world. NNL covers a large region of Ontario, but are also widely read around the country and the world.
TECHNICAL FIELD
BACKGROUND
SUMMARY OF THE INVENTION
0001 The present invention relates to performance of information networks. In particular the present invention relates to statistical measurements of performance characteristics of an information network.
0002 Internet web sites continue to become more sophisticated and offer a wider variety of media for a user to access. With this trend, users have become more demanding of quick, high quality internet experiences. As such, to keep up with users’ demands, it has become increasingly important for the providers of internet content to be able to monitor and troubleshoot internet performance issues to both avoid degraded performance and provide improved performance.
0003 Given this, systems have been developed for measuring relevant network parameters to evaluate network performance and help troubleshoot network issues which might degrade network performance. Generally, such systems utilize computer servers deployed on a network of interest to measure network performance parameters. Such computer servers are generally referred to as data collection agents (DCAs). A DCA generally connects to a device in the network about which a measurement is desired and takes one or more measurements of one or more predetermined metrics. The DCA then typically stores the results of the measurement either locally or in a remote database. The stored measurements can then be called up and reviewed by a user who accesses the agent.
0004 Such systems can typically measure metrics related to either Universal Resource Locator (URL) objects (such as a web page located on a server on the network) or streaming media objects. URL objects and streaming media objects are collectively referred to herein as network services. With respect to URL objects, such metrics can include, but are not limited to:
0005 End-to-End Time (Seconds): The time taken from the moment a user clicks on a link to the instant the page is fully downloaded and displayed. It encompasses the collection of all objects making up a page including, but not limited to, third party content on off-site servers, graphics, frames, and redirections.
0006 Throughput (KB/Sec): The amount of data streamed back to the user and how long it took (in kilobytes per second). The calculation is based on adding all data segments returned (for example, but not limited to, the body of HTML documents and images) and dividing that by the total time it took to return that part of the data. It is to be understood that browsers and servers are requesting objects in parallel so throughput does not represent the limit of a Web server in returning data.
0007 DNS Lookup (Seconds): The time it takes for the browser to turn the text based hostname (e.g., www.yahoo.com) into an IP address (207.221.189.100).
0008 Connect Time (Seconds): The time it takes to set up a network connection from the end-user's browser to a web site. A web page is transferred over this connection and many are setup for each page.
0009 Request Time (Seconds): The time it takes to send a request from a user's browser to a server. This is a relevant amount of time if you are submitting a large form (e.g. a message on an email service), or uploading a file (e.g. an attachment to a message on a discussion board). It reflects the ability of a server to accept data.
0010 Response Time (Seconds): The time it takes for a server to respond with content to the browser. Preferably, this measurement is taken by waiting until the first byte of content is returned to the browser.
0011 Teardown Time (Seconds): The time it takes for the browser and server to disconnect from each other.
0012 Download Time (Seconds): The time for the page download from the start of the first object to the end of the last object.
0013 The unit in parenthesis following the name of the metric is the unit in which the measurement is generally taken and recorded.
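As a rough illustration, several of the URL-object metrics just listed can be derived from per-phase timings captured by a data collection agent. The phase names and dictionary layout below are assumptions invented for the sketch, not a real DCA schema:

```python
# Hypothetical sketch: deriving URL-object metrics from raw phase timings.
# Phase names ("dns_lookup", "connect", ...) are illustrative assumptions.

def url_metrics(phases, bytes_received):
    """phases: mapping of download phase -> duration in seconds."""
    download_time = sum(phases.values())  # start of first object to end of last
    # Throughput (KB/sec): total data returned divided by total transfer time.
    throughput_kb_s = (bytes_received / 1024.0) / download_time if download_time else 0.0
    return {
        "dns_lookup_s": phases.get("dns_lookup", 0.0),
        "connect_s": phases.get("connect", 0.0),
        "request_s": phases.get("request", 0.0),
        "response_s": phases.get("response", 0.0),
        "teardown_s": phases.get("teardown", 0.0),
        "download_time_s": download_time,
        "throughput_kb_s": throughput_kb_s,
    }

sample = {"dns_lookup": 0.05, "connect": 0.10, "request": 0.02,
          "response": 0.33, "teardown": 0.01}
metrics = url_metrics(sample, bytes_received=261_120)  # a 255 KB page
```

Note that end-to-end time is deliberately omitted here: per the definition above it also covers third-party content, graphics, frames, and redirections, so it cannot be reconstructed from a single object's phase timings alone.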
0014 With respect to Streaming media objects, such metrics include, but are not limited to:
0015 DNS Lookup Time (seconds): This metric is generally the same as the DNS lookup time for URL type objects.
0016 Quantity of Data Received (bytes or bits): The absolute amount of data gathered by the DCA if a stream had been rendered.
0017 Packet Loss (number): The number of packets that are not received by the media monitor.
0018 Percent Packet Loss (number): The percentage of total packets that are not received by the media monitor.
0019 Packets Received (number): The total number of packets received by the media monitor.
0020 Packets Late (number): The number of packets received too late to functionally render.
0021 Packets Resend Requested (number): The number of packets that have been requested to be resent. This metric preferably applies to RealMedia streams.
0022 Packets Recovered (number): The number of packets for which some type of corrective action is taken. Corrective action typically means requesting that the missing or broken packets be resent. This metric preferably applies to RealMedia streams.
0023 Packets Resent (number): (Also known as packets resend received) the number of packets asked for again (the packets resend requested metric) and were received. This metric preferably applies to RealMedia streams.
0024 Packets Received Normally (number): The number of packets received by the media monitor from the streaming media server without incident.
0025 Current Bandwidth (bytes/second): The rate at which data is received measured over a relatively small time frame.
0026 Clip Bandwidth (bytes/second): The rate at which data is received measured over the length of the entire stream or over a relatively long predetermined timeframe.
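Several of the streaming metrics above are simple derivations from raw packet counters reported by a media monitor. The sketch below illustrates this under assumed counter names; it is not the interface of any particular monitoring product:

```python
# Illustrative derivation of streaming metrics from raw packet counters.
# Parameter names are assumptions made for this sketch.

def streaming_metrics(packets_expected, packets_received, bytes_received, clip_seconds):
    packets_lost = packets_expected - packets_received
    # Percent packet loss: share of expected packets never received.
    percent_loss = 100.0 * packets_lost / packets_expected if packets_expected else 0.0
    # Clip bandwidth: receive rate over the length of the entire stream.
    clip_bandwidth = bytes_received / clip_seconds if clip_seconds else 0.0  # bytes/sec
    return {
        "packets_lost": packets_lost,
        "percent_packet_loss": percent_loss,
        "clip_bandwidth_bytes_s": clip_bandwidth,
    }

m = streaming_metrics(packets_expected=2000, packets_received=1950,
                      bytes_received=3_000_000, clip_seconds=60.0)
```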
0027 Results of the above measurements can be used to help determine whether network services are operating up to standard. In the context of the internet, results of the above URL object measurements can, for instance, indicate whether a web page is downloading consistently, at a high enough speed, or completely. The results of measurements of the above streaming media parameters can help determine the same information with respect to a streaming media object.
0028 However, while important diagnostic information can be collected about the current status of a particular web page or streaming media service by making individual or random measurements of one or more of the above noted network performance metrics, it can be difficult to use this testing method to fully diagnose performance. For example, using such techniques it can be difficult to determine the performance of a network over time or during certain times of the day, days of the week, or parts of the year. Thus, it can be difficult to detect, and predict, cycles in network operation, such as whether a network operates more or less rapidly on a periodic basis. Such information could be useful in determining how other network parameters such as network traffic load, which likely varies over a day, week or year period, affects performance of network services.
0029 Without such information, individual measurements may be misleading. For example, an unsatisfactory result of such measurements may be caused by high or low network traffic load, rather than a specific problem with a network device. Also, using the above described standard techniques, it can be difficult to provide any type of predictive event correlation. For example, what, if any, is the effect of degradation of DNS lookup time on overall network service performance during specific time periods? Such predictive information can help providers of network services to set appropriate expectations of network performance for customers of such providers. Additionally, such predictive information can facilitate troubleshooting of root causes relating to network, application and third party content (e.g. banner ads on a web site) issues.
0030 Further, in order to determine whether a particular network service is operating appropriately using the above described methods, a user must initiate measurement of one or more network performance metrics, retrieve and then analyze the result. That is, there is no way for a system that does no more than take measurements of network performance metrics to notify a user if a network is not operating correctly because there is no baseline or other reference available to the system to make such a determination.
0031 What is needed is a system for measuring network performance metrics which allows a user to take into account network conditions, such as traffic load, when analyzing the measurement. Also, the system should allow a user to make predictions about network performance at a given time. Additionally, such a system should be automated and should be able to analyze and present measurement results in a manner which is meaningful and straightforward to interpret.
0032 A system and method in accordance with the present invention collects measurements of network performance metrics and automatically calculates and provides composite variance analysis of such metrics. The system and method can then use history of performance data statistics to alert a user about performance of network services that are outside acceptable tolerance or control limits. The technique exposes subtle deviation from accepted measurement tolerance that can, in turn, be categorized in relation to control limits based on defined standard deviation thresholds.
0033 A system in accordance with the present invention includes at least one DCA located on a network, a processing module interconnected with the DCA, and, preferably, a comparison module interconnected with the processing module. The DCA collects at least a first plurality of measurements of a single network parameter and at least a first set of measurements including at least a single measurement of the single network parameter. Each of the first plurality of measurements is taken at a different time. The processing module calculates at least a first variance statistic, such as an average value, and a second variance statistic. The first variance statistic relates to the first plurality of measurements and the second variance statistic relates to the first set of measurements. The comparison module compares the first variance statistic with at least the second variance statistic to determine if a predetermined relationship exists between the first variance statistic and the second variance statistic. For example, the variance statistics could be averages of the group and first set of measurements. The comparison module could determine if the average of the first set of measurements is within a predetermined multiple of standard deviations from the average of the group of measurements. Preferably, the system also includes a screen display for displaying at least the first and second variance statistics and the results of the comparison thereof.
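The comparison step described above can be sketched in a few lines: check whether the average of a new set of measurements lies within a predetermined multiple of standard deviations of the historical average. The sample values and the k = 2 threshold below are assumptions for illustration only:

```python
# Minimal sketch of the comparison module's control-limit check.
import statistics

def within_control_limits(history, new_samples, k=2.0):
    """True if the new samples' mean is within k standard deviations
    of the historical mean (the first variance statistic)."""
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)  # population std dev of the history
    return abs(statistics.mean(new_samples) - mu) <= k * sigma

# e.g. hourly end-to-end times (seconds) for one URL object
history = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1]
in_control = within_control_limits(history, [1.1, 1.2, 1.0])
out_of_control = within_control_limits(history, [2.5, 2.4])
```

A production system might instead use the sample standard deviation, or keep separate baselines per time of day or day of week to account for the traffic cycles discussed earlier; those choices are left open here.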
0034 A method in accordance with the present invention includes collecting a first plurality of measurements of a single network parameter, each measurement taken at a different time. Also, at least a first set of measurements is collected including at least a single measurement of the single network parameter. Then a first variance statistic associated with the first plurality of measurements and at least a second variance statistic associated with the first set of measurements are calculated. The first variance statistic is then compared with at least the second variance statistic to determine if a predetermined relationship exists between the two variance statistics.
BRIEF DESCRIPTION OF THE DRAWINGS
0035 FIG. 1 is a block diagram illustrating a preferred embodiment of a method for providing composite variance analysis for network operations in accordance with the present invention.
0036 FIG. 2 is a block diagram showing a system for providing composite variance analysis for network operations in accordance with the present invention.
0037 FIG. 3 is a reproduction of a preferred embodiment of a screen display rendered by a page rendering module of a system in accordance with the present invention.
0038 FIG. 4 is a reproduction of the screen display reproduced in FIG. 3 showing a different portion of the screen display.
0039 FIG. 5 is a reproduction of the screen display reproduced in FIG. 3 showing a different portion of the screen display.
DETAILED DESCRIPTION
0040 A system and method in accordance with the present invention collects measurements of network performance metrics and automatically calculates and provides composite variance analysis of such metrics. The system and method can then use history of performance data statistics to alert a user about performance of network services that are outside acceptable tolerance or control limits. That is, a system and method in accordance with the present invention collects raw data including a set of periodic measurements of at least a single network performance metric such as, without limitation, end-to-end time or throughput of at least a single network service. Composite variance analysis is then completed on this set of measurements. The results of this analysis are preferably values such as the average, mean, median, minimum, maximum and standard deviation (referred to collectively herein as variance statistics) of the group of periodic measurements of the single metric. The data collection and analysis can be completed with respect to any network performance metric or group of such metrics. Further, a set of periodic measurements for a single metric can be accumulated over any period of time. Accordingly, the results of the composite variance analysis can advantageously be used to determine how the performance of a given network service with respect to any desired performance metric or group of metrics varies over any amount of time. This also allows a user to advantageously determine whether performance of a network service at any particular time is outside of acceptable limits.
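The composite variance analysis itself reduces to computing the variance statistics named above over a group of periodic measurements. A hedged sketch, using invented sample data:

```python
# Sketch of composite variance analysis for one metric's measurements.
import statistics

def composite_variance(samples):
    """Reduce periodic measurements of one metric to variance statistics."""
    return {
        "average": statistics.mean(samples),
        "median": statistics.median(samples),
        "minimum": min(samples),
        "maximum": max(samples),
        "std_dev": statistics.pstdev(samples),
    }

dns_times = [0.04, 0.05, 0.05, 0.06, 0.30, 0.05]  # six DNS lookups (seconds)
stats = composite_variance(dns_times)
```

Note how the one slow lookup pulls the average and standard deviation up while the median stays at 0.05 s — exactly the kind of subtle deviation from accepted tolerance that comparing these statistics is meant to expose.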
0041 FIG. 1 is a block diagram illustrating a method 100 of providing composite variance analysis of network performance. In step 110, network performance data is collected from a network (not shown) and stored. Such data preferably includes periodically repeated measurements of one or more network performance metrics including, but not limited to, those enumerated in the Background section with respect to both URL services and streaming media services such as DNS lookup time or packets lost. Preferably, in step 110, method 100 measures and stores at least one network performance metric corresponding to a URL object or streaming media object on a continuous basis. With respect to URL objects, such a metric can include, but is not limited to: throughput, DNS lookup time, connect time, request time, server response time, teardown time, download time, and/or end-to-end time. With respect to streaming media objects, such a metric can include, but is not limited to: packets lost, packets received, bytes received, jitter, percent packets lost, DNS lookup time, buffer time, average bandwidth, and/or first stat time. Preferably, each of the above listed metrics is collected and stored in step 110 on a continuous basis. It is also considered to collect each metric, or a subset of the metrics, at only predetermined times.
0042 Additionally, in step 110, method 100 preferably takes a measurement of each of the above listed metrics approximately once per minute to take a total of approximately 60 measurements per hour of each metric on a continuous basis. However, it is within the ambit of the present invention to collect measurements of the metrics at any other interval of time. As discussed in detail below, this information can be stored in a database or other type of data storage configuration.
0043 Preferably, in step 110, method 100 collects error data relating to measurements made of network services. More preferably, step 110 collects errors referred to as access errors, service errors and content errors. An access error includes an error that prevents a DCA from starting the download process for a given URL. A service error includes a DCA's failure to load the first object on the URL's page. Service errors can occur, for example, when the configuration for a monitored URL is improperly formatted or when the site is being worked on. A content error includes an error that is encountered when downloading a component object for a URL.
0044 In step 120, a user requests a report of collected network performance data and a composite variance analysis of such data. In making such a request, the user preferably includes information identifying the URL of the site or the streaming media service to be measured and a time range over which measurements are desired, preferably in the form of a date range or single date with a time range.
0045 After retrieving the raw network performance data corresponding to the URL and time range of the user request, in step 130, the retrieved raw network performance data is analyzed to generate variance statistics. Preferably, the variance statistics include average value, mean value, median value, standard deviations, minimums and maximums of each requested network performance metric over the requested time period. For example, if a user requests throughput, DNS lookup time, connect time, request time and response time for a specific URL over a given 48 hour period, the average value, mean value, median value, minimum value, maximum value and standard deviation for all the measurements taken of each of these metrics over the 48 hour time period are calculated. Thus, if 60 measurements of each metric are taken per hour, for each metric, the variance statistics average, minimum, maximum and standard deviation of 2880 measurements are calculated. In addition to these overall variance statistics, preferably, variance statistics for each of the requested metrics are also calculated for smaller increments of time. Preferably, but not necessarily, a single variance statistic, the average, is calculated for this smaller increment of time. For example, and without limitation, the average value of each requested metric over a 1 hour period is preferably also calculated.
0046 Providing to a user the mean, median, average, minimum, maximum and standard deviation of each metric in the manner described above can advantageously allow the user to determine network performance over a period of time and determine whether a network service device is operating outside of tolerance at any given time during the relevant time period. Additionally, calculating and storing variance statistics for a given metric over predetermined time periods provides a baseline for performance of a network service over time. As such, a user can compare performance of the network service at any given time to the established baseline. As discussed in greater detail below, this can advantageously allow a user to filter out systemic network problems, such as network traffic load, which might affect the performance of a network service at a particular time, in evaluating the performance of a network service.
0047 As noted above, a system and method in accordance with the present invention preferably identifies for a user network performance parameters that are outside of acceptable control limits. Accordingly, in one embodiment of the present invention, in step 132, averages of subsets of measurements are compared to a calculated standard deviation for the metric as calculated from a larger group of measurements taken over a longer time period. For example, and without limitation, if a user requests measurements of a particular metric over a 48 hour period, and each requested metric is measured approximately once per minute, the subsets of hourly averages (or other variance statistic), each preferably calculated from 60 measurements during the 48 hour period, can be compared to the same variance statistic calculated for the entire group of 2880 measurements taken over the 48 hour period. Preferably, regardless of whether a user requests data for a 48 hour period or other length of time, the hourly average of each metric is compared to the average value of the same metric over the entire requested time period, or, as explained below, over another time period. It should be noted that herein, a subset of measurements can include a single measurement. In such a case, the average for the subset exists and is considered to be the value of the single measurement.
0048 The comparison that is made in step 132 preferably, though not necessarily, involves determining if the average (or other variance statistic) of a subset of measurements is within a predetermined number of standard deviations from the average (or other variance statistic) of an overall group of measurements from which the standard deviation was calculated. The subset or subsets of measurements can be part of the overall group of measurements but need not be. If the variance statistic of a subset of measurements is more than a predetermined number of standard deviations away from the same variance statistic of an overall group of measurements, then the variance statistic of the subset of measurements is considered to be outside of acceptable tolerance or, in other words, out of control. What constitutes out of control performance, that is, how many standard deviations a variance statistic of a subset of measurements must be away from the same variance statistic of a larger group of measurements, is preferably configurable by a user.
Chart 1
End-To-End Time

                      Hourly Average (Sec.)    # Std. Dev. from Collective Avg.
Hour 1                4.5                      <1
Hour 2                8.0                      >1 but <2
Hour 3                12.5                     >2
Hour 4                4.2                      <1
Collective Average    4.0
Standard Deviation    2.0
0049 Chart 1 above provides an example of the results of a potential measurement of end-to-end time (the time taken from the moment a user clicks on a link to a web page to the instant the web page is fully downloaded and displayed) illustrating in control and potentially out of control performance. In this example, it can be assumed that the total measurement time is over a 48 hour period and that end-to-end time pertaining to the relevant web site is being measured approximately once per minute. In Chart 1, the variance statistic that is calculated and compared is average value of the end-to-end time. It could, however, be a mean, median, maximum, minimum or other such statistic of the end-to-end time.
0050 The first four rows of the first column of chart 1 list the average of the 60 measurements taken during the first four hours of the 48 hour time period. The fifth row of the first column lists the collective average of the 2880 measurements taken of end-to-end time over the 48 hour period and the sixth row of the first column lists the standard deviation of this collective set of measurements. The second column of chart 1 displays the number of standard deviations each hourly average is away from the collective average. As shown, in hour 1, the average end-to-end time was 4.5 seconds which is less than 1 standard deviation away from the collective average. Accordingly, the end-to-end time in hour 1 would likely be considered within acceptable operating tolerance or in control. However, in hour 2 the hourly average was 8.0 seconds. This is more than 1 standard deviation away from the collective average but still less than 2 standard deviations away from the collective average. Accordingly, the end-to-end time in hour 2 might be considered out of control. In hour 3, the hourly average is 12.5 seconds. This is greater than 2 standard deviations away from the collective average and accordingly, would likely be considered out of control. In hour 4, the hourly average is back within control at 4.2 seconds, which is less than one standard deviation away from the collective average.
0051 Whether a metric is in or out of control is preferably determinable by the user. For example, the user may determine that anything within 2 standard deviations of a collective variance statistic is in control or that anything greater than 1 standard deviation from the collective variance statistic is out of control. Any other scheme for determining what performance is in control and what performance is out of control is also within the ambit of the present invention. For example, and without limitation, the determination of whether a network parameter is out of control or not could also be made using any other number of standard deviations or fractions of standard deviations. That is, if the measurement is greater than 1.5 standard deviations away from the collective average or greater than 2.5 standard deviations away from the collective average, the parameter could be considered out of control.
0052 It is also contemplated to categorize performance in two or more levels. For example, without limitation, any measurement greater than 1 standard deviation from the collective average but less than 2 standard deviations therefrom could be considered a first, or warning level. And, any measurement greater than 2 standard deviations away from a collective average would be considered a second, or alert level. Each level could, for example, indicate that certain corrective or additional actions should be taken.
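The two-level scheme just described can be sketched as a small classifier; the level names and default limits below are illustrative only.

```python
def control_level(subset_avg, collective_avg, std_dev,
                  warn_at=1.0, alert_at=2.0):
    """Classify a subset average against the collective baseline."""
    deviations = abs(subset_avg - collective_avg) / std_dev
    if deviations > alert_at:
        return "alert"    # second level: more than 2 std devs away
    if deviations > warn_at:
        return "warning"  # first level: between 1 and 2 std devs away
    return "ok"

# With a collective average of 4.0 s and a standard deviation of 2.0 s,
# an hourly average of 7.0 s sits 1.5 standard deviations away.
level_hour = control_level(7.0, 4.0, 2.0)
```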
0053 If a variance statistic of a subset of measurements is out of control, a problem could be indicated with a web site, server, streaming media service, or other component of the network and is preferably reported out to a user. Accordingly, in step 134, variance statistics which are out of control are highlighted to stand out from other calculated and reported variance statistics for the user that requested the set of measurements in step 120. How the measurements are highlighted depends upon how the information is to be reported to the user. Preferably, as explained in detail below, the requested measurements and statistics are reported to the user in a tabular or chart format in a screen display provided on a user terminal. Using this reporting format, measurements that are out of control are preferably highlighted by displaying such measurements in a different color than measurements that are in control. Most preferably, variance statistics that are within 1 to 2 standard deviations away from a collective variance statistic are highlighted in a first color and variance statistics that are greater than 2 standard deviations from a collective variance statistic are highlighted in a second color. In step 140, the results of the request made in step 120 are reported back to the user making the request. As discussed above, this is preferably done by providing a screen display showing the measurement and variance analysis results in tabular or chart form on a monitor of a user terminal. It is also considered, however, that the measurement and variance analysis results be displayed in any other format such as a graph showing averages (or other variance statistic) over time as compared to standard deviations. It is also within the ambit of the present invention that the reporting out step includes generating an alarm when one or more metrics for one or more network services are out of control. To initiate such an alarm, the method and system of the present invention could send an e-mail to a predetermined address, send a fax, or initiate a phone call.
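A minimal sketch of the two-color highlighting described above, rendering one statistic as an HTML table cell; the specific colors and markup are assumptions, since the text names only a "first" and "second" color.

```python
# Hypothetical color choices for the two out-of-control levels.
LEVEL_COLORS = {"ok": None, "warning": "yellow", "alert": "red"}

def render_cell(value, level):
    """One statistic as an HTML <td>, colored when out of control."""
    color = LEVEL_COLORS[level]
    if color is None:
        return "<td>%.2f</td>" % value
    return '<td style="background:%s">%.2f</td>' % (color, value)

alert_cell = render_cell(12.5, "alert")
```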
0054 As discussed above with respect to step 132, in one embodiment of method 100 it is preferable to compare hourly averages of measured network performance metrics with collective averages of such metrics over a longer period of time, such as 48 hours, which includes the hour from which the hourly average was calculated. However, past network performance can vary depending on time of day, day of week or even time of year. Specifically, for example, due to different amounts of network traffic in the middle of a week day afternoon as compared to early morning weekend times, a network service will likely display superior performance during the early morning weekend times. For example, due to varying network traffic, an average end-to-end time would likely be longer in the middle of a Friday afternoon than early Saturday morning.
0055 Accordingly, limits of acceptable network performance would likely be different for the two time periods. Specifically, variance statistics of network performance metrics for the high network traffic periods would reflect the fact that the network is under heavy load. For example, DNS lookup times, connect times and request times may all be longer, and more packets may be lost, during a period of high network traffic than during a period of low network traffic. And, such longer time periods might not represent any type of network service malfunction, only that network traffic is high. Thus, applying the limits determined for a period of consistently low network traffic to a period of consistently high network traffic, or vice-versa, could produce misleading results. For example, if DNS lookup time is lower during periods of low network traffic, then a collective average applied to DNS lookup time during periods of high network traffic might be too low and result in false reports of out of control measurements. Conversely, including measurements for periods of high network traffic in the collective average applied to periods of low network traffic could result in missing measurements which might otherwise be considered out of control. Additionally, including periods of relatively high network traffic with periods of relatively low network traffic could result in larger standard deviations. This could cause variance statistics which might otherwise be considered out of control not to be designated as such.
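The standard-deviation inflation described above is easy to demonstrate with synthetic numbers (invented for illustration): pooling a quiet period with a busy period yields a much larger standard deviation than either period shows on its own.

```python
import statistics

low = [2.0, 2.1, 1.9, 2.0]    # a metric during light traffic (seconds)
high = [6.0, 6.2, 5.8, 6.0]   # the same metric during heavy traffic

low_std = statistics.pstdev(low)
high_std = statistics.pstdev(high)
pooled_std = statistics.pstdev(low + high)  # mixes both regimes
```

Here `pooled_std` comes out around 2.0 seconds while each regime's own standard deviation is under 0.15 seconds, so an hour that is badly out of control for its own regime could still fall inside the pooled limits.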
0056 Accordingly, in step 132, it is also within the ambit of the present invention to compare performance at any particular time of interest to past performance at a similar time. For example, network performance on, say, a Wednesday from 2:00 to 4:00 can be compared to network performance on any number of previous Wednesdays over the same time period as opposed to including other times and days of the week in the comparison, such as a Sunday evening at 11:00 p.m. when network traffic would likely be quite different. Preferably, if such a comparison is desired, it can be requested by a user in step 120. In this way, past data from a time frame similar to the time frame of interest can be used to perform the composite variance analysis in step 130.
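Restricting the baseline to a similar time window could be sketched as follows; the helper and sample data are invented for illustration.

```python
from datetime import datetime

def same_window_samples(history, weekday, start_hour, end_hour):
    """From (timestamp, value) pairs, keep samples falling on the given
    weekday (Monday=0) between start_hour and end_hour, so the baseline
    reflects comparable traffic conditions."""
    return [value for when, value in history
            if when.weekday() == weekday and start_hour <= when.hour < end_hour]

history = [
    (datetime(2001, 6, 6, 14, 30), 4.2),   # a Wednesday afternoon
    (datetime(2001, 6, 10, 23, 0), 1.1),   # a Sunday evening
    (datetime(2001, 6, 13, 15, 0), 4.6),   # the next Wednesday afternoon
]
# Baseline for Wednesdays, 14:00 to 16:00: the Sunday sample is excluded.
baseline = same_window_samples(history, weekday=2, start_hour=14, end_hour=16)
```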
0057 Determining network performance by comparing network performance data with data collected at similar times can take into account systemic or environmental network conditions such as network traffic. Such a comparison allows systemic or environmental network conditions to be filtered out so that operation of a network device or service can advantageously be isolated and accurately measured.
0058 FIG. 2 is a block diagram showing a preferred embodiment of a system 10 for measuring and reporting performance statistics for network operations in accordance with the present invention. As shown, system 10 includes at least one data collection agent (DCA) 20 which is preferably located on a distributed computer network 25 such as the internet. Network 25 could also include, but is not limited to, a LAN (local area network), WAN (wide area network), MAN (metropolitan area network), VAN (value-added network), PAN (personal area network), PON (passive optical network), VPN, enterprise-wide network, direct connection, active network, control network, an intranet, or any other suitable network. DCA 20 is preferably an automated server that carries out measurements on network 25, predetermined portions of or services provided on network 25, or a device which is part of network 25. Services provided by network 25 can include, but are not limited to, URL objects such as web sites and streaming media objects.
0059 The type of measurement carried out by DCA 20 depends upon the data to be collected. For example, if DCA 20 is to collect data for a throughput measurement of a given web site, DCA 20 will connect to the given web site and measure the throughput for a predetermined amount of time. Preferably, as discussed above, this measurement will be repeated on a periodic basis for a given duration of time. Measurement of URL object and streaming media object parameters, such as those listed in the Background section, by DCAs is well understood by those skilled in the art.
0060 Preferably, DCA 20 is pre-configured with information concerning what measurements are to be taken, at what times and on which network services. Such pre-configuration of a DCA is well understood by those skilled in the art. For web page or streaming media measurements, the configuration preferably contains the URL or location of the streaming media object for which a given test is to be performed. The configuration also contains the network performance metric to be tested and the frequency of the test to be performed (for example, once each minute, preferably at substantially evenly spaced intervals).
0061 Preferably, DCA 20 is configured to take measurements of at least each of the following URL or streaming media service metrics once each minute on a continuous basis: throughput, DNS lookup time, connect time, request time, server response time, connection time, socket teardown time, download time, and/or end-to-end time. With respect to streaming media services, such metrics can include, but are not limited to: packets lost, packets received, bytes received, jitter, percent packets lost, DNS lookup time, buffer time, average bandwidth, and/or first stat time. However, which metrics are measured and when they are measured can be determined in advance by a user.
0062 DCA 20 then performs the requested tests to collect performance data for one or more URLs or service metrics named in the subscription. DCA 20 can be a Windows NT server, a UNIX server or any other type of server. Configuration and implementation of a DCA such as DCA 20 is well understood by those skilled in the art.
0063 After data is collected by DCA 20, the data is forwarded to data ingest module 30 which transforms the data into an appropriate format for storage as is well understood by those skilled in the art. Preferably, DCA 20 forwards data to data ingest module 30 on a regular, predetermined basis. Information concerning when this data is to be forwarded is preferably included in the DCA configuration information. Data ingest module 30 then forwards the data to performance data repository 40 which is preferably, but not necessarily, placed at a network location apart from DCA 20. Performance data repository 40 can be any type of database or a plurality of databases capable of storing network performance metric data collected by DCA 20. It is also considered, however, that performance data repository 40 could be any facility for storing text strings. Preferably, however, performance data repository 40 supports Structured Query Language (SQL).
0064 System 10 also includes processing software 50 which performs statistical analyses on network performance metric data collected by DCA 20 and stored in performance data repository 40. Processing software 50 preferably runs on a processing server 55 and will be discussed in greater detail below. System 10 also includes web browser 70 which provides a user (not shown) with access to processing server 55 and processing software 50. Data collected by DCA 20 remains in database 40 until a user request 72 is initiated by the user of web browser 70. Web browser 70 is preferably a standard computer terminal including a display monitor but can be any device, such as a PDA or cellular telephone, capable of communicating with processing server 55.
0065 User request 72 preferably includes identifying information for the URL or streaming media service, network device, or other portion of network 25 which the user wishes to analyze. User request 72 also preferably includes a time and/or date range over which the user wishes to retrieve data. Additionally, as discussed above, if the user wishes to compare data from the requested time and date range to a similar, but different, time and date range, user request 72 will also include the similar time and date range to which the requested time and date range is to be compared. User request 72 is received by data access module 52. Preferably, data access module 52 constructs a query for data repository 40 to retrieve the raw data from data repository 40 necessary to generate the requested composite variance statistics.
0066 As discussed above, DCA 20 preferably measures a wide range of metrics of predetermined network services on a continuous basis. And, most preferably, such measurements are taken approximately once each minute. Also as discussed above, all these measurements are stored in data repository 40. Further, data repository 40 preferably retains all the data provided to it by data ingest module 30 for a predetermined period, such as 3 months. As such, a request for data constructed by data access module 52 can preferably retrieve data from any time frame within the predetermined data retention time period for any metric measured for a monitored network device or service. Also, because data repository 40 preferably stores each measurement made and measurements are preferably made in approximately one minute increments, data is preferably retrieved in one minute increments. Construction of a query to retrieve data from a data repository such as data repository 40 is well understood by those skilled in the art.
0067 After retrieving the required raw data from data repository 40, data access module 52 forwards the raw data to recordset processing module 54. Recordset processing module 54 preferably completes requested composite variance analysis on raw measurement data collected by DCA 20. Additionally, recordset processing module 54 preferably constructs components of a display of the calculated statistics. The display is to be used to present the calculated statistics to the network analyst on web browser 70. Preferably, recordset processing module 54 constructs components of a display using a computer markup language such as HTML.
0068 Recordset processing module 54 includes compute statistics module 54a, compare module 54b and build components module 54c. Compute statistics module 54a of recordset processing module 54 preferably completes the composite variance analysis of the raw data provided by data access module 52. Compute statistics module 54a accepts data sets for each metric included in the query results and calculates statistics over predetermined time periods for each of the data sets. Preferably, compute statistics module 54a can be configured to calculate any statistics for a data set over any time range. More preferably, however, compute statistics module 54a calculates the following statistics over the associated time periods: mean value, median value, average value, standard deviation, minimum value and maximum value for the data set associated with each metric for the entire time period requested (e.g. 48 hours); and hourly averages for the data set associated with each metric. Calculation of such statistics from sets of data is well understood by those skilled in the art.
0069 After computing the requested statistics from the provided raw data, compute statistics module 54a preferably provides the calculated statistics to compare module 54b. In an embodiment of the present invention in which a system and method identifies for a user network performance parameters that are outside of acceptable control limits, compare module 54b compares given variance statistics calculated by compute statistics module 54a to determine if a predetermined relationship exists between the given variance statistics. Preferably, though not necessarily, compare module 54b compares averages, means, medians, minimums and/or maximums of subsets of measurements to an average, mean, median, minimum and/or maximum value calculated from a larger group of measurements taken over a longer time period. Preferably, compute statistics module 54a determines if the variance statistic associated with the subset of measurements (e.g. average, mean, median, minimum and/or maximum) satisfies a predetermined relationship with the corresponding variance statistic associated with the larger group of measurements. If a predetermined relationship does (or, alternatively, does not) exist between compared variance statistics, compare module 54b identifies the variance statistic associated with the subset of measurements for which the predetermined relationship did (or did not) exist.
0070 For example, and without limitation, if a user requests measurements of a particular metric over a 48 hour period, and each requested metric is measured approximately once per minute, the subsets of hourly averages (including 60 measurements) during the 48 hour period can be compared to the standard deviation calculated for the entire group of 2880 measurements taken over the 48 hour period. Preferably, regardless of whether a user requests data for a 48 hour period or other length of time, the hourly average of each metric is compared to the average value of the same metric over the entire requested time period, or, as explained below, over another time period. As noted above, a subset of measurements can include a single measurement. In such a case, the average for the subset exists and is considered to be the value of the single measurement.
0071 More preferably, though not necessarily, compare module 54b determines if the average of a subset of measurements is within a predetermined number of standard deviations from the average of an overall group of measurements from which the standard deviation was calculated. The subset or subsets of measurements can be part of the overall group of measurements but need not be. If the average of a subset of measurements is more than a predetermined number of standard deviations away from the average of an overall group of measurements, then the average of the subset of measurements is considered to be out of control. Compare module 54b can then flag or otherwise identify averages of the subset of measurements that are out of control. What constitutes out of control performance, that is, how many standard deviations an average of a subset of measurements must be away from the average of a larger group of measurements, is preferably configurable by a user. Performing the above described comparisons between calculated variance statistics, and identifying variance statistics which do not meet certain conditions, is well understood by those skilled in the art.
0072 Preferably, after performing the above described comparisons and identifying measurements that are out of control, compare module 54b forwards this information to build components module 54c. Build components module 54c preferably constructs components of a display of the calculated variance statistics. Preferably, build components module 54c accomplishes this using a computer markup language such as hypertext markup language (HTML). After building components of the measurement and statistics display, build components module 54c preferably forwards the components to page rendering module 56 which interprets the output of build components module 54c and preferably displays the measurements and statistics display on web browser 70. As noted above, build components module 54c preferably constructs an HTML document. Accordingly, page rendering module 56 preferably renders the measurement and statistics display using HTML.
0073 FIG. 3 is a screen print showing a preferred embodiment of a measurements and statistics screen display 210 rendered by page rendering module 56 in accordance with the present invention. Performance analysis line 212 of screen display 210 provides identification information for the service or network component for which calculated statistics are displayed. Specifically, in FIG. 3, the displayed statistics relate to a web site having the URL:
0074 http://rocky.adquest3d.com:81/supermain.cfmbrd9000.
0075 Beneath performance analysis line 212, display chart 214 displays measurements and statistics relevant to the displayed URL. Date column 216 displays the date on which corresponding measurements were made. Hour column 218 displays the time over which the corresponding averages were made. In the example shown by display chart 214 of FIG. 3, the time period over which the corresponding averages were calculated is one hour. However, it is considered that hour column 218 could also display any other time period over which displayed measurements were made or displayed averages or other statistics were calculated.
0076 Statistics column 220 displays statistics calculated from the measurements taken by DCA 20 relevant to a user selected metric. The selected metric is displayed at the top of statistics column 220. In the example shown in FIG. 3, the selected metric is end-to-end time. However, any measured metric can preferably be selected. Beneath the column label are displayed the statistics which have been calculated for the relevant metric. In FIG. 3, statistics column 220 is preferably divided into 4 additional columns displaying, for each hour measurements were taken, an average value, minimum value, maximum value and standard deviation.
0077 As discussed above, in a preferred embodiment, each measurement of each metric is taken approximately once each minute. Accordingly, the averages, minimums, maximums and standard deviations displayed in statistics column 220 represent statistics calculated from approximately 60 measurements. The hour and date associated with each statistic displayed is preferably provided in the same row as the statistic. Thus, for the example screen display 210 shown in FIG. 3, the average end-to-end time for the web site listed on Jun. 10, 2001 at 1600 hours was 4.35 seconds. The minimum end-to-end time in that hour was 1.59 seconds, the maximum was 15.24 seconds and the standard deviation for the measurements taken over the hour was 3.44. It is also considered to display additional or fewer statistics in statistics column 220.
0078 The display chart also preferably includes an errors column, which displays the number of errors the DCA experienced while taking measurements in the corresponding date and hour. As discussed above, the DCA preferably records three types of errors: access errors, service errors and content errors. The errors column is preferably divided into three additional sub-columns to display each of these types of errors for the corresponding date and hour in which the errors occurred. For example, there were 13 content errors with respect to the relevant web site during the first hour of Jun. 11, 2001.
0079 The display chart also preferably includes a total measurements column, which displays the total number of measurements taken in a corresponding time period (shown in the hour column). For example, while 60 measurements were taken during most hours on Jun. 10, 2001, at 2100 hours on that day, only 59 measurements were taken.
0080 The display chart also preferably includes a performance details column, which displays one statistic for the measurements made during the corresponding time period (shown in the hour column). Preferably, the statistic displayed is the average value of the metric listed at the top of the listing column during the corresponding time period. In the display chart, for example, the left-most sub-column of the performance details column displays the average value of the end-to-end time during the corresponding date and time. Thus, on Jun. 10, 2001, the average end-to-end time of the measured web site over the 60 measurements taken during 1900 hours was 4.68 seconds.
0081 Preferably, the performance details column is divided into as many additional sub-columns as there are metrics for which measurements were taken during the relevant time period. For example, as shown in FIG. 4, which is a second view of the display chart showing the entire performance details column, sub-columns for the following measurements are included in the performance details column: end-to-end time, throughput, DNS lookup time, connect time, request time, response time, teardown time, and download time. It is also considered to include fewer or additional types of metrics in the performance details column (if the DCA took measurements of additional metrics with respect to the particular object being measured).
0082 As discussed above, a system and method in accordance with the present invention can preferably use a history of performance data statistics to alert a user about network performance that is outside acceptable tolerance or control limits. FIG. 5 is a screen display showing a portion of the display chart of the screen display. In the exemplary display chart, the processing software highlights a measurement statistic in a first color, as discussed above, if the measurement statistic is more than 1.5 standard deviations, but less than 2 standard deviations, away from a collective average, and highlights a measurement statistic in a second color if the measurement statistic is greater than 2 standard deviations away from a collective average. As shown in FIG. 3 (in the first row of the display chart, under the Avg column and Std Dev column), the collective average for end-to-end time is 4.99 seconds and the collective standard deviation is 4.07. However, for hour 1000 on Jun. 11, 2001, the average end-to-end time was 13.58 seconds. This is more than 2 collective standard deviations (4.07) away from the collective average end-to-end time (4.99 seconds). Accordingly, the average end-to-end time at hour 1000 has been highlighted.
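The control-limit highlighting rule described in this paragraph can be sketched as follows. This is a hypothetical illustration of the rule, not the patent's implementation; the function and label names are invented:

```python
# Sketch of the control-limit highlighting rule: flag a statistic by
# how far it falls from the collective average, measured in collective
# standard deviations. Hypothetical illustration; names are invented.

def highlight(value, collective_avg, collective_std):
    """Return the highlight level for one measurement statistic."""
    deviations = abs(value - collective_avg) / collective_std
    if deviations > 2.0:
        return "second-color"   # more than 2 standard deviations out
    if deviations > 1.5:
        return "first-color"    # between 1.5 and 2 standard deviations out
    return None                 # within normal control limits

# Example from the text: average end-to-end time of 13.58 s against a
# collective average of 4.99 s and collective standard deviation of 4.07.
print(highlight(13.58, 4.99, 4.07))  # second-color
```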
0083 Further, the throughput for hour 1000 on Jun. 11, 2001 (26.90 KB/Sec) has been highlighted in a different color. Accordingly, while the user is not informed on the display chart of what the collective average or standard deviation for throughput is, the user is alerted that the throughput for hour 1000 is more than 1.5 standard deviations but less than 2 standard deviations away from the collective average.
0084 As shown in FIGS. 3-5 and discussed above, the statistics screen display preferably displays hourly averages of a wide range of metrics related to a network object in a relatively compact tabular format and highlights measurements which reflect that a network service may be out of control. As such, a system and method in accordance with the present invention consolidates and presents a relatively large amount of detailed network performance data. This advantageously allows a user viewing the statistics screen display to quickly complete a relatively thorough assessment of the performance of a selected network object and determine whether any corrective actions are necessary with respect to the network object. This can advantageously save time when troubleshooting the performance of a network service.
0085 As noted above, it is also within the ambit of the present invention to display results of performance measurements and statistical calculations and comparisons in any other format such as in a graphical format.
0086 The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and it should be understood that many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. Many other variations are also to be considered within the scope of the present invention. For instance, a system and method in accordance with the present invention can measure, analyze and report network performance metrics other than those enumerated in the Background section, including network performance statistics that are not associated with a URL or streaming media object. For example, a system and method in accordance with the present invention could measure, analyze and report network performance metrics associated with network hardware devices, such as a server, or portions of a network.
Nay Says: What Do Short-term And Long-term Relationships Look Like?
Psychologists Paul W. Eastwick, Elizabeth Keneski, Taylor A. Morgan, Megan A. McDonald, and Sabrina A. Huang compare the trajectory and experiences of short-term and long-term relationships.
Eastwick and colleagues conducted multiple studies to unpack this subject, predominantly using the Relationship Coordination and Strategic Timing (ReCAST) Model. The ReCAST Model is essentially a retelling of several relationship developmental stages for an individual in their current or past relationship, incorporating evolutionary, psychological, and intimate components. Some participants were asked about both their short-term and long-term relationships (within-subjects), while others were asked about only their short-term or only their long-term relationships (between-subjects).
Through a combination of mate-seeking and mate-retention, people behave and are internally driven differently for short-term and long-term relationships, but the results suggest that short-term and long-term are not as distinctive as many people may think.
Short-term and long-term relationships appear on average to garner similar interest and experiences during the early parts of the relationship. As time goes on, short-term relationships appear to stabilize and then decline, while long-term relationships generally continue to grow and increase. The investigators point out that this finding may also explain how it can be difficult for people to determine whether they are in a short-term or long-term relationship, because the early experiences and interests are similar.
Eastwick and colleagues dive deep into explaining more of the findings and literature that support their results, but here are a few important things to take away:
· Dating, socializing in group settings, introducing to friends, flirting, expressing interest, sexual behaviors, and one-on-one togetherness are some of the examples of similar experiences in both short-term and long-term relationships
· Desire to care for partner was greater in long-term than short-term
· Desire to self-protect was greater in short-term than long-term
· Attachment principles appear to be a huge identifier for long-term relationships as opposed to short-term relationships.
The psychologists mention that as a relationship becomes more sexual, essentially more signs begin to surface on whether the relationship is short-term or long-term.
Nay Says
This investigation was thoroughly executed. Looking at both between-subjects and within-subjects designs provides more insight into the motivations and experiences of the participants in the experiments. The concept of ReCAST is intriguing and likely the most effective method for understanding the nuances and perspectives on the different types of relationships that people experience. Though the investigators mention that ReCAST cannot really pick up short-term relationships that are very short, such as mere hours, the information and data that can be obtained seem sufficient to support their conclusions.
It is becoming clearer that short-term and long-term relationships are not as clearly defined and distinct as past researchers and society have claimed. The idea that one can be in a relationship that is unmarked or not specifically classified makes sense, if the early parts of the relationship trend similarly. Also, the fact that people tend to form short-term relationships significantly more so with a friend than a stranger, may provide some credence to those early interpersonal experiences.
And it makes sense that attachment-behavioral systems appear to be the most vivid difference in long-term and short-term relationships. How people ultimately view and treat their partner, and vice versa, illuminates the trajectory of what is to be expected. A replication of the psychologists’ methodology could provide more awareness to people and set a new standard on how professionals characterize relationships. | https://www.renaytionships.com/post/nay-says-what-do-short-term-and-long-term-relationships-look-like |
There are now 3 possible online modes for units:
Units with modes Online timetabled and Online flexible are available for any student to self-enrol and study online.
Units available in Online Restricted mode have been adapted for online study only for those students who require the unit to complete their studies and who are unable to attend campus owing to exceptional circumstances beyond their control. To be enrolled in a unit in Online Restricted mode, students should contact their Student Advising Office through askUWA.
Click on an offering mode for more details.
Unit Overview
- Description
This unit provides students with the opportunity to develop their knowledge and skills in the area of clinical teaching and supervision. Students explore the theories and practice surrounding clinical teaching, learning and supervision. Students use experiences from their own and others' teaching and learning to inform their clinical educator practice.
- Credit
- 6 points
- Offering
(see Timetable)
| Availability | Location | Mode |
| --- | --- | --- |
| Semester 1 | UWA (Perth) | Face to face |
| Semester 1 | Online | Online flexible |
- Outcomes
Students are able to (1) discuss learning theories and approaches applicable in clinical education settings; (2) identify student learning needs and negotiate learning goals/learning contract; (3) demonstrate skills in designing and implementing appropriate learning activities relevant to the clinical context; (4) demonstrate skills for facilitating learning in small groups and individual learning in clinical settings; (5) analyse clinical teaching and supervisory scenarios, and develop strategies to deal with dysfunctional/difficult situations; (6) demonstrate competence in providing verbal feedback; and (7) discuss appropriate assessment and evaluation methods in the clinical context.
- Assessment
Indicative assessments in this unit are as follows: (1) teaching in difficult clinical situations essay and (2) self-critique of clinical teaching. Further information is available in the unit outline.
Students may be offered supplementary assessment in this unit if they meet the eligibility criteria.
- Unit Coordinator(s)
- Professor Sandra Carr
- Unit rules
- Advisable prior study
- IMED5801 Principles of Teaching and Learning
- Contact hours
- This unit is available online in an asynchronous learning format. The unit is also offered face to face through three eight-hour workshops.
- The availability of units in Semester 1, 2, etc. was correct at the time of publication but may be subject to change.
- All students are responsible for identifying when they need assistance to improve their academic learning, research, English language and numeracy skills; seeking out the services and resources available to help them; and applying what they learn. Students are encouraged to register for free online support through GETSmart; to help themselves to the extensive range of resources on UWA's STUDYSmarter website; and to participate in WRITESmart and (ma+hs)Smart drop-ins and workshops.
- Unit readings, including any essential textbooks, are listed in the unit outline for each unit, one week prior to the commencement of study. The unit outline will be available via the LMS and the UWA Handbook one week prior to the commencement of study. Reading lists and essential textbooks are subject to change each semester. Information on essential textbooks will also be made available on the Essential Textbooks website. This website is updated regularly in the lead up to semester, so content may change. It is recommended that students purchase essential textbooks for convenience due to the frequency with which they will be required during the unit. A limited number of textbooks will be made available from the Library in print and will also be made available online wherever possible. Essential textbooks can be purchased from commercial vendors to secure the best deal. The Student Guild can provide assistance on where to purchase books if required. Books can be purchased second hand at the Guild Secondhand bookshop (second floor, Guild Village), which is located on campus.
Face to face
Predominantly face-to-face. On campus attendance required to complete this unit. May have accompanying resources online.
Online flexible
100% Online Unit. NO campus face-to-face attendance is required to complete this unit. All study requirements are online only. Unit is asynchronous delivery, with NO requirement for students to participate online at specific times.
Online timetabled
100% Online Unit. NO campus face-to-face attendance is required to complete this unit. All study requirements are online only. Unit includes some synchronous components, with a requirement for students to participate online at specific times.
Online Restricted
Not available for self-enrolment. Students access this mode by contacting their student office through AskUWA. 100% Online Unit.
NO campus face-to-face attendance. All study and assessment requirements are online only. Unit includes some timetabled activities, with a requirement for students to participate online at specific times. In exceptional cases (noted in the Handbook) students may be required to participate in face-to-face laboratory classes when a return to UWA’s Crawley campus becomes possible in order to be awarded a final grade.
External
No attendance or regular contact is required, and all study requirements are completed either via correspondence and/or online submission.
Off-campus
Regular attendance is not required, but student attends the institution face to face on an agreed schedule for purposes of supervision and/or instruction.
Multi-mode
Multiple modes of delivery. Unit includes a mix of online and on-campus study requirements. On campus attendance for some activities is required to complete this unit. | https://handbooks.uwa.edu.au/unitdetails?code=IMED5804 |
Thank you very much for the wonderful and informative article and interview of Dr. Anna Hicks by Annette O’Neil (“Thin Air—Busting Lingering Myths About Hypoxia,” May 2019 Parachutist). It is indeed very important to inform our fellow skydivers about the risks of hypoxia.
An additional factor influencing the risk of hypoxia is the time spent at a specific altitude. However, most aeromedical studies show the degree of hypoxia as a steady-state result after a certain time of exposure to a specific reduced oxygen level. That is the logical and only useful view as far as pilots are concerned, because they intend to travel for certain times and distances. Skydivers, however, reach a certain altitude and usually jump right away, starting a rapid descent toward more oxygen. This reduces the hypoxia risk before exit since tissue-oxygen levels (luckily) do not sink low enough quickly enough as to reach a steady state. The message is to avoid a prolonged stay at high altitude (i.e., avoid a go-around or second jump run).
The tingling sensation in the fingers and toes (dysaesthesia) is not a result of hypoxia but is instead caused by too-intensive breathing (hyperventilation), as happens rather frequently to skydivers who are too strongly occupied with their oxygen bottles on the way up. This causes the loss of carbon dioxide (hypocapnia), which changes the pH (blood acidity) and therefore reduces blood calcium (which moves toward the tissues and, among other things, causes the increased pulse rate). This tingling does not get any better by breathing more oxygen. The uninformed skydiver will only increase the problem and the symptoms.
To avoid this (common!) problem of hyperventilation, during the 1987 formation skydiving world record attempts from 20,000 feet MSL, every two or three jumpers used the same onboard oxygen source. They inhaled deeply before passing on the oxygen mask, then held their breath until they received the oxygen mask again. This procedure kept the participants safe from hypoxia, as well as from hyperventilation and dysaesthesia. The only hypoxic and hyperventilation symptoms that did occur were among those jumpers who were not participating in the record attempts and therefore exited the aircraft on another jump run, which took many more minutes. This confirms the time factor as described above. These non-record participants also had a tendency to use one oxygen mask each, as there were enough left unused after the exit of the record participants. | https://uspa.org/p/Article/letters-15 |
Keras, TensorFlow, and PyTorch are some of the most popular machine learning and deep learning frameworks being used by professionals and newbies alike.
Deep learning is a subset of machine learning that uses neural networks to train models on large datasets. This article compares three popular deep learning frameworks: Keras, TensorFlow, and PyTorch. Here you’ll find the key differences between these frameworks and will be able to decide which would be best for you.
What is TensorFlow?
TensorFlow is an open-source software library for machine learning research and development. It provides a set of tools for numerical computation using data flow graphs.
TensorFlow was originally developed by the Google Brain team and released as open-source software in November 2015.
It is used for machine learning applications, including speech recognition, image recognition, predictive analytics, natural language processing, and other more specialized tasks. See Tensorflow Lite for Android.
TensorFlow was originally developed to support the development of machine learning models, but the scope of TensorFlow has since been expanded to include other types of modeling and data processing.
The core TensorFlow framework provides APIs for expressing parallel computations, training models and executing them on both CPUs and GPUs.
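As a minimal sketch of this kind of numerical computation (assuming TensorFlow 2.x is installed, where operations execute eagerly by default):

```python
# Minimal TensorFlow sketch: a small numerical computation.
# Assumes TensorFlow 2.x; runs the same way on CPU or GPU.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
identity = tf.constant([[1.0, 0.0], [0.0, 1.0]])
product = tf.matmul(a, identity)  # matrix multiplication
print(product.numpy())  # same values as `a`, since we multiplied by identity
```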
What is Keras?
Keras is an open-source, high-level neural network API that can run on top of TensorFlow, Theano, or CNTK. It is written in Python and was developed with the intention of allowing fast experimentation.
Going from idea to result with the least possible delay allows for faster iteration during the development process, which leads to better models.
Keras’s backend can be configured to use Theano or TensorFlow. This means that it can be used with one or the other without worrying about switching between them, making it easier for developers who want to experiment with different deep learning frameworks without rewriting their code.
Keras has the following key features:
- Support for convolutional neural networks (CNN) for computer vision applications, recurrent neural networks (RNN) for sequence processing applications, and any combination.
- High level of customizability through user-defined callbacks and hooks.
- Support for arbitrary network architectures: multi-input or multi-output models are easily expressed in just a few lines of code.
Keras has two main components:
The first is a high-level API to build and train deep learning models. This API makes it easy to quickly prototype new ideas without getting bogged down in the details of building neural networks. The second is a set of pre-built models that can be used for common tasks such as classification, regression, clustering, and more.
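A minimal sketch of the high-level API (assuming the Keras bundled with TensorFlow 2.x; the layer sizes here are arbitrary, chosen only for illustration):

```python
# Minimal Keras sketch: define and compile a small classifier.
# Assumes tensorflow.keras (TensorFlow 2.x); layer sizes are arbitrary.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(16,)),                # 16-feature input
    layers.Dense(32, activation="relu"),     # hidden layer
    layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 10)
```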
What is PyTorch?
PyTorch is a deep learning framework that provides GPU acceleration and support for both Python and C++. It is one of the most popular frameworks in the deep learning space, with an active community of developers.
PyTorch was developed by Meta’s artificial intelligence research group, which also created Caffe2, a machine learning framework.
The first public release of PyTorch was in January 2016.
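A minimal sketch of defining a network and running one forward pass (assuming PyTorch is installed; the layer sizes are arbitrary, chosen only for illustration):

```python
# Minimal PyTorch sketch: a small network and one forward pass on CPU.
# Layer sizes are arbitrary, chosen only for illustration.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(16, 32),   # 16 inputs -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 10),   # 32 hidden units -> 10 outputs
)
x = torch.randn(4, 16)   # batch of 4 random input vectors
out = model(x)           # forward pass
print(out.shape)         # torch.Size([4, 10])
```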
Keras vs TensorFlow vs PyTorch what is the difference?
These three frameworks have a lot in common, although they are all slightly different.
TensorFlow vs Keras vs PyTorch Which is Easier for Beginners?
These frameworks have different learning curves. Because of Keras’s simplicity, it’s easier to understand.
Keras vs TensorFlow vs PyTorch Which is Better?
Keras is perfect for programming quick prototypes and things that need to be created without a lot of data.
PyTorch is most suitable for building large models with big data and high performance.
Keras vs TensorFlow vs PyTorch Which is Faster?
PyTorch is comparatively faster than Keras.
PyTorch vs Keras vs TensorFlow Which is more popular?
According to Quora and Kaggle, Keras seems to be the most popular deep learning framework among data scientists for its simplicity, while PyTorch is favored by academic and industrial research teams for its research flexibility.
Rebecca M. Blank offers the first comprehensive analysis of an economic trend that has been reshaping the United States over the past three decades: rapidly rising income inequality. In clear language, she provides an overview of how and why the level and distribution of income and wealth has changed since 1979, sets this situation within its historical context, and investigates the forces that are driving it. Among other factors, Blank looks closely at changes within families, including women’s increasing participation in the work force. The book includes some surprising findings—for example, that per-person income has risen sharply among almost all social groups, even as income has become more unequally distributed. Looking toward the future, Blank suggests that while rising inequality will likely be with us for many decades to come, it is not an inevitable outcome. Her book considers what can be done to address this trend, and also explores the question: why should we be concerned about this phenomenon?
The United States is in an extended period of rapidly rising inequality. Starting in the mid-1970s, all measures of U.S. economic inequality have risen, including inequality in wages, income, and wealth. This development has made income distribution and income inequality a topic of substantial interest among researchers and policy analysts who focus on economic and social issues in the United States.
This book adds to the discussion about inequality in two ways. Part 1 provides a comprehensive look at changes in the level and distribution of income since 1979. Part 2 discusses the forces that drive changes in inequality.
Whereas...
The next several chapters of this book provide a detailed comparison of the composition and distribution of income in the United States in 1979 and in 2007. I am interested in looking at the pretax income available to all nonelderly adults, which I refer to as the total-income distribution. Surprisingly, there is almost no research that takes an approach as comprehensive as that taken in this book, looking as it does at changes in the distribution of total income and its sources. There are a large number of papers that investigate changes in wage inequality over the past several decades,...
Most of the research on inequality in the past three decades has focused on rising inequality in hourly or weekly wages, particularly the rapid increases in wages among more-skilled workers in contrast to stagnant or falling wages among less-skilled workers. In this chapter, focusing on workers only, I look at changes in hourly wages as well as changes in weeks and hours of work. These components combine to produce changes in total annual earnings. The results show that annual earnings have changed in ways that are quite different from the better-known changes in hourly wages. It is particularly interesting to...
In this chapter, I look at changes in the distribution and level of total income available to individuals—from their own earnings, from the earnings of others with whom they live and share income, and from the receipt of unearned income from government programs, private assets, alimony payments, and other sources. Earnings are the primary source of income for most individuals and families, but most people receive some unearned income. As a result, earnings and total income need not always move together.
Total family income is the sum of earnings, government income, and unearned income from other sources. Throughout this...
In this chapter, I look at the reasons why total-income inequality is changing and why the overall distribution of total income is shifting upward. Some of these changes are due to shifts in family composition and size; some are due to changes in the level and distribution of income components within family types. I finish this chapter by talking about what these changes in total income and its distribution might signal about overall well-being among individuals.
There are three factors underlying the shifts in total income that I observed in the last chapter. First, within each family type, family size...
The results in part 1 indicate a long-term trend toward rising inequality over the past three decades. In this section, I step back from the data and discuss the events that mightchangesuch a trend and bring about a narrowing of the income distribution within the United States. I focus on the question, What changes might bring the recent period of rising inequality to an end? This question has particular salience given the economic crisis of 2008–09, in which the U.S. economy (and the global economy) was engulfed. Rapid declines in wealth, due to a collapse in financial...
The last chapter discussed some of the historical economic events that have affected income levels and income inequality. This chapter provides some sense of the magnitude of change that is necessary to significantly reduce income inequality in the United States.
Just as many factors have led to rising inequality over the past thirty years, so there are many paths that could lead to reductions in inequality. I will discuss four different types of changes: changes in skills; changes in key economic variables, such as wages, labor-force participation, and investment income; changes in marital choices; and changes in redistributional policies. In...
This last chapter speculates about possible changes in inequality in the United States within the next few decades. Like all prognostication, this is highly risky, since we are all constantly surprised as our personal and national histories unfold.
The United States has been experiencing an extended period of rising inequality since the mid-1970s, following an extended period of downward-trending inequality that began sometime after 1910, including sharp reductions in inequality in the 1920s and the early 1940s. What are the factors that might lead inequality to stabilize or even reverse itself in the near future? What opposing factors might lead...
The Communication and Cultural Competence program is based on case studies that give examples of everyday medical practice in Canada. These modules do not focus on diagnosis and treatment. Instead, they focus on communication between health professionals and patients. Please note that the modules are not intended to show the only way to deal with a situation. Instead, they are intended to provide guidance on how to approach and reflect on these different scenarios.
Modules
Cross-cultural communication
Introduction
MCC role objectives
Communicator
- Initiate an interview with the patient by greeting with respect, attending to comfort and to the need for an interpreter if applicable, orienting to the interview, and consulting with the patient to establish the reason for the visit (1.1)
- When appropriate, facilitate collaboration among families and patients, while maintaining patient wishes as the priority, ensuring confidentiality, and respecting patient autonomy (1.5)
- Gather information about the patient’s concerns, beliefs, expectations, and illness experience (2.3)
- Identify the personal and cultural context of the patient, and the manner in which it may influence the patient’s choices (3.2)
- Provide information using clear language appropriate to the patient’s understanding, checking for understanding, and clarifying if necessary (3.3)
Professional
- Negotiate between patient and family, respecting patient wishes as primary (2.3.1)
- Recognize that attitudes to confidentiality may vary (Aboriginal peoples, minors) (2.3.3)
Sentinel habit
- Incorporate the patient’s experience and context into problem identification and management
Entrustable professional activities
- Lead and work within interprofessional health-care teams (8)
- Collaborate with patients, families and members of the interdisciplinary team (9)
Critical competencies
- Establish and maintain proficiency in clinical knowledge, skills and attitudes appropriate to the practice of medicine (2)
- Seek appropriate consultation from other health professionals, recognizing the limits of one’s own expertise (5)
- Accurately elicit and synthesize relevant information and perspectives of patients and families, colleagues and other professionals (9)
- Convey relevant information and explanations accurately to patients and families, colleagues and other professionals (10)
- Develop a common understanding on issues, problems and plans with patients, families and other professionals to develop a shared plan of care (12)
- Demonstrate a commitment to their patients, profession and society through ethical practice (19)
- Demonstrate knowledge of and apply the professional, legal and ethical codes for physicians (21)
Cultural diversity in Canada
Canada is one of the most culturally diverse societies in the world. Working in any of the health care professions almost anywhere in the country, physicians will interact with patients or other health care workers from the following backgrounds:
- Indigenous peoples of Canada
- Canadian-born, of western European heritage
- Canadian-born, second or third generation from non-western cultures, with or without a knowledge of their heritage language and culture
- Immigrants who have lived in Canada for many years, who have retained little of their heritage language and culture
- Immigrants who have lived in Canada for many years, who have retained most of their heritage culture, including language, integrating little into Canadian society
- Recent immigrants, unilingual, with little or no experience of other cultures
- Recent immigrants who have a wide variety of intercultural experience and who may speak several languages
Cross-cultural education
- Canadian training programs have recognized the necessity of educating learners — and teachers — in the knowledge, skills and attitudes of cross-cultural communication.
- Cultural sensitivity and communication skills are two of the main themes of the MCC role objectives.
- If, as many feel, physicians should approach every patient encounter as potentially cross-cultural, what skills and knowledge do they need? What attitudes are needed for good cross-cultural communication?
- Many physicians and their training programs believe that the major task in developing greater cultural sensitivity is simply learning more about other cultures (Lingard, et al). Such a “cultural cookbook” approach is clearly impossible and trying it could also lead to cultural stereotyping.
- Others might feel that, simply by having lived in or immigrated to another culture, they have acquired cultural competence or sensitivity. This is not necessarily true.
Cross-cultural bioethics
The increasing awareness of cultural issues in health care has profoundly influenced the field of bioethics, another of the main themes in the MCC role objectives. The standard western, principle-based approach taught in medical schools is being challenged. For instance, the feminist movement has changed the way we think about the care of patients, especially in gendered and reproductive issues. Further, the globalization of society, especially in health care, has resulted in the need to approach bioethics from a cross-cultural perspective. This raises many problems and issues. If my values differ from those of my patient, how should I act? Must I accept the behaviour of another culture toward certain groups in society, such as women, if I find that behaviour unethical from my cultural perspective? Do I act according to accepted Canadian standards toward a patient who clearly prefers his or her own cultural norms? In this case and the “Reflective exercises” that arise from it, we will examine three aspects of culture, bioethics and the interaction of the two:
- Self-awareness and self-reflection about one’s own culture, both personal and medical — this is the attitude part
- The skills involved in cross-cultural communication
- The Canadian medical culture and how it may differ from your previous experience — this is the knowledge part
Although this information is intended primarily for graduates of non-Canadian medical schools, Canadian-trained physicians who want to improve their cross-cultural skills can also benefit from it.
Self-assessment quiz
There are two “Self-assessment” quizzes and one “Reflective exercise.” They are designed to help you assess your knowledge, skills and attitudes about cultural issues in the medical domain and, as such, there are no right or wrong answers.
Reflective exercise 1
Read “Becoming interculturally competent” by Milton J. Bennett.
In this article, Bennett describes six stages of “intercultural sensitivity.” It is generally accepted that the acquisition of such skill is necessary in order to function as a professional in a cross-cultural environment. After reading the article, do the following exercises. Think about the level of intercultural sensitivity that is exhibited in each scenario. Where do you think your level would be if you were involved in a similar situation? What level do you think is needed by a physician in Canada?
Most serious poetry today is still Modernist. Modernism in literature is not easily summarised, but the key elements are experimentation, anti-realism, individualism and a stress on the cerebral rather than emotive aspects.
Discussion
Modernist writing is challenging, which makes it suitable for academic study. Many poets come from university, moreover, and set sail by Modernism’s charts, so that its assumptions need to be understood to appreciate their work. And since Postmodernism still seems brash and arbitrary, writing in some form of Modernism is probably the best way of getting your work into the better literary magazines. How much should you know of its methods and assumptions?
You need to read widely – poetry, criticism and literary theory. Modernism was a complex and diverse movement. From Symbolism it took allusiveness in style and an interest in rarefied mental states. From Realism it borrowed an urban setting, and a willingness to break taboos. And from Romanticism came an artist-centred view, and retreat into irrationalism and hallucinations.
Hence many problems. No one wants to denigrate the best that has been written this last hundred years, but the forward-looking poet should be aware of its limitations. Novelty for novelty’s sake ends in boredom and indifference, in movements prey to fashion and media hype. Modernism’s ruthless self-promotion has also created intellectual castes that carefully guard their status. Often the work is excessively cerebral, an art-for-art’s sake movement that has become faddish and analytical. The foundations tend to be self-authenticating – Freudian psychiatry, verbal cleverness, individualism run riot, anti-realism, overemphasis on the irrational. These concepts may not be wholly fraudulent, but as articles of faith they have not won general assent. Modernist work will give you accredited status, but possibly neither an avant-garde reputation nor wide popularity.
Suggestions
Modernist work is often the most accessible of today’s poetry, thanks to education, public libraries and a vast critical industry. Start therefore with Yeats, Frost, Pound, Eliot, Stevens, Williams, etc., and follow your interests – back into traditional poetry or forward into Postmodernist styles.
Any writing that is true to your personality, authentic and original, is apt to begin as dark poetry. How do you generate these qualities, and then develop them?
The author’s personality is always to be found in a good poem: it is something that only he or she could have produced. But we also expect that the personality will facilitate and further the poem’s intentions. The authentic is that individual voice, unquestionably theirs, which genuine artists find as they seek to represent what is increasingly important to them. Originality does not mean novelty – which is easily achieved – but the means by which experience is presented in a more distinctive and significant manner.
Personality, authenticity and originality are therefore linked, and achieved only by continual effort. Gifts and character make artists, and the two are interdependent.
Discussion
As in life generally, success comes at a price. The creators of dark poetry are often: 1. indifferent to conventional procedures and behaviour, 2. inner-directed, making and following their own goals, and 3. keenly interested in contradictions and challenges.
Better poets can therefore find themselves at odds with society, and there is no doubt that such conflicts make for solitary, cross-grained and somewhat unbalanced personalities. Many past writers had difficult and neurotic personalities, and the same traits are all too evident today. Nonetheless, absurd posturing, sharp feuds and strident ambitions also appear in writers of no talent whatsoever, which suggests that difficulties are the unfortunate side effects of originality and not its sustaining force. Artists may be sometimes unbalanced, but not all unbalanced people are artists.
Creativity differs markedly between the arts and sciences, and even between different art forms. Nonetheless, most creativity shows four phases: challenge, incubation, illumination and exposition. Driving these phases forward, through many interruptions and loopbacks, is the earnest desire to succeed, which naturally taps some inner need. We make poetry out of the quarrel with ourselves, said Yeats, and these fears and obsessions are highly individual.
Evidence
In 1972 I was watching the Fellini film Roma and was captivated by splashes of light involving sparks from a street car at night. That scene with its dark nature and surreal quality motivated me to emulate a specific photographic style. It seems strange to me (almost absurd) that such a momentary scene became a motivation for an entire body of work that is interwoven throughout my artistic career. I call these images Evidence and Inhabitants. They are the evidence of places and people I have never been able to fully remember, but manifest themselves in the photographs I make.
I usually capture these pictures in abandoned places and of Inhabitants who might have or may still be living there. I search for the Evidence of humans where no humans currently reside. I am like an archaeologist sifting through a once inhabited location trying to imagine/portray what these people and places were like.
1. A constant force of 10.0 N, 225° acts on an object as it moves 5.0 m, 120°. Determine the work done on the object by the force.
2. An object moves on the x-axis beginning at x = 2 m and ending at x = −3 m. During this interval it is subjected to a force given by F(x) = (4 N/m³)x³. Determine the work done on the object.
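As a quick numeric sanity check (not part of the original problem set), the work integral in problem 2 can be approximated with the trapezoid rule; the function names and step count below are arbitrary choices:

```python
# Problem 2: W = ∫ F(x) dx from x = 2 m to x = -3 m, with F(x) = 4x^3 N.
# The exact antiderivative is x^4, so W = (-3)^4 - 2^4 = 65 J.
def force(x):
    return 4.0 * x**3

def work(f, x0, x1, n=100_000):
    """Trapezoid-rule approximation of the work integral."""
    dx = (x1 - x0) / n
    total = 0.5 * (f(x0) + f(x1)) + sum(f(x0 + i * dx) for i in range(1, n))
    return total * dx

print(work(force, 2.0, -3.0))  # ≈ 65 J
```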
3. A spring of length 50.0 cm is characterized by the constant k = 80.0 N/m. Determine the work required to compress it from a length of 40.0 cm to 35.0 cm.
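A similar numeric check for problem 3; this sketch assumes the 50.0 cm figure is the spring's natural length, so compressions are measured from that length:

```python
# Problem 3: the work to compress the spring further equals the change in
# elastic potential energy, W = ½k·x_f² − ½k·x_i².
k = 80.0             # N/m
x_i = 0.500 - 0.400  # m, compression at 40.0 cm length
x_f = 0.500 - 0.350  # m, compression at 35.0 cm length
W = 0.5 * k * (x_f**2 - x_i**2)
print(W)  # 0.5 J
```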
4. Starting from rest, a 5.0 kg sled is pulled across a level surface by an applied force of 27 N, 35°. This force acts as the sled is pulled a distance of 2.00 m. The coefficient of friction is 0.25. Determine the speed attained by the sled.
5. A board 1.60 m long is propped up at one end by a table 0.60 m high. A 400 gram block is started sliding downward with speed 8.00 m/s at the top of the ramp and slides to the bottom. The coefficient of friction is 0.30. (a) Determine the work done by each force acting on the block. (b) Determine the speed of the block at the end of the ramp.
6. A cart of mass m is attached to springs as shown in the diagram below and is free to move horizontally on the track, where μ is the coefficient of friction. The springs behave as a single spring with constant k. Suppose the cart is moved to a position x₀ relative to its equilibrium position (x = 0) and released from rest. (a) Determine the speed of the cart as it first passes equilibrium. (b) Determine its position at which its speed first reaches zero after its release. (c) Determine how many times the cart will pass by x = 0 before coming to a stop.
7. The mass of the space shuttle is 79000 kg and it orbits at a uniform altitude of 350 km. Ignore air resistance and the rotation of the earth. (a) Determine the net work required to put the shuttle into this orbit. (b) In order to return to earth, the shuttle fires retrorockets that reduce its orbital speed by 91 m/s. Determine the speed of the space shuttle when it lands.
8. The actual landing speed of the space shuttle is 100 m/s. Use relevant information from the previous problem. (a) Determine the work done by the earth’s atmosphere during the shuttle’s return to the surface. (b) Estimate the amount of work done by the rocket engines to put the shuttle into orbit. (c) Conceptual question: What difference does the earth’s rotation make? What would you do differently in these calculations if you wanted to account for it?
9. The gravitational field inside the earth may be modeled by g = a r² + b r, where r = distance from center, a = −2.851 × 10⁻¹³ m⁻¹s⁻², and b = 3.354 × 10⁻⁶ s⁻². (This model allows for the increasing density towards the earth’s core using ρ(r) = (−0.00136 kg/m⁴) r + 12000 kg/m³.) (a) Use the function g(r) to determine the potential energy function for an object located inside the earth. Use the earth’s center as a reference point. (b) Supposing an object could fall through a tunnel from the surface to the center of the earth, what would be its final speed? (c) What would be the escape speed for an object to be shot out of this tunnel and leave Earth’s gravity?
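Problem 9(b) can be estimated directly from the given field model. This sketch assumes a mean Earth radius of 6.371 × 10⁶ m, a value not stated in the problem:

```python
import math

# Per unit mass, the work done by gravity while falling from surface to
# center is ∫₀ᴿ g(r) dr = aR³/3 + bR²/2, and ½v² equals that work.
a = -2.851e-13   # m⁻¹ s⁻²
b = 3.354e-6     # s⁻²
R = 6.371e6      # m (assumed mean Earth radius)

work_per_kg = a * R**3 / 3 + b * R**2 / 2
v = math.sqrt(2 * work_per_kg)
print(v)  # on the order of 9 km/s
```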
10. A particle of mass 2.00 kg is free to move along the x-axis. The particle’s potential energy is given by U(x) = x³ − 3x + 3, where x is position in meters and U is energy in joules. The particle begins at rest at the origin. (a) Determine the particle’s maximum speed. (b) Determine the particle’s greatest departure from the origin. (c) Determine the particle’s maximum rate of acceleration (in either direction). (d) Describe in words the overall motion of the object. (e) What initial speed would be required to prevent the particle from returning to the origin? (f) If the particle is given speed 3.00 m/s at the origin what will be its speed at x = −3 m?
A Beginner’s Guide to Deep Learning Algorithms
This article was published as a part of the Data Science Blogathon.
Introduction to Deep Learning Algorithms
The goal of deep learning is to create models that learn abstract features. This is accomplished by building models composed of many layers, in which lower layers capture fine-grained details of the input while higher layers build progressively more abstract interpretations.
- As we train these deep learning networks, the high-level information from the input image produces weights that determine how information is interpreted.
- These weights are updated by stochastic gradient descent algorithms, with the gradients computed via backpropagation.
- Training large neural networks on big data can take days or weeks, and it may require adjustments for optimal performance, such as adding more memory or computing power.
- Sometimes it’s necessary to experiment with multiple architectures such as nonlinear activation functions or different regularization techniques like dropout or batch normalization.
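The update loop described in these bullets can be shown in miniature. The pure-Python sketch below fits the line y = 2x + 1 by stochastic gradient descent on squared error; the data, learning rate, and epoch count are invented for illustration:

```python
import random

random.seed(0)
# Noise-free samples of y = 2x + 1 on a small grid.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-20, 21)]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(200):
    random.shuffle(data)            # "stochastic": visit samples in random order
    for x, y in data:
        err = (w * x + b) - y       # prediction error for this sample
        w -= lr * 2 * err * x       # gradient of err² with respect to w
        b -= lr * 2 * err           # gradient with respect to b

print(round(w, 2), round(b, 2))     # approaches 2.0 and 1.0
```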
Nearest Neighbor
Clustering algorithms divide a larger set of input into smaller sets so that those sets can be more easily visualized. Nearest Neighbor is one such algorithm because it breaks the input up based on the distance between data points.
For example, if we had an input set containing pictures of animals and cars, the nearest neighbor would break the inputs into two clusters. The nearest cluster would contain images with similar shapes (i.e., animals or cars), and the furthest cluster would contain images with different shapes.
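The animals-versus-cars grouping can be sketched in a few lines of Python; the coordinates and labels are invented placeholders standing in for image features:

```python
# Assign a point to the label of its nearest labeled example (1-nearest-neighbor).
def nearest_neighbor(point, examples):
    def dist2(example):
        return sum((p - q) ** 2 for p, q in zip(point, example[0]))
    return min(examples, key=dist2)[1]

examples = [((0.0, 0.0), "animal"), ((10.0, 10.0), "car")]
print(nearest_neighbor((1.0, 2.0), examples))  # animal
print(nearest_neighbor((9.0, 8.0), examples))  # car
```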
Convolutional Neural Networks (CNN)
Convolutional neural networks are a class of artificial neural networks that employ convolutional layers to extract features from the input. CNNs are frequently used in computer vision because they can process visual data with fewer moving parts, i.e., they’re efficient and run well on computers. In this sense, they fit the problem better than traditional deep learning models. The basic idea is that each layer condenses the input a little further: convolutional filters respond to local patterns in the pixels, pooling layers summarize spatial information, and later layers combine these responses through activation functions into higher-level features.
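The filtering at the heart of a convolutional layer can be illustrated with a one-dimensional convolution in plain Python; the kernel here is a hand-picked edge detector rather than a learned filter:

```python
# Slide a small kernel over the signal and record the response at each position.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [0, 0, 0, 1, 1, 1]
print(conv1d(signal, [-1, 1]))  # [0, 0, 1, 0, 0] — peaks where the signal jumps
```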
Long Short Term Memory Neural Network (LSTMNN)
Several deep learning algorithms can be combined in many different ways to produce models that satisfy certain properties. Today, we will discuss the Long Short-Term Memory Neural Network (LSTMNN). LSTM networks are great for detecting patterns and have been found to work well in NLP tasks, image recognition, classification, etc. The LSTMNN is a neural network that consists of LSTM cells.
Recurrent Neural Network ( RNN )
An RNN is an artificial neural network that processes data sequentially. Compared with other neural networks, RNNs handle arbitrary-length sequential data better and are better at predicting sequential patterns. The main issue with RNNs is that they require very large amounts of memory, so many implementations are specialized for a single sequence length. They also cannot process input sequences in parallel, because each step’s hidden state depends on the hidden state saved from the previous step, so future time steps cannot be computed until all earlier ones are finished.
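The step-by-step dependence described above — each hidden state computed from the current input and the previous state — can be sketched as follows; the weights are arbitrary illustrative values:

```python
import math

def rnn(inputs, w_in=0.5, w_rec=0.9):
    h = 0.0                                  # initial hidden state
    states = []
    for x in inputs:                         # strictly sequential: no parallelism
        h = math.tanh(w_in * x + w_rec * h)  # new state from input + old state
        states.append(h)
    return states

# A single early input keeps influencing later states through the recurrence.
print([round(h, 3) for h in rnn([1, 0, 0, 0])])
```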
Generative Adversarial Networks (GANs)
GANs are neural networks with two components: the generator and the discriminator. The generator produces artificial data from scratch, with no human input, while the discriminator tries to identify that data as artificial by comparing it against real-world data. When the two components compete against each other, each is forced to improve (much as rival competitors outdo each other), which eventually leads to better results at both tasks. A GAN of this kind can be described as three modules: the generator module (G), the discriminative module (D), and an augmentation module (A).
Support Vector Machines (SVM)
One deep learning algorithm is Support Vector Machines (SVM). One of the most famous classification algorithms, SVM, is a numerical technique that uses a set of hyperplanes to separate two or more data classes. In binary classification problems, hyperplanes are generally represented by lines in a two-dimensional plane. Generally, an SVM is trained and used for a particular problem by tuning parameters that govern how much data each support vector will contribute to partitioning the space. The kernel function determines how one feature vector maps into an SVM; it could be linear or nonlinear depending on what is being modeled.
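The separating-hyperplane idea can be illustrated by classifying points by the sign of w·x + b; the weights below are hand-picked for the sketch, whereas a trained SVM would choose them to maximize the margin between classes:

```python
# Classify a point by which side of the hyperplane w·x + b = 0 it falls on.
def classify(x, w, b):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w, b = [1.0, -1.0], 0.0             # hyperplane: x1 = x2
print(classify([3.0, 1.0], w, b))   # 1
print(classify([1.0, 3.0], w, b))   # -1
```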
Artificial Neural Networks (ANN)
ANNs are networks that are composed of artificial neurons. The ANN is modeled after the human brain, but there are variations. The type of neuron being used and the type of layers in the network determine the behavior.
ANNs typically involve an input layer, one or more hidden layers, and an output layer. These layers can be stacked on top of each other and side by side. When a new piece of data enters the input layer, it passes through each hidden layer in turn, with computations performed at every step, until it reaches the output layer.
The decision-making process involves training an ANN with some set parameters to learn what outputs should come from inputs with various conditions.
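The layer-by-layer flow just described can be written as a tiny forward pass; the network shape and weights are arbitrary illustrative choices, not trained values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Input layer (2 values) -> hidden layer (2 neurons) -> output layer (1 neuron).
    h1 = sigmoid(0.5 * x[0] + 0.2 * x[1])
    h2 = sigmoid(-0.3 * x[0] + 0.8 * x[1])
    return sigmoid(1.0 * h1 - 1.0 * h2)

print(forward([1.0, 0.0]))  # a value between 0 and 1
```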
Autoencoders: Compositional Pattern Producing Networks (CPPN)
Compositional Pattern Producing Networks (CPPNs) are a kind of autoencoder, meaning they’re neural networks designed for dimensionality reduction. As their name suggests, CPPNs create patterns from an input set. The patterns created are not just geometric shapes but very creative and organic-looking forms. CPPN autoencoders can be used in many fields, including image processing, image analysis, and prediction markets.
Conclusion
To summarize, deep learning algorithms are a powerful and complex technology capable of identifying data patterns. They enable us to parse information and recognize trends more efficiently than ever.
Furthermore, they help businesses make more informed decisions with their data. I hope this guide has given you a better understanding of deep learning and why it is important for the future.
There are many deep learning algorithms, but the most popular ones used today are Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN).
I would recommend taking some time to learn about these two approaches on your own to decide which one might be best for your situation.
The art-historical literature on Italian Renaissance courts has traditionally been one of in-depth studies of individual court cities and specific artists. Alison Cole’s lucidly written book summarizes some of this literature for a general audience, focusing on the courts of Naples, Urbino, Ferrara, Mantua, and Milan during the fifteenth century. The work is a revised edition of the author’s 1995 book Virtue and Magnificence: Art of the Italian Renaissance Courts, expanded to reflect recent scholarship. Cole approaches her subject primarily from an art-historical perspective, highlighting the varieties of media, styles, and uses of art at court while presenting a picture of the artists and patrons behind its production. Cole’s writing thus offers the nonspecialist a concise overview of an important and fascinating topic, and an alternative to the many general studies of the artistic centers of Florence, Venice, and Rome.
Like its predecessor, this book begins with several broad chapters before turning to more focused studies of individual courts. In her introduction, “The Fifteenth-Century Renaissance Court,” Cole seeks to characterize these institutions in terms of their political, historical, and diplomatic contexts. Although the term is not explicitly defined, Cole seems to consider an “Italian court” any state of the Italian peninsula ruled by a despotic lord such as a duke, marquis, or, in the case of Naples, a king. The rulers of these states were linked to each other by bonds of marriage, commerce, and diplomacy, and they maintained contact with the courts of France, Germany, Spain, and Burgundy. They all shared the desire to assert the legitimacy of their rule, the honor of their families, and their piety and virtue through art, architecture, and cultural programs. To this end, they sponsored projects that emphasized military prowess, spent lavishly to support religious orders, constructed dazzling palaces, and funded extravagant public spectacles. Courtly art was not only a display of power and prestige; it also was a source of enjoyment and “spiritual nourishment” that provided consolation in times of uncertainty and unrest (17). In this chapter Cole also describes some of the features of court life, touching on its administration, the comings and goings of its courtiers, the military activities of the lord, and the roles of women and children.
Having addressed this broad historical framework, Cole focuses the next two chapters on the culture of artistic production at court. Chapter 1, “Art and Princely ‘Magnificence,’” outlines the cultural values that shaped the patronage of the lord and his family. She relates how “magnificence,” a value deriving from Aristotle’s Ethics, prompted extravagant spending on art and building projects in the public sphere; this spending was regarded as honorable on account of its pious or civic character. On the other hand, “splendor” embodied the “private, less regal, equivalent of magnificence” (36), that is, the idea that finely decorated, exotic, and luxury items brought pleasure to the user and prestige to the owner. Renaissance ideals of decorum helped to determine not only the lavishness of an artistic display, but also its materials, ornament, and style. In all things, context was key: what was appropriate for private use could differ markedly from what was suitable for public display, as Cole reminds us in the final section, “The Princely Residence.” In chapter 2, “The Court Artist,” the author continues these themes, but focuses attention on the figure of the artist. Citing such examples as Cosmè Tura, Mantegna, and Filarete, she addresses the relationship between patron and artist, patrons’ expectations, and artists’ status at court. As she explains, the conditions of an artist’s employment could vary greatly, from one of many paid staff to familiaris, or an intimate member of the household.
The remaining chapters of the book present five case studies of the artistic cultures of Italian courts: Naples under Alfonso of Aragon, Urbino under Federico da Montefeltro, Ferrara under Borso and Ercole d’Este, Mantua under Ludovico Gonzaga, and Milan under Ludovico Sforza. Each chapter centers on the person of the prince, his artistic taste, and his use of art as a tool of self-promotion. Because these rulers often earned their fortunes as condottieri, or mercenary generals, there could be a sense of urgency to present an image of learning and refinement. Thus Federico da Montefeltro had a preference for art marked by “clarity, order, dignity” and “intense pragmatism” (108), which he used to bolster his image as a just, virtuous, and sophisticated prince. In contrast, the Este of Ferrara cultivated artistic styles marked by “a poetic, visual and lyrical complexity” (134) and themes of chivalry, pleasure, and amusement. The text concludes with a brief discussion of early sixteenth-century developments, focusing on the Roman works of Michelangelo, Bramante, and Raphael, and the works of Giulio Romano in Mantua. The volume is richly illustrated with high-quality images and followed by a select bibliography.
The content of these chapters is substantially the same as the 1995 edition, making this book indeed a revision and not an entirely new publication. That said, Cole has fulfilled her promise to incorporate more recent scholarship, and each chapter presents varying amounts of new material. Chapter 6 (“Arms and Letters: Urbino under Federico da Montefeltro”), for example, introduces a discussion about the status of Jews in Urbinese society to contextualize the representations of a Jewish moneylender in Paolo Uccello’s Miracle of the Profaned Host, the predella of the Corpus Domini Altarpiece. Similarly, in chapter 7 (“Local Expertise and Foreign Talent: Milan and Pavia under Ludovico ‘il Moro’”), Cole fleshes out the section on Leonardo da Vinci by incorporating the controversial thesis that the animal in the artist’s portrait of Cecilia Gallerani alludes to the sitter’s pregnancy. Cole further acknowledges the recent attribution of a drawing now known as La Bella Principessa to Leonardo. The select bibliography lists many important studies that have appeared in print since the earlier publication.
Cole’s goal of presenting the artistic cultures of five cities in fewer than three hundred pages is an ambitious one, so naturally the book offers a cursory discussion of the artists, works, and themes it explores. On the other hand, the broad scope of the book allows the author to compare the different centers by drawing attention to how the political situation of each city informed the patronage of the prince and the members of his court. Cole has endeavored to present a multifaceted picture of court culture, touching on a range of artistic and literary works, from the writings of humanists, the collection of ancient and contemporary artworks, and the production of works in painting, sculpture, and architecture as well as tapestries and metalwork. The author’s skill in tying the major themes of each prince’s artistic patronage with the specific exigencies of his situation is one of the major strengths of the book.
The book is not without its shortcomings, including a number of factual inaccuracies that, unfortunately, compromise the author’s authority. Already in the preface, Cole refers to the “despotic lords” who are the principal subject of her book erroneously as signorie (that is, seigniorial governments), rather than signori (lords). Elsewhere, the reader encounters various puzzling statements, such as the second sentence of the introduction, which places the rule of the Roman emperor Augustus in the second century and claims that “most of what we now recognize as modern Italy dates back to [his] reign” (11). Augustus, of course, ruled Rome from the late first century BCE to the early first century CE, not the second century, while the phrase “modern Italy” and its connection to Augustus are not explained. More generally, one wishes for greater clarity on how Cole defines “Italian Renaissance courts” and the various titles the signori held. Surely there is a difference between Naples, ruled by a Spanish king, and Mantua, ruled by a marquis who owed allegiance to the Holy Roman Emperor. The book’s introduction offers few clear parameters, and the author even pulls examples from papal Rome and republican Florence to characterize court culture. The inclusion of Florence and Rome may have been intended to provide additional context for the nonspecialist, but these sections tend to dilute Cole’s focus on secular courts subject to dynastic rule. Finally, the book is without notes, not even ones to credit the translators of the book’s many quotes from Italian and Latin, making it difficult for a reader to pursue this material further.
Despite these limitations, the book remains, like its predecessor, a useful, engaging, and attractively produced introduction to the art of the major Italian Renaissance courts. It will be of interest to general readers and undergraduate students seeking an overview of Italian artistic centers beyond Florence, Rome, and Venice. Overall Cole is amply successful in her stated aim “to explore the distinctive uses of art at court, to distil and bring to life the salient motivations behind the various regional cultures and ‘courtly styles,’ [and] to focus on the artists and individuals associated with them” (28). | http://caareviews.org/reviews/3348 |
Make Your Own Braille Alphabet Tubs
Nikki Cochrane is a foster mom at a children’s home in India. Of her twelve kids, five are completely blind (and another has nystagmus and another is blind in one eye).
With very little access to resources, they rely on what they can find online or what they can create themselves. Recently, Nikki worked on a project that focused on alphabet learning for her kids that would be accessible to all the children. Using tactile objects and print/braille blocks, Nikki created alphabet tubs with a ziplock baggy for each letter.
What You’ll Need
- Ziplock bags for each letter
- A print/braille block for each letter (Nikki used these blocks from Plan Toys)
- A tactile object for each letter (ideas below)
- (optional) a card with the name of the object in print and braille for each letter
Need some ideas for those letters? Some of them can be tricky! I always get stuck on I and J. Here’s Nikki’s list:
- Animals (small animal toys)
- Book
- Car
- Diaper
- Elastic
- Feather
- Glue (a glue stick works well)
- Headband
- India (a small map cutout)
- Jump rope
- Key
- Lollipop
- Money
- Necklace
- Oval (shape)
- Paper
- Q-tip
- Rectangle (shape)
- Spoon
- Toothbrush
- Underwear
- Velcro
- Whistle
- X (the shape)
- Yarn
- Zipper
Playing with the Alphabet
Place all the items in their bags and place the bags in a big plastic tub. Help your kids explore the tubs and open each bag to find what’s inside. Talk about the objects and what they represent and make the sound for each letter.
Some fun games to play may be to try to come up with other items that start with each letter, but don’t fit in your bag (like whale for W or House for H) or you could try to line up all your bags in alphabetical order then sing the alphabet song!
Related Posts
Braille and Literacy, Education
11 Word Games to Help Kids With Dyslexia Recognize Sight Words
Children with dyslexia have difficulty remembering words that don't follow conventional rules, such as high-frequency sight words.
Braille and Literacy, Toys
The Best Braille Toys for Kids Who are Blind
Everything from alphabet blocks to raised line coloring pages and activity books to puzzles to card and board games... and so much more! And it's all in braille ready for...
Braille and Literacy
brailleBOT
AlphaBraille created a fun toy that helps children with visual impairment learn the Braille alphabet. 26 puzzle pieces can be inserted into the device and the device plays the matching...
How Are Water Bottles Made?
Water, as a product, comes in many different containers. One of the most common is the plastic bottle. Especially when created from recycled plastics and reclaimed bottles, plastic water bottles are cheap and efficient to produce.
The plastic used to create water bottles is commonly made from oil. The oil is refined into usable plastic in the form of pellets. Bioplastics, made from plants, provide a greener solution to petroleum-based plastics, but at the cost of shelf life. Many bioplastics degrade much faster.
Technicians melt the plastic pellets to form test-tube-like objects called preforms. Each preform is placed in a mould and a tube is inserted into the opening. Using a mix of infrared and thermal heat, the preforms are heated to the melting point. Once hot enough, air is pumped through the tube, expanding the preform to fit the bottle mould. Once it is cooled, the bottle is released from the mould.
Green options for plastic water bottles include reuse and recycling, according to aceinland. Sterilised bottles can be refilled and resold without the cost of manufacture. Recycling plastic water bottles involves shredding old bottles and sterilising the shredded chips.
Welcome to the Rutgers University Libraries. The librarians and staff in the library system are here to help you.
Besides offering various services and resources to support academic course work and research, the Libraries serve an international student body of approximately 4,900 people who come from more than 120 countries around the world.
As an international student studying in another culture and educational system, you may have some difficulties conducting research because our library system and its services are somewhat different from those in your own country. This guide will introduce you to our library system. We hope that you will find the guide useful in your library research and that you have an enjoyable experience using the Libraries.
International Welcome Messages
American Library Systems
Most American academic libraries use an "open stack" system. This means that access to the books and periodicals is not restricted. Librarians can assist you in choosing what to look for, but you will go to the shelves yourself to find what you want. To do that you will need to learn how to use the online catalog to find call numbers and other location information. Because books on the same subject are usually shelved together, you may also browse through the shelves for items you need.
For journal articles, you will need to learn how to search subject indexes to find article citations and full-text articles. In today's American libraries, electronic resources are a very important part of research. More and more indexes, abstracts, and journal articles are available in electronic format and are accessible remotely.
If you have questions or need assistance in using the library, please do not hesitate to ask at the reference desk. Reference librarians are available to help you identify information and can show you, step by step, how to find books and other materials. If the librarian has difficulty understanding your question, please write the question on a piece of paper and show it to him or her.
Campus Libraries
There are twenty-six libraries, centers, and reading rooms on the Rutgers campuses. Below is a list of the locations along with the abbreviations used in QuickSearch and in the law library catalogs. It is good to become familiar with the libraries and librarians in your major field of study. For their hours, phone numbers, and addresses, see Hours, the list of Libraries and Centers and the list of Subject Specialist Librarians.
University-Wide
|Campus / Library Name||Abbreviation|
|RU-Online: The Rutgers Digital Library||RU-ONLINE|
New Brunswick/Piscataway Area
|Campus / Library Name||Abbreviation|
|Alexander Library||ALEXANDER|
|Art Library||ART|
|Annex||ANNEX|
|Carr Library||CARR|
|Chang Science Library||CHANG|
|Douglass Library||DOUGLASS|
|East Asian Library||ALEXANDER EAL|
|Library of Science & Medicine||LSM|
|Margery Somers Foster Center||DOUGLASS|
|Mathematical Sciences and Physics Library||MATH|
|Media Center||MEDIA|
|Robert Wood Johnson||RWJ|
|School of Management & Labor Relations Library||SMLR|
|Special Collections & University Archives||SPCOL/UA|
Newark Area
|Campus / Library Name||Abbreviation|
|Criminal Justice Library||CRIMINAL JUSTICE|
|Dana Library||DANA|
|Institute of Jazz Studies||JAZZ|
|Law Library||NEWARK LAW|
|Smith Library||SMITH|
Camden Area
|Campus / Library Name||Abbreviation|
|Law Library||CAMDEN LAW|
|Robeson Library||CAMDEN|
Library Terminology
As an international student, you may find some terms used in the Rutgers University Libraries unfamiliar. The following will provide you with a list of some common terms and their definitions, which may help you as you conduct library research.
Abstract
An abstract is a concise summary of a periodical article or book. It can also refer to an electronic database or a set of print publications which provide citations and summaries of articles or texts published in periodicals, books or other materials. They can usually be searched by subject, author and/or title.
Barcode
Every library user is assigned a 14-digit number (barcode) for their library account. Barcodes are assigned by the staff at library circulation desks to borrowers when they register for library privileges. Each barcode is unique to the individual borrower and can be found on the back of a university ID card or library card. Your card and barcode are needed for all library borrowing transactions. Every book in the library also has a unique barcode.
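As a toy illustration (not the Libraries' actual validation logic), a barcode in this scheme can be checked for the right shape with a fixed-length digit match:

```python
import re

# Illustrative sketch only: Rutgers library barcodes are 14-digit
# numbers, so a minimal format check is a 14-digit pattern match.
# (The real circulation system may apply further rules.)
def looks_like_library_barcode(s: str) -> bool:
    return re.fullmatch(r"\d{14}", s) is not None

print(looks_like_library_barcode("29030031234567"))  # True: 14 digits
print(looks_like_library_barcode("1234"))            # False: too short
```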
Bibliography
A bibliography is a list of reference materials such as books and articles used for research. It is often located at the end of an article or book. It can also refer to a collection of information sources on a specific topic, such as books and periodical articles that are published as a book.
Blog
Short for weblog, a blog is a type of website where entries are made and displayed in reverse chronological order. Blogs often provide commentary or news on a particular subject and may contain text, images, and links to other webpages or blogs.
Boolean Searching
A method of combining search terms in database searching using Boolean operators: AND, OR and NOT.
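The effect of these operators can be sketched in a few lines of Python (a toy model of a search index, not the Libraries' actual search engine): each term maps to the set of documents containing it, and AND, OR, and NOT intersect, union, or subtract those sets.

```python
# Illustrative sketch: Boolean operators combine sets of matching
# document IDs, much as a database search does.
def boolean_search(index, query_terms, operator="AND"):
    """Combine the document sets for each term with AND, OR, or NOT.

    `index` maps a search term to the set of document IDs containing it.
    With NOT, documents matching the first term but none of the later
    terms are returned.
    """
    sets = [index.get(term, set()) for term in query_terms]
    if not sets:
        return set()
    result = sets[0]
    for s in sets[1:]:
        if operator == "AND":
            result = result & s   # documents matching every term
        elif operator == "OR":
            result = result | s   # documents matching any term
        elif operator == "NOT":
            result = result - s   # exclude documents matching later terms
    return result

# A toy index: which documents mention each term.
index = {
    "library": {1, 2, 3},
    "history": {2, 3, 4},
    "medicine": {3, 5},
}

print(boolean_search(index, ["library", "history"], "AND"))  # {2, 3}
print(boolean_search(index, ["library", "history"], "OR"))   # {1, 2, 3, 4}
print(boolean_search(index, ["library", "medicine"], "NOT")) # {1, 2}
```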
Call Number
A call number consists of a series of letters, numbers or symbols that identifies an individual book or material and shows the order in which the item is stored on a shelf or in a collection of materials. The call number label is usually located on the spine of a book. Most academic libraries in the USA use the Library of Congress classification system in order to determine the call number for each book.
Check Out
In order to borrow a book from the library for a certain period of time, you must take the book to the circulation desk and have it charged out with your university ID or library card.
Circulation Desk
The circulation desk is the place to check out and return library materials, establish your library account and ask questions about your library account and library services.
Citation
A citation is a reference source which usually includes article title, author, publication name, date, volume and pages from journals or books.
Database
A database is a file or collection of bibliographic citations, data, full-text materials, or records of materials stored electronically in a manner that can be retrieved and manipulated.
Due Date
The due date is the date by which library materials on loan should be returned or renewed. If you do not return library materials by the designated due date, you are subject to fines or loss of borrowing privileges.
Folio
A designation given in the catalog to library materials that are large in size and therefore shelved separately.
Hold
Using the "Book Delivery/Recall" button in the , you can initiate delivery of a book to your selected pick up location. It will be kept at the circulation desk for you to pick-up for 14 days.
Holdings
The library collections. These include books, periodicals, microforms, pamphlets, audiovisuals and other resources.
Imprint date
The year of publication of a book as designated on the title page.
Index
A periodical index is a list of bibliographic citations of articles in magazines or journals. It can be used to help find articles on specific topics. A book index is an alphabetical list of important words, phrases and subjects contained within a book along with a list of the associated pages that discuss those terms.
Library Instruction
Library instruction usually consists of a lecture, demonstration and hands-on practice. It is a service provided by librarians to teach students how to use the library's resources efficiently.
Loan Period
This term refers to the length of time library materials may be borrowed.
Manuscript
Handwritten document or book.
Microform
Microform is a storage format with reduced images, as opposed to the electronic or print formats. There are two common kinds of microform: microfiche and microfilm. Microfiche: A 4x6 sheet of plastic film that stores information in a compact form and requires a microfiche reading device in order to be used. Microfilm: A roll of film either 16mm or 35mm that stores patents, periodicals or other documents and requires a reading machine in order to be used.
NetID
All faculty, staff, students and guests are assigned a Rutgers unique identifier known as a NetID, comprised of initials and a unique number (e.g. jqs23). In order to access many of the electronic services available to you at Rutgers, you need to activate your Rutgers NetID. Students and faculty/staff use their NetIDs to gain remote access to the libraries' electronic resources, view electronic reserves, and to request books, articles and Interlibrary Loan materials. Please visit https://netid.rutgers.edu for more information and to activate your Rutgers NetID.
Overdue
The book checked out by you has not been returned or renewed by the due date.
Periodical/Journal
Academic publications with reports on recent studies and/or scholarly essays that are printed on a regular basis, whether monthly, bi-monthly, quarterly, annually, or biannually, are referred to as periodicals or journals.
QuickSearch
QuickSearch is an electronic database listing all the materials such as books and periodicals owned by the Rutgers University Libraries as well as most of the library's electronic holdings (journal articles). Records in the database provide information about these items such as author, title, subject, call number, publication date, location, and availability.
Recall
Recall is a service by which a patron can request a book that has already been checked out by another patron. When the book is returned to the library, it will be held for the requestor, who will be notified. If a book you have borrowed is recalled, you must return it to one of the Rutgers libraries within two weeks.
Reference Collection
The reference collection consists of materials used frequently for general information. It includes encyclopedias, dictionaries, indexes, and other materials. These materials may not be checked out of the library.
Reference Desk
The reference desk is where you receive in-depth assistance from librarians in your library research. The desk is usually located near the reference collections.
Remote Access
Restricted library electronic resources can be used by students, faculty, and staff from an off campus computer. When working from an off-campus location you will need to log in with your Rutgers NetID for access to our electronic resources.
Renew
Renew is a service which allows you to extend the loan period for a book that you have checked out, unless another user has recalled the book. You can renew your books by using the renew feature in your QuickSearch account.
Requests
Refers to materials delivered between Rutgers libraries or through Interlibrary Loan and Article Delivery Services, or non-circulating books requested using the "Item Special Request" button in QuickSearch.
Reserves
Materials set aside by professors for required reading, viewing, or listening by students as part of their coursework. All reserve articles are available electronically, but other materials such as books, videos and CDs can be borrowed by users with an in-library use only restriction.
Stacks
The shelves that hold the circulating library books are referred to as the 'stacks'. A user will need a call number, the number listed both in the book’s record in QuickSearch and on the spine of the book itself, to locate the volume in the stacks.
Library Policies
It is important to be aware of the following library conditions and procedures:
Loan Period: For graduate students, it is one semester; for undergraduates, it is six weeks. If no one else requests the materials, you can continue to renew the loan. If someone else wants the material, you will receive a recall notice and will have two weeks to return the materials or have to pay a fine.
Library Hours: These vary for different libraries. During the fall and spring semesters the Libraries are open longer. During holidays, semester breaks, and summer sessions, hours are shortened. For detailed information, please see the Libraries' Hours web page or call a reference or circulation desk.
Checking Out Books: Please visit the circulation desk to check out books. You will need to present your Rutgers student ID or library card to check out books.
Circulation Services
This is the starting place for students and faculty to use the Rutgers University Libraries. At the circulation desk of all Rutgers libraries, users can register their library barcode, receive introductory materials about libraries policies, check out or return books, pay fines or fees, and learn about useful library services.
Reference Services
The first place you should stop for help with your research is the reference desk. You are invited and encouraged to ask a librarian for help with your assignments, research, and information needs at any one of the reference desks in the Libraries. This service is provided through one-on-one consultation, tutorials, and library instruction. You may also write to "Ask A Librarian," the Libraries' email reference service, or chat with a librarian via our on-line chat function, which are available from the Libraries' website.
The Libraries' Website
You can search QuickSearch, the Libraries' information system and online catalog, and other information resources through the Libraries' website at http://www.libraries.rutgers.edu/. From this address you can access catalogs (including QuickSearch), indexes and databases, full-text electronic journals, subject research guides, and library services.
You can access the Libraries' website from campus computer labs, your home, or your office. To use some resources from off campus you will need to follow the "Connect from Off-campus" instructions provided on the Libraries’ website and login using your Rutgers NetID and password.
QuickSearch (Library Catalog)
Use QuickSearch to find books, periodicals (journals or magazines), and other materials. QuickSearch contains records for items held by the Rutgers University Libraries, including those held on reserve for specific courses and circulation records for all borrowers. Each QuickSearch record provides information on which Rutgers library holds each title as well as call number, location, journal holdings, and whether each copy of a title is currently checked out or is on the shelf.
For some libraries, items acquired prior to 1972 were not automatically included in QuickSearch but are being progressively added to the database. Many government documents published prior to 2002 are not included. If you cannot locate the materials you need in QuickSearch, ask a reference librarian for assistance.
My Library Account
Log in to your QuickSearch account and use your Rutgers NetID and password to review your library account, including checkouts, requests, media bookings, bills, E-ZBorrow & UBorrow requests, and interlibrary loan requests. You can also renew materials you have checked out.
Online Databases (Indexes and Abstracts)
There are many electronic indexes and databases available on the Libraries' website. These databases can help you locate bibliographic information about journal articles, dissertations, government publications, conference papers, and technical reports on specific subjects. Some databases contain or link to full-text materials. Search QuickSearch to locate materials that are cited in an index when the full text is not provided.
Electronic Journals (E-Journals)
More and more journals are available in electronic format and can be accessed on the Internet. You can search for journals by title or ISSN through QuickSearch.
Library Orientations and Classes
Orientation is a brief introduction to the Libraries for new students. Library instruction classes are requested by faculty members to assist students with their research.
Rutgers Delivery Service (RDS), E-ZBorrow, UBorrow and Interlibrary Loan Service (ILS)
Books
Rutgers Delivery Service
You may request library materials from any Rutgers library to be held for you to pick up through the Rutgers Delivery Service, using the Book Delivery and Special Request functions from the book record in QuickSearch.
E-ZBorrow / UBorrow / Interlibrary Loan
Interlibrary Loan Services allow you to obtain materials not owned by the Rutgers University Libraries or that are already checked out to other library users. There are three main types of interlibrary loans at Rutgers: E-ZBorrow, UBorrow, and Interlibrary Loan. Learn more about ILL services.
Articles
You may request electronic copies of articles from journals owned in print format by the Rutgers University Libraries and also from journals not owned by the Rutgers University Libraries. Request articles via Interlibrary Loan.
Reserves
You can search for course reserves through QuickSearch. This lists books, articles, and other course materials put aside at the request of instructors for students taking specified courses. Reserve materials may be in paper or electronic format. Many reserve collections are located behind the circulation desk; others are located in areas designated specially for reserve materials. These materials are for use only in the library and for limited periods. Electronic reserve materials can be accessed from campus computer labs, your home, or your office. You will be prompted to log in with your Rutgers NetID and password for access to electronic reserve documents from any computer, whether on campus or off campus.
How to Find Books
To find books in QuickSearch, you may search by author, title, subject, keyword, or ISBN. For tips on searching visit QuickSearch Help.
- If you already know the title of the book you are looking for, use the "Browse" search and type the entire title or the first few words of the title. Be sure that the search is set to "Browse alphabetically by Title." Articles such as "a" and "the" at the beginning of a title should be left off.
- If you only know the author's name, use the "Browse" search and type all or part of the author's name. Be sure that the search is set to "Browse alphabetically by Author." When searching personal names, type the last name (surname) first. An author can be a person, government agency, corporation, association, or the sponsor of a conference.
- When you want books about a particular subject, use one of the SUBJECT search options. If you have an exact subject heading from the Library of Congress Subject Headings, use the "Browse" search, making sure that the search is set to "Browse alphabetically by Subject." Otherwise use a "Basic" search and select "Subject" from the drop-down menu, or ask for assistance at the reference desk.
- If you do not know the exact author's name, title, or subject heading, use a "Basic" search or try an "Advanced" search. Either option will allow you to combine search terms, for example, an author's last name and a keyword in a title.
- After you display the full record for the book, write down the library where it is located, the complete call number, the sub-location, and the status. If the book is IN-LIBRARY, then you may go to that library and visit the stacks to get the book, or you may choose Book Delivery to have it held for you at a circulation desk. If all copies of a book are checked out by other users, you may use the E-ZBorrow or UBorrow services to request another copy of the book.
- If the Rutgers University Libraries do not own the book you want, you may request the book using the E-ZBorrow or UBorrow services. If the book is not available through E-ZBorrow or UBorrow, you can fill out an interlibrary loan request. For any of these services, go to the Libraries’ homepage and click on the "Services and Tools" and then "Borrow / Access / Request" link.
How to Find Journal Articles
If you already know which journal you need, go to the QuickSearch journal search. If you only have the subject and need to find articles, use the following instructions.
- Ask a reference librarian to recommend the most appropriate index for your topic or select an index from the list of Indexes and Databases available on the Libraries’ website. The Rutgers Libraries have a variety of indexes and abstracts accessible in both electronic and paper format. The best way to search indexes and abstracts is by subject. You may also use the Articles tab in the search area on the Libraries' homepage as a place to begin your search.
- When you find a useful article, write down the complete article citation, which includes the author's name, article title, journal title, volume number, issue number (if there is any), date, and page numbers. If the index is electronic, you can also email or download the citations. Some electronic indexes may also include full-text articles or links to full-text.
- Not all periodicals listed in indexes and abstracts are owned by the Rutgers University Libraries. To find out if the Libraries own a specific journal, go to the Journals tab in the search area on the Libraries’ homepage and search by entering the entire journal title or the first few words of the title. Articles (a, an, the, le, la) at the beginning of a title should be left off.
- If the libraries own the journal in electronic format, you will see an "Online Access" link in the record or you may be able to link directly to the full-text from within an electronic index.
- If the library owns the journal in paper format, you can go directly to the shelves and get it. Journals in most Rutgers libraries are arranged alphabetically by title. You cannot borrow journals, but you may make photocopies or scans of articles. You can also request electronic copies of articles using Interlibrary loan and Article Delivery.
- If the Rutgers University Libraries do not own the journal you want, you can request electronic copies of articles using Interlibrary loan and Article Delivery.
Helpful Tips for Library Research
- Apply for your computer account as soon as you register at Rutgers. If you have questions, contact computing services on your campus.
- Make sure you understand your assignment or project. If you do not, ask your instructor to explain it to you.
- Start your library research early because library materials you need may already be checked out by another user. Allow extra time if you need materials that are not owned by the Rutgers University Libraries.
- Use appropriate indexes and reference materials in your research.
- Keep careful and complete notes for your reference citations including author, title, place of publication, publisher, date of publication, volume, and page numbers. You will need this information for your bibliography.
- Remember that you can access various library resources on the Internet from your home, office, and campus computer labs. To use some resources from off campus you will need to follow the "Connect from Off-Campus" instructions provided on the Libraries' website and login using your Rutgers NetID and password.
- Feel free to ask a reference librarian for assistance, or use the "Ask a Librarian" service. | https://www.libraries.rutgers.edu/services_international_students |
The invention relates to a heat-pipe-type LED front fog lamp and a lighting method. Existing white-light LED automobile lamps rely mainly on large-area aluminum heat sinks or fans for active heat dissipation. Because passive heat dissipation through heat sinks alone cannot conduct the heat generated by the white-light LEDs away quickly enough, the LEDs suffer serious light decay. The heat-pipe-type LED front fog lamp comprises an LED light source (1) welded to a circuit board (2), which is in turn welded to a heat pipe (5). Heat dissipation fins (6) are welded to the heat pipe (5). A reflective mirror (3) is installed on the heat pipe (5), and a matched mirror (4) is welded in front of the reflective mirror (3). The heat-pipe-type LED front fog lamp has a good heat dissipation structure.
A popular live music venue in Birkenhead is closing its doors. But there's no need to worry, as it's moving to a nearby location.
One of the larger music venues in the town, Hotel California is split across five zones to suit as many different tastes as possible.
Close to the Cammell Laird shipyard, this club welcomes a vibrant mix of DJs and live bands. There’s no genre that goes untouched and its listings are usually swarming with the biggest tribute acts on the circuit.
Hotel California is known as Wirral's premier live Rock music concert venue and bar.
The Echo understands the club will close in May and will relocate almost immediately to another premises in Birkenhead. | |
Dave Matthews Band (Oct 4, 2009): Rock superstars the Dave Matthews Band kick off ACL's 35th season with hits and songs from their latest album Big Whiskey and the GrooGrux King.
S35 E02 (Oct 11, 2009)
Ben Harper and Relentless7: Ben Harper debuts his new band, Relentless7, with a rocking set drawn from his album White Lies for Dark Times.
S35 E03 (Oct 18, 2009)
Kenny Chesney: Country music superstar Kenny Chesney hits the ACL stage for a tour through his greatest hits.
S35 E04 (Oct 25, 2009)
Andrew Bird; St. Vincent: Multi-instrumentalist and singer/songwriter Andrew Bird has captured audiences around the world with his unique style. Through amazing vocals, whistling, violin, guitar and glockenspiel, this Chicago-born musician seems to have brought about a new phase of folk and indie rock.
S35 E05 (Nov 1, 2009)
M. Ward / Okkervil River: M. Ward graduates from guest spots to his own headline ACL performance, highlighting his latest LP Hold Time. Austin indie rock favorite Okkervil River follows.
S35 E06 (Nov 8, 2009)
Elvis Costello / Band of Heathens: It’s been five years since Elvis Costello was last on the Austin City Limits stage. Tonight he returns as part of our 35th anniversary with new music that takes the music legend back to basics, combining his witty lyrics and country sounds “with a fitting old-timey feel” (Los Angeles Times).
S35 E07 (Nov 15, 2009)
Willie Nelson and Asleep at the Wheel: Singer-songwriter, author, poet and actor, Willie Nelson rose to national stardom during the outlaw country movement of the 1970s, but has remained both a musical and cultural icon through the decades. Since their inception in 1970, Asleep at the Wheel has won nine Grammy Awards, released more than 20 studio albums and charted more than 20 singles on Billboard’s country charts. Tonight these country legends grace the Austin City Limits stage performing songs from their newest release, Willie and the Wheel.
S35 E08 (Nov 22, 2009)
Pearl Jam: One of the most popular and influential rock bands of the past two decades, Pearl Jam makes their Austin City Limits debut in celebration of their self-released new CD, Backspacer.
S35 E09 (Jan 10, 2010)
Allen Toussaint: Legendary New Orleans songwriter Allen Toussaint hits the ACL stage with songs from his latest LP, The Bright Mississippi, and classic hits like “Southern Nights.”
S35 E10 (Jan 17, 2010)
Mos Def / K’naan: Hip hop conquers the ACL stage with sets from alternative rapper/actor Mos Def, supporting his latest record, The Ecstatic, and Somalian native K’naan, performing songs from his acclaimed LP, Troubadour.
S35 E11 (Jan 24, 2010)
Avett Brothers / Heartless Bastards: Rising roots rock kings the Avett Brothers perform songs from their latest album, I and Love and You. Ohio-to-Austin transplants Heartless Bastards follow with their classic rock ‘n’ roll.
S35 E12 (Jan 31, 2010)
Steve Earle / Kris Kristofferson: Steve Earle pays tribute to his mentor, legendary Texas troubadour Townes Van Zandt. Kris Kristofferson follows with tunes from his latest album.
S35 E13 (Feb 7, 2010)
Esperanza Spalding / Madeleine Peyroux: Singer/composer/bass prodigy Esperanza Spalding debuts on ACL with a mix of jazz, soul and Brazilian pop. Contemporary torch singer Madeleine Peyroux follows in support of her album Bare Bones.
S35 E14 (Feb 14, 2010)
Them Crooked Vultures: Josh Homme from Queens of the Stone Age, Dave Grohl from the Foo Fighters and John Paul Jones from Led Zeppelin combine for high volume rock ‘n’ roll.
The City of Ilorin will be agog today when the first edition of the Governor AbdulRahman AbdulRazak National Cycling Championship is flagged off.
The race, expected to be flagged off by the Deputy Governor, Kayode Alabi, at the gate of Kwara Hotels, will see about 200 male and female cyclists from over 20 states of the federation participating.
While the male cyclists are expected to ride a total distance of 150km around the Ilorin metropolis, the female cyclists will ride 80km.
Meanwhile, Secretary General of the Nigeria Cycling Federation (NCF), Dayo Abulude said that the first four finishers in the male and female categories will take home cash prizes of N200,000, N150,000, N100,000 and N50,000 respectively.
Earlier yesterday, 16 participants in the 3-Day Commissaires course were awarded certificates by the technical director of the federation who represented the president, Bashir Mohammed. | https://gcfrng.com/2019/12/07/ilorin-agog-for-abdulrazak-cycling-championship/ |
In 2001, Eva Pell, Penn State’s Vice President for Research prepared a report for the Trustees on “technology transfer.” In the discussion about why universities should be involved in technology transfer, Pell includes the following account of Bayh-Dole:
Until 1980, the federal government owned about 30,000 patents, many of which had resulted directly from research conducted at universities; and only a small percentage ever became commercialized.
The figure cited was 28,000 patents. But most of these patents arose in defense industry contracting, not university contracting. And there, most of these patents were on inventions that the defense contractors could have owned under federal contracting agreements but chose not to. So this 30,000 figure is inflated. And the “many of which” is simply not true. According to claims made at the time, 5% of these federal patents were licensed for commercial use. However, for biotech inventions, the figure was 23%. For university-managed inventions made with federal support, in the period 1968-78, the rate was about 5%. The government’s licensing rate for biotech inventions was over four times better than the university licensing agents’ rate.
As for the “only a small percentage ever became commercialized”–this is bombast. First, the overall commercial use rate for patented inventions is about 5%. But Pell’s bombast is designed to mislead the trustees (even if Pell merely repeats stuff that she’s been told is true). The point of the federal patent ownership program was not “commercializing” inventions but rather making them available for public use and benefit. Thus, the focus of executive branch patent policy was on using patents to bring inventions “to the point of practical application” and then releasing the invention for all to use. In some federal programs, such as those at the Department of Agriculture, all inventions reviewed by Harbridge House in 1968 (for two study years) achieved practical application and became commercial products. It is no surprise that many defense, space, and atomic energy inventions did not become commercial products. The “market” for these inventions was the federal government. To “commercialize” these inventions is a meaningless claim. Bombast.
On December 12, 1980, the “Patent and Trademark Act Amendments of 1980”, also known as the Bayh-Dole Act, were [sic] enacted to create a uniform patent policy among the many federal agencies that fund research, and to create an environment more likely to foster commercialization of the patents the federal government had seeded through its financial support.
The Kennedy patent policy of 1963, updated by Nixon, was uniform. Unlike Bayh-Dole, however, the Kennedy patent policy was also flexible rather than arbitrary. And from 1968 to 1978, the NIH and NSF operated an Institutional Patent Agreement program under the Kennedy patent policy. The IPA program permitted participating nonprofits to own inventions arising with federal support, subject to various requirements for managing patents on these inventions. Given that the NIH and NSF were the primary federal agencies funding university research–and the DoD allowed contractors to own inventions made in research–universities already had “an environment…likely to foster commercialization.” Bayh-Dole did little to change this environment for universities.
Furthermore, there’s nothing express in Bayh-Dole that focuses on “commercialization”: the standard is “utilization” or “practical application” of inventions made with federal support. While “commercialization” is one way in which utilization or practical application may be achieved, it is not the only way and not the only way recognized by Bayh-Dole. Even in the use reporting provision, the datum to be reported is “first commercial sale or use.”
Finally, there is no evidence to demonstrate that university ownership of inventions made with federal support creates an environment “more favorable” to “commercialization” than federal ownership or the private ownership of inventions. Harbridge House found that the best commercialization rates (practical application, commercial use, or sale) came about when an invention made with federal support was owned by a contractor with experience in the market. The worst rates were those involving contractors without experience (such as nonprofits) and involving licensing rather than ownership. Thus, as far as the Harbridge House study was concerned, Bayh-Dole presented the worst possible combination: ownership by a contractor without experience, who then must grant patent licenses in order for an invention to achieve practical application.
Bayh-Dole enables small businesses and non-profit organizations, including universities, to retain title to materials and products they invent under federal funding.
This, too, is full of misstatement. Bayh-Dole requires federal agencies to use a patent rights clause that permits nonprofits to “retain” title to inventions that the nonprofits have acquired outside the conditions of the federal funding agreement. Bayh-Dole gives universities no special privilege to obtain title to inventions made with federal support. It simply requires federal agencies to use a contracting provision that prevents the agencies from requiring assignment of title by universities if the universities obtain title. “Retain” does not mean “own.” It means “do not have to sign over to the federal government.”
And Bayh-Dole does not deal with “materials and products”–it deals with inventions that are or may be patentable, when owned by a contractor. The phrase “materials and products” makes it appear to the trustees that Bayh-Dole is broader than it is. Non-bombast version: “If a university obtains title to an invention made with federal support, then Bayh-Dole’s standard patent rights clause permits the university to retain title to that invention.”
One more bit of bombast: “they invent.” The antecedent of “they” in Pell’s sentence is “small businesses and non-profit organizations, including universities.” But universities don’t invent. Inventors invent, and federal common law of inventions provides that inventors own their inventions. So Pell’s presentation assumes what isn’t the case–that universities somehow invent and can retain title to these inventions. Penn State cuts inventors entirely out of this discussion of federal patent policy, as if this was the purpose of Bayh-Dole. The Supreme Court ruled that’s not the case.
By placing few restrictions on the universities’ licensing activities, Congress left the success or failure of patent licensing up to the institutions themselves.
This is strange. Congress places the fewest restrictions on inventors, when they retain title. That’s 35 USC 202(d) and 37 CFR 401.9. Next in line is small businesses. That’s 35 USC 202(c)(1)-(6), (8); 37 CFR 401.14(a)(a)-(j), (l). Then we have nonprofits–they have the restrictions at 202(c)(7) and 401.14(a)(k). Those restrictions include prohibiting assignments of inventions except to patent management organizations (and even then the nonprofit obligations must follow the assignment) and restricting the use of licensing income (share with inventors, deduct only expenses for managing federally supported inventions, and use the rest for scientific research or education).
But Bayh-Dole also is part of federal patent law, not federal procurement law. In federal patent law, Bayh-Dole establishes a new category of invention, the “subject” invention, and specifies as patent law a policy that restricts the use of the patent system for subject inventions. Thus, the licensing program of any owner of a patent on a subject invention must conform to the public covenant that runs with any subject invention.
Yet, there’s deeper bombast here. Congress also required contractors to grant to the Government a license to “practice and have practiced” any subject invention–and “practice” means “make, use, or sell.” That is, the public use of any subject invention does not depend only on “universities’ licensing activities” but also on the government’s activities–including “sell” and “have sold.” Furthermore, Congress included “march-in provisions” that aim to limit the nonuse or unreasonable use of inventions by universities and other contractors. Thus, the “success or failure” is not left to “the institutions themselves.” In practice, federal agencies have indeed left universities to their own devices–but Congress did not do so. Deep bombast.
The success of Bayh-Dole in expediting the commercialization of federally funded university patents is reflected in the statistics.
The Kennedy patent policy was focused on “expediting” bringing inventions to the point of practical application. It did this by limiting a contractor’s patent monopoly to three years from the date a patent issued. Any advantage a contractor gained would arise because the contractor got things done within the six years from date of invention. Harbridge House found that 77% of inventions made with federal support by commercial contractors were in use by the time the patent issued.
Prior to 1981, fewer than 250 patents were issued to universities per year. By 1993, almost 1,600 were being issued each year. Of those, nearly eighty percent stemmed from federally funded research.
Bombast. Prior to 1981, patents were largely issued to patent management agents, not to universities. Only a handful of universities took ownership of patents directly–including MIT, UC, Stanford. The rest used agents such as Research Corporation or an affiliated university foundation. Even after Bayh-Dole came into effect, there was still a lag of time for research contracts to be awarded, research to be done, inventions to be made, and three years or so later patents issued–so, perhaps four or five years beyond 1981–so, maybe 1985 or 1986 would show the first patents managed under the Bayh-Dole regime. “Commercialization” of these inventions would take even more time–time to find licensees, to negotiate deals, and time for those licensees to create product. The first outcomes of Bayh-Dole licensing might not be meaningfully realized until about 1990.
I ran numbers at the USPTO web site for patents issued to universities, foundations, and institutes. Let’s talk 1981, since that will be before Bayh-Dole has any effect. There were 548 patents issued to universities, foundations, and institutes. Of these, 146 cited government interest. In 1993, there were 2029 patents issued to universities et al. 585 cited government interest. That’s just under 30% of university et al. patents, not nearly 80%. At this point, Pell is just pulling numbers from her posterior cortex. There’s nothing to indicate the government share of university patenting that Pell cites to the trustees. But the trustees would have nothing to go on except trusting Pell to have her numbers right. If she makes things up, how would they know? So she writes things that sound good, and leads on the trustees. Perhaps she didn’t know any better. Perhaps she was given this information by someone she thought knew something about Bayh-Dole. In any case, it’s bombast.
What is Pell’s goal? Here’s what she had to say:
A final question then, is how will the University benefit from a successful Tech Transfer enterprise? If we look at The University of Wisconsin, we can see that their foundation with its $1 billion endowment got its start over 70 years ago and has been fueled by many successes starting with the discovery of lWarfarin [sic] as I have already said, and synthetic vitamin D.
Lots going on here. First, WARF was started to manage the “synthetic vitamin D.” That is, WARF was started with a meaningful invention already in hand, just as Research Corporation was with the electrostatic precipitator. But Penn State apparently starts its research foundation before it has that first meaningful invention. Thus, it has to subsidize the search for a paying invention rather than have a paying invention at the start. This is a fundamental difference in practice. If one has a successful invention, then one has a good chance of creating a helpful patent management practice around it. But it is not the case that if one creates a patent management practice, that practice will somehow speed up the search for a successful invention. It’s a Bugblatter Beast of Traal kind of moment.
Second, Pell makes it sound like WARF had “many successes” to create its “$1 billion endowment.” But that’s not the case. As Blumenthal, Epstein, and Maxwell point out in a 1986 article, only a few biomedical inventions account for WARF’s licensing “successes.” Here’s an excerpt from a history of the University of Wisconsin by Edmund Cronin and John Jenkins:
So, 42 profitable inventions, or about a 17% commercialization rate. Keep in mind generating money from a license does not mean that the licensed invention has been made into a commercial product or has been used commercially–a company can pay for a license and fail to make or use the invention. The important bit is that WARF had three big hit inventions in its first 50 years–about 1% of its patents–and 12 modest successes (including the big hits) or about 5%. The two “biggest winners” account for $19m in revenue.
How then did WARF get to $1b in its “endowment”? Investing in common stocks and emerging growth companies. Here’s what Cronin and Jenkins have to say:
The early royalties for vitamin D synthesis (it wasn’t synthetic vitamin D, it was vitamin D created by UV radiation, to get around federal regulations regarding adding things to milk) provided funds to invest in the stock market, and it was those investments that built WARF’s “endowment.” If Penn State wanted to follow WARF’s path, the university would aim to hit it rich in the stock market. Funny, but Bayh-Dole prevents such an effort for inventions made with federal support. Nonprofits are forbidden from using income from licensing for stock investments–the money has to be used for “scientific research or education.” More Bugblatter Beast thinking, here.
But here’s Pell:
The royalties from those first discoveries, and others since that time have generated the endowment, which is now being used to seed research and fund graduate education. This certainly is our long-term goal as well.
She seems to think that WARF’s endowment is made from royalties, not from stock investments. Thus, she leads the trustees to think that WARF’s “success” (many lucrative licenses, amounting to a billion dollars in 70 years) can be replicated as a “long-term goal.” Here’s what David Blumenthal et al. say about the WARF situation in “Commercializing University Research”:
A close examination of the experience of this foundation reveals that, with good fortune and good management, a patenting and licensing organization can enjoy financial success. It also indicates, however, that the success experienced by WARF would be difficult to achieve today and that such efforts may yield lower financial returns than university administrators expect.
That would be, a 1% big hit licensing rate and 5% any sort of profit from licensing rate would be difficult to repeat now–and that was Blumenthal in 1986. The University of California recently reported that its commercialization rate–anything becoming a commercial product–was 1 in 200, or 0.5%.
Pell argues that there is a “gap” between university inventions and licensing–a version of the “funding gap” or “valley of death” claim to explain why university patent licensing is so horribly ineffective. The patents, it turns out, don’t provide sufficient incentive for a single company or investor to step in and “develop” inventions as products. Thus, money from somewhere else must be “invested” in the inventions to put them in a condition to be licensed as a monopoly to support commercial development and sales. This is all very strange, though it is blithely repeated by university licensing folks. It appears that the patents on university inventions preclude further research development rather than provide incentives that speed that development. To mitigate the problem, administrators then create a secondary market of investment on the prospect that eventually some monopolist will take an exclusive license and the secondary investors who provide the “gap funds” will see a return on their investment when the university makes money. In essence, that’s what the paper startup companies created by universities do–create a betting pool for investors hoping that they can push an invention toward commercial viability and make good money doing that.
We might pause to consider the huge difference in approach between the Kennedy patent policy focus on getting an invention “to the point of practical application” and current university licensing practice, enabled by Bayh-Dole. The Kennedy patent policy allowed federal agencies to use patents to give contractors an incentive to develop inventions for public use quickly–within three years of a patent issuing. Once an invention had been developed to the point of practical application, plus three years of patent monopoly, the invention was to be released via non-exclusive licensing. The focus, then, was on using the patent system to speed the development of an invention to this “point of practical application.” By contrast, university “gap” practice aims to develop an invention sufficiently that it can be assigned to a monopolist for the remaining life of the patent, to extract as much financial value from the patent position as possible. Utterly unlike the Kennedy practice, which limited the duration of the patent monopoly as an incentive to get things developed quickly, the Bayh-Dole enabled practice sells off a financial interest in maintaining the patent monopoly for its entire term.
For Penn State, in 2001, patent licensing under Bayh-Dole wasn’t much working. Inventions had to be developed on the university side rather than by a licensee (and certainly not by multiple licensees). Investment funds here allow “investors” to buy in to the future “commercialization” which likely will never happen, given the reported outcomes at places like Stanford and UC.
What Pell leaves to handwaving is how anything the university does by patenting with a fixation on commercialization will benefit the public or industry. The idea of creating an “endowment” like WARF’s is sketchy enough, starting without a big hit and relying apparently on federally supported inventions, whose licensing income cannot be used to speculate in the stock market to build the endowment.
What would Pell’s argument to the trustees have been if she had reported Bayh-Dole accurately and had a clue about WARF? Instead of claiming there was a “federal mandate” to “transfer technology,” where would she find that mandate? Instead of claiming that Bayh-Dole had done anything beneficial, it would be just another federal bungle of regulation to be navigated. Instead of claiming WARF had many successes, she would point out that WARF had just a handful (Stanford’s experience is similar). Rather than trying to create the appearance of a system to manufacture licenseable inventions, Pell might have discussed how to plan for finding a “big hit” invention–by directing inventors to invention management resources, by operating in a voluntary invention ownership policy, by being selective in what inventions to take on, and by working to license non-exclusively–all things that characterized the WARF experience. Instead, Pell uses a fantasy and bombast to justify what Penn State is doing.
Perhaps there is some sense in what Penn State was trying in 2001. But what then would be the non-fantasy justification for the Penn State technology transfer program?
FIELD OF THE INVENTION
DESCRIPTION OF THE PRIOR ART
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIRST EMBODIMENT
SECOND EMBODIMENT
The present invention relates to a semiconductor memory device and, more particularly, to package and test technique of a semiconductor memory device.
Recently, the main trend in technical fields related to semiconductor memory devices has shifted from integration density to operating speed. Therefore, high-speed synchronous memory devices such as the double data rate synchronous dynamic random access memory (DDR SDRAM) and RAMBUS DRAM have drawn attention. A synchronous memory device is a memory operating in synchronization with an external system clock, and among DRAMs the SDRAM has been the mainstream of the commercially available memory market. In input/output operations, the SDRAM performs data access once per clock, in synchronization with rising edges of the clock. On the other hand, a high-speed synchronous memory device such as the DDR SDRAM operates in synchronization with falling edges as well as rising edges of the clock, so that data access can be performed twice per clock.
The DRAM products that are being manufactured have X4/X8/X16 bandwidths. In other words, each DRAM product can have different pin arrangements and wirings between pins and data pads in the DRAM product according to the bandwidth.
FIG. 1 is a diagram showing a pin arrangement of conventional X4 and X16 SDRAMs (54 pins).
Referring to FIG. 1, an X16 SDRAM includes data I/O pins DQ0 to DQ15, address pins A0 to A12, bank address pins BA0 and BA1, power pins VDD, VSS, VDDQ and VSSQ, data mask pins LDQM and UDQM, command pins /WE, /CAS, /RAS and /CS, a clock pin CK, and a clock enable pin CKE, and each of them is wire-bonded with pads of a die via lead frames. In the case of the X16 SDRAM, all 16 DQ pins are used, and only one pin among the 54 pins is a no-connection (NC).
Meanwhile, since an X4 SDRAM uses only 4 DQ pins (i.e., DQ0, DQ1, DQ2 and DQ3), the other 12 DQ pins are in the no-connection state. Since the lower data mask pin LDQM among the data mask pins LDQM and UDQM also remains in the NC state, a total of 14 pins among the 54 pins remain in the NC state.
Since data mask signals are controlled by bit unit, one data mask pin (DQM) is used in the X4 or X8 SDRAM and two data mask pins (LDQM, UDQM) are used in the X16 SDRAM.
FIG. 2 is a diagram showing a pin arrangement of conventional X4/X8/X16 DDR SDRAMs (66 pins).
Referring to FIG. 2, the pin arrangement of the DDR SDRAM is almost similar to that of the SDRAM except that the DDR SDRAM uses data strobe pins LDQS, UDQS and DQS, a reference voltage pin VREF, and a clock bar pin /CK. In other words, the X16 DDR SDRAM uses 16 DQ pins and the X8 DDR SDRAM uses 8 DQ pins. The X4 DDR SDRAM uses 4 DQ pins.
While the X16 DDR SDRAM uses two bonded data mask pins LDM and UDM, the X4 or X8 DDR SDRAM does not use the lower data mask pin LDM, which remains in the NC state. In addition, the X4 or X8 DDR SDRAM uses one data mask pin DM. While the X16 DDR SDRAM uses two bonded data strobe pins LDQS and UDQS, the X4 or X8 DDR SDRAM does not use the lower strobe pin LDQS, which remains in the NC state, so that only one data strobe pin DQS is used.
As shown in FIGS. 1 and 2, all the semiconductor memory devices have respectively different pin arrangements and wirings according to bandwidths.
Meanwhile, the integration density of semiconductor memory devices has increased, and tens of millions of cells are integrated within one memory chip. As the number of memory cells increases, it takes much time to test whether the memory cells are normal or defective. In this package test, the package test time as well as the accuracy of test results must be considered.
To meet these demands in view of the package test time, a parallel test which can perform multi-bit access at the same time is suggested. However, the parallel test has a disadvantage in that differences between data paths or power noise may not be reflected in the test and may therefore affect test reliability.
Accordingly, in order to more accurately check the characteristics of a product, the non-compression method, whose test time is long, must be used. The following description is made on the assumption of the non-compression method.
FIG. 3 is a conventional wire bonding diagram according to package options.
Referring to FIG. 3, in the case of an X4 product 100, a package option pad (PAD X4) 101 is wire-bonded with a VDD pin and another package option pad (PAD X8) 102 is wire-bonded with a VSS pin. In FIG. 3, dark portions represent pads wire-bonded with package leads, and bright portions represent floating states. Meanwhile, in the case of an X8 product 110, a package option pad (PAD X4) 111 is wire-bonded with a VSS pin, and another package option pad (PAD X8) 112 is wire-bonded with a VDD pin. In the case of an X16 product 120, the package option pads (PAD X4) 121 and (PAD X8) 122 are wire-bonded with the VSS pin.
FIG. 4 is a circuit diagram of a conventional package option signal generation block.
Referring to FIG. 4, the VDD or VSS level applied to the package option pads PAD X4 and PAD X8 is buffered through buffer units 130 and 140 and output as package option signals sX4 and sX8. Here, each of the buffer units 130 and 140 is provided with two inverters.
The following Table 1 is a package option table of the operation bandwidth according to the wire bonding.

TABLE 1

            X4     X8     X16
  PAD X4    VDD    VSS    VSS
  PAD X8    VSS    VDD    VSS
  SX4       H      L      L
  SX8       L      H      L
Referring to Table 1, if the package option signals sX4 and sX8 are a logic high (H) level and a logic low (L) level, respectively, the corresponding chip operates as X4. If the package option signals sX4 and sX8 are a logic low (L) level and a logic high (H) level, respectively, the corresponding chip operates as X8. If the package option signals sX4 and sX8 are both a logic low (L) level, the corresponding chip operates as X16.
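The decode described by Table 1 can be modeled as a small truth-table lookup. The following is an illustrative sketch, not the patent's circuit; the function and signal names are assumptions:

```python
def decode_bandwidth(sx4: bool, sx8: bool) -> str:
    """Map the package option signals of Table 1 to an operating bandwidth.

    sx4/sx8 model the buffered levels derived from PAD X4/PAD X8:
    H/L -> X4, L/H -> X8, L/L -> X16.
    """
    if sx4 and not sx8:
        return "X4"
    if sx8 and not sx4:
        return "X8"
    if not sx4 and not sx8:
        return "X16"
    # Both signals high is not a valid package option in Table 1.
    raise ValueError("sX4 and sX8 must not both be high")
```

For example, `decode_bandwidth(True, False)` returns `"X4"`, matching the first column of Table 1.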
The following Table 2 is an address scramble of a conventional SDRAM (DDR SDRAM).

TABLE 2

  ADDRESS      A0  A1  A2  A3  A4  A5  A6  A7  A8  A9  A11  A12
  X4 PACKAGE   Y0  Y1  Y2  Y3  Y4  Y5  Y6  Y7  Y8  Y9  Y11  Y12
  X8 PACKAGE   Y0  Y1  Y2  Y3  Y4  Y5  Y6  Y7  Y8  Y9  Y11  NC
  X16 PACKAGE  Y0  Y1  Y2  Y3  Y4  Y5  Y6  Y7  Y8  Y9  NC   NC
Referring to Table 2, in the case of the X16 package, 10 Y addresses (column addresses) Y0 to Y9 are sequentially counted with respect to one word line. All the cells connected to the word line can be screened by performing the test 1024 times. At this time, 16 data are inputted/outputted through the bonded pads. In addition, in the case of the X8 package, 11 Y addresses Y0 to Y11 are sequentially counted with respect to one word line. All the cells connected to the word line can be screened by performing the test 2048 times. At this time, 8 data are inputted/outputted through the bonded pads, so that the test takes twice as long as in the X16 package. In the case of the X4 package, 12 Y addresses Y0 to Y12 are sequentially counted with respect to one word line. All the cells connected to the word line can be screened by performing the test 4096 times. At this time, 4 data are inputted/outputted through the bonded pads, so that the test takes four times as long as in the X16 package. In other words, as the number of bonded DQ pads relative to the number of physical DQ pads becomes smaller, the number of data inputted/outputted at one time is reduced, so that the entire test time is increased.
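The relation between bonded DQ width and test count above is simply a power of two: each halving of the DQ width doubles the number of column accesses per word line. A sketch of the arithmetic, with illustrative names and defaults taken from Table 2:

```python
def column_test_counts(max_width: int = 16, base_addresses: int = 10) -> dict:
    """Column accesses needed to screen one word line per package option.

    The X16 package counts 2**10 = 1024 column addresses; each halving
    of the bonded DQ width doubles the count (Table 2).
    """
    counts = {}
    width, n = max_width, base_addresses
    while width >= 4:
        counts[f"X{width}"] = 2 ** n  # e.g. X16 -> 1024 accesses
        width //= 2
        n += 1
    return counts
```

Calling `column_test_counts()` yields 1024 accesses for X16, 2048 for X8, and 4096 for X4, matching the counts in the paragraph above.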
According to the above-described prior art, once the wiring with respect to the package option pads is completed, the test can be performed by using only the one package option corresponding to the wiring state, in a test mode operation as well as a normal mode operation. Therefore, the X8 or X4 package option needs a long test time.
Meanwhile, in another aspect, if only a test with respect to the one package option determined by the wire bonding of the package option pads is performed, it is difficult to detect failures according to changes of bandwidth. Therefore, in many cases the test is performed with respect to other package options as well as the corresponding package option. In particular, in the case of a product bonded with the X4 or X8 package, since some of the DQ pins are in the NC state, it is difficult to test the package characteristic of an upper bandwidth. However, in the case of products bonded with the X16 package, it is possible to test the characteristic of the bandwidth of the X8 or X4 package.
When the characteristics of the products bonded with the X16 package are tested, in order to test the X4 or X8 package characteristic, the wiring with respect to the package option pads must be modified. In other words, after testing the X8 package characteristic, the wiring is modified again and then the X4 package characteristic is tested. In this case, since wiring modifications corresponding to the respective package options are needed, there is a problem that the packaging cost and test time are increased.
It is, therefore, an object of the present invention to provide a semiconductor memory device capable of performing a package test with a bandwidth other than the default bandwidth, without any wiring modification with respect to the package option pads.
In accordance with an aspect of the present invention, there is provided a semiconductor memory device which comprises: at least one package option pad wire-bonded in a default package option; a buffer control signal generation means for generating a buffer control signal; and a buffering means for buffering a signal applied to the package option pad in a normal mode in response to the buffer control signal and outputting the buffered signal as a package option signal, blocking the signal applied to the package option pad in a test mode, and outputting a signal corresponding to package option pads except for the default package option as the package option signal.
In accordance with another aspect of the present invention, there is provided a semiconductor memory device which comprises: first and second package option pads wire-bonded in a default package option; a buffer control signal generation means for generating a buffer control signal; a first buffering means for buffering a signal applied to the first package option pad in a normal mode in response to the buffer control signal and outputting the buffered signal as a first package option signal, blocking the signal applied to the first package option pad in a test mode, and outputting a signal corresponding to package options except for the default package option as the first package option signal; and a second buffering means for buffering a signal applied to the second package option pad in a normal mode in response to the buffer control signal and outputting the buffered signal as a second package option signal, blocking the signal applied to the second package option pad in a test mode, and outputting a signal corresponding to package options except for the default package option as the second package option signal.
In accordance with further another aspect of the present invention, there is provided a semiconductor memory device which comprises: at least one package option pad wire-bonded in a default package option; a buffer control signal generation means for generating a buffer control signal; a buffering means for buffering signals applied to the package option pad; and a switching means for transferring an output of the buffering means and a signal corresponding to package options except for the default package option in response to the buffer control signal as a package option signal.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to attached drawings.
FIG. 5 is a diagram of a wire bonding structure according to package option in accordance with an embodiment of the present invention.
Referring to FIG. 5, in the case of an X4 product 200, a package option pad (PAD X4) 201 is wire-bonded with a VDD pin and another package option pad (PAD X8) 202 is wire-bonded with a VSS pin. Meanwhile, in the case of an X8 product 210, a package option pad (PAD X4) 211 is wire-bonded with a VSS pin, and another package option pad (PAD X8) 212 is wire-bonded with a VDD pin. In the case of an X16 product 220, the package option pads (PAD X4) 221 and (PAD X8) 222 are wire-bonded with the VSS pin.
In the wire bonding structure applied to the present invention, the structure of the package option pads and the applied signals are the same as in the prior art shown in FIG. 3. However, the present invention has the same DQ pin wire bonding structure as the X16 product 220 having the maximum bandwidth, without regard to whether it is the X4 product 200 or the X8 product 210. In other words, all the DQ pins are wire-bonded without regard to the package options.
FIG. 6 is a block diagram of a package option signal generation circuit in accordance with an embodiment of the present invention.
Referring to FIG. 6, the package option signal generation circuit in accordance with the present invention includes: at least one package option pad 60 wire-bonded in a default package option; a buffer control signal generation unit 64 for generating a buffer control signal; and a buffer unit 62 for buffering a signal applied to the package option pad 60 in response to the buffer control signal and outputting the buffered signal, or blocking the signal applied to the package option pad 60 and outputting a signal corresponding to package options except for the default package option as the package option signal. Here, the buffer control signal generation unit 64 is a test mode signal generation circuit using a mode register set.
The buffer control signal is disabled during a normal mode operation so that the buffer unit 62 buffers the signal applied to the package option pad 60 via a bonding wire and outputs the buffered signal as the package option signal. In other words, during the normal mode operation, the semiconductor memory device operates with the bandwidth corresponding to the default package option. Meanwhile, during a test mode operation, the buffer control signal is enabled so that the buffer unit 62 blocks the signal inputted from the package option pad 60 and outputs the package option signal corresponding to a package option except for the default package option. In other words, during the test mode operation, the semiconductor memory device operates with a bandwidth other than the default bandwidth. At this time, in case where the buffer control signal generation unit 64 outputs one buffer control signal, only one bandwidth can be selected during the test mode. On the contrary, in case where the buffer control signal generation unit 64 outputs two or more buffer control signals, it is possible to perform the test with respect to a plurality of bandwidths during the test mode.
In the first embodiment of the present invention, two package option pads PAD X4 and PAD X8 are used. There is proposed a circuit which selectively outputs package option signals sX4 and sX8 according to the operation modes through a logic combination of the signals applied to the two package option pads PAD X4 and PAD X8 from the buffer unit 62 of FIG. 6 and buffer control signals enX8 and enX16.
FIG. 7 is a first exemplary circuit diagram of the buffer unit 62 in accordance with the first embodiment of the present invention.
Referring to FIG. 7, the buffer unit 62 includes: a first buffer unit 230 for buffering a signal applied to the package option pad PAD X4, which is wire-bonded according to the package options, in the normal mode in response to the buffer control signal enX16 to output the buffered signal as a package option signal sX4, and for outputting in the test mode the PAD X4 option signal corresponding to the maximum bandwidth (i.e., the X16 package) as the package option signal sX4; and a second buffer unit 240 for buffering a signal applied to the package option pad PAD X8 in the normal mode in response to the buffer control signal enX16 to output the buffered signal as the package option signal sX8, and for outputting in the test mode the PAD X8 option signal corresponding to the maximum bandwidth (i.e., the X16 package) as the package option signal sX8. Meanwhile, a mode register set (MRS) control circuit 250 is contained in the buffer control signal generation unit 64 of FIG. 6. Here, it is assumed that the buffer control signal enX16 is a high active signal.
Meanwhile, the first buffer 230 includes: an inverter INV1 receiving the buffer control signal enX16; a NAND gate NAND1 receiving an output of the inverter INV1 and the signal applied to the package option pad PAD X4; and an inverter INV2 receiving an output of the NAND gate NAND1 to output the package option signal sX4. The second buffer 240 includes: an inverter INV3 receiving the buffer control signal enX16; a NAND gate NAND2 receiving an output of the inverter INV3 and the signal applied to the package option pad PAD X8; and an inverter INV4 receiving an output of the NAND gate NAND2 to output the package option signal sX8.
Hereinafter, an operation of the semiconductor memory device with the circuit of FIG. 7 will be described in detail.
In case of a default X4 package in which the package option pads PAD X4 and PAD X8 are respectively bonded with the VDD pin and the VSS pin, since the buffer control signal enX16 is a logic low level in the normal mode, the NAND gates NAND1 and NAND2 operate like an inverter with respect to the signals applied to the package option pads PAD X4 and PAD X8, so that the package option signals sX4 and sX8 are a logic high (H) level and a logic low (L) level, respectively. As a result, the corresponding chip operates as the X4. On the other hand, in the test mode, since the buffer control signal enX16 is enabled to a logic high (H) level, the NAND gates NAND1 and NAND2 block the signals applied to the package option pads PAD X4 and PAD X8 and always output a logic high level. Therefore, all of the package option signals sX4 and sX8 are a logic low (L) level, so that the corresponding chip operates as the X16.
In case of a default X8 package in which the package option pads PAD X4 and PAD X8 are respectively bonded with the VSS pin and the VDD pin, since the buffer control signal enX16 is a logic low (L) level in the normal mode, the NAND gates NAND1 and NAND2 operate like an inverter with respect to the signals applied to the package option pads PAD X4 and PAD X8, so that the package option signals sX4 and sX8 are a logic low (L) level and a logic high (H) level, respectively. As a result, the corresponding chip operates as the X8. On the other hand, in the test mode, since the buffer control signal enX16 is enabled to a logic high (H) level, the NAND gates NAND1 and NAND2 block the signals applied to the package option pads PAD X4 and PAD X8 and always output a logic high level. Therefore, all of the package option signals sX4 and sX8 are a logic low (L) level, so that the corresponding chip operates as the X16.
In case of a default X16 package in which all of the package option pads PAD X4 and PAD X8 are bonded with the VSS pin, since the buffer control signal enX16 is a logic low level in the normal mode, the NAND gates NAND1 and NAND2 operate like an inverter with respect to the signals applied to the package option pads PAD X4 and PAD X8, so that all of the package option signals sX4 and sX8 are a logic low (L) level. As a result, the corresponding chip operates as the X16. On the other hand, in the test mode, since the buffer control signal enX16 is enabled to a logic high (H) level, the NAND gates NAND1 and NAND2 block the signals applied to the package option pads PAD X4 and PAD X8 and always output a logic high level. Therefore, all of the package option signals sX4 and sX8 are a logic low (L) level, so that the corresponding chip operates as the X16.
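The three default-package cases above can be checked with a small truth-table sketch of the FIG. 7 logic. The function and signal names below are illustrative, not from the patent; 1 stands for a logic high (VDD bonding) and 0 for a logic low (VSS bonding).

```python
def fig7_buffer(pad_x4, pad_x8, en_x16):
    """Model of FIG. 7: INV1/INV3 invert enX16, NAND1/NAND2 gate the pad
    signals, and INV2/INV4 restore polarity. Returns (sX4, sX8)."""
    nand1 = not ((not en_x16) and pad_x4)
    nand2 = not ((not en_x16) and pad_x8)
    return int(not nand1), int(not nand2)

# Normal mode (enX16 = L): the signals follow the pad bonding (table 3).
assert fig7_buffer(1, 0, 0) == (1, 0)   # default X4 package
assert fig7_buffer(0, 1, 0) == (0, 1)   # default X8 package
assert fig7_buffer(0, 0, 0) == (0, 0)   # default X16 package

# Test mode (enX16 = H): the pads are blocked and every package acts as X16.
for pads in ((1, 0), (0, 1), (0, 0)):
    assert fig7_buffer(pads[0], pads[1], 1) == (0, 0)
```

Exhausting the inputs this way reproduces the normal-mode and test-mode columns of table 3 below.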
The following table 3 is an operation table of the operation bandwidth in the normal mode and the test mode according to the package option (in case of using the enX16).
TABLE 3

                  NORMAL MODE               TEST MODE (enX16 "H")
PACKAGE OPTION    X4     X8     X16         X4     X8     X16
PAD X4            VDD    VSS    VSS         VDD    VSS    VSS
PAD X8            VSS    VDD    VSS         VSS    VDD    VSS
sX4               H      L      L           L      L      L
sX8               L      H      L           L      L      L
OPERATION
BANDWIDTH         X4     X8     X16         X16    X16    X16
Referring to the table 3, in case of the normal mode, the operation bandwidth of the corresponding chip is determined according to the bonding state of the package option pads PAD X4 and PAD X8. However, in case of the test mode, the corresponding chip operates as the X16 without regard to the bonding state of the package option pads PAD X4 and PAD X8.
The following table 4 is an address scramble of an SDRAM (DDR SDRAM) in the test mode in accordance with the circuit configuration of FIG. 7.
TABLE 4

ADDRESS       A0   A1   A2   A3   A4   A5   A6   A7   A8   A9   A11  A12
X4 PACKAGE    Y0   Y1   Y2   Y3   Y4   Y5   Y6   Y7   Y8   Y9   NC   NC
X8 PACKAGE    Y0   Y1   Y2   Y3   Y4   Y5   Y6   Y7   Y8   Y9   NC   NC
X16 PACKAGE   Y0   Y1   Y2   Y3   Y4   Y5   Y6   Y7   Y8   Y9   NC   NC
In the normal mode, the address scramble is the same as that in the table 2.
However, in the test mode, since all of the X4/X8/X16 packages input/output 16 data via the bonded pads, the 10 Y addresses Y0 to Y9 are sequentially counted with respect to one word line. If the test is performed 1024 times, the entire cells connected to the word line can be screened. Therefore, in the current maximum bandwidth (i.e., in case of the X16 product), the test time is not different from the prior art. However, in case of the X8 product, since the entire cells connected to one word line can be screened by performing the test 1024 times, the test time can be reduced to ½ of the prior art. In addition, in case of the X4 product, the test time can be reduced to ¼ of the prior art.
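The claimed reductions follow from simple arithmetic. The sketch below assumes, as the passage implies, that one word line spans 1024 Y addresses × 16 DQ cells; it is only a back-of-the-envelope check, not part of the patent.

```python
# Assumption (from the passage): 1024 Y addresses, 16 DQs in the X16 test mode.
CELLS_PER_WORD_LINE = 1024 * 16

def read_cycles(bandwidth):
    """Read cycles needed to screen every cell on one word line."""
    return CELLS_PER_WORD_LINE // bandwidth

assert read_cycles(16) == 1024                  # X16: same as the prior art
assert read_cycles(8) == 2 * read_cycles(16)    # X8 tested as X16: 1/2 the time
assert read_cycles(4) == 4 * read_cycles(16)    # X4 tested as X16: 1/4 the time
```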
FIG. 8 is a second exemplary circuit diagram of the buffer unit 62 in accordance with the first embodiment of the present invention.
A difference between FIG. 8 and FIG. 7 is the configurations of the first and second buffer units 430 and 440. The first buffer unit 430 includes: an inverter INV5 receiving a signal applied to the package option pad PAD X4; and a NOR gate NOR1 receiving the buffer control signal enX16 outputted from the MRS control circuit 450 and an output of the inverter INV5 to output the package option signal sX4. The second buffer unit 440 includes: an inverter INV6 receiving a signal applied to the package option pad PAD X8; and a NOR gate NOR2 receiving the buffer control signal enX16 outputted from the MRS control circuit 450 and an output of the inverter INV6 to output the package option signal sX8.
Although the first and second buffer units 430 and 440 are implemented using the NOR gates, the buffer units operate in the same manner as those of FIG. 7, so that the operation table is also the same as the table 3. In other words, since the buffer control signal enX16 is a logic low level in the normal mode, the NOR gates NOR1 and NOR2 operate like an inverter so that the package option signals sX4 and sX8 are determined according to the bonding state of the package option pads PAD X4 and PAD X8. On the other hand, in the test mode, since the buffer control signal enX16 is enabled to a logic high (H) level, the NOR gates NOR1 and NOR2 block the signals applied to the package option pads PAD X4 and PAD X8. Therefore, all of the package option signals sX4 and sX8 are a logic low (L) level, so that the corresponding chip operates as the X16.
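That the NOR form of FIG. 8 matches the NAND form of FIG. 7 is just De Morgan's law. A quick exhaustive check (illustrative names; 1 = H, 0 = L):

```python
def nand_form(pad, en):
    """FIG. 7 style path for one signal: INV -> NAND -> INV."""
    return int(not (not ((not en) and pad)))

def nor_form(pad, en):
    """FIG. 8 style path: INV on the pad signal, then NOR with the enable."""
    return int(not (en or (not pad)))

# Both reduce to (not en) and pad for every input combination.
for pad in (0, 1):
    for en in (0, 1):
        assert nand_form(pad, en) == nor_form(pad, en)
```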
FIG. 9 is a third exemplary circuit diagram of the buffer unit 62 in accordance with the first embodiment of the present invention.
FIG. 9 illustrates the case of outputting the buffer control signal enX8 for selecting the X8 option in the test mode. A first buffer unit 530 includes: an inverter INV7 receiving the buffer control signal enX8; a NAND gate NAND3 receiving an output of the inverter INV7 and the signal applied to the package option pad PAD X4; and an inverter INV8 receiving an output of the NAND gate NAND3 to output the package option signal sX4. A second buffer unit 540 includes: an inverter INV9 receiving the signal applied to the package option pad PAD X8; an inverter INV10 receiving the buffer control signal enX8; and a NAND gate NAND4 receiving outputs of the inverters INV9 and INV10 to output the package option signal sX8.
It is assumed that the package option pads PAD X4 and PAD X8 are respectively bonded with the VDD pin and the VSS pin so that the corresponding chip operates as the default X4. Since the buffer control signal enX8 is a logic low (L) level in the normal mode, the package option signals sX4 and sX8 are respectively a logic high (H) level and a logic low (L) level, so that the corresponding chip operates as the X4 package. Meanwhile, since the buffer control signal enX8 is a logic high (H) level in the test mode, the package option signals sX4 and sX8 are respectively a logic low (L) level and a logic high (H) level, so that the corresponding chip operates as the X8 package.
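The enX8 variant can be sketched the same way (illustrative names, not from the patent); note that sX8 is forced high, not low, in this test mode:

```python
def fig9_buffer(pad_x4, pad_x8, en_x8):
    """Model of FIG. 9. Returns (sX4, sX8)."""
    s_x4 = int((not en_x8) and pad_x4)               # INV7 -> NAND3 -> INV8
    s_x8 = int(not ((not pad_x8) and (not en_x8)))   # INV9, INV10 -> NAND4
    return s_x4, s_x8

assert fig9_buffer(1, 0, 0) == (1, 0)   # normal mode: default X4 package
assert fig9_buffer(1, 0, 1) == (0, 1)   # test mode (enX8 = H): acts as X8
```

Forcing sX4 low and sX8 high in the test mode is exactly the combination that selects the X8 bandwidth in table 5 below.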
The following table 5 is an operation table of the operation bandwidth in the normal mode and the test mode according to the package option (in case of using the enX8).
TABLE 5

                  NORMAL MODE       TEST MODE (enX8 "H")
PACKAGE OPTION    X4     X8         X4     X8
PAD X4            VDD    VSS        VDD    VSS
PAD X8            VSS    VDD        VSS    VDD
sX4               H      L          L      L
sX8               L      H          H      H
OPERATION
BANDWIDTH         X4     X8         X8     X8
Referring to the table 5, in case of the X4 product, since the entire cells connected to one word line can be screened by performing the test 1024 times, the test time can be reduced to ½ of the prior art. Meanwhile, since there is no benefit in using the above buffer control signal enX8 in the X16 product, the table 5 does not consider the X16 product.
FIG. 10 is a fourth exemplary circuit diagram of the buffer unit 62 in accordance with the first embodiment of the present invention.
A difference between FIG. 10 and FIG. 9 is the configurations of the first and second buffer units 630 and 640. The first buffer unit 630 includes an inverter INV11 receiving the signal applied to the package option pad PAD X4, and a NOR gate NOR3 receiving the buffer control signal enX8 outputted from the MRS control circuit 650 and an output of the inverter INV11 to output the package option signal sX4. The second buffer unit 640 includes a NOR gate NOR4 receiving the signal applied to the package option pad PAD X8 and the buffer control signal enX8 outputted from the MRS control circuit 650, and an inverter INV12 receiving an output of the NOR gate NOR4 to output the package option signal sX8.
Although the first and second buffer units 630 and 640 are implemented using the NOR gates, the buffer units operate in the same manner as those of FIG. 9, so that the operation table is also the same as the table 5. In other words, since the buffer control signal enX8 is a logic low level in the normal mode, the NOR gates NOR3 and NOR4 operate like an inverter so that the package option signals sX4 and sX8 are determined according to the bonding state of the package option pads PAD X4 and PAD X8. On the other hand, in the test mode, since the buffer control signal enX8 is enabled to a logic high (H) level, the NOR gates NOR3 and NOR4 block the signals applied to the package option pads PAD X4 and PAD X8. Therefore, the package option signals sX4 and sX8 are respectively a logic low (L) level and a logic high (H) level, so that the corresponding chip operates as the X8.
FIG. 11 is a fifth exemplary circuit diagram of the buffer unit 62 using first and second MRS control circuits 750 and 760 in accordance with the first embodiment of the present invention, in which two buffer control signals enX16 and enX8 are used.
Referring to FIG. 11, the first buffer unit 730 includes: a NOR gate NOR5 receiving the first and second buffer control signals enX16 and enX8; a NAND gate NAND5 receiving an output of the NOR gate NOR5 and the signal applied to the package option pad PAD X4; and an inverter INV13 receiving an output of the NAND gate NAND5 to output the package option signal sX4. The second buffer unit 740 includes: an inverter INV14 receiving the first buffer control signal enX16; an inverter INV15 receiving the second buffer control signal enX8; a NAND gate NAND6 receiving an output of the inverter INV14 and the signal applied to the package option pad PAD X8; and a NAND gate NAND7 receiving outputs of the NAND gate NAND6 and the inverter INV15 to output the package option signal sX8.
Hereinafter, an operation of the semiconductor memory device with the circuit of FIG. 11 will be described in detail.
In the normal mode, since all of the first and second buffer control signals enX16 and enX8 are a logic low (L) level, all of the NAND gates NAND5, NAND6 and NAND7 operate like an inverter, so that the package option signals sX4 and sX8 represent the signal levels corresponding to the default bandwidth according to the bonding states of the package option pads PAD X4 and PAD X8. As a result, the corresponding chip operates with the default bandwidth.
In the test mode, the first and second buffer control signals enX16 and enX8 are selectively enabled.
First, in case where the first buffer control signal enX16 is enabled, since the first buffer control signal enX16 is a logic high (H) level and the second buffer control signal enX8 is a logic low (L) level, the NOR gate NOR5 of the first buffer unit 730 outputs a logic low level. The NAND gate NAND5 blocks the signal applied to the package option pad PAD X4 and outputs a logic high level. This signal is inverted by the inverter INV13 and then outputted as the package option signal sX4 of a logic low level. Meanwhile, the NAND gate NAND6 of the second buffer 740 blocks the signal applied to the package option pad PAD X8 and outputs a logic high level. This signal is inverted by the NAND gate NAND7 and then outputted as the package option signal sX8 of a logic low level. Accordingly, the corresponding chip operates as the X16 in the test mode.
Second, in case where the second buffer control signal enX8 is enabled, since the first buffer control signal enX16 is a logic low (L) level and the second buffer control signal enX8 is a logic high (H) level, the NOR gate NOR5 of the first buffer unit 730 outputs a logic low level. The NAND gate NAND5 blocks the signal applied to the package option pad PAD X4 and outputs a logic high level. This signal is inverted by the inverter INV13 and then outputted as the package option signal sX4 of a logic low level. Meanwhile, the NAND gate NAND7 of the second buffer 740 receives the logic low level via the inverter INV15, so that the package option signal sX8 of a logic high (H) level is outputted without regard to the other inputs. Accordingly, the corresponding chip operates as the X8 in the test mode.
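Both test cases, plus the normal mode, can be verified with a sketch of the FIG. 11 logic (illustrative names, not from the patent):

```python
def fig11_buffer(pad_x4, pad_x8, en_x16, en_x8):
    """Model of FIG. 11. Returns (sX4, sX8)."""
    nor5 = not (en_x16 or en_x8)
    s_x4 = int(nor5 and pad_x4)              # NAND5 -> INV13
    nand6 = not ((not en_x16) and pad_x8)    # INV14 -> NAND6
    s_x8 = int(not (nand6 and (not en_x8)))  # INV15 -> NAND7
    return s_x4, s_x8

assert fig11_buffer(1, 0, 0, 0) == (1, 0)   # normal mode, default X4 bonding
assert fig11_buffer(1, 0, 1, 0) == (0, 0)   # enX16 enabled: acts as X16
assert fig11_buffer(1, 0, 0, 1) == (0, 1)   # enX8 enabled: acts as X8
```

The same three outcomes hold for the X8 bonding (pads 0, 1), matching the test-mode columns of table 6 below.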
The following table 6 is an operation table of the operation bandwidth in the normal mode and the test mode according to the package option (in case of using the enX16 and the enX8).
TABLE 6

                  NORMAL MODE              TEST MODE              TEST MODE
                  (enX8 "L", enX16 "L")    (enX8 "H", enX16 "L")  (enX8 "L", enX16 "H")
PACKAGE OPTION    X4     X8     X16        X4     X8              X4     X8     X16
PAD X4            VDD    VSS    VSS        VDD    VSS             VDD    VSS    VSS
PAD X8            VSS    VDD    VSS        VSS    VDD             VSS    VDD    VSS
sX4               H      L      L          L      L               L      L      L
sX8               L      H      L          H      H               L      L      L
OPERATION
BANDWIDTH         X4     X8     X16        X8     X8              X16    X16    X16
Referring to the table 6, in case of the product packaged in the default X4, if the buffer control signal enX8 is enabled, the test time can be reduced to ½ of the prior art. If the buffer control signal enX16 is enabled, the test time can be reduced to ¼ of the prior art.
FIG. 12 is a sixth exemplary circuit diagram of the buffer unit 62 using first and second MRS control circuits 850 and 860 in accordance with the first embodiment of the present invention, in which two buffer control signals enX16 and enX8 are used.
Referring to FIG. 12, the first buffer unit 830 includes: an inverter INV16 receiving the signal applied to the package option pad PAD X4, and a 3-input NOR gate NOR6 receiving an output of the inverter INV16 and the first and second buffer control signals enX16 and enX8. The second buffer 840 includes: an inverter INV17 receiving the signal applied to the package option pad PAD X8; a NOR gate NOR7 receiving an output of the inverter INV17 and the first buffer control signal enX16; a NOR gate NOR8 receiving an output of the NOR gate NOR7 and the second buffer control signal enX8; and an inverter INV18 receiving an output of the NOR gate NOR8 to output the package option signal sX8.
Since the above circuit operates in the same manner as that of FIG. 11, a detailed description thereof will be omitted. The operation table is also the same as the table 6. In accordance with the first embodiment of the present invention, it is possible to perform the package test with a bandwidth other than the default bandwidth without modifying the wiring with respect to the package option pads. Accordingly, the time taken to modify the wiring can be saved. Meanwhile, in accordance with the first embodiment of the present invention, since the test can be performed with a higher bandwidth than that of the default package, the test time is remarkably reduced. In this case, it is possible to perform the failure detection using one test program (for the maximum bandwidth) without regard to the package option.
In the second embodiment of the present invention, there is proposed a buffer unit 62 using two package option pads PAD X4 and PAD X8. The buffer unit with a switching structure controlled by buffer control signals test_mode_X8z and test_mode_X4z buffers and outputs the signals applied to the two package option pads PAD X4 and PAD X8 (normal mode), or provides the package option signals sX4 and sX8 corresponding to a desired bandwidth (test mode).
FIG. 13 is a circuit diagram of the package option signal generation circuit in accordance with a second embodiment of the present invention, showing the case wired with the default X16 product.
Referring to FIG. 13, the package option signal generation circuit includes: a package option pad PAD X4 wire-bonded with the VSS pin; a package option pad PAD X8 wire-bonded with the VSS pin; a test mode generation unit 310 for generating two buffer control signals test_mode_X8z and test_mode_X4z for selecting the X8 and X4 package options in the test mode; and a buffer unit 300 for buffering the signals applied to the package option pads PAD X4 and PAD X8 in response to the two buffer control signals test_mode_X8z and test_mode_X4z to output the buffered signals as the package option signals sX4 and sX8 (normal mode), or for providing the package option signals sX4 and sX8 corresponding to the desired bandwidth (test mode).
The buffer unit 300 includes: a first buffer 302 for buffering an external signal applied to the package option pad PAD X4 to generate the package option signal sX4; and a second buffer 304 for buffering an external signal applied to the package option pad PAD X8 to generate the package option signal sX8. Here, the first and second buffers 302 and 304 are respectively provided with two inverters connected in series to each other.
In addition, the buffer unit 300 includes: first to third switching units SW1, SW2 and SW3 performing a selective switching operation; and a logic gate for logically combining the two buffer control signals test_mode_X8z and test_mode_X4z and controlling the first to third switching units SW1, SW2 and SW3. If there are only two package options, only one package option pad and one buffer control signal are needed. In this case, the logic gate for combining the buffer control signals is not needed. Therefore, in the buffer unit 300, the elements other than the first and second buffers 302 and 304 can be considered as the switching structure.
The first switching unit SW1 includes transmission gates TG1 and TG2 for transferring outputs of the first and second buffers 302 and 304 to an output stage in response to an output of a NAND gate NAND1 receiving the buffer control signals test_mode_X8z and test_mode_X4z. The transmission gates TG1 and TG2 receive the output of the NAND gate NAND1 and an inverted signal outputted from an inverter INV1 in the same polarity and are simultaneously turned on/off. The second switching unit SW2 includes transmission gates TG3 and TG4 for transferring VSS and VDD to the output stage in response to the buffer control signal test_mode_X8z. The transmission gates TG3 and TG4 receive the buffer control signal test_mode_X8z and an inverted signal outputted from an inverter INV2 in the same polarity and are simultaneously turned on/off. The third switching unit SW3 includes transmission gates TG5 and TG6 for transferring VDD and VSS to the output stage in response to the buffer control signal test_mode_X4z. The transmission gates TG5 and TG6 receive the buffer control signal test_mode_X4z and an inverted signal outputted from an inverter INV3 in the same polarity and are simultaneously turned on/off.
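The switching structure behaves like a three-way multiplexer with active-low selects. A behavioral sketch (illustrative names, not from the patent; 1 = VDD, 0 = VSS) consistent with table 7 below:

```python
def fig13_buffer(pad_x4, pad_x8, tm_x8z, tm_x4z):
    """Behavioral model of the FIG. 13 buffer unit. Returns (sX4, sX8)."""
    if tm_x8z and tm_x4z:   # both selects high: SW1 passes the buffered pads
        return pad_x4, pad_x8
    if not tm_x8z:          # test_mode_X8z low: SW2 forces the X8 combination
        return 0, 1
    return 1, 0             # test_mode_X4z low: SW3 forces the X4 combination

# Default X16 bonding of FIG. 13 (both pads at VSS):
assert fig13_buffer(0, 0, 1, 1) == (0, 0)   # normal mode -> X16
assert fig13_buffer(0, 0, 0, 1) == (0, 1)   # SW2 on -> X8
assert fig13_buffer(0, 0, 1, 0) == (1, 0)   # SW3 on -> X4
```

Because the test-mode paths ignore the pads entirely, the same selects work for any default bonding.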
Here, the NAND gate NAND1 can be implemented with an AND gate and an inverter, and can be replaced with other logic gates (for example, a NOR gate). Further, the transmission gates TG1 to TG6 can be replaced with other switching devices (for example, MOS transistors).
Hereinafter, an operation of the semiconductor memory device with the package option signal generation circuit will be described.
First, in case of the normal mode, all of the buffer control signals test_mode_X8z and test_mode_X4z are a logic high level. Therefore, since an output of the NAND gate NAND1 and an output of the inverter INV1 are respectively a logic low level and a logic high level, the two transmission gates TG1 and TG2 are turned on, so that the buffers 302 and 304 generate their outputs as the package option signals sX4 and sX8. In FIG. 13, since the package option pads PAD X4 and PAD X8 are wire-bonded with the VSS pin so that the package option signals sX4 and sX8 are a logic low level, the chip operates as the X16.
In the test mode, one of the buffer control signals test_mode_X8z and test_mode_X4z is enabled to a logic low level, so that the outputs of the NAND gate NAND1 and the inverter INV1 become a logic high level and a logic low level, respectively, and the transmission gates TG1 and TG2 are turned off.
In case where the buffer control signal test_mode_X8z is outputted in a logic low level and the buffer control signal test_mode_X4z is outputted in a logic high level, the transmission gates TG1 and TG2 of the first switching unit SW1 are all turned off so that the paths of the first and second buffers 302 and 304 are blocked. Meanwhile, the transmission gates TG3 and TG4 of the second switching unit SW2 are turned on so that the VSS and the VDD are outputted, respectively. At this time, the package option signals sX4 and sX8 are a logic low level and a logic high level, respectively, so that the chip operates as the X8.
In case where the buffer control signal test_mode_X8z is outputted in a logic high level and the buffer control signal test_mode_X4z is outputted in a logic low level, the transmission gates TG1 and TG2 of the first switching unit SW1 are all turned off so that the paths of the first and second buffers 302 and 304 are blocked. Meanwhile, the transmission gates TG5 and TG6 of the third switching unit SW3 are turned on so that the VDD and the VSS are outputted, respectively. At this time, the package option signals sX4 and sX8 are a logic high level and a logic low level, respectively, so that the chip operates as the X4.
The following table 7 is an operation table of the operation bandwidth in the test mode in the X16 package of the semiconductor memory device having the package option signal generation circuit in accordance with the second embodiment of the present invention.
TABLE 7

                  X4     X8     X16
test_mode_X4      L      H      H
test_mode_X8      H      L      H
sX4               H      L      L
sX8               L      H      L
Referring to the table 7, in case where the default package is the X16, if the buffer control signals test_mode_X4z and test_mode_X8z are respectively a logic low level and a logic high level, the corresponding package operates as the X4, so that a characteristic of the X4 package can be tested. If the buffer control signals test_mode_X4z and test_mode_X8z are respectively a logic high level and a logic low level, the corresponding package operates as the X8, so that a characteristic of the X8 package can be tested. In the present invention, the test mode means a test mode for changing the package option. The characteristic of the X16 package is tested in the normal mode state. Accordingly, with respect to one chip in which the default package is completed, it is possible to simply test a characteristic of other bandwidths as well as the default bandwidth without modifying the wiring.
Meanwhile, although the table 7 illustrates the test mode operation in the X16 package, it is also applicable to the X8 package and the X4 package. For example, in the X8 package, the VSS pin and the VDD pin are wire-bonded with the package option pads PAD X4 and PAD X8, respectively. To control the test mode bandwidth, the buffer control signals test_mode_X4z and test_mode_X16z are used.
Following tables 8 and 9 are operation tables of the operation bandwidth in the test mode in the X8 package and the X4 package, respectively. It is noted that the wire bonding is performed with respect to all the DQ pins as shown in FIG. 5 in case where the present invention is applied to the X8 package and the X4 package.
TABLE 8

                  X4     X8     X16
test_mode_X4      L      H      H
test_mode_X16     H      H      L
sX4               H      L      L
sX8               L      H      L
TABLE 9

                  X4     X8     X16
test_mode_X8      H      L      H
test_mode_X16     H      H      L
sX4               H      L      L
sX8               L      H      L
In the first and second embodiments of the present invention, since the package test can be performed with a bandwidth other than the default bandwidth without modifying the wiring with respect to the package option pads, the time required to modify the wiring can be saved.
Although the above embodiments describe the case where the X4/X8/X16 package options are determined using the X4 PAD and the X8 PAD as the package option pads, the present invention is also applicable to the case of using the X4 PAD and the X16 PAD as the package option pads, or of using the X8 PAD and the X16 PAD as the package option pads. In this case, the combinations of the logic gates constituting the buffer unit can be varied.
Meanwhile, the NAND gates used in the above embodiments can be implemented with an AND gate and an inverter, and the NOR gates can be implemented with an OR gate and an inverter.
Further, the present invention is also applicable to the case where the number of the package option pads increases or decreases according to the number of the operation bandwidths.
According to the present invention, the test cost can be reduced so that the manufacturing cost can be reduced. Further, the test time is reduced so that the productivity is remarkably increased.
While the present invention has been described with respect to certain preferred embodiments only, other modifications and variations may be made without departing from the spirit and scope of the present invention as set forth in the following claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Other objects and aspects of the invention will become apparent from the following description of the embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a diagram showing a pin arrangement of conventional X4 and X16 SDRAMs (54 pins);
FIG. 2 is a diagram showing a pin arrangement of conventional X4/X8/X16 DDR SDRAMs (66 pins);
FIG. 3 is a conventional wire bonding diagram according to package options;
FIG. 4 is a circuit diagram of a conventional package option signal generation block;
FIG. 5 is a diagram of a wire bonding structure according to package options in accordance with an embodiment of the present invention;
FIG. 6 is a block diagram of a package option signal generation circuit in accordance with an embodiment of the present invention;
FIGS. 7 to 12 are exemplary circuit diagrams of the buffer unit in accordance with a first embodiment of the present invention; and
FIG. 13 is a circuit diagram of a package option signal generation circuit in accordance with a second embodiment of the present invention.
Colorado Becomes Sixth State to Ban Cages for Egg-Laying Hens
HB20-1343 may not exactly have a melodic ring to it, but it's music to our vegan ears. As we recently learned in an article from Live Kindly, HB20-1343 refers to the Egg-laying Hen Confinement Standards, which require a ban on cage confinement for egg-laying hens.
The bill, which was sponsored by Dylan Roberts, a member of the Colorado House of Representatives, and Colorado Senator Kerry Donovan, was signed into law by the state’s governor Jared Polis. First, the law specifies that by January 1, 2023, “hens [are] to be confined in an enclosure with at least one square foot of usable floor space per hen” and “...by January 1, 2025, hens [are] to be confined in a cage-free housing system,” with features including at least one square-foot of usable floor space per hen as well as other improved living conditions for the animals. Farmers who don’t adhere to this can face a civil penalty of up to $1,000 per violation.
Colorado becomes the sixth state to outlaw cages for egg-laying chickens, after California, Massachusetts, Michigan, Oregon, and Washington state. More states are expected to follow the Colorado ban. We cannot say that this is huge progress, but it is progress.
The victory represents enhanced animal welfare for some six million hens who are currently held in incredibly tiny cages, so small, in fact, that they can’t spread their wings.
While January of 2025 is still a ways off, we’re hopeful this law will pave the way for many states to follow suit to help improve the living conditions of animals across the country. | https://thebeet.com/colorado-just-passed-a-law-that-requires-all-egg-laying-hens-to-be-raised-cage-free/ |
In case you missed it, on Saturday Beyonce released “Lemonade,” a 12-track visual album that captured the entire world’s attention when it debuted on HBO.
Woven between songs were interludes of spoken word, one of which captured the attention of our food-loving hearts — a recipe for her grandmother’s lemonade.
Take one pint of water
Add half a pound of sugar
The juice of 8 lemons
The zest of half a lemon
Pour the water from one jug, then into the other, several times
Strain through a clean napkin
Now, we’re sure this recipe is delicious. But half a pound of sugar sounds like a lot — it equates to about 1 1/8 cups. And now that Beyonce’s got us thirsty for lemonade, we’re hoping for a version that won’t send us ricocheting off the walls.
We dug up 12 of our favorite homemade lemonade recipes that require a little less sugar. (And of course, you can also adjust the amount yourself.) Take your pick, and let us know your favorite. | https://www.behealthyandfit.com/theres-only-one-problem-with-beyonces-lemonade-recipe/ |
Plaut, Annette S.; Wurstbauer, Ulrich; Pinczuk, Aron; Garcia, Jorge M.; Pfeiffer, Loren N.
We have used the ratio of the integrated intensity of graphene's Raman G peak to that of the silicon substrate's first-order optical phonon peak to accurately determine the number of graphene layers across our molecular-beam (MB) grown graphene films. We find that these results agree well both with those from our own exfoliated single- and few-layer graphene flakes and with the results of Koh et al. [ACS Nano 5, 269 (2011)]. We hence distinguish regions of single-, bi-, tri-, four-layer, etc. graphene, consecutively, as we scan coarsely across our MB-grown graphene. This is the first, but crucial, step toward being able to grow, by such molecular-beam techniques, a specified number of large-area graphene layers to order. | https://ore.exeter.ac.uk/repository/handle/10871/13867 |
Let \(R\) be a finite ring. The example we'll have in mind at the end is the ring of \(2\times 2\) matrices over a finite field, and its subrings. A. Kuku proved that the groups \(K_i(R)\) for \(i\geq 1\) are finite abelian groups. Here, \(K_i(R)\) denotes Quillen's \(i\)th \(K\)-group of the ring \(R\). In this post we will look at an example, slightly less simple than \(K_1\) of finite fields, showing that these groups can be arbitrarily large. Before we do this, let us briefly go over why this is true.
But even before this, can you think of an example showing why this is false for \(i=0\)?
Of course, \(K_0(R)\) is often not finite even if \(R\) is: for instance, if \(R = \mathbb{F}_q\) then \(K_0(\mathbb{F}_q)\) is just the Grothendieck group of the monoid under direct sum of finite-dimensional vector spaces, and hence \(K_0(\mathbb{F}_q)\cong \mathbb{Z}\). Wait, is \(K_0(R)\) ever finite for any ring \(R\), besides the zero ring?
A Brief Sketch
Let us examine briefly why \(K_i(R)\) is finite for \(i \geq 1\), following Kuku's book "Representation Theory and Higher Algebraic \(K\)-Theory", filling in some of the quicker points in that presentation. The case \(i = 1\) is taken care of because \(K_1(R)\) is a quotient of \(R^\times\). Suppose \(i > 1\). Recall that we can define the higher \(K\)-groups as the homotopy groups \(\pi_i(\mathbf{B}\mathrm{GL}(R)^+)\) where \(\mathbf{B}(-)\) denotes the classifying space, \(\mathrm{GL}(R)\) is the infinite linear group and the \(+\) denotes Quillen's plus construction.
Given any plus construction \(f:\mathbf{B}G\to \mathbf{B}G^+\) with respect to a perfect normal subgroup \(P\subseteq G=\pi_1(\mathbf{B}G)\), it is not hard to show that \(\mathbf{B} P^+\) has the same homotopy type as the universal cover of \(\mathbf{B} G^+\). This gives us a way to compute homotopy groups for \(i\geq 2\), because if \(\widetilde{X}\to X\) is a universal cover of \(X\) then \(\pi_i(\widetilde{X})\cong\pi_i(X)\) for all \(i\geq 2\).
So actually we can replace \(\mathbf{B}\mathrm{GL}(R)^+\) with \(\mathbf{B} E(R)^+\), where \(E(R)\) is the infinite elementary matrix group, defined as the direct limit \(\varinjlim E_n(R)\), where \(E_n(R)\) is the subgroup of \(\mathrm{GL}_n(R)\) generated by the elementary matrices: those whose diagonal entries are all \(1\) and which have exactly one nonzero off-diagonal entry.
The bottom line is that we can compute \(K_i(R)\) via \(\pi_i(\mathbf{B} E(R)^+)\) for \(i\geq 2\) (this is true for any ring, and we have not used that \(R\) is finite yet). We will do this in a number of steps.
First, let us prove that \(H_i(\mathbf{B} E(R)^+)\) is finite. For this it suffices to prove that \(H_i(\mathbf{B} E(R))\) is finite, because the plus construction preserves homology. In order to do this, it suffices to prove that \(H_i(E(R),\mathbb{Z})\) is finite for \(i\geq 1\), where now we work with group homology, and this group in turn is isomorphic to \(H_i(E_k(R),\mathbb{Z})\) for some sufficiently large \(k\) by Suslin's stability theorem. Of course \(H_1(E_k(R),\mathbb{Z})\) is finite, being the abelianisation of a finite group. In fact, \(H_i(G,\mathbb{Z})\) is a finite abelian group for \(i\geq 1\) whenever \(G\) is any finite group: multiplication by the order \(|G|\) is the zero map on these groups, and the homology groups are finitely generated since they can be calculated via the bar resolution, which for a finite group is a resolution by finitely generated free modules (for example, if \(G\) is cyclic of order \(m\) then \(H_i(G,\mathbb{Z})\cong \mathbb{Z}/m\) for odd \(i\) and zero for even \(i>0\)).
So this shows that \(H_i(\mathbf{B} E(R)^+)\) is a finite abelian group. Of course, the same is true of \(H_i(\mathbf{B}\mathrm{GL}(R)^+)\). But, the reason we switched to \(\mathbf{B} E(R)^+\) is because \(\mathbf{B} E(R)^+\) is homotopy equivalent to a universal cover of \(\mathbf{B}\mathrm{GL}(R)^+\) and hence is simply connected. Now we apply a form of the Hurewicz theorem in topology (see Spanier, 9.6.15 for a general statement) that says that for such a simply connected space \(X = \mathbf{B} E(R)^+\) the Hurewicz homomorphism \(\pi_i(X)\to H_i(X)\) has both finitely generated kernel and finitely generated cokernel. Hence we have a sequence \(0\to A\to \pi_i(X)\to C\to 0\) where \(A\) and \(C\) are finitely generated: \(A\) is the kernel of the Hurewicz homomorphism, and in fact \(C\) is finite since \(C = \pi_i(X)/A\) injects into a finitely generated (and in our case finite!) group, namely \(H_i(X)\).
Going back to our specific case, we have shown that for \(i\geq 2\) the group \(K_i(R) = \pi_i(\mathbf{B} E(R)^+)\) is finitely generated. Now we can apply a well-known theorem of Cartan and Serre, which gives us an injection
$$\pi_i(X)\otimes\mathbb{Q}\hookrightarrow H_i(X;\mathbb{Q}).$$
This holds for any \(H\)-space, and in particular the classifying space of a group. Hence for \(i \geq 2\) our \(K\)-group \(K_i(R) = \pi_i(\mathbf{B} E(R)^+)\) is a finitely generated torsion group, and hence finite.
Examples
If \(R\) is a finite ring, how big can \(K_i(R)\) be for \(i > 0\)? Here is an example I thought of in the process of writing up some notes for myself. Before this, let us look at a warmup example: \(M_n(\mathbb{F}_q)\), the ring of \(n\times n\) matrices over a finite field. By Morita invariance, \(K_1(M_n(\mathbb{F}_q))\) is the same as \(K_1(\mathbb{F}_q) \cong \mathbb{F}_q^\times\cong \mathbb{Z}/(q-1)\). So the size of \(K_1(\mathbb{F}_q)\) already goes to infinity as \(q\to\infty\).
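Both ingredients of this warmup can be checked by brute force. Over a field, \(E_2(\mathbb{F}_q)\) (the group generated by the elementary matrices) coincides with \(SL_2(\mathbb{F}_q)\), and the determinant then identifies the quotient \(\mathrm{GL}/E\) with \(\mathbb{F}_q^\times\cong\mathbb{Z}/(q-1)\) already at the level of \(2\times 2\) matrices. Here is a small Python sketch for \(q=3\) (the code and its variable names are my own, just for illustration):

```python
from itertools import product

q = 3  # the prime field F_3, so arithmetic is just mod q

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % q
                       for j in range(2)) for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q

# GL_2(F_q): all invertible 2x2 matrices; SL_2(F_q): those of determinant 1
GL2 = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)
       if det(((a, b), (c, d))) != 0]
SL2 = {A for A in GL2 if det(A) == 1}

# elementary matrices: 1s on the diagonal, exactly one nonzero off-diagonal entry
elem = [((1, a), (0, 1)) for a in range(1, q)] + [((1, 0), (a, 1)) for a in range(1, q)]

# E_2(F_q): close the elementary matrices under multiplication
E = {((1, 0), (0, 1))}
frontier = set(elem)
while frontier:
    E |= frontier
    frontier = {mat_mul(A, B) for A in E for B in elem} - E

print(E == SL2, len(GL2) // len(E))  # True 2, i.e. the quotient has size q - 1
```

Running this prints `True 2`: every determinant-one matrix is a product of elementary matrices, and the index of \(E_2\) in \(GL_2\) is \(q-1=2\), as expected.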
Incidentally, more generally Quillen showed that \(K_{2i-1}(\mathbb{F}_q)\cong \mathbb{Z}/(q^i-1)\) for \(i\geq 1\), so Quillen's computation already shows that \(K_i(\mathbb{F}_q)\) for \(i\not=0\) can be arbitrarily large. However, this computation is already quite complicated and certainly too much to do in one reasonably-sized blog post!
Let's get back to \(M_n(\mathbb{F}_q)\): by the Morita invariance we see that no matter how big \(n\) is, its \(K\)-theory remains constant. However, there are subrings of \(M_n(\mathbb{F}_q)\) whose \(K_1\) groups are much bigger! Here is one example. Consider \(R = T_2(\mathbb{F}_q)\), the subring of \(M_2(\mathbb{F}_q)\) consisting of upper triangular matrices with entries in \(\mathbb{F}_q\), and consider the two-sided ideal \(I\) of \(R\) consisting of matrices of the form \(\left(\begin{smallmatrix} 0 & a\\ 0 & 0\end{smallmatrix}\right)\) where \(a\in \mathbb{F}_q\).
For any ring \(R\) and ideal \(I\) there exists an exact sequence
$$K_1(R)\to K_1(R/I)\to K_0(R,I)\to K_0(R)\to K_0(R/I).$$
Here \(K_0(R,I)\) is a relative \(K\)-group that isn't important for the discussion at the moment. The important thing in this case is that we have an extra special piece of information: namely, that \(R\to R/I\) is a split surjection, which implies (nontrivially!) that \(K_0(R,I)\to K_0(R)\) is injective, so \(K_1(R)\to K_1(R/I)\) is surjective. Moreover (exercise) we see that \(R/I\cong \mathbb{F}_q\times\mathbb{F}_q\), so that \(K_1(R)\) has cardinality at least \((q-1)^2\), which is bigger than \(|K_1(M_n(\mathbb{F}_q))| = q-1\) as soon as \(q>2\). Of course, this also implies that the ring \(T_2(\mathbb{F}_q)\) of upper triangular matrices over \(\mathbb{F}_q\) is not Morita equivalent to the full matrix ring. For split surjections it also turns out that \(K_0(R)\to K_0(R/I)\) is surjective, so we could instead have concluded that \(T_2(\mathbb{F}_q)\) is not Morita equivalent to \(\mathbb{F}_q\) by a slightly easier \(K_0\)-calculation.
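One can squeeze a bit more out of this example by brute force. For a semilocal ring such as \(T_2(\mathbb{F}_q)\), \(K_1\) is a quotient of the group of units (a theorem of Bass), hence a quotient of the abelianisation of the unit group. The little Python computation below (my own sketch, for \(q=3\)) finds that this abelianisation has order exactly \((q-1)^2=4\), so combined with the lower bound above the cardinality of \(K_1(T_2(\mathbb{F}_3))\) is \(4\) on the nose:

```python
from itertools import product

q = 3  # F_3; entries of our triangular matrices live in Z/3

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % q
                       for j in range(2)) for i in range(2))

# units of T_2(F_q): upper triangular matrices with nonzero diagonal entries
U = [((a, b), (0, d)) for a, b, d in product(range(q), repeat=3)
     if a % q != 0 and d % q != 0]

def inv(A):  # brute-force inverse inside the finite group U
    I = ((1, 0), (0, 1))
    return next(B for B in U if mul(A, B) == I)

# commutator subgroup [U, U]: all commutators, closed under multiplication
comms = {mul(mul(A, B), mul(inv(A), inv(B))) for A in U for B in U}
D = set(comms)
grew = True
while grew:
    new = {mul(x, y) for x in D for y in D} - D
    grew = bool(new)
    D |= new

print(len(U), len(D), len(U) // len(D))  # 12 3 4: abelianisation of order (q-1)^2
```

The commutator subgroup turns out to be exactly the order-\(q\) group of unipotent matrices, so the abelianisation has order \((q-1)^2\), consistent with the split exact sequence argument above.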
| |
TULARE COUNTY – Sixteen nonprofits whose mission is to improve the quality of life for Tulare County youth received a total of $95,000 from the Tulare County Board of Supervisors last week.
The organizations awarded funding were: Boys and Girls Club of the Sequoias, $10,000; Self Help Enterprises, $5,000; Pesticide Action Network of North America for El Quinto Sol de America, $5,000; Grandma’s House, $10,000; Tulare Athletic Boxing Club, $5,000; Sundale Foundation, $5,000; The Source, $10,000; Tulare County Symphony League, $5,000; Arts Visalia, $5,000; Family Services of Tulare County, $5,000; Mending Fences at JM Ranch, $5,000; Big Brothers Big Sisters, $5,000; Proteus, Inc., $5,000; The United Methodist Church, $5,000; Foodlink for Tulare County, $5,000; Porterville Strings, $5,000.
The grant applications were made available to nonprofits in February during a grant workshop and included donations made in each supervisorial district. The deadline to submit applications was April 5, 2019.
The applications were reviewed by the Tulare County Youth Commission which then makes recommendations to the board of supervisors.
When the commission was formed in 2008, the supervisors allocated $100,000 from the General Fund, up to $20,000 in each of the five supervisorial districts, to be awarded as part of the Step Up Youth Activities program. Grants are typically awarded in $5,000 and $10,000 amounts. | http://www.thesungazette.com/article/news/2019/06/26/tulare-county-grants-95000-to-nonprofits-for-at-risk-youth/ |
Britannica.com: Philosopher king, idea according to which the best form of government is that in which philosophers rule. The ideal of a philosopher king was born in Plato’s dialogue Republic as part of the vision of a just city. It was influential in the Roman Empire and was revived in European political thought in the age of absolutist monarchs. It has also been more loosely influential in modern political ...
https://www.britannica.com/topic/philosopher-king
En.wikipedia.org: According to Plato, a philosopher king is a ruler who possesses both a love of wisdom, as well as intelligence, reliability, and a willingness to live a simple life. Such are the rulers of his utopian city Kallipolis. For such a community to ever come into being, "philosophers [must] become kings…or those now called kings [must]…genuinely and adequately philosophize" (Plato, The Republic, 5 ...
https://en.wikipedia.org/wiki/Philosopher_king
Gradesfixer.com: In the Republic, Plato takes a radical new step and gives political power to philosophers, or philosopher-kings, and claims that political power and philosophy are best to become one (Republic, 473c-d). However, the theory of the Forms is only an invention, a clever excuse that Plato attempts to use to promote the position of philosophers or ...
https://gradesfixer.com/free-essay-examples/platos-forms-and-philosopher-kings/
Versiondaily.com: In the Socratic dialogue Republic, Greek philosopher Plato envisioned and proposed a utopian city-state ruled by a particular class of citizens who possess a firm grip of philosophy. He called the city “Kallipolis” and the rulers “philosopher kings” who are trained under a specialised 50-year-long scholastic program.
https://www.versiondaily.com/plato-philosopher-kings-kallipolis/
Preservearticles.com: Philosopher kings are the rulers, or Guardians, of Plato’s Utopian Kallipolis. If his ideal city-state is to ever come into being, “philosophers become kings… or those now called kings… genuinely and adequately philosophize”.
https://www.preservearticles.com/articles/what-is-philosopher-king-according-to-plato/24761
E-ir.info: Plato argues that philosopher kings should be the rulers, as all philosophers aim to discover the ideal polis. The ‘kallipolis’, or the beautiful city, is a just city where political rule depends on knowledge, which philosopher kings possess, and not power.
https://www.e-ir.info/2013/04/17/should-philosophers-rule/
Thoughtco.com: Plato was a student and follower of Socrates until 399, when the condemned Socrates died after drinking the prescribed cup of hemlock. It is through Plato that we are most familiar with Socrates' philosophy because he wrote dialogues in which his teacher took part, usually asking leading questions -- the Socratic method.
https://www.thoughtco.com/plato-important-philosophers-120328
Journals.openedition.org: In describing Plato’s visit to Russia, Crossman suggests that the communistic regime is the real embodiment of the idea of philosopher king in the 1930s, and that Plato’s observation of Russia is more realistic than the official doctrine of communism such as the proletarian dictatorship and the death of the state.
https://journals.openedition.org/etudesplatoniciennes/281
Britannica.com: Plato was a philosopher during the 5th century BCE. He was a student of Socrates and later taught Aristotle. He founded the Academy, an academic program which many consider to be the first Western university. Plato wrote many philosophical texts—at least 25. | https://www.keyword-suggest-tool.com/search/plato+philosopher+kings/ |
Your carrier must allow tethering, which might cost extra depending on your plan.
Not all device-carrier combinations support Instant Tethering. For example, I was unable to get this feature to work on my OnePlus 5T on Google Fi (though I suspect my 5T would have worked on some other carrier). I should point out that I was easily able to use Instant Tethering with my Pixel 2 XL on Fi.
Your phone must meet the following minimum requirements: Android 7.1 or higher and Google Play Services 14.9.99.
Currently, Chrome OS 73 is only available in the Canary and Dev channels, but it should soon migrate to the Beta and Stable channels.
Once you have enabled the flag, go to Connected Devices in your Chrome OS device’s settings and set up your phone.
Turn off WiFi on your Chrome OS device and you should be prompted to connect to your phone. If your particular combination of device and carrier plan is supported, you’ll see your own device name instead of the “Pixel 2 XL” in the notification depicted below.
Once connected you’ll be able to see your phone’s battery level in the networking section in your Chrome OS notification panel so you can get an idea of how long your Internet connection will last before you have to connect your phone to a charger. | http://www.aadhu.com/chrome-os-73-brings-instant-tethering-to-non-pixel-smartphones/ |
---
abstract: 'We compute the distribution of the partition functions for a class of one-dimensional Random Energy Models (REM) with logarithmically correlated random potential, above and at the glass transition temperature. The random potential sequences represent various versions of the 1/f noise generated by sampling the two-dimensional Gaussian Free Field (2dGFF) along various planar curves. Our method extends the recent analysis of [@FB] from the circular case to an interval and is based on an analytical continuation of the Selberg integral. In particular, we unveil a [*duality relation*]{} satisfied by the suitable generating function of free energy cumulants in the high-temperature phase. It reinforces the freezing scenario hypothesis for that generating function, from which we derive the distribution of extrema for the 2dGFF on the $[0,1]$ interval. We provide numerical checks of the circular and the interval case and discuss universality and various extensions. Relevance to the distribution of length of a segment in Liouville quantum gravity is noted.'
address:
- 'School of Mathematical Sciences, University of Nottingham, Nottingham NG72RD, England'
- |
CNRS-Laboratoire de Physique Théorique de l’Ecole Normale Supérieure\
24 rue Lhomond, 75231 Paris Cedex-France[^1]
- |
Laboratoire de Physique Théorique et Modèles Statistiques, CNRS (UMR 8626)\
Université Paris-Sud, Bât. 100, 91405 Orsay Cedex, France
author:
- Yan V Fyodorov
- Pierre Le Doussal
- Alberto Rosso
date: 'Received: / Accepted: / Published '
title: 'Statistical Mechanics of Logarithmic REM: Duality, Freezing and Extreme Value Statistics of $1/f$ Noises generated by Gaussian Free Fields'
---
Introduction
============
Describing the detailed statistics of the extrema of $M$ random variables $V_i$ with logarithmic correlations built from those of the two-dimensional Gaussian Free Field (2dGFF) $V(x)$ is a hard and still mostly open problem. It arises in many fields, from physics and mathematics to finance. The 2dGFF is a fundamental object intimately related to conformal field theory [@DiFrancesco]; being also a building block of the Liouville random measures $e^{V(z,\bar z)} dz d\bar z$, it has attracted much interest in the high-energy physics, quantum gravity, and pure mathematics communities, see [@Qgrav] for an extensive list of references. In the context of condensed matter physics the 2dGFF describes, e.g., fluctuating interfaces between phases [@aarts] and their confinement properties, multi-fractal properties of wave functions of a Dirac particle in a random magnetic field [@chamon] and the associated Boltzmann-Gibbs measures [@Yan1], glass transitions of random energy models with logarithmically correlated energies [@carpentier], 2d self-gravitating systems [@selfgrav], etc. Descriptions of the level lines of the GFF as Schramm-Loewner Evolutions (SLE) and conjectured relations to the welding problem [@Jones] have also contributed to a revival of interest in the statistics of the GFF. In mathematical finance there is strong present interest in limit lognormal multifractal processes [@bacry] (also called log-infinitely divisible multifractal random measures), which are but a closely related incarnation of the same object, see e.g. [@Vargas; @Ostrov]. Last, but not least, logarithmically correlated random sequences represent various instances of $1/f$ noises, see e.g. [@1fnoise] and [@FB].
Such noises regularly appear in many applications, and were recently discussed in the context of quantum chaos, where logarithmic correlations arise in sequences of energy levels [@qchaos] or, as one can surmise, in the zeroes of the Riemann zeta function. All this makes understanding the extreme value statistics of such noises an interesting and important problem.
While the leading behavior $V_{min} \sim - 2 A \ln M$ is rigorously proved [@mathGFF], surprisingly little knowledge exists on finer properties of the statistics of the GFF-related minima, even heuristically. To serve this as well as many other purposes it is of high interest to study the canonical partition function $Z(\beta)=\sum_{i=1}^M e^{- \beta V_i}$ for the corresponding Random Energy Model (REM) as a function of the inverse temperature $\beta=1/T$. The distribution $P(F)$ of the free energy $F = - T \ln Z$ reduces in the limit of zero temperature $T=0$ to the distribution of the minimum $V_{min}$. A few instances of REM can be solved explicitly, and are frequently useful as approximations: (i) uncorrelated energies with variance $\sim \ln M$, i.e. Derrida’s original REM [@rem], which gives the correct constant $A$ [@chamon]; (ii) paths with random weights on trees, whose energies exhibit a similar logarithmic scaling of correlations, but with a hierarchical structure rather than a translationally invariant one [@derridaspohn; @trees]; (iii) the infinite-dimensional Euclidean version of the logarithmically correlated REM and its further ramifications [@Yan1; @infdlog]. In particular, the close analogy of GFF-related statistical mechanics with the models on trees [@chamon; @carpentier], also noted in probability theory [@mathGFF], arises naturally in an approximate, i.e. one-loop, RG method, and led to the conjecture [@carpentier] that: $$V_{\mbox{min}} = a_M+b_M y
\label{rescaledmin}$$ with $$\begin{aligned}
&& a_M= A (- 2 \ln M + \tilde \gamma \ln \ln M +O(1)) \quad, \quad b_M=A +O(1/\ln(M))
\label{am&bm}\end{aligned}$$ where $\tilde \gamma=3/2$ and $y$ is a random variable of order unity whose probability density has universal tails $p(y) \sim |y| e^y$ on the side $y \to -\infty$. In addition it was convincingly demonstrated that the log-correlated REM exhibits a freezing transition to a glass phase dominated by a few minima, at the same $T_c$ as predicted by (i) and (ii) [@carpentier; @infdlog]. An outstanding problem left fully open was to characterize the shape of the distribution of the minimum beyond the tail, and in particular investigate whether the universality also extends to that regime.
To address this issue, Fyodorov and Bouchaud [@FB] (FB) recently considered a particular circular-log variant of REM. Denoting here and henceforth the averaging over the random potential with the overbar, the circular-log model is defined via the correlation matrix $C_{ij}=\overline{V_i V_j}$ identical to that of $M$ equidistant points $z_j=\exp(i \frac{2 \pi j}{M})$ on a circle: $C_{jk} = 2 G(z_j-z_k)$, where $G(z-z') = - \ln|z-z'|$ is the full plane Green function of the 2dGFF. Equivalently, the above covariance function represents a $2\pi$-periodic real-valued Gaussian random process $V(x)=\sum_{l=1}^{\infty}\left(v_l\,e^{ilx}+\bar{v}_l\,e^{-ilx}\right)$ with a [*self-similar*]{} spectrum $\langle v_l \bar{v}_m
\rangle=l^{-(2H+1)}\delta_{lm}$ characterised by the particular choice of the Hurst exponent $H=0$. Such a process therefore represents a version of the so-called $1/f$ noise.
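This $1/f$ spectrum is easy to sample directly (the following pure-Python Monte Carlo sketch is our own illustration, not part of the analysis; the mode cutoff $L$ and sample size are arbitrary choices). Drawing independent complex Gaussian coefficients with $\langle v_l \bar v_l\rangle = 1/l$, the empirical covariance of $V$ should reproduce the truncated series $\sum_{l\le L} 2\cos(l\theta)/l$, which converges to $-2\ln(2\sin(\theta/2))$ as $L\to\infty$:

```python
import math, random

random.seed(1)
L, NS = 64, 5000   # spectral cutoff and number of independent realizations

def draw_coeffs():
    # v_l = re + i*im with <|v_l|^2> = 1/l, i.e. each real component has variance 1/(2l)
    return [(random.gauss(0, math.sqrt(0.5 / l)), random.gauss(0, math.sqrt(0.5 / l)))
            for l in range(1, L + 1)]

def V(x, coeffs):
    # V(x) = sum_l (v_l e^{ilx} + c.c.) = sum_l 2 Re(v_l e^{ilx})
    return sum(2 * (re * math.cos(l * x) - im * math.sin(l * x))
               for l, (re, im) in enumerate(coeffs, start=1))

theta = 2.0
acc = 0.0
for _ in range(NS):
    c = draw_coeffs()
    acc += V(0.0, c) * V(theta, c)
emp = acc / NS

exact = sum(2 * math.cos(l * theta) / l for l in range(1, L + 1))
print(emp, exact, -2 * math.log(2 * math.sin(theta / 2)))
```

With these parameters the empirical covariance agrees with the truncated series to within the statistical error, and the truncated series itself is already close to its $L\to\infty$ limit.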
From the moments $\overline{Z^n}$ FB reconstructed the distribution $P(Z)$ above and at $T_c$. From this point they proceeded by assuming that for such a model the same freezing scenario as found in Ref. [@carpentier] holds, so that the generating function $$\begin{aligned}
\label{main}
&& g_\beta(y) = \overline{ \exp( - e^{\beta y} Z/Z_e ) }, \quad
Z_e=M^{1+\beta^2}/\Gamma(1-\beta^2)\end{aligned}$$ remains in the thermodynamic limit $M\gg 1$ [*temperature independent*]{} everywhere in the glass phase $T \leq
T_c$. As a result of such a conjecture they arrived at the distribution of the minimum of the random potential in their problem. The corresponding probability density for the variable $y$ (defined in (\[rescaledmin\]) with $A=1$) turned out to be given by $p(y)=-g_{\infty}'(y)$ where $$\begin{aligned}
\label{circ}
&& g_{\infty}(y) = g_{\beta_c}(y)=2 e^{y/2}
K_1(2 e^{y/2})\,.\end{aligned}$$ Such a density does indeed exhibit the universal Carpentier-Le Doussal tail $p(y\to -\infty) \sim - y e^y$.
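A quick numerical sanity check of (\[circ\]) at $y=0$ (our own side computation): using the standard Bessel integral $\int_0^\infty e^{-t-a/t}\,dt = 2\sqrt{a}\,K_1(2\sqrt{a})$ with $a=e^{y}$, the value $g_{\beta_c}(0)=2K_1(2)\approx 0.27973$ can be recovered by direct quadrature in a few lines of Python (the step size and cutoff below are ad hoc choices):

```python
import math

def g_crit(y, h=1e-3, lim=20.0):
    # evaluate \int_0^infty exp(-t - e^y / t) dt after the substitution t = e^u
    a = math.exp(y)
    n = int(2 * lim / h)
    total = 0.0
    for k in range(n + 1):
        t = math.exp(-lim + k * h)
        total += math.exp(-t - a / t) * t  # the extra factor t is dt/du
    return total * h

print(g_crit(0.0))  # approx 0.27973 = 2 K_1(2)
```

The integrand decays doubly exponentially in $u$, so the crude Riemann sum already reproduces $2K_1(2)$ to several digits.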
Our broad aim is to investigate analytically and numerically the validity and universality of the above result, and to extend it to other models with logarithmic correlations. In pursuing this goal we will be able, in particular, to extract statistics of the extrema of the (full plane) GFF sampled along an interval, $[0,1]$, possibly with some charges at the endpoints of the interval. This breaks the circular symmetry of the correlation matrix and one finds a different distribution. The moments $\overline{Z^n}$ turn out to be given in some range of positive integer $n$ by celebrated Selberg integrals [@Selberg] [^2], and a first (non-trivial) task is to analytically continue them to arbitrary $n$. After suggesting a certain method for such a continuation we are able to deduce the distribution of the free energy $P(F)$ and $g_{\beta}(y)$ at the freezing temperature $\beta=\beta_c$. The same conjecture as in FB then yields the distribution of the minimum. As a by-product of our method we reveal a remarkable [*duality property*]{}, enjoyed in the high-temperature phase by the generating function defined precisely as in (\[main\]), which went unnoticed in [@FB]. We conjecture such a duality to be intimately related to the mechanisms behind the freezing phenomenon. Finally we use direct numerical simulations to verify the freezing scenario for the circular ensemble and the resulting distribution (\[circ\]), as well as to test the new results of this paper for the interval case. Universality and other cases are discussed at the end.
Model and moments
=================
Interval model
--------------
Our starting point is the following continuum version of the partition function of the Random Energy Model generated by a Gaussian-distributed logarithmically-correlated random potential $V(x)$ defined on the interval $[0,1]$: $$\begin{aligned}
\label{1}
&& Z= \epsilon^{\beta^2} \int_{0}^{1} dx x^{a} (1-x)^b e^{- \beta V(x)}\end{aligned}$$ with $a,b>-1$ real numbers and $\beta>0$. The potential $V(x)$ is considered to have zero mean and covariance inherited from the two-dimensional GFF: $$\begin{aligned}
\label{1a}
\overline{V(x) V(x')}=C(x-x')= - 2 \ln|x-x'|\,.\end{aligned}$$ For the integral (\[1\]) to be well defined one needs to define a short scale cutoff $\epsilon \ll 1$. We therefore tacitly assume in the expression (\[1a\]) $V \to V_\epsilon$, with the regularized potential being also Gaussian with a covariance function $C_\epsilon(x-x')$, such that the variance is $C_\epsilon(0)= 2 \ln(1/\epsilon)$. We put for convenience the factor $\epsilon^{\beta^2}$ in front of the integral to ensure that the integer moments $\overline{Z^n}$ are $\epsilon-$independent in the high-temperature phase, see Eq.(\[momentint\]) below. At this stage we do not need to specify the $\epsilon-$regularized form [^3], but it is convenient for our purpose below to require that $C_\epsilon(x)=C(x)$ for $|x|>\epsilon$. Note that for $a=b=0$ the Gibbs measure of the disordered system identifies with the random Liouville measure, and that $Z$ can be interpreted as the (fluctuating) length of a segment in Liouville quantum gravity, see e.g. [@Qgrav].
Below we will also consider a grid of $M$ points $x_i$, uniformly spaced w.r.t the length element $dl=dx x^{a} (1-x)^b$ and the set of values $V_i=V(x_i)$, $i=1,..M$. The correlation matrix $\overline{V_i V_j}=C_{ij}$ at these grid values are $C_{ij}=-2
\ln(|i-j|/M)$ for $i \neq j$, and $C_{ii}=2 \ln M + W$ where $W=\ln(1/(\epsilon M))$ is a constant of order unity, and we will be interested in the limit [^4] of large $M$ at fixed $\epsilon
M$. This generalizes the grid on the unit circle studied in [@FB] where $x_j=e^{i \theta_j}$ with $\theta_j=2 \pi j/M$ and $C_{ij}= C(x_i-x_j)= -2 \ln(2|\sin\frac{(\theta_i - \theta_j)}{2}|)$. We will compare below the two situations. In each case one defines the corresponding (discretized) REM by the partition function $Z_M=\sum_{i=1}^M e^{-\beta V_i}$. We expect, as shown in [@FB] and discussed below, that there is a sense in which universal features of the discretized version are described by the continuum one in the large $M$ limit.
positive moments
----------------
Let us now compute the positive integer moments of $Z$. Denoting $\gamma=\beta^2$, a straightforward calculation gives $$\begin{aligned}
\label{momentint}
&& \overline{Z^n} = \int_0^1\ldots \int_0^1 \prod_{i=1}^ndx_i
x_i^a (1-x_i)^b
\prod _{1 \leq i < j \leq n} \frac{1}{|x_i-x_j|^{2 \gamma}}\end{aligned}$$ where the small scale cutoff is implicit and modifies the expressions for $|x_i-x_j|<\epsilon$. For a fixed $n=1,2,..$, a well defined and universal $\epsilon\to 0$ limit exists whenever the integral (\[momentint\]) is convergent, in which case it is given by the famous Selberg integral formula [@Selberg] $\overline{Z^n}= s_n$, with: $$\begin{aligned}
\label{selberg0}
&& s_n(\gamma,a,b) = \prod_{j=1}^{j=n} \frac{\Gamma[1+a-(j-1)
\gamma] \Gamma[1+b-(j-1) \gamma] \Gamma(1-j
\gamma)}{\Gamma[2+a+b-(n+j-2) \gamma] \Gamma(1-\gamma)}\end{aligned}$$ where $\Gamma(x)$ is the Euler gamma-function. For $a,b>0$ the domain of convergence is given by $\gamma<1/n$. It corresponds to the well known fact that for continuum REM models the distribution $P(Z)$ develops algebraic tails [^5]; hence the integer moments $\overline{Z^n}$ become infinite at a series of transition temperatures $T_c^{(n)}=\sqrt{n}$. The true transition in the full Gibbs measure happens, however, only at $T_c=1$, i.e. $\gamma=\gamma_c=1$. Above $T_c$ the distribution $P(Z)$ exists in the limit $\epsilon=0$, while the formally divergent moments start depending on the cut-off parameter $\epsilon$. An analogous result arises in the log-circular ensemble [@FB] where the moments of $Z_M$ were analyzed, as recalled below. The generalizations to complex $a,b,\beta$, which connect to sine-Gordon physics, as well as a detailed study of the competition with binding transitions to the edges for $a,b<-1$ (in the presence of a cutoff), are mostly left for future studies, although some remarks about the binding transitions are made below in Section \[sec:edge\] [^6].
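Inside the convergence domain the Selberg formula can be cross-checked numerically (our own side computation, with $n=2$, $a=b=0$, and the arbitrary choice $\gamma=0.2$). In this case the product in (\[selberg0\]) collapses, using $\Gamma(x+1)=x\Gamma(x)$, to $s_2=\Gamma(1-\gamma)\Gamma(1-2\gamma)/[\Gamma(2-\gamma)\Gamma(2-2\gamma)]=1/[(1-\gamma)(1-2\gamma)]$, which a direct Monte Carlo estimate of the double integral reproduces:

```python
import math, random

random.seed(0)
gam, N = 0.2, 200000   # need gam < 1/n = 1/2 for the n = 2 moment to converge

# Monte Carlo estimate of \int_0^1 \int_0^1 |x - y|^{-2 gam} dx dy  (a = b = 0, n = 2)
mc = sum(abs(random.random() - random.random()) ** (-2 * gam) for _ in range(N)) / N

# Selberg formula for n = 2, a = b = 0
g = math.gamma
s2 = g(1 - gam) * g(1 - 2 * gam) / (g(2 - gam) * g(2 - 2 * gam))
print(mc, s2)  # both should be close to 1/((1-gam)(1-2*gam)) = 25/12
```

The singularity $|x-y|^{-2\gamma}$ is integrable here, so the naive sampling converges; for $\gamma$ approaching $1/2$ the variance of the estimator blows up, mirroring the divergence of the moment itself.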
negative moments {#neg}
----------------
Our first aim is to reconstruct the distribution $P(Z)$ from its moments in the high temperature phase $\gamma \leq 1$. This entails analytical continuation of the Selberg integral which is a well known difficult problem. Here we present a solution of this problem at $T_c$, the most interesting point. Let us first obtain the negative integer moments for any $T \geq T_c$. It is convenient to define: $$z=\Gamma(1-\gamma) Z = e^{-\beta f} \quad , \quad z_n = \overline{ z^n }$$ which, as found below, and in [@FB], has a well defined limit as $T \to T_c^+$. One then checks for $a=b=0$ the following recursion relation: $$\label{ratpos}
\frac{z_n}{z_{n-1}}=\frac{\Gamma[1-n\gamma]\,\Gamma^2[1-(n-1)
\gamma]\,\Gamma[2-(n-2)\gamma]}{\Gamma[2-(2n-3)\gamma]\,\Gamma[2-(2n-2)\gamma]}$$ with $z_1=\Gamma(1-\gamma)$ (which also implies $z_0=1$), and a similar formula for any $a,b$. Let us now perform the formal analytic continuation to negative integer moments $m_k \equiv
z_{-k}$ in the above recursion (\[ratpos\]) as $m_k/m_{k+1}
\equiv z_n/z_{n-1}|_{n\to -k}$. It is then easy to solve the recursion starting from $m_0=z_0=1$. Restoring $a,b$ we find: $$\label{analcont}
z_{-k}= \prod_{j=1}^k \frac{\Gamma[2+a+b+(k+j+1)\gamma]}{
\Gamma[1+(j-1) \gamma]\,\Gamma[1+a+ j \gamma] \Gamma[1+b+ j \gamma]}$$ We have checked that these expressions satisfy the convexity property $z_n^{p-m} z_p^{m-n} \geq z_m^{p-n}$ for any integers $n<m<p$ of arbitrary sign, which is a necessary condition for positivity of a probability. For $a=b=0$ the formula (\[analcont\]) was announced very recently in [@Ostrov] as a rigorous consequence of certain recursion relations for Selberg integrals.
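These statements are easy to probe numerically. The following minimal sketch (not part of the original derivation; it uses mpmath and the arbitrary illustrative value $\gamma=0.3$) verifies that the continued product (\[analcont\]) indeed solves the recursion (\[ratpos\]) at $n \to -k$, and checks the convexity inequality for moments of mixed sign:

```python
from itertools import combinations
from mpmath import mp, gamma

mp.dps = 30
g = mp.mpf("0.3")  # illustrative value of gamma = beta^2 in the high-temperature phase

def ratio(n):
    """z_n / z_{n-1} for a = b = 0, Eq. (ratpos), continued to any integer n."""
    return (gamma(1 - n*g) * gamma(1 - (n-1)*g)**2 * gamma(2 - (n-2)*g)
            / (gamma(2 - (2*n-3)*g) * gamma(2 - (2*n-2)*g)))

def z(n):
    """Moments z_n: positive n from the recursion, negative n from Eq. (analcont)."""
    if n >= 0:
        out = mp.mpf(1)
        for m in range(1, n + 1):
            out *= ratio(m)
        return out
    k = -n
    out = mp.mpf(1)
    for j in range(1, k + 1):
        out *= gamma(2 + (k+j+1)*g) / (gamma(1 + (j-1)*g) * gamma(1 + j*g)**2)
    return out

# the continued product must solve the recursion at n -> -k, k = 0, 1, 2, ...
recursion_ok = all(abs(z(-k)/z(-k-1) - ratio(-k)) < mp.mpf("1e-20") for k in range(4))

# convexity z_n^{p-m} z_p^{m-n} >= z_m^{p-n} for integers n < m < p of arbitrary sign
convex_ok = all(z(n)**(p-m) * z(p)**(m-n) >= z(m)**(p-n)
                for n, m, p in combinations(range(-3, 4), 3))
```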
Note that the domain in $a,b$ where (\[analcont\]) remains well defined extends to $a> -1-\gamma$, $b>-1-\gamma$, a region larger than the naive expectation $a,b>-1$. This is a signature of the competition between binding to the edge and the random potential as discussed below.
From moments to distribution: the circular case and duality in the high-temperature phase
-----------------------------------------------------------------------------------------
Let us recall for comparison the corresponding analysis for the circle [@FB]. There, the Dyson Coulomb gas integrals give $z_n=\Gamma(1-n \gamma)$, and such a simple formula admits a natural continuation to negative moments $n=-k$. This allows one to immediately and uniquely identify the distribution of $1/z$ and leads to the probability densities: $$\label{gumbel}
P(z)= \beta^{-2} z^{- 1/\beta^2-1} \exp(-z^{-1/\beta^2}) \quad , \quad \tilde P(f)=\beta^{-1} \exp(f/\beta - e^{f/\beta})$$ The latter formula implies that the free energy is distributed with a Gumbel probability density for all $T \geq T_c$. Alternatively the (formal) series for positive moments $g_\beta(y):=\overline{e^{-z e^{\beta y}}} =\sum_{n=0}^\infty \frac{(-1)^n}{n!} z_n
e^{n \beta y}$ is directly summed using $\Gamma(z)=\int_0^{\infty}e^{-t} t^{z-1}\,dt$ into the following generating function $$\label{12}
g_\beta(y) = \int_0^\infty dt\, \exp\{-t - e^{\beta y}\,
t^{-\beta^2}\}$$ What went unnoticed in [@FB] was the remarkable duality relation satisfied by the exact expression for this function[^7]: $$\label{circdual}
g_{\beta}(y)=g_{1/\beta}(y)$$ To see this directly define $\tau=e^{\beta y}\,
t^{-\beta^2}$ implying $t=\tau^{-\frac{1}{\beta^2}}e^{-y/\beta}$, and after substituting this back to the integral (\[12\]) we see that $$\begin{aligned}
\label{12b}
g_{\beta}(y) &=&-\frac{1}{\beta^2}
\int_0^\infty d\tau\, \tau^{-1-\frac{1}{\beta^2}}\,e^{ y/\beta}
\exp\{-\tau - e^{ y/\beta}\tau^{-\frac{1}{\beta^2}}\}\,\\ &=& \int_0^\infty d\tau \left[1+\frac{d}{d\tau}\right]
\exp\{-\tau - e^{ y/\beta}\tau^{-\frac{1}{\beta^2}}\} \equiv g_{\frac{1}{\beta}}(y)\end{aligned}$$ as the second term in the integrand gives no contribution, being a full derivative of an expression vanishing at the boundaries of the integration region. This transformation is formal in the sense that the function $g_{1/\beta}(y)$ defined above for $\beta<1$ has nothing to do with the true generating function in the low temperature phase $\beta>1$. Rather, it is just obtained by taking the formula valid in the high temperature phase and changing everywhere $\beta \to 1/\beta$. However the duality relation still gives precious information, e.g. it implies that an infinite set of derivatives vanishes, $(\beta \partial_\beta)^n g_\beta(y)=0$ for any odd $n \geq 1$, at the self-dual point $\beta=1^-$. In particular the exact result: $$\begin{aligned}
\partial_\beta g_\beta(y)|_{\beta=\beta_c^-} = 0 \quad , \quad {\rm for}~ {\rm all } ~~ y\end{aligned}$$ shows that the “flow” of this function as a function of temperature vanishes at the critical point, quite consistent with a freezing of the whole function (with continuous temperature derivatives). It is in fact quite amazing that precisely this generating function $g_\beta(y)= \overline{\exp(-e^{\beta y} z)}$, with precisely this built-in temperature dependence, is both conjectured to freeze and shown to be self-dual. It is thus tempting to conjecture that freezing and duality are related, i.e. it is $g_\beta(y)$ and no other variation of it (such as e.g. replacing $e^{\beta y}$ by any other function of both $y$ and $\beta$) which freezes [*because*]{} it is self-dual in the whole high temperature phase. The same type of self-duality relation, as we demonstrate below, extends to the interval case supporting the conjecture.
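The duality (\[circdual\]) can also be observed directly on the integral representation (\[12\]). A minimal numerical sketch (mpmath; the values of $\beta$ and $y$ are arbitrary test points):

```python
from mpmath import mp, quad, exp, mpf

mp.dps = 25

def g_high(beta, y):
    """g_beta(y) from the integral representation, Eq. (12)."""
    b2 = beta**2
    # at t -> 0 the factor exp(-e^{beta y} t^{-beta^2}) vanishes faster than any power
    return quad(lambda t: exp(-t - exp(beta*y) * t**(-b2)), [0, 1, mp.inf])

# self-duality g_beta(y) = g_{1/beta}(y), Eq. (circdual)
beta = mpf("0.7")
dual_gap = max(abs(g_high(beta, y) - g_high(1/beta, y)) for y in (-1, 0, 1))
```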
Unfortunately, the direct methods of resummation which work for the circular case fail for the more complicated problem at hand, the interval $[0,1]$. For this reason one needs to develop a more general procedure, which is done below.
From moments to distribution: generalities
------------------------------------------
Here we define the generic moments $M_{\beta}(s) =
\overline{z^{1-s}}$, $M_{\beta}(1)=1$ for any complex $s$, at fixed inverse temperature $\beta$. In particular, the generating function of the cumulants for the free energy $f=-\beta^{-1} \ln
z$ is related to $M_{\beta}(s)$ via $$\begin{aligned}
\label{cum1}
\sum_{n=0}^\infty \frac{s^n}{n!} \beta^n \overline{f^n}^c = \ln M_{\beta}(1+s)\end{aligned}$$ Definition of the probability density $P(z)$ implies the relation $$\begin{aligned}
\int_{-\infty}^{+\infty} e^{2 t} P(e^t) e^{- s t}\, dt =
M_{\beta}(s)\end{aligned}$$ which can be inverted as the contour integral: $$\begin{aligned}
\label{inv1}
e^{- 2 t} P(e^{-t}) = \frac{1}{2 i \pi}\int e^{- s t} M_{\beta}(s)\,ds \label{int1}\end{aligned}$$ e.g. along a contour parallel to the imaginary axis $s=s_0+ i \omega$, provided the integral is convergent, $s_0$ being chosen larger than any singularity of the integrand.
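As a simple illustration of (\[inv1\]), consider the circular model at $\gamma=1$, where $M(s)=\Gamma(s)$ and (\[gumbel\]) gives $P(z)=z^{-2}e^{-1/z}$. The contour inversion can then be carried out numerically as a consistency check (a sketch; $s_0=2$ is an arbitrary admissible choice, and the contour is truncated at $|\omega|=40$ where $\Gamma(s_0+i\omega)$ is negligible):

```python
from mpmath import mp, quad, gamma, exp, log, pi, re, mpf

mp.dps = 20

def P_contour(z, s0=mpf(2)):
    """Invert Eq. (inv1) along s = s0 + i*omega for the circular model at gamma=1,
    where M(s) = Gamma(s)."""
    t = -log(z)  # z = e^{-t}
    integrand = lambda w: re(exp(-(s0 + 1j*w)*t) * gamma(s0 + 1j*w))
    # Gamma(s0 + i w) decays like exp(-pi |w|/2): truncate the contour at |w| = 40
    return exp(2*t) / (2*pi) * quad(integrand, [-40, 0, 40])

P_exact = lambda z: z**(-2) * exp(-1/z)  # Eq. (gumbel) at beta = 1
max_err = max(abs(P_contour(z) - P_exact(z)) for z in (mpf("0.5"), mpf(1), mpf(2)))
```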
Further using the definition (\[main\]) the function $g_{\beta}(y)$ is found to satisfy the identities $$\begin{aligned}
\label{rel2}
&& \beta \int_{-\infty}^{+\infty} e^{\beta y (s-1)} g_{\beta}(y)\, dy
= M_{\beta}(s) \Gamma(s-1)\\
&& g_{\beta}(y) = \beta^{-1} e^{\beta y} \frac{1}{2 i \pi}
\int e^{-s y} M_{\beta}(\frac{s}{\beta})
\Gamma(\frac{s}{\beta} -1)\, ds \label{int2}\end{aligned}$$ Hence, once we know $M_{\beta}(s)$, we can retrieve all the interesting distributions. Moreover, relation (\[rel2\]) defines, after integration by parts, the generating function of the cumulants for the probability density defined by $p_{\beta}(y)=-g_{\beta}'(y)$: $$\begin{aligned}
\label{cum2}
\sum_{n=1}^\infty \frac{s^n}{n!} \overline{y^n}^c \equiv \ln{\int_{-\infty}^{\infty} p_{\beta}(y)\,e^{ys}\, dy}= \ln
M_{\beta}(1+\frac{s}{\beta}) + \ln \Gamma(1+\frac{s}{\beta})\end{aligned}$$ Comparison with (\[cum1\]) yields after recalling the series expansion for $\ln \Gamma(1+s)$ in terms of the Euler constant $\gamma_E$ and Riemann zeta-function $\zeta(n)$ the following model-independent relations: $$\label{cum}
\overline{y}=\overline{f}-\gamma_E T, \quad \overline{y^n}^c|_{n
\geq 2} =\overline{f^n}^c + (-1)^n (n-1)! \zeta(n) T^n\,,$$ This relation is valid at all temperatures and follows solely from the definition of $g_\beta(y)$. It is most useful at $\beta=\beta_c=1$, if we accept the freezing scenario: since in that case the l.h.s. freezes at its value at $\beta=1$, we easily retrieve all cumulants of the free energy for all $T \leq
T_c$ just from the knowledge of $g_{\beta=1}(y)$. Conversely, it is useful to test the freezing hypothesis in numerics, as we will see below.
Let us now discuss how these moment relations reflect duality for the circular case. In the latter model $M_{\beta}(s)=\Gamma(1+ (s-1) \gamma)$, hence from (\[cum2\]) one finds: $$\begin{aligned}
\label{cum2a}
\sum_{n=1}^\infty \frac{s^n}{n!} \overline{y^n}^c = \ln \Gamma(1+ s \beta)
+ \ln \Gamma(1+\frac{s}{\beta})\end{aligned}$$ which is manifestly invariant under the formal transformation $\beta \to 1/\beta$. The latter fact implies, via (\[cum2\]), the self-duality of $p_{\beta}(y)$, hence of $g_{\beta}(y)$. Such an indirect method of proving self-duality for $g_{\beta}(y)$ has an advantage when direct verification is difficult in view of the cumbersome and/or implicit form of the generating function in the whole high-temperature phase. We shall see later on that it indeed works for the interval case.
Analytical continuation at the critical temperature and distribution of minima on the interval
==============================================================================================
no edge charges
---------------
Let us continue to focus on the critical temperature $\beta=1$. Denoting $M_{\beta=1}(s)\equiv M(s), g_{\beta=1}(y)\equiv g(y)$, we start with the $a=b=0$ case (no charges at the ends of the interval) for the sake of simplicity. For negative integer values $s=1-n$ one finds from (\[ratpos\]), after exploiting the doubling identity $\Gamma(2 z) = 2^{2 z-1} \Gamma(z)
\Gamma(1/2+z)/\sqrt{\pi}$ the relation: $$\begin{aligned}
&& \frac{M(s+1)}{M(s)} = 2^{3+ 4 s} (1+s)\frac{
\left[\Gamma(\frac{3}{2} + s)\right]^2 }{\pi \Gamma(s) \Gamma(3 +
s)} \label{rec01}\end{aligned}$$ To continue this formula to any $s$ we will use the [*Barnes function*]{}, which under some mild conditions is the only solution [@Barnes] of: $$\begin{aligned}
\label{defbarnes}
&& G(s+1) = G(s) \Gamma(s)\end{aligned}$$ with $G(1)=1$. The Barnes function $G(s)$ is analytic in the complex plane and has zeroes at all non-positive integers $s=0, -1, -2, \ldots$ [@Barnes]. It can be computed as: $$\begin{aligned}
&& G(z) = (2 \pi)^{(z-1)/2} e^{- \frac{1}{2} (z-1)(z-2) + \int_0^{z-1} dx\, x \psi(x) }\end{aligned}$$ where $\psi(x)=\Gamma'(x)/\Gamma(x)$, the integral being on any contour not crossing the real negative axis. Using (\[defbarnes\]) one finds the following analytical continuation for the moments, which is one of the main results of this paper: $$\label{res1}
\overline{z^{1-s}} = M(s) = \frac{2^{2 s^2 + s - 2} }{G(5/2)^2
\pi^{s-1}} \frac{1}{\Gamma(s) \Gamma(s+2)}
\left[\frac{G(s+\frac{3}{2})}{G(s)}\right]^2$$ with $G(5/2)=A^{-3/2} \pi^{3/4} e^{1/8} 2^{-23/24}$ where $A$ is Glaisher-Kinkelin constant $A=e^{1/12 - \zeta'(-1)}=1.28242712$. To guarantee that this is the correct continuation, we have checked (i) positivity: $M(s)$ given above is finite and positive on the interval $s \in [0,+\infty[$ i.e. all real moments $n=1-s<1$ exist. (ii) convexity: on this interval $\partial_s^2
\ln M(s) >0$ (iii) convergence of the integrals (\[int1\],\[int2\]) for $s_0>1$. The latter can be used to compute $g_{\beta=1}(y)$ and $\tilde P(f)=e^{f}/(2 \pi i) \int e^{-f s} M(s) ds$, which are plotted in Fig.\[fig:gfree\]. Note finally that it reproduces the negative integer moments (\[analcont\]) for $a=b=0, \gamma=1$.
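The checks (i) and (ii) are easy to reproduce with standard special-function libraries; the sketch below (mpmath, whose `barnesg` implements $G$) also verifies the normalization $M(1)=1$ and the first two negative integer moments $z_{-1}=24$ and $z_{-2}=21600$ obtained from (\[analcont\]) at $\gamma=1$:

```python
from mpmath import mp, barnesg, gamma, pi, power, log, diff, mpf

mp.dps = 30

def M(s):
    """Analytic continuation (res1) of the moments at beta = beta_c = 1, a = b = 0."""
    s = mpf(s)
    return (power(2, 2*s**2 + s - 2) / (barnesg(mpf(5)/2)**2 * power(pi, s - 1))
            / (gamma(s) * gamma(s + 2)) * (barnesg(s + mpf(3)/2) / barnesg(s))**2)

# normalization and the negative integer moments z_{-1}, z_{-2} from (analcont)
m_checks = [abs(M(1) - 1), abs(M(2) - 24), abs(M(3) - 21600)]

# log-convexity d^2/ds^2 ln M(s) > 0 at a few points on s > 0
convexity = [diff(lambda u: log(M(u)), s, 2) for s in (mpf("0.5"), mpf(2), mpf(5))]
```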
![Color online. Analytical predictions for the interval $[0,1]$ with no edge charge: (i) Left: plot of $g_{\beta_c}(y)$ which, according to the freezing scenario is also, up to a shift, the cumulative distribution of the minimum $V_{min}$ (ii) Right: the free energy density $\tilde P(f)$ at the critical temperature $\beta_c$ for the interval. Both are obtained by the appropriate inverse Laplace transforms (\[inv1\]) and (\[int2\]) from the analytical continuation (\[res1\]) of the moments as indicated in the text[]{data-label="fig:gfree"}](gfree "fig:"){width="50.00000%"}![Color online. Analytical predictions for the interval $[0,1]$ with no edge charge: (i) Left: plot of $g_{\beta_c}(y)$ which, according to the freezing scenario is also, up to a shift, the cumulative distribution of the minimum $V_{min}$ (ii) Right: the free energy density $\tilde P(f)$ at the critical temperature $\beta_c$ for the interval. Both are obtained by the appropriate inverse Laplace transforms (\[inv1\]) and (\[int2\]) from the analytical continuation (\[res1\]) of the moments as indicated in the text[]{data-label="fig:gfree"}](pfree "fig:"){width="50.00000%"}
The free energy cumulants can be determined from (\[cum2\],\[cum\]); one finds $<y>=\frac{7}{2}-2\gamma_E-\ln(2 \pi)$, $<y^2>_c=\frac{4 \pi^2}{3}-\frac{27}{4}$, and for general $n \geq
3$: $$\begin{aligned}
&& <y^n>_c = (-)^{n-1} (n-1)! \big( \zeta(n-1) (2^n-4) - \zeta(n) (3\cdot 2^n -k) +2^{n+1}-1-2^{-n} \big)\end{aligned}$$ with $k=4$ and the same formula for $<f^n>_c$ with $k=3$. For comparison, for the circle $M(s)=\Gamma(s)$, hence $<y>=2 <f>= -2
\gamma_E$, and $<y^n>_c=2 <f^n>_c = 2 (-1)^n (n-1)! \zeta(n)$ for $n \geq 2$.
An important property of $g(y)$ at criticality is its behaviour as $y \to -\infty$. Deforming the integration contour in (\[int2\]) one obtains $g(y)$ as a sum of residues over the (multiple) poles of $M(s)$ at $s=-n$, which generates the expansion in powers of $e^{y}$: $$\begin{aligned}
\label{res1a}
&& g(y) = 1 + (y + A') e^y + (A + B y +C y^2 + \frac{1}{6} y^3) e^{2 y} \\
&& +
e^y \sum_{n=2}^\infty \frac{1}{(2 n)!} \partial_s^{2 n} e^{-s y} 2^{-(n+1)(2 n+3 + 4 s)} \pi^{n+1} \\ \nonumber &\times&
\frac{\Gamma(n+1+s)^{2 n+1} \Gamma(n+3+s) G(s+\frac{3}{2})^2 M(n+1+s)}{(s-1) s^2 (s+1)^4
..(s+n-1)^{2n-1} G(n+1+s+\frac{3}{2})^2} |_{s=-n} \nonumber\end{aligned}$$ with $A'=2 \gamma_E + \ln(2 \pi)-1$ and $C = -0.253846$, $B =
1.25388$, $A = -5.09728$. Let us recall that for the circular model the expression (\[circ\]) implies: $$\begin{aligned}
g^{(circ)}(y) = 1 + e^{y} ( y - 1 + 2 \gamma_E) + e^{2 y}
(\frac{1}{2} y - \frac{5}{4} + \gamma_E) + ..\end{aligned}$$ The behaviour $g(y)-1\sim y e^y$ is precisely the universal tail found by Carpentier and Le Doussal [@carpentier]. It has its origin in the $1/z^2$ forward tail which the probability density of $z$ develops at the critical $\beta=1$, with the first moment $<z>$ becoming infinite. In the opposite limit $y \to + \infty$ one expects a much faster decay, for example $g^{(circ)}(y) = \sqrt{\pi} e^{\frac{y}{4} -2 e^{y/2}} ( 1
+\frac{3}{16} e^{-y/2} +.. )$.
extension to edge charges, binding transition {#sec:edge}
---------------------------------------------
Extending these considerations to any $a,b$, one finds: $$\begin{aligned}
&& M(s)=2^{2 s^2 + s(1+ 2(a+b)) - 3 - 2 (a+b)} \pi^{1-s}
\frac{G(2+a) G(2+b) G(4+a+b)}{\Gamma(2+\frac{a+b}{2}) G(2+ \frac{a+b}{2})^2 G(\frac{5}{2}+ \frac{a+b}{2})^2} \nonumber \\
&& \times \frac{\Gamma(1+\frac{a+b}{2} + s) G(1+ \frac{a+b}{2} +
s)^2 G(\frac{3}{2}+ \frac{a+b}{2} + s)^2}{ G(s) G(1+a+s) G(1+b+s)
G(3+a+b+s)} \label{result}\end{aligned}$$ and one again checks positivity and convexity for $s \in [0,+\infty[$ (for $a,b>-1$). We quote only the second cumulant: $$\begin{aligned}
\label{cum2ab}
&& \overline{y^2}^c_{a,b} = \frac{\pi ^2}{6}+\gamma_E + 3 \phi(4 + a + b) - \phi(2+a) - \phi(2+b)\end{aligned}$$ with $\phi(x) = \psi(x) + (x-1) \psi'(x)$, and the case $a=b$ in the limit $a \to + \infty$ where one then finds: $$\begin{aligned}
\label{cum23}
&& \overline{y^2}_{a,a}^c = \ln(8 a) +\frac{\pi ^2}{6}+\gamma_E +1 + O(a^{-1}) \nonumber \\
&& \overline{y^3}_{a,a}^c = - \frac{\pi ^2}{3} - 2 \zeta(3) + O(a^{-1}) +..\end{aligned}$$ i.e. all cumulants have a limit except the second one. This limit is discussed again below.
A remarkable case is $a=b=-1/2$. Then a simplification occurs: $$\begin{aligned}
&& M(s)=M_{-1/2,-1/2}(s) = 2^{2 s^2 - s - 1} \pi^{1-s} \frac{\Gamma(\frac{1}{2} + s)}{s \Gamma(3/2)} \label{simpler}\end{aligned}$$ One can trace this simplification to the fact that the structure of the correlation matrix becomes much simpler in that case, as detailed in Appendix A. The corresponding distribution is easily found as (again this is for $\beta=\beta_c=1$): $$\begin{aligned}
\label{result0}
&& P(z) = (\frac{\pi}{8})^{3/2} \frac{1}{\Gamma(3/2) z^2} \int_0^z \frac{dz_1}{z_1^{3/2}} \int_{-\infty}^{+\infty} \frac{dt}{\sqrt{2 \pi}} e^{-\frac{t^2}{2} - 3 \sqrt{\ln 2} t - \frac{\pi}{8} \frac{1}{z_1} e^{-2 \sqrt{\ln 2} t}}\end{aligned}$$ which reproduces the above moments, and behaves as $P(z) \sim \pi/z^2$ at large $z$. This yields, after some manipulations: $$\begin{aligned}
\label{result1}
g(y) = \frac{\pi}{4} \int_{-\infty}^{+\infty} \frac{dt}{\sqrt{2 \pi}} e^{-\frac{t^2}{2} - 2\sqrt{\ln 2} t}\int_{e^y}^{\infty}\left(1-\frac{e^y}{u}\right)e^{-\sqrt{\pi u/2}\, e^{-\sqrt{\ln 2} t}}\,du\end{aligned}$$ and one finds $\overline{y^2}^c=4 \ln 2 - 3 + 2 \pi^2/3=6.35232$ and $\overline{y}=1-2 \gamma_E - \ln(\frac{\pi}{2})=-0.606014$ hence $\overline{f^2}^c=4 \ln 2 - 3 + \pi^2/2$.
Let us now briefly discuss the case $a,b<-1$; for simplicity we focus on $b=a$. In that case, the model (\[1\]) requires, at least naively, a short-scale cutoff to avoid the divergence near the edges. However, from e.g. the discussion of Appendix D in [@carpentier] we know that there should be a competition between the random potential in the bulk and the binding effect of the edge: in the presence of disorder it may be more favorable for the particle to explore the bulk and to remain unbound from the edge. It is quite nice that our analytical continuation captures that effect. As mentioned in Section \[neg\], from the negative moments one can guess that the complete domain over which the high temperature phase extends is: $$\begin{aligned}
\label{result1}
a \geq - 1 - \gamma \quad {\rm and} \quad \gamma \leq 1\end{aligned}$$ with $\gamma=\beta^2$ (here $\beta_c=1$), where equality in the first condition corresponds to the binding transition to the edge, while equality in the second corresponds to the freezing transition. This implies in particular that for $\gamma=1$, the case studied above, the binding transition occurs at $a=-2$, and that for any larger value of $a$ the system should be at bulk critical freezing, with, however, some continuous dependence on $a$. We can indeed check that the result (\[result\]) for $M(s)$ leads to a well-defined probability $P(z)$ for any $a>-2$. For instance one sees that the formula (\[cum2ab\]) yields a finite $\overline{y^2}^c$ for any $a>-2$, which however diverges as $a \to -2^+$. The domain of definition becomes $s> -1-a$ for $-1>a>-2$, as the resulting $P(z)$ now acquires a broader tail $\sim 1/z^{3+a}$ at large $z$, while it was $\sim 1/z^2$ for $a>-1$. As $a \to -2$ the tail becomes non-normalizable as $\sim 1/z$, a signature of the binding transition. The case $a=-3/2$ provides a good illustration as (\[result\]) again simplifies into: $$\begin{aligned}
&& M(s) = M_{-3/2,-3/2}(s)=2^{2 s^2 - 5 s + 3} \pi^{1/2-s} \Gamma(s-\frac{1}{2}) \label{simpler2}\end{aligned}$$ which implies that the random variable $z$ can be written $z=z_1 e^{-f_2}$ where $z_1>0$ and $f_2$ are two independent random variables, $f_2$ being gaussian distributed with $\overline{f_2}=- \ln(2 \pi)$ and $\overline{f_2^2}^c=4 \ln 2$, and $z_1$ with distribution $P_1(z_1)=z_1^{-3/2} e^{-1/z_1}/\sqrt{\pi}$, leading to the explicit form: $$\begin{aligned}
P(z) = \frac{1}{z^{3/2} \pi \sqrt{8 \ln 2}} \int_{-\infty}^\infty dt \exp(-\frac{3}{2} t - \frac{1}{z} e^{-t} - \frac{(t+\ln(2 \pi))^2}{8 \ln2})\end{aligned}$$ which does exhibit the $\sim 1/z^{3/2}$ tail at large $z$. We leave further studies of the global phase diagram for arbitrary $a,b$ to the future.
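The factorized representation can be checked directly against (\[simpler2\]): $\overline{z^{1-s}}$ is the product of the Mellin transform of $P_1$ and the Gaussian exponential-moment factor. A numerical sketch (mpmath, evaluated at a few arbitrary values of $s$):

```python
from mpmath import mp, quad, gamma, exp, sqrt, log, pi, power, mpf

mp.dps = 25

def M_exact(s):
    """Eq. (simpler2): M_{-3/2,-3/2}(s) at beta = 1."""
    return power(2, 2*s**2 - 5*s + 3) * power(pi, mpf(1)/2 - s) * gamma(s - mpf(1)/2)

def M_factorized(s):
    """Moments of z = z_1 e^{-f_2}: z_1 with density z^{-3/2} e^{-1/z}/sqrt(pi),
    f_2 an independent Gaussian with mean -ln(2 pi) and variance 4 ln 2."""
    mz1 = quad(lambda u: power(u, 1 - s) * power(u, -mpf(3)/2) * exp(-1/u) / sqrt(pi),
               [0, 1, mp.inf])
    mf2 = exp(-(s - 1)*log(2*pi) + 2*(s - 1)**2 * log(2))  # E[e^{(s-1) f_2}]
    return mz1 * mf2

gap = max(abs(M_exact(s) - M_factorized(s)) for s in (mpf(1), mpf("1.5"), mpf(2)))
```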
High temperature phase for $[0,1]$ interval with no end charges
===============================================================
Let us consider the segment $[0,1]$ at any $\beta \leq \beta_c=1$, i.e. $\gamma=\beta^2<1$. The moments must satisfy (using again the doubling identity): $$\begin{aligned}
\label{recT}
&& \frac{M_{\beta}(s+1)}{M_{\beta}(s)} = \frac{2^{2+\gamma+4 s \gamma}}{\pi} \frac{\Gamma(\frac{3}{2} + s \gamma) \Gamma(1+ \frac{\gamma}{2} + s \gamma) \Gamma(\frac{3}{2} + \frac{\gamma}{2} + s \gamma)}{ \Gamma(1-\gamma + s \gamma) \Gamma(2+\gamma + s \gamma) \Gamma(1+s \gamma)}\end{aligned}$$ We need to find a way of continuing the moments to the complex plane. To this end we define the function $G_\beta(x)$ for $\Re(x)>0$ by [@Zamo]: $$\begin{aligned}
\label{ZamoBarn}
&& \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \ln G_\beta(x) = \frac{x-Q/2}{2} \ln (2 \pi) + \int_0^\infty \frac{dt}{t} \big( \frac{e^{- \frac{Q}{2} t} - e^{- x t}}{(1-e^{-\beta t})(1-e^{-t/\beta})}
+\frac{e^{-t}}{2} (Q/2-x)^2 + \frac{Q/2-x}{t} \big)\end{aligned}$$ where $Q=\beta+1/\beta$. This function is self-dual: $$\begin{aligned}
\label{ZamoBarndual}
&& G_\beta(x) = G_{1/\beta}(x)\end{aligned}$$ and satisfies the property that we need, see e.g. [@Zamo] and Appendix B, $$\begin{aligned}
\label{Gt1}
&&G_\beta(x + \beta) = \beta^{1/2 - \beta x}(2 \pi)^{\frac{\beta-1}{2}} \Gamma(\beta x)\,G_\beta(x)\end{aligned}$$ One can check that $G_\beta(x)$ for $\beta=\beta_c=1$ coincides with the Barnes function $G(x)$ defined in the previous Section, e.g. setting $\beta=1$ in (\[Gt1\]) one sees that $G_1(x + 1)=\Gamma(x)G_1(x)$, and, using $Q=2$ we have $G_1(1)=1$. Similarly to the standard Barnes function the new function $G_\beta(x)$ has no poles and only zeroes, and these are located at $x=-n \beta - m/\beta$, $n,m=0,1,..$. It provides us with a natural generalization which can be used to perform the required analytical continuation for any temperature.
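The integral representation and its properties are easy to check numerically. The sketch below (mpmath; the point $\beta=0.8$, $x=0.9$ is arbitrary) verifies the shift relation (\[Gt1\]) in logarithmic form, the manifest self-duality (\[ZamoBarndual\]), and the coincidence with the Barnes function at $\beta=1$; a small-$t$ cutoff removes a numerically delicate but vanishing part of the integrand:

```python
from mpmath import mp, quad, exp, log, gamma, pi, barnesg, mpf

mp.dps = 30

def lnG(beta, x):
    """ln G_beta(x) from the integral representation (ZamoBarn), Re x > 0."""
    beta, x = mpf(beta), mpf(x)
    Q = beta + 1/beta
    def f(t):
        if t < mpf("1e-8"):   # the bracket vanishes linearly as t -> 0; cutting
            return mp.mpf(0)  # it off avoids catastrophic cancellation
        return ((exp(-Q*t/2) - exp(-x*t)) / ((1 - exp(-beta*t)) * (1 - exp(-t/beta)))
                + exp(-t)/2 * (Q/2 - x)**2 + (Q/2 - x)/t) / t
    return (x - Q/2)/2 * log(2*pi) + quad(f, [0, 1, mp.inf])

beta, x = mpf("0.8"), mpf("0.9")
# shift relation (Gt1), in logarithmic form
shift_gap = abs(lnG(beta, x + beta)
                - ((mpf(1)/2 - beta*x)*log(beta) + (beta - 1)/2*log(2*pi)
                   + log(gamma(beta*x)) + lnG(beta, x)))
dual_gap = abs(lnG(beta, x) - lnG(1/beta, x))                    # (ZamoBarndual)
barnes_gap = abs(lnG(1, mpf("2.5")) - log(barnesg(mpf("2.5"))))  # G_1 = Barnes G
```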
Using the above properties we find that $$\begin{aligned}
\label{MT}
&& \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! M_{\beta}(s) = A_\beta
2^{(s-1)(2+\beta^2 (2 s+1))} \pi^{1-s} \frac{ \Gamma(1+ \beta^2 (s-1)) G_\beta(\frac{\beta}{2} + \frac{1}{\beta} + \beta s)
G_\beta(\frac{3}{2 \beta} + \beta s) G_\beta(\frac{\beta}{2} + \frac{3}{2 \beta} + \beta s) }{G_\beta( \beta + \frac{2}{\beta} + \beta s)
G^2_\beta(\frac{1}{\beta} + \beta s)} \nonumber \\
&&\end{aligned}$$ with $$\begin{aligned}
&& A_\beta=\frac{G_\beta(\frac{1}{\beta} + \beta)^2 G_\beta( 2 \beta + \frac{2}{\beta})}{G_\beta(\frac{3 \beta}{2} + \frac{1}{\beta}) G_\beta(\frac{3}{2 \beta} + \beta) G_\beta(\frac{3 \beta}{2} + \frac{3}{2 \beta})}\end{aligned}$$ correctly reproduces the recursion relation (\[recT\]), hence provides an analytical continuation for the moments valid for $\beta<\beta_c=1$. We have checked numerically that it does satisfy positivity, convexity and a convergent inverse Laplace transform, from which one can compute $P(z)$ and $g_\beta(y)$ using (\[int1\]), (\[int2\]). We will not study these in detail here, but give only a few properties.
Let us first check the duality. One easily sees that if one defines $$\begin{aligned}
&& M_\beta(s) = 2^{1-s} \tilde M_\beta(s)\end{aligned}$$ then $\ln \tilde M_\beta(1+ \frac{s}{\beta}) + \ln \Gamma(1+\frac{s}{\beta})$ is fully invariant under $\beta \to 1/\beta$. From (\[cum2\]) it implies that all $\overline{y^n}^c$ with $n \geq 2$ are invariant by duality, only the average $\overline{y}$ is not. This is not a problem since this average is not expected to be universal, and is easily remedied by defining $\tilde z=z/2$ (which could have been done from the start) and $\tilde g_\beta(y) = \overline{ \exp(- e^{\beta y}z/2) }$. Hence we conclude that up to such a trivial shift the probability $\tilde p_\beta(y)=-\tilde g'_\beta(y)$ is self dual, i.e. $\tilde p_{1/\beta}(y) = \tilde p_\beta(y)$. From the discussion in the previous Section we conjecture that it is this function which freezes at $\beta=\beta_c=1$.
From the result (\[MT\]) we can extract the cumulants of the free energy using (\[cum1\]). We only discuss here the lowest non-trivial cumulant, given by $$\begin{aligned}
\label{f2int}
&& \overline{f^2}^c = \overline{y^2}^c - \frac{\pi^2}{6} T^2 = \frac{1}{\beta^2} \partial^2_s \ln M_{\beta}(1+s)|_{s=0} \\
&& \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! = 4 \ln 2 + h_\beta(\frac{3 \beta}{2} + \frac{1}{\beta}) + h_\beta(\beta + \frac{3}{2 \beta})
+ h_\beta(\frac{3 \beta}{2} + \frac{3}{2 \beta}) -
2 h_\beta(\beta + \frac{1}{\beta}) - h_\beta(2 \beta + \frac{2}{\beta}) + \frac{\beta^2 \pi^2}{6} \nonumber \\
%&& = \beta^2\int_0^{\infty}dt\, t\left\{\frac{e^{\beta^2t}-3e^{-\beta^2t}+2}{(e^{\beta^2 t}-1)(e^t-1)}+
%3e^{-t}\frac{e^{-\beta^2t}}{(e^{\beta^2 t}-1)}\right\} \nonumber\end{aligned}$$ where we have defined the self-dual function (see Appendix B): $$\begin{aligned}
\label{h1b}
\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! h_\beta(x)= h_{1/\beta}(x) = \partial_x^2 \ln G_\beta(x) = \ln{x}+ \int_0^\infty \frac{dt}{t} e^{-xt} \left(1-\frac{t^2}{(1-e^{-\beta t})(1-e^{-t/\beta})} \right)\end{aligned}$$ and we have used $\psi'(1)=\pi^2/6$. The resulting curve $\overline{f^2}^c$ as a function of $\beta$ is plotted in Fig. \[fig:f2\]. One finds that it increases from $\overline{f^2}^c(\beta \to 0)=3$ to $\overline{f^2}^c(\beta=1) = 7 \pi^2/6 - 27/4 = 4.76454$. More discussion is given in Section and Appendix C, together with high temperature expansions.
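The representation (\[h1b\]) makes (\[f2int\]) straightforward to evaluate numerically. The sketch below (mpmath) reproduces the quoted critical value $7\pi^2/6-27/4=4.76454\ldots$ and the duality invariance of $\overline{y^2}^c=\overline{f^2}^c+\pi^2 T^2/6$ (the value $\beta=0.6$ is an arbitrary test point):

```python
from mpmath import mp, quad, exp, log, pi, mpf

mp.dps = 30

def h(beta, x):
    """Self-dual function h_beta(x) = d^2/dx^2 ln G_beta(x), Eq. (h1b)."""
    beta, x = mpf(beta), mpf(x)
    Q = beta + 1/beta
    def f(t):
        if t < mpf("1e-8"):
            return -Q/2            # limit of the integrand as t -> 0
        return exp(-x*t) * (1 - t**2/((1 - exp(-beta*t))*(1 - exp(-t/beta)))) / t
    return log(x) + quad(f, [0, 1, mp.inf])

def f2c(beta):
    """Second cumulant of the free energy, Eq. (f2int)."""
    b = mpf(beta)
    return (4*log(2) + h(b, 3*b/2 + 1/b) + h(b, b + 3/(2*b)) + h(b, 3*b/2 + 3/(2*b))
            - 2*h(b, b + 1/b) - h(b, 2*b + 2/b) + b**2 * pi**2/6)

crit_gap = abs(f2c(1) - (7*pi**2/6 - mpf(27)/4))
b = mpf("0.6")   # the y-cumulant f2c + pi^2/(6 beta^2) is invariant under beta -> 1/beta
y2_dual_gap = abs((f2c(b) + pi**2/(6*b**2)) - (f2c(1/b) + pi**2 * b**2/6))
```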
Gaussian weight model
=====================
We now briefly discuss a case where the above considerations fail, and present below some hints of why this may happen.
We consider now the continuum partition function for the log-correlated field on the full real axis but with a gaussian weight: $$Z=\epsilon^{\beta^2} \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty dx ~ e^{-x^2/2} e^{- \beta V(x)}$$ This problem is appealing as it leads to Mehta integrals and moments $z^{(G)}_n= \overline{z^n}= \overline{Z^n \Gamma(1-\beta^2)^n} = \prod_{j=1}^{j=n}
\Gamma[1- j \beta^2]$, i.e. simpler expressions than for the interval case considered above.
At criticality $\beta=1$ this implies $M^{(G)}(s+1)/M^{(G)}(s)=1/\Gamma(s)$ for $s=-n$, which naturally suggests $M^{(G)}(s)=1/G(s)$. This is positive for $s>0$ but, surprisingly, convexity fails for $s>s_c=1.92586..$. Hence this is not an acceptable analytic continuation.
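The failure of convexity is easy to locate numerically: with $M^{(G)}(s)=1/G(s)$ one has $\partial_s^2\ln M^{(G)}(s)=-\partial_s^2\ln G(s)$, whose sign change pinpoints $s_c$ (a sketch using mpmath's `barnesg`):

```python
from mpmath import mp, barnesg, log, diff, findroot, mpf

mp.dps = 25

def d2lnMG(s):
    """d^2/ds^2 ln M^(G)(s) for the candidate continuation M^(G)(s) = 1/G(s)."""
    return diff(lambda u: -log(barnesg(u)), mpf(s), 2)

# log-convexity holds below s_c and fails above it; the sign change brackets s_c
sc = findroot(d2lnMG, (mpf("1.8"), mpf(2)), solver='illinois')
```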
To get another handle on the problem one notes that this model can be obtained from the large $a$ limit of the interval problem $[0,1]_{aa}$. Writing $x=1/2+y$ and performing the change of variable in (\[momentint\]) one finds: $$\lim_{a \to + \infty} (2 \pi)^{- n/2} 2^{2 a n} (8 a)^{\frac{n}{2}
- \frac{n(n-1)}{2} \gamma} z_n(a,a,\gamma) = z^{(G)}_n(\gamma)$$ Not surprisingly one finds that the pointwise limit: $$M^{(G)}(s) = \lim_{a \to + \infty} (2 \pi)^{- (1-s)/2} 2^{2 a
(1-s)} (8 a)^{\frac{1}{2}(1-s^2)} M_{a,a}(s)$$ yields $1/G(s)$ as expected. From this we also get that for large $a$: $$\partial_s^2 \ln M^{(G)}(s) = - \ln(8 a) + \partial_s^2 \ln M_{a,a}(s)$$ While the second term is nicely positive for all $s>0$, the additional factor $- \ln(8 a)$ makes the total sum negative for $s>s_c$, violating convexity. In other words, while $M_{a,a}(s)$ corresponds to a well-defined probability distribution, namely the problem on the interval with edge charges, $M_G(s)$ then corresponds to this probability “convolved with a gaussian of negative variance” and fails to be a probability. Note that such a shift in the second cumulant $\overline{y^2}$ is indeed needed to obtain a finite final result in (\[cum23\]). All higher cumulants $\overline{y^n}^c|_{aa}$ with $n \geq 3$ have a nice finite limit as $a \to \infty$, and can be extracted from the generating function $$\begin{aligned}
&& \sum_{n=0}^\infty \frac{s^n}{n!} \overline{y^{1+n}}^c = \frac{1}{s} - (s-1) \psi(s) + s -
\frac{1}{2} \ln(2 \pi) - \frac{1}{2} \nonumber\end{aligned}$$ obtained from $1/G(s)$. Hence the main problem seems to lie in the second cumulant, and one may speculate that it is related to an inadequate treatment of zero mode fluctuations. Another (possibly related) observation is that for $a>>1$ the whole contribution to the $[0,1]_{aa}$ integral comes from a very small vicinity (of the widths of $L_a\sim 1/\sqrt{a}$ ) of the mid-point $x=1/2$ of the integration domain. One expects a competition between $L_a$ and the regularization scale for the logarithm, so it may be that the result depends on the order of limits $\epsilon\to 0$ and $a\to \infty$. We leave further study of this problem to the future and now turn to numerical studies.
Numerical study
===============
circular ensemble
-----------------
We now turn to numerical checks for the random variables $V_i$ on $i=1,\ldots,M$ grid points and the associated REM partition function $Z_M=\sum_{i=1}^M e^{-\beta V_i}$. We start with the log-circular ensemble and study the $M \times M$ cyclic correlation matrix (choosing here $W=0$): $$\label{circmat}
C_{ij} = -2 \ln(2|\sin \frac{ \pi (i-j)}{M} |) \quad i \neq j, \quad C_{ii} = 2 \ln M + W$$ whose eigenvalues $\lambda_k=2\ln{M}-2\sum_{n=1}^{M-1}\cos\{\frac{2\pi}{M}nk\}
\ln\{2\sin{\frac{\pi}{M}n}\}$ are all positive, with the uniform mode $\lambda_0=0$ for any $M$. Let us recall that the relation to the continuum model defined above was established in [@FB] where it was shown that at large $M$ one has $\overline{Z_M^n} = z_n \overline{Z_e^n}$ for $\beta^2 n<1$ and $\overline{Z_M^n} \sim M^{1+n^2 \beta^2}$ for $\beta^2 n>1$ (the positive moments which formally diverge in the continuum).
The random variables $V_i$ are generated (for $M$ even) as $$V_l=\sqrt{\frac{2}{M}}\sum_{k=1}^{M/2}\sqrt{\lambda _k}\left[x_k\cos\{\frac{2\pi}{M}kl\}+y_k\sin\{\frac{2\pi}{M}kl\}\right]$$ where the $x_k$ and $y_k$ are two uncorrelated sets of i.i.d. real centered unit-variance Gaussian variables. This is done using the Fast Fourier Transform (FFT).
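A direct transcription of this construction can be used to verify that the sampled field has covariance $C_{ij}$. The sketch below uses plain $O(M^2)$ sums at a small $M$ rather than the FFT (which is needed only at large $M$); it keeps the $k=M/2$ Nyquist mode exactly as written in the formula above, which overweights that single mode by a negligible $O(\ln M/M)$ amount:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 32                          # small grid: the direct O(M^2) sums below suffice
n = np.arange(1, M)
# eigenvalues of the cyclic correlation matrix C, Eq. (circmat) with W = 0
lam = np.array([2*np.log(M)
                - 2*np.sum(np.cos(2*np.pi*n*k/M) * np.log(2*np.sin(np.pi*n/M)))
                for k in range(M)])

k = np.arange(1, M//2 + 1)
l = np.arange(M)
cos_kl = np.cos(2*np.pi*np.outer(k, l)/M)
sin_kl = np.sin(2*np.pi*np.outer(k, l)/M)
sq = np.sqrt(lam[1:M//2 + 1])

N = 40000                       # number of independent samples of the field V
x = rng.standard_normal((N, M//2))
y = rng.standard_normal((N, M//2))
V = np.sqrt(2/M) * ((x*sq) @ cos_kl + (y*sq) @ sin_kl)

# empirical covariance against C_{0j} = -2 ln(2 sin(pi j/M))
emp = np.array([np.mean(V[:, 0]*V[:, j]) for j in (1, 2, 4)])
exact = np.array([-2*np.log(2*np.sin(np.pi*j/M)) for j in (1, 2, 4)])
```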
![Color online. Left: Finite size scaling of $a_M$ for variables with logarithmic correlations, circular ensemble Eq. (\[circmat\]), and interval Eq. (\[intervalM\]), from $M=2^{8}$ to $M=2^{19}$. The predicted slope is $\tilde \gamma=3/2$, numerically we find $\tilde \gamma=1.4 \pm 0.1$. This is compared with independent random variables, the standard uncorrelated REM, where the prediction is $\tilde \gamma=1/2$ as observed. Right: Finite size effect for $b_M$ for variables with the same correlations. The data are consistent with a convergence as $1/\log M$ and extrapolate to $b_M=1 \pm 0.02$, consistent with the predicted value $b_M=1$ in each case (which means an unrescaled variance $\overline{V_{min}^2}^c$ in agreement with the prediction given in the text, in each case).[]{data-label="fig:a_M"}](a_M "fig:"){width="45.00000%"} ![Color online. Left: Finite size scaling of $a_M$ for variables with logarithmic correlations, circular ensemble Eq. (\[circmat\]), and interval Eq. (\[intervalM\]), from $M=2^{8}$ to $M=2^{19}$. The predicted slope is $\tilde \gamma=3/2$, numerically we find $\tilde \gamma=1.4 \pm 0.1$. This is compared with independent random variables, the standard uncorrelated REM, where the prediction is $\tilde \gamma=1/2$ as observed. Right: Finite size effect for $b_M$ for variables with the same correlations. The data are consistent with a convergence as $1/\log M$ and extrapolate to $b_M=1 \pm 0.02$, consistent with the predicted value $b_M=1$ in each case (which means an unrescaled variance $\overline{V_{min}^2}^c$ in agreement with the prediction given in the text, in each case).[]{data-label="fig:a_M"}](b_M "fig:"){width="45.00000%"}
![Color online. Circular case: cumulative distribution of the rescaled minimum $Q_M(y)$ minus the prediction (\[circ\]) based on the freezing scenario $g_{\beta_c}(y)$. The number of samples is $10^7$. The difference is small on the scale of unity. Although it is slow, the convergence is apparent.[]{data-label="fig:cumulative"}](cumulative){width="70.00000%"}
![Color online. Circular case: distribution of the free energy in the high temperature phase, for various temperatures.[]{data-label="fig:freeenergy"}](Free-energy){width="70.00000%"}
From the distribution of the minimum $V_{min}$ in systems of up to $M=2^{19}$ we have computed the coefficients $a_M$, $b_M$ and the distribution of the variable $y$ in (\[rescaledmin\]) by fixing $\overline{y}$ and the variance $\overline{y^2}^c$ to their values for the distribution (\[circ\]). The asymptotics of the coefficients $a_M$ and $b_M$ in (\[rescaledmin\]) are shown in Fig. \[fig:a\_M\]. They exhibit a reasonable agreement with the conjecture (\[am&bm\]) with $A=1$ but one clearly sees that convergence is slow. Convergence to $b_M=1$ would mean that the prediction $\overline{V_{min}^2}=\pi^2/3$ is correct. The cumulative distribution $Q_M(y)$ of the rescaled minimum, i.e. the variable $y$, is shown in Fig. \[fig:cumulative\] where the cumulative distribution (\[circ\]) has been subtracted. One sees that although the difference is small its convergence, if any, to zero is extremely slow (empirically a $\sim 1/\sqrt{\ln M}$ decay seems to roughly account for the data, but we do not wish to make any strong claim here).
Then we computed the distribution of the free energy at various temperatures. In Fig. \[fig:freeenergy\] we have first normalized the free energy distribution to the same average and variance as the unit cumulative Gumbel distribution, i.e. $\exp(-e^x)$, then plotted the difference between the resulting cumulative distribution $Q_{resc}(f)$ and the Gumbel expression. This shows that the convergence is very fast at $\beta=1/2$ but rather slow already at $\beta=1$, where we have little doubt about the result. This is consistent with the fact that the convergence for the minimum is so slow.
To test the freezing scenario we also compute numerically $g_\beta(y)$ for various temperatures. First in Fig. \[fig:freeze1\] we test the convergence of the numerically determined $g_{\beta_c=1}(y,M)$ to the analytical prediction $g_{\beta_c}(y)$ in (\[circ\]) as a function of $M$. Then in Fig. \[fig:freeze2\] we test whether $g_{\beta}(y,M)-g_{\beta_c}(y,M)$ at fixed $\beta>\beta_c$ decreases to zero as $M$ becomes large, which is the freezing conjecture. In practice we first compute the free energies $f_i$, compute their mean $\bar f$ and variance $\sigma$, define rescaled energies $f'_i=(f_i-\bar f)\sqrt{\frac{\pi^2}{3}(1-T^2/2)}/\sqrt{\sigma} +\gamma_E T - 2 \gamma_E$ and define $g_\beta(y,M)$ as the mean of $e^{-e^{\beta (y-f'_i)}}$ which, by construction and by virtue of (\[cum\]), then has the same average, $-2 \gamma_E$, and variance, $\pi^2/3$, as $g_{\beta_c}(y)$ in (\[circ\]). Comparing Fig. \[fig:cumulative\] and Fig. \[fig:freeze1\] we see that a good fraction of the difference in Fig. \[fig:cumulative\] is already due to finite size corrections at $\beta_c$ (which have nothing to do with testing the freezing scenario).
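In code, this rescaling pipeline can be sketched as follows (a minimal illustration in Python; the function name and the interface are ours, not part of the numerics of the text). The rescaling enforces the two target moments through the convolution identity $\overline{y^2}^c = \overline{f'^2}^c + \pi^2 T^2/6$:

```python
import numpy as np

GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant

def g_beta_of_y(f_samples, beta, y_grid):
    """Estimate g_beta(y, M) = mean_i exp(-exp(beta*(y - f'_i))) from free-energy
    samples, rescaled so that the implied y-distribution has mean -2*gamma_E and
    variance pi^2/3, matching g_{beta_c}(y)."""
    T = 1.0 / beta
    f = np.asarray(f_samples, dtype=float)
    # target variance of f' so that Var(y) = Var(f') + pi^2 T^2/6 equals pi^2/3
    target_var = (np.pi**2 / 3.0) * (1.0 - T**2 / 2.0)
    fp = (f - f.mean()) * np.sqrt(target_var / f.var()) + GAMMA_E * T - 2.0 * GAMMA_E
    return np.array([np.exp(-np.exp(beta * (y - fp))).mean() for y in y_grid])
```

By construction $g_\beta(y,M)$ decreases from $1$ to $0$ as $y$ increases, like the exact $g_{\beta_c}(y)$.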
![Color online. Circular case: convergence of $g_\beta(y,M)$ at $\beta=\beta_c$. We see that the scale is smaller than in Fig. \[fig:cumulative\] but that convergence is very slow.[]{data-label="fig:freeze1"}](freeze){width="70.00000%"}
![Color online. Circular case: direct test of the freezing scenario: convergence of $g_\beta(y,M)-g_{\beta_c}(y,M)$ (both are numerically measured and rescaled as explained in the text). We see that the scale is smaller by a factor around $2$ to $4$ than on Fig. \[fig:cumulative\], but that convergence is very slow.[]{data-label="fig:freeze2"}](compare){width="70.00000%"}
universality of circular ensemble: cyclic matrices, GFF inside a disk with Dirichlet boundary condition
-------------------------------------------------------------------------------------------------------
It is important to discuss now the universality of this result, as it is a rather subtle point. The general issue of universality for logarithmic REM’s can be formulated as follows. Consider sequences of $M$-dependent correlation matrices $C_{ij}^{(M)}$. What are the possible universality classes for the associated REM in the limit $M \to + \infty$, what are their basins of attraction and the conditions for convergence? One may ask two questions: (i) extremal universality classes, i.e. correlation matrices which have asymptotically the same distribution of the minimum $V_{min}$ (up to a shift by a $M$-dependent constant) (ii) more restrictive universality classes valid for any $\beta$, i.e. correlation matrices which have asymptotically the same distribution of free energy, and generating function $g_\beta(y)$ (up to a shift by a $M$-dependent constant) for any $\beta$. It is reasonable to expect that each universality class of type (ii) corresponds to a continuum model. Obviously, two sequences $C_{ij}^{(M)}$ which belong to the same class (ii) also have the same distribution of extrema. But there are counterexamples to the converse (see below). Since classifying these classes is a formidable problem, here we only make a few remarks about the universality class of the circular ensemble. The class corresponding to the interval is discussed below.
Let us start from (\[circmat\]) and discuss various generalizations in the subset of cyclic (also called periodic or circulant) matrices, i.e. those which can be written as: $$\label{eigen}
C_{ij}=\frac{1}{M} \sum_{k=0}^{M-1} \lambda_k e^{2 i \pi (i-j) \frac{k}{M}}$$ with ($M$-dependent) real eigenvalues $\lambda_k$. The eigenvalue $\lambda_0$ corresponds to the uniform mode (often called the zero mode in the GFF context). Logarithmic correlations mean that we assume $\lambda_k \sim 1/k$ in some broad range of $k$ at large $M$, as specified below.
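For circulant $C_{ij}$ the correlated Gaussian variables can be generated efficiently by filtering white noise through an FFT. A minimal sketch (the function names are ours; for illustration we use the eigenvalue choice of the "sharp model" defined later in the text, with the zero mode set to zero):

```python
import numpy as np

def sample_circulant_gaussian(lam, rng):
    """One realization of centered Gaussians V_i with circulant covariance
    C_ij = (1/M) sum_k lam_k exp(2*pi*1j*(i-j)*k/M); requires lam real,
    nonnegative, with lam[k] == lam[M-k] so that C is real symmetric."""
    M = len(lam)
    w = rng.standard_normal(M)                     # i.i.d. unit Gaussians
    V = np.fft.ifft(np.sqrt(lam) * np.fft.fft(w))  # filter white noise
    return V.real                                  # imaginary part is round-off

def lam_sharp(M):
    """Eigenvalues of the 'sharp model' of the text: lam_k = M/min(k, M-k)."""
    k = np.arange(1, M)
    lam = np.zeros(M)                              # lam_0 = 0: zero mode removed
    lam[1:] = M / np.minimum(k, M - k)
    return lam
```

One can check that the empirical covariance of many such samples reproduces $C_{ij}$, whose first row is `np.fft.ifft(lam).real`.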
Starting from (\[circmat\]) let us first make the observation that adding a fixed $W>0$ of $O(1)$ on the diagonal of $C_{ij}$ shifts all eigenvalues by a constant $O(1)$ and does not change the universality class, in both senses (i) and (ii), at large $M$. On the other hand, a shift: $$C_{ij} \to C_{ij} + \sigma$$ for all $(i,j)$ shifts only the uniform mode $\lambda_0 \to \lambda_0 + \sigma$. It is equivalent to adding a global random gaussian shift $v$ to all $V_i$, i.e. $V_i \to V_i + v$ where $\sigma=\overline{v^2}$. It thus results in the convolution of the distribution of $V_{min}$ (and of the free energy) by a gaussian of variance $\sigma$. One such example, discussed again below, is to consider the distribution of the GFF (using the full plane Green function) on a circle of radius $R<1$ (and cutoff $R \epsilon$, i.e. performing a global contraction): it shifts all $C_{ij} \to C_{ij} - 2 \ln R$ in (\[circmat\]). Hence we keep in mind that there is really [*a family of distributions differing by their second cumulant*]{}, and we will enforce in our numerics the condition $\lambda_0=0$, which we believe selects the distribution (\[circ\]).
### GFF along an arbitrary circle
![Color online. Eigenvalues of the correlation matrix corresponding to the GFF along a circle in a disk domain with zero boundary condition (Dirichlet). []{data-label="fig:disk"}](lambda2){width="50.00000%"}
One possible generalization of the circular model (\[circmat\]) along these lines is the GFF inside a disk of radius $L$ with $V=0$ on the boundary as studied by e.g. Duplantier and Sheffield [@Qgrav]. Using the Dirichlet Green function $G_L(z,z') = - \ln \frac{ L |z-z'|}{|L^2 - z \bar z'|}$, the correlation matrix for the discrete model on a circle of radius $R$ inside the disk is then for $i \neq j$ and denoting $\rho=R/L$: $$\begin{aligned}
\label{dirichlet}
&& C_{ij} = - 2 \ln \frac{ 2 \rho |\sin(\frac{\theta_i - \theta_j}{2})|}{\sqrt{1+\rho^4 - 2 \rho^2 \cos(\theta_i-\theta_j)}} \quad , \quad C_{ii} = 2 \ln L + 2 \ln (1- \rho^2) - 2 \ln \epsilon
\end{aligned}$$ In the small $\rho=R/L$ limit, equivalently fixed $R$ and large $L$, one finds $C_{ij} \approx -2 \ln \rho - 2 \ln ( 2 |\sin(\frac{\theta_i - \theta_j}{2})| )$ and $C_{ii} \approx 2 \ln L - 2 \ln \epsilon$. Choosing [@footnote] $\epsilon=R/M$ one sees that one indeed recovers the FB model (\[circmat\]) (with $W=0$) [*up to a shift $\sigma = 2 \ln (L/R)$ in the zero mode*]{} $\lambda_0$ of the matrix, i.e. all eigenvalues of the correlation matrix are the same as FB except the uniform mode. This gives us the precise meaning of the universality of the results of FB [@FB]: it holds for small $\rho=R/L$ for the Dirichlet GFF on the disk and up to a (trivial) convolution by a gaussian of width $2 \ln (L/R)$. The next question is whether the universality extends to other circular contours on the disk with $R/L$ not necessarily small. The answer is no, as can be argued from an examination of the eigenvalues obtained by diagonalizing (\[dirichlet\]) for arbitrary $\rho$. As shown in Fig. \[fig:disk\], at large $M$ the eigenvalues are essentially the same as those for FB, i.e. small $\rho$, apart from the few largest ones - whose number does not change and remains finite as $M$ becomes large. We expect, however, that since these are the [*largest*]{} eigenvalues, despite being few they will change the distribution of the extremum, which will hence depend continuously on the ratio $R/L$ (with a similar discussion as above concerning the zero mode and convolution by a gaussian).
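As an illustration, the matrix (\[dirichlet\]) and its spectrum can be built directly (a sketch; the function name and parameter values are ours). In the small-$\rho$ limit the off-diagonal entries reduce to the FB matrix shifted by $-2\ln\rho$:

```python
import numpy as np

def dirichlet_circle_covariance(M, rho, L=1.0):
    """Covariance of the GFF on M equally spaced points of a circle of radius
    R = rho*L inside a disk of radius L with Dirichlet boundary condition,
    Eq. (dirichlet) of the text, with short-distance cutoff eps = R/M."""
    theta = 2.0 * np.pi * np.arange(M) / M
    dth = np.subtract.outer(theta, theta)
    with np.errstate(divide='ignore'):
        C = -2.0 * np.log(2.0 * rho * np.abs(np.sin(dth / 2.0))
                          / np.sqrt(1.0 + rho**4 - 2.0 * rho**2 * np.cos(dth)))
    eps = rho * L / M
    np.fill_diagonal(C, 2.0*np.log(L) + 2.0*np.log(1.0 - rho**2) - 2.0*np.log(eps))
    return C

# spectrum for a given aspect ratio, as in the figure:
# lam = np.linalg.eigvalsh(dirichlet_circle_covariance(256, 0.5))
```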
### other periodic models
![Color online. Eigenvalues of the correlation matrix corresponding to the periodic models defined in the text.[]{data-label="fig:longrangelambda"}](lambda1){width="50.00000%"}
![Color online. Universality in the case of periodic (circulant) correlation matrices: cumulative distribution of the minimum, with subtraction as in Fig. \[fig:cumulative\]: convergence to a common curve is faster than to the global analytic prediction.[]{data-label="fig:longrange"}](longrange){width="70.00000%"}
On the other hand a much stronger universality property appears to hold when only the [*smallest*]{} eigenvalues are changed. Hence we now test whether the results obtained for the circular case remain valid for all periodic cases (\[eigen\]) with the same behaviour of $\lambda_k \sim 1/k$. Again it is important that $\lambda_0$ be fixed to zero. If $\lambda_0 >0$ this amounts to convolving the distribution of the minimum with a Gaussian of variance $\lambda_0$. For the model to be logarithmic and for strong universality to hold we require that $\lambda_k \to 1/k$ as $M \to \infty$ for $0<k \ll M$. We have tested this conjecture for two models.
Model 1: Sharp model (SM) $\lambda_k=M/k$ for $k=1,\ldots, M/2$ and $\lambda_k= M/(M-k)$ for $k=M/2,\ldots, M-1$
Model 2: The long range model (LRM), a discretization of the Joanny-de Gennes elasticity of the contact line [@joanny]: $\lambda_k = 2 \pi/\sqrt{2(1-\cos(2 \pi k/M))}$.
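A quick numerical comparison of the two spectra (a sketch with our own function names) confirms that they coincide at small $k$ and differ only near $k=M/2$, where the sharp model gives $\lambda_{M/2}=2$ while the LRM gives $\lambda_{M/2}=\pi$:

```python
import numpy as np

def lam_sharp(M):
    """Sharp model: lam_k = M/k for k <= M/2, M/(M-k) above; lam_0 = 0."""
    k = np.arange(1, M)
    lam = np.zeros(M)
    lam[1:] = M / np.minimum(k, M - k)
    return lam

def lam_lrm(M):
    """Long-range (Joanny-de Gennes) model: lam_k = 2*pi/sqrt(2(1-cos(2*pi*k/M)))."""
    k = np.arange(1, M)
    lam = np.zeros(M)
    lam[1:] = 2.0*np.pi / np.sqrt(2.0*(1.0 - np.cos(2.0*np.pi*k/M)))
    return lam
```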
The eigenvalues of these models are compared to those of the circular case in Fig. \[fig:longrangelambda\]; one can see that they differ only for $k$ near $M/2$. As can be seen in Fig. \[fig:longrange\] the convergence of these models to the circular case at fixed $M$ is much faster than their (common) convergence to the analytical prediction. We take this as a signature of the strong universality with respect to variations of the correlation matrix which change only the smallest eigenvalues, within the cyclic class. We check in Appendix C that the first terms in the expansion of $\overline{f^2}^c$ are the same for all these models, which supports that the universality holds at any $\beta$, i.e. both in the senses (i) and (ii) defined above.
interval
--------
We now discuss the $[0,1]$ ensemble. We take for the correlation matrix the Toeplitz form $C_{ij}=C(i-j)$, $i,j=1,\ldots,M$: $$\label{intervalM}
C_{ii}= 4\sum_{k=1}^{M-1}(-1)^k\log\frac{k}{M} + W \;\;\; C_{i \neq j}= - 2 \log\frac{|i-j|}{M}
\label{choice}$$ with $W=0$. This matrix is not diagonal in Fourier space and we cannot use the FFT method. In practice we find the eigenvalues $\lambda_k$ and the normalized eigenvectors $\psi_k(i)$ by a direct diagonalization of the matrix $C_{ij}$. We then generate the correlated random potential as $V_i=\sum_{k=0}^{M-1}\sqrt{\lambda _k} x_k \psi_k(i)$, where the $x_k$ are i.i.d. real unit centered Gaussian variables. Performing this sum together with the direct diagonalization is numerically expensive and limits the number $M$ of correlated variables. In order to achieve good statistics ($\sim 10^7$ samples) we analyze here data only up to $M=2^{12}$.
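The generation procedure just described can be sketched as follows (function names are ours; the clipping of tiny negative eigenvalues is a numerical guard against round-off, not part of the text):

```python
import numpy as np

def interval_covariance(M):
    """Toeplitz covariance of Eq. (intervalM) with W = 0."""
    d = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
    with np.errstate(divide='ignore'):
        C = -2.0 * np.log(d / M)          # diagonal gives +inf, overwritten below
    k = np.arange(1, M)
    np.fill_diagonal(C, 4.0 * np.sum((-1.0)**k * np.log(k / M)))
    return C

def sample_interval_potential(C, n_samples, rng):
    """V_i = sum_k sqrt(lam_k) x_k psi_k(i) with x_k i.i.d. unit Gaussians,
    using a direct diagonalization of C (the method used in the text)."""
    lam, psi = np.linalg.eigh(C)          # columns of psi are eigenvectors
    lam = np.clip(lam, 0.0, None)         # guard against tiny negative round-off
    x = rng.standard_normal((n_samples, len(lam)))
    return (x * np.sqrt(lam)) @ psi.T
```

The empirical covariance of the generated samples reproduces $C_{ij}$ within statistical error.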
To justify our choice for the diagonal element in (\[intervalM\]) let us recall a useful property of any Toeplitz matrix: if the function $f(\theta)=C_{11}+2\sum_{k=1}^{M-1}C_{1,k+1} \cos{(k\theta)}$ is positive $\forall \, \theta\in[0,2\pi)$, then $C$ is positive definite (for any $M$). This is seen by noting that for any vector $v_k$, $k=0,\ldots,M-1$, one has $\sum_{k,\ell=0}^{M-1} v_k v_{\ell} C(k-\ell) = \int_0^{2 \pi} \frac{d\theta}{2 \pi} f(\theta) |v(\theta)|^2$ where $v(\theta)=\sum_{k=0}^{M-1} v_k e^{i \theta k}$ and $C(k)=\int_0^{2 \pi} \frac{d\theta}{2 \pi} f(\theta) e^{i k \theta}$. More importantly, it can be shown that the converse is true for large $M$ [@toeplitz]. For the choice of Eq. (\[choice\]) this function has a global minimum at $\theta =\pi$, for which $f(\theta=\pi)=0$. As a result the matrix $C_{ij}$ is positive definite and in the large $M$ limit one finds that the smallest eigenvalue goes rapidly to zero while the corresponding eigenvector components alternate as $(-1)^k$. Note that the diagonal element in (\[intervalM\]) behaves as $C_{ii} \sim 2 \ln M + O(1)$ at large $M$, as expected and similarly to the circular case. Though a convenient choice for proving positivity, other choices with similar large-$M$ behaviours would do as well.
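This positivity criterion is easy to check numerically for the choice (\[intervalM\]) (a sketch with our own grid and names); the diagonal value is tuned precisely so that $f(\pi)=0$:

```python
import numpy as np

M = 64
k = np.arange(1, M)
# first row of the Toeplitz matrix (intervalM) with W = 0
C_row = np.concatenate(([4.0 * np.sum((-1.0)**k * np.log(k / M))],
                        -2.0 * np.log(k / M)))

def f_symbol(theta):
    """f(theta) = C_11 + 2 sum_{k>=1} C_{1,k+1} cos(k*theta)."""
    return C_row[0] + 2.0 * np.sum(C_row[1:] * np.cos(k * theta))

# f >= 0 everywhere on [0, 2*pi), with its global minimum f(pi) = 0
vals = np.array([f_symbol(t) for t in np.linspace(0.0, 2.0*np.pi, 721)])
```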
We have analyzed the distribution of the minimum $V_{min}$ and computed the coefficients $a_M$, $b_M$ and the distribution of the variable $y$ in (\[rescaledmin\]) by fixing $\overline{y}=7/2 -2 \gamma_E - \ln(2 \pi)$ and the variance $\overline{y^2}^c=\frac{4}{3} \pi^2 - \frac{27}{4}$ to their values given by the analytical prediction. The convergence to $b_M=1$, shown in Fig. \[fig:a\_M\] (right), is thus a test of our prediction $\overline{V_{min}^2}^c=\frac{4}{3} \pi^2 - \frac{27}{4}$. The convergence of the coefficients $a_M$ and $b_M$ is quite similar to the circular case. The cumulative distribution $Q_M(y)$ of the rescaled minimum, i.e. of the variable $y$, is shown in Fig. \[fig:cumulinterval\] where the cumulative distribution of Fig. \[fig:gfree\] (our analytical prediction) has been subtracted. Again, the behaviour resembles that of the circular case.
The discussion of the universality for the interval class is more delicate since now the lowest eigenvector is no more generically the uniform mode. However a way to realize it from the GFF can be suggested similarly to the above discussion. One can consider the interval embedded near the center in a large disk with Dirichlet b.c. In the limit of small ratio $\rho$ of interval size to disk radius the above interval model applies, again up to a convolution by a gaussian of variance $2 \ln(1/\rho)$.
![Color online. Interval case: cumulative distribution of the rescaled minimum $Q_M(y)$ minus our analytical prediction, $g_{\beta_c}(y)$, shown in Fig. \[fig:gfree\] and based on the freezing scenario. The number of samples is $10^7$. The difference is small on the scale of unity. Although it is slow, the convergence is apparent.[]{data-label="fig:cumulinterval"}](cumfree){width="70.00000%"}
temperature dependence of the second cumulant of the free energy
----------------------------------------------------------------
![Color online. Second cumulant $\overline{f^2}^c$ of the free energy as a function of inverse temperature $\beta$ for various sizes, as compared to the analytical prediction given in the text. Left: circular ensemble. Right: interval case. []{data-label="fig:f2"}](f2_circular "fig:"){width="45.00000%"} ![Color online. Second cumulant $\overline{f^2}^c$ of the free energy as a function of inverse temperature $\beta$ for various sizes, as compared to the analytical prediction given in the text. Left: circular ensemble. Right: interval case. []{data-label="fig:f2"}](f2_Interval "fig:"){width="45.00000%"}
Finally we have also performed some numerical tests of the temperature dependence of our analytical results in the high temperature phase. We have computed numerically, and plotted in Fig. \[fig:f2\] and \[fig:y2\] as functions of $\beta$, the variance of the free energy distribution $\overline{f^2}^c$ as well as $\overline{y^2}^c$ for the circular case (\[circmat\]) and $\overline{f^2}^c$ for the interval case (\[intervalM\]). They are compared to the analytical predictions, i.e. (i) for the circular case: $$\begin{aligned}
\label{f2circ2}
\overline{f^2}^c &=& (\pi^2/6) \beta^2 \quad (\beta<1) \quad {\rm and} \quad \overline{f^2}^c = (\pi^2/6) (2 - T^2) \quad (\beta>1)\end{aligned}$$ which via Eq. (\[cum\]) corresponds to $\overline{y^2}^c = \frac{\pi^2}{6} ( \beta^2 + \frac{1}{\beta^2})$ for $\beta<1$, which freezes into $\overline{y^2}^c = \pi^2/3$ for $\beta>1$, and, (ii) for the interval case, formula (\[f2int\]) for $\beta<1$ and $\overline{f^2}^c(\beta) = \overline{f^2}^c(\beta_c=1) + \frac{\pi^2}{6}(1-T^2)$ for $\beta>1$. One can verify the good convergence in the high temperature phase. Questions related to the behavior at small $\beta$, and how the numerical convergence could be further improved, are discussed in Appendix C.
![Color online. Circular case: second cumulant $\overline{y^2}^c$ as a function of inverse temperature $\beta$ for various sizes $M$ as compared to the analytic prediction given in the text.[]{data-label="fig:y2"}](y2_circular){width="45.00000%"}
more open questions on universality
-----------------------------------
Let us now indicate a simple example where universality (i) of the distribution of the minimum and (ii) of the free energy at any temperature, discussed above, may differ from each other. Consider the continuum problem on the circle but with an arbitrary smooth and non-singular weight $0<\rho_1 < \rho(\theta) < \rho_2 <1$: $$\begin{aligned}
\label{1prim}
&& Z= \epsilon^{\beta^2} \int_{0}^{2 \pi} d \tilde \theta \rho(\tilde \theta) e^{- \beta V(e^{i \tilde \theta})}
= \epsilon^{\beta^2} \int_{0}^{2 \pi} d \theta e^{- \beta V(e^{i f(\theta)})}\end{aligned}$$ and we consider for instance $\tilde \theta=f(\theta)=\theta+ a \sin(\theta)$ with $a<1$ and $\rho(\tilde \theta)=1/f'(\theta)$. From the second form in (\[1prim\]) one sees that the associated REM can be chosen as $Z_M=\sum_i e^{- \beta V_i}$ with correlation matrix $C_{ij} = - 2 \ln|2 \sin(\frac{1}{2} f(\theta_i)-\frac{1}{2} f(\theta_j))|$ for $i \neq j$ and $\theta_i=2 \pi i/M$, neither a circulant nor a Toeplitz matrix. As shown in Appendix C, at small $a$, $\overline{f^2}^c=\frac{1}{2} a^2 + O(a^4) + O(\beta^2)$, hence the free energy distribution clearly depends on $a$. On the other hand, the first form in (\[1prim\]) suggests that it should have the same distribution of the minimum $V_{min}$ as $\rho(\theta)=1$. Indeed, $V(\theta)-T\ln \rho(\theta)$ as $T \to 0$ should have the same extremal statistics for a given $\epsilon$ regularization (with $T \to 0$ before $\epsilon \to 0$) as $V(\theta)$ provided $\rho(\theta)$ is non-singular. The question of how the freezing scenario works under such circumstances, and what the universality classes are, is left for future studies.
Finally, another challenging question about universality, related to the GFF, concerns REM’s constructed along curves in the plane other than the circle or the interval, i.e. $V_i=V(z_i)$ where the $z_i$ lie along a curve and sample it at large $M$ with a density described by some given arc length $\sqrt{d z d \bar z} \rho(z,\bar z)$. One can use conformal maps to relate various curves to each other, e.g. a circle to a slightly deformed circle, with different weight functions $\rho(z,\bar z)$. Hence we are back to the type of problem described in the preceding paragraph, and one should expect some universality in the distribution of the minimum.
Conclusion
==========
To summarize, we have studied analytically and numerically random energy models based on Gaussian random potentials with logarithmic correlations. We have extended the Fyodorov-Bouchaud (FB) results from the circular ensemble to the interval. We have found the proper analytic continuation from the positive integer moments of the partition function, expressed as Selberg integrals, to arbitrary moments. This analytic continuation of the Selberg integrals, previously an outstanding open problem, is solved here. The solution involves Barnes functions and their generalizations which appear in studies of the Liouville field theory, hence strengthening the already noted link between the two problems. This solution, valid in the high temperature phase, allowed us to obtain the full distribution of the free energy $f$ for $\beta \leq \beta_c$ and up to the critical point. It was generalized, at $\beta=\beta_c$ to the case where additional charges exist at the end of the interval.
The knowledge of the generating function $g_\beta(y)=\overline{\exp(- e^{\beta (y-f)}) }$ at $\beta=\beta_c$ allowed us, via the same [*freezing scenario*]{} hypothesis as put forward in FB for the circular ensemble, to obtain the distribution of the minimum of the gaussian free field (GFF) on an interval, expressed as an integral transform of a Barnes function. The freezing scenario, which asserts that precisely this generating function $g_\beta(y)$ becomes temperature independent in the glass phase for $\beta \geq \beta_c$, was until now based on a traveling wave analysis. While rigorous for the Cayley tree based REM, for which it was introduced, it was only based on a one-loop RG analysis for the type of models at hand [@carpentier]. Here we made what we believe should be considered a step towards a better understanding of this freezing scenario: we discovered that, both for the circular ensemble and its interval counterpart, the analytic expression of $g_\beta(y)$ obeys in the high temperature phase the duality with respect to the transformation $\beta \to 1/\beta$. It implies in particular $\partial_\beta g_\beta(y)=0$, for all $y$, at $\beta=\beta_c$, in perfect agreement with a continuous freezing scenario. While one may notice that the generating function $g_\beta(y)$ is special in being the partition function of the Liouville [*model*]{} (see e.g. the discussion in [@carpentier]), further connections to duality in Liouville [*field theory*]{} remain to be understood (the high temperature phase being the analogue of the weak coupling phase in Liouville).
Detailed numerical calculations of the free energy distribution and of the function $g_\beta(y)$ associated with discrete REM versions of the circular and interval models were performed. The freezing scenario is consistent with our results in both cases, though convergence is found to be very slow. The numerically obtained distribution of the minimum $V_{min}$ of $M$ random Gaussian variables $V_i$ with logarithmic correlation matrices $C_{ij}$ is found to lie close to the predictions, but with only very slow convergence as a function of $M$. In the high temperature phase the convergence to the FB result for the circular case and to the present one for the interval is found to be very convincing, and in full agreement with various high temperature expansions also performed here.
The important question of the universality classes for discrete REMs based on logarithmic matrices $C_{ij}$, and for their continuum analogs, is discussed. The continuum circular ensemble of FB is found to provide a single universality class for all circulant matrices with the appropriate behavior of their spectrum at large $M$, for which we provide several examples. This strong version of universality holds at any temperature, i.e. identical distributions of the free energy for all $\beta$, up to a shift. A weaker version of universality, holding only for the distribution of the minimum $V_{min}$, is discussed through an example. As far as the connection to the GFF is concerned, we discuss the case where the field is sampled along a circle of radius $R$ inside a disk of radius $L$ with Dirichlet boundary condition. We demonstrate universality, [*up to the convolution by a gaussian*]{}, in the limit of a small ratio $R/L$, while in general the distributions depend on the aspect ratio $R/L$.
The present progress opens many more fascinating questions. First one would want to extend these results to other curves in the plane, and even to two dimensional regions. The simplest extension, i.e. the case of the real axis with gaussian weight, also studied here, and for which the present methods are found to fail, shows that more remains to be understood before this can be achieved. Unbounded regions seem to pose a problem, and so does the control of the zero mode. Classifying the universality classes remains a tantalizing open question. One can expect that the conformal invariance of the 2d GFF will play a crucial role in that classification, as it allows one to map one curve into another, with a change in the local length element. The question of which models obey duality and what is the precise connection to the freezing scenario is also outstanding. Further exploration of the connection to the Liouville model, to the Liouville field theory and to Liouville quantum gravity measures, is an important direction for further research. In particular, the distribution of the length of a segment in Liouville quantum gravity seems to connect directly to our results.
[*Acknowledgments*]{}: We are grateful to I. Gruzberg and P. Wiegmann for useful discussions at various stages of the project. YF acknowledges support by the Leverhulme Research Fellowship project “A single particle in random energy landscapes” and PLD from ANR program 05-BLAN-0099-01.
[*Note added:*]{}
After submission, we learned of a recent independent study by D. Ostrovsky [@Ostrovsky2009] who obtained a high temperature expansion of arbitrary moments for the $[0,1]$ problem with no edge charges, and conjectured a formula for these moments. Exploiting the integral representation (\[ZamoBarn\]) for the generalized Barnes function $G_\beta(z)$ together with the following doubling formula: $$\label{doubling}
G_\beta(2z) = C_\beta 2^{2 z^2 - (1+ \beta + \frac{1}{\beta}) z} \pi^{-z} G_\beta(z) G_\beta(z+\frac{1}{2 \beta}) G_\beta(z+\frac{\beta}{2}) G_\beta(z+ \frac{1}{2 \beta} + \frac{\beta}{2})$$ where $C_\beta$ is determined from e.g. $z=1$, we were able to show that his conjecture is equivalent to our formula (\[MT\]). Note however that no discussion of the critical case, duality and freezing is given in [@Ostrovsky2009].
We have also shown that using Dirichlet boundary conditions at large distance $|x|=L$ for the 2D GFF gives a proper meaning to the problematic Gaussian weight case (it yields a shift $2 \ln L$ in the second cumulant $\overline{y^2}$, while maintaining all higher cumulants as given in the text).
the special case of $[0,1]_{-1/2,-1/2}$
=======================================
Here we study the model defined by the partition sum: $$\begin{aligned}
&& Z = \int_{-1}^{+1} \frac{dx}{\sqrt{1-x^2}} e^{\beta V(x)} \\
&& \overline{V(x) V(x')} = - 2 \ln |x-x'|\end{aligned}$$ which is a special case of the interval $[0,1]_{ab}$ defined in the text for $a=b=-1/2$. We show that it corresponds to a REM with a correlation matrix which can be diagonalized in the Fourier basis.
Using the change of variable $x=\cos \theta $, hence $dx=\sin \theta d \theta= \sqrt{1-x^2} d\theta$, we see that $Z$ can as well be written as an integral over a half-circle, involving a new gaussian random potential with a modified correlator: $$\begin{aligned}
&& Z = \int_{0}^{ \pi} d \theta e^{\beta \tilde V(\theta)} \\
&& \overline{\tilde V(\theta) \tilde V(\theta')} = - 2 \ln |\cos(\theta) - \cos(\theta')|
= \sum_{n=1}^\infty \frac{4}{n} \cos( n \theta) \cos(n \theta') + 2 \ln 2\end{aligned}$$ where we have used the formula: $$\begin{aligned}
&& \sum_{n=1}^\infty \frac{2}{n} \cos( n A) \cos(n B) = - \ln(2 |\cos A - \cos B|)\end{aligned}$$
To define the corresponding REM we now take a grid $\theta=2 \pi i/M$ for [*the full circle*]{} and take for correlation matrix: $$\begin{aligned}
&& C_{ij} = \sum_{n=1}^\infty \frac{4}{n} \cos( 2 n \pi i/M) \cos(2 n \pi j/M) + 2 \ln 2\end{aligned}$$ for $i \neq j$. The sum is still infinite, but it has the nice property that it can be made finite. Indeed using that: $$\begin{aligned}
&& \cos( 2 (k + m M) \pi i/M) \cos(2 (k+ m M) \pi j/M) = \cos( 2 k \pi i/M) \cos(2 k \pi j/M)\end{aligned}$$ we can now rewrite: $$\begin{aligned}
&& C_{ij} = \sum_{n=1}^{M} \lambda_n \cos( 2 n \pi i/M) \cos(2 n \pi j/M) + 2 \ln 2 \\
&& \lambda_n = \frac{1}{n} + \sum_{m=1}^\infty ( \frac{1}{n+m M} - \frac{1}{m M})
= \frac{1}{n} - \frac{\gamma + \psi(1+\frac{n}{M})}{M}\end{aligned}$$ One can check numerically that all the eigenvalues are positive. Note that we have subtracted an infinite part on the diagonal so now the diagonal element is also well defined: $$\begin{aligned}
&& C_{ii} = \sum_{n=1}^{M} \lambda_n \cos( 2 n \pi i/M)^2 + 2 \ln 2\end{aligned}$$ Hence for this particular interval model $a=b=-1/2$ we can use the Fourier basis to generate the variables on the full circle and take the minimum only for the half circle (i.e. $M/2 \times M/2$ submatrix). It remains to be understood how this links to the simplification observed in formula (\[simpler\]) in the text, and whether there are other examples of such cases where a Fourier basis can be used.
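The resummation leading to the digamma form of $\lambda_n$ above can be verified numerically (a stdlib-only sketch; the central-difference evaluation of $\psi$ via `math.lgamma` is our approximation, since the standard library has no digamma):

```python
import math

def lam_partial(n, M, m_max=100000):
    """Direct partial sum: lam_n = 1/n + sum_{m>=1} (1/(n+m*M) - 1/(m*M))."""
    s = 1.0 / n
    for m in range(1, m_max + 1):
        s += 1.0 / (n + m * M) - 1.0 / (m * M)
    return s

def lam_closed(n, M):
    """Closed form lam_n = 1/n - (gamma + psi(1 + n/M))/M, with psi evaluated
    by central differencing of math.lgamma (our numerical approximation)."""
    gamma = 0.5772156649015329
    h = 1e-5
    x = 1.0 + n / M
    psi = (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)
    return 1.0 / n - (gamma + psi) / M
```

The two evaluations agree to high accuracy, and the eigenvalues $\lambda_n$, $n=1,\ldots,M-1$, all come out positive.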
Some properties of the generalized Barnes function
==================================================
Let us first check that the function $G_\beta(x)$ defined by (\[ZamoBarn\]) does indeed satisfy the property (\[Gt1\]). We start from the formula (see [@GR] 8.341.3, p.889):
$$\label{loggamma}
\ln \Gamma(x \beta) = \int_0^\infty \frac{dt}{t} \big[ \frac{e^{- \beta x t} - e^{- t}}{1-e^{-t}}
+e^{-t}( \beta x-1) \big]$$
Now, by straightforward algebra (\[ZamoBarn\]) implies: $$\begin{aligned}
\label{ZamoBarn1}
\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \ln G_\beta(x+\beta)-\ln G_\beta(x) = \frac{\beta}{2} \ln(2 \pi) + \int_0^\infty \frac{dt}{t} \big[ \frac{e^{- x t}}{(1-e^{-t/\beta})}
+\frac{e^{-t}}{2}(2\beta x-1) -\frac{\beta}{t} \big]\end{aligned}$$ Changing now $t\to \beta t$, and subtracting (\[loggamma\]) gives: $$\begin{aligned}
\label{ZamoBarn2}\fl
&& \phi_\beta(x)\equiv \ln G_\beta(x+\beta)-\ln G_\beta(x)-\ln \Gamma(x \beta) - \frac{\beta}{2} \ln(2 \pi) \\
&& = \int_0^\infty \frac{dt}{t} \big[ \frac{e^{- t}}{1-e^{-t}}
-\frac{e^{-t \beta}}{2} + \beta x\left(e^{-t \beta}-e^{-t}\right)+ e^{-t}-\frac{1}{t} \big]\end{aligned}$$ Now using the identity $$\label{log}
\int_0^\infty \frac{dt}{t}\left(e^{-t}-e^{-t \beta}\right)=\ln{\beta}$$ we see that $\frac{\partial}{\partial x}\phi_\beta(x) =-\beta\ln{\beta }$ which implies $$\phi_\beta(x)=-\beta x \ln{\beta}+\phi_\beta(0),$$ with $$\begin{aligned}
\label{ZamoBarn2}
&& \phi_\beta(0) = \int_0^\infty \frac{dt}{t} \big( \frac{ e^{-t}}{1-e^{-t}}
-\frac{e^{-t \beta }}{2} + e^{-t}-\frac{1}{t} \big)\end{aligned}$$ In turn, it is easy to see that this integral converges and $\frac{d}{d\beta}\phi_\beta(0)=\frac{1}{2\beta}$, hence $\phi_\beta(0)=\frac{1}{2}\ln \beta+\phi_{\beta=1}(0)$. Combining everything, we see that $\phi_\beta(x)=(\frac{1}{2}-\beta x)\ln{\beta}+c$, where $$\label{ZamoBarn2}
\fl c = \int_0^\infty \frac{dt}{t} \big( \frac{ e^{- t}}{1-e^{-t}}
+\frac{e^{-t}}{2}-\frac{1}{t} \big)\equiv \lim_{z\to 0}\int_0^\infty \frac{dt e^{-zt}}{t} \big( \frac{1}{e^{t}-1}
+\frac{1}{2}-\frac{1}{t} \big)+\int_0^\infty \frac{dte^{-zt}}{t}\frac{e^{-t}-1}{2}$$ which using [@GR] 8.341.1, p.888 yields $$\label{ZamoBarn2}
\fl c = \lim_{z\to 0}\left[\ln{\Gamma(z)}+z-(z-1/2)\ln{z}-\frac{1}{2}\ln{2\pi}+\int_0^\infty \frac{dte^{-zt}}{t}\frac{e^{-t}-1}{2}\right]$$ The last integral is equal to $\frac{1}{2}\ln{\left(z/(z+1)\right)}$, and after straightforwardly taking the limit we find finally $c=-\frac{1}{2}\ln{2\pi}$, in full agreement with (\[Gt1\]).
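The Frullani-type identity (\[log\]) used in this derivation is easily confirmed by direct quadrature (a stdlib-only sketch; the integration cutoff and step count are our choices):

```python
import math

def frullani(beta, T=60.0, n=200000):
    """Trapezoidal estimate of int_0^inf (e^{-t} - e^{-beta*t}) dt/t, whose
    exact value is log(beta); the integrand extends continuously to t = 0."""
    h = T / n
    def g(t):
        if t == 0.0:
            return beta - 1.0   # limit of the integrand at t = 0
        return (math.exp(-t) - math.exp(-beta * t)) / t
    s = 0.5 * (g(0.0) + g(T))
    for i in range(1, n):
        s += g(i * h)
    return s * h
```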
Next we want to obtain the asymptotics. For this it is useful to note that: $$\begin{aligned}
\label{h1a}
h_\beta(x) := \partial_x^2 \ln G_\beta(x) = \int_0^\infty \frac{dt}{t} ( e^{-t} - \frac{t^2 e^{- x t}}{(1-e^{-\beta t})(1-e^{-t/\beta})} )\end{aligned}$$ Exploiting again the identity (\[log\]) we can rewrite the above formula in a form more convenient for applications, see Eq. (\[h1b\]) in the text. For example, by changing variables $t=\tau/x$ in (\[h1b\]) we can immediately find the asymptotic behaviour for $x\to \infty$ at fixed $\beta$ to be given by $$h_\beta(x)=\ln{x}-\frac{1}{2x}\left(\beta+\frac{1}{\beta}\right)+\ldots$$ as long as $x\gg \max(\beta,\beta^{-1})$. The same asymptotic behaviour holds for $h_\beta(z)$ in the complex plane for $\Re{z}>0$ and $|z|\to \infty$.
Note also the useful doubling formula (\[doubling\]) for $G_\beta(x)$ which we were not able to trace in the available literature.
High temperature expansions
===========================
high temperature expansion of REM models
----------------------------------------
It is useful to derive high temperature expansions for a Gaussian REM $Z_M(\beta) =\sum_{i=1}^M e^{- \beta V_i}$ with an arbitrary correlation matrix $\overline{V_i V_j}=C_{ij}$. One expands: $$\begin{aligned}
&& Z_M(\beta) = M - \beta \sum_i V_i + \frac{1}{2} \beta^2 \sum_i V_i^2 - \frac{1}{6} \beta^3 \sum_i V_i^3 + O(\beta^4)\end{aligned}$$ which leads to: $$\begin{aligned}
&& \ln Z_M(\beta) = \ln M - \frac{\beta}{M} \sum_i V_i + \frac{1}{2} \beta^2 ( \frac{1}{M} \sum_i V_i^2 - \frac{1}{M^2}
\sum_{ij} V_i V_j) \\
&& - \frac{1}{6} \beta^3 ( \frac{1}{M} \sum_i V_i^3 - 3 \frac{1}{M^2}
\sum_{ij} V_i V_j^2 + 2 \frac{1}{M^3} \sum_{ijk} V_i V_j V_k )+ O(\beta^4) \nonumber\end{aligned}$$ This leads to the average free energy:
$$\begin{aligned}
&& F_M(\beta) = - \frac{1}{\beta} \overline{\ln Z_M(\beta)} = - \frac{1}{\beta} \ln M - \frac{1}{2} \beta (\frac{1}{M} \sum_i C_{ii} - \frac{1}{M^2}
\sum_{ij} C_{ij}) + O(\beta^3)\end{aligned}$$
and the variance: $$\begin{aligned}
&& \overline{f^2}^c = \frac{1}{\beta^2} \overline{ \ln^2 Z_M(\beta) - \overline{\ln Z_M(\beta)}^2}
= \frac{1}{M^2} \sum_{ij} C_{ij} + \beta^2 \big( \frac{1}{M^2} \sum_{ij} (C_{ii} C_{ij} + \frac{1}{2} C_{ij}^2) \nonumber \\
&& - \frac{1}{M^3} \sum_{ijk} (3 C_{ij} C_{ik} + C_{ii} C_{jk} ) +
\frac{5}{2} \frac{1}{M^4} \sum_{ijkl} C_{ij} C_{kl} \big) +
O(\beta^4) \label{expand}\end{aligned}$$ This result for $\overline{f^2}^c$ is useful for testing universality in the sense (ii), i.e. at any temperature. Let us examine several cases.
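The $\beta \to 0$ limit of the expansion above, $\overline{f^2}^c \to \frac{1}{M^2}\sum_{ij}C_{ij}$, is easy to verify by direct Monte Carlo sampling. The following sketch is our own illustration (assuming NumPy) with an arbitrary positive-definite correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M, beta, N = 4, 0.02, 400_000

A = rng.standard_normal((M, M))
C = A @ A.T                    # an arbitrary positive-definite correlation matrix

V = rng.multivariate_normal(np.zeros(M), C, size=N)  # N draws of (V_1, ..., V_M)
lnZ = np.log(np.exp(-beta*V).sum(axis=1))
var_f = lnZ.var()/beta**2      # (1/beta^2) Var[ln Z]
leading = C.sum()/M**2         # predicted beta -> 0 value
print(var_f, leading)          # close for small beta
```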
Consider first the periodic case discussed in the text, where $C_{ij}$ is a cyclic (i.e. circulant) matrix of the form (\[eigen\]). Then $\frac{1}{M^2} \sum_{ij} C_{ij} = \lambda_0/M$. Fixing $\lambda_0=0$ as we did here, we find that the expression for the second cumulant of the free energy simplifies and vanishes at $\beta=0$ as: $$\begin{aligned}
&& \overline{f^2}^c = \frac{\beta^2}{2 M^2} \Tr C^2 + O(\beta^4) = \frac{\beta^2}{2 M^2} \sum_{k \neq 0} \lambda_k^2 + O(\beta^4)\end{aligned}$$ It is now easy to check that the discrete circular model (\[circmat\]), the sharp model (SM) and the long-range model (LRM) all behave in the limit $M \to +\infty$ as: $$\begin{aligned}
&& \overline{f^2}^c = \beta^2 \sum_{k=1}^\infty \frac{1}{k^2} + O(\beta^4) = \frac{\pi^2}{6} \beta^2 + O(\beta^4)\end{aligned}$$ i.e. as the continuum circular model, for which one has (\[f2circ2\]). This is consistent with the conjecture that these models belong to the same universality class at any temperature. Furthermore, the coefficient of $\beta^2$ can also be obtained, e.g. for the discrete circular ensemble (\[circmat\]), as: $$\begin{aligned}
&& \lim_{M \to \infty} \frac{1}{2 M^2} \sum_{ij} C_{ij}^2 = \frac{1}{2} \int_0^{2 \pi} \frac{d\theta_1}{2 \pi} \int_0^{2 \pi} \frac{d\theta_2}{2 \pi} [ 2 \ln 2 |\sin( \frac{\theta_1-\theta_2}{2} )| ]^2 = \frac{\pi^2}{6}\end{aligned}$$ Note that the diagonal does not contribute to this limit (its contribution is $O(\ln^2 M/M)$), and this will be a general fact.
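This double average is easy to confirm numerically: by translation invariance it collapses to a single integral over $\theta=\theta_1-\theta_2$ (our own check, assuming SciPy):

```python
import math
from scipy.integrate import quad

# (1/2) * average of [2 ln 2|sin((th1-th2)/2)|]^2 reduces, by translation
# invariance, to (1/2) * (1/2pi) * ∫_0^{2pi} [2 ln(2 sin(th/2))]^2 dth
g = lambda th: (2.0*math.log(2.0*math.sin(0.5*th)))**2 / (2.0*math.pi)
val, _ = quad(g, 0.0, 2.0*math.pi, limit=200)
print(0.5*val, math.pi**2/6.0)  # both ≈ 1.6449
```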
For the discrete interval model (\[intervalM\]) we compute the first two terms in the high temperature expansion. We see that the terms involving $C_{ii}$ cancel out, as they should. In the limit $M\to \infty$ we replace the remaining sums by integrals and get $$\begin{aligned}
&& \overline{f^2}^c =-2I_0+2\beta^2\left[I_1+5I_0^2-6\tilde{I}\right]+O(\beta^4)\,,\end{aligned}$$ where we have defined the integrals: $$\begin{aligned}
\label{beta0direct}
I_0 &=&\int_0^{1} dx_1 \int_0^{1} dx_2 \ln|x_2-x_1|, \quad I_1=\int_0^{1} dx_1 \int_0^{1} dx_2 \ln^2|x_2-x_1| \\
&& \tilde{I}=\int_0^{1} dx_1 \int_0^{1} dx_2 \int_0^{1} dx_3 \ln|x_2-x_1| \ln|x_3-x_1|\end{aligned}$$ Evaluating these integrals gives: $$I_0 = -3/2 \quad , \quad I_1=\frac{7}{2},\quad \tilde{I}=\frac{17}{6}-\frac{\pi^2}{18}$$ which leads to the final result: $$\begin{aligned}
\label{expancorr}
&& \overline{f^2}^c =3+\beta^2\left[\frac{2}{3}\pi^2-\frac{9}{2}\right]+O(\beta^4)\,.\end{aligned}$$ As we show below, this coincides with our analytical prediction from the continuum model; see (\[f2n\]).
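The values of $I_0$, $I_1$ and $\tilde I$ quoted above are easily confirmed numerically. For the double integrals one can use the 1D reduction $\int_0^1\!\int_0^1 g(|x_1-x_2|)\,dx_1 dx_2 = 2\int_0^1 (1-u)\,g(u)\,du$, and for $\tilde I$ the closed form of the inner integral, $\int_0^1 \ln|x-y|\,dy = x\ln x + (1-x)\ln(1-x) - 1$ (our own reduction; the sketch assumes SciPy):

```python
import math
from scipy.integrate import quad

# I0 and I1 via the 1D reduction of the double integrals over the unit square
I0, _ = quad(lambda u: 2.0*(1.0 - u)*math.log(u), 0.0, 1.0)
I1, _ = quad(lambda u: 2.0*(1.0 - u)*math.log(u)**2, 0.0, 1.0)

def J(x):
    # J(x) = ∫_0^1 ln|x-y| dy in closed form; J -> -1 at the endpoints
    if x <= 0.0 or x >= 1.0:
        return -1.0
    return x*math.log(x) + (1.0 - x)*math.log(1.0 - x) - 1.0

Itilde, _ = quad(lambda x: J(x)**2, 0.0, 1.0)
print(I0, I1, Itilde)  # ≈ -1.5, 3.5, 17/6 - pi^2/18 ≈ 2.285
```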
As seen in Fig. \[fig:f2\], for the discrete interval model (\[intervalM\]) at finite $M$ the value $\overline{f^2}^c(\beta=0)$ is smaller than $3$. In fact, one can add a term $W_M$ on the diagonal in (\[intervalM\]) so as to tune this value to exactly $3$ for any $M$, without changing the universality class (i.e. with $W_M$ going to zero fast enough). One could try to systematize this idea, e.g. add to the correlation matrix of the discrete model some other matrix, subdominant in the limit $M \to \infty$, so as to fit the lowest-order coefficients in $\beta^p$ to their actual values for the continuum model; those are given below for the interval, see formula (\[f2n\]). We have checked for the circular case that this can easily be implemented up to $p=2$. Whether this allows one to select better discrete models with faster convergence even at lower temperature is left for future studies.
Concerning the class of model (\[1\]), we can similarly check that for the associated discrete REM, i.e. $C_{ij} = - 2 \ln|2 \sin(\frac{1}{2} f(\theta_i)-\frac{1}{2} f(\theta_j))|$ for $i \neq j$ and $\theta_i=2 \pi i/M$ one has: $$\begin{aligned}
&& \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \lim_{M \to \infty} \frac{1}{M^2} \sum_{ij} C_{ij} = - 2 \int_0^{2 \pi} \frac{d\theta_1}{2 \pi} \int_0^{2 \pi} \frac{d\theta_2}{2 \pi} \ln [ 2 |\sin(\frac{f(\theta_1)-f(\theta_2)}{2})| ] = \frac{1}{2} a^2 + O(a^4)\end{aligned}$$ for $f(\theta)=\theta+a \sin(\theta)$, hence at small $a$, $\overline{f^2}^c=\frac{1}{2} a^2 + O(a^4) + O(\beta^2)$ as announced in the text, and there is no universality valid at all temperatures (the universality class in the sense (ii) defined above depends on the function $f(\theta)$).
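This small-$a$ behaviour can be checked numerically. Using the Fourier series $-\ln(2|\sin(x/2)|)=\sum_{k\ge 1}\cos(kx)/k$ and $\frac{1}{2\pi}\int_0^{2\pi} e^{ikf(\theta)}d\theta=(-1)^k J_k(ka)$ for $f(\theta)=\theta+a\sin\theta$, the double integral reduces to $2\sum_{k\ge 1}J_k(ka)^2/k$ (our own reduction, not from the paper; the sketch assumes SciPy):

```python
from scipy.special import jv  # Bessel function J_k

def lambda_avg(a, kmax=100):
    # 2 * sum_{k>=1} J_k(k a)^2 / k; terms decay very fast for small a
    return 2.0*sum(jv(k, k*a)**2 / k for k in range(1, kmax + 1))

a = 0.1
print(lambda_avg(a), a*a/2.0)  # ≈ 0.005 both, up to O(a^4)
```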
Finally, the same expansion (\[expand\]) holds for any continuum REM of the form $Z= \int dx \rho(x) e^{- \beta V(x)}$, and can be obtained from the above by simply replacing $\frac{1}{M^n} \sum_{i_1,\ldots,i_n} \to \frac{1}{(\int dx \rho(x))^n} \int_{x_1,\ldots,x_n}$ and $C_{i_1,i_2}$ by its continuum expression $C(x_1,x_2)$.
High temperature expansion of the analytical result for the interval
--------------------------------------------------------------------
Let us derive the high temperature expansion of our analytical result (\[f2int\])-(\[h1b\]) for the second cumulant of the free energy. Since an independent method to obtain this expansion also exists, as displayed in the discussion above, this constitutes a check of our solution in the high temperature phase. For this we need to use (\[h1b\]) for $x=\alpha_1 \beta +\alpha_2\frac{1}{\beta}$, where $\alpha_{1,2}$ are given positive constants. By introducing $\tau=t/\beta$ we have: $$\begin{aligned}
\fl h_\beta\left(\alpha_1\,\beta+\frac{\alpha_2}{\beta}\right)=\ln{\frac{\alpha_2}{\beta}}+\ln{\left(1+\frac{\alpha_1}{\alpha_2}\,\beta^2\right)}+
\int_0^\infty \frac{d\tau}{\tau} e^{-\alpha_2\tau-\alpha_1\beta^2\tau} \left(1-\frac{\tau^2\beta^2}{(1-e^{-\beta^2 \tau})(1-e^{-\tau})} \right) \nonumber \\
&&\end{aligned}$$ which can be easily used to expand in powers of $\beta^2$. In particular, the leading term from (\[f2int\]) is a constant given by: $$\begin{aligned}
\fl \overline{f^2}^c = 2\ln{3}+\ln{2} + \int_0^{\infty}\frac{d\tau}{\tau}\left(1-\frac{\tau}{1-e^{-\tau}} \right)\left(
2e^{-\frac{3}{2}\tau}-e^{-\tau}-e^{-2\tau}\right)\end{aligned}$$ where the integral can be computed piece by piece using the formulas (see 3.311.7 [@GR]): $$\int_{0}^{\infty}d\tau\,\frac{e^{-\mu\tau}-e^{-\nu\tau}}{1-e^{-\tau}}=\psi(\nu)-\psi(\mu)
,\quad \mbox{and}\quad
\int_0^\infty \frac{d\tau}{\tau}\left(e^{-\mu\tau}-e^{-\nu\tau}\right)=\ln{\frac{\nu}{\mu}}$$ Combining these, we find: $$\begin{aligned}
\label{beta0analytic}
\overline{f^2}^c(\beta=0) = 4\ln{2}-[\psi(1)+\psi(2)-2\psi(3/2)]=3\end{aligned}$$ in agreement with the result obtained above in (\[beta0direct\]) by a direct method.
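Both auxiliary integral formulas, and the resulting value $3$, are easy to confirm numerically (our own check, assuming SciPy for `quad` and the digamma function $\psi$):

```python
import math
from scipy.integrate import quad
from scipy.special import digamma

mu, nu = 1.0, 1.5

def f1(t):
    # (e^{-mu t} - e^{-nu t})/(1 - e^{-t}); finite limit nu - mu as t -> 0
    if t < 1e-8:
        return nu - mu
    return (math.exp(-mu*t) - math.exp(-nu*t))/(-math.expm1(-t))

lhs1, _ = quad(f1, 0.0, math.inf)
lhs2, _ = quad(lambda t: (math.exp(-mu*t) - math.exp(-nu*t))/t, 0.0, math.inf)
print(lhs1, digamma(nu) - digamma(mu))   # psi(nu) - psi(mu)
print(lhs2, math.log(nu/mu))             # Frullani integral

# the final digamma combination indeed equals 3
val = 4.0*math.log(2.0) - (digamma(1.0) + digamma(2.0) - 2.0*digamma(1.5))
print(val)
```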
This expansion can be carried to higher order. Using Mathematica and some heuristics we find that it can be put in the form: $$\begin{aligned}
\label{f2n}
&& \overline{f^2}^c = 3 + ( \frac{2}{3} \pi^2 - \frac{9}{2}) \beta^2 + \sum_{k=2}^\infty
(-1)^{k+1} 3 k ~(\zeta(k+1)- (1 + \frac{B_k}{k}) ) ~\beta^{2 k}\end{aligned}$$ where the $B_k$ are the Bernoulli numbers ($B_k=0$ for odd $k\ge 3$).
References {#references .unnumbered}
==========
[99]{}
Di Francesco P, Mathieu P and Senechal D 1997 Conformal Field Theory (Springer)
Duplantier B and Sheffield S 2009 e-preprints arXiv:0901.0277 and arXiv:0808.1560
Aarts D G A L, Schmidt M and Lekkerkerker H N W 2004 Science [**304**]{} 847
Chamon C, Mudry C and Wen X-G 1996 Phys. Rev. Lett. [**77**]{}, 4194; Castillo H E, Chamon C C, Fradkin E, Goldbart P M and Mudry C 1997 Phys. Rev. B [**56**]{}, 10668
Fyodorov Y V 2009, J. Stat. Mech. P07022 \[e-preprint arXiv:0903.2502\]
Carpentier D, Le Doussal P 2001, Phys. Rev. E [**63**]{}, 026110
Abdalla E, Tabar M R R 1998 , Physics Letters B [**440**]{}, 339
Astala K, Jones P, Kupiainen A, Saksman E 2009 e-preprint arXiv:0909.1003
Bacry E, Delour J and Muzy J F 2001 Phys. Rev. E [**64**]{} 026103; Schmitt F 2003 Eur. Phys. J. B [**34**]{} 85.
Vargas V and Rhodes R, e-preprint arXiv:0807.1036.
Ostrovsky D 2008, Lett. Math. Phys [**83**]{} 265.
Gyorgyi G, Moloney N R, Ozogany K and Racz Z 2007 Phys. Rev. E [**75**]{} 021123; 2008 Phys. Rev. Lett. [**100**]{} 210601
Fyodorov Y V and Bouchaud J P 2008 [*J. Phys. A: Math. Theor.*]{} [**41**]{} 372001.
Faleiro E, Gomez J M G, Molina R A, Munoz L, Relano A and Retamosa J 2004 Phys. Rev. Lett. [**93**]{} 244101.
Bolthausen E, Deuschel J-D and Giacomin G 2001 Ann. Probab. [**29**]{} 1670; Bolthausen E, Deuschel J-D and Zeitouni O 1995 Comm. Math. Phys. [**170**]{} 417; Daviaud O 2006 Ann. Probab. [**34**]{} 962.
Derrida B 1981 Phys. Rev. B [**24**]{}, 2613.
Derrida B, and Spohn H 1988 J. Stat. Phys. [**51**]{} 817.
Mörters P and Ortgiese M 2008 J. Math. Phys. [**49**]{} 125203.
Fyodorov Y V and Bouchaud J P 2008 [*J Phys A: Math*]{} &[*Theor*]{} [**41**]{} 324009 ; Fyodorov Y V and Sommers H-J 2007 [*Nucl. Phys. B \[FS\]*]{} [**764**]{}, 128.
Forrester P J and Warnaar S O 2008 Bull. Amer. Math. Soc. (N.S.) [**45**]{} 489.
Adamchik V S 2001 On the Barnes function, Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation (July 22-25, 2001, London, Canada) pp 15-20.
Gradshteyn I S and Ryzhik I M 2000 Table of Integrals, Series, and Products [*6th ed. (Academic Press)*]{} Eq. 6.561.16 (p. 668).
Fateev V, Zamolodchikov A and Zamolodchikov A 2000 e-preprint arXiv:hep-th/0001012.
Joanny J F and De Gennes P G 1984 J. Chem. Phys. [**81**]{}, 552 .
Bottcher A and Grudsky S 2005 Spectral Properties of Banded Toeplitz Matrices (SIAM, Philadelphia)
equivalently one can choose $\epsilon=1/M$ and $W=-2 \ln R$ which, as we know, does not change the distribution of the maximum
Ostrovsky D 2009, Comm. Math. Phys. [**288**]{} 287
[^1]: LPTENS is a Unité Propre du C.N.R.S. associée à l’Ecole Normale Supérieure et à l’Université Paris Sud
[^2]: In a somewhat different but related context this fact was noticed, but not much exploited, in [@bacry]
[^3]: there are various useful cutoffs, e.g. the circle average, see e.g. [@Qgrav], or the scale invariant cone construction, see e.g. [@Vargas]
[^4]: in practice we want that $\min_i(|x_i-x_{i+1}|)>\epsilon$.
[^5]: for finite grid $1/M$ these tails are cut far away by log-normal behaviour, see a detailed discussion in [@FB]
[^6]: the full conditions for convergence in (\[selberg0\]) are $\Re(a), \Re(b)>-1$, $\Re(\gamma) < \min(1/n,(a+1)/(n-1), (b+1)/(n-1))$
[^7]: In general such duality holds for the transformation $\beta \to \beta_c^2/\beta$ but we specialized in this paper to $\beta_c=1$
A British girl attends an anti-registration protest at Place de la Concorde.
Character History
Heroes Evolutions
Save the Cheerleader, Destroy the World
On November 4, 2011, the British girl attends a protest at Place de la Concorde to oppose the European Union's version of the Evo Registration Act. When Antoine Mercier uses his ability to see snipers on the roofs, the British girl overhears him mutter to himself that something is wrong. When she asks him what it is, he replies that police are surrounding them. The British girl says "Let 'em try, yeah?" and starts to generate heat with her right hand. As she does so, one of the snipers fires at the girl to prevent her from using her ability, but the bullet hits the girl next to her instead. The British girl then releases the heat in her hand, causing it to engulf two officers and light them on fire.
Q:
Radius of convergence of $\sum_{n=1}^{\infty} { (n \sin{\frac{1}{n}})^{n} x^n } $
We need to calculate the radius of convergence $R$ of:
$$\sum_{n=1}^{\infty} {\left(n \sin{\frac{1}{n}}\right)^{n} x^n }.$$
Here's what I did:
$$ \lim_{n\to\infty} { \left| \frac{c_n}{c_{n+1}}\right|} = \lim_{n\to\infty} {\left| \frac{(n\sin{\frac{1}{n}})^n} {((n+1) \sin{\frac{1}{n+1}})^{n+1}} \right|} $$
But I get to a place like this:
$$= \lim_{n\to\infty} { \left|\frac{1}{n\sin{\frac{1}{n}}} \right|} = \left|\frac{1}{\infty \cdot 0}\right|$$
How do I precisely calculate that limit? Is there a better way to find $R$?
A:
Hint:
$$n\sin\frac{1}{n}=\dfrac{\sin\frac1n}{\frac1n}$$
252 F.2d 890
58-1 USTC P 9349
Leo PERLMAN and Sima Perlman, Petitioners-Appellants, v. COMMISSIONER OF INTERNAL REVENUE, Respondent-Appellee. ESTATE of Paul BACKER, by Leo Perlman, Abraham S. Guterman, Alfred G. Baker Lewis, and Charles Backer, Executors, and Julia Backer, Petitioners-Appellants, v. COMMISSIONER OF INTERNAL REVENUE, Respondent-Appellee.
Nos. 91, 92, Dockets 24693, 24730.
United States Court of Appeals, Second Circuit.
Argued Feb. 6, 1958. Decided March 4, 1958.
Abraham S. Guterman, New York City, for petitioners-appellants.
Morton K. Rothschild, Department of Justice, Washington, D.C. (Charles K. Rice, Asst. Atty. Gen., Lee A. Jackson, Department of Justice, Washington, D.C., on the brief), for respondent-appellee.
Before MEDINA, WATERMAN and MOORE, Circuit Judges.
MOORE, Circuit Judge.
1
The petitioners, Leo Perlman and Sima Perlman, his wife, appeal from a decision of the Tax Court, 27 T.C. 775, which adjudged a deficiency of $11,072.78 in their 1950 income tax. In a similar case a 1950 income tax deficiency of $9,517.68 was found against the Estate of Paul Backer and Julia Backer, his wife. The appeals in the two cases have been consolidated.
2
Leo Perlman participated in 1943 in the organization of a company now known as Union Casualty & Life Insurance Company (referred to as the 'Company'). He was the executive vice-president and owned during the period in question 20% to 23.75% of the stock. Because of the then financial structure of the Company, Perlman received during the five years, 1943-1947, only a portion of his $7,500 annual salary, in total $19,886.20 out of the $37,500, leaving $17,613.80 accrued but unpaid. However, he paid an income tax on the full amount as if it had been received and the Company reported a similar amount as an expense.
3
In 1950 the Chief Examiner of the New York Department of Insurance insisted that the unpaid salary be disposed of by payment or by being written off. Because payment would have jeopardized the Company's condition, Perlman reluctantly consented to cancellation. Perlman deducted the $17,613.80 from petitioners' 1950 income tax return as a loss or expense and the Company included this amount as income in its return.
4
Backer also cancelled unpaid compensation under similar circumstances.
5
The sole question presented is whether the cancelled indebtedness represents a deduction from gross income as defined in sections 23 and 24 of the Internal Revenue Code of 1939, 26 U.S.C.A. 23, 24, or is a contribution to the Company's capital.
6
Section 29.22(a)-13 of Treasury Regulations 111 (1939 Code) provides:
7
'Cancellation of Indebtedness.-- (a) In General.-- * * *. In general, if a shareholder in a corporation which is indebted to him gratuitously forgives the debt, the transaction amounts to a contribution to the capital of the corporation to the extent of the principal of the debt.'
8
Cancellation of the unpaid salary indebtedness was necessary to enable the Company to continue its business in which Perlman had a substantial personal and financial interest. The reasoning of this Court in Lidgerwood Mfg. Co. v. Commissioner, 2 Cir., 229 F.2d 241, certiorari denied, 351 U.S. 951, 76 S.Ct. 848, 100 L.Ed. 1475, is applicable here. Such a cancellation may be a contribution to capital even though a ratable contribution is not made by the other stockholders (Chenango Textile Corp. v. Commissioner, 2 Cir., 148 F.2d 296; Carroll-McCreary Co. v. Commissioner, 2 Cir., 124 F.2d 303).
9
The cancellation increased the capital of the Company. Petitioners are not without benefit from the transaction because as the Tax Court (Raum, J.), commented 'it increased the basis of petitioner's stock, and he will obtain tax benefit therefrom when he subsequently sells or otherwise disposes of his stock in a taxable transaction.'
10
The decisions of the Tax Court in Docket No. 24693 and Docket No. 24730 are affirmed.