I watch a lot of crime shows. I don't know what it is about them that makes them so entertaining, but I just can't seem to tear myself away. Currently I've been binge-watching NCIS, and I noticed something that happens a lot when they're getting eyewitness reports on a suspect. So many times they'll show a sketch of a suspect's face, recalled from the witness's memory hours, sometimes days after the incident, and it ends up looking nearly exactly like the person they catch. That is just so hard for me to find believable. Very few people have memories good enough to recall every detail perfectly. Even if we are extremely confident that what we are remembering is what we actually saw, chances are we're at least a little bit off. As soon as we experience something, our mind immediately begins to forget the details. I also strongly believe that our ability to remember things is greatly influenced by stress. So, how much do eyewitnesses to crimes, especially murders, really remember?
First off, we need to form a null and an alternative hypothesis. Our null hypothesis is that the accuracy of an eyewitness's claim is not affected greatly by high emotional levels, such as stress. Our alternative is that the accuracy of an eyewitness's claim is affected greatly by high emotional levels.
This study about eyewitness memory was done using a group of people who had witnessed a shooting four to five months before the study. 21 people witnessed a shooting which killed one person and seriously injured another; they were all interviewed by the police during the investigation. The shooting happened on a major street outside of a gun shop, and so each of the 21 witnesses saw the incident from various vantage points. 13 of these witnesses agreed to participate in this study months later. The researchers took into account inevitable losses or changes in the memories of the participants by looking at the interviews done by the investigating officers in addition to the ones the researchers themselves conducted. The results of both of these interviews showed that the witnesses' reports were usually very accurate to the actual incident. The lowest accuracy, at around 76% for the police interviews and 73% for the research interviews, came from the descriptions of people, while the object descriptions averaged about 89% accuracy for the police and 85% for the researchers.
I noticed that there wasn't much change in the eyewitness reports even after 4 or 5 months. I believe this may be due in part to the fact that people wouldn't often forget the details of an experience such as this, but also because the researchers asked quite a few in-depth questions that likely led to memories resurfacing. In regards to stress, the five witnesses who had direct contact with something pertaining to the shooting reported the most stress during and after it. In this study, stress is recorded as a confounding variable, which unfortunately means that stress was not greatly taken into consideration during the research. However, the witnesses who reported the most stress had memory accuracy levels around 70-80%, which was about the average. The results of this study do show that there is some inaccuracy in eyewitness reports due to the eyewitnesses being unable to retain some details in their memories, particularly after some time has passed.
However, the inaccuracy is not as great as I would have thought, which is reassuring. And it does not seem that stress or other high emotional levels affected the witnesses' abilities to recall details of the incident. At least that means we can usually trust eyewitness accounts to be accurate.
Now, this article is a review of a few other studies that have been done on stress as a factor in eyewitness reports. Quite a few of the studies were done around the 1960s to 80s, and each was published in credible research journals. The review states that many studies showed that memories from before and after an event are occasionally less accurate or forgotten altogether (at least temporarily) if the witness was at a high emotional level at the time of the event. On the other hand, details of the event itself were usually very well retained, even when the event caused high emotional levels. These various studies show that there is less loss of detail in events that cause high emotion than in events where the witnesses did not feel high emotional levels. So it seems that high emotional responses to events actually increase a witness's ability to accurately recount details, not decrease it.
I suppose this means that I was wrong. Stress and other such factors do not seem to hinder a person's ability to recall details of an event. This means that we fail to reject the null hypothesis. I am, however, disappointed that I wasn't able to find more studies done on a topic like this one. I would think that it would be an important subject, especially to forensic scientists and investigators. It's possible that this topic suffers from the file drawer problem, and that studies have been done that were never published. (A rough sketch of the kind of accuracy comparison described above follows the source link below.) Sources:
https://sites.psu.edu/siowfa16/2016/11/30/that-eyewitness-sketch-looks-a-bit-too-accurate/
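To make the hypothesis test mentioned in the post concrete, here is a minimal sketch of how one might compare description accuracy between higher-stress and lower-stress witnesses. The counts are hypothetical stand-ins (they are not taken from the study), and the two-proportion z-test is just one reasonable choice of test; it only illustrates the logic of failing to reject the null when the accuracy rates are similar.

```python
import math

def two_proportion_z_test(correct_a, total_a, correct_b, total_b):
    """Two-sided z-test comparing two accuracy proportions."""
    p_a = correct_a / total_a                    # accuracy of group A
    p_b = correct_b / total_b                    # accuracy of group B
    p_pool = (correct_a + correct_b) / (total_a + total_b)  # pooled accuracy under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal p-value
    return p_a, p_b, z, p_value

# Hypothetical detail counts (not from the study): higher-stress witnesses recalled
# 150 of 200 checked details correctly, lower-stress witnesses 155 of 200.
p_hi, p_lo, z, p = two_proportion_z_test(150, 200, 155, 200)
print(f"high-stress accuracy = {p_hi:.2f}, low-stress accuracy = {p_lo:.2f}")
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value means we fail to reject the null
```

With similar proportions in both groups, the p-value comes out large, which matches the post's conclusion that stress alone does not clearly degrade accuracy.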
Journal of Cultural Research in Art Education
The Journal of Cultural Research in Art Education (jCRAE), first published in 1983, is an annual publication of the United States Society for Education through Art and is completely accessible online at www.jcrae.org/journal/index.php. jCRAE focuses on social/cultural research relevant for art education, including cultural foundations of art education, cross-cultural and multicultural research in art education, and cultural aspects of art in education. These areas should be interpreted in a broad sense and can include arts administration, art therapy, community arts, and other disciplinary and interdisciplinary approaches that are relevant to art education. Theoretical research and studies using qualitative and/or quantitative methods, as well as other strategies, will be considered for publication.
Manuscript Types: Written manuscripts, graphic novels, photo essays, videos, or interactive art pieces in keeping with the focus of jCRAE are welcome.
Mini-Theme: Rethinking Ritual Ecologies
Submissions Due: January 15, 2022
Contact: Co-Editors, Amanda Alexander and Cala Coats: [email protected].
Submission Information
Written submissions should be in Word (.doc) format; include a title page containing the author's name(s) and affiliation(s); a short abstract and keywords; and figures, graphs, and images placed appropriately at the end of the manuscript. The word count for the complete manuscript, not including references and footnotes, should not exceed 5,000 words. A variety of formats are welcome, including traditional academic essays, visual essays, or alternative formats, that fit the purposes of the journal to address issues of art, education, and cultural research. Image-based submissions should be accompanied by explanatory text. For submission of alternative/digital formats, please consult with the Senior Editor for submission preference. For information visit www.jcrae.org. Written papers should be in APA style (7th edition) and submitted by email to Co-Editors Amanda Alexander and Cala Coats at [email protected]. The deadline for submission of manuscripts for the 2022 (Vol. 38) issue of the Journal of Cultural Research in Art Education is January 15, 2022.
Review and Publication Information
All manuscripts will undergo a blind review by 2 reviewers from the Review Board of jCRAE. Upon review, authors will receive a recommendation from the Senior Editor and Guest Co-editor for either Acceptance, Minor Revisions, Major Revisions, or Rejection. Revisions are common and expected upon primary review of a manuscript submission. I encourage authors to submit early or contact me about the possibility of submitting after our deadline. We look forward to receiving your submission!
The United States Society for Education through Art actively advocates for human rights, equality, and diversity in our schools, museums, and other educational settings. The recent murder of George Floyd by Minneapolis police officers, condemned globally, and all past incidents of racism, anti-Blackness, and other forms of discrimination are violations of human rights that diminish humanity and the fundamentals of our democratic society. As a professional art education organization, we denounce police brutality, structural racial oppression, and other discriminatory actions. In solidarity and support for the global movement for resistance and systematic changes, we do not tolerate the prevalence of invisible or overt racism and violence.
As forward-thinking and equitable educators, USSEA members will use art education pedagogies and resources to advocate for social justice and address diversity concerns and problems. We engage our students and community members in an inclusive, equitable, and diverse art education. Recognizing the power of art and visual culture, we believe our work is critically essential to help current and future generations enjoy the right of being treated fairly and equally in all circumstances. Art is one of the few channels in life that allows all people to breathe and to share their unique voices with dignity and validity. We challenge the oppression and discrimination of institutional ignorance and hierarchical structure, along with counteracting unacceptable violence with solidarity and mutual support. We also believe that more equitable, inclusive, and diversified education should be developed in PreK-12 and higher education. Therefore, we encourage all art educators to work to achieve this goal and to promote the active transformation of inequity and bias through resilience, solidarity, and justice in our universities, schools, and communities.
As a professional association of art educators, the USSEA supports and advocates the following:
- Support racial justice and equity for Black, Indigenous, and People of Color (BIPOC).
- Remove and counteract systemic discrimination and violence on the basis of race, gender, sexual orientation, age, disability, and other social identities.
- Care for and support the victims, their families, and communities affected by police violence and other physical and emotional violence.
- Create, develop, and implement art education resources to support diversity, equity, and inclusion.
In addressing the challenge of discrimination, oppression, and injustice, we acknowledge that there are many holes and gaps in higher education and our schools, which must be addressed and reformed to raise the next generation to see their own value and to stand up against inhumane actions and violence in our society. USSEA is a society of art educators who share a strong support for diversity and culturally pluralistic concerns in art education with others nationally and internationally. USSEA members promote greater understanding and respect for learners from all backgrounds through research, art curricula development, instruction, and creative professional practices. USSEA shares and supports projects, research, and publications that are in accordance with the mission of our organization and membership.
About the Organization
The United States Society for Education through Art (USSEA) commits to actively lead and support multicultural and cross-cultural initiatives that foster teamwork, collaboration, and communication among diverse constituencies in order to achieve greater understanding of the social and cultural aspects of the arts and visual culture in education. USSEA was founded in 1977 as an affiliate of the International Society for Education Through Art (InSEA) and the National Art Education Association (NAEA). It is a national association representing persons working in curriculum development, teaching, and research related to art education and cultural knowledge.
USSEA is a society of art educators who share interests in multicultural and cross-cultural concerns in art education with others nationally and internationally to promote greater understanding and respect for learners from all backgrounds through research, art curricula, instruction, and practices that are inclusive and sensitive. USSEA encourages information sharing and distribution by arranging meetings and sponsoring multicultural and cross-cultural programs at the annual NAEA conference, organizing symposia and conferences within the United States, and supporting projects, research, and publications that are in accordance with the mission of the society. Through the USSEA website, members may access up-to-date news and information about USSEA's initiatives, programs, and opportunities.
https://ussea2020.wixsite.com/jcrae
The European Union foreign policy chief said on the fifth anniversary of the 2015 nuclear deal that he is determined to do everything possible together with the remaining signatories and the international community to preserve the agreement. "It is a historic multilateral achievement for global nuclear non-proliferation and is contributing to regional and global security," Josep Borrell, who is also the coordinator of the deal's Joint Commission, said in a statement published on the website of European External Action Service on Tuesday.
Iran and the six world powers concluded the nuclear deal, formally known as the Joint Comprehensive Plan of Action, on July 14, 2015. It was subsequently endorsed unanimously by the United Nations Security Council under Resolution 2231. Borrell stressed that JCPOA remains the only tool to provide the international community with the necessary assurances regarding Iran's nuclear program. "It consists of nuclear-related commitments to be implemented by Iran in exchange for sanctions-lifting by the international community with a view to normalizing trade and economic relations with Iran," he said, adding that the full implementation of the deal by all sides remains crucial.
The United States pulled out in 2018 and reimposed tough unilateral sanctions on Tehran. Borrell noted that the remaining parties will continue to work collectively to address, within the framework of the agreement, existing concerns regarding nuclear implementation, as well as the wider impacts of the US withdrawal and reimposition of sanctions. US President Donald Trump argued that the deal was defective and incomprehensive, and needed to be renegotiated. He launched a so-called "maximum pressure" campaign to force Iranian leaders to renegotiate the deal on his terms. "We should not assume that an opportunity will arise again in the future for the international community to address Iran's nuclear program in such a comprehensive manner," Borrell said.
China Reiterates Support
Chinese Foreign Ministry Spokeswoman Hua Chunying also took the opportunity to reiterate Beijing's firm support for JCPOA, which she described as an excellent example of resolving regional issues through multilateral approaches. "No matter how the international situation evolves, China will always stand on the right side of history and work with other parties to the JCPOA to continue advancing the political and diplomatic resolution of the Iranian nuclear issue," she said in remarks published on the ministry's website. She censured the US for withdrawing from the JCPOA and "thwarting in every means other parties' implementation of the agreement". "Such clear violation of UNSCR 2231 has led to continued tension over the Iranian nuclear issue," she said.
Iran began to step away from its commitments as per the provisions of the deal in 2019, after the remaining signatories failed to make up for the US sanctions over the course of one year. "Iran's scaling back of compliance is the result of the maximum pressure exerted by the US," Hua said, adding that preserving and implementing the JCPOA are the right way to resolve the issue. "Parties to the JCPOA should strengthen dialogue and consultation within the framework of the Joint Commission, seek a solution to the compliance dispute in a step-by-step and reciprocal approach, and restore the balance of rights and obligations under the JCPOA," she said.
Hua regretted that the US is attempting to further undermine the JCPOA by pushing for the extension of the Security Council arms embargo against Iran and threatening to activate the snapback mechanism. The UN arms ban is set to expire in October based on the terms of UNSCR 2231, but Washington is campaigning to prevent its removal. It has threatened to trigger the return of all UN sanctions on Tehran, if its draft resolution fails to win the votes of the UN Security Council. “The international community should jointly oppose the US pushing the Security Council to extend or reinstate sanctions against Iran and urge the US to return to the right track of compliance with the JCPOA and UNSCR,” Hua said.
https://financialtribune.com/articles/national/104338/borrell-vows-all-out-efforts-to-preserve-nuclear-deal
As Firat et al. These models include the order in which elements should appear and prescribe the effect that should be observed given certain causal factors. It takes into consideration the dynamic and reciprocal interaction of the person, environment, and their own behavior. The cognitive consumer behaviour models can be broken down into two sections, analytical and prescriptive. (2006) "Consumer Behaviour Knowledge for Effective Sports and Event Marketing", Taylor & Francis, New York, USA. Analytic Cognitive Models: The Theory of Buyer Behaviour. Howard developed the first consumer decision-model in 1963. The cognitive approach model explains how certain organisms or variables can affect the response, which is the outcome of the purchasing decision. Consumer behaviour, according to Walters (1974: 6), represents specific types of human actions, namely those concerned with the purchase of products and services from marketing organisations. Psychological theories 3. (Bray, J.P. 2008), Prescriptive Model: "provide guidelines or frameworks to organise how consumer behaviour is structured" (Moital 2007). INTRODUCTION: For ages, dissonance in consumer behaviour has captured the imagination of marketers the world over. Another defect of the indifference curves approach is that it does not analyze the effects of advertising, of past behaviour (habit persistence), of stocks, of the interdependence of the preferences of consumers, which lead to behaviour that would be considered as irrational, and hence is ruled out by the theory. The theory of "cognitive dissonance" is of great importance in consumer behaviour, and marketers have a lot of interest in analysing the post-purchase behaviour experienced by consumers. The field of consumer behavior has traditionally borrowed from the behavioral sciences--particularly cognitive psychology--in developing models of consumer decision processes. Consumer behaviour reactions are based on the stimulation intensity used to gain a first impression of the product (Liljana & Elena, 2012, p71). Cognitive-behavioral therapy (CBT) approaches are rooted in the fundamental principle that an individual's cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. Cognitive biases can be applied to marketing, possibly influencing consumer behaviour in a company's favour. It emphasizes the role of mental process rather than the repetition and association of rewards with the stimuli. Economic theories 2. The Theory of Trying is defined as "an attitude toward a reasoned action is replaced by an attitude toward trying and an intention is restricted to an intention to try" (Carsrud et al., 2009). The relevant evidence from three phases of consumer behavior is reviewed here: pre-decisional determinants of product preference, post-decisional determinants of product preference, and information seeking behavior. The Theory of Planned Behaviour & Theory of Reasoned Action: The theory of planned behaviour (TPB) is one of the most widely cited and applied behaviour theories. https://jstanton91.wordpress.com/2016/04/28/first-blog-post A consumer who has a balanced Id, Ego and Super Ego exhibits a balanced approach in all their decisions pertaining to the purchase of products.
A disadvantage of the models could be that the way in which people recognise a need in order to choose and purchase products has changed, which could affect the result given by following the process. Consumer psychologists study a … The development of cognitive psychology in general is credited with the introduction of the Stimulus-Organism-Response model by Hebb in the 1950s. Firstly, we must decipher: what is consumer behaviour? Ajzen, I. (2006) Brief Description of the Theory of Planned Behavior (online). Available from
http://www.fiqhcouncil.org/sweet-toast-yghunwn/65fda4-cognitive-theory-of-consumer-behaviour
The Moral Powers: A Study of Human Nature is a philosophical investigation of the moral potentialities and sensibilities of human beings, of the meaning of human life, and of the place of death in life. It is an essay in philosophical anthropology: the study of the conceptual framework in terms of which we think about, speak about, and investigate homo sapiens as a social and cultural animal. This volume examines the diversity of values in human life and the place of moral value within the varieties of values. Acting as the culmination of five decades of reflection on the philosophy of mind, epistemology, ethics, and human nature, this volume:
- Concludes Hacker's acclaimed Human Nature tetralogy: Human Nature: The Categorial Framework, The Intellectual Powers: A Study of Human Nature, and The Passions: A Study of Human Nature
- Discusses traditional ideas about ethical value and addresses misconceptions held by philosophers, psychologists, and cognitive neuroscientists
The Moral Powers: A Study of Human Nature is a must-listen for philosophers, ethicists, psychologists, neuroscientists, and anyone wanting to understand the nature of value and the place of ethics in human lives.
https://www.storytel.com/it/it/books/the-moral-powers-a-study-of-human-nature-1193275
North Korea’s nuclear program, which seems to have drifted off the global radar in recent months, will most likely return to the spotlight in the near future. The reason for the shift could be the compromise reached by the United States and Iran on the latter’s nuclear program. Influenced by this development, many will wonder whether the deal with Iran couldn’t serve as a model for a possible future compromise with North Korea. Unfortunately, the answer to this question is an unequivocal “no.” The deal with Iran cannot serve as a model for compromise on the North Korean nuclear issue. Moreover, there is currently no model for such a compromise at all. Or, more precisely, such a model exists in theory, but is currently unacceptable, at least for one of the key players. North Korean Fears vs. Iranian Fears The main difference between Iran’s and North Korea’s nuclear programs is that Iran has far fewer reasons to fear either a direct American attack or military support provided by the United States and its allies to the opposition in the event of an armed power struggle in Tehran (the latter scenario, incidentally, isn’t very likely). On the other hand, the Iranian ruling elite has many more reasons to fear potential fallout from Western political and, especially, economic pressure. Iran has a unique sort of limited “theocratic democracy” in which, caveats notwithstanding, elections do play a role and the ruling elite must take into consideration public opinion. The North Korean situation is totally different. Justifiably or not, the North Korean leadership believes large-scale foreign military invasion to be a real threat. It is also well aware of the potential instability of its domestic political position. Knowing the fate that befell Nicolae Ceausescu and Colonel Qaddafi, the North Korean leadership has been certain for decades that only the presence of nuclear weapons can guarantee both the country’s and the regime’s stability in the face of severe economic crisis, the enormous lag behind South Korea and a hostile international environment. On the other hand, the North Korean government faces no domestic pressures. It doesn’t have to worry about the effects of public opinion on election results. There are no elections in North Korea; or, more precisely, elections do take place but with pre-determined results. From 1957 onward, every election in North Korea has ended in announcements that the official candidates received 100 percent of the registered vote. The North Korean leadership has every reason to fear excessively close contacts with Western states, since these contacts, however economically beneficial they might be, may have undesirable political consequences. Through such contacts North Koreans would most likely learn how badly their country lags behind other states in the region, particularly South Korea—a fact that has been painstakingly concealed from the public. So the idea of “bringing North Korea out of its international isolation” may not be such a draw for the country’s leaders. Unfortunately, the North Korean leadership can hardly be called paranoid for its firm belief that only nuclear weapons can ensure the stability of both the country and the regime. The unenviable fate of Saddam Hussein and especially Muammar Qaddafi make Pyongyang’s worst fears quite justifiable. The eccentric Libyan leader was the only dictator in history to give up nuclear weapons in exchange for economic concessions. 
That’s exactly the type of deal that Western states constantly offer to North Korea, and it’s no accident that in 2004—2005 a number of Western diplomats openly stated that Pyongyang ought to learn from the Libyan experience. Without a doubt, Pyongyang did take note of the Libyan experience, particularly the last few months of the Qaddafi regime, and it only reaffirmed the North Korean leadership’s existing belief that giving up nuclear weapons is tantamount to political suicide. Plutonium Fishing Boats What is the state of North Korea’s nuclear program today? Understandably, accurate information about this is in short supply. Most experts agree that North Korea currently has 30 to 50 kilograms of weapons-grade plutonium—enough to produce approximately 10 nuclear weapons. In addition, since at least 2010, North Korea has been producing enriched uranium, which can also be used to make nuclear weapons. It is not entirely clear how much enriched uranium North Korea has at this time, since its production, unlike that of weapons-grade plutonium, is much harder to control. The state of the country’s delivery vehicles is far shabbier. In December 2012, North Korea did manage to launch a prototype of a long-range ballistic missile that put the first North Korean satellite into orbit (though they did not manage to stabilize the satellite). A few caveats are in order here. First, all the previous launch attempts had failed, so any talk of a reliable ICBM prototype is premature. Second, the missile really was just a prototype. North Korean engineers haven’t even started working on the problem of protecting the nuclear weapon upon reentry into Earth’s atmosphere. Third, it is unknown how successful North Korean engineers have been in developing smaller nuclear weapons; chances are that the weapons currently at North Korea’s disposal are too heavy to be installed in missile warheads. Of course, the North Koreans are working hard to find solutions to these problems and occasionally come up with original approaches. For instance, this year reports surfaced that the country is actively working on developing submarine-based ballistic missiles (a prototype of such a missile was tested in January). Furthermore, the lack of delivery vehicles doesn’t necessarily render nuclear stockpiles useless. Under exigent circumstances, unconventional delivery methods can be found—for example, a nuclear device could be installed in the hold of a fishing boat that gets sent to the shores of South Korea, the United States or Japan. Of course, such makeshift methods can’t compare with a proper delivery vehicle, but they are theoretically possible and, in the hopes of North Korea’s leaders, would deter potential aggressors. When 10 = 50 For years, North Korea has been adamant in its refusal to discuss giving up its nuclear program. Pyongyang pretty much put the seal on that position in 2012 when an article on the country’s nuclear status was included in its constitution. But if North Korea is not ready to discuss nuclear disarmament, why then is it persistently seeking to resume negotiations on nuclear issues with the United States and other Western countries? What lies behind these efforts by North Korean diplomats? North Korea no longer seeks political and economic concessions in exchange for giving up its nuclear weapons; instead, in recent years, it has been trying to get these concessions in exchange for merely freezing its nuclear program while retaining current weapon stockpiles. 
In other words, Pyongyang is not ready to discuss nuclear disarmament but is willing to talk about the prospects of limiting its nuclear weapons (if it can get something in return, naturally). Pyongyang is hoping to reach some version of the following compromise: North Korea agrees to stop its work on developing nuclear weapons, shuts down its nuclear generator (if it is functional by that time) and grants international observers access to its nuclear sites. In return, the United States and other countries promise to provide North Korea with direct and indirect economic aid and possibly make some political concessions. Meanwhile, North Korea gets to keep its current fissile material stockpiles and its already assembled nuclear devices, wherever those may be. Thus, Pyongyang doesn’t want a return to the situation of 2006, prior to its first nuclear test; rather, it seeks to preserve the status quo. Such a compromise makes sense to the North Koreans. They understand perfectly well that their Yongbyon nuclear facility won’t be able to compete with Los Alamos or Arzamas-16 even under the best of circumstances. North Korean nuclear stockpiles will always remain small, at best consisting of only a few dozen weapons. On the other hand, even at its current modest level, North Korea’s nuclear arsenal is sufficient as a deterrent. If North Korea produces 50 to 60 nuclear weapons, as opposed to the 10 to 12 it most likely has now, the deterrent effect will not increase fivefold; in fact, it will hardly increase at all. From this viewpoint, increasing its nuclear potential is a luxury that Pyongyang could do without, if the compensation is commensurate, of course. Poisonous Freeze But calls for negotiating a freeze have so far failed to generate enthusiasm in the United States. Washington currently insists that talks can start only if North Korea takes specific steps to demonstrate its commitment to completely dismantling its nuclear program. Obviously, Pyongyang will not demonstrate such a commitment, since no such commitment exists. A number of American analysts, diplomats and midlevel government officials (mostly Korea experts) have long understood that freezing the North Korean nuclear program is in fact the best possible scenario. The only alternative is to continue the present policy. But in this case, North Korea will clearly continue to increase its nuclear stockpiles and to work on improving its delivery systems. As a result, the North Korean nuclear program will look ever more menacing. Nevertheless, these midlevel government officials, from the State Department or, say, the National Security Council, have neither the opportunity nor the desire to actively lobby for a policy of compromise on the North Korean nuclear program. There are at least two reasons for this. First, decisions on issues of nuclear security and nonproliferation are handled by the president and his closest advisers. While these people are not Korea experts and don’t know the intricacies of North Korean politics, they have a good sense of the global picture. From that standpoint, a compromise with North Korea would be quite a risky undertaking, fraught with undesirable consequences. North Korea differs from other de facto nuclear states (Pakistan, Israel, India) in that it once signed the Nuclear Nonproliferation Treaty and took advantage of the nonproliferation regime to gain access to certain nuclear technologies. 
After this, North Korea withdrew from the nonproliferation regime and successfully developed a nuclear-weapon prototype. Under these circumstances, Washington cannot agree to give North Korea substantial financial assistance while the country retains its nuclear status, however limited: Doing so would be seen as a dangerous precedent and a pay-off to a blackmailer. There is also a second set of problems that make a nuclear-freeze compromise impossible. These have to do with the potential negotiators’ domestic political considerations and personal career ambitions. Even if the U.S. president and his inner circle come to the (well-founded) conclusion that the complete nuclear disarmament of North Korea is unattainable, and the only realistic scenario is to freeze its existing nuclear program, they would have a hard time proving this during domestic political debates. Such a deal would immediately draw fire from Congress, and opposition lawmakers would accuse the president and his administration of lacking principles and capitulating and, more importantly, creating a dangerous precedent (this latter accusation, as noted above, wouldn’t be far from the truth). The career ambitions of certain government officials play a role here as well. Everyone understands that the diplomats associated with such a controversial compromise would hardly be able to use it to advance their careers. State Department officials remember what happened to Christopher Hill after his efforts to reach an agreement with North Korea in 2006—2008: The rising star of American foreign policy ended up a dean at a provincial university. The situation has hit a dead end. And while compromise is theoretically possible, it takes two to tango and American officials have serious reason not to get on the dance floor. As a result, we are likely to continue living with the North Korean nuclear impasse for many years to come. Andrei Lankov is a historian, Korea scholar and professor at Kookmin University (Seoul) This publication originally appeared in Russian.
https://carnegiemoscow.org/commentary/59849
Trump-Kim Summit: Sham denuclearisation
PRESIDENT Donald Trump and Chairman Kim Jong Un had a historic summit in Singapore on June 12, 2018. The mercurial leaders' summit was a major diplomatic breakthrough between technically still-warring nations. It signals that the United States is prepared to recognize North Korea as a 'normal country', and also the former's settlement of a standoff over the latter's nuclear arsenal. Though the Summit has pulled the Korean peninsula back from the brink of nuclear war, it is too early to conclude that the process of ending the nuclear weapons program of North Korea has realistically begun.
This Summit took place at a time when Americans are buffeted by war weariness and also confronting a revival of strategic competition among the Great Powers. Therefore, the Trump administration cannot afford any military operation in the Korean Peninsula that ends up in asymmetrical warfare like the 2001 Operation Enduring Freedom (Afghanistan) and the 2003 Operation Iraqi Freedom (Iraq). After the failure of his coercive diplomatic tactics to terrorize the North Korean ruling elite, President Trump rationally decided to engage the autocratic regime diplomatically.
The North Korean ruling elite is aware of the Americans' limitations. They have confidence in their nuclear deterrent. Despite having confidence in their nuclear deterrent, they are wary of economic sanctions. They realize the destabilizing repercussions of economic instability. Therefore, they desire a diplomatic breakthrough with the Americans for economic and political necessities without compromising on their nuclear arsenal.
The Trump administration's projected objective of the Summit was to denuclearize the Korean Peninsula. Otherwise, sooner or later, South Koreans and the Japanese may alter their nuclear nonproliferation policies. During the 2013 tension between Pyongyang and Seoul, nearly two-thirds of South Koreans in a nationwide poll demanded the purchase or manufacture of tactical nuclear weapons. Though the Japanese have remained firm in their pacifist nuclear policy, a nuclearized regional strategic environment may oblige them to think otherwise in the near future. Notably, both states have the potential to convert their advanced civilian nuclear programs into nuclear weapons programmes.
Chairman Kim desires to be recognized as a legitimate ruler of his people and also an end to the pariah treatment of North Korea by the United States and its allies. The Summit will have political, military, and economic dividends for Pyongyang. It lowers the military tensions in the Korean Peninsula. The 'Little Rocket Man' is no longer a threat to the mighty power, and thereby there is no more 'fire and fury' bellicose rhetoric. It relaxes the crippling diplomatic and economic sanctions imposed on North Korea. It also enhances the political clout of the Supreme Leader domestically and internationally.
The North Koreans decided to accommodate the Americans' concerns on nuclear weapons, partially. They announced that they would observe a moratorium on nuclear explosions and ballistic missile tests without compromising on their existing nuclear arsenal. They dismantled their nuclear weapons testing site before the Summit to win the confidence of the Trump administration. Many analysts opine that the testing site was already cracked due to the thermonuclear tests. Nevertheless, these plans proved enough to bring President Trump to the table. The Americans also showed flexibility in their attitude towards the North Korean autocratic regime and nuclear weapons program.
They had unilaterally put a hold on new sanctions against North Korea. President Trump stated before the summit that: "I'm not going to put them on until such time as talks break down." It was reported that Pyongyang expressed willingness to roll back its nuclear weapons program if Washington provides reliable security assurances and other benefits. Theoretically, it seems feasible. However, it is difficult for both to satisfy each other. Are North Koreans prepared to allow free inspections of their entire nuclear infrastructure by International Atomic Energy Agency (IAEA) inspectors? Is the Trump administration not ignoring its non-proliferation commitments?
The cancelling of the 1994 Agreed Framework between the United States and North Korea and the latter's withdrawal from the Nuclear Non-Proliferation Treaty (NPT) in 2003 manifest that Pyongyang will not compromise on its nuclear deterrent. They understand that nuclear weapons alone provide them with a sovereign defence capability. The review of the literature reveals that the United States' nuclear non-proliferation efforts have been largely subsumed under other strategies and missions, underplayed, or even ignored. For instance, the Americans very comfortably ignored India's violation of IAEA safeguards at the CIRUS nuclear power plant in Trombay, near Mumbai, which had driven them to lobby for the formation of the NSG in 1975, while signing the Indo-US nuclear deal in 2006 for the pursuit of their political and economic objectives. Presently, the Americans are lobbying for India's full NSG membership. Hence, for the pursuit of their political objectives in East Asia, the Americans can overlook their North Korean denuclearisation agenda.
To conclude, the Trump-Kim Summit is a welcome step. It implies that the United States has recognized North Korea as a de jure state. Indeed, it is a jubilant moment for the North Koreans. Nevertheless, the denuclearisation of North Korea seems a sham strategy of the United States. — The writer is Associate Professor, School of Politics and International Relations, Quaid-i-Azam University, Islamabad.
Asked by: Walton Hickle, in category: Drink. Last updated: 15th November 2021.
Which is the best coffee maker for beginners?
Likewise, what kind of water is used to make coffee? The preparation of plain coffee is a steeping process, almost exactly like tea. The coffee grounds (coarse grounds work better for plain coffee) are mixed with near-boiling water. The heat and minerals in the water work together to extract the flavor from the coffee.
Simply so, how long does it take for ground coffee to go stale? Brew time and water temperature will also affect extraction (more on those in a little bit!). As such, you may find that it's better to change a different variable than grind size. Grind only as much as you need for your brew. Ground coffee goes stale after about 15 minutes.
Moreover, how much coffee do you need to make one cup of coffee? Some coffees work better with a slightly higher amount of coffee to water, but 60 grams to 1 litre is a great starting point, and people can experiment from there. For the following examples of brew recipes, we'll do all of them based on 18g of coffee and 300ml of water to make a cup of coffee for one.
Considering this, what kind of coffee beans grow in Costa Rica? The coffee beans that grow on the volcanic soils of Costa Rica turn out to be smooth, soft and have a rich walnutty flavor. The most famous varieties of coffee beans in Costa Rica are the Margarita, Cashier, and Costa Rica.
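The 60 grams per litre starting ratio quoted above translates directly into a simple dose calculation. Below is a minimal sketch of that arithmetic; the function name and example volumes are just illustrative, and 60 g/L is simply the starting point the answer itself suggests.

```python
def coffee_dose_grams(water_ml: float, ratio_g_per_l: float = 60.0) -> float:
    """Grams of coffee for a given volume of brew water at a grams-per-litre ratio."""
    return water_ml / 1000.0 * ratio_g_per_l

# The answer's single-cup example: 300 ml of water at 60 g/L gives 18 g of coffee.
print(coffee_dose_grams(300))    # 18.0
# A full litre at the same starting ratio.
print(coffee_dose_grams(1000))   # 60.0
```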
https://ask-and-answer.com/which-is-the-best-coffee-maker-for-beginners
A new Trump administration rule creates hurdles that could make it more difficult for gig and contract workers to have rights as employees. Many of them are comfortable under the existing practices. Labor advocates are describing a new Trump rule announced by the Department of Labor as a potential blow to protections for workers. The proposal — announced Sept. 22 — would raise the threshold for contract workers, which include gig workers, to be considered employees, a category that comes with significantly more protections. Under the proposal, the Department of Labor — which has the power to investigate worker complaints about misclassifications — said it would adopt a few guidelines to test whether workers should be considered employees or contractors.
"Many workers are misclassified and have been for years — construction workers, agricultural workers, janitors, home care workers," said Catherine Ruckelshaus, the legal director of the National Employment Law Project, an advocacy group for workers. "They all stand to lose from this rule."
In California, the state's AB5 law aimed to force gig companies such as Lyft, Uber and others to treat workers — including freelance writers — like employees; the law is currently the subject of a court case and a hotly contested ballot proposition. Labor Secretary Eugene Scalia wrote a short opinion piece published in Fox Business in support of the department's rule, invoking AB5 no less than five times in critical terms. "Our proposed rule aims to clear away the cobwebs and inconsistencies that have grown up around this analysis since the Supreme Court's decisions more than a half century ago," he wrote.
The rule does not invalidate laws in states like California that seek to add more protections for workers. But for those states that have not weighed in on the debate over contract employees, this Labor Department rule will act as a baseline. The test would assess whether a worker is truly in business for themselves like a contractor, or whether they are economically dependent on their employer, as an employee. It would also examine the degree of control a worker has over their work, and whether their earnings come from their own initiative or investment. The federal agency will collect comments on the proposed rule over the next 30 days.
The pandemic has revealed the shortcomings of the existing systems, as Congress was compelled to pass an aid package to provide unemployment insurance for self-employed and gig workers, which, unlike traditional unemployment insurance, receives no funding from the companies that rely on these workers.
I like to predict interior trends for the year ahead, especially paint colors. If there’s one design element I feel confident in- it’s color theory. I really enjoy working with colors, mixing, and finding palettes that play well together. Click through to see which colors I’m leaning toward in 2017… see what’s in, what’s out, and even take away a few of my painting tips. The timing on this post couldn’t be better, as I’m finally at the stage of selecting paint colors for our new house (many of which I’m sharing are already on order). This year, with our renovation, my goal is to push things a little further, and really strive for creativity. The one thing I dislike most about design on social media is the over saturation or overuse of certain trends. It’s a bummer when a pattern, piece of furniture, or color are plastered everywhere and they lose their uniqueness. Take the eames chairs for example. They’re everywhere. There are SO many replicas and everyone seems to own at least one (myself included), they’re just not as special anymore. I feel the same way about color. A color (we’re looking at you, gray!) will get too trendy, too fast, and suddenly everything looks the same. If I’m attracted to a popular color, my goal is to use it in a new way, or experiment with a different tone or hue that feels less ordinary. Let’s use green as the example. I’ve always loved green tones… two or three years ago, I shared my dining room and mentioned that I use green as a neutral because I’m so attracted to it. Now, deep greens are replacing navy and are gaining rapid popularity. Suddenly, I don’t feel as attracted to it because it’s everywhere. Rather than sticking with the same hunter green, try pushing it towards a muddy pastel, desaturating it, or mixing a little black to deepen the tone. These will be better options when it comes to longevity. Whites for Trim // I prefer a neutral, bright white paint when it comes to trim… not too cool, not too warm. This will allow for better versatility when it comes to paint colors and textiles. In semi-gloss, please. Creamy Whites // I wholeheartedly believe this is the year to warm things up. Gone are the days of cold, stark white. Don’t get me wrong… I love a good minimal, modern, and clean space, but I think the trend will be to gravitate towards warmer whites that feel more welcoming and less clinical. Soft Tones of Beige // Just like creamy whites above, warmer beige and greige tones are going to replace gray. Steer clear of cool grays and select their warmer counterparts instead. Gray is so overused, this is a solid and sophisticated alternative that literally goes with everything. Dusty Blues // I am really feeling muddy pastel blue tones lately. I’m planning to use a few pops of this color throughout the house. Maybe I’ll paint a set of french doors or a piece of furniture, either way- I’m definitely into it. Sage Greens // You already know how I feel about sage. This color was the star of the January moodboard. If you follow along on Instagram stories and Snapchat, you probably watched this color in action. I painted a vintage chest of drawers with this exact shade. Hunter Greens // Yes, they’re super popular. The trick is to find a shade that isn’t the same medium to dark green tone that everyone and their mother is using. I’m feeling the extra dark hues with a little black added, or shades that have hints of deep blue or brown undertones. 
Deep Hues of Blue // Navy has always been popular, but within the past few years, it's become a real staple in the world of design. There will always be a place for it, but it's another one of those colors that is overdone. Try finding shades that are super dark (almost black), or colors that have a green undertone. This will add depth and create a moody look- rather than appearing nautical.
Balanced Blacks // I love black paint! It's bold and you have to really make up your mind if you're going to do it… but hey, it's just paint. People are finally catching on to this easy neutral. To use it in a different way, try painting something unexpected- like your trim, a ceiling, or even a pattern on the floor.
Rusty Neutrals // This is the wild card of 2017, and I'm feeling it! Blush pink tones are on their way out, and this rusty, desert underdog is on the way in. What do you guys think?
Stop using painters tape. I HATE painters tape… it never works and leaves a squiggly, uneven line. I'm always much happier when I free-hand with a nice brush. If you're not sure how to do this… watch this video.
Stop painting first. People always ask how I compile a color palette for each room. It seems that everyone has trouble selecting paint colors because it feels sort of permanent and daunting. It's actually pretty easy! Painting is always the last thing I do to a space (besides styling). I pull swatches from the textiles, furniture, or art. It's always a great jumping off point when it comes to inspiration. Just because you're moving in or a room is empty, doesn't mean that painting has to happen immediately. Live in the space for a while, see how the light interacts, find a couple objects that work well in the space and go from there.
Buy quality paint and supplies. This seems obvious, but until I started using nice paint, it used to take forever. I value my time and the way a project turns out. You can get by with only 1-2 coats if you buy the appropriate paint. There's literally a paint for everything. Technology has come a long way… from self-leveling paints to organic and green paints, they're safer and easier to use. Ask a professional which one is best for your project.
Know your finishes. It's a given that different finishes should be used in different areas of your home. Do a little research to see which finish is appropriate. Typically (most of the time) trim should be semi-gloss and walls should be flat, eggshell, or matte. The same goes for doors… semi or high-gloss is best and easiest to clean.
Alright, that's pretty much a wrap on all things paint, 2017. Don't freak out if you just repainted your house gray or that same shade of hunter green or navy (you know the ones)! They're going to be around for a while, but I'd rather take the path less traveled. What paint color trends are you guys into this year? It's sort of exciting to paint a new house with a fresh coat… it feels like a very good, clean, and positive start to the year!
My bedroom is painted Alabaster and my front door is Tricorn Black. I'm loving Stonecutter and Mount Etna!!! I'm usually drawn to the more masculine paint colors but I've been so drawn to the dusty pink and muted oranges!!! I need to paint something or somewhere in my house Trek tak! And I hate painters tape too. I agree investing in nice brushes and using good paint!!! You were way ahead of the trend with that green dining room… setting the trend even, perhaps? Probably! Color is so difficult for me. I know what my favorite hues are, but I'm so afraid to use them.
When I have, I’ve been happy with it. I’m so itching to do something bold, though. How I wish I could get your ideas/expertise. I’m so looking forward to what your house colors will be. I know it will be unique… and probably, once again, trend-setting. Thank you for the tips. Even if I can’t figure out what I want, I can at least use your neutral color palette as a place to start. I’m shocked about the painter’s tape tip… I’ve always had good luck with it! What a wonderful post, Sarah, and well done. I’m too am looking forward to seeing how you’ve incorporated such colors in your home. Meanwhile, I’ve been living with tones of brown and gray in my “French Industrial Lodge” style for going on three years now. I’d like to introduce creams and whites throughout to freshen things up. Dunn and Edwards ,”Sandcastle” on walls, of course semi gloss in kitchen and bathrooms with pure white semi gloss on trims and flat on ceilings! I dig these colors and if tenant likes colors are light to change! But so far all have liked these colors and are a favorite! Thanks, David! I’ll definitely have to check those out. Appreciate it! This was a great post! Any suggestions for trim color for the rusty neutrals choices? I am stumped—thinking dark tones? Commenting late to your post, sorry. I like your tips on painting. I’ve never used painters tape and probably never will. I learned at a young age from my father who was a painter how to paint, push your brush out on the wall, run a good paint bead and move your arm not your wrist. Those edger’s are nice but get paint on the edge and you’ve got a mess. Good quality brushes are key and if you take care of them they will last forever! Don’t overload your brush and don’t run it against the side because you take the paint off, rather tap it against each side of the paint container and if you haven’t overloaded it will be perfect every time. I also like that you don’t paint first, pick a pillow or some kind of fabric that you love and pick a color from that. It’s more individual and not so trendy…who knows you may start your own trend. Color is personal, at least it is for me. It sets moods and feelings so go with what you love and not what every one else is using. Love the sage Sarah. I have been trying to find a grey but am so aggravated trying to find a black grey not blue, not green not purple one. I had no idea I was so touchy about tone until now. I know lighting affects it. Maybe I will go with the sage as it is a favorite color of mine. Thanks for the idea for my kitchen cabinets. Right now I have samples all over my awful and very old cabinets. I want a pretty kitchen until I can afford to do new cabinets etc. Glad I found you. Please help!I have never designed any room in my life, but i have been studing. I painted my walls SW Repose Grey, my trim and doors are SW Extra White, now I’m going for my kitchen cabinets I have no idea what color. I know I want them light to make everything bigger and brighter. I am going for the white, cream, grays, pastel, farmhouse stle. My kitchen has Cherry colored floors and what sheen should I paint them. I will be using Sherwin-Williams my favorite. Oh heres a glitch, i still have the 70s orange counter top which i will change out when i can afford it. So it would be nice if the cabinets looked ok with the orange counter top. I have been looking at the whites to the grays. But there’s too many choices, might you possibly give me name’s of colors to try? Thank you. 
Dona – you may have already done your cabinets but Passive from Sherwin-Williams is a great color against the Extra White! Used it in a bathroom for the walls and our cabinets are ‘close’ to the Extra White but I plan to paint them to match the trim. It’s a really clean and soothing feel. Let me know if you would like some photos. Not sure I could post them here but maybe we could exchange emails somehow or if you use Facebook I could send you some there. I’ve used Argos in our bedroom and then the Passive in the bath. In the main parts of our house I did Sea Salt (we are in a coastal area near Charleston, SC, so the coastal farmhouse looks is big here). I’ve also used Olympus White, as a wall color and it’s a soothing grayish-blue. But depending on the lighting it can be a little ‘too blue’. In other rooms it’s a gray, but it’s a little fickle. I painted Rock Candy all over my son and daughte-in-laws home as ‘their neutral wall color’ and it’s gorgeous. Clean and fresh and looks awesome up against Extra White. My grandsons room was Mountain Air in their last house. It’s similar to Olympus White and the paints were mixed up and Mtn. Air got painted in their master and then we realized that the nursery was going to end up Rock Candy. Oops! But … they love it now!! lol . Holler if you need any ideas. I’m full of them. And we have mahogany floors throughout, even bathrooms. I love the 5″ wood floors but wish I had done a wee bit lighter stain because they show EVERY fleck of dust. I too love black but rather than paint a wall in my bedroom I found a reasonably priced pre glued, peel off wall paper of good quality. That way when I am tired of it…off it goes. I love green blues. My home is painted in hues of it. My #1 favorite is a Valspar called Sparkling Lake It is green…it is blue it is a strange color you cannot quite put your finger on to name it. Just like the lakes and ocean it depends on lighting. Does anyone know of a pure silver gray color I can do cabinets with? I need a white base I guess and it seems no matter what they look like on paper, they come up green or blue gray in the house. Donna, look at Passive from Sherwin Williams. Where did the color called “Linen – Restoration” come from? I’d love to get a sample but I have no idea where to get it.
http://roomfortuesday.com/predicted-paint-colors-for-2017/
The twenty-first century represents key changes in the marketing strategies employed by organizations and institutions in order to help them be competitive and sustainable in the turbulent market in which they find themselves. Major changes include the institution of brand managers and constant research on the performance of brands and the behavior displayed by consumers. The purpose of this research was to determine the impact a brand has on the behavior of consumers, with special regard to their purchase behavior, and its relevance to the contemporary Ghanaian telecommunications industry. The objectives included examining the impact of branding on the consumer decision-making process of mobile subscribers in Accra, Ghana; determining whether there is a correlation between consumers' perception and branding; and presenting the key theories and concepts surrounding the topic. The method used for the research was a non-probability sampling technique as far as data collection was concerned. A questionnaire was administered to a total of fifty respondents around the Adabraka area of Accra, the capital city of Ghana, using Mtn vendors as the agents. The findings include:
I. Consumers are attracted to the packaging and labeling (name) of a brand because it enhances its image and creates a visual appeal for it.
II. Consumers are influenced by the quality of a product or service before making a brand selection.
III. There have been major changes in branding; these include the size, color, shape and the materials used in designing the brand. It is evident that consumers are becoming more enlightened and more fashionable and would therefore prefer a well-developed brand.
IV. Mtn's insurance brand lacks easy recognition and popularity in the minds of consumers and hence calls for further research on the ways and means of improving it.
In conclusion, consumers are very much enlightened about the various brands on the market, and as such the brand image is very crucial when it comes to making a purchase decision, especially for the first time.

TABLE OF CONTENTS
1 INTRODUCTION
1.1 Research problem
1.2 Objectives of study
1.3 Research methodology
1.4 Limitations of study
1.5 Structure of the thesis
2 THEORETICAL FRAMEWORK
2.1 Understanding brands
2.2 The development of brand equity
2.3 Characteristics of a brand
2.4 Significance of a brand
2.5 Brand strategies
2.6 What is consumer behavior?
2.7 Factors affecting consumer behavior
2.8 The consumer decision-making process
2.9 Brand impact on consumer behavior
3 RESEARCH METHODOLOGY
3.1 Research approach
3.2 Data collection
3.2.1 Questionnaire approach
3.3 Sampling plan
3.4 Data analysis method
4 DATA ANALYSIS AND PRESENTATION
4.1 Respondents' gender
4.2 Respondents' age groups
4.3 Respondents' duration with Mtn
4.4 Importance of brand reputation to respondents
4.5 Respondents' feelings about brand association
4.6 Respondents' knowledge about Mtn subsidiaries
4.7 Respondents' understanding of a brand
4.8 Respondents' perception of Mtn's brand in terms of quality
4.9 Respondents' perception about the brand image
4.10 Respondents' perception about Mtn's pricing
4.11 Other brands that come to mind when the Mtn brand is mentioned
4.12 How Mtn differs from other brands
4.13 Causes of purchase decisions
5 FINDINGS, RECOMMENDATIONS AND CONCLUSIONS
5.1 Conclusions
5.2 Recommendations
REFERENCES

1 INTRODUCTION
The twenty-first century represents key changes in the marketing strategies employed by organizations and institutions in order to help them be competitive and sustainable in the turbulent market in which they find themselves. Today's consumers live in a world where the purchase of products and services is enormous and continuous (Rindell, 2008). The survival or success of companies is now dependent on the amount of information that is carefully gathered by them with regard to the purchasing habits displayed by consumers. In order to survive in the market, companies are keenly interested in developing strong brands that lead to long-term customer relationships (Hess, Story and Danes, 2011). Companies inject heavy resources and time into the study of behavioral and sociological factors in order to gain insight and to understand consumer purchasing patterns. Thus brands represent key assets to companies (Rindell, 2008). Branding has emerged as part and parcel of modern-day marketing strategies and is now considered a key organizational asset (Kotler, 2000). Organizations shifting their focus from a product or market point of view to a consumer or customer focus reflects the evolution of marketing. As an implication of this paradigm shift, companies are relentlessly injecting huge resources into understanding their consumers in relation to the 4 Ps (product, price, place and promotion) and the additional 3 Ps (people, process and physical evidence) (Kotler, 1999). They further seek to underpin the relationship that exists between consumer spending and the key variables involved in consumer preferences in terms of attitudes, cognition, perception and learning (Von Moos, 2005). They want to know who their customers are, what they think and how they feel, and how they buy a specific brand instead of others. Companies go to the extent of employing a separate brand manager who sees to the management of the brand, serving as a link between the company's brand and consumers. In today's turbulent marketplace, where consumers have an enormous amount of information about products and services at their disposal, yelling louder is not a solution to making yourself heard or recognized; instead, the solution is creating an outstanding brand that appeals to consumers (Ahuvia, 2005). This work aims to understand the theoretical impact of branding on the decision-making process of some of the customers of Mtn Ghana Limited in Tudu, a suburb of the Greater Accra region of Ghana...
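The abstract above mentions a questionnaire administered to fifty respondents and a set of frequency tabulations (sections 4.1 to 4.13), but the excerpt does not show the analysis itself. Below is a minimal, hypothetical sketch of that kind of tabulation; the column names and sample values are assumptions for illustration, not data from the thesis.

```python
import pandas as pd

# Hypothetical questionnaire export; column names and values are illustrative only.
responses = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F"],
    "age_group": ["18-25", "26-35", "18-25", "36-45", "26-35"],
    "brand_reputation_important": ["Yes", "Yes", "No", "Yes", "Yes"],
})

# Frequency tables of the kind reported in sections 4.1-4.13:
# counts and percentages for each categorical question.
for column in ["gender", "age_group", "brand_reputation_important"]:
    counts = responses[column].value_counts()
    percentages = (counts / len(responses) * 100).round(1)
    summary = pd.DataFrame({"count": counts, "percent": percentages})
    print(f"\n{column}\n{summary}")

# A simple cross-tabulation, e.g. brand-reputation importance by gender.
print(pd.crosstab(responses["gender"], responses["brand_reputation_important"]))
```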
http://www.projectreserve.com/2017/12/impact-of-brand-on-consumer-decision.html
The mission of the Psychomotor Institute, Inc. is to improve the emotional health of individuals, families and society by expanding the knowledge and application of Pesso Boyden System Psychomotor. To accomplish this, we intend: - To promote awareness of PBSP nationally and internationally so its range of applications can be utilized in the psychotherapy realm, the parenting realm, the health/medical realm including mind-body healing systems, and other appropriate arenas. - To maintain the clarity and rigor of the PBSP process for future generations by overseeing (1) development of training materials and certification standards for therapists and trainers including instruments for self testing, (2) licensing of training centers and individuals, and (3) development of training policies, procedures, programs and curricula. - To support and initiate the development of centers for PBSP training and continuing education. - To develop and enforce ethical standards. - To support research to further the understanding and demonstrate the utility of PBSP, expand and improve its techniques and open avenues of further application. - To market, distribute and publish books, articles, lists of bibliographic references, and audiovisual materials pertaining to PBSP. - To encourage the development of societies/organizations of PBSP practitioners and supporters and otherwise facilitate communication among groups and individuals interested in PBSP. - To collaborate with theoreticians and practitioners of other psychotherapeutic systems, disciplines and treatment modalities to promote knowledge and application of PBSP and to stay broadly informed about developments in such areas as medicine, psychology, sociology and philosophy that relate to PBSP. - To maintain a library and archives of PBSP materials, both written and audiovisual, for future research. - To provide property to house the above activities including office and storage space as well as conference and living space for residential training and experiential programs in PBSP.
https://pbsp.com/train/psychomotor-institute/
By The Latin American Council of Social Sciences (CLACSO)

The Latin American Forum on Scientific Evaluation (FOLEC) is a regional space for debate on the policies and practices of the evaluation of scientific work, as well as the long-term implications of evaluation processes. Through FOLEC, the Latin American Council of Social Sciences (CLACSO) seeks to change the system by recognizing and rewarding the open, common and public domain of knowledge and its connection with democratizing and sustainable models of science that are committed to solving societal problems. To do this, FOLEC is being used to find points of agreement to build and enhance regional assessment instruments and move towards guidelines that universities and funders from different Latin American countries can commit to.

First FOLEC Meeting in Mexico
The first FOLEC meeting was organized by CLACSO and the National Council of Science and Technology in Mexico (CONACYT) in Mexico City in November 2019. The event brought together regional experts, representatives of National Science and Technology Organizations and member centers of CLACSO to analyze different aspects of research evaluation and to generate proposals for good practice from Latin America and the Caribbean, informed by international trends and good practices. The meeting sought the substantial transformation of research evaluation practices in Latin America and surfaced the need to articulate good practices with government representatives in order to achieve influence in policy development. The cooperation between CLACSO and CONACyT, and the participation of UNESCO and other National Science and Technology Organizations such as COLCIENCIAS (from Colombia) and CONICET (from Argentina), was a milestone for the construction of new policy proposals with real capacity for implementation.

Exchanges and discussions
Through a number of panels and roundtable discussions, participants critically reviewed the policies, practices, rationales, and interests of the main actors that influence global, regional, and national regulation of scientific evaluation. Case studies were used to identify how diverse governmental or institutional (university and scientific institute) research assessment procedures were linked to global and regional trends. The long-term implications of publishing practices on research evaluation also emerged as a major topic of discussion. There is tension between international publishers and local Latin American and Caribbean publishers, which comes down to the commodification of science and the privatization of knowledge versus the affirmation of knowledge as a public good and its role in democratizing knowledge (e.g., the right to science and the right to information). In addition, the FOLEC meeting generated a substantial dialogue between the different repositories and platforms that make up the regional publications circuit (Redalyc, Scielo, LATINDEX, La Reference, and AmeliCA), underscoring their importance for the circulation of academic knowledge in Latin America and the Caribbean. As a result of the discussion at the meeting, a request was made for repositories like these to articulate information (metrics and other indicators) and foster interoperability to demonstrate the uniqueness and quality of Latin American and Caribbean knowledge production.

Agreements and proposals
The meeting reflections highlighted the link between science and society based on the right to knowledge.
For this reason, scientific knowledge construction and transfer through a dialogue with the most diverse cultural communities is part of the scientific endeavor. Moreover, to advance the design of new types of evaluation, it is necessary to open a broader discussion on a series of concepts, such as social impact, social relevance, social responsibility, quality and excellence, among others, that have guided contemporary scientific knowledge evaluation and its impact. To this end, the FOLEC calls for the scientific communities and governments to get involved to strengthen the development of a science committed to the future of societies. Currently, FOLEC’s agenda seeks to create spaces for discussion in the form of forums or seminars, where CLACSO assumes the responsibility of articulating and promoting reflection and work together with different governmental scientific organizations in the region and other associated institutions, in order to reach a common and regional proposal on science evaluation by November 2021. Guest blog posts reflect the opinions of the authors and not necessarily those of DORA.
https://sfdora.org/2020/04/16/the-latin-american-forum-on-scientific-evaluation-folec-in-action/
The National Centre for Technology Management (NACETEM) has held a sensitisation workshop on the survey of scientists and engineers in research institutes across Nigeria. The workshop, according to Prof. Ukwuoma Okechukwu, Director General of NACETEM, is to intimate stakeholders with the ongoing effort of the Agency to create a national database of the pool of scientists and engineers in the research institutes for effective human resource management. In his welcome address, Prof. Okechukwu said the sensitisation workshop is a timely effort of the federal government towards the implementation of Executive Order 5 to achieve sustainable development. "The workshop brings together participants from the Ministries of Science and Technology, Health and Agriculture, and professionals from the Academy of Science, the Academy of Engineering and the Nigerian Society of Engineers, among others. "In recent times, a combination of Science & Technology and a skilled labour force is regarded as a key factor for national competitiveness, economic development and environmental well-being. "The rate at which technological innovations are being developed and applied disrupts the catch-up efforts of developing countries and widens the inequality gap. "A specialized skilled labour force and professionals are a necessity if countries are to overcome the challenges of the destructive creations emerging from new technologies. "More so, it is important to mention that the recent Covid-19 pandemic shows that countries without the required technological capabilities are bound to suffer either from the life-threatening side of the disease or from the impacts of its economic devastation. "Countries with high scientific and technological capabilities are in the race to launch and trade in drugs, test kits, personal protective covers, vaccines and other related products in response to the Covid-19 pandemic. "Hence, the national competitiveness and catch-up strategies of developing economies largely depend on the efficient deployment of scientific know-how and the commercialization of technology-based products and services. "The application of technologies to broad-based issues can improve several aspects of socio-economic development and environmental wellbeing, ranging from pest-resistant crops to low post-harvest losses; from vaccine development and disease prevention to maternal care and child health; from reduction of environmental contaminants to purification of water; and from stable electricity to more efficient and affordable communication and mobility systems. "It is therefore imperative that each nation develop the critical mass of human resources in science and technology to drive innovations that will help its attainment of sustainable development. "These human resources, according to the Canberra Manual, chiefly include core scientists in the fields of natural sciences, medical sciences, agricultural sciences, biological sciences, and mathematics. The quantity and quality of these vital human resources are important indicators of a nation's Scientific & Technological (S&T) efforts; hence the need for Nigeria to have internationally comparable data on her S&T workforce.
"To this end, this workshop aims to sensitize participants drawn from critical Ministries such as Science and Technology, Health and Agriculture, and seasoned professionals from the Academies of Science and Engineering, the Nigerian Society of Engineers, amongst others, as well as their respective institutions, about a survey to collect detailed information on scientists and engineers in the country. "It will also elucidate the benefits of having an up-to-date and comprehensive database of the quantity, quality and distribution of scientists and engineers in Nigeria. CityMirrorNews reports that NACETEM is an agency of the Federal Ministry of Science and Technology (FMST) saddled with the mandate of carrying out policy research, capacity building and consultancy services in the areas of Science, Technology and Innovation (STI) management. The Agency provides knowledge support and serves as the fulcrum or software component of the Ministry and the National Innovation System.
https://citymirrornews.com/news/2021/16/nacetem-organizes-sensitisation-workshop-on-survey-of-scientists-and-engineers-in-research-institutes/
Editor's note: This post was first published on Julius' blog a month ago. As you'll see from the first paragraph, the piece is in response to a TechCabal report on Konga. Earlier today a buddy sent me a message that started with this: According to Sim, local fulfilment services have been unable to keep pace with the growing volume of Konga shipments, leading Konga to the conclusion that it must create its own logistics infrastructure, a service called Konga Express. The above statement led to a regurgitation of conversations we have had over the years. These discussions were about whether or not there truly is a logistics problem in Nigeria, as many people and startups in the country would have us believe. Some context:
- Sim – Sim Shagaya (@simdul) is the CEO and Founder of Konga.
- Konga – Konga.com is Nigeria's equivalent of Amazon.com. Konga recently closed a $40 million USD round of financing to further grow its presence in the country.
To be clear, I don't believe there is a logistics problem in Nigeria. Could there be areas of improvement? Of course! But to say that logistics is a serious problem that needs to be solved is overly dramatic. I think we spend too much time looking for formal means of doing things in the country when we all still very much live in an informal environment. On July 2, 2012 I posted this in a group I'm a part of on Facebook: I'm thinking out loud here. Mark Essien wrote a post about logistics and came up with a viable way to do it in Nigeria, albeit capital intensive. I have been hearing a lot about logistics and delivery in Nigeria these days and I always wonder why it is an issue. CEOs of startup companies going into e-commerce of physical goods, have you asked yourself the following questions?
- How are the folks in Alaba and Onitsha Markets, for example, moving their goods? (They don't just have their goods on site, they have other locations as well.)
- Do your consumers really need to have the product delivered to their home? If the product is being sold to them at a much cheaper rate than anywhere else in the country, will they be willing to pick up the goods from a location within 10, 20, or 30 km of their homes?
- Can the existing road transport network be used? Every town has a major car park easily accessible to people; are there ways to partner with road transport operators?
- How much is it to use DHL or FedEx? (Have you actually called them to find out? If not, talk to Saheed Taiye Adepoju; he has.)
- How are goods generally moved around the country? Are we quick to rule out the existing infrastructure because it is not sexy enough? Are we putting ourselves out of business before we even start the business?
I don't know much, but it seems to me like we are putting the cart before the horse here. You don't have to own the full spectrum of things. Start with your core, which is e-commerce, and build a brand leveraging the existing infrastructure in place. It may be messy or unreliable, but it works, and if you generate enough business you can force your vendors to improve things on their end, as you will be a significant source of revenue for them. People's opinions have not changed much since the post, including mine. For the past two years, I have traveled back and forth to Nigeria from the United States, flown domestically within the country, used the road transport network and finally relocated to the country early this year.
What I witnessed over time was that all the major road transport companies, such as Agufore Motors, Autostar and Ekene Dili Chukwu, to name a few, offer car-park-to-car-park delivery of parcels and letters for a fee. A "car park" is basically the location the transportation company operates out of. Passengers go to the car park to board the vehicle that will transport them to their intended destination. To add some context, pretty much every major city in the country has a car park. I also found out that most of the domestic airlines transport parcels and letters both formally and informally. (Formally, you go to the airline and ask them to transport the package to the destination airport. Informally, you ask your buddy to take a package for you on their next trip to a specific destination.) So what do we have here? Let me illustrate it for you. (Illustration drawn by me, not a 5 year old!!)
- E-Commerce Company ABC receives an order and makes it available for delivery. A local logistics vendor picks up the package from ABC Co's warehouse and transports it to the Transportation Co's car park or to the airport.
- Transportation Co loads the package in their aircraft or vehicle and transports it to the destination closest to the customer.
- Transportation Co drops the package off at their cargo holding area for pickup by the customer or by a local logistics vendor who will deliver the package to the customer's residence.
- The customer awaits delivery of the package by the local logistics vendor or has the option of going to the car park or airport to pick up his/her package.
So who are these vendors I speak of?
Local Delivery Vendors (Lagos, where most e-commerce companies are based):
- Trans Nationwide Express aka TRANEX (http://tranex-ng.com)
- Quo Courier and Logistics (http://www.quonigeria.com)
- 360Delivery (http://www.the360delivery.com)
- Tranzit Deliveries (http://www.tranzit.ng)
- Anyone who operates a Keke
- Anyone who operates an Okada
- Plus a boatload of companies that offer similar services in Lagos
Keep in mind, some of the companies mentioned offer nationwide delivery but may be cost-prohibitive or not as reliable when delivering to certain destinations.
Local Delivery Vendors (rest of Nigeria):
- Anyone who operates a Keke
- Anyone who operates an Okada
- Courier and logistics companies that operate in the area
Outside of Lagos, it gets a bit more difficult, or should I say expensive, to have items delivered to a customer's home, depending on where the customer resides. If the customer resides within a 5 kilometer radius of the car park/airport it's not so bad, but the further out you go, the more the customer should assume they will pay a premium for it. It is for this reason that I am all for having pickup points. The customer can go to the car park or the airport and pick up their delivery. I know I would. I'll digress a bit and tell you a story. A couple of months ago I went to Onitsha and bought a 50" LCD television that I wanted to use at our family hotel. After purchasing the television I hit a snag: it could not fit in the vehicle I came with. No matter how hard we tried, that bad boy was not going to enter the car. While I was standing there sweating bullets and scratching my head, one dodgy-looking chap came up to me and said I should put the TV on an Okada and the Okada rider would deliver it to the hotel for me. I scoffed at the thought and asked for someone with a pickup truck.
When the pickup truck arrived, the driver basically told me that I should pay him an arm plus two legs. I guess he looked me up and down and thought he had found his cash cow for the day. To my disappointment, no amount of haggling with the guy worked. So I called for the Okada, who charged me 2,000 NGN (two thousand naira) to deliver the television. By then there was a crowd, and everyone, including the person I bought the television from, was assuring me that nothing would happen to the television and it would arrive intact. Eventually I gave in, the television was placed on the Okada, and off they went. Long story short, I got back to the hotel and 30 minutes later the Okada and the television arrived as promised. We all know the transportation companies and airlines we have, so there is no need for me to go into further detail about them. The key here is pricing. I find it hard to believe that after acquiring the assets necessary to operate your own fulfillment service, your prices will be cheaper than the existing players in the space. Unless you plan on subsidizing the costs and using the service as a loss leader. In that case, by all means go right ahead. "Loss Leader": a pricing strategy where a product is sold at a price below its market cost to stimulate other sales of more profitable goods or services. – Wikipedia. In the case of Konga, they believe they are outgrowing their current vendors at a fast pace, which requires them to go into the logistics space. I would like to know who those vendors are. Did their biz dev guys hit the streets and try to negotiate deals with the folks who are already directly or indirectly in the space? Did they talk to the union of Keke operators? How about the union of Okada operators? I don't believe they did. Hey, maybe someone will make life easier for these e-commerce companies and build a "logistics" company on top of the existing network. A logistics broker, if you will. No huge capital outlay, just the power of building and sustaining relationships and negotiating. Determine the most popular routes and negotiate rates to come up with the most competitive prices in the market, which of course will include your margin. Where there is room for improvement in the Nigerian logistics game is in areas that rely heavily on technology. Everything is still very much paper-based and there is very little automation. Scheduling, tracking systems, SMS alerts, etc. would be great value adds. So, does Nigeria have a logistics problem? Disclaimer: I have NO experience in the logistics space. These are just super high level thoughts I had and decided to post, especially since I haven't posted for a while now. I would love to know your thoughts, and if you are an expert in the space and live within the region, please provide me/us with some industry insights. Thanks in advance.
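The hub-to-hub flow and the broker idea described in the post lend themselves to a tiny model. The following is a minimal, hypothetical sketch of how a logistics broker could pick the cheapest car-park or airline leg and add a last-mile Okada/Keke leg when quoting a price; all company names, routes, rates and the margin figure are made-up assumptions, not taken from the post.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    carrier: str      # e.g. a road transport company or a domestic airline
    origin: str       # car park or airport where the package is handed over
    destination: str  # car park or airport closest to the customer
    price_ngn: int    # negotiated hub-to-hub rate (illustrative numbers only)

# Hypothetical negotiated rates for one route; real rates would come from
# talking to the transport companies, airlines and rider unions directly.
LAGOS_TO_ONITSHA = [
    Leg("RoadCo A", "Jibowu car park", "Onitsha car park", 1500),
    Leg("RoadCo B", "Ojota car park", "Onitsha car park", 1800),
    Leg("AirlineCo", "Lagos airport", "Asaba airport", 6000),
]

LAST_MILE_NGN = 2000   # e.g. an Okada from the hub to the customer's address
BROKER_MARGIN = 0.15   # the broker's cut on top of negotiated rates

def quote(legs: list[Leg], home_delivery: bool) -> tuple[Leg, int]:
    """Pick the cheapest hub-to-hub leg, then add last-mile cost and margin."""
    best = min(legs, key=lambda leg: leg.price_ngn)
    total = best.price_ngn + (LAST_MILE_NGN if home_delivery else 0)
    return best, round(total * (1 + BROKER_MARGIN))

leg, price = quote(LAGOS_TO_ONITSHA, home_delivery=True)
print(f"Cheapest leg: {leg.carrier} via {leg.destination}, quote: {price} NGN")

leg, price = quote(LAGOS_TO_ONITSHA, home_delivery=False)  # customer picks up at the hub
print(f"Pickup-point quote: {price} NGN")
```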
https://techcabal.com/2014/11/04/nigeria-really-logistics-problem/
Basel, Switzerland, 12-13 March 2013. Good evening. It is wonderful to see many familiar faces from many advocacy groups around the world. I would like to thank Hoffman La Roche for the invitation to speak this evening about Down syndrome research and - in particular - about recent advances in biomedical research. It is a personal and a professional privilege to be speaking to you this evening. I think the work that Roche is now undertaking is exciting and I wish to congratulate them on deciding to invest in research which may lead to significant quality of life improvements for individuals with Down syndrome. I think that the current knowledge based on animal work supports the view that it is time to start these human studies, but success is by no means certain. I think we will all need to work together and that there are significant challenges ahead for both the Roche team and the Down syndrome community, which I will use as a shorthand for people with Down syndrome, their families and advocacy groups. I will set out these challenges and hopefully set the scene for further thinking and discussion this evening and tomorrow. Many of you will know that I have been active in the field of Down syndrome research for over 30 years, and that I have a daughter with Down syndrome - now 43 years old. I trained in clinical psychology, worked in services for people with intellectual disabilities, and taught psychology for many years at the University of Portsmouth in the United Kingdom. My research interests began with teaching reading to young children with Down syndrome and over the years have touched on many aspects of development, cognition and learning for young people with Down syndrome. I have also had an interest in the nervous system and brain function since I was a student, and I taught brain and behavior courses to psychology students for a number of years. As I prepared this talk, I realized how the developmental and educational work that I have actively focused on and the current point reached in biomedical science have come together and face at least some similar challenges. For many years, Down syndrome research has encompassed many fields of inquiry - usually divided along the lines of traditional academic and professional disciplines, for example genetics, medicine and psychology. In recent years, these areas of research have become increasingly intertwined as our understanding of genes and their functions, and of the nervous system and how it works, has begun to point to ways in which we can understand aspects of human development and behavior at a physiological level. This increasingly interdisciplinary research is now leading to the identification of pharmaceutical compounds that might act on parts of the brain in ways that could improve mental function for people with Down syndrome. We have come together at this meeting to learn more about the early stage trials of these compounds. I would like to briefly review the history of Down syndrome research and put the planned Roche studies in the context of the wider quest for a better understanding of what it means to be born with an extra copy of chromosome 21. Research in a number of fields has positively impacted the lives of people with Down syndrome: expectations for a baby born now are very different from the expectations for my daughter 43 years ago, which led to her parents abandoning her in an institution - at least in some countries.
I also hope to show that the identification of possible pharmaceutical therapies for cognitive function is not a radical departure from the efforts of researchers, clinicians, therapists and teachers to find new and better ways to improve life for people with Down syndrome. Rather, it is the natural progression of the science in many fields and the anticipated result of advances in genetics, biology, neuroscience and psychology. That said, pharmaceutical therapies for cognitive function are largely untried, and the limited human studies conducted to date have not been encouraging. Pharmaceutical therapies do represent a different approach to improving development and learning for people with Down syndrome from those we have today, such as specialized medical care, early intervention and adapted therapy and teaching techniques. And, they are not without important ethical and safety concerns. I will touch on these as I outline what will be required to evaluate these therapies, and identify some of the formidable challenges that lie ahead. Down syndrome research has progressed alongside and in response to more general scientific advances since the mid-19th century.[ 1] This is unsurprising: the more we understand about typical human development, the better we can identify what is similar and what differs for people with Down syndrome. John Langdon Down first published a description of the condition that now bears his name in 1866 - a year after Gregor Mendel published his laws of genetics. By 1920, it was generally accepted that chromosomes carried genes and by 1958 there was good evidence that most humans carried 46 chromosomes in their cells. In 1959, two research teams reported that people with Down syndrome had an extra copy of chromosome 21. Genetics research also began to accelerate from the late 1970s when the first gene located on human chromosome 21 was found to correspond with a gene on mouse chromosome 16 and Charles Epstein proposed that mouse models could be useful in the study of Down syndrome. The first mouse model - the Ts16 - did not survive beyond birth, so its uses were limited. In 1990, researchers reported that they had created a mouse carrying additional copies of many of the genes comparable to those on human chromosome 21 that did survive and that displayed learning and memory difficulties compared to typical mice. There are other models, but to date this model - the Ts65Dn - has been the most extensively studied. Developmental and educational research also accelerated from the late 1970s. Researchers began to study the cognitive and physical development of people with Down syndrome. We soon discovered that many traditional assumptions were wrong and that when given appropriate opportunities young people with Down syndrome could learn more than previously thought possible. Researchers began to characterize specific differences in the function and development of different aspects of cognition, including language, speech, reading and memory when compared to typical development and development for children with other causes of intellectual disabilities. Scientific research is therefore yielding results and advancing our understanding of Down syndrome. Clinical research has dramatically improved life expectancy and quality of life for people with Down syndrome. Educational and developmental research is helping many young people achieve more today than ever before. 
Biological and genetics research is starting to uncover some of the basic mechanisms affected by the extra chromosome, and new possibilities are emerging for possible pharmaceutical therapies that may further improve life for people with Down syndrome. Despite these tremendous advances, we need to be cautious. There are many things we still do not understand about Down syndrome. Few, if any, precisely defined outcomes for people with Down syndrome are dictated solely by the presence of an additional copy of chromosome 21. Even congenital problems are not an inevitable consequence of trisomy 21. Heart defects are much more common among babies with Down syndrome, but still occur in only approximately half of them. Trisomy 21 alone does not cause heart defects - it just increases the chance they will occur. This is even truer of mental function. In all areas of cognitive development we see wide variation in levels of achievement and skills among people with Down syndrome. Again, having an extra chromosome 21 undoubtedly affects development but it does not alone determine outcomes. We therefore need to be careful of oversimplifying the mechanistic links between genes, biochemistry, neurological function and real-world outcomes for people with Down syndrome. The biology is clearly important, but cognitive development is a complex and iterative process that depends on input. Input and practice change the structure and function of areas of the brain. Take language, for example. We are not born with language, but rather we are born with the brain 'machinery' needed to learn language. The language we learn depends on the language in which we are immersed. For young people with Down syndrome, that brain 'machinery' does not seem to work as well as for typically developing children - at least in response to typical environmental input or teaching methods. Clearly, if we can make basic cell function in the brain work better with pharmaceutical interventions we should expect to see improvements in specific learning outcomes, but not necessarily in the absence of effective teaching and support. Signals are transmitted between neurons by chemicals called neurotransmitters, which act by binding to receptors on the surface of the cells. Some neurotransmitters encourage (or excite) signaling between neurons and some discourage (or inhibit) signaling. Gamma-aminobutyric acid (GABA) is the primary inhibitory neurotransmitter in mammals. It binds to GABA receptors. Studies of Ts65Dn mice show that excess GABA-mediated inhibition is associated with learning and memory problems. This suggests that treatments targeting GABA receptors may improve learning. In a study published in 2007, Fabian Fernandez and colleagues at Stanford University reported that treating Ts65Dn mice with compounds that counter GABA-mediated inhibition improved their performance in tests of memory and learning. Promising as these findings were, the GABA antagonists used with the mice are associated with heightened anxiety and convulsions.[14] People with Down syndrome are more prone to seizures, and are particularly sensitive to epilepsy-inducing drugs.[13] Researchers therefore began to search for compounds that could target GABA-mediated neurotransmission in ways that are less likely to cause serious side effects.
In summary, at this time there are candidate pharmaceutical therapies for which there is evidence suggesting they might alleviate problems with specific neurological functions in parts of the brain thought to be involved in some of the learning difficulties associated with Down syndrome. So far, the only experimental evidence for improvements in learning and memory resulting from these compounds is from studies of mice carrying extra copies of genes similar to those found on human chromosome 21. What might this tell us about the possible benefits for people with Down syndrome? Experiments with mice examine learning and memory using a variety of tests. For example, spatial learning and memory can be assessed using the Morris Water Maze, in which mice are placed in a pool of cloudy water and have to swim around to find a slightly submerged platform onto which they can climb from the water. Visual cues are placed around the pool. The mice swim around the pool until they find the platform. As the test is repeated, they learn to find the platform more quickly, presumably by learning its location in relation to the visual cues. Ts65Dn mice perform less well than typical mice in this test, presumably because their visuospatial memory is impaired. There are very few conclusions we can draw from these experiments in relation to what benefits we might expect for people with Down syndrome. We know that people with Down syndrome experience memory difficulties, and these difficulties seem likely to affect language development and many aspects of everyday learning and function. If these pharmaceutical therapies can improve memory function for people with Down syndrome in similar ways to the mouse models, then we might see gains in certain skills that could improve the quality of life for people with the condition. Even modest improvements in short- or long-term memory function could be helpful. However, until we test the compounds with humans with Down syndrome we cannot tell if they will have an effect, nor precisely what the effect will be. Before turning to the challenges involved in evaluating pharmaceutical therapies in people with Down syndrome, I want to briefly touch on the question of whether we should be trying at all. Some people are rightly very cautious about using pharmaceutical compounds targeting brain function. The brain is hugely complex, and the effects that an extra copy of chromosome 21 have on the brain are also hugely complex. It is therefore reasonable to ask if we really know what we are doing interfering with the brains of people with Down syndrome in this way. Moreover, some people argue that we should not wish to 'cure' people with Down syndrome - that their genetic makeup and its consequences for cognition and development are part of who they are as individual human beings. I do not think that 'curing' people of Down syndrome is a realistic goal, nor will it ever be. The compounds being evaluated in mouse models are not 'cures' - and while they may offer 'treatments' for some specific problems associated with Down syndrome, they are not treatments for Down syndrome per se. The question, therefore, that we should focus on is to what extent should we help people with Down syndrome lead more rewarding and fulfilling lives? My answer to this question is we should help as much as is practically possible, and pharmaceutical interventions are as important as educational ones.
Surely, no one would argue that we should not take advantage of modern surgical techniques to correct cardiac defects in babies with Down syndrome? This has clear benefits - as is evidenced by marked increases in survival and longevity among people with Down syndrome. I do not think most people would argue that we should not treat an underactive thyroid or a hearing loss. Few would argue that we should not take advantage of therapies and early interventions that can improve cognition and learning outcomes for people with Down syndrome. Some of these therapies and interventions target specific brain functions in ways similar to these new pharmaceutical approaches. For example, researchers at Down Syndrome Education International recently completed a trial of computer-based memory training.[11] This intervention is designed to improve specific cognitive functions by exercising them in ways that are hoped to leave lasting benefits after the training period, presumably by altering neurological structures in ways not dissimilar from how pharmaceutical therapies might be expected to work. New learning is stored by changing the structure and function of neurons. That said, there are challenges that are unique to biomedical research and pharmaceutical therapies, and they are important. First, much of what we understand about the biochemistry and neurological effects of extra copies of genes found on chromosome 21 is based on studies of mouse models with generally only limited correlations to observations in humans.[13] Just because a mouse model exhibits certain neurological features does not prove that the same features are present in people with Down syndrome. Just because a treatment works in a mouse does not mean it will work in humans. In fact, the odds are it will not: biomedical research is littered with examples of promising findings in mice that are then not replicated in humans. Second, the brain is complex and interconnected. Just because we have evidence about what we are doing in a particular part of the brain does not mean that we understand how we may impact other parts of the brain. It is quite possible that efforts to normalize neurological function in one part of the brain could adversely affect function in another part of the brain. As I commented earlier, one of the promising features of the compounds being investigated by Roche is that they target the activity of receptors thought to be most active in one part of the brain. However, we cannot rule out the possibility that pharmaceutical therapies targeting one aspect of cognitive function may have unintended consequences on others. These effects may be modest and difficult to detect, and we may find ourselves having to accept trade-offs - as we do for most medicines. Pharmaceutical therapies may also have more serious side effects. I have already touched on risks of elevated anxiety and convulsions relating to some compounds used with mouse models. Obviously, we need to be certain that these compounds are safe in humans. I hope that my brief review of Down syndrome research and the promise and the challenges of recent advances in proposed pharmaceutical therapies for learning and memory has been helpful for setting the scene. I would like to conclude this talk by looking ahead. Trials are beginning to evaluate some of these therapies with people with Down syndrome. What will be involved in these trials and how should the Down syndrome community be engaged? These trials will take time.
First, preliminary trials must assess the safety of a compound - usually with relatively small numbers of people. If these are successful, larger trials can take place to evaluate if the compound has the desired effect and to further monitor safety. If these are positive, then full-scale clinical trials can take place with larger numbers of patients. The entire process can take many years. These trials will require extensive collaboration: collaboration between scientists working in different fields, collaboration between clinicians and researchers, and collaboration between the Down syndrome community and researchers. These trials will likely take place in multiple countries, so these collaborations will need to be international. The Down syndrome community will need to work with researchers to define the outcomes that matter. Developmental scientists and clinical practitioners experienced at working with people with Down syndrome will need to collaborate with biomedical researchers to develop the measures and assessment protocols necessary to accurately assess these outcomes. This is not a trivial undertaking. There are substantial challenges involved in developing assessments that are reliable and sensitive enough to demonstrate treatment effects - and this applies to educational and pharmaceutical interventions alike, so developing better measures will help us all. It can be argued that what ultimately matters is real-world benefit rather than performance on laboratory measures, and I would agree with this argument. Our goal is to achieve real-world improvements in the lives of people with Down syndrome. Just because a pharmaceutical therapy or an educational intervention improves a person's ability to recall a sequence of digits or symbols, it does not necessarily mean that they will develop better language skills or learn more in the classroom. These are the ultimate measures of success. The Down syndrome community must be involved in setting these goals. What levels of gain in what functions would be sufficient to warrant the use of pharmaceutical therapies? Only people with Down syndrome and their families can answer these questions. Developmental and learning outcomes are influenced by the type, intensity and quality of the therapies and education provided to people with Down syndrome.[10, 21-23] Evaluating a pharmaceutical therapy with people with Down syndrome receiving poor-quality support may find no effect, whereas the same therapy might be additionally beneficial for those receiving higher-quality support. Pharmaceutical trials will need to take these issues into account. It may also be that pharmaceutical therapies are most effective when combined with targeted developmental interventions, and perhaps more so during specific periods of development. We should also expect therapies to work for some, but not all, individuals. Determining the factors influencing response to intervention will be important. Particular health problems, sleep disturbance and additional diagnoses may all play roles. The community will need to consider all of these issues and weigh the relative risks and benefits of differing therapies. Finally, the international Down syndrome community must play a crucial role in providing clear and balanced information about this research to families and people with Down syndrome. We should be engaged in recruiting study participants, and I would encourage participation in any properly conducted and regulated scientific studies. However, we have a responsibility to provide accurate information, and to be cautious about the eventual benefits that this research may lead to.
I look forward to exploring these issues with you all during the course of this meeting.

Professor Sue Buckley has provided and continues to provide consulting services to F. Hoffman-La Roche Ltd., and receives compensation for these services. Down Syndrome Education International has been engaged to provide advisory services to F. Hoffman-La Roche Ltd., and received compensation for out-of-pocket expenses relating to these services. These disclosures were correct at the time of publication.

Cohen, W. I. and the Down Syndrome Medical Interest Group. (1999). Health Care Guidelines for Individuals with Down Syndrome. Down Syndrome Quarterly, 4(3).
Bennett, S., Holmes, J. and Buckley, S. (2013). Computerized memory training leads to sustained improvement in visuo-spatial short term memory skills in children with Down syndrome. American Journal on Intellectual and Developmental Disabilities (in press).
Burgoyne, K., Duff, F. J., Clarke, P. J., Smith, G., Buckley, S., Snowling, M. J. and Hulme, C. (2012). Reading and Language Intervention for Children with Down Syndrome: Teacher's Handbook. Cumbria, UK: Down Syndrome Education International.
https://www.down-syndrome.org/en-us/papers/2013/roche-global-advocacy-group/
A Search Service for Abbreviation / Long Form
Abbreviation: GHQ
Long Form: General Health Questionnaire
Related PubMed/MEDLINE Info. Total Number of Papers: 926

No. | Year | Title | Co-occurring Abbreviation
1 | 2022 | DUbbing Language-therapy CINEma-based in Aphasia post-Stroke (DULCINEA): study protocol for a randomized crossover pilot trial. | BDAE, CAL, DULCINEA, WAB-R, WP
2 | 2022 | The association of plant-based dietary patterns and psychological disorders among Iranian adults. | FFQ, HADS, hPDI, PDI, uPDI
3 | 2021 | A community-based study of prevalence and functional status of major depressive disorder in an industrial area. | FSQ, MMSE, PHQ-9
4 | 2021 | Assessing the perceived changes in neighborhood physical and social environments and how they are associated with Chinese internal migrants' mental health. | ---
5 | 2021 | Co-supplementation of camelina oil and a prebiotic is more effective for in improving cardiometabolic risk factors and mental health in patients with NAFLD: a randomized clinical trial. | CSO, DASS, MDA, TAC
6 | 2021 | Coffee consumption and psychological wellbeing among Japanese auto factory workers. | ---
7 | 2021 | Combined Effects of Mother's, Father's and Teacher's Psychological Distress on Schoolchildren's Mental Health Symptoms. | OR, SDQ
8 | 2021 | Consumption of Dietary Fiber in Relation to Psychological Disorders in Adults. | FFQ, HADS, OR, OR, OR
9 | 2021 | Dietary intake of branched-chain amino acids in relation to depression, anxiety and psychological distress. | BCAAs, HADS
10 | 2021 | Dietary methyl donor micronutrients intake in relation to psychological disorders in adults. | HADS, MDMS
11 | 2021 | Effect of probiotics supplementation on disease progression, depression, general health, and anthropometric measurements in relapsing-remitting multiple sclerosis patients: A systematic review and meta-analysis of clinical trials. | EDSS, RRMS
12 | 2021 | Effectiveness of a group intervention to reduce the psychological distress of healthcare staff: a pre-post quasi-experimental evaluation. | NHS
13 | 2021 | Effects of aerobic exercise training on mental health and arterial stiffness in middle-aged and older adults. | ---
14 | 2021 | Factors associated with carer psychological and physical health during end-of-life caregiving: an observational analysis of a population-based post-bereavement survey of carers of people with cancer. | ---
15 | 2021 | Factors associated with perceived stress of clinical practice among associate degree nursing students in Taiwan. | CART
16 | 2021 | Gender differences in unpaid care work and psychological distress in the UK Covid-19 lockdown. | ---
17 | 2021 | Heterogeneity in the association between youth unemployment and mental health later in life: a quantile regression analysis of longitudinal data from English schoolchildren. | ---
18 | 2021 | Importance of personality and coping expectancy on patient-reported hearing disability, quality of life and distress level: a study of patients referred to an audiology service. | APHAB, EORTC, HL, PTA, QoL, TOMCATS, WHO
19 | 2021 | Individual-focused occupational health interventions: A meta-analysis of randomized controlled trials. | MBI, PSS, RCTs, STAI
20 | 2021 | Internalized Stigma in Pediatric Psoriasis: A Comparative Multicenter Study. | DLQI, PHS, PISS
21 | 2021 | Intersectional inequalities in mental health across multiple dimensions of inequality in the Swedish adult population. | ---
22 | 2021 | Irrational Beliefs and Their Role in Specific and Non-Specific Eating Disorder Symptomatology and Cognitive Reappraisal in Eating Disorders. | ABS-2, BMI, ED, EDI-3, ERQ, IBs
23 | 2021 | NGS-based mtDNA Profiling Could Reveal Genetic Alterations in Schizophrenia. | mtDNA, PANSS
24 | 2021 | Prevalence and Associated Factors of Psychological Distress and Burnout among Medical Students: Findings from Two Campuses. | CBI, CGPA
25 | 2021 | Prevalence of Sexual Dysfunction and Related Risk Factors in Men with Multiple Sclerosis in Iran: A Multicenter Study. | EDSS, IIEF, MS, MSHQ, SD, SQOL-M
26 | 2021 | Psychological aspect of common people during lockdown. | ---
27 | 2021 | Psychological profile of multi drug resistance TB patients: A qualitative study from a Tertiary care Centre of Kolkata. | DR-TB, TB
28 | 2021 | Psychometric Properties of the Abdominal Pain Index (API) in the Iranian Adolescent Population. | API, MPQ
29 | 2021 | Quality of life and other psychological factors in patients with tooth wear. | BEWE, CS, OIDP
30 | 2021 | Red and White Meat Intake in Relation to Mental Disorders in Iranian Adults. | FFQ, HADS, OR
31 | 2021 | Relationship Between Achievement Motivation, Mental Health and Academic Success in University Students. | AMT
32 | 2021 | Religiosity and mental wellbeing among members of majority and minority religions: findings from Understanding Society, The UK Household Longitudinal Study. | SWEMWBS
33 | 2021 | Sleep quality as a mediator of the association between coping styles and mental health: a population-based ten-year comparative study in a Chinese population. | ---
34 | 2021 | The first wave of the COVID-19 pandemic and its impact on socioeconomic inequality in psychological distress in the UK. | UKHLS
35 | 2021 | The psychological impact of deprivation in regions affected by conflict: a multilevel analysis of a cross-sectional survey in the occupied Palestinian territory. | ---
36 | 2021 | The relationship of supportive roles with mental health and satisfaction with life in female household heads in Karaj, Iran: a structural equations model. | ---
37 | 2021 | Time trend analysis of social inequalities in psychological distress among young adults before and during the pandemic: evidence from the UK Household Longitudinal Study COVID-19 waves. | ---
38 | 2021 | Which is most important for mental health: Money, poverty, or paid work? A fixed-effects analysis of the UK Household Longitudinal Study. | CMD
39 | 2021 | [DISEASE SEVERITY AND PSYCHOLOGICAL HEALTH AMONG POLIOMYELITIS PATIENTS IN JERUSALEM - 10 YEARS FOLLOW-UP STUDY]. | IPPS
40 | 2020 | A New Self-Reported Assessment Measure for COVID-19 Anxiety Scale (CDAS) in Iran: A Web-Based Study. | CDAS, CDAS
41 | 2020 | Antenatal depression and anxiety and early pregnancy BMI among White British and South Asian women: retrospective analysis of data from the Born in Bradford cohort. | ---
42 | 2020 | Assessment of Quality of Life Among Patients After Lung Transplantation: A Single-Center Study. | DLT, QoL, SGRQ, SLT, WHOQOL-BREF
43 | 2020 | Associations between diet and mental health using the 12-item General Health Questionnaire: cross-sectional and prospective analyses from the Japan Multi-Institutional Collaborative Cohort Study. | ---
44 | 2020 | Gender difference in the association between Framingham Risk Score with cardio-metabolic risk factors and psychological distress in patients with metabolic syndrome. | CVD, FRS, FSG, HDL, SBP
45 | 2020 | Health related quality of life in chronic kidney disease; a descriptive study in a rural Sri Lankan community affected by chronic kidney disease. | CKD, CKDu, HRQoL, IQR, PCS, SD
46 | 2020 | Legume and nut consumption in relation to depression, anxiety and psychological distress in Iranian adults. | HADS
47 | 2020 | Living or deceased-donor kidney transplant: the role of psycho-socioeconomic factors and outcomes associated with each type of transplant. | ---
48 | 2020 | Mental Health Problems and Their Association With Internet Use in Medical Residents. | GPIUS2, MHP, PHQ, RSES
49 | 2020 | Mindfulness versus Physical Exercise: Effects of Two Recovery Strategies on Mental Health, Stress and Immunoglobulin A during Lunch Breaks. A Randomized Controlled Trial. | IgA, MM, PE, PSQ
50 | 2020 | Optimism as a key to improving mental health in family caregivers of people living with Alzheimer's disease. | AD, LOT-R, ZBI
51 | 2020 | Paternal Psychological Stress After Detection of Fetal Anomaly During Pregnancy. A Prospective Longitudinal Observational Study. | EDPS, IES
52 | 2020 | Personality and educational level determine self-reported health-related quality-of-life and distress in patients with renal tumors awaiting radical surgery. | HRQoL
53 | 2020 | Predictors of mental health problems in formal and informal caregivers of patients with Alzheimer's disease. | AD, DAQ, GSES, SCQ, SSQ
54 | 2020 | Prevalence of Workaholism Among Egyptian Healthcare Workers With Assessment of Its Relation to Quality of Life, Mental Health and Burnout. | HCWs, MBI, PCQ, QoL
55 | 2020 | Psychological and Personality Characteristics of Applicants for Facial Cosmetic Surgery. | NEO-FFI
56 | 2020 | Psychometric evaluation of a questionnaire for measuring factors related to mental health of women in an earthquake. | EFA, WHOQOL-BREF
57 | 2020 | Resemblance and clustering of mother's and father's psychopathology levels among Chinese parents of schoolchildren with psychiatric disorders. | LCD, PD
58 | 2020 | Sleep Disorders and Mental Health in Menopausal Women in Tehran. | BMI, OSA
59 | 2020 | Specific symptoms of the General Health Questionnaire (GHQ) in predicting persistence of psychological distress: Data from two prospective cohort studies. | BHPS, UKHLS
60 | 2020 | The Impact of Spousal Migration on the Mental Health of Nepali Women: A Cross-Sectional Study. | BDI, CD-RISC
61 | 2020 | To study the impact of diabetic retinopathy on quality of life in Indian diabetic patients. | CSCL, DM, DR, NPDR, PDR, QoL, QoLID, RetDQoL
62 | 2019 | A Pilot Study of the Relationship between Diet and Mental Health in Community Dwelling Japanese Women. | BDHQ
63 | 2019 | Adherence to Mediterranean dietary pattern is inversely associated with depression, anxiety and psychological distress. | HADS, OR
64 | 2019 | Adherence to the MIND diet and prevalence of psychological disorders in adults. | DS-FFQ, HADS
65 | 2019 | Comparison of cognitive flexibility, appropriate risk-taking and reaction time in individuals with and without adult ADHD. | ADHD, CAARS, CFI, IGT, WURS
66 | 2019 | Correlates of mental disorders among minority Arab adolescents in Israel: results from the Galilee Study. | FDI
67 | 2019 | Does eating fruit and vegetables also reduce the longitudinal risk of depression and anxiety? A commentary on 'Lettuce be happy'. | ---
68 | 2019 | Effectiveness of Cardiac Rehabilitation on Health-related Quality of Life in Patients with Myocardial Infarction in Pakistan. | HRQoL, MI, SRH
69 | 2019 | Effectiveness of Solution-Focused Group Counseling on the Mental Health of Midwifery Students. | SFGC
70 | 2019 | Emotional Well-being and Stress Among Radiologic Science Students. | ---
71 | 2019 | Empirically derived food-based dietary inflammatory index is associated with increased risk of psychological disorders in women. | FDII, HADS
72 | 2019 | Explanation of somatic symptoms by mental health and personality traits: application of Bayesian regularized quantile regression in a large population study. | EFA, FGIDs, HADS
73 | 2019 | French adaptation of the PO-Bado short form, an interview-based expert rating scale for distress screening. | HADS
74 | 2019 | General health, sense of coherence and coping styles in parents participating in Workshops for Parents of Hyperactive Children. | CISS
75 | 2019 | Impact of pharmaceutical care on mental well-being and perceived health among community-dwelling individuals with type 2 diabetes. | VAS
76 | 2019 | Mental Health Before and After Retirement-Assessing the Relevance of Psychosocial Working Conditions: The Whitehall II Prospective Study of British Civil Servants. | ---
77 | 2019 | Posttraumatic stress disorder and psychosocial difficulties among children living in a conflict area of the Southeastern Anatolia region of Turkey. | PTSD, SDQ
78 | 2019 | Prevalence of problematic internet use disorder and associated risk factors and complications among Iranian university students: a national survey. | OR, PIU
79 | 2019 | Province-Level Prevalence of Psychiatric Disorders: Application of Small-Area Methodology to the Iranian Mental Health Survey (IranMHS). | HB, IranMHS
80 | 2019 | Psychological morbidity in Nepali cross-border migrants in India: a community based cross-sectional study. | aPR
81 | 2019 | Psychological well-being and green tea consumption are associated with lower pentosidine serum levels among elderly female residents in Japan. | AGE, PEN
82 | 2019 | Quality of life and psychological morbidity in patients with superficial cutaneous dermatophytosis. | DLQI, QoL
83 | 2019 | Sexual orientation and poor psychological health: a population-based study. | ---
84 | 2019 | The Effect of Hajj Trip on Mental Health: A Longitudinal Study. | ---
85 | 2019 | The prevalence of chronic pain and its associated factors among Saudi Al-Kharj population; a cross sectional study. | CP
86 | 2019 | The relationship between maternal mental health and communication skills in children in Shiraz, Iran. | FBC, MMH
87 | 2019 | Trajectories of multisite musculoskeletal pain in midlife: Associations with common mental disorders. | ---
88 | 2019 | [Memory complaints: mental health, diseases, pain and loneliness. A population study in the city of Madrid]. | SMC
89 | 2018 | A core outcome set for aphasia treatment research: The ROMA consensus statement. | COS, ROMA, WAB-R
90 | 2018 | A cross-sectional exploratory analysis between pet ownership, sleep, exercise, health and neighbourhood perceptions: the Whitehall II cohort study. | CASP, CES-D
91 | 2018 | Association between dietary inflammatory index and psychological profile in adults. | DII, DS-FFQ, HADS
92 | 2018 | Changes in quality of life and functional capacity after lung transplantation: A single-center experience. | HRQoL, LT, PCS and MCS, SGRQ
93 | 2018 | Cognitive ability as a moderator of the association between social disadvantage and psychological distress: evidence from a population-based sample. | SF-12
94 | 2018 | Comparisons between different elements of reported burden and common mental disorder in caregivers of ethnically diverse people with dementia in Trinidad. | ZBI
95 | 2018 | Consumption of fruit and vegetables in relation with psychological disorders in Iranian adults.
FFQ , HADS 96 2018 Effect of 1-month Cyclic Meditation on Perceived Stress, General Health Status, and Cardiovascular Parameters in School Teachers. BP , CM , PSS 97 2018 Effectiveness of psychosocial interventions in abused children and their families. SDQ 98 2018 Greek Version of the Connor-Davidson Resilience Scale: Psychometric Properties in a Sample of 546 Subjects. CD-RISC , EFA , GAF , PSS , WHOQOL 99 2018 Health-related life satisfaction and its influencing factors: A cross-sectional study in China. --- 100 2018 Mechanical sensitivity and psychological factors in patients with burning mouth syndrome. BMS , FPT , QST ,
https://allie.dbcls.jp/pubmed/GHQ;General+Health+Questionnaire.html
The texts used in this course all present arguments in different ways. Argument texts are common across disciplines: for example, science texts will provide evidence to support theories, while texts in the social sciences will debate the theoretical aspects of a range of topics. Whether you are studying humanities, social sciences, science or technology, you will need to judge whether a text is an appropriate and reliable source. Some texts will be more factual than others, and some will attempt to influence your opinions. The purpose of the questions you ask in this course is to evaluate what you are reading. As you do the activities you will develop your ability to read critically by asking questions about:
- the source of the text and the status of the author
- which subject area the text is from, and your own knowledge and opinions on the subject
- the author's beliefs and attitudes and how these are expressed in the text
- what the text says and does, and how
- what the author's purpose is in writing
In this free course you will focus on how to be a critical reader. Reading critically is an essential skill at any level of education. It means being aware of your own purposes and opinions as you read and being able to recognise the writer's purposes and opinions in their writing. After studying this course, you should be able to:
- consider the importance of examining attitudes to texts
- understand the organisation of argument texts
- distinguish between facts and opinions in texts
- examine hedging as a technique used by writers to express opinions and avoid making unsupported generalisations
- consider the importance of reliable evidence to support claims.
https://teachme.ng/course/how-to-be-a-critical-reader/?c=3658&u=1
Retirement doesn't always mean that you need to stop living a meaningful life. Many people retire young to live a life pursuing their biggest passions and interests. In fact, given the cost of living and inflation rates in Singapore, you need to plan your retirement today to secure your future. As you start planning your finances for the post-retirement period, keep these tips in mind.
1. Know Your Priorities
To make sound money management decisions, you need to know what your priorities are now. Ask yourself: Do you want to travel around the globe? Do you want to spend more time with your grandchildren? Do you want to volunteer at social and community events? Knowing your priorities can help you devise a budget and manage your savings and income to achieve them. You might even avoid overspending and put the money towards the things you want to accomplish during your retirement.
2. Draw Up a Budget for Your Retirement
Don't wait until retirement to figure out your expenses. In Singapore, the cost of living is high, and you can't expect it to decrease by the time you retire. The cost of medical bills and of filling your free time will only go up in the future. Draw up a budget that includes normal living expenses, old-age contingency costs, and an emergency buffer to prepare for the post-retirement period.
3. Identify Possible Cuts in Your Spending
While you prepare and list the budget items, you may be able to identify areas where you are spending more than you expected. Recognizing possible cuts in the budget can help you manage your finances well during retirement.
4. Look for Ways to Increase Your Income
You don't always have to focus only on cutting down unnecessary expenses. Making a budget can also reveal ways to increase your income. If you haven't retired yet, you may still have a few working years left, so track down your personal or workplace pensions and boost your sources of income before retirement.
5. Review the Performance of Your Savings and Investments
It is always good to keep track of the savings you could draw on in an emergency. When you retire, not all income sources start at the same time, so create an income timeline that lays out all your sources of income and lets you fill savings gaps during retirement. Also consider the estimated income taxes that will be owed in retirement. List all the insurance policies that need to be reviewed and categorize them by type (property and casualty, life, health, disability, and so on). This way, you'll know which of your life's expenses will be covered in the future.
6. Think of Your Home as a Source of Income
Many retirees sell the home in which they raised their families and buy something smaller but still spacious for themselves. If you own a home and it represents a large proportion of your wealth, you might want to consider having it valued. Before you sell or rent your house, be sure that you have done the math and clearly understand the income and expenses that might come with it.
7. Check the Maintenance Fees
Some costs will continue after you retire. Expenses such as regular car maintenance, interior repairs to your house, and even health checkups will arise during your retirement. You need to set aside a few extra hundred dollars to meet these maintenance costs.
Don't forget to include these extra maintenance and healthcare costs in your retirement budget.
8. Be Tax Efficient with Withdrawals
During retirement, every penny counts when managing your finances, and every retirement account might be taxed differently. Consult a financial professional to do your tax planning; these professionals can help you with planning, filing, and handling returns.
9. Check Your Risk Appetite
Review your risk appetite by taking a look at the risk of the investments you made while you were still earning. If you think you don't have enough retirement savings, consider letting go of investments that are not generating enough return. Focus on investments with a level of risk you can handle during retirement and that provide stable returns. You don't want to face financial risks at a time when you'll have limited resources to live on.
10. Diversify Your Investments
The key to growing your retirement funds is to diversify the investments in your portfolio. There are several instruments you can invest in, such as fixed income, stocks, bonds, mutual funds, ETFs, REITs, CPF LIFE, annuities, or retirement plans offered by insurance companies. A more diversified portfolio can also be more challenging to manage, so consider hiring a financial professional to help you make informed decisions and free yourself from unnecessary stress.
Retirement is not the end of the road for saving or managing money. Consider these tips to make the best retirement plan for you, choose the pension option that suits you and your family, and invest in instruments that can bring income and growth in your retirement years.
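For readers who want to put rough numbers on tip 2, here is a minimal illustrative sketch in Python. Every figure in it (current expenses, inflation rate, years to retirement, buffer percentages) is an assumption chosen only for the example, not financial advice; substitute your own numbers.

# Illustrative only: all figures below are assumptions, not financial advice.
def future_annual_budget(current_expenses, annual_inflation, years_to_retirement):
    """Project today's annual expenses forward at a constant assumed inflation rate."""
    return current_expenses * (1 + annual_inflation) ** years_to_retirement

current_expenses = 40_000   # assumed annual expenses today, in S$
inflation = 0.025           # assumed long-run inflation rate
years = 20                  # assumed years until retirement

projected = future_annual_budget(current_expenses, inflation, years)

# Assumed buffers for old-age contingencies and emergencies (see tip 2).
contingency_buffer = 0.15   # 15% of the projected budget
emergency_buffer = 0.10     # 10% of the projected budget
total_budget = projected * (1 + contingency_buffer + emergency_buffer)

print(f"Projected annual expenses at retirement: S${projected:,.0f}")
print(f"Budget including contingency and emergency buffers: S${total_budget:,.0f}")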
https://www.accreditloan.com/10-tips-to-look-after-your-finances-after-retirement/
Are you familiar with the term 'shadow work'? It's been around for a while, but seems to be gaining in prominence—perhaps because the need for shadow work is growing. Shadow work is a term that comes to us from the New Age movement. In the personal development world it's sometimes referred to as 'inner work' or, more colloquially, as 'peeling the layers of the onion'. In terms of emotional mechanics, it's the process of making our unconscious conscious. People often talk about becoming a 'conscious individual', yet if one asks, "Conscious of what?" things suddenly get complicated. What does that actually mean? To be a conscious individual means to not have any unconscious programming causing behaviour damaging to themselves or others.
Sigmund Freud
The concept of the unconscious mind evolved during the late 19th century and was popularised by Sigmund Freud, the 'father of psychoanalysis', in the early 20th. According to Wikipedia, the unconscious contains "repressed feelings, automatic skills, subliminal perceptions, thoughts, habits, and automatic reactions, and possibly also complexes, hidden phobias and desires." That's quite some territory. And insofar as people think about the unconscious at all, they tend to think of it as normal, part and parcel of being human. Doing shadow work challenges this assumption. Repressed feelings, complexes and hidden phobias will all rise to the surface for clearing during this process. The common denominator among repressed feelings, complexes and hidden phobias is that they all have unprocessed emotional pain associated with them, pain which must be cleared—i.e. felt, accepted and acknowledged—before it, and the associated complex or phobia, can be released. I've written about this in more detail in The mechanics of emotional pain, Part I and Part II.
The clearing cycle
The contents of our unconscious can be determined by paying close attention to the things that surface in our lives and bother us. I've written about that in Our lives are constant feedback loops. Shadow work involves identifying our repressed feelings, clearing them, rinse and repeat. As soon as one phobia is cleared, another begins to manifest in our lives (i.e. plague us) as it rises to the surface. Clearing the unconscious thus becomes something of a rollercoaster ride as we gain altitude—and wellbeing—by clearing one unconscious issue only to plunge back down into turmoil when the next, even deeper phobia or complex surfaces for clearing. The trick is to pay attention to what's happening and catch each new manifestation of our unconscious as soon—and with as little grief—as possible. Shadow work is perhaps the key component of the New Age Ascension process.
Saving grace
In my experience, resisting this process is useless. The sooner you can accept that the entirety of your unconscious is working its way into the light of day, the sooner you will get through—and the sooner grace descends. Grace is the, well, saving grace of shadow work. You are transitioning from an emotionally immature individual with an unconscious containing all their repressed judgments into a clear, conscious individual fully in control of their emotional reactions. The process will continue until it's done. Yes, it's going to take a lot longer than you ever wanted or feared—but it's worth it. Image: Haunted by Emily Mucha on Flickr.
http://www.michaelhhallett.com/shadow-work-making-unconscious-conscious/
On the World Day for Safety and Health at Work, the Business Social Compliance Initiative (BSCI) would like to express its support for the International Labour Organisation (ILO) and all international stakeholders in their efforts to ensure safe and healthy workplaces worldwide. BSCI welcomes this day as a means to raise awareness globally and promote the prevention of occupational accidents and illnesses. Tragic incidents such as the Rana Plaza building collapse, the first anniversary of which we have recently marked, stress the importance of ensuring workers' safety and protection, and the complexities and challenges this entails. With over a decade of experience supporting companies in their commitment to improve working conditions in their global supply chains, BSCI has clearly identified Health and Safety as a matter that remains crucial to its participants on an ongoing basis. This area was the third most common non-compliance among audits conducted in 2013, after compensation and working time requirements. Addressing these non-compliances requires the integration of a preventive culture at national and workplace level. As part of its comprehensive approach to social compliance, BSCI brings this knowledge to participants and producers by organising regular trainings worldwide. These activities are aimed at obtaining first-hand feedback on the issues factory managers face in the area of Health and Safety, and at offering guidance on the best management practices to address them. Details of BSCI trainings and upcoming workshops are available on the BSCI website. The promotion of safe workplaces is a global responsibility that calls for united action, and BSCI remains committed to working in partnership to reduce risks in factories and farms.
https://www.amfori.org/news/bsci-strives-safe-and-healthy-workplaces-worldwide
This report highlights how the global food system is the biggest driver of destruction of the natural world, stating that a shift to predominantly plant-based diets is crucial in halting the damage. It points to recent studies showing that the destruction of biodiversity is accelerating around the world: "The global rate of species extinction today is orders of magnitude higher than the average rate over the past 10 million years. Over the past 50 years, the conversion of natural ecosystems for crop production or pasture has been the principal cause of habitat loss, in turn reducing biodiversity." It's worth remembering that 77–83% of agricultural land is used for animal agriculture, even though it produces only 18% of the food calories consumed. Just 1,000 years ago, only 4 million square kilometers – less than 4% – were used, compared with today's 51%. The report gives a thorough explanation of how farming destroys ecosystems and biodiversity. It notes that: "Current food production depends heavily on the use of inputs such as fertilizer, pesticides, energy, land and water, and on unsustainable practices such as monocropping and heavy tilling. This has reduced the variety of landscapes and habitats, threatening or destroying the breeding, feeding and/or nesting of birds, mammals, insects and microbial organisms, and crowding out many native plant species." We know that food systems are also major contributors to global greenhouse gas emissions, driving climate change, which further degrades habitats and causes species to disperse to new locations. "In turn, this brings new species into contact and competition with each other, and creates new opportunities for the emergence of infectious disease." In its analysis of the underlying paradigm driving our food system, the report states that: "Our food system has been shaped over past decades by the 'cheaper food' paradigm. Policies and economic structures have aimed to produce ever more food at ever lower cost. Intensified agricultural production degrades soils and ecosystems, driving down the productive capacity of land and necessitating even more intensive food production to keep pace with demand. Growing global consumption of cheaper calories and resource-intensive foods aggravates these pressures." While this is true, multiple other factors have contributed, including fossil fuel use, inappropriate technology (ploughing, pesticides, etc.), population growth, and the marketing of meat and processed food. A deeper understanding of our dysfunctional attitudes is presented in the wonderful 'Rethinking Food and Agriculture', which states that we need "radical and fundamental changes in our anthropocentric, speciesist and human supremacist worldview and our attitudes and relationship toward the living world, including our relationships with other animals." By changing the food system we create a supportive environment for these attitudes to change. The report states the simple truth that: "Without reform of our food system, biodiversity loss will continue to accelerate. Further destruction of ecosystems and habitats will threaten our ability to sustain human populations." It identifies three principal levers:
- Firstly, global dietary patterns need to converge around diets based more on plants. Such a shift would also benefit the dietary health of populations around the world, and help reduce the risk of pandemics. Global food waste must be reduced significantly. Together, these measures would reduce pressure on resources, including land, by reducing demand.
- Secondly, more land needs to be protected and set aside for nature. The protection of land from conversion or exploitation is the most effective way of preserving biodiversity, so we need to avoid converting more land for agriculture. Restoring native ecosystems on spared agricultural land offers the opportunity to increase biodiversity.
- Thirdly, we need to farm in a more nature-friendly, biodiversity-supporting way, limiting the use of inputs and replacing monoculture with polyculture farming practices.
The most important point in the report is that: "These three levers are in part interdependent. Most notably, the protection and setting aside of land for nature and the shift to nature-friendly farming both depend on dietary change, and will become increasingly difficult to achieve if continued growth in food demand exerts ever-growing pressure on land resources." The report also outlines different outcomes for ecosystems and climate: business-as-usual will leave no space for other species and no allowable emissions for any other sector, while a plant-based food system and tackling food waste would provide huge benefits. The report's authors believe that this year is a unique opportunity to reboot our food system. The UN Global Biodiversity Conference takes place in Kunming, China from May 17 – 30 (COP15). The first UN Food Systems Summit (UNFSS) takes place in New York in September. The next UN Climate Change Conference takes place in Glasgow from Nov 1 – 12 (COP26). There will also be huge post-COVID-19 pandemic investment to support an economic recovery. Will these investments focus on an equitable, sustainable and compassionate future? Finally, the report outlines three steps that can be taken this year to bring about a healthy, biodiversity-supporting food system:
- A plant-based food system and a reduction in food waste are critical to breaking the system lock-ins that have driven the intensification of agriculture and the continued conversion of native ecosystems to crop production and pasture.
- Embed a 'food systems approach' across other key international processes, recognizing that the health and prosperity of people, nature and other species are inter-related.
- Think globally and act locally: "Global guidelines in policy areas such as responsible investment, dietary change and nature-based climate change mitigation solutions will be needed to guide national-level action plans that can collectively deliver transformative change to the global food system."
The report is very worthwhile and an important addition to our understanding of how a plant-based food system can deliver our global and local health, climate and biodiversity goals, and of the actions we can take now to bring it about. In Ireland, we are advocating for this transition, which is so vital to all species, on our Nature Rising website.
https://vegansustainability.com/a-review-of-the-report-food-system-impacts-on-biodiversity-loss/
Introduction and executive summary Debate is heating up about the Trans-Pacific Partnership (TPP)—a proposed new trade agreement. At EPI we urge policymakers to assess every issue they approach—whether macroeconomic stabilization policy, tax and budget policy, regulatory policy, labor policy, or yes, trade policy—on the grounds of whether or not it will boost wages for the vast majority of Americans. This is an important benchmark because most Americans’ hourly pay has not been boosted at all from the economy’s growth over the past generation (Figure A depicts the growing wedge between hourly pay for the vast majority and economy-wide productivity). In regard to the TPP and wages for the vast majority of Americans, there are two important concerns in play. The first is whether it will help generate aggregate demand (i.e., increase spending by households, governments, and businesses) and tighter labor markets in coming years. The second is whether it will lead to a trade policy that boosts low- and moderate-wage workers’ power to bargain for higher wages—instead of continuing to favor corporate managers and capital owners. This briefing paper makes the following points: Unless there is a strong currency provision in the TPP, reductions in the U.S. trade deficit—the most promising route back to sustainable full employment—will be harder to obtain following its passage. Recent claims that a strong currency provision in the TPP would be impossible to craft without impinging on the Federal Reserve’s ability to undertake expansionary monetary policy to fight recessions are clearly wrong. Expanded trade, particularly with trading partners that are poorer and more labor-abundant than the United States, is likely to lower the wages of most American workers. While expanded trade is generally “win-win” at the country level, expanded trade redistributes so much income within countries that it’s possible to make the majority of residents worse off—and this is indeed the likeliest scenario for the United States. TPP proponents who highlight the treaty’s estimated net national gains while ignoring its likely regressive distributional outcomes are hiding the ball from policymakers. The net national gains from trade have the exact same root as the regressive distributional outcomes—they both stem from reshuffling of domestic production away from labor-intensive import-competing sectors—and you cannot have one without the other. Some TPP supporters claim the agreement will be “all gain, no pain” for American workers because U.S. tariffs are already low while trading partners’ tariffs are higher. These arguments are economically incoherent. In fact, studies that show the TPP will increase overall American national income also show that it will cause substantial reshuffling of domestic production away from labor-intensive import-competing sectors. This will clearly inflict damage on large groups (probably the majority) of American workers. The wage losses to workers on the wrong end of expanded trade are almost surely larger (in percentage terms) than the estimated net national gains from the TPP. Specifically, if the TPP would indeed boost national income by 0.4 percent while boosting imports and exports by 0.6 percent of GDP by 2025, as a widely cited estimate claims, this would imply a loss to wages of non-college-educated workers of between 0.4 and 0.6 percent. 
Further, this widely cited estimate’s ratio between estimated income gains stemming from the TPP and estimated increases in trade flows is deeply inconsistent with standard models relating these two variables. It is even inconsistent with the findings of other studies on trade liberalization’s impact on incomes that show trade having particularly large income-boosting effects. These widely cited estimates of the TPP’s effect on income and trade imply that every dollar of imports spurred by reducing trade costs in the TPP will increase national income by more than a dollar. This is an extraordinarily large effect. Standard models imply that every dollar of imports boosted by reduced trade barriers should boost national incomes by only 5–10 cents. The TPP, aggregate demand, and currency provisions For most of the 30 years before the Great Recession, a majority of macroeconomists assumed that the Federal Reserve’s control of short-term interest rates would be sufficient to keep the economy at full employment. There was thus little concern about the effects of trade deficits and surpluses on the level of aggregate demand. When one assumes the economy has self-correcting properties and institutions that will ensure a rapid return to full employment in the face of any shock, this lack of concern about trade’s effects makes some sense. But this assumption should not have survived the Great Recession—the U.S. economy has been stuck well below potential for more than seven years now, even as the Federal Reserve has pursued expansionary monetary policy for much of that time. Worse, there’s evidence that the demand shortfall that has delayed a full recovery from the Great Recession could be chronic and last for years to come. Economic history argues this is not impossible: Japan has been stuck below potential for decades, and much of Western Europe suffered a decade or more of excessively high unemployment because policymakers failed to boost demand growth. In the United States, fiscal policy has been historically contractionary during the recovery from the Great Recession, and pressure is building on the Fed to be less accommodative to growth even with no sign of accelerating inflation in sight. In short, a policy-induced chronic demand shortfall (call it “secular stagnation” if you like) seems to be a distinct possibility. What does the TPP have to do with this? Once one realizes that the economy is not at full employment and that there is no automatic mechanism that can return it there quickly, then trade flows can have a powerful influence on aggregate demand. The United States has run chronic trade deficits for well over a decade. Since 2002, these deficits have been overwhelmingly driven by the conscious policy choices made by several of our major trading partners to manage the value of their currency for competitive advantage in U.S. and global markets. They buy dollar-denominated assets to boost the value of the dollar and depress the value of their own currencies. This results in cheaper imports for the United States and makes U.S. exports more expensive in global markets. Ending this currency management (or currency manipulation) by our trading partners is hence a crucially important goal for macroeconomic stabilization in coming years. Given this, a trade agreement that includes several countries that are obvious currency managers would seem like a good place to start addressing the problem. Yet there is every sign that the TPP will not include any useful currency provisions. 
The arguments against such provisions are weak. Particularly unconvincing is the argument that any such currency provision would somehow bar the Federal Reserve from undertaking expansionary monetary policy that included purchasing bonds to help the U.S. economy through a recession. To be clear, tying the Fed’s hands in fighting recessions would indeed be a bad thing, but it is clearly possible to define currency management in a way that does not constrain the Fed or any other central bank wanting to undertake similar maneuvers. Quantitative easing, or QE (the policy that some are claiming to be at risk in the future if a currency provision is added to the TPP), works through the Fed purchasing domestic assets in exchange for reserves it creates. That is, the Fed buys U.S. Treasury bonds and U.S. mortgage-backed securities (MBS) and credits the sellers with reserves held at the Fed. This has a first-order effect of lowering interest rates on Treasury bonds and MBS. These lower rates in turn do exert some downward pressure on the value of the U.S. dollar, which makes our exports cheaper and imports more expensive, thereby reducing our trade deficit. But the primary channel and primary target is the domestic interest rate, and the asset being purchased is denominated in the same currency that the nation’s monetary authority controls. Conversely, when people describe mercantilist currency management, they universally mean the purchase by a nation’s monetary authorities of foreign assets. For example, the Chinese central bank buys not Chinese bonds, but U.S. Treasury bonds and MBS. This has the first-order effect of changing the relative demand for Chinese versus U.S. assets, which moves the U.S.–China exchange rate. In short, putting limits or rules in the TPP (or some other agreement) on the extent of foreign asset purchases a nation’s monetary authorities can undertake would do nothing at all to stop initiatives like the Fed’s QE program during and after the Great Recession. The TPP, globalization, and wage growth: No such thing as an “all gain, no pain” treaty It is now widely acknowledged that globalization has played a role in the weak wage growth for most American workers and the rise in inequality that has characterized the past generation. In debates over trade agreements in the 1990s, this wage-suppressing impact of globalization was hotly disputed. But a reassessment of evidence as well as a sharp increase in imports from much-poorer nations has led most observers to agree that globalization has likely contributed to rising inequality. Despite this, however, many economists and policymakers continue to support trade agreements like the TPP. Often they argue that yes, global integration likely put downward pressure on the wages of a large swath of Americans over the past generation, but now the U.S. market is already so open to imports that further damage will be minimal. The corollary runs that because foreign markets remain less open to U.S. goods, trade agreements will just offer new opportunities for U.S. exporters. Implicit in this view is that we can enjoy the overall national income–augmenting benefits of expanded trade opportunities offered by the TPP without suffering any of the regressive redistribution that has accompanied growing globalization in the past. We can call this the “all gain, no pain” theory of the TPP. However, these are facile arguments that don’t hold up to serious scrutiny. 
Gains from trade and regressive redistribution: Two sides of the same coin To see why, it’s crucial to understand that the benefits of lower trade barriers for overall national income have the exact same root as the regressive redistribution they cause: a reshuffling of domestic production away from labor-intensive import-competing sectors and toward capital-intensive exportable sectors. The rough intuition is as follows. Trade agreements reduce domestic prices for imports—traditionally but not exclusively through cutting import tariffs (we will explain other channels a bit later). Lower prices for imports in turn pull down prices that domestic firms producing import-competing goods can charge (henceforth we label domestic import-competing goods importables). If the process stopped here then all consumers would be better off; American workers would pay a bit less for importable apparel, textiles, and electronics—and hence price-adjusted incomes would rise. But, unfortunately for most American workers, the process doesn’t stop here—there are many other moving parts. For one, while consumers benefit from lower prices for importables, producers of importables see incomes suffer, as they receive less for each unit of output they sell. For another, trade agreements see tariffs in our trading partners lowered as well, so American exporters now have expanded options for selling abroad. This actually pushes up the price of exportable goods for American consumers. Essentially American consumers now have to outbid a larger pool of potential global customers for American output in export industries. These higher prices for exportable goods actually claw back some of the overall gains to U.S. consumers from expanded trade, as higher export prices reduce price-adjusted real incomes even as lower importable prices increase these incomes. Further, the combination of lower prices for importable sectors and higher prices for exportable goods leads to resources (labor and capital) shifting from importable sectors to exportable sectors. Measured in dollar terms, the contraction of importables and the expansion of exportable sectors will balance. But each $100 of production in exportable industries requires more capital and less labor than each $100 of production in import-competing industries. How do we know this? The United States is among the most capital-abundant countries in the global economy, while many of our major trading partners are among the most labor-abundant countries. This makes capital relatively cheaper in the United States and labor relatively expensive, meaning that our comparative advantage is naturally in capital-intensive goods (which we will export) and not in labor-intensive goods (which we will import). So, this reshuffling of production away from labor-intensive industries toward capital-intensive industries reduces the demand for labor, which pulls down wages, and increases the demand for capital, which bids up its return. The key lesson here is that if trade flows are determined (even in part) by the difference in capital and labor abundance between the United States and its trading partners, then the gains from trade (lower prices caused by a decline in the price of importables) are totally inseparable from the regressive redistribution it causes. And there is ample evidence that this difference in capital and labor abundance is indeed a basis for trade flows. 
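To make the reshuffling arithmetic concrete, here is a small illustrative calculation in Python. The factor requirements are invented for the example (they are not taken from the briefing paper or from any cited model); the point is only to show the direction of the effect when $100 of production moves from a labor-intensive importable sector to a capital-intensive exportable sector.

# Hypothetical factor requirements per $100 of output (illustrative numbers only).
importables = {"labor_hours": 4.0, "capital_units": 1.0}   # labor-intensive (think apparel)
exportables = {"labor_hours": 2.0, "capital_units": 3.0}   # capital-intensive (think aircraft)

shifted_output = 100  # dollars of production reshuffled from importables to exportables

delta_labor = (exportables["labor_hours"] - importables["labor_hours"]) * shifted_output / 100
delta_capital = (exportables["capital_units"] - importables["capital_units"]) * shifted_output / 100

print(f"Change in labor demand:   {delta_labor:+.1f} hours")    # negative: labor demand falls, dragging wages down
print(f"Change in capital demand: {delta_capital:+.1f} units")  # positive: capital demand rises, bidding up its return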
This is a hugely important point: If expanded trade does lead to a reshuffling of domestic production from importables to exportables, then it will increase inequality and act as a drag on wages for a large majority of American workers. If it does not lead to such a reshuffling of domestic production, then there are no benefits to be had from it through this channel. A variation of the following argument has become ubiquitous: Trade lets us keep the good jobs in the United States and sends the bad jobs overseas. It's easy to see how this impression can be gained; in the example above, the United States is left with more exportable jobs (call them "aircraft jobs") and fewer importable jobs (call them "apparel jobs"). On average, aircraft jobs pay better than apparel jobs, so it seems as if the United States has "kept the good jobs." However, this reflects confusion between what happens to jobs and what happens to workers. More jobs are indeed located in the higher-wage sector (aircraft) post-trade, but workers receive lower wages in both sectors. The higher pay in the aircraft sector is wholly a function of the higher ratio of (high-paid) managers relative to workers employed there. Finally, most standard models that map this reshuffling of domestic production onto changes in wages for different groups of American workers likely miss important channels through which globalization impacts wages. For example, these models assume competitive labor markets, so wages for workers on the losing end of trade are only harmed by the reduction in demand stemming from the domestic reshuffling of production away from importables. But there is copious evidence that labor markets are not perfectly competitive and that bargaining power explains a good portion of wage dynamics. If expanded trade does not just reduce the demand for certain groups of workers, but also reduces their bargaining power vis-à-vis employers, this could provide a drag on wages that standard models do not pick up. Claims that the TPP will be "all gain, no pain" are economically incoherent This background should make one wary of claims that any trade agreement—and the TPP specifically—can provide the benefits of trade by boosting exports without spurring any of the redistributive effects caused by moving resources out of importable sectors. And there are further reasons to be wary about this claim. Essentially all analyses of the gains from trade agreements assume that they have no impact on the trade deficit and that the economy is operating at full employment. These assumptions are, of course, problematic for numerous reasons. These limitations are important to keep in mind when people brandish studies estimating the national income gains from agreements. What does this have to do with the argument that the TPP will be "all gain, no pain" because barriers to imports are already essentially dismantled? If it is argued that the TPP will expand exports for American business using a model that assumes balanced trade, then it is implicitly being argued that imports must rise as well. And that's exactly what models of trade agreements universally show. Thus, the argument that TPP will boost exports but not boost imports is simply wrong. Further, remember that the standard mechanism by which expanded trade leads to wage suppression is the reshuffling of domestic production—our capital-intensive sectors expand while our labor-intensive sectors contract.
Pretend for a moment we believed the claim that the direct impact of the TPP on imports is negligible because there are no more tariffs left to be cut in the United States. But remember that trade models predicting gains from the TPP assume full employment. So, if the economy is at full employment and then export opportunities expand due to the TPP, this by definition means that non-export sectors must shrink (if the economy is already at full employment, then you can only get more production in one sector—exporters—by ramping down production somewhere else). So, capital-intensive export production expands while less-capital-intensive production in other parts of the economy contracts. And the result is the same—demand for capital rises while demand for labor falls. Finally, just because tariffs in the United States are already low doesn't mean that importable industries in the United States can't suffer from a treaty like the TPP. A key point of such treaties is to remove barriers to American investors setting up production facilities in foreign countries. If this influx of foreign direct investment increases productivity in foreign firms competing with American companies, this will have the same impact as a tariff cut—the price of imports falls and production shifts out of import-competing sectors, and wages fall as a consequence. We should note an important asymmetry common in most trade agreements that the United States has signed in recent decades. These agreements tend to provide legal protections to U.S. firms that want to invest in production facilities in low-wage countries. This in turn leads to increased productivity in those countries' export sectors, which puts downward pressure on U.S. importable prices and leads to regressive redistribution within the United States. However, these same agreements often undercut possible advantages of foreign firms competing with U.S. exportables. Take software and pharmaceuticals: Foreign firms serving their home market can make larger profits if they are not forced to pay large royalties to American holders of intellectual property monopolies. Following the intellectual property harmonization that is a standard part of too many U.S. trade agreements, these foreign firms are not just unable to export output back to the United States without paying these royalties; they are often not even allowed to continue serving their own home market without making these royalty payments. This has the effect of boosting the rise in exportable prices in global markets that tends to follow tariff-cutting trade agreements. In turn, this increases the amount of domestic reshuffling of production in the U.S. economy, amplifying the regressive redistribution caused by trade. Trade, TPP, and wage suppression: How many workers, and how much money? So far we've sketched out the theory of how the same processes that lead to higher overall national incomes resulting from reduced trade barriers also lead to wage suppression for workers disproportionately represented in importable sectors. Too many of those writing about TPP and globalization assume that this logic applies only to trade-exposed sectors such as manufacturing. They then dismiss concerns regarding globalization by citing the relatively small share of U.S. workers remaining in manufacturing. But this represents a bad misunderstanding of textbook trade theory.
It is not just workers employed in importable sectors who are harmed by trade; it is also workers in the labor-market groups that are disproportionately represented in importable sectors. This means that it is not just workers employed in importable sectors such as apparel who need to worry about the wage-suppressing effects of expanded trade, but also the workers employed in non-traded sectors—such as retail—that the apparel workers could compete with for jobs if they're displaced by trade. Or, to put it more succinctly, waitresses and landscapers do not lose their jobs due to trade, but their wages are definitely hurt because they need to compete with laid-off apparel workers who were displaced by imports. When empirical investigations of trade's impact on wages for different groups of workers are undertaken, two proxies are generally used to divide those winning from those losing as a result of trade expansion: non-production and supervisory workers versus production and non-supervisory workers, and those workers with a four-year college degree versus those workers without a four-year degree. We tend to focus on the latter split—a college degree seems a better marker of the capital, skills, and credentials that form the basis of U.S. comparative advantage. It is important to note that just under 70 percent of the U.S. workforce does not have a four-year college degree, meaning that the majority of U.S. workers are likely on the losing end of expanded trade. Using standard models that capture just one channel through which trade can suppress wages—the effects of relative demands based on competitive labor markets—I have calculated in previous work (Bivens 2013) that expanded trade overall has suppressed wages sufficiently to cost a non-college-educated worker about $1,800 per year if they worked full time at the national median wage. This is not enough to explain all, or even the majority, of the overall wedge between national productivity and wages for typical workers documented in Figure A, but it is a significant amount of money. More importantly for debates about trade, much more money is lost by workers on the losing end of expanded trade than is gained by the nation overall. This is obviously intuitive: If expanded trade lowers wages for one subset of workers but raises wages for the remaining group of workers, then the net of those gains and losses must be smaller (in absolute value) than the gross losses suffered by workers on the wrong end of expanded trade. In terms of relative magnitudes, one way to think of this is that the impact of globalization since the late 1970s is roughly analogous to taking a combined $5 out of the pockets of the bottom two-thirds of workers (roughly those without a four-year college degree) and giving $6 to the top third. Note what this means for comforting claims about how the workers on the wrong end of expanded trade can be compensated by the national gains from trade: We'd have to take roughly 83 percent of the winners' gains from expanded trade—i.e., $5 out of the $6 gain—just to hold those on the losing end harmless. That is a very heavy political lift—and one unlikely to be undertaken enthusiastically by today's Congress. What will the TPP in particular (as opposed to globalization writ large) do to wage growth for the vast majority of American workers? Empirically, that's a very tough question to answer with precision.
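The relative magnitudes in the $5/$6 analogy above are easy to re-derive; the short Python sketch below simply repeats that arithmetic (no new data are involved).

# Back-of-envelope check of the redistribution analogy described above.
losses_bottom_two_thirds = 5.0   # dollars taken from the bottom two-thirds of workers in the analogy
gains_top_third = 6.0            # dollars gained by the top third

net_national_gain = gains_top_third - losses_bottom_two_thirds
compensation_share = losses_bottom_two_thirds / gains_top_third

print(f"Net national gain in this analogy: ${net_national_gain:.2f}")
print(f"Share of winners' gains needed to hold losers harmless: {compensation_share:.0%}")
# -> a $1.00 net gain, and roughly 83% of the winners' gains would have to be redistributed.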
But the first thing we should note is that the well-advertised gains to American GDP from trade agreements like the TPP are highlighting the net overall gain (i.e., that $1 net gain above) while ignoring the much larger gross losses to the majority of American workers. Estimates of the TPP’s effects on the U.S. economy, and what they mean for wages Probably the most-cited estimate of the TPP’s likely effect on American incomes and trade flows comes from Petri and Plummer (2012). They estimate that the TPP could add almost 0.4 percent to American GDP by 2025. As detailed below, this is actually a very aggressive estimate of potential growth that could be spurred by trade liberalization, especially given the modesty of the estimated impact on actual trade flows (roughly 0.6 percent of GDP). But for now we will examine its implications. Petri and Plummer (2012) do not include different factors of production (i.e., capital and labor) in their model, so they rule out distributional changes by definition. But we can tell from their description of the model that a reshuffling of domestic production is predicted by their modeling. As they say in an op-ed: Trade agreements work by helping to move people and resources toward an economy’s most productive firms and industries. … [T]he TPP would benefit America’s most competitive sectors. … The TPP’s investment and intellectual property provisions, for example, should help to make American know-how – Internet technologies, apps, movies and more – more accessible and secure in dynamic markets. … [B]y 2015 around 650,000 more people, close to 0.5 percent of the labor force, can be expected to work in export-related jobs and correspondingly fewer in less-productive import-competing jobs because of the TPP. It is very hard to read this as anything but evidence that the TPP will lead to an expansion of exportables and a contraction of importables. And since the evidence shows that American exportables are capital-intensive while American imports are labor-intensive, this will indeed spark the regressive redistribution we noted previously. Given the Petri and Plummer (2012) estimates of trade expansion, and given the clear qualitative description of their model that notes these gains come from moving resources into capital-intensive production, we can feed their predictions regarding trade flows into a standard model relating trade flows to relative wages to get an estimate of what this implies for wages (see Bivens 2013 for a description of this methodology). As we will show later, this is potentially problematic in that we are taking outputs of one model (the Petri and Plummer 2012 model) and using them as inputs into another. And the outputs of the former model are inconsistent with the model of trade, wages, and national income we are using to assess wage impacts. Specifically, the ratio of trade flows to income gains is much higher in the Petri and Plummer (2012) model than in our trade and wages model, so we can only choose one of these (i.e., either national income gains or import increases) to be the exogenous parameter in the trade and wages model. Below, we discuss results that use the import gain estimated by Petri and Plummer (2012). Next, we also discuss results that are derived by using the estimated national income gain from Petri and Plummer (2012), but which assume an import-to-income-gain ratio from another model (Anderson, Martin, and van der Mensbrugghe 2006). 
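The next subsection and Table 1 walk through these model comparisons in detail. As a rough preview of the back-of-envelope arithmetic involved, the Python sketch below converts an import increase into an implied non-college wage change using the Autor, Dorn, and Hanson (2013) per-worker import coefficient applied later in the text. The GDP and workforce figures are not independent estimates; they are simply the approximate values implied by the numbers quoted there ($124 billion is described as roughly 0.6 percent of 2025 GDP and as roughly $750 per worker).

# Rough back-of-envelope translation of an import increase into a wage effect.
# All inputs are approximations implied by figures quoted in the text, not new estimates.
gdp_2025 = 124e9 / 0.006    # ~$20.7 trillion: $124 billion is said to be ~0.6% of 2025 GDP
workforce = 124e9 / 750     # ~165 million workers: $124 billion is said to be ~$750 per worker

import_increase = 0.006 * gdp_2025             # dollars of additional imports (the Petri-Plummer figure)
imports_per_worker = import_increase / workforce

# Autor, Dorn, and Hanson (2013): each $1,000 per worker of import exposure is associated with
# a 0.82 log-point decline in weekly earnings of non-college, non-manufacturing workers.
wage_decline_log_points = 0.82 * imports_per_worker / 1000

print(f"Implied import increase per worker: ${imports_per_worker:,.0f}")
print(f"Implied non-college wage decline: {wage_decline_log_points:.2f} log points (~{wage_decline_log_points:.1f} percent)")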
Simple estimates of the relationship between trade flows, relative wages, and overall income The Petri and Plummer (2012) estimates are that trade flows will be higher by $124 billion in 2025, roughly 0.6 percent of U.S. GDP in that year. In the models used by Bivens (2013) to relate growing trade flows and relative wage changes, this amount of trade expansion predicts a decline in non-college wages of roughly 0.4 percent (and a 0.2 percent increase in college wages). How much faith should we put in this number? Not a lot—the Petri and Plummer (2012) estimates for the income-boosting effects of TPP are very high, given that the United States is starting from such a low level of tariffs and non-tariff barriers to trade. Further, their estimated ratio of overall income effects to the rise in import penetration is significantly higher than what is found in more standard models. For example, in the standard model used by Bivens (2013), a 0.6 percent increase in overall trade is only associated with an income gain of roughly 0.03 percent of GDP. Given this incongruity between the income gains and import increases in the Petri and Plummer (2012) models, one could imagine asking a different question of the trade and wages model. Specifically, one could ask the model to assess the relative wage impacts associated with an expansion of trade large enough to generate a 0.4 percent gain in overall income. This large an income gain in the standard trade and wages model would require an increase in imports of 6 full percentage points of GDP—almost a doubling of imports from non-advanced economies and about 10 times as large an increase in trade as Petri and Plummer (2012) project. The resulting wage changes would be absurdly huge—with non-college wages falling more than 6 percent. In the third column of Table 1, we use the TPP income gain estimated by Petri and Plummer (2012), but then choose an estimate of the ratio of income gains to import gains from another trade model (Anderson, Martin, and van der Mensbrugghe 2006). The Anderson, Martin, and van der Mensbrugghe (2006) model is notable for showing quite large gains from trade liberalization, due both to estimating a more ambitious liberalization scenario and to making modeling decisions that boost liberalization's income effect. But even though Anderson, Martin, and van der Mensbrugghe's (2006) estimates of the income effects of trade are large relative to those of most other models, they still yield a ratio of import changes to income changes of roughly 2.3:1, instead of the roughly 1.5:1 ratio in the Petri and Plummer (2012) model.
Table 1. Estimating wage effects consistent with TPP import projections
Columns (1)–(3) are based on the model in Bivens (2013); columns (4)–(5) are based on Autor et al. (2013).
(1) Change in total trade with LDCs, 1973–2013
(2) Estimated TPP effect, based on import/GDP ratio
(3) Estimated TPP effect, alternate import/income gain ratio
(4) Estimated TPP effect, based on import/GDP ratio
(5) Estimated TPP effect, alternate import/income gain ratio

                         (1)      (2)      (3)      (4)      (5)
Imports/GDP              4.9%     0.6%     0.9%     0.6%     0.9%
Overall income gains     0.3%     0.0%     0.4%     0.0%     0.4%
Implied wage effects:
  Non-college wages     -3.1%    -0.4%    -0.6%    -0.6%    -1.4%
  College wages          1.9%     0.2%     0.4%      —        —

Note: The first three columns are based on the methodology described in Bivens (2013). The model used to generate relative wage changes generates these changes, overall income gains, and import share simultaneously.
This means that the Petri and Plummer (2012) results on overall income growth and import share increases are inconsistent with this model's structure. To address this, we model the relationships between relative wages, overall income gains, and import shares two ways. In the first, we assume that the estimated TPP import share increase is correct, and in the second, we assume that the estimated TPP overall income gains are correct. The second column uses the import share as the forcing variable, while the third column uses the increase in overall income as the forcing variable. The third column uses the ratio between import gains and income gains estimated by Anderson, Martin, and van der Mensbrugghe (2006) in their estimate of gains from full merchandise trade liberalization, and then applies that ratio to the Petri and Plummer (2012) estimate of national income gains to get an implied import effect. The final two columns use the same measure of trade increase due to the TPP, but then apply the estimated effect of import flows on wages from Autor, Dorn, and Hanson (2013).

Source: EPI analysis of Bivens (2013); Petri and Plummer (2012); Anderson, Martin, and van der Mensbrugghe (2006); and Autor, Dorn, and Hanson (2013)

If we apply the 2.3:1 ratio (again, an extraordinarily ambitious ratio for trade models) to our standard trade and wages model to assess the import growth that would be needed to achieve a 0.4 percent gain in overall national income, this implies imports would rise by roughly 0.9 percent of GDP by 2025, or roughly $190 billion, not $124 billion, as Petri and Plummer (2012) suggest. A rise in imports of this size would lower non-college wages by roughly 0.6 percent in our standard model.

The last two columns of Table 1 draw on results from another study that aims to assess the impact of rising imports on American wages (Autor, Dorn, and Hanson 2013). This study finds that each $1,000 per worker increase in imports from China between 1990 and 2007 was associated with a 0.82 log-point decline in weekly earnings for non-manufacturing workers without a four-year college degree in the United States. The Petri and Plummer (2012) estimates of rising imports stemming from the TPP translate into roughly $750 per U.S. worker, implying a 0.6 log-point decline (which is essentially equivalent to a 0.6 percent decline) in non-college wages, right in line with estimates from the standard model surveyed in Bivens (2013). If we instead use the $275 billion increase in imports that makes the Petri and Plummer (2012) ratio of income gains to import increases consistent with Anderson, Martin, and van der Mensbrugghe (2006), this would imply a 1.4 percent decline in non-college wages.

If wage suppression stems from expanded trade, why do so many support the TPP?

It is striking how little public comment on the TPP has focused serious analytical attention on the wages issue. On the one hand, the role of currency provisions has occupied a lot of the TPP debate, and this is clearly a healthy development. The TPP is the first agreement in which the demand for a currency provision has muscled its way to the top of skeptics' concerns, and it's also the first major agreement that has been debated in the post–Great Recession period, when there is growing recognition that the demand-sapping aspects of chronic trade deficits could be a major problem in coming years.
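As a quick check on the arithmetic behind the last two columns of Table 1, here is a minimal sketch of that per-worker conversion. The implied workforce size is simply backed out of the text's own figures ($124 billion at roughly $750 per worker); it is an assumption of this sketch rather than a number taken from the underlying studies.

```python
# Minimal numerical check of the Autor-Dorn-Hanson-based columns of Table 1.
# The implied workforce size is backed out from the figures quoted above;
# the 0.82 coefficient is the one reported in Autor, Dorn, and Hanson (2013).

ADH_COEF = 0.82          # log-point wage decline per $1,000 of imports per worker
IMPORT_RISE_PP = 124e9   # Petri-Plummer (2012) import increase, dollars
PER_WORKER_PP = 750.0    # per-worker equivalent quoted in the text

workers = IMPORT_RISE_PP / PER_WORKER_PP   # ~165 million, implied (assumption)

def wage_decline(import_rise_dollars):
    """Approximate non-college wage decline in log points (~ percent)."""
    per_worker = import_rise_dollars / workers
    return ADH_COEF * per_worker / 1000.0

print(wage_decline(124e9))   # ~0.6  (Table 1, fourth column)
print(wage_decline(275e9))   # ~1.4  (Table 1, fifth column)
```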
On the other hand, there has been too much evasion of the wages issue. The Petri and Plummer (2012) estimates of the TPP's likely economic impact focus like a laser on net national income gains while not examining distributional consequences. This is the case even though one of the foundational models that informs the Petri and Plummer (2012) modeling effort does indeed include different factors of production. And even the research team (David Autor, David Dorn, and Gordon Hanson) that published academic articles on the wage-suppressing impact of less-developed countries' imports that we used in the previous section managed to avoid the wage issue entirely in a recent op-ed in favor of the TPP.

Autor, Dorn, and Hanson instead focused on the claim that TPP opponents reject the deal because of the "job-killing aspects of globalization." They say that manufacturing job loss associated with globalization is a "fait accompli" and argue that "much as we might like to return to 1970 when manufacturing comprised a quarter of U.S. non-farm employment, that's impossible without massive protectionist barriers."

Besides eliding the crucially important wage issue entirely, this is a straw-man argument about trade and jobs. Nobody argues that manufacturing's share of total employment should not have shrunk in recent decades. But the level of manufacturing employment fluctuated essentially between 17 and 19.5 million workers (depending on the phase of the business cycle) for 35 years between 1965 and 2000. Since 2000 it's down by more than 5 million workers (in 2014 manufacturing employment stood at 12.2 million, after bottoming out at 11.5 million in 2010). This decline since 2000 is driven largely by rising trade deficits and would indeed be substantially reversed if we addressed the currency management that drove those trade deficits.

Autor, Dorn, and Hanson then endorse the all-gain, no-pain claim offered in support of the TPP, claiming that it will not cause importable sectors to shrink but will allow exportables to expand (again, they frame the only "pain" from import competition as its effect on jobs, not wages). But as we've noted before, this all-gain, no-pain aspect of the TPP is almost surely a mirage. They write:

Further the TPP's lower trade barriers would barely affect import competition faced by US manufacturers. … Cutting back rock-bottom US manufacturing tariffs to zero for the remaining TPP countries would thus have negligible effects on US producers. These countries already enjoy unfettered access to US markets. But if the TPP has little downside for the US, what's the upside? … Successfully exporting information and computer services, where the US maintains substantial technological leadership, requires more than low tariffs. It also requires protecting patents against infringement and safeguarding business assets and revenues against expropriation by foreign governments. To the extent that Obama succeeds in enshrining these guarantees in the TPP, the agreement would give a substantial boost to US trade.

Finally, it's worth noting that lots of the export "opportunities" highlighted by proponents of the TPP—including in the passage above—are not necessarily opportunities for U.S. firms to export more; they are instead opportunities to use the TPP to ensure that foreign consumers pay more for U.S. exports. Treaties like the TPP routinely harmonize other nations' intellectual property laws to meet U.S. standards, which are very kind to U.S.
pharmaceutical and software companies. This essentially means other countries must now spend resources to ensure higher prices are paid by their own consumers to protect the monopoly profits of firms like Pfizer and Microsoft. It is hard indeed to see how this is a win for the wages of the vast majority of American workers. As Dean Baker (2015) points out, it's clearly a loss for other tradable sectors, since the increased prices foreign consumers have to pay for U.S. software and pharmaceuticals will crowd out potential demand for other U.S. tradables. But it's awfully easy to see how this is a loss for foreign consumers and a big win for capital owners in the United States. The traditional argument for expanded trade being "win-win" at the national level was that each country could enjoy cheaper prices for importables. But the logic above actually says the United States will benefit from the TPP by raising the price its trading partners must pay for some U.S. exports.

Summing up

The two biggest economic challenges facing the U.S. economy are (1) how to generate sufficient aggregate demand (spending by households, governments, and businesses) to ensure full employment and (2) how to stop the ever-rising inequality of wages and incomes that has smothered wage growth for the vast majority of workers. The TPP is relevant to both issues.

Absent a strong currency provision, it will do nothing directly to help close trade deficits and increase aggregate demand. This is a major problem. The chronic trade deficit has been dragging on demand for over a decade now, and signing a trade agreement without provisions to stop currency management that keeps the trade deficit from closing will make this issue harder to address in the future.

As regards wages and inequality, if the TPP leads to a reshuffling of domestic production toward exportable sectors that are capital-intensive and away from importable sectors that are labor-intensive, then it will exacerbate inequality. If it does not lead to such a reshuffling, then wage effects will be modest, but this raises the question of why we would bother to sign a trade agreement that did not lead to such a reshuffling of production. That is, after all, the entire point of trying to expand trade opportunities, and is the source of estimated net national gains from trade. Assurances that the TPP will be all gain, no pain are deeply disingenuous in this regard.

Often when pressed on the issue of American inequality, economists and policymakers will admit that such trade agreements could indeed be bad for large numbers of American workers, but profess their continued support for them on the grounds that these agreements are good for workers in other countries. This would be a fair rationale if it were true, but in the case of the TPP it seems hard to see how it could be. The largest gains to U.S. national income identified by American policymakers seem to be largely obtained through zero-sum increases in monopoly industry profits that come directly out of foreign households' incomes.

Finally, it's worth noting that there are many economic challenges facing the United States and the rest of the global economy that could benefit from global cooperation. For example, international cooperation to allow individual countries to effectively tax capital income (through either individual or corporate tax codes) without it simply fleeing to tax havens abroad would be a huge boon for national policymakers.
Another boon would be international action to ensure that some countries' efforts to raise the price of greenhouse gas emissions do not simply lead to "leakage" as carbon-intensive production is shipped to countries without effective measures to control emissions. International commercial negotiations that aimed to tackle these issues would be a big win for Americans and our trading partners. Instead, the TPP looks like it will just constitute one more step toward using commercial agreements to maximize three things: (1) the damage done through global integration to the wages of most American workers; (2) the rents earned by those holding a monopoly on intellectual property claims; and (3) the influence that the preferences of global economic elites have on the policymaking of American trading partners.

Endnotes

See Krugman (2013) and Summers (2014) for the most famous and clearest presentations of this hypothesis.

See Gagnon (2012) and Scott (2014) for estimates on the origins and consequences of currency management.

See Cline (1995), Spence (2011), Summers (2006), and Krugman (2008) for at least qualified endorsements of the view that globalization likely contributed to rising American inequality. See Bivens (2008) for a review of the academic evidence.

See DeLong (2015) and Autor, Dorn, and Hanson (2015) for variants of these arguments.

Note that in this formulation I'm including human capital—the excess returns earned by workers with better credentials and/or skills relative to others.

How do we know that the lower prices for imports and import-competing goods trump the higher prices for exportables in boosting overall price-adjusted incomes? Because we consume more importables than we produce (that's why we import them) and we produce more exports than we consume (that's why we export them). This means the average price of consumption items unambiguously falls due to trade, and hence price-adjusted national income increases. But this effect is much smaller than would be inferred by looking only at import price declines.

This balanced trade occurs in theory and in trade models estimating the impact of trade agreements. In real life, such balance will only occur if provisions to stop countries from engaging in currency management for competitive gain are also part of the trading system.

A very important point here is that labor-abundant and capital-abundant are relative terms. The United States is labor-scarce and capital-abundant relative to Mexico even though in raw terms there are more American workers than Mexican workers. But because the ratio of labor (think workers without credentials such as a four-year college degree) to capital (think both physical capital and workers with credentials such as four-year college degrees) is higher in Mexico than in the United States, Mexico is considered the labor-abundant country relative to the United States.

Note that one need not claim that relative factor abundance is the only determinant of trade flows. Lots of other things determine comparative advantage for particular industries—idiosyncratic technological differences, geography, returns to scale, etc. But so long as the entire bundle of imports and exports of a country is influenced by factor proportions, this logic holds. Also note that currency values can make a country like the United States a net importer even of goods that it naturally has a comparative advantage in. See Wood (2007) for compelling evidence on this.
Davis and Weinstein (2001) provide the most careful and comprehensive measure of the relative factor intensity of U.S. trade flows vis-à-vis the rest of the world and find clearly that U.S. imports are labor-intensive while exports are capital-intensive. See Blanchflower, Oswald, and Sanfrey (1996); Bivens (2006); Oswald (2008); and Martins (2009) for evidence on the empirical importance of rents and bargaining power in wage determination. Jayadev (2007) has identified a “bargaining channel” impact of globalization on the labor share of income and has found in a cross-country panel that increased financial globalization is associated with lower labor shares of income. Elsby, Hobijn, and Sahin (2013) have empirically identified a link between rising import penetration and reduced labor share of income in the U.S. economy. The Petri and Plummer (2012) results on the TPP that we will discuss later show exactly that exportable production rises while importable production falls as a result of the TPP. Trade agreements that result in increased productivity in foreign countries’ export sector while implicitly reducing productivity (by raising the cost of inputs if these include intellectual property held by Americans) in their importable sector will hence increase the U.S. terms of trade (the price of imports relative to the price of exports). This will make the reshuffling of U.S. domestic production that much greater, hence amplifying trade’s inequality-generating effects. Baker (2003) is among the first, and the most consistent, in pointing out this asymmetry in U.S. trade policy. Bilateral tariff cutting leads to foreign consumers facing cheaper prices for U.S. exports because of reduced tariffs. But in global markets, the price of U.S. exportables actually rises (by less than the tariff cut) because of the resulting increase in demand for U.S. exports. The production worker versus non-production-worker classification is more widely used, mostly for data availability reasons. See Feenstra and Hanson (1997) and Berman, Bound, and Griliches (1994) for examples of this. Production and non-supervisory workers represent roughly 80 percent of the private-sector workforce. Work by Anderson, Tang, and Wood (2006) actually implies the potential for an even smaller share of the workforce to see benefits from falling trading costs—only those well-credentialed workers who can deploy their human capital to help boost production in the poorer nations. See Bivens (2013) for a quick description of the model and methodology. Bivens (2007) lays out the longer arguments regarding this channel through which trade can impact wages. The 5-6 to 1 ratio of net gains to gross losses comes out of making standard assumptions about labor market parameters such as the elasticity of relative labor demand. See Rodrik (2013) for a long discussion of the ratio between the overall national gains from trade and the redistribution caused by falling trade costs. Petri and Plummer (2012) note that their estimates are increased by roughly 40 percent by a modeling assumption they make regarding the effect of the TPP in reducing fixed-cost barriers to trade that stimulate large increases in trade on the “extensive margin,” or, increases in trade flows associated with firms that did not previously export or import at all. 
The importance of the extensive margin in explaining rising trade flows is well-established, and a well-pedigreed theoretical literature (exemplified by Melitz 2003) shows that a trade-policy-induced reduction in fixed trading costs could indeed boost trade flows on the extensive margin. But Petri and Plummer (2012) do not identify specific fixed costs of trading that will actually be reduced by the TPP, and empirical work on what leads to large increases in trade on the extensive margin has shown that developments besides tariff-cutting have driven past increases, and that some episodes of tariff-cutting have not led to large extensive margin increases. In short, assuming a very large increase in extensive margin trade stemming from the TPP is a pretty aggressive assumption. In many of the models cited by Petri and Plummer (2013), increases in trade flows are biased toward the most productive firms, leading to a rise in economy-wide productivity that is absent from most trade models. It should be noted that if this domestic shift from low- to high-productivity firms is associated in any way with an increased use of capital vis-à-vis labor (and much research [like Sampson 2014 and Egger and Kreikemeir 2009] indicates that it might be), this additional channel of trade effects could lead to larger redistributions than implied by current models, as well as larger gains from trade. The assumption that increases the gains from trade is that tariff revenue is replaced with lump-sum taxes—replacing a distortionary tax (tariffs) with a non-distortionary tax (lump-sum taxes) that does not generally exist in the real world. The more ambitious liberalization scenario they model is complete liberalization (removal of all tariff and non-tariff barriers to trade) of merchandise trade in the world. Autor, Dorn, and Hanson (2013) examine changes in import penetration and wages across geographic areas in the United States to obtain their results. While they examine Chinese imports specifically, this is in large part because they are confident that the growth of Chinese imports into the United States was exogenous to U.S. economic developments. The Petri and Plummer (2012) estimates of income growth stemming from the TPP are by definition exogenous to U.S. economic developments, so the Autor, Dorn, and Hanson (2013) results seem likely to hold on this scale. It could be that Chinese imports skew more labor-intensive than imports from TPP countries, biasing up the Autor, Dorn, and Hanson (2013) results when applying them to the TPP import increases. Yet, Autor, Dorn, and Hanson (2013) note that their estimates of wage impacts could be biased down precisely because Chinese imports are exceptionally labor-intensive, and they find large disemployment effects of these as well. If the labor-intensity of Chinese imports leads to disproportionate job loss among the lowest-credentialed U.S. workers, this will actually attenuate their estimate of the wage impact. Autor, Dorn, and Hanson (2013) also find negative effects in non-manufacturing industries on the wages of college-educated workers. However, the effects are smaller than for non-college workers, and non-college workers actually even see negative wage impacts stemming from increased imports in manufacturing, contrary to their findings on college workers. See Autor, Dorn, and Hanson (2013). Again, if the “job killing” aspect refers to the demand-sapping effect of chronic trade deficits, then this is a prime reason to oppose a TPP with no currency provision. 
References Anderson, Edward, Paul Tang, and Adrian Wood. 2006. “Globalization, Co-operation Costs, and Wage Inequalities.” Oxford Economic Papers, vol. 58, no. 4, 569–595. Anderson, Kym, William Martin, and Dominique Van der Mensbrugghe. 2006. “Market and Welfare Implications of Doha Reform Scenarios.” Agricultural Trade Reform and the Doha Development Agenda, 333–399. Autor, David, David Dorn, and Gordon Hanson. 2013. “The China Syndrome: Local Labor Market Effects of Import Competition in the United States.” The American Economic Review, vol. 103, no. 6, 2121–2168. Autor, David, David Dorn, and Gordon Hanson. 2015. “Why Obama’s Key Trade Deal with Asia Would Actually Be a Good Thing for American Workers.” Washington Post, March 12. Baker, Dean. 2003. Professional Protectionists: The Gains from Free Trade in Highly Paid Professional Services. Center for Economic and Policy Research. Baker, Dean. 2015. “Getting It Wrong on Trade: TPP Is Not Good for Workers.” Beat the Press (Center for Economic and Policy Research blog), March 14. Berman, Eli, John Bound, and Zvi Griliches. 1994. “Changes in the Demand for Skilled Labor within U.S. Manufacturing: Evidence from the Annual Survey of Manufacturers.” The Quarterly Journal of Economics, vol. 109, no. 2, 367–397. Bivens, Josh. 2006. “Wages, Profits, and Rent‐Sharing in an Open Economy.” International Review of Applied Economics, vol. 20, no. 1, 69–83. Bivens, Josh. 2007. Globalization and American Wages: Past, Present, and Future. Economic Policy Institute, Working Paper #279. Bivens, Josh. 2008. Everybody Wins Except for Most of Us: What Globalization Really Teaches About Globalization. Washington, D.C.: Economic Policy Institute. Bivens, Josh. 2013. Using Standard Models to Benchmark the Costs of Globalization for American Workers Without a College Degree. Economic Policy Institute, Briefing Paper #354. Blanchflower, David, Andrew Oswald, and Peter Sanfey. 1996. “Wages, Profits, and Rent-Sharing.” The Quarterly Journal of Economics, vol. 111, no. 1, 227–251. Cline, William. 1995. Trade and Income Distribution. Institute for International Economics. Davis, Donald, and David Weinstein. 2001. “An Account of Global Factor Trade.” The American Economic Review, vol. 91, no. 5, 1423–1453. DeLong, Brad. 2015. “The Debate Over the Trans-Pacific Partnership: Focus.” Washington Center for Equitable Growth blog, March 11. Egger, Harmut, and Udo Kreickemeier. 2009. “Firm Heterogeneity and the Labor Market Effects of Trade Liberalization.” International Economic Review, vol. 50, no. 1, 187–216. Elsby, Michael, Bart Hobijn, and Aysegul Sahin. 2013. “The Decline of the U.S. Labor Share.” Brookings Papers on Economic Activity, no. 2, 1–63. Feenstra, Robert, and Gordon Hanson. 1997. “Foreign Direct Investment and Relative Wages: Evidence from Mexico’s Maquiladoras.” Journal of International Economics, vol. 42, no. 3, 371–393. Gagnon, Joseph. 2012. “Combating Widespread Currency Manipulation.” Policy Brief in International Economics, Peterson Institute for International Economics. Jayadev, Arjun. 2007. “Capital Account Openness and the Labour Share of Income.” Cambridge Journal of Economics, vol. 31, no. 3, 423–443. Krugman, Paul. 2008. “Trade and Wages, Reconsidered.” Brookings Papers on Economic Activity, no. 1, 103–154. Krugman, Paul. 2013. “Secular Stagnation, Coalmines and Larry Summers.” The Conscience of a Liberal (New York Times blog), November 16. Martins, Pedro. 2009. “Rent Sharing Before and After the Wage Bill.” Applied Economics, vol. 41, no. 
17, 2133–2151. Melitz, Marc. 2003. “The Impact of Trade on Intra‐Industry Reallocations and Aggregate Industry Productivity.” Econometrica, vol. 71, no. 6, 1695–1725. Petri, Peter, and Michael Plummer. 2012. The Trans-Pacific Partnership and Asia-Pacific Integration: Policy Implications. Peterson Institute for International Economics Policy Brief. Rodrik, Dani. 2013. “What Is Wrong (and Right) in Economics?” Blog post, May 7. Sampson, Thomas. 2014. “Selection into Trade and Wage Inequality.” American Economic Journal: Microeconomics, vol. 6, no. 3, 157–202. Scott, Robert. 2014. Stop Currency Manipulation and Create Millions of Jobs, With Gains Across States and Congressional Districts. Economic Policy Institute Briefing Paper #372. Spence, Michael. 2011. “The Impact of Globalization on Income and Unemployment: the Downside of Integrating Markets.” Foreign Affairs, July/August. Summers, Lawrence. 2006. “The Global Middle Cries Out for Reassurance.” Financial Times, October 26. Summers, Lawrence. 2014. “US Economic Prospects: Secular Stagnation, Hysteresis, and the Zero Lower Bound.” Business Economics, vol. 49, no. 2, 65–73. Wood, Adrian. 2007. “Heckscher-Ohlin in Theory and Reality.” Department of International Development, University of Oxford, Working Papers, 157.
Trade and Globalization

Why is President Obama making one last push for the TPP?

The White House is making one last push for passage of the Trans-Pacific Partnership agreement, most likely during the lame duck session of Congress, after the elections but before the end of the year. This is despite the fact that Democratic presidential nominee Hillary Clinton opposes the TPP, as did Bernie Sanders, her rival in the primary, and as do the majority of Democratic members of Congress.

Let's review the basic facts. Growing imports of goods from low-wage, less-developed countries, which nearly tripled from 2.9 percent of GDP in 1989 to 8.4 percent in 2011 (as shown in Figure A, below), reduced the wages of the typical non-college educated worker in 2011 by "5.5 percent, or by roughly $1,800—for a full-time, full year worker earning the average wage for workers without a four-year college degree," as shown by my colleague Josh Bivens. Overall, there are nearly 100 million American workers without a 4-year degree. The wage losses suffered by this group likely amount to a full percentage point of GDP—roughly $180 billion per year.

Crucially, trade theory and evidence indicate strongly that growing trade redistributes far more income than it creates. The modesty of net benefits from trade is highlighted by the U.S. International Trade Commission report that recently estimated that the TPP would generate cumulative net gains of $57.3 billion over the next 16 years, or less than $4 billion per year.

A tale of two speeches

This post originally appeared in Democracy.

This election will be different, not only because of the stark departure of Donald Trump's candidacy from any usual political convention, but also because the current economic debate is unlike any in recent memory. This was further elucidated by the plans each candidate laid out in Michigan this week. It is noteworthy, first of all, that both candidates have joined in calling for greater infrastructure spending, have abandoned the traditional approach toward trade and opposed the Trans-Pacific Partnership (TPP), have proposed subsidizing child care expenses, have highlighted wage stagnation, and have each claimed to be able to provide faster economic growth than the other.

It would be pointless, however, to delve into precise policy details without first commenting on the disturbing nature of the Trump candidacy. Among the least of his campaign's problems is that it fails to elaborate on any of its positions or provide any kind of science or data that would allow a proper assessment of its proposals. Trump has offered many broad ideas about taxes, but the details are strikingly few. Similarly, Trump's budget plans just don't add up: He wants more military spending, more infrastructure spending, and no cuts to Medicare or Social Security, along with huge tax cuts—all while claiming he would still move toward a balanced budget. Of course, most problematic is Trump's bigotry and misogyny, and the egregious character flaws he displays almost daily: authoritarianism, dishonesty, volatility, and a lack of compassion. But setting all that aside for the moment…

Free trade in moral hypocrisy

A version of this article appeared in the Globalist.

U.S. trade policy of the last 20 years, if not dead, is on life support. The economic case for the series of neoliberal trade deals since the North American Free Trade Agreement has collapsed in the wake of job losses, lower wages and shrinking opportunities for American workers.
Voters are hostile, and both candidates for President oppose the latest proposed trade pact—the Trans-Pacific Partnership. But neoliberal trade deals have brought enormous profits to America's multinational corporate investors. So, big business lobbyists and their champions in the Congress and the Administration are organizing to pass the TPP in the post-election lame duck session—regardless of who wins the election.

With their economic arguments discredited, they are now draping these trade and investment pacts with a mantle of moral superiority. American workers who complain are now told that they should be ashamed of themselves. Why? Because off-shoring their jobs helps workers in other countries who are even poorer. Paul Krugman tells his New York Times readers that they should support "open world markets…mainly because market access is so important to poor countries."

The Trump trade scam

Yesterday, presumptive Republican nominee Donald Trump gave a speech on trade, extensively citing EPI's research which shows that trade deficits as a result of NAFTA and other trade deals, as well as trade with China, have cost U.S. jobs and driven down U.S. wages. It's true that the way we have undertaken globalization has hurt the vast majority of working people in this country—a view that EPI has been articulating for years, and that we will continue to articulate well after November.

However, Trump's speech makes it seem as if globalization is solely responsible for wage suppression, and that elite Democrats are solely responsible for globalization. Missing from his tale is the role corporations and their allies have played in pushing this agenda, and the role the party he leads has played in implementing it. After all, NAFTA never would have passed without GOP votes, as two-thirds of the House Democrats opposed it.

Furthermore, Trump has heretofore ignored the many other intentional policies that businesses and the top 1 percent have pushed to suppress wages over the last four decades. Start with excessive unemployment due to Federal Reserve Board policies which were antagonistic to wage growth and friendly to the finance sector and bondholders. Excessive unemployment leads to less wage growth, especially for low- and middle-wage workers. Add in government austerity at the federal and state levels—which has mostly been pushed by GOP governors and legislatures—that has impeded the recovery and stunted wage growth. There's also the decimation of collective bargaining, which is the single largest reason that middle class wages have faltered. Meanwhile, the minimum wage is now more than 25 percent below its 1968 level, even though productivity since then has more than doubled. Phasing in a $15 minimum wage would lift wages for at least a third of the workforce. The most recent example is the effort to overturn the recent raising of the overtime threshold that would help more than 12 million middle-wage salaried workers obtain overtime protections.

Brexit: The end of globalization as we know it?

The British vote to leave the European Union is a watershed event—one that marks the end of an era of globalization driven by deregulation and the ceding of power over trade and regulation to international institutions like the EU and the World Trade Organization. While there were many contributing factors, the 52 percent vote in favor of Brexit no doubt in part reflects the fact that globalization has failed to deliver a growing standard of living to most working people over the past thirty years.
Outsourcing and growing trade with low-wage countries—including recent additions to the EU such as Poland, Lithuania, and Croatia, as well as China, India and other countries with large low-wage labor forces—have put downward pressure on wages of the working class. As Matt O'Brien notes, the result has been that the "working classes of rich countries—like Brexit voters—have seen little income growth" over this period.

The message that leaders in the United Kingdom, Europe, and indeed the United States should take away from Brexit is that the time has come to stop promoting austerity and business-as-usual trade deals like the Trans-Pacific Partnership (and the now dead Transatlantic Trade and Investment Partnership) and to instead get serious about rebuilding manufacturing and an economy that works for working people.

Conservative austerity policies in Britain over the past two decades, which have slashed government spending and limited government's ability to deliver public services and support job creation, fueled the anger towards elites that encouraged Brexit. At the same time, the neoconservative, anti-regulatory views of public officials in Brussels—who are disdainful of government intervention in the economy and who consistently pushed for the "liberalization of labor markets" and other key elements of the neoconservative model—left Europe unprepared for the Great Recession. It's no surprise, then, that there has been a revolt against the EU.

When the financial crisis hit in 2008, EU authorities, especially banking officials in Germany and other wealthy countries that have a dominant influence over the European Central Bank, reacted by blaming public officials in Greece, Spain, Portugal and other countries hardest hit by the crisis. The budget cuts they demanded led to further contractions in spending and soaring unemployment which still persists in much of southern Europe, putting further downward pressure on employment in the UK and setting the stage for widespread populist revolts from the left and right that have gained traction across much of Europe.

ITC study shows minimal benefits and downplays potentially high costs of Trans-Pacific Partnership

Yesterday, the U.S. International Trade Commission (ITC) released a long-awaited report on the projected economic impacts of the TPP agreement. The report is remarkable for its frank estimates of the costs of the agreement, and the minimal benefits it identifies. Overall, the ITC projects that by 2027, the TPP will increase U.S. exports to the world by $27.2 billion (1.0 percent, as shown in Table 2.2) and U.S. imports from the world by $48.9 billion (1.1 percent), increasing the U.S. global trade deficit by $21.7 billion. All else equal, this rise in the trade deficit would put downward pressure on U.S. GDP.

Nonetheless, the report concludes that over the next 16 years, the agreement will increase U.S. national income by $57.3 billion, or 0.23 percent. This GDP gain stems largely from the ITC's adoption of the standard full-employment assumption in modeling the TPP's effects. There may have once been a time when such an assumption was warranted, but it seems highly inappropriate to apply to an economy that has been operating beneath full employment for at least 8 years and counting.
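The bookkeeping behind these ITC projections is straightforward; the sketch below reproduces it, with an assumed trend growth rate of roughly 2 percent a year used only to translate the 0.23 percent income gain into "days of growth" (anticipating the point made next).

```python
# Bookkeeping behind the ITC projections quoted above. The export, import, and
# income figures are the ITC's as reported in the text; the ~2 percent trend
# growth rate is an assumption used only for the "days of growth" translation.

export_gain = 27.2e9    # projected rise in U.S. exports to the world by 2027
import_gain = 48.9e9    # projected rise in U.S. imports from the world by 2027
print((export_gain - import_gain) / 1e9)         # -21.7: the trade deficit widens

income_gain_pct = 0.23  # projected gain in national income, percent
trend_growth_pct = 2.0  # assumed annual growth rate (illustrative only)
print(365 * income_gain_pct / trend_growth_pct)  # ~42 days' worth of growth
```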
Dean Baker notes that even if the too-rosy GDP estimate were correct, it means that, "as a result of the TPP, the country will be as wealthy on January 1, 2032 as it would otherwise be on February 15 2032."

Worse yet, the ITC has a terrible record of forecasting the actual impacts of trade and investment deals, both overall and at the industry level. There is little reason to believe that this study will yield better results than past ITC efforts if the agreement is approved and implemented. In practice, whatever the ITC forecasts, U.S. trade and investment deals have been near-inevitably followed by growing trade deficits and downward pressure on the wages of U.S. workers. There is every reason to expect that the TPP agreement will reinforce these trends.

U.S.-Korea trade deal resulted in growing trade deficits and more than 95,000 lost U.S. jobs

(This blog post is an update to a post from March 30, 2015.)

When the U.S.-Korea Free Trade Agreement (KORUS) was passed just over four years ago, President Obama said that the agreement would support 70,000 U.S. jobs. This claim was supported by a White House fact sheet that claimed that the KORUS agreement would "increase exports of American goods by $10 to $11 billion…" and that they would "support 70,000 American jobs from increased goods exports alone." Things are not turning out as predicted. Far from supporting jobs, growing goods trade deficits with Korea have eliminated more than 95,000 jobs between 2011 and 2015.

Expanding exports alone is not enough to ensure that trade adds jobs to the economy. Increases in U.S. exports tend to create jobs in the United States, but increases in imports lead to job loss—by destroying existing jobs and preventing new job creation—as imports displace goods that otherwise would have been made in the United States by domestic workers. Thus, it is changes in trade balances—the net of exports and imports—that determine the number of jobs created or displaced by trade and investment deals like KORUS.

In the first four years after KORUS took effect, there was absolutely no growth in total U.S. exports to Korea, as shown in the figure below. Imports from Korea increased $15.2 billion, an increase of 26.8 percent. As a result, the U.S. trade deficit with Korea increased $15.1 billion between 2011 and 2015, an increase of 114.6 percent, more than doubling in just four years.

U.S.-Korea trade, 2011–2015 (billions of dollars)

| Year | Exports | Imports | Trade balance |
| 2011 | $43.5 | $56.7 | -$13.2 |
| 2012 | $42.3 | $58.9 | -$16.6 |
| 2013 | $41.7 | $62.4 | -$20.7 |
| 2014 | $44.5 | $69.5 | -$25.0 |
| 2015 | $43.5 | $71.8 | -$28.3 |

Change, 2011–2015 (billions of dollars and percent): Exports: $0 (0.0%); Imports: +$15.2 (26.8%); Trade balance: -$15.1 (114.6%)

Source: Author's analysis of U.S. International Trade Commission Trade DataWeb

Tired of economists' misdirection on globalization

An interesting story in the New York Times this morning looks at the effect that job losses from trade have had on people's political views. It's no surprise that voters on the losing end of globalization are disenchanted with the political mainstream, as the Times puts it. They have every right to be. But I'm tired of hearing from economists about the failure to support workers dislocated by globalization as a cause of anger and the policy action the elite somehow mistakenly forgot. Ignoring the losers was deliberate.
In 1981, our vigorous trade adjustment assistance (TAA) program was one of the first things Reagan attacked, cutting its weekly compensation payments from a 70 percent replacement rate down to 50 percent. Currently, in a dozen states, unemployment insurance—the most basic safety net for workers—is being unraveled by the elites. Only about one unemployed person in four receives unemployment compensation today. I’m also getting tired of hearing that job losses from trade are the result of the U.S. economy “not adjusting to a shock.” Trade theory tells us that globalization’s impact is much greater on the wages of all non-college grads (who are between two-thirds and three-quarters of the workforce, depending on the year), not just a few dislocated manufacturing workers. The damage is widespread, not concentrated among a few. Trade theory says the result is a permanent, not temporary, lowering of wages of all “unskilled” workers. You can’t adjust a dislocated worker to an equivalent job if good jobs are not being created and wages for the majority are being suppressed. Let’s not pretend. Clarification on trade and American workers: right criticism, poorly targeted It’s been pointed out to me that yesterday’s blog post about a story by NPR’s Chris Arnold targeted too much ire at Arnold himself rather than the phenomenon he was reporting about. I think that’s probably right, and so I apologize to him for that. I was using Arnold’s story as a jumping off point for a discussion of a larger issue, and should have made that more clear. I do think my larger points about the substance of the topic under debate hold. The damage done by trade to American workers is consistently underestimated and is often treated as a surprise when it shouldn’t be—it’s completely the prediction of standard trade theory. To the degree that Arnold’s story helps take this “surprise” excuse off the table for future debates over trade, it’s doing a service. Some quick notes on why I think all of this is important, however. This is Arnold’s first paragraph: Economists for decades have agreed that more open international trade is good for the U.S. economy. But recent research finds that while that’s still true, when it comes to China, the downside for American workers has been much more painful than the experts predicted. I think Arnold reports this exactly right. Experts continue to portray the downside of expanded trade for American workers as having turned out to be unexpectedly large, but they are wrong to be surprised. Downward pressure on a large majority of American workers’ wages is completely predicted by mainstream economic theory. But I should have made clearer where my criticism here was aimed. It’s not a puzzle if American workers oppose trade agreements Yesterday, NPR’s Chris Arnold wrote the latest in what has become a very long line of “explainer” pieces about economic globalization and the presidential campaigns. Nearly all of these pieces seek to resolve an alleged puzzle: nearly all reputable economists argue that policy efforts to boost trade are good for the U.S. economy, yet many (if not most) American workers strongly oppose trade agreements signed in recent decades. Arnold puts forward a pretty common solution to this alleged puzzle: “trade’s benefits are diffuse, but the pain is concentrated.” In this view, the only losers from trade are those workers directly displaced by imports. Every other consumer in the economy benefits from lower prices. 
But because the losers are small and concentrated, they can organize to oppose trade agreements. And while the winners are numerous and widespread, the benefits (e.g., slightly more affordable clothing and DVD players) are hard to notice, so no one organizes to support these agreements. This is a common way of describing the effects of using policy to expand trade, but on the economics, it is certainly not correct. In textbook trade models, using policy levers (lower tariffs, for example) to boost trade with poorer countries will indeed cause total national income in the United States to rise. But these same textbooks also predict that the resulting expansion of trade will redistribute far more income than it creates. And the direction of this redistribution is upward. So it is perfectly possible to have policy efforts to expand trade lead to higher national income yet leave the majority of workers worse off. Importantly, the losers in these textbook models are not just workers directly displaced by imports—they’re all the workers in the entire economy who resemble the trade-displaced in terms of education and credentials. U.S. trade policy: Populist anger or out-of-touch elites? This post originally appeared in The Globalist. The presidential primary campaigns of both political parties have exposed widespread voter anger over U.S. global trade policies. In response, hardly a day has recently gone by without the New York Times, the Washington Post and other defenders of the status quo lecturing their readers on why unregulated foreign trade is good for them. The ultimate conclusion is always the same—that voters should leave complicated issues like this to those intellectually better qualified to deal with them. Trade experts, according to Binyamin Appelbaum of the Times have been “surprised” at the popular discontent over this issue. Their surprise only shows how disconnected the elite and the policy class that supports it is from the way most people actually experience the national economy. The United States has always been a trading nation. But until the 1994 North American Free Trade Agreement, trade policy was primarily an instrument to support domestic economic welfare and development. A lop-sided deal: Investment vs. jobs Starting with NAFTA, pushed through not by a Republican president, but by the Bill Clinton in 1994, it became a series of deals in which profit opportunities for American investors were opened up elsewhere in the world in exchange for opening up U.S. labor markets to fierce foreign competition. As Jorge Castañeda, who later became Mexico’s foreign minister, put it, NAFTA was “an agreement for the rich and powerful in the United States, Mexico and Canada, an agreement effectively excluding ordinary people in all three societies.” For 20 years, leaders of both parties have assured Americans that each new NAFTA-style deal would bring more jobs and higher wages for workers, and trade surpluses for their country. It was, they were told, an iron law of economics. Putting things in perspective: Bernie Sanders, trade, and poor countries’ access to U.S. markets Yesterday, Zack Beauchamp updated a piece he had written a while back that claimed Bernie Sanders’ trade agenda could prove ruinous to the world’s poorest people. I think Beauchamp really overstates this, for a couple of reasons. First, only an expansive reading of some of Sanders’ rhetorical excesses would lead one to think he would pursue policies that radically restricted the access to U.S. 
markets currently enjoyed by our poorer trading partners' exports. It is not an uncommon reaction to criticisms of today's global trade regime to assume that this market access would clearly be significantly reduced if this status quo were overturned, but that's far from obvious.

Second, the evidence marshalled on behalf of trade liberalization's positive benefits for development is entirely about the benefits of unilateral, domestic liberalization. That is, it is about the benefits a country gains from cutting its own tariffs, not about the ease of access that it has to the U.S. market. This evidence is completely silent on the benefits of access to the U.S. market. Economic theory teaches that the benefits of unilateral liberalization completely dwarf those of market access, and there is not much evidence to suggest that this theory is wrong.

Despite seemingly stable U.S. trade balance, rapidly growing trade deficits in non-oil goods could lead to American job losses

The U.S. Census Bureau reported that the annual U.S. trade deficit in goods and services increased from $508.3 billion to $531.5 billion from 2014 to 2015, an increase of $23.2 billion (4.6 percent). The slow growth of the overall U.S. trade deficit hides massive underlying shifts in the trade deficit in petroleum products (which declined $157.3 billion, or 55.3 percent), compared with the trade deficit in all other goods, which increased from $547.7 billion to $673.1 billion—an increase of $125.4 billion, or 22.9 percent. In other words, the sharp decline in the petroleum trade deficit masked a large increase in the non-oil goods trade deficit, which could result in substantial U.S. job losses in the future.

Most U.S. goods trade consists of manufactured products. In 2015, manufacturing constituted 86.9 percent of total U.S. goods trade, and 94.3 percent of total trade in non-oil goods. Because manufacturing is such a large employer, rapidly growing trade deficits in non-oil goods are a threat to future employment in this sector. The trade deficit in manufactured products rose to 3.8 percent of GDP, only 0.7 (seven-tenths) of a percentage point below the maximum reached in 2005. The manufacturing trade deficit also reached a record high of $681 billion in 2015, well in excess of the previous peak of $619.7 billion in 2007. Rapidly growing manufacturing trade deficits were responsible for most, if not all, of the 4.8 million U.S. manufacturing jobs lost between December 2000 and December 2015, and there's every reason to believe that these job losses will continue if the non-oil trade deficit keeps growing.

This analysis is primarily concerned with shifts in goods trade. The U.S. balance of trade in services declined slightly in 2015, falling from a trade surplus of $233.1 billion in 2014 to $227.4 billion in 2015. Trade in goods continues to dominate overall trade flows for the United States—trade in services totaled only 24.1 percent of total U.S. goods and services trade in 2015.

Closing loopholes in Buy American Act could create up to 100,000 U.S. jobs

By closing loopholes in the Buy American Act, the 21st Century Buy American Act will increase demand for U.S. manufactured goods and create at least 60,000 to 100,000 U.S. jobs. The Buy American Act requires "substantially all" direct purchases by the federal government (of more than $3,000) "be attributable to American-made components." However, there are a number of exclusions or loopholes in the Buy American Act.
The single largest is an exception for "goods that are to be used outside of the country," and the 21st Century Buy American Act includes provisions to close it. In addition, current regulations interpreting the Buy American Act state that "at least 50 percent of the cost must be attributable to American content," which can reduce net demand for American-made content.

Between 2010 and 2015, the "goods used outside of the country" exception was used to purchase $42.3 billion in goods that were manufactured outside of the United States, an average of $8.5 billion per year.1 The 21st Century Buy American Act would require most or all of those goods to be U.S. made, increasing demand for U.S. manufactured goods by up to $8.5 billion per year.2

Although labor markets have improved in the United States since the recession, there remains substantial slack, and 2.6 million jobs were still needed to catch up with growth in the potential labor force in September 2015. I assume, based on recent research by my colleague Josh Bivens (Table 5), that wages earned by new manufacturing workers will support a macroeconomic multiplier of 1.6 in the domestic economy over the next year.3 I also assume, based on total GDP and employment levels in 2014, that a 1 percent increase in GDP adds 1.3 million jobs to the economy. Thus, the $8.5 billion increase in spending on domestic manufactured goods (with 100 percent domestic content) would increase GDP by $13.6 billion (0.08 percent), creating up to 100,000 new jobs in the domestic economy.

Failure to stem dollar appreciation has put manufacturing recovery in reverse

This week, President Obama announced the completion of negotiations on the proposed Trans-Pacific Partnership (TPP). The TPP, which is likely to drive down middle-class wages and increase offshoring and job loss, has been widely criticized by leading members of Congress from both parties. Hillary Clinton, Bernie Sanders, and other presidential candidates have announced their opposition to the deal.

Meanwhile, U.S. jobs and the recovery are threatened by a growing trade deficit in manufactured products, which is on pace to reach $633.9 billion in 2015, as shown in Figure A, below. This deficit exceeds the previous peak of $558.5 billion in 2006 (not shown) by more than $75 billion. The increase in the manufacturing trade deficit in 2015 alone will amount to 0.5 percent of projected GDP, and will likely reduce projected growth by even more as manufacturing wages and profits are reduced.

U.S. manufacturing trade deficit, 2007–2015* (billions of dollars)

| Year | U.S. manufacturing trade deficit (billions of dollars) |
| 2007 | 532.222 |
| 2008 | 456.240 |
| 2009 | 319.471 |
| 2010 | 412.740 |
| 2011 | 440.602 |
| 2012 | 458.692 |
| 2013 | 449.276 |
| 2014 | 515.131 |
| 2015 | 633.915 |

* Estimated, based on year-to-date trade data through August 2015

Source: Author's analysis of U.S. International Trade Commission Trade DataWeb

The growth of the manufacturing trade deficit is starting to have an impact on manufacturing employment, which has lost 27,300 jobs since July 2015, as shown in Figure B, below. Growing exports support U.S. jobs, but increases in imports cost jobs, so even if overall exports are growing, trade deficits hurt U.S. employment—especially in manufacturing, because most traded goods are manufactured products.
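Returning briefly to the Buy American estimate above, here is a minimal sketch of that jobs arithmetic. The spending shift, multiplier, and jobs-per-percent-of-GDP figures are the ones quoted in the text; the roughly $17 trillion GDP level for 2014 is an assumption used only to convert dollars into a share of GDP.

```python
# Rough sketch of the Buy American jobs arithmetic quoted above. The spending
# shift, multiplier, and jobs-per-percent-of-GDP figures come from the text;
# the ~$17 trillion 2014 GDP level is an assumption used only for scaling.

spending_shift = 8.5e9     # added annual demand for U.S. manufactured goods ($)
multiplier = 1.6           # macroeconomic multiplier assumed in the text
gdp_2014 = 17.0e12         # approximate 2014 U.S. GDP (assumption)
jobs_per_pct_gdp = 1.3e6   # jobs added per 1 percent of GDP, per the text

gdp_boost = spending_shift * multiplier     # ~$13.6 billion
gdp_boost_pct = 100 * gdp_boost / gdp_2014  # ~0.08 percent of GDP
print(gdp_boost_pct * jobs_per_pct_gdp)     # ~104,000, i.e. "up to 100,000" jobs
```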
Although the United States has regained more than 800,000 manufacturing jobs since 2010, the low point of the manufacturing collapse after the Great Recession, overall manufacturing employment is still 1.4 million jobs lower than it was in December 2007.

By Devaluing Its Currency, China Exports Its Unemployment

On Tuesday, China announced the largest one-day devaluation of its currency in more than two decades. Make no mistake—although authorities claimed this policy was a shift toward more market-driven movements, the value of the currency is tightly controlled by China's central bank. By choosing to devalue its currency, Chinese officials are trying to solve their domestic economic problems—including a massive property bubble, a collapsing stock market, and a slowing domestic economy—by exporting unemployment to the rest of the world.

The United States, which is the largest single market for China's exports, will be hardest hit by the devaluation of the yuan. Manufacturing, which was already reeling from the 20 percent rise in the value of the dollar against major currencies in the last 19 months, can expect to see even faster growth in imports from China. The devaluation of the yuan (also known as the renminbi) will subsidize Chinese exports, and act like a tax on U.S. exports to China, and to every country where we compete with China, which is already the largest exporter in the world. It will provide rocket fuel for their exports, transmitting unemployment from China directly to the United States and other major consumers of imports from China. Already in 2015, the U.S. manufacturing trade deficit has increased 22 percent, which will continue to hold back the recovery in U.S. manufacturing, which has experienced no real growth in output since 2007.

The Chinese devaluation highlights the importance of including restrictions on currency manipulation in trade and investment deals like the proposed Trans-Pacific Partnership (TPP), which includes a number of well-known currency manipulators. Millions of jobs are at stake if a clause to prohibit currency manipulation is not included in the core of this "twenty-first century trade agreement." This devaluation by China, which is not a member of the TPP, will raise pressure on other known currency manipulators that are in the agreement—such as Japan, Malaysia, and Singapore—to devalue their currencies, which could more than offset any benefits obtained under the terms of the TPP.

Hall of Shame: 13 Democrats Who Voted to End Debate on Fast-Track Trade Legislation

Fast-track trade legislation is the first step in the process of greasing the skids for the proposed Trans-Pacific Partnership (TPP), and any other trade deal proposed by this president or any other for the next six years. Last month, the 13 Democrats listed in the table below voted to end debate on fast track (Trade Promotion Authority, or TPA), allowing a final vote to take place. There are strong arguments against the TPP, which will increase inequality and hurt the middle class.

Trade and jobs gained and lost in selected states

| Senator | State | State employment (in 2011) | Net U.S. jobs displaced due to goods trade with China, 2001–2013 | Jobs lost as a share of employment | Net U.S. jobs created by eliminating currency manipulation, low-impact scenario* | Jobs gained as a share of employment | Net U.S. jobs created by eliminating currency manipulation, high-impact scenario* | Jobs gained as a share of employment |
| Feinstein, Dianne | California | 16,426,700 | 564,200 | 3.4% | 258,400 | 1.6% | 687,100 | 4.2% |
| Bennet, Michael | Colorado | 2,492,400 | 59,400 | 2.4% | 38,300 | 1.5% | 95,700 | 3.8% |
| Carper, Tom; Coons, Chris | Delaware | 420,400 | 5,500 | 1.3% | 6,700 | 1.6% | 16,200 | 3.9% |
| Nelson, Bill | Florida | 8,101,900 | 115,700 | 1.4% | 110,200 | 1.4% | 274,000 | 3.4% |
| McCaskill, Claire | Missouri | 2,742,100 | 44,200 | 1.6% | 47,200 | 1.7% | 116,800 | 4.3% |
| Shaheen, Jeanne | New Hampshire | 684,800 | 22,700 | 3.3% | 12,700 | 1.9% | 31,300 | 4.6% |
| Heitkamp, Heidi | North Dakota | 370,800 | 2,400 | 0.6% | 7,400 | 2.0% | 17,000 | 4.6% |
| Wyden, Ron | Oregon | 1,710,300 | 62,700 | 3.7% | 31,300 | 1.8% | 78,600 | 4.6% |
| Kaine, Tim; Warner, Mark | Virginia | 3,860,100 | 63,500 | 1.6% | 52,500 | 1.4% | 131,300 | 3.4% |
| Cantwell, Maria; Murray, Patty | Washington | 3,118,000 | 55,900 | 1.8% | 61,300 | 2.0% | 140,300 | 4.5% |
| Total jobs at risk in these states |  | 39,927,500 | 996,200 | 2.5% | 626,000 | 1.6% | 1,588,300 | 4.0% |

* The low-impact scenario assumes ending currency manipulation would reduce the trade deficit by $200 billion; the high-impact scenario assumes a $500 billion reduction in the trade deficit. The table shows the hypothetical change in 2015, three years after implementation.

Source: Author's analysis of Scott 2014a and Scott 2014b

These 13 Democrats come from 10 states that lost 996,200 jobs due to growing trade deficits with China between 2001 and 2013, nearly one-third of the 3.2 million jobs eliminated by China trade in the United States in that period. States like Oregon, California, and Colorado were among the hardest hit states in the country. But they are also home or host to Nike (Oregon), Lockheed Martin (Colorado), and Apple, Google, Intel and other Goliaths of Silicon Valley (California). New Hampshire serves as a bedroom community for many electronics industry workers in nearby Massachusetts, and has lost hundreds of thousands of jobs in recent decades in textiles, shoemaking and small machine making.

Former Labor Secretaries and EPI Board Members F. Ray Marshall and Robert Reich oppose TPA and TPP

In a jointly authored statement, former EPI board members and U.S. Labor Secretaries F. Ray Marshall and Robert Reich called on Congress to reject Trade Promotion Authority and the Trans-Pacific Partnership because that deal will "harm America's working people." Despite this statement, the House today approved a truncated version of Fast Track (TPA) that excludes funding for Trade Adjustment Assistance for displaced workers. But passing TPA without TAA is a risky gamble because many Democrats have demanded that the two move simultaneously.

In their letter to Congress, Marshall and Reich conclude that "Trade can work for working Americans, but only when Americans can effectively know about what is in a trade deal, and only when a trade deal is effectively designed to create more good jobs in America. This Fast Track mechanism toward the Trans Pacific Partnership is a bad deal for America."

TPP Panic: Playing the China Card

Stung by the sudden derailment in the House of Representatives of their rush to pass the Trans-Pacific Partnership (TPP), the Washington establishment has wasted no time in warning us of the terrifying menace of a rising China, should the trade deal not be put back on track next week.
Echoing previous remarks by the president, House Speaker John Boehner warned "we're allowing and inviting China to go right on setting the rules of the world economy." Pro-TPP Democratic Congressman Jim Himes (D-Conn.) said that Friday's vote "told the world that we prefer that China set the rules and values that govern trade in the Pacific." These remarks are both fatuous and revealing of how weak the case for the TPP is, even among its own promoters. As a matter of obvious fact, the rules of the world economy within which the Chinese have been taking the United States to the economic cleaners were not set in China. They were set in Washington, DC by our own American policymakers and fixers who in one way or another were, and still are, in the pay of multinational corporate investors. Under Ronald Reagan, the two Bushes, Bill Clinton and now Barack Obama, the United States government designed and imposed the global model of "free trade" which promoted the shift of investment from the United States to parts of the world where labor is cheap, the environment is unprotected, and the public interest is even more up for sale than it is here.

The Politics of Fast Track: Exports, Imports and Jobs

The House is expected to vote this week on fast track authority to negotiate two massive trade deals, including the proposed Trans-Pacific Partnership (TPP) and the Transatlantic Trade and Investment Partnership (T-TIP). The Wall Street Journal noted on Sunday that "the decade's old argument that major trade agreements boost both exports and jobs at home is losing its political punch, even in some of the country's most export-heavy Congressional Districts." One reason is that counting exports is less than half the story. While it's true that exports support domestic jobs, imports reduce demand for domestic output and cost jobs. As I've written before, trade is a two-way street, and talking about exports without considering imports is like keeping score in a baseball game by counting only the runs scored by the home team. It might make you feel good, but it won't tell you who's winning the game. The Journal story included a table showing the ten congressional districts with the biggest gains in exports since 2006. The authors expressed surprise that only three of the ten members representing these districts have announced support for fast track (trade promotion authority, or TPA). Looking at jobs supported and displaced by trade in these districts provides a very different picture, which helps explain why supporters of fast track are having trouble rounding up votes in the House. In a recent study, I estimated the number of jobs supported and displaced by China trade between 2001 and 2013. We used the results of this study to examine the impacts of China trade on jobs by congressional district between 2006 and 2013—the period covered in the Wall Street Journal story. The results for the top ten districts identified by the Journal are shown in the following table.

What Can the TPP Offer Canada? Not Much.

When Canada joined the Trans-Pacific Partnership talks in 2012 it did so somewhat reluctantly and, like Mexico, with strings attached. One of them was that Canadian negotiators could not reopen any closed text. So, in this sense, it's been a bit of a raw deal for the Obama administration's NAFTA partners from the beginning. Canada's bigger business lobbies called it a defensive move, to "secure" NAFTA supply chains rather than offering any meaningful market access elsewhere.
The Canadian public have almost no idea what's going on. But as TPP countries appear to be close to the end game, people here are starting to ask the obvious questions: what's in it for us, and what will we have to give up to get it? The answers are equally obvious if you look past the hype: not much, and quite a lot. To begin with, Canada already has free trade deals in place with four of the larger TPP countries (Peru, Chile, the United States, and Mexico), and tariffs on trade with the others—representing 3 percent of imports and 5 percent of exports—are very low. Canada has a trade deficit with these non-FTA countries of $5 to $8 billion annually, and 80 percent of Canada's top exports to these countries are raw or semi-processed goods (e.g., beef, coal, lumber), while 85 percent of imports are of higher value-added goods (e.g., autos, machinery, computer and electrical components). This Canadian trade deficit will likely widen if the TPP is ratified, as the United States found two years into its FTA with South Korea. Tariff removal through the TPP is therefore likely to worsen the erosion of the Canadian manufacturing sector and jobs that has been taking place since NAFTA—a result, in part, of the limits free trade deals place on performance requirements and production-sharing arrangements. NAFTA-driven restructuring did not even have the promised effect of raising Canadian productivity levels, which languish at 70 percent of U.S. levels twenty years into the agreement. Instead, Canada has experienced greater corporate concentration, a significant decline in investment in new production, and rising inequality. In short, there is little trade expansion upside for Canada in this negotiation. And yet the Canadian public will eventually be asked to make considerable public policy concessions to see the TPP through. As many U.S. commentators have argued, the trade impacts of TPP are far less important than the serious concerns it raises about excessive intellectual property rights, regulatory harmonization, and the perpetuation of a controversial investor-state dispute settlement (ISDS) regime that has been extremely damaging to democratic governance globally, not to mention quite humiliating for Canada.

Yes, Trade Deficits Do Indeed Matter for Jobs

The issue of currency management by U.S. trading partners that increases U.S. trade deficits has become a front-burner issue in debates over the proposed Trans-Pacific Partnership (TPP). The discussion about whether or not trade deficits can really affect U.S. employment, however, occasionally gets very muddled. Here's a quick attempt to un-muddle a couple of different issues.

Trade deficits and overall employment

Trade deficits occurring when the U.S. economy is stuck below full employment and at the zero lower bound (ZLB) on short-term interest rates are a drag on economic growth and overall employment, period. And this describes the U.S. economy today, so a reduction in the trade deficit in the next couple of years spurred by a reversal of trading partners' currency management would boost growth and jobs. The logic is simple—exports boost demand for U.S. output while imports reduce demand for U.S. output. When net exports (exports minus imports) fall, then aggregate demand is reduced. Trade deficits are the mirror image of capital inflows into the U.S. economy, and there are times when these capital inflows can reduce domestic interest rates and boost economic activity, providing an offset to the demand-drag caused by trade flows.
Today is not one of those times—further downward pressure on already rock-bottom interest rates (particularly since most of these inflows go into U.S. Treasuries) does very little to boost domestic investment to counteract the demand drag from trade flows.

More Notes on the Gains From Trade and Who Gets Them

The New York Times' Binyamin Appelbaum wrote an excellent piece yesterday on the costs and benefits of globalization. But because I've thought a lot about this topic, I have some hobby-horse issues concerning how economists characterize how large the gains from trade are and how those gains and losses are distributed. Put simply, the overall net benefits of trade are much smaller than commonly advertised, but the regressive redistribution trade causes is considerable. First, on the gains from trade policy (i.e., how much we should expect national income to rise if we sign trade agreements), Appelbaum refers to a piece from the Peterson Institute for International Economics claiming that trade liberalization added 7.3 percent of GDP to American incomes by 2005—about $9,000–$10,000 per American household. This is just not true. It's a wildly inflated number that should not be in the policy debate (and if you need much smarter and better-credentialed people making the same point—here's Dani Rodrik). This number is an effort to bully people into going along with today's trade agreements by making them think the stakes are utterly enormous. In fact, even if it were correct (again, it's not), this study would be irrelevant to today's trade policy debates because the sum total of economic gains from all post-1982 trade agreements (this includes NAFTA, the completion of the General Agreement on Tariffs and Trade, the formation of the WTO, and permanent normal trading relations with China) is estimated to be just $9 per household, meaning that 99.9 percent of the gains from trade estimated in the study happened before 1982. So even if trade liberalization really did spur mammoth gains at some point in the (distant) past, the effects were over by the early 1980s. Second, on the distribution of gains and losses from trade, it is striking to me that so many economists who favor signing every trade agreement that comes down the pike can still feign surprise that expanded trade seems to be bad for most workers' wages. Put simply, it is completely predicted in textbook trade economics that wages for most workers will fall and inequality will rise when the United States trades more with poorer trading partners. Yes, expanded trade is predicted to lead to higher overall national income, but it is also predicted to redistribute enough income within the United States that it can (and is likely to) make most workers worse off. This should not be a surprise to anyone familiar with the topic.

The TPP Debate: Never Real and No Longer Polite

As Jeff Faux notes, we seem to have reached the part of the debate over the TPP when facts and evidence have largely given way to table-pounding. But given that this is still a live debate and that silly arguments continue to proliferate, here are a couple of clarifications that might be helpful to the debate: First, a vote for the TPP is a vote to reduce the wages of most American workers and increase inequality. Yes, policies that boost U.S. imports (like the TPP) raise total national income in the United States, but they also redistribute so much more income within the United States that most workers are made worse off.
And to be clear about this, the losses are not just the workers directly displaced by trade. Instead, it's the wages of all workers in the economy who compete with the trade-displaced workers for other jobs—about 100 million workers in all. The way to think of it is that landscapers and waitresses don't lose their jobs to trade, but their wages suffer from having to compete with laid-off apparel workers looking for work elsewhere. Now, it's true that the TPP would reduce wages for most Americans and increase inequality just a little bit. But that's the direction. And it's also true that expanded trade can potentially benefit everybody if the winners compensate the losers, but that would require complementary compensatory policies, and ones on a scale much, much larger than the Trade Adjustment Assistance (TAA) that is often thrown in with trade agreements. And while TPP proponents low-ball the wage-suppressing effect of the TPP, they often exaggerate the overall benefits for national income. But the source of both gains and losses from trade is the same: domestic reshuffling of production that sees importable sectors shrink and export sectors expand. So how big are the TPP's estimated income benefits? Not very big—it's estimated to increase U.S. GDP by about 0.4 percent cumulatively over the next 12 years, according to a paper by Petri and Plummer (2012) for the Peterson Institute for International Economics (PIIE). Yesterday, the normally sharp Adam Posen (President of PIIE) put these benefits, in an interview, at a few tenths of a percent of GDP each year. That figure is clearly wrong even by his own shop's estimates; it is roughly ten times higher than what the Petri and Plummer (2012) paper shows. Posen claimed on Twitter that this 0.4-percent-over-12-years estimate was "a lower bound" that "doesn't show dynamic gains from productivity growth thru competition". But that's not right—the Petri and Plummer (2012) PIIE estimate is actually a significant increase relative to an earlier estimate by the same authors, and they justify the newer, higher estimate exactly by saying they're now incorporating estimates of productivity gains stemming from more-competitive firms gaining market share after TPP's passage.*

TPP: Obama's Folly

This post originally appeared on The Huffington Post. Barack Obama's petulant criticism last Friday of Democrats who do not support his proposed Trans-Pacific Partnership reminds me of the old tongue-in-cheek advice to young lawyers: "If the facts are on your side, pound the facts. If the law is on your side, pound the law. If neither is on your side, pound the other lawyer." The facts are definitely not on the president's side. For two decades the trade deals negotiated by the last three presidents have lowered U.S. wages, cost jobs, and generated a chronic trade deficit that requires our country to borrow more money every year in order to pay for imports. The president's main argument, that exports have risen, without mentioning that imports have risen much faster, is now transparently deceitful to anyone who can add and subtract. Neither is the law in his corner. As did his predecessors, Bill Clinton and George Bush, he assures Americans that this deal will be different because, you see, it will protect workers. But the secret draft, which had to be revealed to Americans by Wikileaks, shows that once again a trade agreement will be used to enhance the power of multinational corporate investors over people who have to work for a living.
As AFL-CIO President Richard Trumka pointed out recently, the Office of the U.S. Trade Representative, which is charged with negotiating and enforcing the deal, does not even believe that murder and other brutal acts committed against labor union activists violate the "worker-protection" clauses of trade agreements. So, like a lawyer trained to defend the indefensible, Obama is desperately pounding the opposition. They are "just wrong," he says, without showing us why. He accuses them of "making stuff up"—that is, that they are liars. He whines that they are "whupping on me." He charges, nonsensically, that they "want to pull up the drawbridge and isolate themselves."

Today's Jobs Data More Evidence That Currency Manipulation Is Not a Problem That's Behind Us

Recent debates over the Trans-Pacific Partnership (TPP) have highlighted the failure of the treaty to include a provision to stop countries from actively weakening the value of their own currency in order to run trade surpluses. The way this currency management works is that countries (most notably China, though there are many others as well) buy assets denominated in dollars—mostly U.S. Treasuries. This boosts the demand for dollars in global markets and weakens demand for the Chinese renminbi. This in turn increases the value of the dollar, which makes U.S. exports expensive in global markets and makes foreign imports cheaper to U.S. consumers. The result is that exports are suppressed while imports grow and the U.S. trade deficit widens. Opponents of including a currency provision in the TPP have made a number of bad arguments, and one of them is that currency management was once a problem, but isn't anymore. They often point to recent appreciation of the Chinese currency as evidence that the problem of currency management is behind us. But this is incorrect—the evidence that currency management is still a problem is simply that foreign purchases of dollar-denominated assets remained strong in 2014. There is zero doubt that absent this continued intervention, the U.S. dollar would weaken. Further, the nearly $1 trillion in purchases of dollar-denominated assets that has characterized each year since 2008 has led to a large stock of dollar assets held by foreign investors and governments, and this large stock (over and above the annual flow of dollar purchases) also keeps the value of the dollar stronger than it would otherwise be. Further, several pieces of recent evidence suggest strongly that excess dollar strength could be becoming a real drag on recovery. In the first-quarter numbers on gross domestic product, the rising trade deficit knocked 1.3 percentage points off the economy's annualized growth rate. Then trade data for March came in showing a very large rise in the deficit. Finally, today's jobs report shows that growth of employment in manufacturing has stagnated in the last quarter (rising at an average monthly rate of less than 2,000 jobs), after rising at an average monthly rate of 18,000 in 2014.

CEA Report Is Simply Not That Relevant to Current Trade Policy Debates

The White House Council of Economic Advisers (CEA) released a report last Friday touting the benefits of international trade for the American economy. The paper provides an interesting review of research on a range of trade's economic effects, yet the report is largely irrelevant to current trade policy debates. Worse, when its findings are related to current trade policy debates, they are often reported in ways that could mislead readers.
The weaknesses of the report generally fall into one of three areas. First, the report overwhelmingly touts the benefits of trade flows qua trade flows, and often even compares outcomes relative to a hypothetical scenario where trade barriers were raised so high that the U.S. economy became completely autarkic. Academics might find this interesting, but nobody in today's economic debate argues for increasing U.S. trade barriers, let alone to historically never-seen levels. The CEA acknowledges this explicitly by noting that barriers to foreign imports coming into the U.S. economy are already extremely low and unlikely to be reduced significantly by treaties like the Trans-Pacific Partnership (TPP). Several times, the report alludes to potential benefits of the TPP and other treaties in pulling down barriers to U.S. exports abroad, but fails to mention what is by far the most important barrier to U.S. export success—several major trading partners (including some proposed TPP partners) managing the value of their own currencies for competitive gain vis-à-vis the United States. Second, the report spends very little time on the most important non-currency issue regarding trade policy: the distribution of gains and losses. When the report does cite research on distribution, it is woefully incomplete, looking only at how the benefits from trade are distributed while ignoring the costs. The research on the comprehensive costs and benefits of this issue is pretty clear: trade with labor-abundant trading partners, like many of those in the proposed TPP, tends to lower wages for the majority of U.S. workers and provide gains only to the upper end of the income distribution.

Skepticism About Trade Deals is Warranted

In 1993, it seemed obvious to me that NAFTA was about one main thing: providing a huge new (and much cheaper) labor force to U.S. manufacturers by making it safe for them to build factories in Mexico without fear of expropriation or profit-limiting regulation. But the Clinton administration claimed it would open a new market to U.S. business, and U.S. Trade Representative Mickey Kantor, President Clinton, and even Labor Secretary Bob Reich argued that it would create jobs for American workers and even increase job creation in the U.S. auto and steel industries. They said NAFTA would benefit Mexican workers and help create a bigger Mexican middle class, while deterring migrant workers from crossing the border to seek better-paying jobs in the United States. They also argued an alternative theory: that NAFTA would help keep U.S. manufacturers from moving to Southeast Asia, and that it was better to keep that off-shored work in our hemisphere and along our border. What actually happened?

- The trade balance with Mexico went from positive to very negative, resulting in the loss of more than 600,000 jobs in the United States.
- Mexico's corn farmers were overwhelmed by a flood of cheaper U.S. corn and almost 2 million agricultural workers were displaced. Most of them migrated illegally to the United States and remain here as exploited, undocumented workers.
- Wages fell for Mexican industrial workers, to the point that autoworkers in Mexico now make less than Chinese autoworkers. Some Japanese carmakers are starting to pay Mexican workers 90 to 150 pesos per day, or $6 to $10.
- U.S. auto companies shifted investment to Mexico to exploit its much cheaper labor. AP reports that "Mexican auto production more than doubled in the past 10 years.
The consulting firm IHS Automotive expects it to rise another 50 percent to just under 5 million by 2022. U.S. production is expected to increase only 3 percent, to 12.2 million vehicles, in the next 7 years." Since NAFTA's enactment, employment in the U.S. motor vehicle and parts industry has declined by more than 200,000 jobs. More recent claims about the expected benefits of the free trade agreement with Korea have proven hollow, too. Instead of creating 70,000 jobs, the deal's net effect has been a higher trade deficit and the loss of 60,000 jobs. Worse, the harshest impact of that deal won't be felt for several more years, when protective tariffs on pickup trucks are eliminated, making Korean imports 25 percent cheaper than they are today. U.S. auto workers will be hard hit. And then there's Permanent Normal Trade Relations with China and China's admission to the WTO, which led to an explosion of imports and the loss of more than 3 million jobs, mostly in manufacturing and mostly in occupations that paid more than the jobs created in export industries. One bad experience after another: that's why so many are so opposed to fast track and more NAFTA-style free trade deals.

Just the Facts: Trade and Investment Deals Are Bad for Working Families

Last week, the president claimed that critics who say that the Trans-Pacific Partnership (TPP) "is bad for working families… don't know what they are talking about." But the truth is, there is an emerging consensus that globalization has put downward pressure on the wages of most working Americans, and has redistributed income from the bottom to the top. My colleague Josh Bivens has shown that expanded trade with low-wage countries has reduced the annual wages of a typical worker by $1,800 per year. Given that there are roughly 100 million non-college-educated workers in the U.S. economy, the scale of wage losses suffered by this group likely translates to roughly $180 billion. Trade and investment deals such as the Korea-U.S. Free Trade Agreement (completed by President Obama), and the agreement to bring China into the World Trade Organization in 2001 (negotiated by President Clinton), have contributed to these lost wages. It's not surprising that one commentator concluded that "the Trans-Pacific Partnership trade deal is an abomination," precisely because of its impacts on "low-skilled manufacturing workers and income inequality."

No, the TPP Won't Be Good for the Middle Class

President Obama has been vociferously defending the Trans-Pacific Partnership (TPP) recently. He insists that it will be good for the American middle class and that TPP critics arguing otherwise are wrong. But in this case he's wrong and the TPP critics are right: there is no indication at all that the TPP will be good for the American middle class. I tried to take this on in very wonky terms in this long-ish report here, and in this post I'll try to boil it down a bit. The basic argument for why the TPP is likely to be a bad deal for the middle class is pretty simple. For one, even a genuine "free trade agreement" that was passed with no other complementary policies would actually not be good for the American middle class, even if it did generate gains to total national income. For another, the TPP (like nearly all trade agreements the U.S. signs) is not a "free trade agreement"—instead it's a treaty that will specify just who will be protected from international competition and who will not. And the strongest and most comprehensive protections offered are by far those for U.S.
corporate interests. Finally, there are international economic agreements that the United States could be negotiating to help the American middle class. They would look nothing like the TPP.

Even genuine "free trade" would likely be hard on the American middle class

Most (not all, but most) of the countries that would be included in the TPP are poorer and more labor-abundant than the United States. Standard trade theory has a clear prediction of what happens when the United States expands trade with such countries: total national income rises in both countries, but so much income is redistributed upwards within the United States that most workers are made worse off. This is sometimes called "the curse of Stolper-Samuelson", after the theory that first predicted it. And there is plenty of evidence to suggest that it's not just a theory, but a pretty good explanation for (part of) the dismal performance of wages for most American workers in recent decades and the rise in inequality. And the scale of the wage losses is much, much larger than commonly realized—it's not just those workers who lose their jobs to imports. Instead, the majority of American workers (those without a 4-year college degree) see wage declines as a result of reduced trading costs. The intuition is simply that while waitresses and landscapers might not lose their jobs to imports, their wages are hurt by having to compete with trade-displaced apparel and steel workers.
http://www.epi.org/issues/trade-and-globalization/?view=blog
---
abstract: |
  The last two centuries saw groundbreaking advances in the field of healthcare: from the invention of the vaccine to organ transplants and the eradication of numerous deadly diseases. Yet, these breakthroughs have only illuminated the role that individual traits and behaviours play in the health state of a person. Continuous patient monitoring and individually-tailored therapies can help in early detection and efficient tackling of health issues. However, even the most developed nations cannot afford proactive personalised healthcare at scale. Mobile computing devices, nowadays equipped with an array of sensors and high-performance computing power, and carried by their owners at all times, promise to revolutionise modern healthcare. These devices can enable continuous patient monitoring, and, with the help of machine learning, can build predictive models of a patient's health and behaviour. Finally, through their close integration with a user's lifestyle, mobiles can be used to deliver personalised proactive therapies. In this article, we develop the concept of anticipatory mobile-based healthcare – *anticipatory mobile digital health* – and examine the opportunities and challenges associated with its practical realisation.

  Mobile Digital Health, Anticipatory Mobile Computing, Mobile Sensing, Ubiquitous Computing, Machine Learning
author:
- Veljko Pejovic
- Abhinav Mehrotra
- Mirco Musolesi
bibliography:
- 'references.bib'
title: |
  Anticipatory Mobile Digital Health:\
  Towards Personalised Proactive Therapies\
  and Prevention Strategies
---

Introduction
============

Mobile computing devices, such as smartphones and wearables[^1], represent more than occasionally used tools, and nowadays coexist with their users throughout the day. In addition, these devices host an array of sensors, such as a GPS receiver, accelerometer, heart rate sensors, microphones and cameras, to name a few [@Lane2010]. When data from these sensors are processed through machine learning algorithms, they can reveal the context in which a device is being used. The context can include anything from a device's location to a user's physical activity, even stress levels and emotions [@Rachuri2010; @Lu2012]. Therefore, the personalisation and the sensing capabilities of today's mobiles can provide a close view of a user's behaviour and wellbeing. Above all, mobile devices are always connected. They represent the most direct point of contact for the majority of the world's population. Mobile phones, for example, provide an opportunity for an intimate, timely communication unimaginable just twenty years ago. One of the consequences is that mobile devices are becoming a new channel for the delivery of health and wellbeing therapies. For instance, digital behaviour change interventions (dBCIs) harness smartphones to deliver personally tailored coaching to participants seeking behavioural change pertaining to smoking cessation, depression or weight loss [@Lathia2013]. Communication through a widely used, yet highly personal device ensures that a person can be contacted at all times, which might be crucial in the case of suicide prevention interventions. In addition, the smartphone is used for numerous purposes, which protects a dBCI participant from the stigmatisation that might occur if the device were used exclusively for therapeutic purposes.
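As a toy illustration of the sensor-to-context inference step described above, the sketch below trains a classifier to label windows of accelerometer data with a physical activity. The features, values and labels are invented for the example and are not taken from any of the systems cited in this article; Python and scikit-learn are assumed purely for convenience.

```python
# Minimal sketch: inferring one facet of "context" (physical activity)
# from hand-crafted accelerometer features. All values are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row summarises one window: [mean magnitude (g), std of magnitude, dominant frequency (Hz)]
X_train = np.array([
    [1.00, 0.02, 0.0],   # still
    [1.05, 0.15, 1.8],   # walking
    [1.10, 0.45, 2.8],   # running
    [1.01, 0.03, 0.1],   # still
    [1.06, 0.18, 2.0],   # walking
    [1.12, 0.50, 3.0],   # running
])
y_train = ["still", "walking", "running", "still", "walking", "running"]

clf = DecisionTreeClassifier().fit(X_train, y_train)

# A new window of sensor readings, summarised into the same features.
window = np.array([[1.07, 0.20, 1.9]])
print(clf.predict(window))   # -> ['walking']
```

In a deployed system the feature extraction, the amount of labelled data and the choice of model would all be far more involved, but the pipeline shape (raw sensor stream, window-level features, supervised classifier) is the same.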
Besides the inference of the current state of the sensed context, an ever-increasing amount of sensor data, advances in machine learning algorithms, and powerful computing hardware packed in mobile devices allow predictions of the future state of the context. *Context predictions* have already been demonstrated in the domain of human mobility [@Ashbrook2003_gpsprediction; @Scellato2011; @Chon2013], but also for the population health state [@Madan2010]. Every next generation of mobile devices comes equipped with new sensors, and soon we may expect galvanic skin response (GSR), heart rate, body temperature and oximetry sensors as standard features[^2]. This would open up the ability to accurately predict the health state of an individual. *Anticipatory mobile computing* is a novel concept that, just like context prediction, relies on mobile sensors to provide information upon which the models of context evolution are built, yet it extends the idea with reasoning and acting upon such predictions. The concept is inspired by biological systems that often use the past, present and predicted future state of themselves and their environment to change their state at an instant, so as to steer the future state in a preferred direction [@Rosen1985]. Anticipatory mobile computing has the potential to revolutionise proactive healthcare. Health and wellbeing problems could be predicted from personalised sensor readings, and preventive actions could be taken even before the onset of a problem. We term this new paradigm *Anticipatory Mobile Digital Health*, and in this paper we discuss the challenges and opportunities related to its practical realisation. First, we examine the key enablers (i.e., mobile and wearable sensors) that provide the contextual data which can be leveraged to infer the health state of a user. Then, we discuss machine learning techniques used for building predictive models of the user's (health) context. We are particularly interested in the models that describe how the context might change after an intervention or a therapy. We investigate the challenges related to unobtrusive learning of the impact of an intervention on a person, and the opportunities for highly personalised healthcare. We also take into account individual differences among users, and the potential for capturing and including genetic pre-determinants into the system. We continue with the examination of human-computer interaction issues related to therapy delivery, and conclude with a consideration of ethical issues in anticipatory mobile digital health. Finally, while we have examined the potential for inducing a change in a person's behaviour through anticipatory mobile computing before [@Pejovic2014b], this paper extends the idea to the much larger domain of digital healthcare, and elaborates on particular challenges and opportunities in the area.

Mobile Sensing for Healthcare
=============================

The use of wireless and wearable sensors represents a novel and rapidly evolving paradigm in healthcare. These sensors have the potential to revolutionise the way the health of a person is assessed. Sensor-embedded devices are given to patients in order to obtain their health-related data remotely. These devices not only help a patient reduce the number of visits to the clinic, but also offer unprecedented opportunities to practitioners for diagnosing diseases and tailoring treatments through continuous real-time sampling of their patients' health data.
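In practice, the "continuous real-time sampling" mentioned above is duty-cycled and batched so as not to drain the battery or keep the radio constantly active. The following is a minimal sketch of such a sampling loop; the sensor and the clinician-facing upload endpoint are simulated placeholders, not a real device API.

```python
# Minimal sketch: duty-cycled sampling with batched uploads.
# Sensor, period, batch size and upload endpoint are all illustrative.
import random
import time

def read_heart_rate():
    # Stand-in for a real wearable sensor API.
    return random.randint(55, 100)

def upload(batch):
    # Stand-in for sending readings to a clinician-facing backend.
    print(f"uploading {len(batch)} readings: {batch}")

def monitor(sample_period_s=1.0, batch_size=5, batches=2):
    batch = []
    for _ in range(batch_size * batches):
        batch.append(read_heart_rate())
        if len(batch) == batch_size:   # trade-off: reporting latency vs. radio/battery use
            upload(batch)
            batch = []
        time.sleep(sample_period_s)

if __name__ == "__main__":
    monitor(sample_period_s=0.01)  # short period so the sketch runs quickly
```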
Furthermore, some of these devices empower users with the ability to self-monitor and curb certain wellbeing issues on their own. Today's mobile phones are laden with sensors that are able to monitor various context modalities such as physical movement, sound intensity, environment temperature and humidity, to name a few. Previous studies have shown the potential of mobile phones in providing data that can be used to infer the health state of a user [@Consolvo2006Houston; @Consolvo2008; @Lane2011b; @Canzian2015MoodTraces]. Houston [@Consolvo2006Houston] and UbiFit [@Consolvo2008] are early examples of mobile sensing systems designed to encourage users to increase their physical activity. Houston monitors a user's physical movement by counting the number of steps taken via an accelerometer that serves as a pedometer. UbiFit, on the other hand, relies on the Mobile Sensing Platform (MSP) [@Lester2006Practical] to monitor varied physical activities of a user. MSP is capable of inferring physical activities including walking, running, cycling, cardio and strength exercise, and other non-exercise physical activities, such as housework. BeWell is a mobile application that continuously monitors a user's physical activity, social interaction and sleeping patterns, and helps the user manage their wellbeing [@Lane2011b]. BeWell relies on sensors such as the accelerometer, microphone and GPS, which are embedded in mobile phones. In [@Canzian2015MoodTraces] the authors show that the depressive states of users can be inferred purely from location and mobility data collected via mobile phones. The above examples demonstrate the close bond between smartphone-sensed data and different aspects of human health and wellbeing. A particularly interesting example of mobile healthcare monitoring is given by *LifeWatch*[^3], a smartphone that is equipped with health sensors that constantly monitor the user's vital parameters including ECG, body temperature, blood pressure, blood glucose, heart rate, oxygen saturation, body fat percentage and stress levels. A user has to perform a specific action in order to take health measurements. For example, a user should hold the phone's thermometer against the forehead in order to measure body temperature, and to take ECG readings, the user should clutch the phone horizontally with a thumb and forefinger placed directly on top of a set of sensors that are placed on the sides of the phone. The sensor data are sent to the cloud for analysis and the results are delivered back to the user within a short time interval. Such a phone can prove to be extremely useful in the healthcare domain. However, there is still no proof of the accuracy of its results. Although within their owners' reach for most of the time, smartphones do not stay in constant physical contact with their users, and consequently are limited with respect to the personal data they can provide. More recently, mobile phone companies have introduced smartwatches that link with mobile phones and enable users to perform actions on their mobiles without actually interacting with them. These devices open up new possibilities for health data sensing. First, they maintain continuous physical contact with their users, and second, they host a new set of sensors, usually unavailable on traditional smartphones. In general these devices come with accelerometer, heart rate and body temperature sensors.
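To make the pedometer idea described for Houston concrete, the sketch below counts steps as upward crossings of an accelerometer-magnitude threshold. The threshold and the synthetic trace are illustrative; production pedometers use more robust filtering and peak detection.

```python
# Minimal sketch: counting steps from raw accelerometer samples.
# Threshold (in m/s^2) and the synthetic trace are illustrative.
import math

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def count_steps(samples, threshold=11.0):
    """Count upward crossings of a magnitude threshold."""
    steps = 0
    above = False
    for s in samples:
        m = magnitude(s)
        if m > threshold and not above:
            steps += 1
            above = True
        elif m < threshold:
            above = False
    return steps

# Synthetic trace: four spikes in acceleration magnitude -> four steps.
trace = [(0, 0, 9.8), (1, 2, 12.5), (0, 0, 9.8), (2, 1, 12.8),
         (0, 0, 9.8), (1, 1, 12.3), (0, 0, 9.8), (2, 2, 12.9), (0, 0, 9.8)]
print(count_steps(trace))  # -> 4
```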
Smartwatches are inspired by the concept of the smart-wristband, a device that monitors the health state of a user and presents it in a visual form on the linked mobile phone. Smart-wristbands enable real-time health state monitoring, and have achieved considerable commercial success among the health-aware population (e.g. Jawbone[^4]). Initially these bands were able to report only a user's physical activity. However, new sensors, such as body temperature and heart rate, have been introduced, together with more sophisticated data analytics and presentation to the user. Mobile sensing on the phone is, for the majority of readings, limited by the amount of physical contact the user makes with the phone. Smartwatches and smart-wristbands ensure that the contact is there, yet are limited to a particular part of the user's body – her wrist. Sensor-embedded smart-wearables, designed to monitor a specific health-related parameter from a specific part of a user's body, have appeared recently and promise more reliable sensing. Such smart-wearables could enable healthcare practitioners to obtain their patients' health data continuously and in the natural environment of the patient. These devices come with a variety of health sensors. A pulse and blood oxygen sensor, airflow sensor, body temperature sensor, electrocardiogram sensor, glucometer sensor, galvanic skin response (GSR) sensor, blood pressure sensor (sphygmomanometer), and electromyography (EMG) sensor are some examples of the health sensors embedded in smart-wearables. Some examples of smart-wearables include *Epoc Emotiv* [@Fraga2013Emotiv], an EEG headset capable of capturing brain signals that can be analysed to infer a user's thoughts, feelings, and emotions. *MyoLink* is another wearable that can continuously monitor the user's muscles and heart. It can capture muscle energy output, which in turn can be used to quantify the user's fatigue, endurance and recovery level. Also, it can be placed on the chest to continuously track the heart rate of the user. *ViSi Mobile*[^5], worn on a wrist, measures blood pressure, haemoglobin level, heart rate, respiration rate, and skin temperature. The device is highly portable and enables the user to monitor their health anytime and anywhere. The next step in wearable computing is the one in which devices become completely unobtrusive and, as in Weiser's vision of pervasive computing, completely integrated with people's lives [@Weiser1991]. Shrinking the size of smart-wearables is a push in that direction, for example reducing a device from something obtrusive to a small adaptive patch that the user can wear on their body and forget about. BioStamp [@Perry2015CheckEngine] is a device composed of small and flexible electronic circuits that stick directly to the skin like a temporary tattoo and monitor the user's health. It is a stretchable sensor capable of measuring body temperature, monitoring exposure to ultraviolet light, and checking pulse and blood-oxygen levels. The company envisions future versions of BioStamp able to monitor changes in blood pressure, analyse sweat, and obtain signals from the user's brain and heart in order to use them in electroencephalograms and electrocardiograms [@Perry2015CheckEngine]. These wearable sensors enable the continuous measurement of health metrics and the timely delivery of treatment to patients. Yet, the difficulty of continuous monitoring is not the only problem in modern healthcare.
Recent studies have shown that around 50% of prescribed drugs are never taken [@Nasseh2012Adherence; @Osterberg2005Adherence], and thus prescribed therapies fail to improve the health of the patients [@Rasmussen2007Adherence]. In order to address this problem, Hafezi et al. [@Hafezi2015Ingestible] proposed *Helius*, a novel sensor for detecting the ingestion of a pharmaceutical tablet or a capsule. The system is basically an integrated-circuit micro-sensor developed for daily ingestion by patients, and as such allows real-time measurement of medication ingestion and adherence patterns. Moreover, Helius enables practitioners to measure the correlation between drug ingestion and patients' health parameters, e.g. physical activity, heart rate, sleep quality, and blood pressure, all of which can be sensed by mobile sensors. The ecosystem of devices supporting health sensing is already substantial and constantly increasing. Soon, healthcare practitioners will have a remote, multifaceted view of a patient's health in real time. The key enabler is the unobtrusiveness of these smart sensing devices. Furthermore, issues such as the accuracy of measurements, accountability for mistakes and the security of a user's privacy need to be thoroughly addressed before these devices can penetrate into official medical practice. In this paper we discuss the novel concept of anticipatory mobile digital health, outlining the challenges and opportunities in this promising field. Although smart health sensing devices are still in their infancy, we believe that we will witness a rapid evolution of this research area in the coming years.

Anticipatory Mobile Computing
=============================

Anticipation, for living systems, is the ability to reason upon past, present and predicted future information. Rosen described such a system as *“a system containing a predictive model of itself and/or its environment, which allows it to change state at an instant in accord with the model’s predictions pertaining to a later instant”* [@Rosen1985], thus indicating that an anticipatory system builds and maintains an internal predictive model. The concept of an anticipatory computing system envisions a digital implementation of such a model, and automated actioning based on the model's predictions. Yet, an anticipatory computing system is of interest only if the anticipation carries a value for the end-user. We argue that modern mobile computing devices fulfil the necessary prerequisites for anticipatory computing. First, thanks to built-in sensors and personalised usage, these devices can gather information about their own state, and indirectly about the user's state and the state of the environment; second, their computing capabilities allow devices to build predictive models of the evolution of that state; finally, the bond between a device and its end-user is so tight that automated suggestions (based on the anticipation) a device might convey to a user are likely to influence the user's actions. After all, people already look into their smartphones when they need to navigate in a new environment or choose a restaurant. To clarify the concept of anticipation on mobile devices (termed *Anticipatory Mobile Computing*), in Figure \[fig:feedback\_loop\] we sketch a system that senses the context and builds a model of the environment's evolution, which gives it the original predicted future. The system then evaluates the possible outcome of its actions on the future.
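A minimal sketch of this sense-predict-evaluate-act loop is given below. The context variables, candidate actions and "models" are trivial stand-ins chosen only to show the control flow; a real anticipatory system would learn both predictors from sensor data.

```python
# Minimal sketch of the anticipatory loop: sense the context, predict the "original"
# future, predict the "modified" future under each candidate action, act on the best.
# All values and effects are illustrative stand-ins, not learned models.

def sense_context():
    return {"stress": 0.7, "activity": "sitting", "hour": 15}

def predict_future(context, action=None):
    """Return a predicted stress level; lower is better in this toy example."""
    stress = context["stress"] + 0.1          # original future: stress keeps rising
    effect = {"breathing_exercise": -0.3, "suggest_walk": -0.2, None: 0.0}
    return stress + effect.get(action, 0.0)   # modified future under an action

def choose_action(context, actions=("breathing_exercise", "suggest_walk", None)):
    original = predict_future(context)                      # no intervention
    best = min(actions, key=lambda a: predict_future(context, a))
    return best, original, predict_future(context, best)

ctx = sense_context()
action, original_future, modified_future = choose_action(ctx)
print(action, round(original_future, 2), round(modified_future, 2))
# -> breathing_exercise 0.8 0.5
```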
An action that leads to the preferred modified future is realised through the feedback loop that involves interaction of the system with the user.

![Anticipatory mobile systems predict context evolution and the impact that current actions can have on the predicted context. The feedback loop consisting of a mobile and a human enables the system to affect the future.[]{data-label="fig:feedback_loop"}](figs/feedback_loop_thin_new.eps){width="\linewidth"}

Anticipatory Healthcare System Architecture Design
==================================================

The opportunity to infer the health and wellbeing state of an individual with the help of mobile sensing, together with the perspective of anticipatory mobile computing, paves the way for preventive healthcare through anticipatory mobile healthcare systems. We sketch the main ideas behind such a system in Figure \[fig:therapy\_feedback\_loop\]. Physiological (e.g. heart rate, GSR) and conventional mobile sensors (e.g. GPS, accelerometer) provide training data for machine learning models of the context (e.g. a user's depression level) and its evolution. The models predict the future state of the context, termed the original future, and the state after an intervention or a therapy, termed the modified future. Based on the predictions, a therapy with the most preferred outcome is selected and conveyed to the user. Finally, different users may react differently to the same therapy, and close sensor-based patient monitoring, together with a priori inputs, such as a user's genetic background, is used to custom-tailor the therapies.

![Anticipatory mobile systems predict context evolution and the impact their actions can have on the predicted context. The feedback loop consisting of a mobile and a human enables the system to affect the future.[]{data-label="fig:therapy_feedback_loop"}](figs/therapy_feedback_loop.png){width="0.8\linewidth"}

A practical realisation of an anticipatory mobile digital health system requires that the following building blocks are present:

- **Mobile sensing.** The role of this block is to manage which of a number of available mobile sensors are sampled, and how often. Mobile devices' sensors were originally envisioned as occasionally used features, and their frequent sampling can quickly deplete a device's battery. At the same time, important events may be missed if sampling is too coarse.
- **Therapy and prevention toolbox.** This block contains definitions of possible therapies and prevention strategies that can be delivered to the user. Although in the future we envision further automation of this module, for now we feel that a professional therapist's expertise should be harnessed to limit the number of possible therapies and to oversee their deployment.
- **Machine learning core.** Anticipatory mobile digital health employs machine learning for two separate aspects of health state evolution modelling: the *context evolution model* and the *therapy/prevention-effect model*. The former connects sensor data with higher-level context, and provides a predictive model of how the context might evolve. The latter provides a picture of how different therapies might affect a user's health state. We discuss these models in detail in the next section.
- **User interaction interface.** The success of an anticipatory mobile digital healthcare system is limited by the user's compliance with the provided therapy and prevention strategy.
The look, feel and behaviour of the mobile application that delivers the therapy or prevention strategy to a user is crucial in this step. In the following section, we also discuss the challenges in designing a successful user interaction interface.

Challenges and Opportunities
============================

Numerous challenges obstruct the path towards implementations of anticipatory mobile digital healthcare systems. Rooted in mobile sensing, anticipatory mobile digital health faces challenges such as the resource (primarily energy) inefficiency of continuous sensing, and the difficulty of reliable context modelling. Yet, these challenges are common to the larger field of mobile sensing, and a thorough discussion of these issues is available elsewhere [@Lane2010; @Pejovic2015; @Klasnja2012]. Instead, here we focus on aspects that are unique to anticipatory mobile healthcare. The use of machine learning algorithms to model and predict user behaviour and the effect of a therapy or a prevention strategy on the future health state of a specific user is the main challenge. The value of machine learning models, for instance, increases with the amount of training data available for a particular user. Second, the mobile monitors the user and may suggest therapies, yet it is the user herself who decides whether to take the therapy or to follow certain preventive measures or not. Besides machine learning, future anticipatory mobile digital health developers should pay special attention to the human-computer interaction issues in this field, and try to answer: what is the best way to convey advice or a therapy to a user, so that compliance with the proposed therapy or prevention strategy is the highest? Finally, the area of ethics, responsibility and entity roles in anticipatory mobile digital health remains an uncharted territory. In the rest of the section we discuss each of these challenges individually, and provide positional guidelines for overcoming them.

Machine Learning in Anticipatory Mobile Digital Health
------------------------------------------------------

Anticipatory mobile digital health, as stated in the previous section, employs machine learning for two separate aspects of health state evolution modelling: the *context evolution model* and the *therapy/prevention-effect model*. First, a model of a user's current and predicted future health state is needed. In this model, a relationship between mobile sensor data and a high-level health state is built. The model can be direct, if certain values of physiological sensor readings indicate a certain health state, or indirect, if sensor readings reveal contextual aspects that can be connected to the health state of an individual – for instance, GPS readings can reveal user mobility, which in turn can hint at a user's depression state [@Canzian2015MoodTraces]. In the next step, inference models are extended to provide predictions of the future health state, either directly, or indirectly through predictions of the future context. Forecasting a user's next location is an active area of research, with substantial achievements [@Ashbrook2003_gpsprediction; @Scellato2011; @Chon2013]. For many other aspects of the user's behaviour and health state, reliable predictive models still do not exist, and even the possibility of them being built remains an open question.
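As an illustration of the simplest possible context evolution model, the sketch below fits a first-order Markov chain to a sequence of discretised daily states and predicts the most likely next state. The state labels and the history are invented for the example; a deployed system would work with far richer, sensor-derived state descriptions.

```python
# Minimal sketch of a context-evolution model: a first-order Markov chain over
# discretised daily states. Labels and the history sequence are illustrative.
from collections import Counter, defaultdict

history = ["active", "active", "sedentary", "sedentary", "sedentary",
           "active", "sedentary", "sedentary", "active", "active"]

# Count observed transitions: state -> next state.
transitions = defaultdict(Counter)
for today, tomorrow in zip(history, history[1:]):
    transitions[today][tomorrow] += 1

def predict_next(state):
    """Most likely next state, with its empirical probability."""
    counts = transitions[state]
    nxt, n = counts.most_common(1)[0]
    return nxt, n / sum(counts.values())

print(predict_next("sedentary"))   # -> ('sedentary', 0.6)
```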
The second major machine learning model in anticipatory mobile digital health is the model of the impact of a possible therapy or prevention strategy on the predicted future health state of a user. There are two non-exclusive ways to construct such a model. One is to harness the existing expertise in healthcare to map available therapies to health state transitions. For example, we could map antidepressants to a transition from depressive states to a healthy state. However, these rules are not suitable for preventive healthcare. Anticipatory mobile digital health operates on predictions, and consequently therapies should aim at *prevention*. In addition, although mobile devices remain highly personal, and the sensor data uncover fine-grained individual health state information, these general rules limit the ability of the system to deliver personalised healthcare. An alternative approach is to build a therapy/prevention-effect model by monitoring the evolution of a user's state after a proactive therapy or prevention strategy is delivered. By comparing the original predicted state with the actual state recorded some time after the therapy (or prevention strategy), we can identify the relationship between the therapy (or prevention strategy) and the future health state change. Built this way, a model reveals successful proactive therapies, which is difficult to achieve in traditional practice. Moreover, what works for one patient may not work for another – these models are highly personalised, and can reveal therapies that are useful for a particular kind of person only. Still, we argue that these models should not be built from scratch – the available therapies that could be automatically suggested to a particular patient in a particular situation should be determined by rules stemming from the existing medical expertise.

**Learning with a user.** Automated tool-effect modelling in anticipatory mobile digital health requires that a therapy (or prevention strategy) is delivered to a user so that its effects can be observed. This outcome is then used to train and refine the model. *Reinforcement learning*, where an agent uses a tool in the intervention environment (which can, for example, be represented as a Markov decision process), is a natural way to model the problem [@Sutton1998]. In every step, a certain tool is selected, used, and the observed change in the health state elicits a reward that reflects how positive the change is.

**Measuring health state.** There is thus a need for a suitable metric for *measuring the health state change*. Here we need to evaluate the effect of a proactive therapy or prevention strategy, essentially comparing the *original predicted health state* and the *modified predicted health state*. We argue that the comparison metric has to be domain dependent. For example, if an anti-stress therapy is evaluated, the difference between the predicted skin conductivity and heart rate values without an intervention, and the actual values after the intervention, is a reasonable measure of stress level change [@Healey2005]. However, system designers should keep in mind that the metric has to be both suitable for machine learning algorithms and relevant from the healthcare point of view.

**Learning without interfering.** Reinforcement learning uncovers the mapping between therapies and health state changes. Delivering a previously unused therapy or prevention strategy refines the model, as we learn more about how the user reacts to this tool.
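A minimal sketch of this reinforcement-learning view is shown below: each candidate therapy keeps a running estimate of its reward (the observed health state change), and an epsilon-greedy rule occasionally tries a less-explored therapy. The therapy names, reward values and epsilon are illustrative only, not part of any system described in this paper.

```python
# Minimal sketch: epsilon-greedy selection among candidate therapies, with the
# reward defined as the observed improvement over the predicted health state.
# Therapy names, true effects, noise and epsilon are all illustrative.
import random

therapies = ["breathing_exercise", "suggest_walk", "sleep_hygiene_tip"]
value = {t: 0.0 for t in therapies}   # running estimate of health-state improvement
count = {t: 0 for t in therapies}

def select_therapy(epsilon=0.1):
    if random.random() < epsilon:                    # explore an under-used therapy
        return random.choice(therapies)
    return max(therapies, key=lambda t: value[t])    # exploit the best-known one

def update(therapy, reward):
    """reward = predicted state without the therapy minus observed state after it."""
    count[therapy] += 1
    value[therapy] += (reward - value[therapy]) / count[therapy]  # incremental mean

# Simulated episodes: deliver a therapy, observe a noisy effect, update the estimate.
random.seed(0)
true_effect = {"breathing_exercise": 0.3, "suggest_walk": 0.2, "sleep_hygiene_tip": 0.05}
for _ in range(200):
    t = select_therapy()
    update(t, true_effect[t] + random.gauss(0, 0.05))
print(max(value, key=value.get))   # converges to 'breathing_exercise'
```

The `epsilon` parameter in the sketch makes explicit the tension discussed next: how often the system should depart from the best-known therapy in order to keep learning about the others.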
From the practical point of view, however, we face a dilemma: use a tool that is known to result in a positive health change outcome, or experiment with an unused tool that might yield an even better outcome. In reinforcement learning this dilemma is known as the *exploration vs. exploitation trade-off*. Strategies for solving the dilemma in an anticipatory mobile digital health setting should be aware of the possible irreversible negative consequences of a wrong therapy or prevention strategy. Preferably, the system should learn as much as possible without explicit delivery of therapies to a user. Such a learning concept is called *latent learning*. It is a form of learning where a subject is immersed in an unknown environment or situation without any rewards or punishments associated with it [@Tavris1997]. Latent learning has been demonstrated in living beings who form a cognitive map of the environment solely because they are immersed in the environment, and later use the same map in decision making. We argue that mobile computing devices, through multimodal sensing, can harness latent learning to build a model of the user's reaction to certain actions or environmental changes that correspond to the ones targeted by the therapies. This is particularly relevant for therapies that are not based on medications, such as behavioural change interventions [@Pejovic2014b]. For example, suppose a depression prevention system can provide the user with the suggestion to go out for dinner with friends. We can get a priori knowledge of how this suggestion would affect the user if, for example, on a separate occasion we detect that the participant went out for dinner with friends, and we gauge the depression levels, estimated through mobility and physical activity metrics, before and after the dinner. Defining how the expected action – going out with friends – should manifest from the point of view of sensors – e.g., the number of Bluetooth contacts detected, location, time of the day – is one of the prerequisites for practical latent learning. Again, interdisciplinary efforts are crucial to ensure that the detected reaction corresponds to the one that should be elicited by the tool.

Personalised Healthcare
-----------------------

Current therapies are often created as "one size fits all", yet in many cases individuals react differently. For example, antidepressants are ineffective in 38% of the population, while cancer drugs work for only one quarter of the patient population [@PersonalizedMedicine2014]. Personalised therapies promise to revolutionise healthcare by avoiding the traditional trial-and-error therapy prescription, minimising adverse drug reactions, revealing additional or alternative uses for medicines and drug candidates [@Mancinelli2000], and curbing the overall cost of healthcare [@PersonalizedMedicine2014]. Anticipatory mobile digital health is poised to bring personalised healthcare closer to mainstream practice. Not only can mobile sensing provide a glimpse into individual behavioural patterns, identifying risky lifestyles, but therapy/prevention-effect machine learning models can also take into account a patient's genetics in order to individualise the therapy or prevention strategy. Investigation of which genes impact the occurrence of a certain disease and the reaction to its treatment is a very active area of research.
The potential for healthcare improvement is immense, bearing in mind that for some conditions, such as melanoma, the majority of cases are driven by person-specific genetic mutations and could be targeted by specific drugs [@PersonalizedMedicine2014]. The relationship is not one way, and anticipatory mobile digital health could also help with pharmacogenomics, the study of how genes affect a person's response to drugs. Identifying common pieces of genetic background in populations who reacted to an anticipatory therapy or prevention strategy in the same way would help uncover the relationship between genes and health treatments. Finally, the inclusion of genetic background in common medical practice is not far from reality: in 2014, human genome sequencing for less than USD \$1000 became available. HCI Issues in Anticipatory Mobile Digital Health ------------------------------------------------ Despite the automation that anticipatory mobile digital health brings, in the end it is up to the user to comply with the given therapy or prevention strategy. This is particularly important for behavioural change intervention therapies, which are delivered in cases where the health state is directly influenced by the patient's behaviour. Consequently, the communication between the system and the patient has to be seamless. Users are an important part of the system, and their inclusion requires an appropriate interface between the participant and the system. As noted by Russell et al. [@Russell2003], a system that autonomously makes decisions and evolves over the course of its lifetime needs to be transparent to the user. Through the user interface such a system must be understandable by the user and capable of review, revision, and alteration. In addition, the content should be framed to emphasise that the tool can help, yet it is essential to avoid harassing or patronising the participant. The timing of a therapy or a prevention strategy is also important for its successful delivery. This is particularly true for automated therapies delivered via a mobile device. An inappropriately timed intervention that arrives, for instance, when a patient is in a meeting or riding a bicycle may lead to annoyance, or may be completely overlooked by the patient. Mobile sensing helps with identifying opportune moments to deliver therapies. The context a user is in, such as her location, physical activity and engagement in a task, to an extent determines her interruptibility [@Pejovic2014; @Mehrotra2015NotifyMe]. Machine learning and mobile sensing are harnessed to monitor a user's reaction to an interruption arriving when the user is in a certain context, and from there a model of personal interruptibility is built. Querying the model with the momentary value of a user's context returns the estimated interruptibility at that moment. While practical implementations of the above models already exist [@Pejovic2014], in the future we envision predictive models of user interruptibility. Finally, we highlight that opportune moments denote those times at which a patient is likely to quickly acknowledge or read the content of a delivered message. Identifying moments at which the delivered information will have the highest medical impact is even more important, yet due to the difficulty of obtaining the training data (we would need to deliver the same therapy or prevention strategy at different times to the same user) identifying such moments remains very challenging.
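As a sketch of how such an interruptibility model could be built and queried, the snippet below trains a small classifier on context features logged when past notifications arrived, labelled by whether the user reacted promptly. The feature set, the ten-minute reaction label and the random forest are illustrative assumptions rather than the models used in the cited work.

```python
# Sketch of a personal interruptibility model: context features logged when
# past notifications arrived, labelled by whether the user reacted within ten
# minutes. Features, labels and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [at_home, in_meeting, physically_active, hour_of_day]
X = np.array([
    [1, 0, 0, 20],   # at home in the evening, idle
    [0, 1, 0, 10],   # in a meeting mid-morning
    [0, 0, 1, 8],    # cycling to work
    [1, 0, 0, 13],   # at home around lunchtime
])
y = np.array([1, 0, 0, 1])   # 1 = notification acknowledged within 10 minutes

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def estimated_interruptibility(context):
    """Probability that a therapy delivered now would be promptly noticed."""
    return model.predict_proba([context])[0][1]

# Query the model with the momentary context before delivering a therapy.
print(estimated_interruptibility([1, 0, 0, 19]))
```

Querying the fitted model with the momentary context then gives the estimated interruptibility discussed above; the predictive variant would instead forecast interruptibility at future points in time.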
Ethics and Accountability ------------------------- Privacy issues in mobile sensing emerged soon after the proliferation of smartphones started about a decade ago. Misuse and leaking of information that can be collected by a mobile device, such as a user's location, collocation with other people, and physical activity, may deter people from trusting mobile applications. Trust is a key component of the success of anticipatory mobile digital health applications, and every care should be taken to ensure that personal information does not leak. Ensuring that sensor data do not leave the device on which they were collected is one way to minimise the risk. However, this complicates the construction of the joint machine learning models discussed earlier. The responsibility chain in the domain of anticipatory mobile digital health is yet to be defined. Unsuccessful therapies can have serious consequences. It is unclear who is to blame if a delivered therapy or prevention strategy does not improve the health state of a patient or, even worse, endangers the person's life. A therapist who designed the therapy, a software architect who devised the underlying machine learning components, and the patient herself all play a role in the process. Conclusions =========== Personalised and proactive healthcare brings undisputed benefits in terms of therapy (or prevention strategy) efficiency and the cost-effectiveness of the healthcare system. Mobile devices have the potential to become both our most vigilant observers and our closest advisors. Anticipatory mobile digital health harnesses the sensing capabilities of mobiles to learn about the user's health state and predict its evolution, so that proactive therapies tackling predicted health issues are delivered to the user in advance. With the help of machine learning that takes into account rich sensor data and a user's genetic background, anticipatory mobile digital health applications can tailor personalised therapies. In addition, the concept can be used to learn more about how therapies affect different demographics, users who behave in a certain way, or users who have a particular genetic background. Generalising from a larger pool of users and therapies can identify groups for which a therapy (or prevention strategy) is successful, in effect uncovering new facts about drugs. Finally, we believe anticipatory mobile health applications warrant a discussion on their inclusion in health insurance frameworks. [**Acknowledgements**]{} This work was supported by the EPSRC grants "UBhave: ubiquitous and social computing for positive behaviour change" (EP/I032673/1) and "Trajectories of Depression: Investigating the Correlation between Human Mobility Patterns and Mental Health Problems by means of Smartphones" (EP/L006340/1). [^1]: In this paper by *wearables* we refer to smartwatches, smartglasses, e-garments and similar clothing and accessory items equipped with computing and sensing capabilities. [^2]: See for example the proposal by Intel: <http://iq.intel.co.uk/glimpse-of-the-future-the-healthcare-smartwatch/>. [^3]: [www.lifewatch.com](www.lifewatch.com) [^4]: [jawbone.com](jawbone.com) [^5]: [www.visimobile.com](www.visimobile.com)
Drug therapy isn't recommended for all women with osteopenia, a precursor to osteoporosis. Clinicians need to identify, early on, the patients with low bone mass at additional risk for future osteoporotic fractures. Nearly 30 million women have either osteoporosis or low bone mass and the numbers are rising. Preventing osteoporosis is more than an issue of deteriorating quality of life. For an ever-growing number of baby boomers, it may even be a matter of life and death, considering that there's a 20% increase in mortality in the year following hip fracture. And fracture risk is part and parcel of the very definition of this major public health problem: "a skeletal disorder characterized by compromised bone strength predisposing a person to an increased risk of fracture."1 A chronic disease, osteoporosis is silent for years or decades before a fracture occurs. Although most common among postmenopausal Caucasian women, it affects women of other races and men, too. An estimated 43.6 million men and women aged 50 and older currently have osteoporosis or low bone mass, according to the National Osteoporosis Foundation (NOF).2 Women make up 29.6 million of this total; 7.8 million have osteoporosis and 21.8 million have low bone mass. Approximately 1.5 million fragility fractures occur each year, including 700,000 vertebral fractures, 300,000 hip fractures, and 200,000 fractures of the distal forearm.3 Because of increasing life expectancy and rising age-specific fracture rates, the prevalence of both osteoporosis and fracture is expected to soar over the next several decades. For example, by 2020 the prevalence of osteoporosis and low bone mass in the United States will rise by 41%, to 61.4 million, according to NOF estimates.2 The incidence of hip fracture worldwide is expected to double in women and triple in men by 2035.4 These escalating numbers highlight the importance of addressing osteoporosis and fracture risk in postmenopausal patients, especially when you consider the personal, societal, and economic consequences of fracture. Nearly two thirds of women who fracture their hips sustain permanent losses in functional ability and, as noted earlier, approximately 20% die within the next year.5,6 Long-term effects on quality of life may also follow other osteoporotic fractures.7 In 1995 alone, $13.8 billion was spent on medical costs of osteoporotic fractures, of which non-hip fractures accounted for $5.1 billion (36.9%).8 Asymptomatic until fracture occurs, osteoporosis has a preclinical period that may last for decades. Although risk factor screening alone is insufficient to diagnose osteoporosis, bone mineral density (BMD) testing can readily do so.
Bone strength also depends on the rate of bone turnover (i.e., the rates of bone resorption and of new bone formation), macro- and microarchitecture of bone, accumulated microdamage within bone, and the extent of mineralization of bone.1 However, these parameters either cannot be measured noninvasively at present or have not yet been shown to predict BMD or fracture well. Several simple screening instruments can help determine which patients to refer for BMD testing. Among these are the Simple Calculated Osteoporosis Risk Estimation (SCORE), the Osteoporosis Risk Assessment Instrument (ORAI), and the Osteoporosis Self-assessment Test (OST).9-11 Each instrument has demonstrated good sensitivity (correctly identifying patients whose BMD will be found to be low when tested) and moderate specificity (correctly identifying people with normal BMD). Use of such a screening test can, therefore, help to identify women who do not need BMD testing, as well as those who constitute a high-risk group. Because substantial bone loss occurs during the initial postmenopausal years and age-related losses continue thereafter, guidelines of both the United States Preventive Services Task Force (USPSTF) and the NOF recommend BMD testing for all women aged 65 or older.12,13 If risk factors are present, testing is recommended for all postmenopausal women over age 50 (NOF) or 60 (USPSTF). Again, use a risk factor screening instrument to help you decide who should have BMD testing. Although BMD at the hip is considered the gold standard, measurements at peripheral sites can also be used for diagnosis and prediction of fracture risk.12-14 Measurements are typically reported as T-scores. The T-score describes the degree to which an individual's BMD differs from the average of the young adult population, in terms of standard deviation (SD) units: a T-score of -1.0 is, therefore, 1 SD below the young adult average. Table 1 shows the diagnostic categories proposed by the World Health Organization (WHO) in the early 1990s.15 In addition, women with low BMD who have had a previous fragility fracture are considered to have "severe" or "established" osteoporosis.

Table 1. WHO diagnostic categories

| Diagnostic category | T-score |
| --- | --- |
| Normal | Above -1.0 |
| Osteopenia | -1.0 to -2.49 |
| Osteoporosis | -2.5 or below |

Z-scores may also be reported; these are also standardized scores, but are more of a peer-group comparison. They compare the BMD of the woman being measured to the average BMD of women her own age. Z-scores are generally less useful than T-scores for diagnosing subjects at risk because, as previously mentioned, BMD declines progressively throughout life, and because fracture risk seems to be more closely related to the absolute amount of bone present. Many clinicians, however, use the Z-score as a basis for evaluating women for secondary factors that may contribute to low bone mass. If a patient's Z-score is lower than -1.5, her bone loss may be due to more than menopause- and age-related changes, and she should have further evaluation to exclude other medical disorders. An assessment of fracture risk guides treatment decisions in osteoporosis.
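For readers who want to see the arithmetic, the short sketch below shows how a T-score and a Z-score are computed from a measured BMD and how the WHO categories of Table 1 follow from the T-score. The reference means and SDs in the example are invented for illustration; real densitometers use manufacturer- and site-specific reference databases.

```python
# How the T-score and Z-score described above are computed, and how the WHO
# categories in Table 1 map onto the T-score. The reference values in the
# example are made-up numbers for illustration only.

def t_score(bmd, young_adult_mean, young_adult_sd):
    """SD units relative to the young adult reference population."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """SD units relative to women of the patient's own age."""
    return (bmd - age_matched_mean) / age_matched_sd

def who_category(t):
    """WHO diagnostic category, using the boundaries of Table 1."""
    if t > -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Example: a femoral neck BMD of 0.70 g/cm2 against illustrative references.
t = t_score(bmd=0.70, young_adult_mean=0.85, young_adult_sd=0.12)
z = z_score(bmd=0.70, age_matched_mean=0.74, age_matched_sd=0.11)
print(round(t, 2), who_category(t))   # -1.25 osteopenia
print(round(z, 2))                    # -0.36, above the -1.5 workup threshold
```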
BMD explains about 60% to 80% of variation in fracture risk and predicts fracture better than hypertension predicts stroke or hypercholesterolemia predicts cardiovascular events.13 Regardless of the site at which BMD is measured, overall skeletal fracture risk approximately doubles for each SD decrease below young adult normal.14, 16 However, measurement at the hip may best predict hip fractures.17 Assessment of clinical risk factors for fracture also enters into treatment decisions. The first four of the risk factors listed in the NOF guidelines (Table 2) are the most important.13 A personal history of fracture has the greatest effect on fracture risk. Independently of BMD and regardless of previous fracture site, future fracture is about twice as likely in women with such a history.18 NOF guidelines recommend treating all women who have osteoporosis and some women who have osteopenia. According to the guidelines, all women whose T-scores are <-2.0 are candidates for treatment, as are women whose T-scores are <-1.5, if any of the above-listed risk factors are present.13 These recommendations highlight the important distinction to be made between diagnostic BMD thresholds (WHO guidelines) and fracture risk/treatment thresholds and emphasize the additive effect of risk factors. Osteopenia, as defined by the WHO, describes BMD values between 1 and 2.5 SD below the average of the young adult normal population. According to this definition, approximately 16% of the young adult normal population will have T-scores consistent with either osteopenia (15.4%) or osteoporosis (0.6%). In healthy younger women, these low values typically reflect lower than average peak bone mass rather than bone loss of the type associated with menopause. Because the distribution of average BMD values shifts lower with advancing age, progressively larger segments of older groups have low T-scores, so that approximately 40% (~13 million) of all women over age 50 have osteopenia and another 20% (~7 million) have osteoporosis.19 This represents a very large population at risk of fracture who deserve evaluation and treatment. The association between BMD and fractures has no threshold. In other words, the risk of fragility fracture increases progressively as BMD declines and no level of BMD is associated with a complete absence of risk. One would, therefore, infer that osteopenia is associated with a greater risk of fracture than with normal BMD. This was, in fact, recently reported in both the Study of Osteoporotic Fractures (SOF) and the National Osteoporosis Risk Assessment (NORA) cohorts.14,20 Using classification based on hip or spine measurements or both, SOF investigators reported that 74%, 60%, and 53%, respectively, of all nonvertebral fractures reported during 10 years of follow-up occurred in women whose T-scores were > -2.5. One could argue that a good proportion of this elderly female population would have moved from osteopenia to osteoporosis during those 10 years and that, in fact, women with full-blown osteoporosis were the ones having the fractures. In NORA, however, within approximately 1 year after BMD measurement at peripheral skeletal sites (forearm, heel, or finger), the fracture rates of women who had been identified as osteopenic were 1.8 times those seen in women with normal BMD. (The fracture rates of women with osteoporosis were four times higher than those of women with normal BMD.) 
Even after adjusting for multiple risk factors, the osteopenic women were 70% more likely to sustain fragility fractures of the hip, spine, rib, wrist, or forearm. In fact, women with T-scores between -1.0 and -2.5, who comprised 40% of the study population, experienced more than half of the 2,258 fractures reported during follow-up. Comprehensive management of osteoporosis includes recommendations for modifying behavioral risk factors, getting adequate weight-bearing exercise, and taking in adequate calcium (1,200 to 1,500 mg/day) and vitamin D (400 to 800 IU/day), as well as a drug prescription. Each of the drugs approved for managing osteoporosis (estrogen, raloxifene, the bisphosphonates alendronate and risedronate, calcitonin, and teriparatide) has shown at least some efficacy in reducing fractures.21-25 However, because the clinical trials were conducted in high-risk groups (women with very low BMD and/or prevalent fragility fractures) in order to more readily demonstrate fracture reduction, we have little direct evidence from clinical trials that any of these agents will reduce the risk of fracture in all women with osteopenia. It is of considerable interest, in this context, to note that the WHI study showed a reduction in the risk of clinical fractures and hip fractures in a population not selected for having osteoporosis.21 This suggests that estrogen will reduce fractures in a lower-risk population. Moreover, researchers have reported stabilization or increased BMD with estrogen or estrogen/progestin, raloxifene, alendronate, and risedronate in early postmenopausal women who do not have osteoporosis.26-29 (Calcitonin is indicated only in women who are at least 5 years postmenopausal and who have osteoporosis.) Although BMD correlates well with fracture risk in osteoporotic populations, that relationship has not been established in randomized interventional trials. This association, therefore, must be considered uncertain, even though guidelines advise therapy for some women whose BMD is in the osteopenic range. It is intuitively appealing to treat women with scores in that range to prevent bone loss, given evidence suggesting that the microarchitectural damage to bone caused by bone loss may compromise bone strength and may be irreversible with currently available antiresorptive treatments.30 Especially damaging is the disruption of trabecular connectivity (an important cause of bone fragility).30 Certainly we can recommend to all women general measures like modifying lifestyle behaviors (stopping smoking) and using calcium and vitamin D supplements to minimize bone loss. But these measures, in and of themselves, are insufficient to completely protect the skeleton, especially in the several years immediately following menopause. NOF recommendations for pharmacologic treatment were derived from a cost-benefit analysis, which in addition to considering the absolute risk of fracture, also takes into account the costs of treatment (and available resources) and risks associated with it. This analysis implicitly assumed that the effects of treatments on fracture incidence could be generalized beyond populations studied in clinical trials. We also don't know how long to treat osteopenic women to reduce fractures. NOF clinical guidelines do recommend pharmacologic agents for some of the osteopenic population. Specifically, they recommend treatment for women with T-scores below -2 and also between -1.5 and -2 when there are additional risk factors for fracture.
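The NOF thresholds just described amount to a simple decision rule, sketched below purely as an illustration of the published guideline logic; it omits the prior-fracture criterion discussed earlier and is not clinical advice.

```python
# The NOF treatment thresholds summarised above, expressed as decision logic:
# drug therapy is considered for T-scores below -2.0, or below -1.5 when
# additional risk factors are present. Illustration only, not clinical advice.

def nof_treatment_candidate(t_score, additional_risk_factors):
    """True if the NOF T-score thresholds would make drug therapy an option."""
    if t_score < -2.0:
        return True
    return t_score < -1.5 and additional_risk_factors

# Examples: the same osteopenic T-score with and without added risk factors.
print(nof_treatment_candidate(-1.8, additional_risk_factors=False))  # False
print(nof_treatment_candidate(-1.8, additional_risk_factors=True))   # True
```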
True consensus does not yet exist, however, for a treatment "threshold," largely because evidence to establish one is inadequate. Future research may more precisely identify women with osteopenia who are at particular risk of fracture, sparing large numbers of women who will not fracture in the near future from undergoing unnecessary drug treatment. At present, the NOF recommendations appear to provide reasonable guidance to the clinician. Osteoporosis is a common disease among postmenopausal women, and its clinical consequences (fragility fractures) are associated with substantial morbidity, mortality, and cost. We can easily and noninvasively diagnose osteoporosis through BMD testing, and can use clinical risk-factor algorithms to help identify women who are most likely to need BMD measurement. Comprehensive management of the disease is directed toward decreasing the risk of fracture, and includes modification of behavioral risk factors, adequate exercise, appropriate nutrition (including calcium and vitamin D), and medication. Several drugs have been shown to reduce the risk of fracture in women with osteoporosis. The action of these drugs is largely mediated through increases in BMD and decreases in rates of bone turnover. The risk of fragility fracture increases progressively as BMD declines, so women with osteopenia are at greater risk of fracture than are women with normal BMD. Although current treatment guidelines do not recommend drug therapy in all women with osteopenia, ob/gyns should consider treatment, in accordance with NOF guidelines, for two groups of women with osteopenia: those in whom additional risk factors are present and, perhaps, those who are losing bone rapidly during the first few years after menopause. Dr. Wehren is on the Gynecology Advisory Board and Speakers' Bureau of Merck & Company, Inc. Dr. Siris is a paid consultant to Merck, Procter & Gamble, and Eli Lilly. REFERENCES 1. NIH Consensus Development Panel on Osteoporosis Prevention, Diagnosis, and Therapy. Osteoporosis prevention, diagnosis, and therapy. JAMA. 2001;285:785-795. 2. National Osteoporosis Foundation. America's Bone Health: The State of Osteoporosis and Low Bone Mass in Our Nation. Washington, D.C.: National Osteoporosis Foundation; 2002. 3. Riggs BL, Melton LJ 3rd. The worldwide problem of osteoporosis: insights afforded by epidemiology. Bone. 1995;17(5 suppl):505S-11S. 4. Gullberg B, Johnell O, Kanis JA. World-wide projections for hip fracture. Osteoporos Int. 1997;7:407-413. 5. US Congress, Office of Technology Assessment. Hip Fracture Outcomes in People Age 50 and Over - Background Paper. OTA-BP-H-120. Washington, DC: U.S. Government Printing Office; 1994. 6. Cummings SR, Kelsey JL, Nevitt MC, et al. Epidemiology of osteoporosis and osteoporotic fractures. Epidemiol Rev. 1985;7:178-208. 7. Wehren LE, Barrett-Connor E. Quality of life in postmenopausal women with a history of fracture of the rib, spine, wrist, or hip: evidence from NORA. J Bone Miner Res. 2002;17(suppl 1):S359. 8. Ray NF, Chan JK, Thamer M, et al. Medical expenditures for the treatment of osteoporotic fractures in the United States in 1995: report from the National Osteoporosis Foundation. J Bone Miner Res. 1997;12:24-35. 9. Lydick E, Cook K, Turpin J, et al. Development and validation of a simple questionnaire to facilitate identification of women likely to have low bone density. Am J Manag Care. 1998;4:37-48. 10. Cadarette SM, Jaglal SB, Kreiger N, et al.
Development and validation of the Osteoporosis Risk Assessment Instrument to facilitate selection of women for bone densitometry. CMAJ. 2000;162:1289-1294. 11. Geusens P, Hochberg MC, van der Voort DJ, Siris E, et al. Performance of risk indices for identifying low bone density in postmenopausal women. Mayo Clin Proc. 2002;77:629-637. 12. Screening for postmenopausal osteoporosis. File Inventory, Systematic Evidence Review Number 17. Rockville, Md: Agency for Healthcare Research and Quality; 2002. Available at: http://www.ahcpr.gov/clinic/prev/ostscinv.htm . 13. Osteoporosis: review of the evidence for prevention, diagnosis, and treatment and cost-effectiveness analysis. Introduction. Osteoporos Int. 1998;8(suppl 4):S7-S80. 14. Siris ES, Miller PD, Barrett-Connor E, et al. Identification and fracture outcomes of undiagnosed low bone mineral density in postmenopausal women: results from the National Osteoporosis Risk Assessment. JAMA. 2001;286:2815-2822. 15. Assessment of fracture risk and its application to screening for postmenopausal osteoporosis. WHO Technical Report No. 843. Geneva, Switzerland: World Health Organization, 1994. 16. Black DM, Cummings SR, Genant HK, et al. Axial and appendicular bone density predict fractures in older women. J Bone Miner Res. 1992;7:633-638. 17. Cummings SR, Black DM, Nevitt MC, et al. Bone density at various sites for prediction of hip fractures. The Study of Osteoporotic Fractures Research Group. Lancet. 1993;341:72-75. 18. Klotzbuecher CM, Ross PD, Landsman PB, et al. Patients with prior fractures have an increased risk of future fractures: a summary of the literature and statistical synthesis. J Bone Miner Res. 2000;15:721-739. 19. Looker AC, Johnston CC Jr, Wahner HW, et al. Prevalence of low femoral bone density in older U.S. women from NHANES III. J Bone Miner Res. 1995;10:796-802. 20. Wainwright SA, Phipps KR, Stone JV, et al. A large proportion of fractures in postmenopausal women occur with baseline bone mineral density T-score >-2.5. J Bone Miner Res. 2001;16(suppl 1):S155. 21. Writing Group for the Women's Health Initiative Investigators. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results From the Women's Health Initiative randomized controlled trial. JAMA. 2002;288:321-333. 22. Ettinger B, Black DM, Mitlak BH, et al. Reduction of vertebral fracture risk in postmenopausal women with osteoporosis treated with raloxifene: results from a 3-year randomized clinical trial. Multiple Outcomes of Raloxifene Evaluation (MORE) Investigators. JAMA. 1999;282:637-645. 23. Black DM, Thompson DE, Bauer DC, et al. Fracture risk reduction with alendronate in women with osteoporosis: the Fracture Intervention Trial. FIT Research Group. J Clin Endocrinol Metab. 2000;85:4118-4124. 24. Harris ST, Watts NB, Genant HK, et al. Effects of risedronate treatment on vertebral and nonvertebral fractures in women with postmenopausal osteoporosis: a randomized controlled trial. Vertebral Efficacy With Risedronate Therapy (VERT) Study Group. JAMA. 1999;282:1344-1352. 25. Chesnut CH 3rd, Silverman S, Andriano K, et al. A randomized trial of nasal spray salmon calcitonin in postmenopausal women with established osteoporosis: the prevent recurrence of osteoporotic fractures study. PROOF Study Group. Am J Med. 2000;109:267-276. 26. Effects of hormone therapy on bone mineral density: results from the postmenopausal estrogen/progestin interventions (PEPI) trial. The Writing Group for the PEPI. JAMA. 1996;276:1389-1396. 27. 
Johnston CC Jr, Bjarnason NH, Cohen FJ, et al. Long-term effects of raloxifene on bone mineral density, bone turnover, and serum lipid levels in early postmenopausal women: three-year data from 2 double-blind, randomized, placebo-controlled trials. Arch Intern Med. 2000;160:3444-3450. 28. Hosking D, Chilvers CE, Christiansen C, et al. Prevention of bone loss with alendronate in postmenopausal women under 60 years of age. Early Postmenopausal Intervention Cohort Study Group. N Engl J Med. 1998;338:485-492. 29. Mortensen L, Charles P, Bekker PJ, et al. Risedronate increases bone mass in an early postmenopausal population: two years of treatment plus one year of follow-up. J Clin Endocrinol Metab. 1998;83:396-402. 30. Marcus R, Majumdar S. The nature of osteoporosis. In: Marcus R, Feldman D, Kelsey JL, eds. Osteoporosis. 2nd ed. San Diego, Calif: Academic Press; 2001:3-17. Lois Wehren, Ethel Siris. Cover Story: Managing bone loss. Contemporary Ob/Gyn Jul. 1, 2003;48:42-52.
https://www.contemporaryobgyn.net/view/cover-story-managing-bone-loss
Before you start, to make your cocoa bombs, put a large plate into the freezer. Leave it in the freezer long enough to become completely cold, and do not take it out until step four of the process. You need the plate to be good and cold at that stage. Step One Drop your chocolate wafers into a microwave-safe bowl and heat in the microwave for 30 seconds at a time until they are completely melted. Stir the chocolate wafers well between each heating session. Step Two Using a spoon or a large pastry brush, coat the inside of the silicone molds with melted chocolate. Place the coated mold in the freezer for five minutes, then apply a second coat. Put the mold back into the freezer for another five minutes. Step Three Take your cold plate and chocolate-coated mold out of the freezer. Push the half spheres out of the mold and put them onto the chilled plate. Step Four Heat the small plate in the microwave for two minutes. Cocoa Bombs: Safety Warning The plate may be very hot when you take it out of the microwave. Use oven gloves or a potholder to ensure you do not burn your hands. Step Five Take one chocolate cocoa bomb half-sphere and place it upside down on the heated plate. Move the half-sphere around to melt the rim slightly. Do this until you have a smooth, flat, even edge to your half-sphere. Step Six Add one tablespoon of cocoa mix to the half-sphere, then drop in a few of the mini marshmallows. Step Seven Place another chocolate half-sphere upside down on the plate. Move it around and melt the rim as you did with the first one. Step Eight Stick the second half-sphere on top of the first and gently but firmly hold them together to seal in the cocoa mix and marshmallows. Cocoa Bombs: Top Tip The half-spheres should stick together without issue. However, if you do have a problem, rub the empty half-sphere against the hot plate once more. If there is a small hole or gap between the two halves, but they have kind of stuck together, don't panic. You can leave a metal spoon to sit and heat up in hot water, then use it to smooth the gap. Alternatively, paint a little melted chocolate onto the joint of your cocoa bomb. Step Nine Sit the filled spheres back into the mold; this will make the next step easier. Take the remaining chocolate wafers and reheat them until liquid. Pour the melted chocolate into your pastry bag or a sandwich-sized Ziploc bag. Step Ten Use the pastry bag to drizzle the melted chocolate across the top of your cocoa bomb in a zig-zag motion. If you are using a Ziploc bag, cut a small piece off the corner of the bag and drizzle the chocolate out through that. Step Eleven Add your butterfly sprinkles to the top, but be quick or the chocolate will begin to firm up before the sprinkles can stick. Step Twelve Place the cocoa bomb in a mug and pour 3/4 of a cup of hot milk over the top. Stir the cocoa into your milk and enjoy.
https://redheadedpatti.com/how-to-make-delicious-butterfly-sprinkles-cocoa-bombs/
By Miriam Raftery Update: This bill passed the Labor and Employment committee 7-0 and next heads to the Assembly Appropriations Committee. May 18, 2020 (San Diego) – A bill to provide relief for many freelance writers, videographers and musicians negatively impacted by California's sweeping new labor law, Assembly Bill 5, has been introduced by Assemblymember Lorena Gonzalez (D-San Diego). Assembly Bill 2257, a measure to exempt many in these occupations from requirements of AB 5, is slated for hearing May 20 in the Assembly Labor and Employment Committee. Letters can be sent to committee members at https://calegislation.lc.ca.gov/Advocates/. "Lorena Gonzalez to her credit is presenting AB 2257 as an urgency measure. It includes language that writers and musicians agreed to," Fred Topel, co-leader of California Freelance Writers United, told East County Magazine. He adds that the majority of his organization's members have lost work due to AB 5, with many employers unwilling to risk criminal penalties and fines for violating the complex requirements of the new law. The group has provided a letter template that can be used to contact legislators. If AB 2257 passes it could take effect immediately – but urgency measures require a two-thirds vote. That's a steep hurdle, since some legislators are self-quarantining at home. Others hope to repeal the measure or pass other reforms, so it's far from certain that the measure will pass without public input, according to Topel. For many freelance writers and musicians, 2020 has been devastating. First, AB 5 took effect in January, causing many gig workers to lose substantial income. Then came the COVID-19 pandemic and shutdown. The combined impacts have been catastrophic. AB 5 was intended to prevent exploitation of gig workers and provide clarification after the California Supreme Court's Dynamex ruling redefining employment. AB 5 required that many gig workers be treated as employees – an action that resulted in widespread loss of work for many freelancers and independent contractors. What the bill does for writers, photographers and videographers: - Eliminates the cap of 35 submissions for freelance journalists, who would no longer have to become employees, form their own companies, or lose work after 35 submissions, provided that other requirements are met, such as using their own equipment and working offsite. An employer cannot fire an employee and replace the employee with a freelancer, however. - Freelance videographers as well as photographers will be able to sell their works without the mandates imposed by AB 5, in most instances, though freelance videos in motion pictures will still be prohibited. What the bill does for musicians: - Allows collaboration and contracting among these professionals to produce sound recordings and musical compositions without applying the "ABC test" used to determine employment status: musicians, vocalists, other recording artists, composers, songwriters, lyricists, musical engineers, sound mixers, record producers, and others involved in creating, marketing, promoting or distributing sound recordings or musical compositions. - Allows musical groups to provide live performances and most independent musicians to collaborate with each other in live performances under the less strict, pre-Dynamex test to determine employment.
- However, AB 5 would still apply to musical groups regularly performing at theme parks, symphony musicians, musicians performing in a musical theater production or as part of a tour of live performances, and to instances where an employer significantly controls and directs the nature of a musician's work. The measure would also provide retroactive protections. "This can get two industries back to work immediately, which is vital during the pandemic, and it can be in concert with other fixes," Topel says. If AB 2257 passes as an urgency measure in both houses of the Legislature and is signed into law by the Governor, it would take effect immediately, potentially as early as mid or late summer. If it does not pass, Gonzalez plans to add similar language protecting musicians to a separate bill, AB 1850, which is not an urgency measure and could pass with a simple majority; however, it would not take effect until next year.
https://www.eastcountymagazine.org/bill-exempt-many-writers-videographers-and-musicians-ab-5-set-hearing-may-20-sacramento
PURPOSE: Responsible for implementing solutions in a cost-effective way by determining business and functional requirements, and communicating them clearly to stakeholders, facilitators and partners. Use requirements to drive the design or review of test cases, process change requests, and manage a project's scope, acceptance, installation and deployment. KEY PERFORMANCE AREAS: 1. Analyze the structure of a business and how it uses technology; 2. Identify problems within a business by using data modelling techniques; 3. Communicate with senior personnel to establish expected outcomes / objectives; 4. Formulate ways for businesses to improve, based on research; 5. Persuade internal and external stakeholders of the benefits of new technology or strategies; 6. Oversee the implementation of new technology and systems and run training workshops; 7. Implement advanced strategies for gathering, reviewing and analyzing data requirements; 8. Prioritize requirements and create conceptual prototypes and mock-ups; 9. Master strategic business process modeling, traceability and quality management techniques; 10. Apply best practices for effective communication and problem-solving; 11. Create a detailed business analysis, outlining problems, opportunities and solutions for a business. Personal Attributes Understanding of systems engineering concepts; The ability to conduct cost/benefit analysis; Business case development; Modeling techniques and methods; Strong communication/interpersonal skills, including technical writing skills; Proven analytical background; Time management and problem-solving skills; Leadership and management skills. Qualifications & Skills Bachelor's Degree in Business Studies / Information Technology / Economics or related fields. Computer literate & fully proficient in Excel, PowerPoint, Outlook and Word. Experience & Knowledge Min 3 years' experience as a Business Analyst; Business Change Life Cycle; Experience working with senior decision makers; Quality Management; Risk Management.
http://freerecruit.co.za/job/vhg-hr-and-payroll-consulting-rivonia-contract-business-analyst-test-analyst/
Closed engagement: this engagement ran from February 8 to June 15, 2021. Topics Focusing Canada's blue economy on growth and prosperity for all Canada's blue economy has real potential to generate new opportunities for sustainable growth and prosperity. We also have an opportunity to "build back better" as our ocean industries work to recover from the COVID–19 pandemic. This requires a blue economy strategy that supports and advances the economic well-being of as many Canadians as possible, especially Indigenous peoples and other underrepresented groups. This part of the engagement paper focuses on three key ways that our blue economy strategy can foster more prosperity and inclusion in Canada's ocean sectors: - Advancing the participation of Indigenous peoples - Developing the necessary labour force and skills - Identifying barriers to inclusive growth Each section highlights major challenges, suggests ways that a blue economy strategy could help address these challenges, and poses questions. Your feedback and ideas will help guide the development of this Strategy. Advancing the participation of Indigenous peoples In recent years, the Government of Canada has worked to renew and rebuild its relationship with Indigenous peoples based on the recognition of rights, respect, co-operation, and partnership. It has also undertaken work to advance reconciliation and make much-needed investments to close the socio-economic gaps that First Nations, Inuit, and Métis peoples face. Indigenous peoples understand the needs of their communities best and, therefore, the Government has been working closely with Indigenous communities to advance their self-government, their self-determination, and their ongoing work of nation rebuilding. To that end, the Government recently proposed new legislation to implement the United Nations Declaration on the Rights of Indigenous Peoples. The legislation is a road map for advancing lasting reconciliation with Indigenous peoples and for the steps that must be taken to respect, recognize, and protect their human rights and to address the wrongs of the past. Oceans, fisheries, aquatic ecosystems, and marine waterways are integral to Indigenous rights, cultures, ways of life, and heritages. Marine resources and ecosystems also hold significant social, ceremonial, and economic importance to Indigenous peoples across Canada. The Government is working to increase the participation of Indigenous peoples in the blue economy through non-treaty agreements and other program agreements. We are also providing direct support through dedicated commercial fisheries programs that have enabled Indigenous communities to increase the potential of their commercial fishing enterprises and strengthen community economic self-sufficiency. At the same time, the Government and Indigenous communities are collaborating to help restore coastal habitats, protect marine environments, and improve marine safety. This includes collaboration through scientific and technical watershed-based aquatic resource and oceans management departments that receive direct support from the Government. Indigenous communities are valued partners in the growth of Canada's blue economy and our efforts to protect and conserve ocean spaces. Canada's Blue Economy Strategy has the potential to increase and diversify ocean-related economic opportunities for coastal Indigenous communities that are aligned with their values, priorities and aspirations.
The co-development of a blue economy strategy is therefore an important opportunity to further reconciliation between Canada and Indigenous peoples. As the economy reopens from the global pandemic, a blue economy strategy could further initiatives that are fostering job creation and sustainable economic opportunities along with a new conservation economy in Indigenous communities. This includes initiatives to build capacity and to facilitate ownership and market entry to help increase the participation of Indigenous peoples and communities in ocean sectors. Challenges - Access to training programs and capacity development to acquire knowledge and develop skills to expand on existing, and to leverage new, opportunities in ocean sectors can be limited in Indigenous communities. - Limited private sector investments in Indigenous communities. - Limited understanding by non-Indigenous investors, lenders, and potential project partners of Indigenous economies, business opportunities, culture, and political structures. A blue economy strategy could - Facilitate the development of new ocean-based business opportunities, including the conservation economy, to diversify the economies and employment opportunities of Indigenous communities along the Arctic, Atlantic and Pacific oceans. - Contribute to the adoption of Indigenous Knowledge into planning and decision-making to help achieve equitable, sustainable, and inclusive ocean economy sectors. - Continue to support the development of partnerships with First Nation, Inuit, and Métis organizations and governments to build local marine safety and response capacity. - Explore opportunities to support Indigenous entrepreneurship and increased participation of Indigenous peoples and communities in ocean-based sectors to help close the socio-economic gap between Indigenous and non-Indigenous coastal communities. - Advance collaboration with Indigenous communities to continue to identify and support effective training models to equip Indigenous people with the skills and knowledge required to succeed in a modern blue economy. - Explore opportunities to strengthen Indigenous blue economy procurement capacity. Discussion questions How can Indigenous peoples best be supported to increase their participation in the ocean economy? How can the Government collaborate with Indigenous communities to enable them to achieve their ocean-related economic development plans? What are the main barriers and challenges to increasing private sector ocean-related investments in Indigenous communities and for Indigenous communities and businesses to invest in ocean sectors? Do initiatives, such as the Indigenous Career Pivot Project organized by the Ocean Supercluster, help meet the needs of Indigenous communities to increase their participation in the blue economy? Are there best practices, pilot studies or programs related to Indigenous training and roles in the marine sector that could help facilitate the concept of an Indigenous conservation economy and inform future actions in this area? Developing the necessary labour force and skills The blue economy provides rewarding jobs and career opportunities for many Canadians. In 2016, employment in Canada’s ocean sectors accounted for just under 300,000 jobs (1.6% of our national employment total) and that number is poised to grow. Diversity in Canada’s blue economy depends on a range of skills and labour force talent. 
Traditional sectors, such as fishing and seafood processing, may rely more on skills such as machine operation and physical abilities, while knowledge-intensive sectors may require more Science, Technology, Engineering and Mathematics (STEM)-related competencies. As ocean sectors evolve, the type and mix of skills will change to match trends, such as increased digitalization and automation, the emergence of newer and more knowledge-intensive sectors (e.g. marine renewables), and a growing focus on sustainability. The Government is already working with the private sector to ensure that our blue economy sectors have access to a well-educated and skilled labour force. The National Shipbuilding Strategy Value Proposition, for example, requires Canada's shipyards to invest in the domestic marine industry, which may be directed to human resource development. This could enable the sector to have access to a pool of qualified personnel. Canada's blue economy will continue to need skilled and talented workers to grow and realize its full potential. Many ocean sector companies already struggle to recruit and retain skilled employees. The approach to supplying labour and providing skills development to Canadians and underrepresented people will be an essential component of growing a prosperous and competitive blue economy in Canada. Challenges - An aging population in some sectors (e.g. fishing) and demographic shifts (e.g. growing Indigenous population) in coastal communities where people may not be drawn to ocean industries are key considerations for the future workforce of these sectors. - Existing and anticipated labour shortages (e.g. in marine transport, processing, ocean technology) hinder companies from making long-term decisions and larger-scale investments. - Lack of awareness of the opportunities in, as well as educational pathways to, Canada's ocean sectors. - Cultivating the right types of skills for Canada's growing blue economy. - Helping lower-skilled workers in traditional sectors (e.g. seafood processing) adapt to new workplace realities. A blue economy strategy could - Work with provinces, territories, Indigenous governments and organizations, and industry to identify skills gaps hindering growth, as well as professional training and re-training opportunities. - Co-operate with provinces, territories, Indigenous communities, industry, and post-secondary institutions to increase awareness of the opportunities in Canada's ocean sectors among youth and to identify and address recruitment and retention barriers. - Explore opportunities for a more targeted immigration approach to help alleviate some skills and talent gaps in Canada's blue economy. - Support the development of a skilled workforce capable of responding and adapting to the ongoing challenges related to digitalization and automation in ocean industries in order to further competitiveness. - In co-operation with provinces, territories, Indigenous organizations and industry, explore the means to advance the transfer of knowledge and skills between various ocean industries, e.g. from offshore oil and gas to aquaculture and renewable energy. Discussion questions What actions could be taken to grow talent and develop the skills needed for ocean sectors? How can good quality employment opportunities in our blue economy be better promoted? How can underrepresented groups (e.g.
women, Indigenous peoples, visible minorities, youth, and persons with disabilities) be encouraged and helped to develop ocean-related skills? How can the Government and private industry work with and support traditionally underrepresented groups to eliminate barriers to effective recruitment and retention in ocean sectors? If you work in an ocean sector, what do you see as the current and pending skills and labour gaps? Identifying barriers to inclusive growth Ocean resources have vast potential to unlock economic growth, reduce poverty, and improve lives and livelihoods, particularly for women, youth, Indigenous peoples, visible minorities, and persons with disabilities. However, access to ocean resources and the distribution of their benefits are often inequitable. The COVID–19 pandemic has magnified inequities of gender, race, Indigeneity, physical ability, geographic location, rurality, age, and socio-economic status. Any focus on economic growth must include a conscious effort to make our growing sectors more accessible to vulnerable groups. Addressing inequities in ocean industries is an essential part of building a resilient and sustainable blue economy. An intersectional Gender-Based Analysis Plus (GBA+) lens will be applied to the development of initiatives that fall under the blue economy strategy in order to anticipate potential impacts on diverse groups of Canadians. By identifying issues early, the blue economy strategy will be positioned to help mitigate inequalities and promote equity in the ocean sector based on the issues brought forward. There are many examples of Government and industry action on inclusive growth. The Atlantic Canada Opportunities Agency, for instance, has partnered with UGent and the Flanders Blue Ocean Cluster to immerse selected female graduate students in Atlantic Canada in leading-edge ocean economy innovations. Ocean Networks Canada has partnered with many coastal Indigenous communities in British Columbia to initiate community-based monitoring, school programs, youth training, and engagement so that each community can advance its ocean and marine environment programs. More recently, Canada's Ocean Supercluster supported the launch of the Blue Future Pathways project to connect youth across Canada with education and employment opportunities, and to support participants in developing a successful career in the blue economy. Furthering the work of existing blue economy initiatives could help improve the livelihoods of people in underrepresented groups and enable ocean sectors to grow. Over the longer term, Canada's blue economy must be more resilient and responsive to future economic uncertainties and impacts, especially those that affect the most vulnerable groups who rely on Canada's oceans and their benefits. Challenges - Ocean sectors are often perceived as male dominated, presenting a challenge for women who are interested in pursuing opportunities in those industries. - Youth are often not aware of good employment opportunities available in ocean industries and do not look to build their career in ocean sectors. - Each ocean sector faces its own inclusivity and diversity issues and there are no "one size fits all" solutions. A blue economy strategy could - Help identify and address inequalities associated with ocean sectors. - Identify actions that could address barriers preventing underrepresented groups from participating in the blue economy and ensure equitable distribution of benefits from the ocean.
- Support Indigenous governments, organizations and communities in identifying and leading the development of initiatives that could increase Indigenous participation in the blue economy. - Identify actions to improve equity, inclusiveness, and diversity in Canada's blue economy. Discussion questions What barriers prevent some groups from fully participating in ocean sectors? In which areas can strategic action be taken to support and improve equity, inclusiveness, and diversity in the blue economy? What steps can the Government take to ensure that benefits from the blue economy strategy are equitably distributed? How can the proportion of underrepresented groups in managerial and decision-making roles in ocean sectors be increased?
https://www.dfo-mpo.gc.ca/about-notre-sujet/blue-economy-economie-bleue/engagement-paper-document-mobilisation/part2-eng.html
Tess Czerski, a Dutch undergraduate student and intern with the RACI Project in Bali, Indonesia, says local entrepreneurs in developing countries are harnessing sustainable practices at an astonishing rate, leaving far behind Western enterprises, which are paralyzed by the current economic situation. By Tess Czerski Walking through Ubud feels like walking through an idealist's paradise. Most cafés and shops in this artistic town in the middle of Bali have integrated sustainable initiatives into their business plans through offering countless vegetarian choices in the warungs and selling furniture and other products made from fast-growing bamboo. This interest in sustaining the local environment and communities not only attracts eco-tourists and NGOs, but also local Balinese business owners and entrepreneurs, who have noticed that a passion for green, clean and community helps their sales. The involvement in sustainability has even spread to surrounding villages. For instance, in Ubud's neighbouring village Penestanan, the coffee shop Kopi Desa buys its coffee from a nearby farmer and roasts it every morning in the shop itself. Although the costs to roast the coffee on the spot are higher than buying it already roasted, the practice is continued because it's what the consumer wants. Who started this sustainable movement here in Ubud and Penestanan – locals or tourists? This might be too difficult to answer, but interestingly, this movement is still going strong in a developing country, whereas the West is still struggling to maintain its sustainable agenda. In the face of the system's collapse, Europe in particular is prioritizing quick economic growth and governmental stability over sustainable goals. European leaders are in a daily debate on how to save the euro zone rather than the environment. In the USA, the early presidential campaigns of Obama and his challenger Romney promise their voters economic solutions to the financial crisis, not sustainable ones. In dealing with these crises, the implementation of sustainability and climate treaties is neglected, as the expense of sustainability would counteract measures against the crisis. This month's Rio de Janeiro+20, reflecting on the 1992 conference in the hope of reaching new agreements on climate change and poverty reduction, is feared to follow Copenhagen's failure. Western countries have shoved the slow-cooking issue of "sustainability" off their plates in favour of fast-food issues. Other up-and-coming countries such as India and China keep on eating fossil fuels at the expense of traditional communities and unique environments to get what the West is losing: wealth and economic growth. Although sustainability remains a hot topic in the West for (usually well-off) consumers, many local businesses also seem affected by a lack of innovative initiatives. At the bagel shop where I worked in Amsterdam, my boss, seeing customer numbers and profits decrease, raised prices and avoided organic products, which are usually more expensive than 'regular,' non-sustainable products. At the same time, other inventive entrepreneurs have established some successful, 'hip' businesses, working together with local producers and charities. Hence, business owners in Amsterdam, and across the West, seem to be divided between playing it safe and playing it sustainable. From what I have seen in Amsterdam, it seems that gambling on sustainability is the safest strategy.
The local businesses in both Amsterdam and Ubud appear to have different views on sustainability as a source of income. Sustainability is typically pictured as an intersection of three roads: social justice, leading to social equality and improved livelihoods of communities; environment, aiming for limited use of natural resources and pollution prevention; and economics, which seeks to attain profit and efficiency. Where Amsterdam's businesses have decided to obstruct the roads to social justice and environment in order to develop the economic road, Ubud's entrepreneurs have chosen to take up all the roads and the different vehicles driving them to reach a sustainable and economically successful business. Ubud's small business owners could also have chosen the conservative Western road in managing their businesses, but by meeting their consumers' demand for sustainability, they generate profit. Granted, the traditional Indonesian lifestyle – living according to the seasons and from locally harvested organic foods – allows locals to leave a small ecological footprint. In the Netherlands, however, limited space for organic cultivation of foods prevents prices from matching the cheap buys in Bali. Amsterdam's road, heavily travelled by businesses failing to adopt environmental and social initiatives, can only be a dead end. Losing contact with the people and opportunities around your business means losing contact with your consumers, who are, despite the economic crisis, still aware of today's social and environmental issues. The local businesses in developing countries that are employing sustainability to their advantage are starting to surpass those in the West that are trying to achieve the same sustainable goals. These entrepreneurs have chosen to see sustainability as an opportunity rather than an obstruction to economic success. In order to achieve similar results, local Western business owners should not be afraid to integrate sustainable initiatives from their immediate environment into their businesses. Likewise, world leaders should include social and environmental issues and perspectives in their solutions to relatively short-term economic and political problems in order to tackle long-term challenges at the same time. Driving this multidisciplinary (social, economic and environmental) road of sustainability allows both local businesses and global politics to carry both the load of the economic crisis and the challenge of sustainability safely home. In this way, the sustainable practices of organic farming and community involvement can spread beyond the borders of the culture-rich village of Ubud.
https://business-ethics.com/2012/06/18/1107-will-the-paralyzed-west-be-surpassed-by-the-sustainable-rest/
FIELD Diabetes prediction using glucose measurements and machine learning is described. In one or more implementations, an observation analysis platform includes a machine learning model trained using historical glucose measurements and historical outcome data of a user population to predict a diabetes classification for an individual user. The historical glucose measurements of the user population may be provided by glucose monitoring devices worn by users of the user population, while the historical outcome data includes one or more diagnostic measurements obtained from sources independent of the glucose monitoring devices. Once trained, the machine learning model predicts a diabetes classification for a user based on glucose measurements collected by a wearable glucose monitoring device during an observation period spanning multiple days. The predicted diabetes classification may then be output, such as by generating one or more notifications or user interfaces based on the classification. BACKGROUND Diabetes is a metabolic condition affecting hundreds of millions of people, and it is one of the leading causes of death worldwide. With early detection and proper treatment, however, serious damage to the heart, blood vessels, eyes, kidneys, and nerves due to diabetes can be largely avoided. Conventional tests for diabetes that are accepted by the clinical and regulatory communities include Hemoglobin A1c (HbA1c), Fasting Plasma Glucose (FPG), and 2-Hour Plasma Glucose (2 Hr-PG). Both the FPG and the 2 Hr-PG are part of the Oral Glucose Tolerance Test (OGTT), but the FPG can be tested separately from the OGTT. For the FPG test, a blood sample is taken and the result is used to classify the person as being "normal" (e.g., no diabetes) or as having prediabetes or diabetes. Generally, a person is considered normal if her fasting glucose level is less than 100 mg/dL, whereas the person is classified as having prediabetes if her fasting blood glucose level is between 100 and 125 mg/dL, or as having diabetes if her fasting blood glucose level is 126 mg/dL or greater on two separate tests. After measuring the person's fasting blood glucose for the FPG test, the OGTT then requires the person to drink a sugary liquid to cause a spike in the person's blood glucose level. Many people have difficulty tolerating this sugary drink, particularly women who are pregnant. The person's blood glucose levels are then tested periodically using additional blood samples over the next two hours for the 2 Hr-PG. A blood sugar level that is less than 140 mg/dL is considered "normal", whereas a reading of 200 mg/dL or more two hours after drinking the sugary drink indicates diabetes. A reading between 140 and 199 mg/dL indicates prediabetes. Unlike the FPG and 2 Hr-PG tests of the OGTT, which each measure a person's blood glucose level at a single point in time, the HbA1c test measures the user's average glucose level over the previous two to three months. Rather than directly measuring glucose, however, the HbA1c test measures the percentage of hemoglobin that has glucose attached to it. Note that when glucose builds up in a person's blood, it attaches to hemoglobin, the oxygen-carrying protein in red blood cells. Red blood cells live for approximately two to three months in a person, and thus the HbA1c test shows the average level of glucose in the blood over the previous two to three months. Unlike the FPG and 2 Hr-PG tests, the person does not need to be in a fasted state when the HbA1c test is administered.
However, similar to the FPG and 2 Hr-PG tests, to measure a person's HbA1c level, a blood sample must be taken from the person and used to produce a reading. An HbA1c level of 6.5 percent or higher on two separate tests indicates that the person has diabetes, while an HbA1c level between 5.7 and 6.4 percent generally indicates that the person has prediabetes. An HbA1c level below 5.7 percent is considered normal. Each of these conventional tests administered to screen for, or diagnose, diabetes has a variety of drawbacks, which often lead to improper diagnosis. Conventional diabetes tests are often inaccurate because a given test administered to an individual on different days may result in inconsistent diagnoses due to various external factors which may cause glucose levels to fluctuate, such as sickness, stress, increased exercise, or pregnancy. Even though the HbA1c test measures an average glucose level over the previous two to three months, its results are greatly impacted by the user's blood glucose levels in the weeks leading up to the test. As such, HbA1c test results can be greatly affected by changes in blood properties during the three-month time period, such as those due to pregnancy or illness. Additionally, because the HbA1c test is not a direct measure of blood glucose, such tests may be inaccurate for people with blood conditions such as anemia or an uncommon form of hemoglobin. Additionally, such conventional tests often have poor concordance. In other words, these tests do not necessarily detect diabetes in the same individuals. This lack of consistency between test types may lead to an inaccurate diagnosis or failure to determine a proper treatment plan. For example, a user may have a high fasting glucose but an HbA1c score within the normal range. In such scenarios, different doctors may reach different conclusions regarding whether or not the user has diabetes as well as the type of treatment plan for the user. Finally, there are also a variety of limitations and drawbacks to administering these tests to different people, such as pregnant women. For example, these conventional diabetes tests require the user to visit a doctor's office or lab in order to have a blood sample taken, which can be time-consuming, expensive, and painful for some users. Each of these factors in combination may act to create a psychological barrier preventing users from getting tested for diabetes, thereby diminishing the benefits which may be achieved by early detection. Moreover, many of these conventional tests require the user to be in a fasted state, which can be difficult, or even dangerous, for some users, including women who are pregnant. SUMMARY To overcome these problems, diabetes prediction using glucose measurements and machine learning is leveraged. In one or more implementations, an observation analysis platform includes a machine learning model trained using historical glucose measurements and historical outcome data of a user population to predict a diabetes classification for an individual user. The historical glucose measurements of the user population may be provided by glucose monitoring devices worn by users of the user population, while the historical outcome data includes one or more diagnostic measurements obtained from sources independent of the glucose monitoring devices.
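Because the diagnostic cutoffs above are ultimately the source of the historical outcome data used for training, a minimal Python sketch (not part of the original disclosure) of how those cutoffs might be encoded as outcome labels is shown below; the function names and the policy of taking the most severe available classification are illustrative assumptions.

```python
from typing import Optional

def classify_fpg(fpg_mg_dl: float) -> str:
    """Fasting Plasma Glucose: <100 normal, 100-125 prediabetes, >=126 diabetes."""
    if fpg_mg_dl < 100:
        return "normal"
    if fpg_mg_dl <= 125:
        return "prediabetes"
    return "diabetes"

def classify_2hpg(two_hr_pg_mg_dl: float) -> str:
    """2-Hour Plasma Glucose: <140 normal, 140-199 prediabetes, >=200 diabetes."""
    if two_hr_pg_mg_dl < 140:
        return "normal"
    if two_hr_pg_mg_dl < 200:
        return "prediabetes"
    return "diabetes"

def classify_hba1c(hba1c_percent: float) -> str:
    """HbA1c: <5.7% normal, 5.7-6.4% prediabetes, >=6.5% diabetes."""
    if hba1c_percent < 5.7:
        return "normal"
    if hba1c_percent < 6.5:
        return "prediabetes"
    return "diabetes"

def outcome_label(hba1c: Optional[float] = None,
                  fpg: Optional[float] = None,
                  two_hr_pg: Optional[float] = None) -> str:
    """Combine whichever diagnostic measures are available into one label.

    Taking the most severe classification is an illustrative policy, not one
    prescribed by the text above.
    """
    severity = {"normal": 0, "prediabetes": 1, "diabetes": 2}
    labels = []
    if hba1c is not None:
        labels.append(classify_hba1c(hba1c))
    if fpg is not None:
        labels.append(classify_fpg(fpg))
    if two_hr_pg is not None:
        labels.append(classify_2hpg(two_hr_pg))
    return max(labels, key=severity.__getitem__) if labels else "unknown"
```

The poor concordance discussed above shows up directly in such a helper: a fasting glucose of 110 mg/dL paired with an HbA1c of 5.4 percent produces conflicting per-test labels, which is exactly why some combination policy has to be chosen.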
For instance, the historical outcome data may indicate whether a respective user of the user population is clinically diagnosed with diabetes or not based on one or more diagnostic measures, such as HbA1c, FPG, or 2 Hr-PG. Once trained, the machine learning model predicts a diabetes classification for a user based on glucose measurements collected by a wearable glucose monitoring device during an observation period spanning multiple days. In particular, the machine learning model generates this prediction based on the training with the historical glucose measurements and historical outcome data of the user population. The diabetes classification may describe a state of the user during the observation period (e.g., as having one of diabetes, prediabetes, or no diabetes) or whether the user is predicted to experience adverse effects of diabetes. The predicted diabetes classification may then be output, such as by generating one or more notifications or user interfaces based on the classification, such as a report directed to a health care provider that includes the diabetes classification (e.g., that the person is predicted to have diabetes) or a notification directed to the person that instructs the person to contact his or her health care provider. This Summary introduces a selection of concepts in a simplified form that are further described below in the Detailed Description. As such, this Summary is not intended to identify essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. BRIEF DESCRIPTION OF THE DRAWINGS The detailed description is described with reference to the accompanying figures. FIG. 1 is an illustration of an environment in an example of an implementation that is operable to employ techniques described herein. FIG. 2 depicts an example of the wearable glucose monitoring device of FIG. 1 in greater detail. FIG. 3 depicts an example of an implementation in which diabetes-related data, including glucose measurements, is routed to different systems in connection with diabetes prediction. FIG. 4 depicts an example of an implementation of the prediction system of FIG. 1 in greater detail in which a diabetes classification is predicted using machine learning. FIG. 5 depicts an example of an implementation of the prediction system of FIG. 1 in greater detail in which a machine learning model is trained to predict diabetes classifications. FIG. 6 depicts an example of an implementation of a user interface displayed for notifying a user about a diabetes prediction that is generated based on glucose measurements collected during an observation period. FIG. 7 depicts an example of an implementation of a user interface displayed for reporting a diabetes prediction of a user along with other information produced in connection with the diabetes prediction. FIG. 8 depicts an example of an implementation of a user interface displayed for collecting additional data that can be used as input to machine learning models for generating a diabetes prediction. FIG. 9 depicts a procedure in an example of an implementation in which a machine learning model predicts a diabetes classification based on glucose measurements of a user collected by a wearable glucose monitoring device during an observation period.
FIG. 10 depicts a procedure in an example of an implementation in which a machine learning model is trained to predict a diabetes classification based on historical glucose measurements and outcome data of a user population. FIG. 11 illustrates an example of a system including various components of an example of a device that can be implemented as any type of computing device as described and/or utilized with reference to FIGS. 1-10 to implement embodiments of the techniques described herein. DETAILED DESCRIPTION Example Environment Examples of Procedures Example of a System and Device CONCLUSION Overview Conventional tests for diabetes that are accepted by the clinical and regulatory communities include Hemoglobin A1c (HbA1c), Fasting Plasma Glucose (FPG), and 2-Hour Plasma Glucose (2 Hr-PG). Both the FPG and the 2 Hr-PG are part of the Oral Glucose Tolerance Test (OGTT), but the FPG can be tested separately from the OGTT. Such conventional tests, however, often have poor concordance. In other words, these tests do not necessarily detect diabetes in the same individuals. A given test administered to an individual on different days may also result in inconsistent diagnoses due to the various factors which affect blood glucose levels. There are also a variety of limitations and drawbacks of administering these tests to different people, such as pregnant women. To overcome these problems, diabetes prediction using glucose measurements and machine learning is leveraged. To classify people as having diabetes, one or more machine learning models (e.g., regression models, neural networks, reinforcement learning agents) are generated using historical glucose measurements and historical outcome data of a user population to predict a diabetes classification for an individual user. The historical glucose measurements of the user population may be provided by glucose monitoring devices worn by users of the user population. In contrast, the historical outcome data used for training may vary depending on the classifications that the machine learning models are configured to output. Generally, the historical outcome data includes one or more diagnostic measures obtained from sources independent of the glucose monitoring devices. For instance, the historical outcome data may indicate whether a respective user of the user population is clinically diagnosed with diabetes or not based on one or more diagnostic measures, such as HbA1c, FPG, or 2 Hr-PG (or OGTT as a combination of FPG and 2 Hr-PG). Regardless of the particular outcome data used, training enables the machine learning model to predict diabetes classifications based on an individual's glucose measurements collected during an observation period. In other words, the machine learning model learns to identify patterns in glucose measurements which are correlated to having diabetes or not having diabetes. The machine learning model may learn, for instance, particular features of the glucose data which are highly correlated to having diabetes or not having diabetes. Examples of features that the machine learning model may learn to correlate with the diabetes classification include, by way of example and not limitation, time over threshold measures, rate-of-change measures, observation period anomalies, and a mean or median glucose value during the observation period.
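As a rough illustration of the feature families listed above (time over threshold, rate of change, mean or median glucose, and day-over-day variation), the following Python sketch computes them from a multi-day glucose trace; the 140 mg/dL threshold, the five-minute sampling interval, and the feature names are assumptions made for illustration, not values taken from the disclosure.

```python
import numpy as np

def extract_features(glucose_mg_dl: np.ndarray,
                     interval_min: float = 5.0,
                     threshold_mg_dl: float = 140.0) -> dict:
    """Summarize a multi-day observation period's glucose trace into model inputs.

    glucose_mg_dl: chronologically ordered measurements from the wearable
    device, e.g. one value every `interval_min` minutes over multiple days.
    """
    # Fraction of the observation period spent above the threshold.
    time_over_threshold = float(np.mean(glucose_mg_dl > threshold_mg_dl))

    # Rate-of-change statistics, in mg/dL per minute.
    rates = np.diff(glucose_mg_dl) / interval_min
    mean_abs_rate = float(np.mean(np.abs(rates)))
    max_abs_rate = float(np.max(np.abs(rates)))

    # Central tendency and spread over the whole period.
    mean_glucose = float(np.mean(glucose_mg_dl))
    median_glucose = float(np.median(glucose_mg_dl))
    std_glucose = float(np.std(glucose_mg_dl))

    # Day-over-day variation of the daily mean (one motivation for a
    # multi-day observation period, per the discussion above).
    samples_per_day = 24 * 60 / interval_min
    n_days = int(len(glucose_mg_dl) // samples_per_day)
    daily_means = [float(np.mean(glucose_mg_dl[int(i * samples_per_day):
                                               int((i + 1) * samples_per_day)]))
                   for i in range(n_days)]
    day_over_day_std = float(np.std(daily_means)) if n_days > 1 else 0.0

    return {
        "time_over_threshold": time_over_threshold,
        "mean_abs_rate": mean_abs_rate,
        "max_abs_rate": max_abs_rate,
        "mean_glucose": mean_glucose,
        "median_glucose": median_glucose,
        "std_glucose": std_glucose,
        "day_over_day_std": day_over_day_std,
    }
```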
Notably, many of the features that the machine learning model learns to correlate with the diabetes classification, such as the time over threshold measures and the rate-of change measures, are features that simply cannot be determined using conventional diagnostic tests which output a result based on a blood sample measured at a single point of time. Once the machine learning model is trained, it is used to predict a diabetes classification for a user based on glucose measurements collected by a wearable glucose monitoring device worn by the user during an observation period spanning multiple days. This “diabetes classification” may in some implementations indicate whether a user has diabetes or is at risk for developing diabetes and/or indicate adverse effects that the user is predicted to experience. By way of example, a user may have his or her glucose monitored to predict whether he or she has diabetes (e.g., Type 1 diabetes, Type 2 diabetes, gestational diabetes mellitus (GDM), cystic fibrosis diabetes, and so on) or is at risk for developing diabetes (e.g., prediabetes), and/or whether she is predicted to experience adverse effects associated with diabetes (e.g., retinopathy, neuropathy, comorbidity, dysglycemia, macrosomia requiring a cesarean section, and neonatal hypoglycemia, to name just a few). In a similar fashion as diabetes classifications indicating a type of diabetes are predicted (e.g., Type 2, GDM, etc.), in one or more implementations, the machine learning model may additionally or alternately be configured to predict a type of prediabetes (e.g., impaired glucose fasting (IFG) or impaired glucose tolerance (IGT)). Alternately or additionally, the diabetes classification may correspond to a risk level of having or developing diabetes, such as high risk, low risk, or no risk. In operation, a diabetes classification predicted by the machine learning model may be used (e.g., by a health care professional) to treat the person or develop a treatment plan similarly to how the person would be treated if clinically diagnosed using conventional tests (e.g., with a type of diabetes and/or as being susceptible to experiencing adverse effects). Unlike such conventional tests which are traditionally administered in a lab or doctor's office, however, use of the wearable glucose monitoring device enables the glucose measurements to be collected remotely. For example, the wearable glucose monitoring device may be mailed or otherwise provided to the user, e.g., from the provider of the wearable glucose monitoring device, a pharmacy, a medical testing laboratory, a telemedicine service, and so forth. The user may then wear the wearable glucose monitoring device over the course of the observation period, such as by continuously wearing the device at home or work. Once obtained, the user can insert a sensor of the wearable glucose monitoring device into the user's body, such as by using an automatic sensor applicator. Unlike the blood draws required by conventional tests such as HbA1c, FPG, and 2 Hr-PG, the user-initiated application of the glucose monitoring device is nearly painless and does not require the withdrawal of blood, consumption of a sugary drink, or fasting. Moreover, the automatic sensor applicator can enable the user to embed the sensor into the user's skin without the assistance of a clinician or healthcare provider. 
Although an automatic sensor applicator is discussed, the wearable glucose monitoring device may be applied to or otherwise worn by the person in other ways without departing from the spirit or scope of the techniques described herein, such as without the automatic sensor applicator, with assistance of a health care professional (or a health care professional may simply apply the wearable to the person), or by peeling off a protective layer of an adhesive and affixing the adhesive to the person, to name just a few. Once the sensor is inserted into the user's skin, the wearable glucose monitoring device monitors glucose of the person over an observation period of multiple days. It is also to be appreciated that in some implementations, the sensor may not be inserted into the person's skin. Instead, the sensor may simply be disposed against the person's skin in such implementations, like an adhesive patch. Regardless, the sensor of the wearable glucose monitoring device, may continuously detect analytes indicative of the person's glucose and enable generation of glucose measurements. The glucose measurements collected during the observation period and/or data derived by pre-processing the glucose measurements is provided as input to the trained machine learning model. The trained machine learning model processes the glucose measurements to predict the diabetes classification of the user. Broadly speaking, the diabetes classification describes a state of the user during the observation period, such as having one of diabetes (or a particular type of diabetes such as GDM or Type 2 diabetes), prediabetes (or a particular type of prediabetes such as IFG or IGT), or no diabetes, to name just a few. Notably, unlike conventional tests, the diabetes classification predicted by the machine learning model is based on observed glucose values over multiple days. As such, the prediction is more accurate than tests which rely on a blood sample collected at a single point of time. Moreover, unlike the HbA1c test which is an indirect measurement of blood glucose and can be affected by recent changes in glucose levels caused by external factors or conditions such as sickness or pregnancy, the diabetes classification predicted by the machine learning model is based on glucose measurements directly obtained during a current observation time period. The diabetes classification prediction is then presented, such as by displaying an indication of the diabetes classification to the user, a doctor, or a guardian of the user via a user interface. Other information may also be presented, such as visualizations of the glucose measurements as well as other statistics derived from the glucose measurements. In some cases, the diabetes classification prediction is presented in a glucose observation report which may also include one or more treatment options for the user, visual representations of the glucose measurements collected by the glucose monitoring device during the observation period, glucose statistics of the user generated based on the collected glucose measurements, levels of severity, next steps (e.g., for a doctor, health care professional, or user), a request to follow-up, a request to order more sensors for the wearable glucose monitoring device, activity levels, trends in glucose or other markers, patterns in the glucose or other markers, patterns in exercise, interpretations of the glucose measurements, or activity related to glycemia. 
Thus, unlike conventional blood glucose test results, the glucose observation report generated by the one or more machine learning models may include a detailed analysis of the prediction as well as various treatment options. It is to be appreciated that diabetes classifications and information associated with such classifications may be provided in a variety of ways, including, for example, output as an audio signal via a speaker or digital assistant. Advantageously, utilizing a wearable glucose monitoring device and machine learning to generate predictions which classify people as having a type of diabetes eliminates many of the uncomfortable aspects of the above-noted diagnostic tests and does not limit who can be tested. Unlike HbA1c, for instance, pregnant women can safely wear the wearable glucose monitoring device over the observation period. Moreover, because the machine learning model is applied to glucose traces collected over multiple days, the inconsistencies associated with conventional tests are reduced thereby increasing the accuracy of the prediction as compared to conventional tests based on a single blood sample. By accurately predicting diabetes classifications and notifying users, health care providers, and/or telemedicine services, the described machine learning model allows early detection of diabetes and identifies treatment options which may be taken to mitigate potentially adverse health conditions before the user's diabetes worsens. In so doing, serious damage to the heart, blood vessels, eyes, kidneys, and nerves, and death due to diabetes can be largely avoided. In the following discussion, an example of an environment is first described that may employ the techniques described herein. Examples of implementation details and procedures are then described which may be performed in the discussed environment as well as other environments. Performance of those procedures is not limited to the example of the environment and the example of the environment is not limited to performance of those procedures. FIG. 1 100 100 102 104 106 108 is an illustration of an environment in an example of an implementation that is operable to employ diabetes prediction using glucose measurements and machine learning as described herein. The illustrated environment includes person , who is depicted wearing a wearable glucose monitoring device . The illustrated environment also includes an observation kit provider and an observation analysis platform . 100 104 106 102 104 102 102 102 102 102 108 102 108 104 102 102 In the illustrated example , the wearable glucose monitoring device is depicted being provided by the observation kit provider to the person , e.g., as part of an observation kit. The wearable glucose monitoring device may be provided as part of an observation kit, for instance, for the purpose of monitoring the person 's glucose over an observation period lasting multiple days. By way of example, the person may have his or her glucose monitored to predict whether he or she has diabetes (e.g., Type 1 diabetes, Type 2 diabetes, gestational diabetes mellitus (GDM), cystic fibrosis diabetes, and so on) or is at risk for developing diabetes (e.g., prediabetes), and/or whether he or she is predicted to experience adverse effects associated with diabetes (e.g., comorbidity, dysglycemia, macrosomia requiring a cesarean section (C-section), and neonatal hypoglycemia, to name just a few). 
In connection with the observation period, instructions may be provided to the person that instruct the person to perform one or more activities during the observation period, such as instructing the person to consume a beverage or specific meal (e.g., a same beverage as is consumed in connection with OGTT), avoid one or more specific foods, exercise, and rest, to name just a few. In one or more implementations, the instructions may be provided as part of an observation kit, e.g., written instructions. Alternately or additionally, the observation analysis platform may cause the instructions to be communicated to and output (e.g., for display or audio output) via a computing device associated with the person . The observation analysis platform may provide these instructions for output after a predetermined amount of time of an observation period has lapsed (e.g., two days) and/or based on patterns in the glucose measurements obtained. In connection with providing such instructions, the wearable glucose monitoring device automatically monitors the person 's glucose level after performance of the instructed activity, such as by monitoring an amount the person 's glucose changes after consuming the meal instructed, performing the exercise instructed, and so forth. 102 102 102 Although discussed throughout as lasting multiple days, in one or more implementations, the observation period may be variable, such that when enough glucose measurements have been collected to accurately predict a diabetes classification for the person the observation period may end. For example, in some cases the person 's glucose measurements over just a few hours may be processed to predict the person has diabetes with statistical certainty. In this case, the duration of the observation period may be a number of hours rather than multiple days. In general, though, the observation period lasts multiple days to obtain data so that features can be extracted to describe day over day variations in glucose and to prevent erroneous predictions that account for or fail to account for anomalous measurements or observations. 106 102 106 104 108 104 106 106 106 To this end, the observation kit provider may represent one or more of a variety of entities associated with obtaining a prediction regarding whether the person has diabetes or is predicted to experience adverse effects of diabetes. For instance, the observation kit provider may represent a provider of the wearable glucose monitoring devices and of a platform that monitors and analyzes glucose measurements obtained therefrom, such as the observation analysis platform when it also corresponds to the provider of the wearable glucose monitoring device . Alternately or additionally, the observation kit provider may correspond to a health care provider (e.g., a primary care physician, OB/GYN, endocrinologist), a doctor's office, a hospital, an insurance provider, a medical testing laboratory, or a telemedicine service, to name just a few. Alternately or additionally, the observation kit provider may correspond to a pharmacist or pharmacy, which may have a physical brick-and-mortar location and/or provide service online. It is to be appreciated that these are just a few examples and the observation kit provider may represent different entities without departing from the spirit or scope of the described techniques. 
104 102 104 102 104 102 104 102 104 Given this, provision of the wearable glucose monitoring device to the person may occur in various ways in accordance with the described techniques. For example, the wearable glucose monitoring device may be handed to the person at a doctor's office, hospital, medical testing laboratory, or a brick-and-mortar pharmacy, e.g., as part of an observation kit. Alternately, the wearable glucose monitoring device may be mailed to the person , e.g., from the provider of the wearable glucose monitoring device , a pharmacy, a medical testing laboratory, a telemedicine service, and so forth. Certainly, the person may obtain the wearable glucose monitoring device for an observation period in other ways in one or more implementations. 104 102 102 104 102 100 110 104 110 104 104 110 FIG. 2 Regardless of how the wearable glucose monitoring device is obtained by the person , the device is configured to monitor glucose of the person during an observation period, which lasts for a time period spanning multiple days. The wearable glucose monitoring device may be configured with a glucose sensor, for instance, that continuously detects analytes indicative of the person 's glucose and enables generation of glucose measurements. In the illustrated environment these measurements are represented as glucose measurements . In one or more implementations, the wearable glucose monitoring device is a continuous glucose monitoring (“CGM”) system. As used herein, the term “continuous” used in connection with glucose monitoring may refer to an ability of a device to produce measurements substantially continuously, such that the device may be configured to produce the glucose measurements at intervals of time (e.g., every hour, every 30 minutes, every 5 minutes, and so forth), responsive to establishing a communicative coupling with a different device (e.g., when a computing device establishes a wireless connection with the wearable glucose monitoring device to retrieve one or more of the measurements), and so forth. The functionality of the wearable glucose monitoring device to produce the glucose measurements along with further aspects of the device's configuration are discussed in more detail in relation to . 104 104 110 Although the wearable glucose monitoring device may be configured in a similar manner as wearable glucose monitoring devices used for treating diabetes, in one or more implementations, the wearable glucose monitoring device may be configured differently than the devices used for treatment. These different configurations may be deployed to control confounding factors of observation periods so that measurements are obtained that accurately reflect the effects of users' normal, day-to-day behavior on their glucose. This can include, for instance, limiting and/or completely preventing users from inspecting those measurements during the observation period. By preventing users from inspecting the glucose measurements over the course of observation periods, the observation configurations further prevent users from seeing or otherwise observing glucose-measurement events (e.g., spikes in glucose) and changing their behavior to counteract such events. 
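One way to picture the observation configuration described above, in which users are limited or prevented from inspecting measurements mid-observation, is a small software gate that blinds real-time viewing until the observation period ends; this is a hypothetical sketch, and the class and field names are illustrative rather than taken from the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Tuple

@dataclass
class ObservationModeConfig:
    """Hypothetical software gate for an observation-configured device."""
    observation_start: datetime
    observation_days: int = 10
    realtime_viewing_enabled: bool = False  # blinded during the observation period
    _buffer: List[Tuple[datetime, float]] = field(default_factory=list)

    def record(self, timestamp: datetime, glucose_mg_dl: float) -> None:
        # Measurements are always stored locally for later analysis.
        self._buffer.append((timestamp, glucose_mg_dl))

    def observation_over(self, now: datetime) -> bool:
        return now >= self.observation_start + timedelta(days=self.observation_days)

    def readable_measurements(self, now: datetime) -> List[Tuple[datetime, float]]:
        """Return measurements only once viewing is permitted, so the user
        cannot react to spikes mid-observation and skew the data."""
        if self.realtime_viewing_enabled or self.observation_over(now):
            return list(self._buffer)
        return []

    def end_observation(self) -> None:
        # Re-enable the previously disabled functionality after the period ends.
        self.realtime_viewing_enabled = True
```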
104 104 110 In some cases, the wearable glucose monitoring device may be a specialized device designed specifically for the purpose of collecting glucose measurements for a user during an observation period spanning multiple days so that a diabetes classification may be generated, which may be differentiated in one or more ways from wearable glucose monitoring devices worn by users to treat diabetes. In other instances, such wearable glucose monitoring devices may have the same hardware characteristics as the wearable glucose monitoring devices used to treat diabetes, but may include software that disables or enables different functionality, such as software that prevents the user from inspecting glucose measurements during the observation period. In these instances, functionality that is disabled during the observation period can be enabled after the observation period has ended so that the user has access to the previously-disabled functionality, such as the ability to view glucose measurements in substantially real time. 110 The different configurations also may be based on differences between how the glucose measurements are used in connection with an observation period for diabetes prediction and how measurements are used in connection with treatment of diabetes. With treatment, continuous or nearly continuous receipt and output of glucose measurements, substantially as those measurements are produced, may be used to inform treatment decisions, e.g., to help a person or his or her caretaker decide what to eat, how to administer insulin, whether to contact a health care provider, and so on. In those scenarios, knowing the measurements and/or trends of the measurements in a timely manner (e.g., substantially real-time) may be critical to effectively mitigating potentially severe adverse effects. By way of contrast, receipt and output of glucose measurements to a person being observed (or a caretaker), as those measurements are being produced, may be unnecessary in connection with diabetes prediction in these scenarios. Instead, the glucose measurements produced for diabetes prediction are handled so that at the end of the observation period, or after some other horizon (e.g., when enough measurements have been produced to achieve statistical certainty), an accurate prediction regarding diabetes can be generated. 104 104 110 Based on such differences with respect to how the glucose measurements are used, the wearable glucose monitoring device may have more local storage than wearable glucose measurement devices used for diabetes treatment, e.g., 10-15 days' worth of glucose-measurement storage for observation configurations versus 3 hours' worth of glucose-measurement storage for treatment configurations. The larger storage capacity of the wearable glucose monitoring device may be suitable to store the glucose measurements for a duration of the observation period. In contrast, wearable glucose measurement devices used for treatment may be configured to offload glucose measurements such that once the measurements are suitably offloaded they are no longer stored locally on those devices. By way of example, wearable glucose devices used for treatment may offload glucose measurements by transmitting them via wireless connections to an external computing device, e.g., at predetermined time intervals and/or responsive to establishing or reestablishing a connection with the computing device. 
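The storage figures mentioned above (roughly 10-15 days' worth of measurements for observation configurations versus about 3 hours' worth for treatment configurations) can be made concrete with simple arithmetic; the five-minute sampling interval and the byte count per stored sample below are illustrative assumptions, not values specified in the text.

```python
def observation_storage_bytes(days: float,
                              interval_min: float = 5.0,
                              bytes_per_sample: int = 8) -> int:
    """Rough local-storage estimate for an observation-configured device.

    Both the 5-minute interval and 8 bytes per sample are illustrative
    assumptions; real packet formats also carry status and identification data.
    """
    samples = int(days * 24 * 60 / interval_min)
    return samples * bytes_per_sample

# 3 hours' worth (treatment-style buffering) vs. 15 days' worth (observation).
print(observation_storage_bytes(days=3 / 24))   # 288 bytes
print(observation_storage_bytes(days=15))       # 34,560 bytes (about 34 KB)
```

Even at the upper end of the observation window, the required capacity is tiny in absolute terms; the distinction above is about retaining the full observation period on-device rather than continuously offloading it.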
104 110 104 110 104 110 104 110 To the extent that the wearable glucose monitoring device may be configured to store the glucose measurements for an entirety of an observation period, in one or more implementations, the wearable glucose monitoring device may be configured without wireless transmission means, e.g., without any antennae to transmit the glucose measurements wirelessly and without hardware or firmware to generate packets for such wireless transmission. Instead, the wearable glucose monitoring device may be configured with hardware to communicate the glucose measurements via a physical, wired coupling. In such scenarios, the wearable glucose monitoring device may be “plugged in” to extract the glucose measurements from the device's storage. 104 104 110 104 110 Accordingly, the wearable glucose monitoring device may be configured with one or more ports to enable wired transmission of the glucose measurements to an external computing device. Examples of such physical couplings may include micro universal serial bus (USB) connections, mini-USB connections, and USB-C connections, to name just a few. Although the wearable glucose monitoring device may be configured for extraction of the glucose measurements via wired connections as discussed just above, in different scenarios, the wearable glucose monitoring device may alternately or additionally be configured to offload the glucose measurements over one or more wireless connections. Implementations involving wired and/or wireless communication of the glucose measurements are discussed further below. 104 104 104 104 In addition to storage and communication differences, the wearable glucose monitoring device may also include one or more sensors or sensor circuitry configured differently than in devices designed for diabetes treatment. For instance, sensors and the circuitry (e.g., including measurement algorithms) of wearable glucose monitoring devices used for treating diabetes may be optimized for a range of measurements spanning from 40 mg/dL to 400 mg/dL. This is because treatment of diabetes often involves deciding what actions to take to mitigate severe glycemic events that can occur toward ends of the range, e.g., hypo- and hyper-glycemia. To predict diabetes, however, fidelity of the measurements over as wide a range may not be needed. Rather, diabetes predictions may be suitably generated in relation to a smaller range, such as a range of glucose measurements spanning from 120 mg/dL to 240 mg/dL. Accordingly, the wearable glucose monitoring device may include one or more sensors or sensor circuitry optimized to produce measurements in such a smaller range. It is to be appreciated that the above-discussed differences are merely examples of how the wearable glucose monitoring device may differ from wearable glucose monitoring devices configured for treatment of diabetes and that the wearable glucose monitoring device may differ from those devices in different ways without departing from the spirit or scope of the described techniques. 104 110 108 110 108 108 104 110 104 110 104 110 Once the wearable glucose monitoring device produces the glucose measurements , the measurements are provided to the observation analysis platform . As noted above, the glucose measurements may be communicated to the observation analysis platform over wired and/or a wireless connection. 
In scenarios where the observation analysis platform is implemented partially or entirely on the wearable glucose monitoring device , for instance, the glucose measurements may be transferred over a bus from the device's local storage to a processing system of the device. In scenarios where the wearable glucose monitoring device is configured to generate a prediction of a diabetes classification by processing the glucose measurements , the wearable glucose monitoring device may also be configured to provide the predicted diabetes classification as output, e.g., by communicating the diabetes classification to an external computing device. In other scenarios, the glucose measurements may be processed by an external computing device configured to predict diabetes classifications. 104 110 104 104 110 104 104 102 104 104 104 102 102 104 110 108 In one or more implementations, the wearable glucose monitoring device is configured to transmit the glucose measurements to an external device over a wired connection with the external device, e.g., via USB-C or some other physical, communicative coupling. Here, a connector may be plugged into the wearable glucose monitoring device or the wearable glucose monitoring device may be inserted into an apparatus having a receptacle that interfaces with corresponding contacts of the device. The glucose measurements may then be obtained from storage of the wearable glucose monitoring device via this wired connection, e.g., transferred over the wired connection to the external device. Such a connection may be used in scenarios where the wearable glucose monitoring device is mailed by the person after the observation period, such as to a health care provider, telemedicine service, provider of the wearable glucose monitoring device , or medical testing laboratory. To this end, an observation kit (not shown) may include packaging (e.g., an envelope or box) to mail the wearable glucose monitoring device to such an entity after observation. Such a connection may also be used in scenarios where the wearable glucose monitoring device is dropped off by the person after the observation period, such as at a doctor's office or hospital (or other establishment of a health care provider), a pharmacy, or a medical testing laboratory. Alternately or additionally, scenarios involving a wired connection may involve the person plugging in the wearable glucose monitoring device to an external computing device after the testing period, e.g., using a cord provided as part of an observation kit. In these scenarios, the external computing device may communicate the glucose measurements to the observation analysis platform over a network (not shown), such as the Internet. 110 108 104 110 104 110 104 104 104 110 110 108 104 Alternately or additionally, provision of the glucose measurements to the observation analysis platform may involve the wearable glucose monitoring device communicating the glucose measurements over one or more wireless connections. For example, the wearable glucose monitoring device may wirelessly communicate the glucose measurements to external computing devices, such as a mobile phone, tablet device, laptop, smart watch, other wearable health tracker, and so on. Accordingly, the wearable glucose monitoring device may be configured to communicate with external devices using one or more wireless communication protocols or techniques. 
By way of example, the wearable glucose monitoring device may communicate with external devices using one or more of Bluetooth (e.g., Bluetooth Low Energy links), near-field communication (NFC), Long Term Evolution (LTE) standards such as 5G, and so forth. Wearable glucose monitoring devices may be configured with corresponding antennae and other wireless transmission means in scenarios where the glucose measurements are communicated to an external device for processing. In those scenarios, the glucose measurements may be communicated to the observation analysis platform in various manners, such as at predetermined time intervals (e.g., every day, every hour, or every five minutes), responsive to occurrence of some event (e.g., filling a storage buffer of the wearable glucose monitoring device ), or responsive to an end of an observation period, to name just a few. 108 108 110 104 108 104 108 104 102 104 108 Thus, regardless of where the observation analysis platform is implemented, the observation analysis platform obtains the glucose measurements produced by the wearable glucose monitoring device . In one or more implementations, the observation analysis platform may be implemented in whole or in part at the wearable glucose monitoring device . Alternately or additionally, the observation analysis platform may be implemented in whole or in part using one or more computing devices external to the wearable glucose monitoring device , such as one or more computing devices associated with the person (e.g., a mobile phone, tablet device, laptop, desktop, or smart watch) or one or more computing devices associated with a service provider (e.g., a health care provider, a telemedicine service, a service corresponding to the provider of the wearable glucose monitoring device , a medical testing laboratory service, and so forth). In the latter scenario, the observation analysis platform may be implemented at least in part on one or more server devices. 100 112 112 110 112 110 112 102 112 In the illustrated example , the observation analysis platform includes storage device . In accordance with the described techniques, the storage device is configured to maintain the glucose measurements . The storage device may represent one or more databases and also other types of storage capable of storing the glucose measurements . The storage device may also store a variety of other data, such as demographic information describing the person , information about a health care provider, information about an insurance provider, payment information, prescription information, determined health indicators, account information (e.g., username and password), and so forth. As discussed in more detail below, the storage device may also maintain data of other users of a user population. 100 108 114 114 110 102 102 114 In the illustrated example , the observation analysis platform also includes prediction system . The prediction system represents functionality to process the glucose measurements to generate diabetes predictions, such as to predict whether the person has diabetes (e.g., Type 2 diabetes, GDM, cystic fibrosis diabetes, and so on) or is at risk for developing diabetes (e.g., prediabetes), and/or whether the person is predicted to experience adverse effects associated with diabetes (e.g., comorbidity, dysglycemia, macrosomia requiring a C-section, and neonatal hypoglycemia, to name just a few). As discussed in more detail below, the prediction system uses machine learning to predict diabetes classifications. 
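As a compact sketch of the machine learning step performed by the prediction system (the disclosure leaves the model family open, mentioning regression models, neural networks, and reinforcement learning agents; logistic regression and the scikit-learn API are choices made here purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_prediction_model(historical_features: np.ndarray,
                           historical_labels: list):
    """Fit a simple multiclass classifier on user-population data.

    historical_features: one feature vector per user, e.g. produced by the
    extract_features sketch above; historical_labels: one outcome label per
    user derived from independent diagnostic measures such as HbA1c or FPG.
    """
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(max_iter=1000))
    model.fit(historical_features, historical_labels)
    return model

def predict_diabetes_classification(model, user_features: np.ndarray) -> dict:
    """Predict a diabetes classification for one user's observation period."""
    probabilities = model.predict_proba(user_features.reshape(1, -1))[0]
    best = int(np.argmax(probabilities))
    return {
        "classification": model.classes_[best],
        "confidence": float(probabilities[best]),
    }
```

A low-confidence prediction could, for example, drive a notification to extend the observation period, in line with the variable-length observation periods discussed above.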
Use of machine learning may include, for instance, leveraging one or more models generated using machine learning techniques as well as using historical glucose measurements and historical outcome data of a user population. 100 116 114 116 116 102 116 FIGS. 6 and 7 FIG. 2 The illustrated example also includes diabetes classification , which may be output by the prediction system . In accordance with the described techniques, the diabetes classification may indicate whether it is predicted the person has diabetes or is predicted to experience adverse effects associated with diabetes. The diabetes classification may also be used to generate one or more notifications or user interfaces based on the classification, such as a report directed to a health care provider that includes the diabetes classification (e.g., that the person is predicted to have diabetes) or a notification directed to the person that instructs the person to contact his or her health care provider. Examples of user interfaces that may be generated based on the diabetes classification are described in more detail in relation to . In the context of measuring glucose, e.g., continuously, and obtaining data describing such measurements, consider the following discussion of . FIG. 2 FIG. 1 FIG. 1 200 104 200 104 104 depicts an example of an implementation of the wearable glucose monitoring device of in greater detail. In particular, the illustrated example includes a top view and a corresponding side view of the wearable glucose monitoring device . It is to be appreciated that the wearable glucose monitoring device may vary in implementation from the following discussion in one or more of the ways and for one of more of the reasons discussed above in relation to . 200 104 202 204 202 206 102 204 104 208 200 204 208 200 104 210 212 In this example , the wearable glucose monitoring device is illustrated to include a sensor and a sensor module . Here, the sensor is depicted in the side view having been inserted subcutaneously into skin , e.g., of the person . The sensor module is depicted in the top view as a dashed rectangle. The wearable glucose monitoring device also includes a transmitter in the illustrated example . Use of the dashed rectangle for the sensor module indicates that it may be housed or otherwise implemented within a housing of the transmitter . In this example , the wearable glucose monitoring device further includes adhesive pad and attachment mechanism . 202 210 212 206 202 208 206 212 208 202 210 212 208 204 206 206 104 102 202 206 In operation, the sensor , the adhesive pad , and the attachment mechanism may be assembled to form an application assembly, where the application assembly is configured to be applied to the skin so that the sensor is subcutaneously inserted as depicted. In such scenarios, the transmitter may be attached to the assembly after application to the skin via the attachment mechanism . Additionally or alternately, the transmitter may be incorporated as part of the application assembly, such that the sensor , the adhesive pad , the attachment mechanism , and the transmitter (with the sensor module ) can all be applied at once to the skin . In one or more implementations, this application assembly is applied to the skin using a separate sensor applicator (not shown). 
Unlike the blood draws required by conventional tests such as HbA1c, FPG, and 2 Hr-PG, the user initiated application of the wearable glucose monitoring device is nearly painless and does not require the withdrawal of blood, consumption of a sugary drink, or hours of fasting. Moreover, the automatic sensor applicator enables the person to embed the sensor subcutaneously into the skin without the assistance of a clinician or healthcare provider. 210 206 104 104 The application assembly may also be removed by peeling the adhesive pad off of the skin . It is to be appreciated that the wearable glucose monitoring device and its various components as illustrated are simply one example form factor, and the wearable glucose monitoring device and its components may have different form factors without departing from the spirit or scope of the described techniques. 202 204 202 204 204 202 In operation, the sensor is communicatively coupled to the sensor module via at least one communication channel which can be a wireless connection or a wired connection. Communications from the sensor to the sensor module or from the sensor module to the sensor can be implemented actively or passively and these communications can be continuous (e.g., analog) or discrete (e.g., digital). 202 202 204 202 202 202 204 202 202 202 202 The sensor may be a device, a molecule, and/or a chemical which changes or causes a change in response to an event which is at least partially independent of the sensor . The sensor module is implemented to receive indications of changes to the sensor or caused by the sensor . For example, the sensor can include glucose oxidase which reacts with glucose and oxygen to form hydrogen peroxide that is electrochemically detectable by the sensor module which may include an electrode. In this example, the sensor may be configured as or include a glucose sensor configured to detect analytes in blood or interstitial fluid that are indicative of glucose level using one or more measurement techniques. In one or more implementations, the sensor may also be configured to detect analytes in the blood or the interstitial fluid that are indicative of other markers, such as lactate levels, which may improve accuracy in predicting diabetes classifications. Additionally or alternately, the wearable glucose monitoring device may include additional sensors to the sensor to detect those analytes indicative of the other markers. 202 104 204 202 204 202 204 202 204 202 104 204 202 In another example, the sensor (or an additional sensor of the wearable glucose monitoring device —not shown) can include a first and second electrical conductor and the sensor module can electrically detect changes in electric potential across the first and second electrical conductor of the sensor . In this example, the sensor module and the sensor are configured as a thermocouple such that the changes in electric potential correspond to temperature changes. In some examples, the sensor module and the sensor are configured to detect a single analyte, e.g., glucose. In other examples, the sensor module and the sensor are configured to detect multiple analytes, e.g., sodium, potassium, carbon dioxide, and glucose. Alternately or additionally, the wearable glucose monitoring device includes multiple sensors to detect not only one or more analytes (e.g., sodium, potassium, carbon dioxide, glucose, and insulin) but also one or more environmental conditions (e.g., temperature). 
Thus, the sensor module and the sensor (as well as any additional sensors) may detect the presence of one or more analytes, the absence of one or more analytes, and/or changes in one or more environmental conditions. 204 204 110 202 202 204 214 214 110 214 110 216 218 214 110 214 110 In one or more implementations, the sensor module may include a processor and memory (not shown). The sensor module , by leveraging the processor, may generate the glucose measurements based on the communications with the sensor that are indicative of the above-discussed changes. Based on these communications from the sensor , the sensor module is further configured to generate observation device data . The observation device data is a communicable package of data that includes at least one glucose measurement . Alternately or additionally, the observation device data includes other data, such as multiple glucose measurements , sensor identification , sensor status , and so forth. In one or more implementations, the observation device data may include other information such as one or more of temperatures that correspond to the glucose measurements and measurements of other analytes. It is to be appreciated that the observation device data may include a variety of data in addition to at least one glucose measurement without departing from the spirit or scope of the described techniques. 104 208 214 204 214 204 208 214 214 214 In implementations where the wearable glucose monitoring device is configured for wireless transmission, the transmitter may transmit the observation device data wirelessly as a stream of data to a computing device. Alternately or additionally, the sensor module may buffer the observation device data (e.g., in memory of the sensor module ) and cause the transmitter to transmit the buffered observation device data later at various intervals, e.g., time intervals (every second, every thirty seconds, every minute, every five minutes, every hour, and so on), storage intervals (when the buffered observation device data reaches a threshold amount of data or a number of instances of observation device data ), and so forth. 214 216 202 104 206 202 216 202 202 202 202 202 110 With respect to the observation device data , the sensor identification represents information that uniquely identifies the sensor from other sensors, such as other sensors of other wearable glucose monitoring devices , other sensors implanted previously or subsequently in the skin , and so on. By uniquely identifying the sensor , the sensor identification may also be used to identify other aspects about the sensor such as a manufacturing lot of the sensor , packaging details of the sensor , shipping details of the sensor , and so on. In this way, various issues detected for sensors manufactured, packaged, and/or shipped in a similar manner as the sensor may be identified and used in different ways, e.g., to calibrate the glucose measurements , to notify users of defective sensors, to notify manufacturing facilities of machining issues, and so forth. 218 202 110 218 110 110 218 218 202 204 110 202 The sensor status represents a state of the sensor at a given time, e.g., a state of the sensor at a same time one of the glucose measurements is produced. To this end, the sensor status may include an entry for each of the glucose measurements , such that there is a one-to-one relationship between the glucose measurements and statuses captured in the sensor status information. 
Generally speaking, the sensor status describes an operational state of the sensor . In one or more implementations, the sensor module may identify one of a number of predetermined operational states for a given glucose measurement . The identified operational state may be based on the communications from the sensor and/or characteristics of those communications. 204 202 202 110 By way of example, the sensor module may include (e.g., in memory or other storage) a lookup table having the predetermined number of operational states and bases for selecting one state from another. For instance, the predetermined states may include a “normal” operation state where the basis for selecting this state may be that the communications from the sensor fall within thresholds indicative of normal operation, e.g., within a threshold of an expected time, within a threshold of expected signal strength, an environmental temperature is within a threshold of suitable temperatures to continue operation as expected, and so forth. The predetermined states may also include operational states that indicate one or more characteristics of the sensor 's communications are outside of normal activity and may result in potential errors in the glucose measurements . 202 202 102 104 218 202 104 For example, bases for these non-normal operational states may include receiving the communications from the sensor outside of a threshold expected time, detecting a signal strength of the sensor outside a threshold of expected signal strength, detecting an environmental temperature outside of suitable temperatures to continue operation as expected, detecting that the person has rolled (e.g., in bed) onto the wearable glucose monitoring device , and so forth. The sensor status may indicate a variety of aspects about the sensor and the wearable glucose monitoring device without departing from the spirit or scope of the described techniques. Having considered an example of an environment and an example of a wearable glucose monitoring device, consider now a discussion of some examples of details of the techniques for diabetes prediction using glucose measurements and machine learning in a digital medium environment in accordance with one or more implementations. Diabetes Prediction FIG. 3 300 depicts an example of an implementation in which diabetes-related data, including glucose measurements, is routed to different systems in connection with diabetes prediction. 300 108 102 300 302 102 110 108 112 302 104 102 110 104 110 108 112 FIG. 1 The illustrated example includes from the observation analysis platform and the person . The illustrated example also depicts devices associated with the person that may provide the glucose measurements to the observation analysis platform and/or the storage device in connection with diabetes prediction. The devices depicted include the wearable glucose monitoring device , worn by the person during the observation period to produce the glucose measurements , along with additional devices external to the wearable glucose monitoring device . Specifically, the additional, external devices depicted include a mobile phone and a smart watch, although various other devices may be configured to provide the glucose measurements to the observation analysis platform and/or the storage device in one or more implementations, e.g., laptops, tablet devices, wearable health trackers, and so on. 
As mentioned above, the glucose measurements may be communicated or otherwise provided via wired or wireless connections to the observation analysis platform and/or the storage device. For example, the wearable glucose monitoring device may provide the glucose measurements to the observation analysis platform and/or the storage device via a wired or wireless connection, as discussed above. In scenarios where one of the additional, external devices provides the glucose measurements, the glucose measurements may first be provided from the wearable glucose monitoring device to the additional, external device, such that the additional, external device communicates or otherwise provides the glucose measurements to the observation analysis platform and/or the storage device.

In these scenarios, the additional, external devices may act as an intermediary between the wearable glucose monitoring device and the observation analysis platform and the storage device, such that the external devices are used to "route" the glucose measurements from the wearable glucose monitoring device to the observation analysis platform and/or the storage device. Alternately or additionally, other devices may route the glucose measurements from the wearable glucose monitoring device to the observation analysis platform and/or the storage device. Those other devices may include dedicated devices that are configured to extract the data from the wearable glucose monitoring device and that are associated with an entity involved in the diabetes prediction, such as a health care provider, hospital, pharmacy, telemedicine service, medical testing laboratory, and so on.

The illustrated example also includes the user population. The user population represents multiple users that correspond to persons who have worn glucose monitoring devices, such as the wearable glucose monitoring device. It follows, then, that the glucose measurements of these other users are provided by their respective monitoring devices and/or by external computing devices to the observation analysis platform and/or the storage device. In one or more implementations, the user population includes users selected as part of one or more "studies" conducted, at least in part, for the purpose of collecting data (including the glucose measurements) so that the data can be used to generate one or more models using machine learning, e.g., using supervised learning, unsupervised learning, reinforcement learning, and so forth.

Alternately or in addition, the user population may include users for whom a diabetes prediction was previously generated based on their glucose measurements produced during an observation period involving the wearable glucose monitoring device—in a similar manner as the diabetes prediction is generated for the person. Data that is produced prior to the diabetes prediction for the person and in connection with studies carried out to collect the data is referred to as "historical" data because it is produced at a point in time before the person's glucose measurements are produced. Similarly, data produced prior to the diabetes prediction of the person and in connection with diabetes predictions of other users is also historical data.
In accordance with the described techniques, the historical data includes, for example, historical glucose measurements and historical outcome data. This historical data is used along with machine learning to train or otherwise learn an underlying model, as described in more detail in relation to FIG. 5.

By way of example, studies to collect data in connection with diabetes prediction may involve participants wearing a glucose monitoring device over a time period of multiple days to produce the glucose measurements for those participants. The time period may have a same or different duration from the observation period used to produce the person's glucose measurements without departing from the spirit or scope of the described techniques. In addition to collecting the glucose measurements, such studies may be leveraged to obtain other data about the participants. Outcome data corresponds to at least some of this other data and may describe a variety of aspects about users of the user population.

In connection with a study, for example, participants may, in addition to wearing glucose monitoring devices, be tested using conventional techniques that produce one or more diagnostic measures, such as HbA1c, FPG, and/or 2 Hr-PG. The independent diagnostic measures represent data describing outcomes of one or more such tests in relation to the users of the user population. For example, the independent diagnostic measures may describe results of HbA1c, FPG, 2 Hr-PG (or OGTT as a combination of FPG and 2 Hr-PG), and/or random plasma glucose (RPG) tests in relation to the users of the user population. Given this, the glucose measurements of a study participant may be associated with the respective participant's independent diagnostic measures, e.g., by labeling the measurements. As discussed in more detail below, machine learning may, through a training process, learn patterns in the glucose measurements that are indicative of particular values of the independent diagnostic measures, such as patterns in the glucose measurements that indicate a respective person's HbA1c is likely 10.0.

As illustrated, the outcome data also includes observed adverse effects and clinical diagnoses. The observed adverse effects represent data describing adverse effects experienced by users of the user population. By way of example, the observed adverse effects may describe whether a user has or has not experienced any of one or more adverse effects associated with Type 2 diabetes, such as diabetic retinopathy, cataracts, glaucoma, blindness, severe hyper- or hypo-glycemia, heart and blood vessel disease, neuropathy, erectile dysfunction, kidney failure or end-stage kidney disease, slow healing, hearing impairment, skin conditions (e.g., bacterial and fungal infections), sleep apnea, and Alzheimer's disease, to name just a few.

Additionally or alternately, the observed adverse effects may describe whether a user has or has not experienced any of one or more adverse effects associated with GDM, such as her baby having excessive birth weight (requiring a C-section birth), an early (preterm) birth, her baby having respiratory distress syndrome, neonatal hypoglycemia, her baby becoming obese or developing Type 2 diabetes later in life, stillbirth, and so on.
Additionally or alternately, the observed adverse effects may describe whether a user has or has not experienced one or more adverse effects associated with other types of diabetes, such as effects associated with Type 1 diabetes, cystic fibrosis diabetes, pancreatic diabetes, and so on. Given this, the glucose measurements of a study participant may be associated with the respective participant's observed adverse effects, e.g., by labeling the measurements. As discussed in more detail below, machine learning may, through a training process, learn patterns in the glucose measurements that are indicative of occurrence and non-occurrence of the observed adverse effects, such as patterns in glucose measurements that indicate a probability of a respective person having a baby with excessive birth weight requiring a C-section.

The clinical diagnoses represent data describing whether users of the user population have been diagnosed (or not) with diabetes by a clinician or whether they have been provisionally or preliminarily diagnosed with diabetes. By way of example, the diagnoses may be made by a clinician based on one or more of the independent diagnostic measures and/or the observed adverse effects. Additionally or alternatively, the clinical diagnoses may be configured to represent labeling based on diagnostic tests that are not approved for diagnosis by, for example, the Food and Drug Administration (FDA) or the clinical community at large, such as A1CNOW+. The values of the clinical diagnoses may represent that a respective user is clinically diagnosed with diabetes (or some type of diabetes), is clinically diagnosed with prediabetes (or any of the different types of prediabetes), is provisionally or preliminarily diagnosed with diabetes, does not have diabetes (i.e., is screened as not having diabetes), is diagnosed with diabetes using a non-approved test, or is diagnosed with prediabetes using a non-approved test, to name just a few. Given this and the independent diagnostic measures, for instance, the glucose measurements may be associated with a respective study participant's independent diagnostic measures and the respective participant's clinical diagnosis. The machine learning may, through training, learn patterns in the glucose measurements that are indicative of particular values of the independent diagnostic measures and further are indicative of different diabetes diagnoses, such as patterns in glucose measurements that indicate a person's HbA1c is likely 6.0 (e.g., an "estimated A1c") and further that a clinician's analysis likely results in a diagnosis of prediabetes. Although this example is discussed in relation to the person's HbA1c, it is to be appreciated that a clinical diagnosis may be made based on different measurements (e.g., FPG) and/or observations (e.g., weight gain, neuropathy, and sleep apnea) without departing from the spirit or scope of the described techniques.

In one or more implementations, the outcome data may include or be usable as labels. For example, a value of an independent diagnostic measure may be used to label the glucose measurements of a respective user of the user population. Alternately or in addition, labels indicative of observed adverse effects experienced by the respective user may be used to label the glucose measurements of the respective user.
Alternately or in addition, labels indicative of a clinical diagnosis may be used to label the glucose measurements of the respective user; e.g., the glucose measurements of a user clinically diagnosed with prediabetes may be associated with a 'prediabetes' label, whereas the glucose measurements of a different user clinically diagnosed with diabetes may be associated with a 'diabetes' label. Although the independent diagnostic measures, the observed adverse effects, and the clinical diagnoses are depicted in the example, it is to be appreciated that the outcome data may include data describing different, additional, or fewer aspects of users of the user population without departing from the spirit or scope of the described techniques.

As depicted in the illustrated example, the glucose measurements and the outcome data of users of the user population are communicated or otherwise provided to the observation analysis platform and/or the storage device. In addition to the glucose measurements and the outcome data, additional data describing other aspects of users of the user population may be obtained by the observation analysis platform and/or the storage device. By way of example, this additional data may include demographic data (e.g., age, gender, ethnicity), medical history data (e.g., height, weight, body mass index (BMI), body fat percentage, presence or absence of various conditions), stress data, nutrition data, exercise data, prescription data, height and weight data, occupation data, and so forth. These types of additional data are merely examples, and the additional data may include more, fewer, or different types of data without departing from the spirit or scope of the techniques described herein. In one or more implementations, the observation analysis platform and/or the storage device may obtain such additional data (or at least some of the additional data) about the person as well as about the users of the user population.

Notably, the illustrated example depicts the observation analysis platform and the storage device separately and also depicts a dashed arrow between the storage device and the observation analysis platform. Generally speaking, this arrow represents that the data maintained in the storage device may be obtained by the observation analysis platform from the storage device. Said another way, the data maintained by the storage device may be provided to the observation analysis platform. As discussed above, the storage device may store the glucose measurements of the person as well as the glucose measurements and the outcome data of the user population.

In one or more implementations, the observation analysis platform and the storage device may correspond to a same entity, such as a provider of glucose monitoring devices (e.g., the wearable glucose monitoring device) and services related to glucose monitoring. In such implementations, the observation analysis platform and the storage device may be implemented in the "cloud," across multiple computing devices (e.g., servers) and storage resources allocated to or otherwise associated with the entity (e.g., via a subscription or ownership).
To this end, the glucose measurements of the person, as well as the glucose measurements and the outcome data of the user population, may be obtained by the observation analysis platform from the storage device in the ways that a server associated with a service provider obtains data from storage associated with that service provider.

In other implementations, the observation analysis platform and the storage device may correspond to different entities. By way of example, the storage device may correspond to a first entity, such as a computing device (e.g., mobile phone or tablet device) of the person, and the observation analysis platform may correspond to a second entity, such as a provider of glucose monitoring devices and services related to glucose monitoring. In this example, the observation analysis platform may be implemented, at least in part, as an application of the second entity running on the person's computing device. Alternately or additionally, the observation analysis platform may be implemented using a server device of the second entity. In the application implementation, the second entity's application may obtain one or more of the glucose measurements of the person, the glucose measurements of the user population, or the outcome data of the user population from the storage device implemented locally on the computing device, e.g., over a bus or other local transmission means of the computing device. In the server implementation, the server of the second entity may obtain data from the storage device, implemented on the computing device, over one or more networks, such as the Internet.

In another example where the observation analysis platform and the storage device correspond to different entities, the storage device may correspond to a first entity, such as a provider of the glucose monitoring devices and services related to glucose monitoring (or limited services related to glucose monitoring). In this latter example, the observation analysis platform may correspond to a second, different entity, such as a service provider, e.g., a data partner of the first entity. In this example, the second entity may be considered a "third party" in relation to the entity corresponding to the storage device (and the wearable glucose monitoring device). When it corresponds to a data partner, the observation analysis platform may obtain data from the first entity (i.e., the storage device) in accordance with one or more legal agreements between the first and second entities. Provision of the data maintained in the storage device to the observation analysis platform may be controlled by an application programming interface (API).

In this type of scenario, such an API may be considered an "egress" for data, such as the glucose measurements and the outcome data. By "egress" it is meant that the flow of data is generally outward from the first entity to a third party (e.g., the second entity). In the context of data provision, an API may expose one or more "calls" (e.g., specific formats for data requests) to the third party. By way of example, an API may expose those calls to the third party after the third party enters into an agreement, e.g., with a business corresponding to the first entity, that allows the third party to obtain data from the storage device via the API. A minimal sketch of such an API exchange follows.
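The following Python sketch illustrates, purely hypothetically, what a data-partner "call" against such an egress API might look like. The endpoint URL, authentication header, query parameters, and response schema are all invented for illustration; a real agreement between the entities would define the actual request and response formats.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical egress endpoint; not an actual provider URL.
BASE_URL = "https://api.example-glucose-provider.com/v1/egress/glucose"

def fetch_glucose(user_id: str, start: str, end: str, api_key: str) -> list:
    """Request glucose measurements for one user over a date range."""
    query = urllib.parse.urlencode({"user": user_id, "start": start, "end": end})
    request = urllib.request.Request(
        f"{BASE_URL}?{query}",
        headers={"Authorization": f"Bearer {api_key}"},  # assumed auth scheme
    )
    with urllib.request.urlopen(request) as response:
        payload = json.load(response)
    # Assumed response format: {"measurements": [{"t": ..., "mg_dl": ...}, ...]}
    return payload["measurements"]
```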
As part of this agreement, the third party may agree to exchange payment in order to obtain data from the first entity. Alternately or additionally, the third party may agree to exchange data that it produces, e.g., via an associated device, in order to obtain data from the first entity. Parties that enter into agreements to obtain data (e.g., the glucose measurements) from the first entity via an API may be referred to as "data partners." In operation, the API allows the third party to make a request for data (e.g., glucose measurements and/or the outcome data) maintained in the storage device in a specific request format, and, if the request is made in the specific format, the first entity provides the requested data in a specific response format. The requested data may be provided in the specific response format in one or more communications (e.g., packets) over a network, e.g., the Internet.

Examples of a second entity that may be considered a "third party" include various service providers, such as service providers that provide one or more health monitoring or tracking services, fitness-related services, telemedicine services, medical testing laboratory services, and so forth. Indeed, the storage device and the observation analysis platform may be implemented using a variety of devices and/or resources (e.g., computing, communication, storage, etc.), and divisions (or not) between the entities corresponding to the various devices and/or resources may differ from those described above without departing from the spirit or scope of the techniques described herein.

Regardless, the observation analysis platform is configured to obtain the glucose measurements of the person as well as the glucose measurements and the outcome data of the user population and to process them in accordance with the described techniques. Using the glucose measurements and the outcome data of the user population, for example, the prediction system is configured to generate one or more machine learning models, e.g., regression models, neural networks, or reinforcement learning agents. Once one or more such models are generated, the prediction system is configured to use those one or more models to process the glucose measurements of the person to predict the diabetes classification for the person.

In the illustrated example, the prediction system is shown outputting a notification. The notification may be based on the diabetes classification or include the diabetes classification. Consider an example in which the diabetes classification output by the prediction system's one or more machine learning models is a label that indicates the person is predicted to have diabetes, e.g., a '1' (where a '0' indicates no diabetes) or a text label such as 'diabetes'. In this case, simply providing the diabetes classification to the person may be undesirable. When such information is not delivered with pertinent educational material, or is not delivered in an appropriate setting and in a personalized manner, provision of such information may affect the person in a variety of negative ways, such as by causing confusion, anger, depression, and so on.
Accordingly, the notification may simply be based on the diabetes classification, such as by notifying the person that the results of the observation period are available and instructing him or her to schedule an appointment with an associated health care provider.

By way of contrast, providing the diabetes classification to a health care provider of the person may not be undesirable. Instead, providing the diabetes classification to the health care provider may be preferred (in contrast to not providing the classification) so that the health care provider can suitably inform the person and also develop a treatment plan for the person. In such scenarios, the notification may simply correspond to the diabetes classification. Alternately, notifications communicated to health care providers (or others) may be configured as reports that include the diabetes classification along with other information, such as traces of the person's glucose measurements over the observation period, measures derived from those glucose measurements, recommendations for treatment (e.g., learned from historical data of the user population), and so forth. Examples of these notifications are discussed in more detail in relation to FIGS. 6 and 7. In the context of predicting a glucose classification for the person from the person's glucose measurements, consider the following discussion of FIG. 4.

FIG. 4 depicts an example of an implementation of the prediction system of FIG. 1 in greater detail in which a diabetes classification is predicted using machine learning.

In the illustrated example, the prediction system is shown obtaining the glucose measurements, e.g., from the storage device. Here, the glucose measurements correspond to the person. In this example, the prediction system is depicted including a preprocessing manager and a machine learning model, which are configured to generate a prediction of the diabetes classification based on the glucose measurements of the person. Although the prediction system is depicted including these two components, it is to be appreciated that the prediction system may have more, fewer, and/or different components to generate the diabetes classification based on the glucose measurements without departing from the spirit or scope of the described techniques.

In one or more implementations, the glucose measurements are configured as time-sequenced data, such that each of the glucose measurements corresponds to a timestamp. For example, the glucose measurements may be configured as one or more glucose "traces." Although the glucose measurements may generally be received or maintained in order, e.g., by the observation analysis platform from the wearable glucose monitoring device and/or an external device, in some instances one or more of the glucose measurements may not be received or maintained in the same order as the glucose measurements are produced. For instance, packets with the glucose measurements may be received out of order. Thus, the order of receipt may not chronologically match the order in which the glucose measurements are produced by the wearable glucose monitoring device. In addition or alternately, transmissions including one or more of the glucose measurements may be corrupted.
Indeed, there may be a variety of reasons why the glucose measurements, as obtained by the prediction system, are not entirely in time order.

To this end, the preprocessing manager may be configured to determine a time-ordered sequence of the glucose measurements according to their respective timestamps. Due to corruption and communication errors, the glucose measurements obtained by the prediction system may not only be out of time order but may also be missing one or more measurements—there may be gaps in the time-ordered sequence where one or more measurements are expected. In these instances, the preprocessing manager may be further configured to interpolate the missing glucose measurements and incorporate them into the time-ordered sequence. Although this functionality is discussed, in one or more implementations the glucose measurements as obtained by the prediction system may already be in time order (e.g., one or more time series of the glucose measurements), such that ordering those measurements and interpolating missing measurements is not performed by the preprocessing manager.

In general, the preprocessing manager is configured to preprocess the glucose measurements to generate data (e.g., one or more feature vectors) that can be provided as input to the machine learning model and data that can be reported in connection with the diabetes classification (e.g., included as part of the notification). In the illustrated example, the preprocessing manager is depicted outputting extracted glucose features. The preprocessing manager may determine the extracted glucose features by processing the glucose measurements according to one or more predetermined algorithms or functions. Each of the different extracted glucose features may correspond to a different algorithm or function with which the preprocessing manager processes the glucose measurements.

Here, the extracted glucose features include time over threshold measures, rate-of-change measures, and observation period anomalies. It is to be appreciated that the extracted glucose features may vary from the combination illustrated without departing from the spirit or scope of the described techniques. For example, the extracted glucose features may also or alternately include one or more of mean glucose (e.g., over the duration of the observation period or daily), median glucose, interquartile range of the glucose measurements, variance of the glucose measurements, nocturnal hyperglycemia, differences between average glucose during waking hours and average glucose during sleeping hours, day-to-day variability of glucose, day-to-night variability of glucose, statistical distributions of glucose, threshold percentile of glucose (e.g., a statistically significant threshold percentile, such as the 94th percentile or greater), the 10th-to-90th percentile range of glucose, standard deviation of glucose, mean of daily differences (MODD), and mean amplitude of glycemic excursions (MAGE), to name just a few.

Notably, the time over threshold measures and the rate-of-change measures are measures that simply cannot be determined using conventional diagnostic tests.
Instead, the time over threshold measures and the rate-of-change measures are necessarily time based, requiring that the data points (i.e., the glucose measurements) each be associated with a point in time and sequenced according to their times. Generally, the time over threshold measures correspond to an amount of time during the observation period that the glucose measurements of the person are above a glucose threshold. To compute the time over threshold measures, for instance, the preprocessing manager may identify sequentially consecutive glucose measurements over a glucose threshold (e.g., based on a comparison of the measurements to the threshold) and determine a difference in time between a first of those measurements over the threshold and a last of the measurements over the threshold. In the absence of times associated with each of the measurements, and absent producing the measurements at increments of time that are suitably granular to capture such features, determining an amount of time over a glucose threshold simply is not possible. It is to be appreciated that an amount of time over a threshold contrasts with a number of measurements over a threshold or a percentage of measurements over a threshold, which the preprocessing manager may also be configured to determine.

Generally, the rate-of-change measures correspond to a difference in glucose measurements of the user over a unit of time. To determine the rate-of-change measures, the preprocessing manager may determine, between at least two measurements, a difference in the measured amount of glucose and a difference in time, such that a change in amount of glucose over some unit of time may be determined, e.g., mg/dL change per minute. It is to be appreciated that such rates of change may be determined using more than two of the glucose measurements. Regardless, without time sequencing of the glucose measurements, the rate-of-change measures simply cannot be determined. These rate-of-change measures may indicate how quickly the person's body reacts when his or her glucose spikes as a result of eating carbohydrates—this may further be indicative of the person's insulin response. In summary, the time sequencing of the glucose measurements enables a variety of measures to be determined that cannot be determined using the data from other diagnostic tests.

Examples of the time over threshold measures may include, by way of example and not limitation, time over 130 mg/dL (which corresponds to an amount of time during the course of the observation period where the person's glucose level is over 130 mg/dL) and time over 140 mg/dL (which corresponds to an amount of time during the course of the observation period where the person's glucose level is over 140 mg/dL). In one or more implementations, a time over threshold measure may correspond to an amount of time over a threshold that may range from 120 mg/dL to 240 mg/dL. Further, the time over threshold measures may include just a single time over threshold measure, such as time over 140 mg/dL, or multiple measures, such as an amount of time over 130 mg/dL, an amount of time over 140 mg/dL, and an amount of time over 150 mg/dL.
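A minimal Python sketch of the two time-based measures described above follows, assuming a small in-memory trace of (timestamp, mg/dL) pairs that may arrive out of order. The toy values and the simple end-to-end rate-of-change calculation are illustrative only; an actual preprocessing manager might compute rates over many measurement pairs and fill gaps by interpolation.

```python
from datetime import datetime, timedelta

# Toy trace: (timestamp, mg/dL) pairs, deliberately out of order.
t0 = datetime(2024, 1, 1, 8, 0)
trace = [
    (t0 + timedelta(minutes=10), 150.0),
    (t0,                         110.0),
    (t0 + timedelta(minutes=5),  145.0),
    (t0 + timedelta(minutes=20), 120.0),
]

def time_ordered(trace):
    """Sort measurements by timestamp, as the preprocessing step describes."""
    return sorted(trace, key=lambda point: point[0])

def time_over_threshold(trace, threshold_mg_dl=140.0):
    """Sum the durations of runs of consecutive measurements above the threshold."""
    ordered = time_ordered(trace)
    total = timedelta(0)
    run_start = None
    for i, (ts, value) in enumerate(ordered):
        if value > threshold_mg_dl:
            run_start = run_start or ts
            # Close the run at the last above-threshold sample.
            if i == len(ordered) - 1 or ordered[i + 1][1] <= threshold_mg_dl:
                total += ts - run_start
                run_start = None
    return total

def rate_of_change(trace):
    """Average mg/dL change per minute between the first and last measurements."""
    ordered = time_ordered(trace)
    (t_first, v_first), (t_last, v_last) = ordered[0], ordered[-1]
    minutes = (t_last - t_first).total_seconds() / 60.0
    return (v_last - v_first) / minutes

print(time_over_threshold(trace))          # 0:05:00 for this toy trace
print(round(rate_of_change(trace), 2))     # 0.5 mg/dL per minute
```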
Other time-based threshold measures that the preprocessing manager may determine, as part of or in addition to the extracted glucose features, include a time within range measure, which corresponds to an amount of time during the course of the observation period that the person's glucose is between a first glucose level and a second glucose level that is less than the first glucose level, corresponding to upper and lower limits of a range, respectively. Conversely, the preprocessing manager may determine a time outside range measure, which corresponds to an amount of time over the course of the observation period that the person's glucose measurements are outside such a range. Examples of the rate-of-change measures may include, for instance, an average rate-of-change after a glucose high measure, an average rate-of-change after carbohydrate consumption measure, rate-of-change over times of day (e.g., at night), and so on.

The preprocessing manager may determine the observation period anomalies using any of a variety of known statistical anomaly detection techniques, including, for example, unsupervised anomaly detection techniques, supervised anomaly detection techniques, and semi-supervised anomaly detection techniques. Additionally, the preprocessing manager may determine one or more statistical measures in relation to time of day, such as nighttime mean and median glucose measurements. Again, such measures cannot be determined without the glucose measurements having corresponding times.

In addition to determining these different features of the glucose, the preprocessing manager may determine which of the glucose measurements are to serve as a basis for input to the machine learning model. In other words, the preprocessing manager may filter the glucose measurements by removing at least a portion of the glucose measurements from being input into the machine learning model. The preprocessing manager may then determine the extracted glucose features from the filtered glucose measurements.

By way of example, the preprocessing manager may select a subset of the measurements (e.g., measurements from the three "worst" days) of the glucose measurements, generate the extracted glucose features for the subset of measurements, and then generate input data for input to the machine learning model based on the extracted glucose features of the subset of measurements. Alternately, the preprocessing manager may select a subset of the days, where an x-number of the worst days are removed (or unselected for inclusion in the subset). Regardless, the unselected measurements and/or data corresponding to the unselected measurements may not be input to the machine learning model, in one or more implementations. The preprocessing manager may select which of the glucose measurements are to serve as a basis for the input data in a variety of ways, such as based on daily means of the glucose measurements (e.g., where the worst days are the days with the highest mean glucose, such as the three days with the highest mean glucose), based on performance of the wearable glucose monitoring device (e.g., the first and last days may be eliminated, or days may be eliminated due to receipt of device or sensor errors), and so on. A short sketch of this day-selection step follows.
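As a sketch of the day-selection step just described, the following Python function groups a trace by calendar day and keeps the days with the highest mean glucose. Treating the highest-mean days as the "worst" days is one of the selection bases named above; the function name and the default of three days are assumptions made for the sketch.

```python
from collections import defaultdict

def select_worst_days(trace, worst_day_count=3):
    """Keep only measurements from the days with the highest mean glucose.

    `trace` is an iterable of (datetime, mg/dL) pairs, as in the earlier sketch.
    """
    by_day = defaultdict(list)
    for timestamp, mg_dl in trace:
        by_day[timestamp.date()].append(mg_dl)
    daily_means = {day: sum(values) / len(values) for day, values in by_day.items()}
    worst_days = set(sorted(daily_means, key=daily_means.get, reverse=True)[:worst_day_count])
    # Return only the measurements produced on the selected days.
    return [(ts, value) for ts, value in trace if ts.date() in worst_days]
```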
In addition to data removal, the preprocessing manager may alternately or additionally replace or augment one or more of the glucose measurements with higher-fidelity measurements, such as measurements interpolated during the preprocessing.

Regardless of the particular glucose features extracted, the preprocessing manager is configured to generate input data for input to the machine learning model. In one or more implementations, this input data may be configured as a feature vector that represents one or more features. In one example, the input data may correspond to a single feature, such as a time over threshold measure, e.g., time over 140 mg/dL. In this example, the preprocessing manager may generate a feature vector that represents the amount of time the person's glucose was over the threshold during the observation period (or the percentage of time). In other examples, the input data may correspond to multiple features, such as a time over threshold measure and a mean glucose. Accordingly, the preprocessing manager may generate a feature vector that represents both the amount of time the person's glucose was over the threshold during the observation period (or the percentage of time) as well as the person's mean glucose during the observation period. It is to be appreciated that any of the above-discussed extracted glucose features (or other determinations) may be used in single-feature implementations and that any combination of those features (or determinations) may be used in multi-feature implementations.

In addition to features extracted from the glucose measurements, the preprocessing manager may also incorporate features from additional data describing different aspects of the person. As mentioned above, this additional data may include additional analyte data (e.g., lactate measurements), environmental data (e.g., the person's temperature), already-observed adverse effects data (e.g., data describing that any of a variety of adverse effects associated with diabetes have already been observed), demographic data (e.g., describing age, gender, ethnicity) collected through a questionnaire or otherwise obtained, medical history data, stress data, nutrition data, exercise data, prescription data, height and weight data, occupation data, and so forth. In other words, the data provided as input to the machine learning model, or to an ensemble of machine learning models, may, in one or more implementations, describe a variety of aspects about the person (e.g., as features of input feature vectors) in addition to glucose-based features without departing from the spirit or scope of the described techniques. In such scenarios, the machine learning model is trained using similar historical data of the user population.

Although the illustrated example depicts the preprocessing manager preprocessing the glucose measurements to produce the extracted glucose features, and using those features as input to the machine learning model (e.g., feature vectors indicative of extracted features), in one or more implementations the preprocessing manager may generate feature vectors that represent (alone or with other features) one or more time series of the glucose measurements (e.g., traces). A feature-vector sketch along these lines appears below.
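The following sketch, using NumPy for illustration, assembles the single-feature and multi-feature vectors described above. The particular features chosen (time over 140 mg/dL, mean glucose, age, BMI) and the function name are assumptions of the sketch rather than a prescribed input schema.

```python
import numpy as np

def build_feature_vector(hours_over_140, mean_glucose, age=None, bmi=None):
    """Assemble one input example for the model.

    The glucose-based entries come from the preprocessing step; the optional
    demographic/medical-history entries illustrate the "additional data" case.
    """
    features = [hours_over_140, mean_glucose]
    if age is not None and bmi is not None:
        features += [age, bmi]
    return np.asarray(features, dtype=np.float32)

# Single-feature and multi-feature variants described in the text:
single_feature = np.asarray([6.5], dtype=np.float32)            # time over 140 mg/dL only
multi_feature = build_feature_vector(6.5, 131.0, age=52, bmi=29.4)
print(single_feature, multi_feature)
```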
Thus, the input data to the machine learning model may correspond to, or otherwise include, a vectorized time series of the glucose measurements or multiple vectorized time series of the glucose measurements. In implementations where time-series glucose measurements are vectorized, the machine learning model may, for example, correspond to a neural network. In implementations where extracted glucose features such as time over threshold measures and statistical features (e.g., interquartile range) are vectorized, the machine learning model may, for example, correspond to a regression model, e.g., a linear or logistic regression model.

Responsive to receiving the input data from the preprocessing manager, the machine learning model is configured to generate and output the diabetes classification. Specifically, the machine learning model may be trained to output the diabetes classification. By way of example, the machine learning model may be trained, or an underlying representation may be learned, based on one or more training approaches and using historical glucose measurements and outcome data from which diabetes classifications can be derived, such as the glucose measurements and the outcome data of the user population. In accordance with the described techniques, the machine learning model may represent one or more models, including, for instance, a model trained to predict whether the person has diabetes and, in one or more implementations, an additional model to predict whether the person does not have diabetes (i.e., a diabetes classification that can be used to screen the person as not having diabetes with a degree of certainty). Each of the models of a multi-model configuration may receive differently configured input data that describes different aspects, e.g., feature vectors with features that represent different aspects related to diabetes. It is to be appreciated that in other implementations a single model may be configured to generate both types of predictions. In one or more implementations, the machine learning model may be configured as an ensemble of models, each of which generates a different prediction related to diabetes than the other models.

The diabetes classification may classify the person in terms of one or more outcomes, which correspond to the outcomes described by the outcome data used to train the machine learning model. In implementations where the machine learning model is trained, or the model is learned, using the clinical diagnoses of the user population, the machine learning model may classify the person's glucose measurements into a class corresponding to one of the diagnoses, e.g., diabetes, prediabetes, or no diabetes. To this end, a health care provider may use the diabetes classification to treat the person, or develop a treatment plan for the person, similarly to how the health care provider would do so if the person were diagnosed with diabetes according to conventional techniques, e.g., HbA1c, FPG, and/or 2 Hr-PG.
Similarly, where the machine learning model is trained, or the model is learned, using the observed adverse effects of the user population, the machine learning model may output probabilities that the person's glucose measurements are indicative of the person experiencing the different adverse effects, e.g., a probability from zero to one that the person will experience any of the variety of adverse effects associated with different types of diabetes. In some implementations, there may be a machine learning model trained or built for each effect, such that the machine learning model represents an ensemble of models capable of generating predictions regarding whether the person will or will not experience each effect—or a probability of the person experiencing each effect.

In implementations where the machine learning model is trained, or the model is learned, using the independent diagnostic measures of the user population, the machine learning model may output a prediction of a value of a particular diagnostic measure, e.g., an HbA1c value, an FPG value, a 2 Hr-PG value, or an OGTT value. The diabetes classification that is output depends largely on how the machine learning model is trained, and the information the diabetes classification specifically represents—such as a label indicative of whether or not the person has diabetes or is at risk for having diabetes (e.g., a diabetes label, a prediabetes label, or a no-diabetes label), a label indicative of whether or not the person has a particular type of diabetes (e.g., Type 1 diabetes, Type 2 diabetes, or GDM), a probability, or a measure value—likewise depends on the training. Additionally, different types of machine learning models may be better suited to generate predictions in relation to different types of outcomes that can be represented by the diabetes classification. In the context of training the machine learning model, consider now the following discussion of FIG. 5.

FIG. 5 depicts an example of an implementation of the prediction system in greater detail in which a machine learning model is trained to predict diabetes classifications.

In the illustrated example, the prediction system includes a model manager, which manages the machine learning model. In accordance with the described techniques, the machine learning model may represent a single machine learning model or an ensemble of multiple models. The machine learning model may correspond to different types of machine learning models, where the underlying models are learned using different approaches, such as supervised learning, unsupervised learning, and/or reinforcement learning. By way of example, these models may include regression models (e.g., linear, polynomial, and/or logistic regression models), classifiers, neural networks, and reinforcement learning based models, to name just a few.

The machine learning model may be configured as, or include, other types of models without departing from the spirit or scope of the described techniques. These different machine learning models may be built or trained (or the model otherwise learned), respectively, using different data and different algorithms due, at least in part, to different architectures and/or learning paradigms. Accordingly, it is to be appreciated that the following discussion of the model manager's functionality is applicable to a variety of machine learning models.
For explanatory purposes, however, the functionality of the model manager will be described generally in relation to a statistical model and a neural network.

Broadly speaking, the model manager is configured to manage machine learning models, including the machine learning model. This model management includes, for example, building the machine learning model, training the machine learning model, updating this model, and so forth. Specifically, the model manager is configured to carry out this model management using, at least in part, the wealth of data maintained in the storage device. As illustrated, this data includes the glucose measurements and the outcome data of the user population. Said another way, the model manager builds the machine learning model, trains the machine learning model (or otherwise learns the underlying model), and updates this model using the glucose measurements and the outcome data of the user population. In implementations where the machine learning model receives data in addition to the glucose measurements, or the extracted features of those measurements, as input, the model manager also uses such additional data of the user population to build, train, and update the machine learning model.

In one or more implementations, the model manager generates training data to train the machine learning model or to otherwise learn parameters of the model. Broadly speaking, generation of the training data is dependent on the diabetes classification the machine learning model is designed to output. This training data will be different, for instance, depending on whether the machine learning model is configured to generate predictions of diagnostic measures of a person, adverse effects the person is predicted to experience, or clinical diagnoses of the person. Regardless of the outcome to be predicted, generating the training data may include time sequencing the glucose measurements of the user population (if the glucose measurements are not already time-sequenced) and extracting glucose features from those time-sequenced glucose measurements. The model manager may leverage the functionality of the preprocessing manager to form time-sequenced glucose measurements and to extract glucose features, for instance, in a similar manner as discussed above in relation to generating the extracted glucose features.

Generating the training data also includes associating the traces of the glucose measurements, or features extracted from the glucose measurements (e.g., similar to the extracted glucose features but for the glucose measurements of the user population), with the outcome data of a respective user of the user population. Given this, a glucose trace or an extracted glucose feature corresponding to a particular user is associated with the outcome data of the particular user. By way of example, a particular user may have been clinically diagnosed with diabetes, and his or her glucose may have been above a threshold for an amount of time corresponding to 27% of an observation period. Given this, the model manager may form a training instance that includes an input portion with a value indicating the user's time above threshold is 27% and an associated output portion with a value indicating the person has diabetes, e.g., '1' or some other corresponding value. A sketch of forming such training instances and fitting a simple model to them follows.
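To make the training-instance example concrete, the following sketch pairs toy input portions (fraction of time above threshold and mean glucose) with toy output portions (a '1' diabetes label or a '0' no-diabetes label) and fits a logistic regression, one of the statistical-model options named in this description. The data values are fabricated purely for illustration, scikit-learn is assumed to be available, and the actual model type, features, and training approach may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training instances: input portion = [fraction of time above threshold,
# mean glucose]; output portion = clinical diagnosis label (1 = diabetes).
X_train = np.array([
    [0.27, 162.0],
    [0.04, 104.0],
    [0.18, 141.0],
    [0.02,  98.0],
    [0.31, 170.0],
    [0.07, 112.0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

# Fit a logistic regression; fitting estimates the weights applied to the
# independent variables (the extracted glucose features).
model = LogisticRegression().fit(X_train, y_train)

# In operation, the person's extracted glucose features are fed to the model.
person_features = np.array([[0.22, 150.0]])
print(model.predict(person_features))        # e.g., [1] -> 'diabetes' label
print(model.predict_proba(person_features))  # class probabilities
```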
In one or more implementations, the model manager may build a statistical model by extracting from the outcome data observed values or labels corresponding to at least one type of outcome, such as values of the clinical diagnoses, e.g., 'diabetes', 'prediabetes', and 'no diabetes', or values indicative of those labels. Once built, the statistical model is configured to predict values or labels of this at least one outcome type and output them as the diabetes classification—values or labels indicative of the at least one outcome type do not serve as input to the model. In scenarios where the statistical model is a regression model, for instance, the outcome values or labels may correspond to one or more dependent variables. In contrast, one or more of the glucose features extracted from the glucose measurements may serve as input to the model. Thus, in scenarios where the machine learning model is configured as a statistical model, the one or more glucose features may correspond to one or more explanatory (or independent) variables.

Given the set of outcome values or labels from the outcome data and the set of values of features extracted from the glucose measurements, the model manager uses one or more known approaches for "fitting" these sets of values to an equation so that it produces the outcome values or labels responsive to input of the extracted glucose feature values, within some tolerance. Examples of such fitting approaches include using a least squares approach, using a least absolute deviations regression, minimizing a penalized version of the least squares cost function (e.g., ridge regression or lasso), and so forth. By "fitting" it is meant that the model manager estimates model parameters for the equation using the one or more approaches and these sets of values of the training data.

The estimated parameters include, for instance, weights to apply to values of the independent variables (e.g., the extracted glucose features) when they are input to the machine learning model during operation. The model manager incorporates these parameters, estimated from fitting the observed values of the user population, into the equation to generate the machine learning model as a statistical model. In operation, the prediction system inputs values of the independent variables (e.g., values of one or more of the extracted glucose features) into the statistical model (e.g., as one or more vectors or a matrix), the statistical model applies the estimated weights to these input values, and the model then outputs values or labels for the one or more dependent variables. This output corresponds to the diabetes classification.

In the following discussion, the capabilities of the model manager to build and train machine learning models are discussed in relation to a configuration of the machine learning model corresponding to, or including, at least one neural network.

With respect to the training data used, the model manager may, as noted above, generate instances of the training data including an input portion and an expected output portion, i.e., a ground truth for comparison to the model's output during training. The input portion of a training data instance may correspond to one or more traces of glucose measurements and/or one or more extracted features of the glucose measurements of a particular user.
The output portion may correspond to one or more values of the particular user's outcome data, e.g., a value indicative of a clinical diagnosis of diabetes or a value of the user's observed HbA1c. Again, whether traces are used for training, as well as which extracted features are used for training and which outcome data is used for training, depends on the data the machine learning model is designed (and trained) to receive as input and the data it is designed (and trained) to output.

The model manager uses the training input portions along with the respective expected output portions to train the machine learning model. In the context of training, the model manager may train the machine learning model by providing an instance of data from the set of training input portions to the machine learning model. Responsive to this, the machine learning model generates a prediction of a diabetes classification, such as by predicting a value indicative of a clinical diagnosis of diabetes or a value of the user's observed HbA1c. The model manager obtains this training prediction from the machine learning model as output and compares the training prediction to the expected output portion that corresponds to the training input portion. For example, if the machine learning model outputs a diabetes classification indicating that the user has diabetes, then this prediction is compared to the output data (e.g., which classifies the user as having diabetes or no diabetes) to determine whether the prediction was correct. Based on this comparison, the model manager adjusts internal weights of the machine learning model so that the machine learning model can substantially reproduce the expected output portion when the respective training input portion is provided as input in the future.

This process of inputting instances of the training input portions into the machine learning model, receiving training predictions from the machine learning model, comparing the training predictions to the expected (observed) output portions that correspond to the input instances (e.g., using a loss function such as mean squared error), and adjusting internal weights of the machine learning model based on these comparisons can be repeated for hundreds, thousands, or even millions of iterations—using an instance of training data per iteration.

The model manager may perform such iterations until the machine learning model is able to generate predictions that consistently and substantially match the expected output portions. The capability of a machine learning model to consistently generate predictions that substantially match expected output portions may be referred to as "convergence." Given this, it may be said that the model manager trains the machine learning model until it "converges" on a solution, e.g., until the internal weights of the model have been suitably adjusted through training iterations so that the model consistently generates predictions that substantially match the expected output portions.

As noted above, the machine learning model may be configured to receive input in addition to traces of glucose measurements and/or features extracted from those measurements in one or more implementations.
In such implementations, the model manager may form training instances that include the training input portion, the respective expected output portion, and also additional input data describing any other aspects of the user population being used to predict diabetes classifications, e.g., demographic information, medical history, exercise, and/or stress. This additional data, as well as the training input portion, may be processed by the model manager according to one or more known techniques to produce an input vector. This input vector, describing the training input portion as well as the other aspects, may then be provided to the machine learning model. In response, the machine learning model may generate a prediction of a diabetes classification in a similar manner as discussed above, such that the prediction can be compared to the expected output portion of the training instance and the weights of the model adjusted based on the comparison.

Once the machine learning model is trained, it is used to predict diabetes classifications as discussed above and below. As also noted, the diabetes classifications output by the machine learning model may serve as a basis for a variety of information provided to the person in relation to whom the prediction is generated, as well as to others associated with the person, such as the person's health care provider, a caregiver, or a telemedicine or health tracking service, and so forth. In the context of information that may be output based on the predictions, consider the following discussion of FIGS. 6-8.

FIG. 6 depicts an example of an implementation of a user interface displayed for notifying a user about a diabetes prediction generated based on glucose measurements produced during an observation period.

The illustrated example includes a computing device displaying a user interface. In this example, the user interface may correspond to the notification. This example represents a scenario where the notification (i.e., the user interface) is generated based on the diabetes classification but does not include the diabetes classification. Here, the computing device may be associated with the person whose glucose measurements are collected during the observation period and in relation to which the diabetes classification is generated (or the computing device may be associated with another person associated with the person, such as a caregiver).

To this end, the user interface may be displayed to notify the person (or an associated person) about the diabetes classification without revealing the predicted classification. This is because output of the diabetes classification to the actual person the classification corresponds to may affect the person in a variety of negative ways, such as by causing confusion, anger, depression, and so on. In this example, the user interface includes a summary about the processing of the person's glucose measurements. The user interface also includes a recommendation of actionable behavior based on the diabetes classification—in this case, recommending that the person follow up with his or her health care provider. In addition, the user interface includes graphical user interface elements that are selectable to carry out the recommended behavior.
Each of the user interface elements may be selectable to cause a follow up appointment with a health care provider of the person to be scheduled, such as an appointment at a physical location of the health care provider or an appointment via a telephone or video conference, e.g., in connection with remote health care and/or a telemedicine service. It is to be appreciated that notifications based on the diabetes classification , but that do not include the classification may be configured in different ways without departing from the spirit or scope of the described techniques. FIG. 7 700 depicts an example of an implementation of a user interface displayed for reporting a diabetes prediction of a user along with other information produced in connection with the diabetes prediction. 700 702 704 704 314 700 116 700 706 116 702 102 116 FIG. 6 The illustrated example includes display device displaying user interface configured as a report. In this example, the user interface may correspond to the notification . In contrast to the example depicted in , this example represents a scenario where the notification includes the diabetes classification . In this illustrated example , graphical diagnosis element represents or otherwise indicates the diabetes classification . Here, the display device may be associated with a health care provider associated with the person whose glucose measurements are collected during the observation period and in relation to which the diabetes classification is generated. 704 116 116 700 708 710 102 110 704 110 To this end, the user interface may be displayed to report the diabetes classification to the health care provider and to report additional information that may be pertinent to the classification. In operation, a health care provider may independently analyze the additional information reported and provide a different diagnosis from the one indicated by the diabetes classification . In this example , the additional information includes glucose traces , . Those traces represent two days' worth of the person 's glucose measurements collected over the observation period. The user interface is also depicted with controls that may allow a user to navigate to other glucose measurements collected, such as traces corresponding to previous or subsequent days of the observation period. 704 712 406 402 102 110 706 704 714 716 404 404 102 102 The user interface also includes graphical glucose feature elements , which represent or otherwise indicate one or more of the extracted glucose features determined by the preprocessing manager based on the person 's glucose measurements . In addition to a predicted clinical diagnosis, as indicated by the graphical diagnosis element , the user interface also includes predicted adverse effects elements and probability elements . The inclusion of these elements represents that the machine learning model may be configured (e.g., via configuration as an ensemble of models and/or based on architecture and training) to generate predictions of more than one type of diabetes classification. By way of example, the machine learning model may be configured to predict a clinical diagnosis of the person , values for one or more of a plurality of independent diagnostic measures (e.g. HbA1c, FPG, 2 Hr-PG, and OGTT), and probabilities that the person will experience one or more of a plurality of adverse effects of diabetes. 
714 102 404 716 714 716 404 116 In particular, the predicted adverse effects elements correspond to adverse effects the diabetes classification indicates the person is more likely to experience than not, e.g., based on a probability output by the machine learning models of experiencing those effects being greater than 50%. It is to be appreciated that adverse effects for which the machine learning model predicts any probability at all of occurring may also be output in one or more scenarios along with the corresponding probabilities. The probability elements include the probabilities that the adverse effects indicated by the elements will occur. The probabilities indicated by these probability elements may be output by the machine learning model in one or more implementations. It is to be appreciated that reports that include the diabetes classification may be configured in different ways without departing from the spirit or scope of the described techniques, such as a document suitable for printing. FIG. 8 800 depicts an example of an implementation of a user interface displayed for collecting additional data that can be used as input to machine learning models for generating a diabetes prediction. 800 802 804 800 804 102 110 110 406 404 404 304 804 304 The illustrated example includes computing device displaying user interface . In this example , the user interface may be displayed to collect data about the person in addition to the glucose measurements collected during the observation period. This additional data, along with traces of the glucose measurements and/or one or more of the extracted glucose features , may be provided to the machine learning model as input, i.e., the additional data may be represented in features of a feature vector input to the model. To train the machine learning model , this additional data may also be collected from the users of the user population . Accordingly, the user interface may be displayed to the users of the user population to collect this additional data from those users, e.g., data describing demographics, medical history, exercise, and/or stress. 800 800 In the illustrated example , the user interface includes a variety of graphical elements with which a user may interact (e.g., select or enter values) to provide additional data about him- or her-self. However, it is to be appreciated that the included graphical elements are merely examples, and a user interface to collect such additional data may be configured in different ways to include more, fewer, or different elements that enable collection of various additional data without departing from the spirit or scope of the described techniques. Having discussed examples of details of the techniques for diabetes prediction using glucose measurements and machine learning, consider now some examples of procedures to illustrate additional aspects of the techniques. 114 402 404 502 This section describes examples of procedures for diabetes prediction using glucose measurements and machine learning. Aspects of the procedures may be implemented in hardware, firmware, or software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. 
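The more-likely-than-not reporting rule described above reduces to a simple probability filter; the effect names and probabilities below are hypothetical placeholders for whatever the models actually output.

# hypothetical per-effect probabilities output by the machine learning model(s)
predicted_effects = {
    "retinopathy": 0.62,
    "neuropathy": 0.48,
    "nephropathy": 0.71,
    "cardiovascular disease": 0.12,
}

# keep only effects the user is predicted more likely than not to experience
likely_effects = {name: p for name, p in predicted_effects.items() if p > 0.5}

for name, p in sorted(likely_effects.items(), key=lambda item: -item[1]):
    print(f"{name}: {p:.0%}")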
In at least some implementations the procedures are performed by a prediction system, such as the prediction system that makes use of the preprocessing manager , the machine learning model , and the model manager . FIG. 9 900 depicts a procedure in an example of an implementation in which a machine learning model predicts a diabetes classification based on glucose measurements of a user collected by a wearable glucose monitoring device during an observation period. 902 404 110 104 102 104 102 104 102 102 104 202 102 102 Glucose measurements of a user are obtained (block ). In accordance with the principles discussed herein, the glucose measurements are collected by a wearable glucose monitoring device during an observation period. By way of example, the machine learning model obtains glucose measurements of a user which are collected by the wearable glucose monitoring device worn by the person during an observation period. The wearable glucose monitoring device may be provided as part of an observation kit, for instance, for the purpose of monitoring the person 's glucose. Regardless of how the wearable glucose monitoring device is obtained by the person , the device is configured to monitor glucose of the person during an observation period, which lasts for a time period generally spanning multiple days. The wearable glucose monitoring device may be configured with a sensor , for instance, which may be inserted subcutaneously into skin of the person and used to measure glucose in the person 's blood. 102 202 202 102 202 102 202 102 Although discussed throughout as being inserted subcutaneously into the person 's skin, in one or more implementations, the sensor may not be inserted subcutaneously. In such implementations, the sensor may instead be disposed on the person 's skin or muscle. For example, the sensor may be a patch that adheres to the person 's skin for a period of time. This patch may then be peeled off. Alternately or additionally, a non-invasive glucose sensor may be optical based, e.g., using photoplethysmography (PPG). The sensor may be configured in a variety of ways to obtain measurements indicative of a person 's glucose without departing from the spirit or scope of the described techniques. 904 404 116 404 110 110 306 304 304 104 A diabetes classification of the user is predicted by processing the glucose measurements using one or more machine learning models (block ). In accordance with the principles discussed herein, the one or more machine learning models are generated based on historical glucose measurements and historical outcome data of a user population. By way of example, the machine learning model predicts the diabetes classification . The machine learning model generates this prediction by processing the glucose measurements based on patterns, learned during training, of the glucose measurements and outcome data of the user population . As noted above, the user population includes users that wear wearable glucose monitoring devices, such as the wearable glucose monitoring device . 906 404 116 116 116 102 The diabetes classification is output (block ). By way of example, the machine learning model outputs the diabetes classification . As discussed throughout, the diabetes classification may indicate whether it is predicted the person has diabetes or is predicted to experience adverse effects associated with diabetes. 
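As a rough illustration of the obtain/predict/output flow of blocks 902-906, consider the sketch below; the feature list and the scoring rule are placeholders standing in for a trained machine learning model rather than anything specified here.

import statistics

def extract_features(trace):
    # a few simple summaries of a glucose trace (assumed features, not a specification)
    mean = statistics.mean(trace)
    spread = statistics.pstdev(trace)
    time_above_180 = sum(g > 180 for g in trace) / len(trace)
    return mean, spread, time_above_180

def classify(features, threshold=0.5):
    # placeholder scoring rule standing in for the trained model of block 904
    mean, spread, above = features
    score = 0.004 * (mean - 100) + 0.8 * above + 0.002 * spread
    return "recommend provider follow-up" if score > threshold else "no flag raised"

# block 902: glucose measurements collected during an observation period (mg/dL)
trace = [95, 110, 160, 190, 210, 175, 140, 120, 185, 200]

# block 904: predict a classification; block 906: output it
print(classify(extract_features(trace)))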
The diabetes classification may also be used to generate one or more notifications or user interfaces based on the classification, such as a report directed to a health care provider that includes the diabetes classification (e.g., that the person is predicted to have diabetes) or a notification directed to the person that instructs the person to contact his or her health care provider. FIG. 10 1000 depicts a procedure in an example of an implementation in which a machine learning model is trained to predict a diabetes classification based on historical glucose measurements and outcome data of a user population. 1002 502 110 304 1004 502 306 308 304 310 304 312 304 Glucose measurements collected by wearable glucose monitoring devices worn by users of a user population are obtained (block ). By way of example, the model manager obtains the glucose measurements of users of the user population . Outcome data is obtained that describes one or more aspects of users of the user population that relate to diabetes (block ). By way of example, the model manager obtains the outcome data . In the examples discussed above, the outcome data describes example aspects, such as one or more of the independent diagnostic measures of the users of the user population , the observed adverse effects of the users of the user population , and the clinical diagnoses of the users of the user population . 1006 502 1002 1004 502 Instances of training data are generated that include a training input portion and an expected output portion (block ). In accordance with the principles discussed herein, the training input portion includes at least one of a trace of glucose measurements of a user or features of the user's glucose measurements. Further, the expected output portion includes one or more values of the outcome data that corresponds to the user. By way of example, the model manager generates instances of training data by associating the trace of glucose measurements or the features of the user's glucose measurements obtained at block with the one or more values of the user's outcome data obtained at block . In one or more implementations, the model manager “labels” the trace of glucose measurements or the features of the user's glucose measurements with one or more labels representative of the values of the outcome data corresponding to the user. 1008 1014 1008 1014 Here, blocks - may be repeated until a machine learning model is suitably trained, such as until the machine learning model “converges” on a solution, e.g., the internal weights of the model have been suitably adjusted due to training iterations so that the model consistently generates predictions that substantially match the expected output portions. Alternately or in addition, the blocks - may be repeated for a number of instances (e.g., all instances) of the training data. 1008 502 1006 404 A training input portion of an instance of training data is provided as input to the machine learning model (block ). By way of example, the model manager provides a training input portion of an instance of training data generated at block as input to the machine learning model . 1010 404 1008 502 404 A prediction of a diabetes classification is received as output from the machine learning model (block ). In accordance with the principles discussed herein, the prediction of the diabetes classification corresponds to a same aspect related to diabetes as the one or more values of the user's outcome data included in the training instance. 
By way of example, the machine learning model predicts a diabetes classification (e.g., a classification of the user in a ‘diabetes’ class, a ‘prediabetes’ class, or a ‘no diabetes class’ or a value indicative of one of those classes) based on the training input portion provided at block , and the model manager receives the diabetes classification as output of the machine learning model . 1012 502 1010 1006 502 404 The prediction of the diabetes classification is compared to the expected output portion of the instance of training data (block ). By way of example, the model manager compares the diabetes classification predicted at block to the expected output portion of the training instance generated at block , e.g., by using a loss function such as mean squared error (MSE). It is to be appreciated that the model manager may use other loss functions during training, to compare the predictions of the machine learning model to the expected output, without departing from the spirit or scope of the described techniques. 1014 502 404 502 Weights of the machine learning model are adjusted based on the comparison (block ). By way of example, the model manager may adjust internal weights of the machine learning model based on the comparing. In one or more implementations, the model manager may optionally leverage one or more hyperparameter optimization techniques during training to tune hyperparameters of the learning algorithm employed. Having described examples of procedures in accordance with one or more implementations, consider now an example of a system and device that can be utilized to implement the various techniques described herein. FIG. 11 1100 1102 114 114 1102 illustrates an example of a system generally at that includes an example of a computing device that is representative of one or more computing systems and/or devices that may implement the various techniques described herein. This is illustrated through inclusion of the prediction system at the platform level as well as at the individual computing device level. The prediction system may be implemented at one level or the other or at least partially at both levels. The computing device may be, for example, a server of a service provider, a device associated with a client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system. 1102 1104 1106 1108 1102 The example of the computing device as illustrated includes a processing system , one or more computer-readable media , and one or more I/O interfaces that are communicatively coupled, one to another. Although not shown, the computing device may further include a system bus or other data and command transfer system that couples the various components, one to another. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. A variety of other examples are also contemplated, such as control and data lines. 1104 1104 1110 1110 The processing system is representative of functionality to perform one or more operations using hardware. Accordingly, the processing system is illustrated as including hardware elements that may be configured as processors, functional blocks, and so forth. This may include implementation in hardware as an application specific integrated circuit or other logic device formed using one or more semiconductors. 
The hardware elements are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors may be comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions may be electronically-executable instructions. 1106 1112 1112 1112 1112 1106 The computer-readable media is illustrated as including memory/storage . The memory/storage represents memory/storage capacity associated with one or more computer-readable media. The memory/storage component may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The memory/storage component may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media may be configured in a variety of other ways as further described below. 1108 1102 1102 Input/output interface(s) are representative of functionality to allow a user to enter commands and information to computing device , and also allow information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., which may employ visible or non-visible wavelengths such as infrared frequencies to recognize movement as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, tactile-response device, and so forth. Thus, the computing device may be configured in a variety of ways as further described below to support user interaction. Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The terms “module,” “functionality,” and “component” as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors. 1102 An implementation of the described modules and techniques may be stored on or transmitted across some form of computer-readable media. The computer-readable media may include a variety of media that may be accessed by the computing device . By way of example, and not limitation, computer-readable media may include “computer-readable storage media” and “computer-readable signal media.” “Computer-readable storage media” may refer to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media. 
The computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of computer-readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage device, tangible media, or article of manufacture suitable to store the desired information and which may be accessed by a computer. 1102 “Computer-readable signal media” may refer to a signal-bearing medium that is configured to transmit instructions to the hardware of the computing device , such as via a network. Signal media typically may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism. Signal media also include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. 1110 1106 As previously described, hardware elements and computer-readable media are representative of modules, programmable device logic and/or fixed device logic implemented in a hardware form that may be employed in some embodiments to implement at least some aspects of the techniques described herein, such as to perform one or more instructions. Hardware may include components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware. In this context, hardware may operate as a processing device that performs program tasks defined by instructions and/or logic embodied by the hardware as well as a hardware utilized to store instructions for execution, e.g., the computer-readable storage media described previously. 1110 1102 1102 1110 1104 1102 1104 Combinations of the foregoing may also be employed to implement various techniques described herein. Accordingly, software, hardware, or executable modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or more hardware elements . The computing device may be configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of a module that is executable by the computing device as software may be achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements of the processing system . The instructions and/or functions may be executable/operable by one or more articles of manufacture (for example, one or more computing devices and/or processing systems ) to implement techniques, modules, and examples described herein. 
1102 1114 1116 The techniques described herein may be supported by various configurations of the computing device and are not limited to the specific examples of the techniques described herein. This functionality may also be implemented all or in part through use of a distributed system, such as over a “cloud” via a platform as described below. 1114 1116 1118 1116 1114 1118 1102 1118 The cloud includes and/or is representative of a platform for resources . The platform abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud . The resources may include applications and/or data that can be utilized while computer processing is executed on servers that are remote from the computing device . Resources can also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network. 1116 1102 1116 1118 1116 1100 1102 1116 1114 The platform may abstract resources and functions to connect the computing device with other computing devices. The platform may also serve to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources that are implemented via the platform . Accordingly, in an interconnected device embodiment, implementation of functionality described herein may be distributed throughout the system . For example, the functionality may be implemented in part on the computing device as well as via the platform that abstracts the functionality of the cloud . Although the systems and techniques have been described in language specific to structural features and/or methodological acts, it is to be understood that the systems and techniques defined in the appended claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as examples of forms of implementing the claimed subject matter.
Abstract
Information flow between components of a system takes many forms and is key to understanding the organization and functioning of large-scale, complex systems. We demonstrate three modalities of information flow from time series X to time series Y. Intrinsic information flow exists when the past of X is individually predictive of the present of Y, independent of Y's past; this is most commonly considered information flow. Shared information flow exists when X's past is predictive of Y's present in the same manner as Y's past; this occurs due to synchronization or common driving, for example. Finally, synergistic information flow occurs when neither X's nor Y's pasts are predictive of Y's present on their own, but taken together they are. The two most broadly-employed information-theoretic methods of quantifying information flow, time-delayed mutual information and transfer entropy, are both sensitive to a pair of these modalities: time-delayed mutual information to both intrinsic and shared flow, and transfer entropy to both intrinsic and synergistic flow. To quantify each mode individually we introduce our cryptographic flow ansatz, positing that intrinsic flow is synonymous with secret key agreement between X and Y. Based on this, we employ an easily-computed secret-key-agreement bound, the intrinsic mutual information, to quantify the three flow modalities in a variety of systems including asymmetric flows and financial markets.
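As a concrete illustration of the two standard measures discussed in the abstract (not of the intrinsic-mutual-information bound the authors introduce), the sketch below computes plug-in estimates of time-delayed mutual information and transfer entropy for a toy pair of binary series in which X's past drives Y's present; the coupling model and noise level are arbitrary choices for the example.

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n = 200_000
x = rng.integers(0, 2, n)

# Y copies X's previous value, flipped with probability 0.1: intrinsic flow X -> Y
flip = rng.random(n) < 0.1
y = np.empty(n, dtype=int)
y[0] = 0
y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])

def entropy(counter, total):
    p = np.array(list(counter.values())) / total
    return -np.sum(p * np.log2(p))

total = n - 1
H_xpast = entropy(Counter(x[:-1]), total)
H_ynow = entropy(Counter(y[1:]), total)
H_joint = entropy(Counter(zip(x[:-1], y[1:])), total)
tdmi = H_xpast + H_ynow - H_joint            # I(X_{t-1}; Y_t)

H_yy = entropy(Counter(zip(y[1:], y[:-1])), total)
H_ypast = entropy(Counter(y[:-1]), total)
H_ypx = entropy(Counter(zip(y[:-1], x[:-1])), total)
H_yyx = entropy(Counter(zip(y[1:], y[:-1], x[:-1])), total)
te = (H_yy - H_ypast) - (H_yyx - H_ypx)      # I(Y_t; X_{t-1} | Y_{t-1})

# both are positive here because the flow is intrinsic (carried by X's past alone)
print(f"time-delayed mutual information: {tdmi:.3f} bits")
print(f"transfer entropy X -> Y:        {te:.3f} bits")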
https://escholarship.org/uc/item/4n2922bq
Hillary Clinton says the U.S. economy does better with a Democrat in the White House, citing research by two Princeton economists. But the authors of that report do not credit Democratic fiscal policies for the economic growth. In fact, the authors say “our empirical analysis does not attribute any of the partisan growth gap to fiscal or monetary policy.” Clinton has made the claim numerous times in recent weeks — usually in the context of her economic plan, as she did during the first Democratic debate. Clinton, Oct. 13: I have a five-point economic plan, because this inequality challenge we face, we have faced it at other points. It’s absolutely right. It hasn’t been this bad since the 1920s. But if you look at the Republicans versus the Democrats when it comes to economic policy, there is no comparison. The economy does better when you have a Democrat in the White House and that’s why we need to have a Democrat in the White House in January 2017. Clinton expounded on that talking point the following day during a speech in Las Vegas. (starting around the 1:30 mark) Clinton, Oct. 14: I think it’s important to just kind of get rid of some of the myths that you hear from the other side. Starting with a fact that our economy and our country does better when we have a Democrat in the White House. … There’s a lot of evidence that when we have a Democrat in the White House, unemployment is lower, income is higher, and even the stock market is higher. But when you have a Republican in the White House you are four times more likely to have a recession. When we asked for backup, the Clinton campaign pointed us to academic research by two Princeton economists titled, “Presidents and the U.S. Economy: An Econometric Exploration.” The authors, Alan S. Blinder and Mark W. Watson, concluded after researching an array of economic statistics that the economy has performed “much better when a Democrat is president than when a Republican is.” The researchers attempted to find out why and concluded, in a version of their research updated in July 2015 “It appears that the Democratic edge stems mainly from more benign oil shocks, superior [productivity] performance, and perhaps greater defense spending and faster growth abroad.” The Democrat-Republican ‘Gap’ The paper looks at macroeconomic performance based on various economic outcomes and concludes, “The answer, while hardly a secret, is not nearly as widely known as it should be. The U.S. economy performs much better when a Democrat is president than when a Republican is.” The analysis considered a 64-year period beginning with President Harry Truman and ending with President Barack Obama. In all, that takes in the complete presidencies of seven complete Democratic terms and nine Republican terms. Here are the findings on several economic indicators, and, in some cases, how they match up with Clinton’s claims: Gross Domestic Product The analysis found that under Democratic presidents the gross domestic product rose at an average rate of 4.33 percent, compared with a rate of 2.54 percent under Republicans. That translates to a 1.79 percentage point gap in favor of the economy under Democrats. That means that the U.S. economy grew by an average of 18.5 percent during the typical four-year presidency of a Democrat versus a 10.6 percent gain under a Republican. 
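The per-term growth figures follow from compounding the annual rates over a four-year term; a quick check of the arithmetic:

# compounding 4.33% and 2.54% annual GDP growth over a four-year term
democratic_term = (1 + 0.0433) ** 4 - 1   # ~0.185, i.e., about 18.5 percent
republican_term = (1 + 0.0254) ** 4 - 1   # ~0.106, i.e., about 10.6 percent
print(f"{democratic_term:.1%} vs {republican_term:.1%}")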
Unemployment The average unemployment rate was lower under Democrats, 5.64 percent, compared with 6.01 percent under Republican presidents, though the authors called that difference “small and not statistically significant.” However, they said, there is “a very large and statistically significant difference in the change in the unemployment rate, computed as the average unemployment rate in the final year of the term minus the average value in the final year of the previous term. During Democratic presidential terms, the unemployment rate fell by 0.8 percentage points, on average, while it rose by 1.1 percentage points, on average, during Republican terms — yielding a large D-R gap of -1.9 percentage points.” Income Real wages, compensation per hour in the nonfarm business sector, increased slightly faster under Democratic presidents, they found. But the gap was not statistically significant, the authors said, despite Clinton’s claim that “income is higher” when a Democrat is in the White House. Stock market Annualized stock market returns for firms in the S&P 500 Index were 5.65 percentage points higher under Democratic presidents (the returns under Democratic presidents increased 8.35 percent compared with 2.7 percent under Republican presidents). “Though business votes Republican,” the authors wrote, “it prospers more under Democrats.” However, the report states, “given the extreme volatility of stock prices, even differences that large are statistically significant at only the 15 percent level.” That tempers Clinton’s claim that under Democratic presidents “even the stock market is higher.” Recessions The report found that the country was more likely to be in recession while a Republican president was in office. While Republicans occupied the White House for 144 quarters in the study period compared with 112 quarters for Democrats, 41 of the 49 quarters that the National Bureau of Economic Research classified as being in recession occurred with a Republican in the White House. Inflation The one notable exception to the Democratic advantage on economic indicators, the authors said, was inflation. On that count, “the economy fares about equally well under presidents of either party.” Summing up, the authors concluded there is “a systematic and large gap between the US economy’s performance when a Democrat is President of the United States versus when a Republican is. Democrats do better on almost every criteria.” Or as one of the authors, Watson, told us in a phone interview, the difference is “large and statistically significant, regardless of how you look at it.” “The superiority of economic performance under Democrats rather than Republicans is nearly ubiquitous; it holds almost regardless of how you define success,” the authors stated in their report. “By many measures, the performance gap is startlingly large — so large, in fact, that it strains credulity, given how little influence over the economy most economists (or the Constitution, for that matter) assign to the President of the United States.” The authors said the evidence for better economic performance under Democrats remained even when factoring in such variables as the majority party in Congress and whether Democrats inherited “superior initial conditions.” So what was the reason for the gap? “We spent a lot of time trying to figure out why,” Watson said. “And we had limited success.” Causation? Specifically, the authors cautioned Democrats not to take a bow for their fiscal policies. 
“Democrats would probably like to attribute a large portion of the D-R growth gap to better fiscal (and perhaps monetary) policies, but the data do not support such a claim,” the authors concluded. “If anything, and we would not make much of such small differences, both fiscal and monetary policy actions seem to be a bit more pro-growth when a Republican is president — even though GDP grows significantly faster under Federal Reserve chairmen appointed by Democrats than by Republicans.” Rather, the authors said there were factors independent of presidential fiscal policy driving the gap. Shocks in oil prices, changes in productivity levels, growth in defense spending (wartime defense spending is generally good for the economy), foreign economic growth and differences in consumer expectations account for “as much as 70% of the partisan gap,” the authors wrote. “Some, maybe all, of these might be considered blends of good policy and good luck. But our empirical analysis does not attribute any of the partisan growth gap to fiscal or monetary policy.” As for the other unexplained 30 percent? “The rest remains, for now, a mystery of the still mostly-unexplored continent,” the authors state. The authors both told us Clinton was “100 percent accurate” with her claim that the economy does better under Democratic presidents, based on their research (which Blinder said has been thoroughly peer-reviewed and is set to be published in the American Economic Review). Blinder served on President Bill Clinton’s Council of Economic Advisers, and he was an economic adviser to the presidential campaigns of Democrats Al Gore and John Kerry. The authors in their 2014 paper talked about factors that might be attributed to “just good luck,” while the 2015 report refers to “blends of good policy and good luck.” Blinder told us via email that they avoided the term “just good luck” in the 2015 paper for two reasons. “One reason for superior performance under Democratic presidents is that oil prices go up less (on average),” Blinder stated. “Is that just luck? Well, the fact that we entered several wars in the gulf area (the latest in 2003) under Republican presidents, thereby driving up oil prices, was not just luck — it was policy, though not economic policy.” “The second main factor underpinning Democratic superiority is that productivity has risen faster under Democrats,” Blinder said. “Well, productivity growth depends on many things, including luck, but also including a myriad of policies.” In other words, there may very well be policies from Democratic presidents that have led to better economic performance. But the authors were unable to pinpoint what those policies were. We have no doubt economists from either side of the partisan divide would have differences of opinion about what those policies might be, or whether the gap was entirely due to factors outside a president’s control. That’s a political debate that cannot be settled in a fact-check. Specifically, though, while Clinton recites the economic statistics as a reason for electing a Democratic president, there is no evidence in the Blinder and Watson research to conclude that a Democratic president’s economic policy would necessarily lead to a healthier economy.
https://www.factcheck.org/2015/10/clinton-economy-better-under-democrats/?version=meter+at+null&module=meter-Links&pgtype=article&contentId=&mediaId=&referrer=&priority=true&action=click&contentCollection=meter-links-click
The shockwave of the terrorist attacks of 11 September 2001 effectively merged two previously distinct processes, money laundering and terrorist financing ( Munshani, 2008 ). In the wake of the attacks, the FATF, with broad support from some 130 states and international organizations, turned what had been a voluntary regulatory framework, including the obligation to report suspicious financial activities, into a mandatory one1. These efforts can be seen in the October 2001 “Nine Special Recommendations on Terrorist Financing”2. The changes that reshaped the international system also prompted changes to Malaysian legislation. In July 2001, the government gazetted what was then called the Anti Money Laundering Act (AMLA)3. The Government of Malaysia did not stop there; it continued making amendments4 in an effort to curb the rise of money laundering by terrorist cells. The Act was subsequently amended to incorporate the Special Recommendations on Terrorist Financing and is now commonly referred to as the Anti-Money Laundering and Anti Terrorism Financing Act 2001 (AMLATFA). Money laundering has been criminalised by the AMLATFA. The Act's provisions also cut across the bank secrecy5 provisions that had previously impeded many criminal investigations involving clients of financial institutions. Such changes were necessary to ensure that money launderers can be brought to justice through effective enforcement, carried out with the co-operation of banking and financial institutions in reporting suspicious activities to Bank Negara. In general, the AMLATFA sets out the offences involving money laundering and, among other things, the procedures for investigation, the mandatory obligations of reporting and recording evidence, and the forfeiture of the property and assets of suspected individuals and corporate bodies. The AMLATFA also provides the power to freeze and seize any property where there are reasonable grounds to suspect that it represents proceeds of, or involvement in, money laundering activities6. This paper will first focus on money laundering activities, as this provides the background for understanding how money laundering is associated with the financing of terrorism ( Morais, 2002 ). In this context, money laundering is considered the most convenient, and probably the most exploited, means of financing terrorism. Money laundering and terrorism funding were once viewed as two distinct legal, financial and political issues, but since late 2001 they have come to be treated as a merged problem in modern society7. According to the FATF8, money laundering is best defined as “the processing of the proceeds of crime so as to disguise their illegal origin”. Money laundering is also defined in the 1988 Vienna Convention, which, in sub-paragraph (a)9, covers the conversion or transfer of property in the knowledge that the property is derived from illegal activities; the same convention also states that money laundering includes any individual or body participating in the act of concealing the origin of any property or asset.
Generally, organised crime groups, drug syndicates and corrupt politicians are the parties involved in money laundering. The activity arises from the need to move large sums of “dirty” or illegal money elsewhere to avoid detection, transforming such monies into apparently clean funds. Why would they do so? The answer is simple. Because the money was originally obtained through illicit means, it would otherwise be prime evidence for law enforcement, first in detecting and revealing the underlying illegal activities, and second in court when charging these individuals for their illegal dealings. Other motivations are to protect such assets and funds from seizure and forfeiture by the law enforcement authorities, and to avoid making large payments such as taxes. It is worth being aware that national and international legislation aimed at curbing money laundering is not simply intended to put a halt to money laundering itself. The true aim of such legislation is to end the illegal activities associated with money laundering, which include, among others, fraud, illegal prostitution and narcotic drug trafficking. Money laundering is the lifeblood of criminal activity because it is so effective at concealing criminals and their activities, providing them with a safe haven in which to use and invest their “ill-gotten” money to expand their empires and their dominance of crime globally. Without a doubt, money laundering is a peril; left unlegislated and unenforced against, it can paralyse a country's financial system and damage national and international security. It is therefore very important to establish counter-measures flexible enough to keep pace with ever-evolving money laundering activities, and to arm the law enforcement authorities with effective tools to detect these illicit activities and proceed to prosecution of the crime, in theory at least. In common practice, money laundering proceeds through three distinct stages, commonly described as placement, layering and integration. Each transaction may be diluted in amount, in other words broken into smaller proportions, in order to destroy the audit trail. Activities such as trading in the financial markets or investment can accomplish this objective for the perpetrator. The difficulty and complexity multiply when these funds are transferred across transnational borders, especially to countries with low compliance and weak enforcement of money laundering laws, or countries with strictly observed banking secrecy provisions. A further limitation arises when the countries involved in the laundering transactions do not comply with the international conventions and FATF recommendations and do not co-operate in sharing information, denying the authorities the link to the audit trail on which many launderers would otherwise be stopped.
https://international.vlex.com/vid/viability-mechanisms-terrorism-offences-503109130
Running faucet
This is a prebuilt dataset for detecting whether a faucet is running based on audio, for the Recognize sounds from audio tutorial. It contains 15 minutes of data sampled from a MEMS microphone at 16KHz over the following two classes:
- Faucet - faucet is running, with a variety of background activity.
- Noise - just background activities.
Importing this dataset
You can import this dataset to your Edge Impulse project using the Edge Impulse CLI Uploader. If you haven't done so, follow the Installation instructions. Then:
1. Download the faucet dataset.
2. Unzip the file in a location of your choice.
3. Open a terminal or command prompt, and navigate to the place where you extracted the file.
4. Run:
$ edge-impulse-uploader --clean
$ edge-impulse-uploader --category training faucet/training/*.cbor
$ edge-impulse-uploader --category testing faucet/testing/*.cbor
You will be prompted for your username, password, and the project where you want to add the dataset.
https://docs.edgeimpulse.com/docs/pre-built-datasets/running-faucet
Liberalism is an ideology that defends an economy free from government intervention, and the maximum individual freedom to make consumer decisions, through capitalism. It is also an ideology that defends private property. A liberal economy, also known as a market economy, is characterized by free supply and demand and by price formation based on profits above costs. This freedom in economic transactions comes from competition between sellers and, furthermore, from the absence of taxes.
History of liberalism
The first ideas of economic liberalism appeared in the era of mercantilism, when governments, mainly European, controlled foreign trade and the internal workings of the market. It was then that some French thinkers of the Enlightenment began to demand a state with less control and fewer taxes. These ideas influenced the economist Adam Smith, who in 1776 published his book known as “The Wealth of Nations.” He argued that individuals in a market offer goods and services for their own benefit, but that in doing so they benefit society, as if an “invisible hand” were guiding them, without the need for a government to impose rules. Adam Smith had observed that the UK economy benefited from this freedom in the market and from the power of institutions to protect individual property, compared with other countries that imposed high taxes on merchants and consumers. From these principles emerged classical economics, also known as classical liberalism or laissez-faire liberalism.
Neoclassical economics
The era of neoclassical economics is marked by the development of economic studies after the appearance of Adam Smith's ideas. These analyses always took the liberal economy, also known as the market economy, as their object of study. The market economy is characterized by the formation of prices through supply and demand, and the production and distribution of income through the free market, with a minimum of political intervention. Liberalism began to be disputed and to share space with other political and economic ideologies, such as socialism, a contrary ideology that supports central planning of the economy. After the First World War, and especially the Crisis of 1929, the lack of confidence in the free market's ability to deliver stability and growth gave rise to a more interventionist strategy built around a welfare state and the reduction of unemployment. Many countries began to adopt macroeconomic planning policies, such as the creation of state-owned companies and the expansion of public employment, or, in most cases, mixed economies combining the free market with central planning by governments.
Neoliberalism
After World War II, the reconstruction of many countries led to increased government intervention in their economies. Even so, liberal economists continued to advocate for a freer economic system. Among the best known was the Austrian school economist Friedrich Hayek. With the crises of the 1970s, governments began to focus on controlling the currency and inflation, prompting several countries to sell their state-owned companies and downsize the public sector, returning to the ideas of liberalism. This return of economic liberalism became known as neoliberalism, or the new liberalism, a term associated with countries that abandoned more interventionist policies in the market.
https://en.tueconomia.net/economy/economic-liberalism/
Development (A4): SDL Activities and Best Practices
In this chapter we will describe the SDL activities for the design and development (A4) phase of our security development lifecycle (see Figure 6.1). This phase can be mapped to the “readiness” phase in a typical software development lifecycle.
Figure 6.1 Design and Development (A4): SDL activities and best practices.
We start with the continuation of policy compliance analysis for this phase and then move on to describe the elements of security test case execution. Building on the proper process for security testing that should have already been created, documented, and tested, analysis will continue until any necessary tuning is identified in order to accomplish the required security level. We then describe the use of automated tools such as static, dynamic, and fuzz test tools to help automate and enforce security practices efficiently and effectively at a low cost. Static analysis analyzes the source code prior to compiling, provides a scalable method of security code review, and helps ensure that secure coding policies are being followed. Dynamic analysis monitors application behavior and ensures that the software functionality works as designed. Fuzz testing induces program failure by deliberately introducing malformed or random data to an application and can be used as an effective and low-cost way of finding potential security issues prior to release and potentially throughout the SDL process. Fuzz testing is a specialized form of dynamic analysis. By using the latest version of these automated tools, the latest known automated security analysis, vulnerabilities, and recommended protections will be identified. After these multiple automated tools have been used to quickly analyze the flaws and vulnerabilities in the software, the code is then reviewed manually, every issue validated, and the code inspected to overcome the limitations of automated tools and techniques. As part of this process, attack surface and threat model reviews will ensure that any new attack vectors that have been created by any design or implementation changes have been identified and mitigated. Finally, we discuss the need, value, and process for privacy validation and remediation to be conducted during this phase of the SDL.
6.1 A4 Policy Compliance Analysis
This is a continuation of the A3 policy compliance review described in the previous chapter. As you will see, we continue to perform policy compliance analysis during different phases and review it again and again. It is of paramount importance that you persist through this and not make an assumption that you have covered everything in previous iterations. You will be surprised how often things are missed during the initial phases/iterations of this step. During this phase, any policy that exists outside the domain of the SDL policy is reviewed (or reviewed again). This may include policies from outside the development organization that carry security and privacy requirements and guidelines to be adhered to when developing software or applications anywhere within the organization. Often, too, policies are updated during the development process, and new requirements are added. Thus it is best to obtain a list of updated policies and make sure you have incorporated any additional requirements.
Corporate security and privacy policies will likely instruct designers and developers on what the security and privacy features need to be and how they must be implemented. Other policies may focus on the use of
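The fuzz-testing idea described in this chapter, feeding deliberately malformed or random data to a program and watching for failures, can be illustrated in a few lines; the parse_config function below is a hypothetical stand-in for code under test, not an example taken from the book or from any particular tool.

import random
import string

def parse_config(text):
    # hypothetical code under test: parses "key=value" lines, brittle by design
    config = {}
    for line in text.splitlines():
        key, value = line.split("=", 1)   # raises ValueError when "=" is missing
        config[key.strip()] = value.strip()
    return config

random.seed(1)
failures = 0
for _ in range(1000):
    # generate deliberately random, often malformed input
    length = random.randint(0, 40)
    blob = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_config(blob)
    except Exception:
        failures += 1

print(f"{failures} of 1000 random inputs crashed the parser")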
https://www.oreilly.com/library/view/core-software-security/9781466560963/chapter-15.html
AQUA is a research group within the Keio University Shonan Fujisawa Campus Tokuda, Murai, Kusumoto, Nakamura, Takashio, Van Meter, Uehara, Mitsugi and Nakazawa Joint Research Project. Moore's Law, the concept that the density of transistors in a computer chip doubles every two to three years, is facing fundamental limits as transistors begin to be made out of only a small number of atoms. Progress in computer hardware is slowing down, and our ability to solve ever-larger problems is jeopardized. One route to solving some important but difficult problems is through the development of quantum computers. Quantum computers will use the mysterious effects of quantum superposition, entanglement and interference. In AQUA, we are applying what we know about classical computer and network design to the design of large-scale quantum computers and a Quantum Internet, helping to realize computing for the post-Moore's Law era. The AQUA group is researching system organizations, circuit structures and algorithms to accelerate the development of large-scale quantum computers. In the fall of 2017, we released a course on quantum computers on FutureLearn, an international provider of massive open online courses (MOOCs). The course is taught by Associate Professor Rod Van Meter of the Faculty of Environment and Information Studies and is aimed at high school and university students, in Japan and abroad, who want to engage with quantum computing; it explains in an accessible way what a quantum computer is, how it relates to everyday life, and how close it is to practical application. The language is English (with some Japanese), and English subtitles are available. Discussions between students are conducted in English. Tuition is free. In this course, we will discuss the motivation for building quantum computers, cover the important principles in quantum computing, and take a look at some of the important quantum computing algorithms. We will finish with a brief look at quantum computing hardware and the budding quantum information technology industry. ©2007-2017 Tokuda/Murai/Kusumoto/Nakamura/Takashio/Van Meter/Uehara/Mitsugi/Nakazawa Laboratories. All Rights Reserved.
http://aqua.sfc.wide.ad.jp/about_us_en.html
The Pre-primary Subsector Analysis Tool operationalizes the Build to Last conceptual framework. It aims to support national stakeholders assess the strengths and weaknesses of all aspects of their pre-primary subsector (including equity, inclusion, gender, crisis setting considerations). This assessment will support developing comprehensive strategies and action plans to address key subsector challenges. It also enhances understanding of the key components of an effective ECE/pre-primary subsector and how the components interconnect in coherent, powerful ways. The analysis tool is based on the conceptual framework and has six modules, one for each of the five core functions, plus a module on the enabling environment: - Module 1: Planning and budgeting - Module 2: Curriculum development and implementation - Module 3: Workforce development - Module 4: Family and community engagement - Module 5: Quality assurance - Module 6: Enabling environment All modules begin with a brief description of the purpose and focus (the core function or key factor in the enabling environment), followed by content on: - Goals the pre-primary subsector should prioritize in order to build a stronger system and achieve results for children. - Measures of progress for each goal. The measures are intended to encourage self-assessment and reflection, and to support monitoring of the pre-primary subsector’s performance. - Questions to accompany each measure. These questions do not cover every possible topic, and they may not all be relevant to your country’s context. They are intended to jump-start dialogue on some of the issues to consider in achieving the measures of progress. Objectives Using this tool will enable the ECE Technical Working Group (TWG) or stakeholders to: Conduct an in-depth examination of the ECE subsector through a systems approach; and Use the analysis to identify strengths, gaps, underlying challenges, and emerging opportunities. When to Use this Tool The Pre-primary Subsector Analysis Tool can be used across all phases of the education sector planning cycle, from the ESA through sector review: - It serves as a critical complement to the ECE data mapping exercise to enable a “deep dive” analysis of the ECE subsector. - It supports building consensus on which aspects of the ECE subsector work well and which aspects require strengthening. - It supports identifying underlying challenges that are at the root of the problems and issues of the ECE subsector. - The results emerging from the deep-dive analysis may feed into the ESA report and inform the formulation of strategies in the context of ESP development. - It supports to further validate the findings of the ESA report as well as unpack the underlying challenges that are at the root of the problems/issues of the ECE subsector. - It helps inform the policy priorities and strategies to be included in the ESP. - It may support reflections whether the ECE strategies, corresponding activities and associated targets and indicators are achieved. - It supports identifying whether adjustments to the ESP and Operational Plan are needed. Finally, individual institutions or country education clusters/LEGs/EiE Working Groups that manage and/or directly provide ECE services in countries affected by emergencies or conflicts may also use this tool. 
It may support the identification of ECE challenges to inform further questions to include in rapid needs assessments, situation analyses, and critical subsector information to include in emergency response proposals, joint response frameworks and/or Transitional Education Plans. Beyond the ESP processes, the results from using this tool can be used for building stronger ECE systems in general. The table below outlines: (a) the module title and basic goal for analysis; (b) the main topics covered in each module; and (c) brief suggestions for how you might use the analysis results to build a stronger system. Table 1. Basic content of the Pre-primary Subsector Analysis Tool’s six modules |Module||Module 1 | Planning and budgeting |Module 2 | Curriculum development and implementation |Module 3 | Workforce development |Module 4 | Family and community engagement |Module 5 | Quality assurance |Module 6 | Enabling environment |Goal for analysis|| | To support reflection and discussion on the development of strong and responsive plans, across levels of government, for equitable provision of quality pre-primary education, making efficient use of available financial, human and physical resources | | To support reflection and discussion on ensuring that children across pre-primary settings benefit from a developmentally appropriate curriculum and have access to learning and play materials that stimulate their development | | To support reflection and discussion on the recruitment, development, and retention of a sufficient number of pre-primary teachers and other key personnel within the subsector, ensuring that they have the essential competencies, training and support required to promote children’s positive development and early learning | | To support reflection and discussion on ensuring that families and communities are active participants and partners in supporting pre-primary programmes and children’s learning and development across early learning settings | | To support reflection and discussion on ensuring that a coherent framework for monitoring the quality of pre-primary education is in place and that quality monitoring produces information at the relevant local, regional and/or national levels to support continuing improvements in both policy and practice | | To support reflection and discussion on the four catalysts of a strong and supportive environment for the pre-primary subsector (ministerial leadership; policies and legislation; financing; public demand) |Main topics of the modules|| | Evidence-based planning and budgeting Equitable access Budget needs and financing mechanisms Governance and accountability Monitoring and implementation | | Defining a pre-primary curriculum framework and the content of the curriculum Disseminating and implementing the curriculum widely Regularly reviewing and improving the content of the curriculum and its use | | Establishing a strategy for recruiting pre-primary staff, based on clearly defined staff qualification requirements and competency profiles Implementing effective and flexible pre-service training programmes, including alternative pathways to qualification and initial training Implementing evidence-based programmes for continuing professional development, including setting up career pathways for the subsector Promoting continuous improvement and staff retention through coordination and monitoring of pre-primary workforce development programmes | | Engaging families and communities as strategic partners in pre-primary education Supporting 
families in fostering positive home learning environments Building adequate capacities across the subsector to engage with families and communities effectively | | Establishing comprehensive standards for quality Establishing functional quality assurance mechanisms, based on the overarching goals Strengthening capacities across the subsector to monitor quality Ensuring that quality monitoring facilitates quality improvements | | Ministerial leadership Policies and legislation Financing Public demand |How results of analysis can be used|| | Inform the development of an evidence-based plan, policy or strategy for the delivery of high-quality pre-primary education Inform the strategic priorities and plans for delivering pre-primary services Identify budget needs and financing mechanisms Establish clear governance and accountability structures for implementation of the plan Strengthen monitoring systems for evidence generation during plan implementation | | Inform the development of a curriculum framework or curricular standards Initiate curriculum reform Develop a curriculum dissemination and implementation strategy Inform review of curriculum content Develop a monitoring framework for curriculum implementation | | Develop pre-primary teacher competency frameworks (job profiles and competences) Inform strategies for personnel recruitment, training, deployment, retention, and exit Establish or revise pre-service training programmes or alternative pathways to qualification or initial training Establish or improve professional development programmes Strengthen mechanisms for continuous improvement, staff retention, motivation and performance monitoring | | Inform the development and implementation of a national or subnational strategy for family and community engagement Support the development of approaches to foster children’s learning at home Support approaches to strengthen system-wide capacities to engage families and communities | | Inform the design of a quality assurance framework with quality standards for service delivery Strengthen mechanisms for a functional quality assurance system Strengthen regulations, quality assurance and monitoring for improved service delivery Inform review and course correction of subsector plans | | Inform the pre-primary subsector situation analysis Inform the development of pre-primary policy directives, strategies, or other laws and regulations Make the case for fiscal space increases or financial allocations for pre-primary education Organize and institutionalize a structure for leadership and governance within the ministry of education Increase public demand and advocacy for quality pre-primary education Key Information Key Considerations when Using the Pre-primary Subsector Analysis Tool Underlying challenges (or “root causes”) of ECE problems/issues Identifying all underlying challenges of the ECE subsector is important because it provides an understanding of the gaps and weaknesses that are at the core of the ECE subsector and that need to be addressed. This then facilitates the formulation of solutions (i.e., the strategies) that address these underlying challenges/root causes, rather than the symptoms of it. Ultimately, identifying the ECE subsector’s underlying challenges/root causes of problems will: - Enable prioritization of challenges that further shape ECE policy priorities/goals; - Support the formulation of ECE strategies and activities that will feed into the ESP and Multi-Year Action Plan. 
There is no standardized approach to identifying the main causes of the ECE subsector’s problems/issues. It will remain a question of best judgment based on discussion and consensus building. Using the Pre-primary Subsector Analysis is one way to reflect on these underlying causes. Problem trees can also help. In a problem tree analysis, different issues are considered and ordered in a cause/effect relationship based on a focal problem. For more information and examples of problem tree analyses and how to turn them into objective trees (to help with the selection of strategies to consider for the ESP), please see Section 2.2 of MOOC Module 4. The problem tree analysis can serve as a complementary activity as part of the ECE subsector analysis exercise using the Pre-primary Subsector Analysis Tool (see Section B), to revisit and delve deeper into the underlying challenges if needed for one or more core functions (or all). The template in the “Additional Resources” section can be a helpful resource to validate the challenges that emerge from the use of the Pre-primary Subsector Analysis Tool.
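The module structure described above, where each module bundles a core function with goals, measures of progress, and accompanying questions, can be thought of as a simple nested record. The sketch below is purely illustrative and is not part of the official toolkit; the module title is taken from the list above, while the example goal, measure, and question texts are hypothetical paraphrases.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    statement: str                                        # goal the subsector should prioritize
    measures: list[str] = field(default_factory=list)     # measures of progress for the goal
    questions: list[str] = field(default_factory=list)    # dialogue prompts tied to the measures

@dataclass
class Module:
    number: int
    title: str
    goals: list[Goal] = field(default_factory=list)

# Hypothetical, abbreviated example for Module 1 (Planning and budgeting).
module1 = Module(
    number=1,
    title="Planning and budgeting",
    goals=[Goal(
        statement="Develop evidence-based plans for equitable pre-primary provision",
        measures=["Plans are costed and informed by subsector data"],
        questions=["Which data sources currently inform pre-primary planning?"],
    )],
)
print(module1.title, "-", len(module1.goals), "goal(s) defined")
```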
https://www.ece-accelerator.org/toolkit/section-2/tool-2-2
As the share of variable renewable energy climbs, tackling four challenges will become an urgent task for system operators and designers. Source: BCG Renewable energy will need to make up the majority of global electricity generation by 2050—as much as 90%, according to the International Energy Agency—for the world to achieve net-zero emissions by then. Renewable energy’s share stood at 29% in 2020, which suggests that it would have to triple by 2050—no easy feat since, as the IEA notes, the total amount of electricity generated will have to jump by more than two-and-a-half times to achieve global decarbonization ambitions. Most of the electricity that is generated from renewable energy will have to come from variable sources because of constraints on the availability of dispatchable renewable generators (such as hydropower, geothermal, and biofuel plants). In 2020, sources of variable renewable energy (VRE) accounted for 9% of global generation. According to the IEA’s net-zero roadmap, however, they will need to represent close to 70% by 2050. Generating electricity from VRE sources—such as solar photovoltaic systems and onshore and offshore wind turbines—is different from producing power using conventional generators. By definition, VRE depends on the sun shining or the wind blowing. And because VRE is subject to weather conditions, it is uncertain and challenging to predict perfectly. VRE is also distributed (generators are smaller and less centralized), and it relies on inverters (power electronic devices that change the direct current from solar panels or wind turbines into alternating current for electricity grids). For electricity system operators and designers that are integrating VRE sources at scale, these characteristics lead to significant challenges in four areas:
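To see why the IEA pathway is demanding, it helps to combine the two figures quoted above: the renewable share rising from 29% to roughly 90% while total generation grows by more than two and a half times. The back-of-the-envelope sketch below takes the 2.5x figure at face value (so the results are lower bounds) and shows the implied growth in renewable and VRE output.

```python
# Figures quoted in the article (IEA net-zero pathway); 2.5 is treated as a
# lower bound for total electricity generation growth between 2020 and 2050.
total_growth = 2.5

renewable_share_2020, renewable_share_2050 = 0.29, 0.90
vre_share_2020, vre_share_2050 = 0.09, 0.70

renewable_growth = renewable_share_2050 * total_growth / renewable_share_2020
vre_growth = vre_share_2050 * total_growth / vre_share_2020

print(f"Renewable output must grow roughly {renewable_growth:.1f}x")  # ~7.8x
print(f"VRE output must grow roughly {vre_growth:.1f}x")              # ~19.4x
```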
https://www.worldutilitiescongress.com/press-and-media/industry-insights/2022/march/rising-to-the-challenges-of-integrating-solar-and-wind-at-scale/
Product Acquisition Due Diligence Checklist At some point in your product manager, program manager, or engineering management career, you're likely to be asked to weigh in on a potential acquisition. You may also find yourself on the "target" side of an acquisition (meaning that your company is potentially going to be acquired). Either way, you should be familiar with the types of topics that should be covered when assessing a prospective acquisition target. There are many acquisition due diligence checklists out there that cover the core financial and legal considerations in a merger or acquisition, and so that is not the focus of this post. This checklist is focused on software product and technology due diligence; specifically, how strong is the product and development team? The key questions for a software product and technology acquisition assessment include: - Is an acquisition the best approach, or would it be better to build the capabilities internally or partner? What would be the time required and opportunity costs of trying to build in-house? - Are there large product or technology risks or costs that may not be obvious to the financial or corporate development teams negotiating the acquisition? - What opportunities exist for growth and/or synergy (cost savings, reduction of duplicate effort) for either the target company or acquiring company? Is everyone aligned on the investment thesis? - Who are the critical people to retain, and what are the key technologies to migrate post acquisition? Although in the case of an "acquihire" (where the sole purpose is to bring on the talent and team) many of these questions are not as relevant, you should still pay attention to the "Team" section in the questions below. Know the Basics Trends in: - Revenue - New customer acquisitions, cancellations - Share by geography and major customer segment - Profit (particular focus on gross profit) - Customers - Traffic by source - Return on Advertising Spend (this would likely change post acquisition) - New v. repeat customer traffic - App downloads (e.g., AppAnnie) - Google Trends for search volumes, AdWords - Engagement and retention - Conversion Product - Product Quality / Tear-down - Walk though the product for overall quality, usability - SWOT (Strengths, Weaknesses, Opportunities, Threat) - How do they use customer or usage data to build a better product? - What data do they store about customers? Personalization? - Roadmap / Backlog review - Focus on what is in flight - future priorities will likely change - Qualitative research - either conducted by the target company, or you can pay for quick turnaround research: - Usability - Surveys - Customer interviews: satisfaction, gaps, competitors under consideration - Exit interviews for customers who recently switched away - SEO (search engine optimization) & ASO (app store optimization) strength. Use third party tools such as Spyfu and Sensor Tower - URL structure - Rankings - Inbound links - Competitor analysis - Customer reviews - App store reviews - BBB / Trustpilot - Social - Request top reasons for customer contacts / customer service escalations - Any concerns about fraud (either current problems with fraud transactions, or lack of ability to detect/mitigate)? - Does the product utilize third party content or services, and is it properly licensed? - Walk through back-office tools (How manual are processes? Will processes scale?) - Admin tools, customer service agent tools, etc. 
Technology & Development Process - Hosting, redundancy, downtime. Do they have: - Failover options - Single points of failure - Disaster recovery plan for continuity (lights stay on) and/or data recovery if there was a major problem with servers or data center - Scaleability - what would have to change to support 100x (or 1000x) more users? - How easy is the product/service to localize? - What manual processes exist that will need to be automated? - Development process & speed: source/version control, builds, deployment, monitoring, notifications, bug tracking, etc. - Versions: how updated is the tech/stack - Test coverage - Performance (site/app speed) globally - Tech debt including bug debt - How much would it cost to update legacy tech, or tech that won’t scale? - How long to resolve bugs? - Security - Privacy, PCI - What personally identifiable data is being collected, and how securely is it stored? - Value of patents, unique trade secrets - Ownership / independence of technology used - Use of and/or contribution to open source that could be problematic (some open source licenses are more prescriptive than others) - Use of third party licensed technology or content that may be problematic - Confirm existence of nondisclosure and ideally non-compete agreements with key technology team - Code review: quality, maintainability - How much of team time/effort is “lights on” or maintenance? - Network, systems diagram - Process for documentation and/or release history Operations - Team - Locations - Productivity - release history over the past 6 months - Tenure / loyalty - Glassdoor - LinkedIn profiles - similar quality bar for acquiring company employees? - Job postings - Org chart - Vendors v. full-time - Who are the most critical people to retain? - Cost structure - Flag any recent reductions in team size, pay freeze, or cancelling other spend that may not be sustainable - Partners - Flag any unsustainably good deals that won’t get inherited post acquisition, e.g. particularly low costs for a supplier/partner/channel and/or particularly high rev share or terms from a customer or partner. - Suppliers - stability, interview for concerns - Product management process - A/B testing? - How is customer feedback collected - who sees it? Identify / Recommend Opportunities - Cut costs through: - Eliminating redundancies - Use combined larger scale to lower supplier/partner/distribution/marketing costs - Accelerate acquiring company - Addition of content, customers, features, etc. - Accelerate target / acquired company - Shared customers - Use of more sophisticated process or tech - Global expansion - etc. Do you know of additional questions that should be asked? Let me know in the comments.
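One practical way to work through a checklist like this is to record each finding with an area, a risk rating, and a note, then roll the results up for the deal team. The snippet below is a minimal, hypothetical sketch of that idea; the area names mirror the sections above, but nothing here is prescribed by the checklist itself.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    area: str      # e.g. "Product", "Technology", "Operations"
    item: str      # the checklist question that was assessed
    risk: str      # "low", "medium", or "high"
    note: str = ""

findings = [
    Finding("Technology", "Single points of failure", "high",
            "One primary database, no tested failover"),
    Finding("Product", "App store reviews", "low", "4.5 average, stable trend"),
    Finding("Operations", "Key-person retention", "medium",
            "Two critical engineers, no non-compete in place"),
]

# Roll-up for the deal team: how many findings sit at each risk level,
# and which high-risk items should be flagged in negotiations.
print(Counter(f.risk for f in findings))
print("Flag for negotiation:", [f.item for f in findings if f.risk == "high"])
```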
https://pmloop.com/blog/12306/product-acquisition-due-diligence-checklist
African Art and Its Influences on Western Art The discovery of nearly 64,000-year-old cave art shows that art is nearly as old as humanity itself, establishing it as one of the most significant components of culture. Art manifests differently in almost every culture; therefore, it serves as a tool for transmitting a society's values to its future generations. Africa, despite being home to many diverse cultures, has unifying artistic elements that can be observed across the whole continent. Throughout history, well-known Western artists and some of the most influential art movements have drawn inspiration from art from across Africa. Pottery, metalwork, sculpture, architecture, textile art, and fibre art are among the different mediums that define African art. In addition, wood sculptures and carvings are rather common, since wood is a readily accessible material in most regions. Other materials, such as valuable stones, are used to make traditional jewelry that can indicate rank or affiliation with a collective. Traditional clothing uses textiles like chitenge (or kitenge), mud cloth, and kente fabric for a variety of designs, techniques, dyeing procedures, and aesthetic and practical functions. Religion has an undeniable influence on art, and religious norms have shaped the development of art in all its forms across the continent. Religious themes can be seen in numerous pieces of art: whether emphasizing the significance of ancestors as the bridge between the physical and the spiritual realm or depicting the gods and the supreme creator, African art is heavily inspired by religion and its themes. African art was long labeled "primitive" by Westerners during and after the colonial era of the 19th and 20th centuries, a term negatively associated with underdevelopment and impoverishment. Colonization in the nineteenth century led to a Western interpretation of African art based on the idea that, because of the underdeveloped socioeconomic situation colonization itself had produced, African art lacked technical skill. Later, notable studies on the subject were published by art historians like Carl Einstein and Leo Frobenius, which helped give African art aesthetic value in the eyes of the Western world. These studies revolutionized the standing of visual art: art no longer served as a purely aesthetic medium but also as a space for philosophical discussion, increasing its value. A surge of interest in African art took hold among artists at the start of the 20th century. During Picasso's African-influenced period, from 1906 to 1909, he collected African art pieces and painted in a style greatly influenced by African sculpture, especially traditional African masks. In Les Demoiselles d'Avignon, the figure on the left has a physical appearance and clothing reminiscent of North Africa or Southern Asia, the two figures beside it reflect the Iberian culture of Picasso's native Spain, while the two figures to the right have traits similar to those of African masks. These masks' representation of ethnic primitivism, in Picasso's words, inspired him to "liberate an utterly original artistic style of compelling, even savage force." A rich contemporary art scene can be found across Africa. One of the most striking aspects of African contemporary art is its incorporation of both traditional and contemporary elements.
It is therefore difficult to define or categorize neatly. Due to the focus placed on traditional art by academics and art collectors, contemporary African art was overlooked until relatively recently. Yinka Shonibare, Zerihun Yetmgeta, Odhiambo Siangla, Elias Jengo, Lubaina Himid, Bili Bidjocka, and Henry Tayali are a few noteworthy modern artists. Westerners tend to misinterpret African contemporary art as an imitation of early 20th-century European and American cubist and totemic painters like Pablo Picasso and Henri Matisse, when really it is the other way around. Drawing on how artists across Africa have used their creativity and personalities to portray the world, European artists were strongly influenced by the region as they worked through the dilemma between the traditional avant-garde style and the emergence of a more contemporary reality. Certainly, the region's artwork has had a major impact on art, and on the notion of both contemporary and traditional self-expression, all around the world. Works Cited "African Art." Wikipedia, 1 July 2018, en.wikipedia.org/wiki/African_art#Materials. "Les Demoiselles d'Avignon." Wikipedia, 6 Feb. 2008, en.wikipedia.org/wiki/Les_Demoiselles_d'Avignon.
https://www.theistanbulchronicle.com/post/african-art-and-its-influences-on-western-art
London is best known for its world-famous tourist attractions, rich history, premier business and financial districts and strong transport connections. The city has been known globally for its financial and political stability, and many property investors see it as one of the top property investment hotspots. London has some of the most iconic tourist attractions from Buckingham Palace, Big Ben and the Houses of Parliament to Tower Bridge and London Eye. With a large amount of public green space across the city, it’s even the world’s first National Park City. Its cultural offerings are also second to none. There are some of the top shopping hubs within London, including Oxford Street and Regent Street. And London is home to England football at Wembley Stadium. With its strong connections as well, it’s an easy place to reach with six international airports serving London, in addition to the hundreds of overground and underground tube and rail stations. These features all add up to make it such an appealing location for businesses, investors, workers, students and tourists. And the prestige of London can also be felt within the city’s property market. With its global status, London has a thriving economy with a plethora of business and investment prospects. In 2019, the capital produced £503bn, which was a quarter of the UK’s GDP, and the city continues to power the nation. There are a handful of major business districts across London, including in the City, Westminster, Canary Wharf, Camden & Islington and Lambeth & Southwark. It’s the top European location for business headquarters with HSBC, BP, Vodafone, Tesco and AstraZeneca all based there. The number of employment prospects, top-notch universities and tourists from across the globe bring millions of people and investment to the city and continue to increase demand for additional housing. Over the past few decades, there has been a significant amount of regeneration in the city, which has breathed new life into disused areas of the capital. Currently, there are a number of regeneration projects transforming neighbourhoods as the capital pushes further out. Development projects are bringing forward hundreds of thousands of new jobs and homes and transport projects, like HS2 and Crossrail, are further improving connectivity. In London, regeneration also often has a substantial impact on local property price rises. London has been an attractive location for buyers, renters and property investors alike. The city has traditionally been one of the top global property hotspots and the top property investment hotspot within the UK. It’s often the first place people think of when considering investing in UK property, particularly among international investors. Increasing house prices over the years have often been the most appealing feature of the city’s property market, along with the ever-growing demand for housing. With strong rental demand, rents have remained strong. From September 2020 to September 2021, rental growth increased by 10% in Prime London, 10.8% in Prime Central London and 13% in Greater London, according to research by Chestertons. Rents are expected to remain strong in the coming months and years. The average rental yield in the capital sits at 2.93%. Certain boroughs are seeing higher gross yields, including Camden with 4%. While this is lower than some other cities in the UK, London as a whole has proven to be a staple choice for capital appreciation prospects. 
In London, the average house price sits at £495,200, according to Zoopla’s September 2021 House Price Index. In the past year, prices have risen by 3.3%. The city has been known for its booming house prices in recent years. Additional data from Zoopla revealed London saw a whopping £214bn added to property values in the past five years alone. The capital is the most valuable region in Great Britain with homes in the city worth £2.4tn collectively. While London accounts for only 13% of British properties, it’s home to a quarter of the housing wealth. As the population in London inches towards 10 million, this will continue to create more demand within the city’s property market. And limited supply and land mass is expected to continue to support house price growth in the years to come. Chestertons Global New Homes has vast experience within the London real estate market, having matched investment properties with portfolio needs for investors all over the world. Contact us today for assistance with your London property search.
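The gross yield figure quoted above is simply annual rent divided by purchase price. Using the article's own numbers (a 2.93% average yield and a £495,200 average price), a rough, illustrative calculation of the implied rent looks like this:

```python
average_price = 495_200      # Zoopla average London house price (Sept 2021), as quoted above
gross_yield = 0.0293         # average gross rental yield quoted in the article

implied_annual_rent = average_price * gross_yield
print(f"Implied annual rent: £{implied_annual_rent:,.0f}")        # about £14,500
print(f"Implied monthly rent: £{implied_annual_rent / 12:,.0f}")  # about £1,200
```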
https://chestertons.globalnewhomes.com/city/london/page/14/
At the core of all learning is our life skills framework. This is where students also learn to develop the D-PREP school values to support their character development. This promotes compassionate, resilient, and adaptable learners for today’s changing world. As young as Grade 1, students learn about how the brain works and how to manage their emotions through practicing mindfulness. This allows our students to develop both self and social understanding and management skills, one of the most important skills in life. The student’s development through our life skills framework supports their academic and personal development. The D-PREP vision elements of Dream, Discover and Deliver, our CO-CREATORS values, and life truths support each domain. Self Understanding The central domain of Self-awareness and understanding sees developing healthy self-perspectives such as cultivating self-love, self-worth, and self-respect. In promoting self-awareness for younger children, we start with mindful movement and breathing exercises. Mindfulness is a guide and practice to support strong social and emotional development. From age two we use the mood meter to cultivate emotional awareness in all our students. Teachers also lead morning discussions to check in on how students are feeling on a daily basis. Students are also taught about the science behind the brain’s neurological processes and engage in observations and discussions regarding the connection of the brain’s physiological make-up and how it affects our ability to regulate ourselves. Students are taught various meditation techniques of becoming aware of their thoughts and emotions. Learning how to calm your body and mind and learning to regulate your attention is one of the most critical skills in successful learning and living. Students are also taught techniques to manage challenging feelings and emotions. Social Understanding In the second domain of social awareness and social understanding, we implement a variety of social experiments on a variety of concepts such as compassion, courage, honesty, and patience. One example is the blind project, where students are blindfolded to experience the courage to walk around the school blindfolded. This develops empathy and compassion for others who may not be physically able as them. We also use group discussions on critical social situations. One example is the mass shooting in Thailand in 2019. Students reflect on why some people are not able to regulate their emotions and end up hurting so many other people. This deeper understanding of society allows students to understand important social problems that affect so many people, thus, helping to prevent future social problems of the same kind to keep happening. Livelihood The Livelihood domain introduces life / universal truths: - Change is normal - Our thoughts create our reality - Loving ourselves is the foundation to loving anyone else. We believe that if all children were taught to understand these universal truths while still in school, they will have a much easier and more successful time in navigating life both in and outside of school. Livelihood refers to creating and securing physical, mental, emotional, and financial well-being – we refer to this as Meaning, Money and Mastery. Understanding social structures and business markets as well as developing effective collaboration, presentation, and problem-solving skills, are important elements of livelihood. 
It is about turning their passions into not only an exciting but also a successful and sustainable lifestyle in which our graduates can be truly self-reliant in both their personal and professional lives. Active Citizenship The fourth domain is Active Citizenship. This domain plants the seed for a very strong foundation of the essential life values from which all their thoughts, behaviors, and actions will stem. The D-PREP values consist of being Courageous, Open, Compassionate, Resilient, Adaptable, Thankful, Respectful, and Self-Aware. When they are whole, loving, and strong human beings, they will naturally want to create positive change for themselves and their community through active citizenship. They will merge who they are and what they love to do with contributing to their families and communities. The ultimate goal of life skills is to help students experience their own inner power and realize that they are more than capable of creating their own realities and achieving a successful livelihood.
https://www.dprep.ac.th/life-skills-framework/
The National Geographic Society and the world’s leading GIS technology provider, Esri, have recognised high school student Lani Raghavan in the 2021 global ArcGIS StoryMaps Challenge for Restoring Our Oceans. The challenge invites high school and university students around the world to create interactive maps that highlight the ocean’s most pressing problems and inspire actionable solutions. Recently graduated from Wavell State High School, Raghavan was recognised for her interactive field report outlining human impacts on one of Queensland’s top tourist destinations, Cylinder Beach. Raghavan says using the StoryMaps app to communicate the environmental issues at Cylinder Beach enabled her to incorporate text, data and visualisations to tell a holistic story. “Cylinder Beach receives an average of 300,000 visitors every year but expanding local communities, overland water flow and vegetation changes have led to significant land cover change.” “Instead of using a basic Word document to explain the issues, Esri’s StoryMaps makes it really easy to add context with images, geo-located data and maps,” said Raghavan. Esri Australia’s National Business Manager, Lisa Dykes says, the challenge highlights the importance of tackling issues around climate change using a geographic lens. “The StoryMaps Ocean Challenge is a fantastic initiative allowing young innovators like Lani to use the pervasive language of visualised data to better understand and more importantly explain what is happening in the world around them.” “A map allows everyone to connect with the information being shared as we see ourselves within the context of the story. For this generation, and those to come, geographic thinking will play an essential role in driving sustainable solutions to climate change,” said Dykes. As part of Esri Australia’s GIS for Schools program, high school students like Ms Raghavan receive free access to Esri’s industry leading mapping software as part of their day-to-day learning. Brett Dascombe, Senior Geography Teacher at Wavell State High School says incorporating Esri technology into the curriculum has had a significant impact on student engagement. “For geography assignments, it’s not just a static map the students are producing, they have access to all the capabilities and benefits of dynamic, digital mapping.” “They’re out and about in the community, interviewing locals and capturing all manner of data that they can then transform into engaging, interactive maps,” said Dascombe. “It also encourages a collaborative approach in the classroom, where students are building practical skills for careers in STEM.” Raghavan intends to study a double degree in Biomedical Science with a Masters in Epidemiology and has recently been offered an internship with Esri Australia.
https://www.theeducatoronline.com/k12/news/student-storyteller-recognised-on-world-stage/279373
By Pradeep S Mehta Once again the Reserve Bank of India (RBI) has raised interest rates, with several observers questioning the wisdom of the decision. Inflation has become the bugbear of our growth story, and in the recent past some of our econocrats have even suggested lowering the growth rate to curb it. This may be true for an advanced industrialised country like the US, where growth and inflation have generally been low. It is certainly not the right prescription for the ills that our economy is facing, nor can we keep raising interest rates as the prescription for curbing inflation. What is sadly missing in this debate is how competition reforms can help growth, improve governance and check unnatural inflation as well, at least in the medium term. Market-power inflation occurs when businesses exploit monopolistic situations or when they collude, explicitly or implicitly, with other traders to make abnormal profits by raising prices. We saw this happen in the case of onions about a year ago, where implicit cartels exploited the shortage. In resource-intensive or utility sectors, it is difficult by the very nature of their operations to follow a pricing policy based on normal returns. What they do instead is add a markup over and above their average cost of production. In India, there is no set policy to assess such markups, which result in unnaturally high profit margins. Unnatural inflation mostly occurs when there are supply-side constraints, which the RBI has flagged, stating that these need to be tackled to curb inflation. These constraints arise for various reasons, including entry barriers. Some of these entry barriers are artificial, created by incumbents to thwart competition. It is here that the governance deficit matters. There are many recent examples of how vested interests use their "power" to persuade governments to impose regulations or create favourable conditions that generate monopolistic rents. Let me elucidate two examples here. Late last year, the government imposed a definitive anti-dumping duty for a period of five years on polypropylene imports from Oman, Saudi Arabia and Singapore at the request of oligopolists in the market such as Reliance Industries Ltd (RIL) and Haldia Petrochemical Corporation Ltd (HPCL). Unless there was a cartel among the suppliers from this range of countries, it does not make sense that they would all be dumping, i.e. selling below cost. Evidently the domestic manufacturers were faced with price competition that they could not counter and filed anti-dumping complaints. Policies with anti-competitive outcomes have been undertaken by state governments as well. In October 2010, the Tamil Nadu government declared that farmers who had registered for supply of free power would be given motor pump-sets with four-star energy-saving ratings for irrigation, free of cost. The rider, however, was that the pump-sets had to be bought only from two nominated companies, at double the market price of such pump-sets, which was Rs 6,000-Rs 8,000. While these anecdotal stories reflect part of the problem, recent experience of inflation in India has exhibited a well-established trend: it is higher in those sectors in which demand is growing without a commensurate increase in supply. Inflation forecasting is a tricky business, as Kaushik Basu, chief economic adviser in the finance ministry, has explained, and demand-supply mismatch is not a new thing in India.
Many of our econocrats are too busy managing aggregate demand, including tweaking the interest rates, whereas a better response lies in the fact that “supply creates its own demand”. Empirical evidence has established a negative relation between inflation and growth. Too high an inflation is certainly harmful for a country’s growth. Therefore, without controlling unnaturally high inflation if we adopt growth-reducing policies it will be bad for our economy, with a higher impact on the poor. In fact, sustained low growth can lead to a deflationary situation as in the US in 1930s and in Japan in 1990s. In short, challenges exist on many fronts. We cannot afford to lower our growth ambitions since that will cut revenues for the government and concomitant negative impact on social spending, which is so important for addressing non-income aspects of poverty. We cannot have a long period of very high inflation since it is eroding the purchasing power, particularly that of the poor. As our prime minister had once said, “curbing inflation is one of the best anti-poverty programmes.” Our policy honchos are taking several measures to control inflation. Every other day we hear that the present bout of high commodity price is a temporary phenomenon and it will be controlled within a few months. Alas, the situation is getting from bad to worse. Some of these short-term measures are necessary. But are they sufficient in the long run to make our growth story much more robust with moderate inflation so that the common man can enjoy better purchasing power? One such long- and medium-term measure is to create a culture of competition in our country. At present, it is largely missing. True, there is a competition agency and it is taking some steps in inculcating competition into our economic landscape but it is not equipped to catalyse large-scale competition reforms. Competition reforms and the existence of an effective competition regime can help growth and curb inflation. More and effective competition can generate new innovations, which is a major driver of economic growth. Recent research has proven this point and successfully contradicted the conventional thinking that competition is bad for growth because it can curb monopoly rents that was considered as an incentive for successful innovations. A detailed study by the Australian Productivity Commission on the impact of wide competition-policy reforms in Australia carried out in 1995-2005, has estimated that enhanced competition has resulted in a 5.5 per cent more growth in the Australian economy over a period of time. And there is enough empirical evidence that higher product market competition can reduce inflation over a period of time. Competition can increase supply — more choice for consumers at affordable and fair prices. Fortunately, the government of India has initiated the adoption of a National Competition Policy that seeks to promote competition reforms across the economy. A draft is under wide consultation. This policy is expected to be adopted within the next six months. Effective implementation of this policy along with our competition law and the necessary coordination between different agencies will usher in a new era of competition reforms in India. Effective competition reforms to enhance growth and curb inflation should be the mantra.
https://cuts-ccier.org/how-competition-reform-can-check-inflation/
|Pre-event||Community||Increase in donor cards signed at DMV over campaign. | Increase in donations over period of campaign. |Good measure of approval/ commitment, easy to collect. | Used only # of donations in 80-89; no rates to adjust for population size or technological advances over a decade; not clear what aspect of a multi-faceted program had the most positive effect. |Wolf 1997||National strategy to promote public acceptance of organ donation.||Cross-sectional survey (incomplete)||Pre-event||Pre-event||Community||Media penetration strong; positive response in favor of donation for those viewing campaign.||Survey fairly easy to collect but must be large; no comparison of change in attitudes of those who hadn’t seen campaign for control.| |Gallup 1993||Survey on public attitudes about donation.||N/A – survey||Pre-event||N/A||Community||National approval of organ donation, etc.||Baseline only- might be useful to measure difference over time or by region of intervention.| |Stratta 1997||Hypothesis that organ donation rate would increase if the public could be assured that donated organs are appropriately used.||N/A||Consent rate, | rate of nonrecovery (including donor instability and inability to find recipient), disease/damaged organs, consent rate, utilization |Post-donation||Community||N/A||Not a true evaluation but analyzes steps in the process to identify areas for future activity. Recognition that consent is part of the issue, but hospital needs to improve efforts to use a retrieved organ.| |Provider education and hospital/OPO organization| |BNA, Feb. 2, 1998||Retrospective survey of donor families and critical care staff concerning perceptions of quality of care and training respectively.||Case-control study||Post-event measures, but focus on quality of hospital care.||Post-event||Hospital, donor and nondonor families||Donor families ranked the quality of care their loved one received higher than non-donor families.||Although this study does not evaluate an activity, the information from this study can be used to identify new interventions to improve quality of care and increase donations.| |Chabalewski 1997||A program to promote a greater emphasis on organ donation issues in nursing education programs in the US.||Time series||Pre-event measures of how many schools incorporate donation & transplantation activities in their curriculum||Pre-event||Hospital – Nursing schools||Nursing schools have become more aware and have started to incorporate more Donation &Transplantation courses and information in their curriculum.||Study is an indirect proxy of ultimate organ donation Focus on those directly involved in the organ procurement process post-event, but indirect. 
Study possibly could be followed up to link education to attitudes and success in clinical practice.| |Garrison 1991||Analysis of Kentucky OPO performance over one year.||Cohort||Post-event||Post-event||Donors/ families||Consent rate higher in cohort for whom requests were decoupled from brain death explanation.||Nondeliberate, but good design: presumably randomization of decoupled request though small number of hospitals suggests issues about one or two poor requestors who might have skewed results.| |Hartwig 1993||Effect of organ donor race on health team procurement efforts.||Cohort||Post-event||Post-event||Hospital||African-American patients are less likely to be identified, asked, or become donors.||Sample size decent- 85 and 67; use of MRR mitigated potential for skewed results from staff knowing there is interest in studying this issue; MRR involves time and effort.| |Niles 1996||Evaluation of the timing factor in the consent process.||Cohort||Post-event||Post-event||Donor/ families||Decoupled requests more successful.||Retrospective study; relatively easy to collect data; OPO-wide sample large enough to be significant.| |Cutler 1992||An analysis of donation events in one OPO over two years.||Cohort||Post-event||Post-event||Donor/ families||OPO coordinators more effective at soliciting consent than physicians.||Study doesn’t give numbers of cases, only rates, and says physicians ask most often- if much more often, coordinator success might be less on account of coordinator status than particular success of one good coordinator.| |Sheehy 1996||Comparison of procurement success between trauma and non-trauma hospitals.||Cohort||Donation rate, post-event||(No activity)||Hospital||Trauma centers are more effective at each of the post-event steps and in donation rate.||Baseline data only; MRR used is relatively expensive.| |Beyer 1992||Study of donations at a Midwest tertiary care hospital over six months.||Cohort||Post-event||Post-event||Donor/ families||Physician request consent rate higher than when nurse requests; consent rate also higher with higher Death Anxiety Scale (DAS) scores and more liberal religious beliefs (1-6 scale).||Regressions performed to calculate results; sample size of 228 requests might be too small to break out so many variables with significance; prospective design stronger than most.| |Beasley 1997||Impact of a comprehensive, hospital-focused intervention to increase organ donation over two years at 50 hospitals in three OPOs.||Time series||Post-event, donation rate||Post-event||Hospital (from OPO)||All measures from ID to donations increased pre-intervention to post-intervention.||Large sample lends credibility, as does pre- and post-intervention, rather than during, when there might be a learning curve; data collection largely a part of intervention except MRR, which is relatively onerous. 
| |Cohort||Post-event||Post-event||Hospital (from OPO)||Consent rate higher among cohorts for which request was decoupled, made in a quiet/private setting, and OPO coordinator involved.||Again, large sample lends credibility, though results might be skewed in that procurement staff might not have followed guidelines for decoupling, etc., even in the midst of an intervention.| |Shafer 1997||Impact of inhouse coordinators on organ donation rates in non-donor hospitals, a single coordinator has a contractual obligation to identify and manage potential donors.||Time series||Donation rate | Post-event measures: Organ donor referrals, donor count, consent rate |Post-event||Hospital level, but sponsored by the OPO||The program has been a cost-effective way of assisting OPO’s efforts to recover organs from hospitals in Texas.||Actual evaluation with outcomes measures, 3-year follow-up. Working directly in the hospital post-event to improve family consent rates, routine notification, and communications with the OPO.| |Gentry 1997||Using an "appropriate messenger" (Dr. Callender) approach by hiring an African-American to spearhead a minority community education program and having an on-call African-American coordinator to increase donation rates.||Time series||Organ donation | Post-Event: Consent Rates Referral rate Medically suitable donor referrals |Post-event||OPO program initiated at the hospital level||Consent rates and donation rates increased among African-American donors.||Targeted population in North Texas, focus on increasing minority consent Study design does not control for other potential confounding factors that could have impacted the donation rate beyond the OPO-coordinator program.| |Shirley 1994||Assessment of one OPO’s efforts to increase donation by a variety of initiatives.||Time series||Donation rate, post-event||Pre-event, Post-event||Hospital (from OPO)||Doubling of referrals and donations since beginning of program.||Large enough results that program probably works, but the role of each part of a multi-faceted campaign unclear; data easy to collect.| |Politoski 1994||Description of continuing education program offered by the National Kidney Foundation.||Time series||Post-event||Pre-event||Hospital||After the workshop, nurses are more likely to identify donors, initiate requests, and accompany requestor.||Relatively easy to collect; translation of pre-event intervention to post-event behavior particularly good (though link to success of these requests would be instructive)- contemporaneous controls might have been useful.| |Burris 1996||Effect of a program to define a donation policy, educate staff, and document and monitor compliance at one institution.||Time series||Donation rate, Post-event||Post-event||Hospital, Donors/ families||After implementation of CQI, significant increases in referrals and donations.||Good to measure both donation rate and post-event indicators: sample size for donation rate larger (so more confidence in results), but post-event measures are also significant. 
Use of request protocol form facilitated data collection.| |Kappel 1993||Two phase effort in increase donation from black community: public education and minority requestor.||Time series||Post-event||Post-event||Donors/ families||Increase in consent rate over time.||Good to use post-event measure, but sample size might be too small- researcher doesn’t explain why referrals actually went down over the year of intervention.| |GAO 1993||Survey of OPOs on their data collection variables.||N/A --Survey||Post-event||Pre-event||OPO||What variables OPOs collect, and to what extent.||Point: OPOs should collect more, and more consistent, data.| |Savaria 1990||Feedback solicited from families of donors.||N/A --Survey||Pre-event, Post-event||Post-event||Donor/ families, Community||Questions asked about whether donor had donor card or family discussion before precipitating event; also reasons for donating.||No comparison with control group of non-donor families; 50% response rate compromises results; sensitive issue; relatively inexpensive.| |Shafer 1997||Study of organization of three successful organ procurement organizations.||N/A --Survey||Outcomes, Post-event, Pre-event, other||Both||OPO||No conclusions drawn, just benchmarks suggested for successful OPOs.||Might have relevance with much larger sample sizes that include true cross-section of OPOs (not just three successful ones) to see which variables really do correlate with success.| |Morris 1990||Performance of one hospital over 3 years.||Single case||Post-event||Post-event||Hospital||Baseline data established.||Sample size of 10 over 3 years compromises results; MRR to determine potential donors might have been costly.| |Kowalski 1996||Assessment of a hospital’s establishment of an Office of Decedent Affairs, whose staff facilitate various aspects of the donation process.||Single case||Outcome||Post-event||Hospital||After office consolidation, donation rate tripled.||Outcome measure suggests success, but unclear which aspect of a multifaceted effort is responsible; easily measured.| |McNamara 1997||Medical record review to determine organ procurement effectiveness.||Methods||Donation rate, Post-event||Post-event||OPO, Hospital||MRR is stronger than death certificate review which is stronger than DPMP as performance measures.||Good methodological recommendations.| |Evans 1992||Death certificate review to determine organ procurement effectiveness.||Methods||Donation rate||Post-event||OPO, Hospital||Gives two death certificate review methods to determine potential donor pool.||Methodological recommendation; baseline data only.| |Wight 1997||Describes the European Donor Hospital Education Programme skills workshop and Donor Action, a program designed to help hospitals improve policies and procedures to optimize donation process with 5 modules: donor detection, referral, family care and communication, maintenance, retrieval.||N/A||Pre-event measures: Increasing attitude, knowledge, communication, teamwork; | Post-event: increasing approaches to families Donation rates via MRR |Pre-event activity to help hospitals better handle events||Hospital||N/A||Donor Action program has targeted key weaknesses and identified areas where the most donors were being "lost" -- changes could be measured on a hospital by hospital basis -- this article does not describe any evaluation. 
MRR and hospital staff surveys are generally expensive to conduct.| |Matesanz 1997||Proposal of future activities: proactive donor detection programs with specifically trained professionals, education of the legal profession, social education of health professionals, media, and public opinion, and organizational structure of the organ donation system are all crucial.||N/A||Ultimately would have to include all three performance indicators||Pre-event||Range of activities targeting all stakeholders||N/A||Proposes a wide range of activities but no planned evaluation beyond national organ donation rates and country vs. country comparisons.| |Legislation| |Roels 1996||Evaluates presumed consent policies in Austria and Belgium.||Cohort||Donation rates||Pre-event||Community||DPMP higher in countries with presumed consent policies.||Without mention of baseline, not clear that differences don’t correlate with societal differences, but use of two countries eases this issue somewhat- perhaps baseline data could have been included as a historical control; easy to measure.| |Bergstrom 1997||Swedish Transplant Coordinators’ experience with the New Transplantation Act and Donor Register which switched presumed non-consent to presumed consent – Government led information campaign, national database of donors and an attitude survey.||Time series||# of donors, survey of pre-event measures (e.g., donor card signing), and post-event measures (consent rates) | |Pre-event||Community||No difference in number of donors in Sweden had been detected.||This study provides one example of how difficult it is to measure changes in the donation rate due to legislation and public awareness activities. The time span of the study may have been too short to detect differences.| |Bacque 1997||Development of laws (in Argentina) to ensure transparency and sanitary safety in the development of organ transplantation – Organ Transplant Law 1977 created the National Organ Procurement Organism, shift towards improving provider attitudes and creating a new operational model – Hospital Coordinator Model, and educational campaigns.||Retrospective time series||DPMP, post-event, and pre-event measures. | Percentage of multiorganic retrieval, Family refusal rates, Percent lack of information about brain death and corpse integrity, Religious refusal,
https://aspe.hhs.gov/report/increasing-organ-donation-and-transplantation-challenge-evaluation/table-organ-donation-activity-evaluations
Long-lost da Vinci painting sets auction record of $450.3 million. Leonardo da Vinci’s painting “Saviour of the World” sold for $450,312,500 Wednesday at auction, Christie’s said. The price, which includes a buyer’s premium, makes it “the most expensive painting ever sold at auction,” the auction house said in a statement. In the weeks leading up to the auction, some 27,000 people, including Leonardo DiCaprio, Alex Rodriguez, Patti Smith and Jennifer Lopez, flooded into viewing halls in Hong Kong, London, San Francisco and New York for a chance to glimpse the highly anticipated treasure.
https://www.premex.io/blog/2017/11/19/davinci
Plant Type: - epiphyte, terrestrial Height to: 5' USDA Hardiness Zones: - Zone 10 (30 to 40°F) - Zone 11 (40 to 50°F) Sun Exposure: - full shade Bloom Description: Bird's nest fern has long, lance-shaped, bright green fronds that unroll from a central mass of hairlike black-brown fibers that suggest a bird's nest. Emanating erect and stiff as they do from a central point, the foliage grows in a funnel shape, and this too looks like a bird's nest. The fronds are entire; that is to say, they are without teeth, lobes or pinnae. Plant Perks: - Suitable for Houseplant - Easy to Grow/Ideal for Children - Container Gardening Propagation: Bird's nest fern is propagated from spores, which are sown as soon as they are ripe at temperatures of 60°F (15°C) or higher. Gently brush the spores onto a clay pot, keep the pot moist in a plastic bag, and keep it out of direct sun. Growth is slow. Native to: Asplenium nidus occurs in tropical regions throughout the Old World. Ferns in general reproduce and spread with tiny, often wind-borne spores, and it is not unusual for some to have very large geographic distributions. Notes for Identification: Source: Floridata - Grow bird's nest fern in a container with a humus-rich potting medium. In the wild, bird's nest fern is an epiphyte, growing on trees deep in the tropical rain forest. The nest-shaped funnel of fronds collects rainwater and nutrient-bearing dust from the air, like a bromeliad. In cultivation, bird's nest fern is probably the easiest fern to maintain indoors. It grows slowly and is the ideal house plant, thriving in low light and low humidity where few plants could even survive. The graceful arching fronds should be given enough room so they do not touch any objects, including your hands! The generic and common names (spleenwort) derive from the supposed medicinal value (to the spleen) of the plants, as reported by the ancient Greek Dioscorides. There is no basis in modern medicine for this belief, which was apparently based on the spleen-shaped spore clusters. Care: Bird's nest fern needs enough space around it so that the fronds do not touch anything, or they will be damaged. Scale insects sometimes attack this fern and should be removed by hand. Chemical insecticides are toxic to Asplenium ferns and should never be used on them. This fern thrives in low-light conditions. Direct sunlight will cause the fronds to dry and turn brown. Bird's nest fern does best in front of a north-facing window. Although bird's nest fern does best in a humid atmosphere, thriving at a humidity of 40% or greater, it survives under drier conditions than probably any other houseplant fern. For maximum performance, it can be kept in a terrarium, or its container can be placed on a tray filled with pebbles with water almost to the bottom of the container. Regular misting helps keep the fronds bright green and allows the plant to attain its maximum size. Specimens kept in a centrally heated room without misting will survive, but will probably not grow very large. Mist with rain water or boiled tap water. Keep the soil moist at all times except during the winter, when the fern should be watered only when the soil begins to dry out.
http://thegardengeeks.net/plant-guide/2538-asplenium-nidus
This may be the first real indecision I have had in my first three courses of my DLL journey. I know that this innovation plan does not need to be started now and that this is just the beginning of a long process to implement and refine my plan, but I still feel some anxiety as I go through the decision process. After reading Blended: Using Disruptive Innovation to Improve Schools by Horn & Staker, I believe I may have come to some conclusions. My school has been blessed with 1-to-1 Chromebooks in fourth and fifth grade, but are they really being used in the most effective way? My innovation plan would leverage these devices to allow for more personalized learning. I plan to implement a blended learning model that includes a personalized learning plan for each student, small-group instruction to help students when they need direction or guidance, and a daily reflection/accomplishments time for students to focus on what they may have mastered that day and what they may need to practice more. The results of this reflection time will provide the information needed to update each student’s learning plan and also determine which students may need more help or guidance the next day. This will provide students with their own personal data to record and track in order to build confidence and student ownership of their learning. I have decided not to be too ambitious and have set a goal to implement the pilot in my classroom next school year. I will, however, be trying a few of the components out this school year to get student feedback and reactions. This feedback will allow me to make the setup of these components more efficient and effective for all of my students next school year. If my innovation proposal is approved by my principal, then the following school year I hope to help the other fifth-grade teachers on my campus implement this blended learning model as well. So after over a week of reading, reflecting, and organizing my thoughts, I have finally decided on a plan, and now it is time to be the ‘change’ needed in our current education system.
References
Horn, M. B., Staker, H., & Christensen, C. M. (2017). Blended: Using disruptive innovation to improve schools. Jossey-Bass.
https://brentmischnick.com/2018/10/23/choosing-an-innovation-plan/
Contents Gods are beings formed by the thoughts of humans and born into the world without being influenced by the will of the people. They grow stronger the more people worship them and those that are no longer worshiped become Elementals. There also exist cases like gods becoming infamous as icons of terror, distorting their appearances through the human thoughts and ideals that formed them. Gods are split into two categories, those things that were already there that became gods after becoming objects of worship and those that were reborn as gods. Those that were already there include heavenly bodies like the sun and moon, natural phenomenon like storms, earthquakes, thunder, and wind, or others like the "power of the planet that stabilized the surface of the earth." Those that were reborn as gods were close to human originally, but deviated from humanity through various factors to become objects of worship. Beings like heroes and messiahs that are "systems necessary to thrive" fall under that category.[1] The gods of ancient Mesopotamia, the gods of antiquity, were part of the first category, natural phenomena possessing intentions and personalities. Ea was one such god that performed the act of building the planet, and many others began to build nations after the primordial earth was stabilized. They reigned as the laws of heaven, controlling humans through methods like the birth of Gilgamesh, though that worked against them to end their age.[1] The second category, the gods of modernity, came to the forefront after their passing. The gods of Mesopotamia found themselves lacking viability towards the end of their age and the start of the human age. Living things normally have the instinct to adapt their surroundings to something better suited for the habitat, but gods merely "existed" no matter their power. The viability of humanity was exceptional at the time, as though it was slight counting singular humans, their force in sheer numbers was high. They did not have any transcendent beings commanding the vast majority, but they had a higher level of intelligence than all other living beings that was distributed across all of them. This differed from the gods, no matter how powerful, whose acquired personalities were not much different from humans in their inventiveness and cognizance. Though the gods were omniscient, each could only come to one conclusion and acquire one personality, differing from the threat of the vast number of humans with their large amount of cognizance, their capacity for change. Their desires were boundless, relentless, and unrestrained, causing the world to evolve in accordance. The gods feared that the rules of the planet would change as they continued to breed, bringing about a time where it was no longer necessary for natural phenomena to have wills. They brought about Gilgamesh for that reason, believing that a sovereign belonging to the human side and championing their cause would allow them to keep humanity in check. He was to be a keystone hammered into the earth by the heavens to delay the complete fracturing between themselves and humanity, but he rejected their ways and brought an end to their age.[2] When Velber 02 arrived on earth 14,000 years ago, the ancient gods lost to the giant who fell from the stars. Her existence is the trauma for the 12 Olympian Gods and the Roman Gods.[3] Gods have Authority (権能, Kennou?) 
over the world, allowing for world construction, event shifting, time-flow manipulation, kingdom-building, and other such powers of that level. Authorities are special abilities that fall into a different category than Codecasts, Skills, and Noble Phantasms. While normal skills "accomplish a certain task by following a corresponding principle", Authority works "simply by making things happen because one has that right." Though such powers existed in the Age of Gods, six thousand years before the modern era, the advent of the AD calendar relieved the gods of their duty of advancing human civilization and retired them to the past.[4] High-level gods like Amaterasu are powerful enough that even the time axis has almost no bearing on them.[5] Some, such as the gods of Olympus, age over time, having to eat golden apples to keep from aging, while others like Stheno and Euryale are born as immortal idols.[6] Gods are able to mate with humans, bringing about demi-gods of various strength like Heracles and Cú Chulainn. While full gods cannot become Heroic Spirits, demi-gods and those who lose their divinity like Medusa and Chiron are applicable. It is also possible for demi-gods to ascend to full godhood like Heracles. Heroic Spirits like Alexander the Great can obtain some Divinity through the addition of their legends to their power, even if those legends may not actually be true. While Divine Spirit Servants have the ability to use the Authority of the past, utilizing it in the modern age requires a penalty of self-ruin. However, in Fate/Grand Order, Ishtar appears in a pseudo-Servant condition in which a human with a wavelength close to her own was made into a vessel. In Fate/Extra CCC, the one who swallowed up the Moon Cell has reached a level of power that is Authority class.[4] Bodhisattvas are gods of cosmic proportions, easily capable of governing the solar system. Saver is one such being, and he is around the level of beings like Kiara Sessyoin and True Demons, capable of affecting the entirety of a planet. Bodhisattvas are people who devote themselves to bringing salvation to sattva, as well as those who have attained moksha. The worldview of Buddhism is far vaster than that of other religions, encompassing the universe as the kingdom of the Buddha. It uses numerical concepts like Nayuta (10^60) and Muryoutaisuu (10^68) to express their magnitude, and the size of the universe is the great trichiliocosm, composed of one thousand medium trichiliocosms, which are in turn composed of one thousand galaxies (small trichiliocosms), each made up of three thousand worlds. It is on the scale of the galaxies, galaxy clusters, and superclusters known in the modern era, interpreted over two thousand years ago.[7]
Earth Mother Goddess (大地母神?) - The power of creation of the mother goddesses originated with Çatalhöyük's forgotten goddess 8,000 years ago, who was the mother of the earth goddesses who created the earth and the "Root" that created all creation.[8] It branched to many other goddesses over time, including Tiamat, Cybele, Ishtar, Inanna, Anat, Astarte, Gaia, Hera, Artemis, Aphrodite, Demeter, and Athena. The earth mother is a guardian deity and protector revered by the people, and she is the sacrifice that feeds them with the crops born from her body and the beasts of the fields and the forests.
Her true nature is the cycle of life and death, the circle of the food chain where she feeds man with her blood and flesh, kills man over time and absorbs him as nutrition, and once again feeds man with her replenished blood and flesh. She represents the system of life itself, so those born of the earth cannot deny her authority. Only when they have fulfilled the wish of the Çatalhöyük, leaving the planet for space and ending the infantile stage of intelligent lifeforms, will they no longer be bound to it. This authority over the power of bringing death and giving life, symbolized by holes opening in the earth and the sky, caves and the moon, the whirlpools and typhoons of the sea, the craters of volcanoes, and such, first births evils that bring forth all manners of death. Once the evils are finished spreading death, they leave the promise of good harvest and fertility. The goddesses that followed inherited this authority, and many were imaged with mural crowns because many were also the guardian deities of cities. Most used the authority to give birth to countless monsters and giants and posed a threat to the gods and man, or give birth to heroes and became protectors of man. Tiamat and Gaia are examples of those who became threats, and Hera is an example of a mother of heroes.[8] BB used Earth Mother data and Kingprotea came to existence as a High-Servant.[9] Zeus, the King of the Gods and God of the Sky and Lightning, is the father of Heracles, and although there is no proof, Alexander the Great was also called his son in some stories and gained Divinity as a Heroic Spirit. The Divine Bulls that pull Gordius Wheel, which were an offering to him stolen by Alexander, are called the "Children of Zeus" and generate lightning that is said to be a manifestation of Zeus. Atë, Goddess of Ruin, also the name of the action made by heroes (usually because of pride) that leads them to their downfall Profile Name: Gilgamesh (I am the absolute first king. The king of heroes among heroes. As such, that is what you should call me.) Height: (If I must say, then) 182 cm (What? More normal than you expected? Fool! Are you telling me to become a meat puppet!?) Weight: (Again, if I must say, then) 68 kg (approximately. Just so you know, there is no excess flesh. Again, since I was born, there has been no time when this body gained excess weight. It has been of use, mongrel.) 3 Sizes: B94/W73/H93 (A golden body is befitting the golden king. It is far too beautiful for your eyes, but if you desire me to cast off, bring forth a command seal.) Home: (Let's see. If I were to answer as a human, then it would be my capital,) Uruk Birthday: Unknown (The beginning or end of the land, choose whichever you prefer.) Weak Point: Do not ask of what you already know. (It is obvious that my weakness is that I have no faults.) Fetishism: (Let's see..) That their molding be beautiful is natural, there is no need to speak of it. (That which appeals to my heart is the sound of a heart breaking, and when beautiful objects are washed away, that even more beautiful drop of liquid. When you lick it, it is sweet.) What do you seek from your partner? ............ (That answer is one you should search for yourself.) Favorite underwear color? (You, are you now an incorrigible fool?) What is there besides gold!!! How do you spend your free time? Pleasure trips. (No matter what kind of age it is, I cannot cease wandering around the land. Well, I did sleep until I returned to level one though.) 
Lynchpin of Heaven That which exemplifies Gilgamesh's way of life. That which expresses how the gods of ancient times lived and his birthplace. In history, the gods of the universe are split into two categories. When things that were already there become gods, and when things are reborn as gods. Things that were already there are when things such as heavenly bodies, like the sun and moon, or natural phenomena, like storms or earthquakes, become the objects of worship. Things that are reborn as gods is when they were human in the beginning, but due to various factors, they deviated from being human and became the objects of worship. Heroes and messiahs and systems necessary to thrive fall under this category. The gods of Mesopotamia are of the former category. Natural phenomena possess intentions and personalities and reigned as the laws of heaven. That which those gods sent out in order to remonstrate the people of the earth was Gilgamesh. The lynchpin of heaven. The king who was brought forth by the hands of the gods in order to secure the the surface which was leaving the age of the gods. However, he did not fulfill that role. Considering his own desires first, controlling his kingdom as a person, he rejected how the gods lived as something from a past age. "I will obey the gods. And respect them. However, be destroyed. On the occasion that you brought me forth, you lost your positions by your own actions." Thus, ancient Uruk began, and the king who parted ways from the gods was born. The King of Heroes, Gilgamesh. The king that was desired as a lynchpin, in truth, became the tip of the spear that put an end to the old age. Chains of Heaven That which exemplifies Gilgamesh's childhood. It expresses his battle with the friend that was his equal and their adventures. The one and only tale of a merit that will not change for all eternity in the world. Poison of Death That which exemplifies Gilgamesh's youth. It expresses his pursuit of perpetual youth and eternal life and its circumstances. Thus, that childhood time of his came to an end. The tyrant with neither blood nor tears. The hero who enjoyed himself, mastering every treasure, every pleasure. The king with the conviction to treat good and evil equally. As the absolute basis is "himself," there is no need for other ideologies and ways of life. Though on the point of vaingloriousness, he is the same as the Macedonian King of Conquerors, but the largest difference between the King of Heroes and the King of Conquerors is the point of "not needing retainers." This hero, from the beginning to the end dictated only "himself." His loves are treasures, tools; people were only things that would disappear. ...Even if he recognizes how much it deserves to be loved. Considering the theme of "CCC," it's impossible to avoid questions regarding women. This is a digression, but Gilgamesh's preferred type is "a noble maiden (virgin)". (It seems that "a flower that blooms in the plain" was his type during his childhood.) It seems he hates women who, while relying on men, have using men as their interior face. Authority [Miracle] Kennou. Authorities are special abilities that fall into a different category than Codecasts, Skills, and Noble Phantasms. An Authority is a power that is on the level of creating a world, and includes things like altering events, time-flow manipulation, and kingdom building. 
Authorities existed in the age known as the Age of Gods, which was about 6000 years ago, but after entering the Common Era human civilization advanced to the point where Authorities were no longer needed, and so Authorities became a relic of the past. Normal skills achieve a certain result based on some logical process or principle, but Authorities realize a result simply because the user has the right to do so. A God Spirit Class Servant should naturally possess Authorities, but to use them in the modern age requires a corresponding compensation to be paid (involving strain/damage inflicted on the user). In CCC the one who swallowed up Moon Cell has reached a level of power that is Authority class. Golden Fur, White Face. The nine-tailed divine spirit we get a glimpse of in Casko's third SG. Deification of the sun. Amaterasu gone delinquent. When Casko goes to nine-tails mode this is what she will......not be. All this is is the distantly ancient but still existing original source of Casko. When you're a divine spirit of this class, the time axis has almost no bearing on you. Normally someone entering the mausoleum would be burnt to a cinder, but our Xavier, meeting our expectations like always, not only is not incinerated, but demonstrates his usual reckless bravery (stupid menu choices). She would have been in a naked nine-tails spin on the naked apron, but the ethical barrier couldn't be breached so we made sure she wore formalwear. Oh and if a person who doesn't dream go in, all he'll see in the mausoleum is a huge sun. All the World's Desire [Noble Phantasm] Angra Mainyu/CCC. This is the anti-planetary Noble Phantasm Kiara gains after her reincarnation. Angra Mainyu is the most self-serving heinous Noble Phantasm of them all where Kiara uses the world to masturbate. Kiara offers her body in a sexual manner to all living things on Earth, including humans, animals, and plants. By completely accepting all people as is she gains the greatest possible mastery over their senses and carnality. This Noble Phantasm is derided for being demonic and it’s similar to the living sacrifices used for certain artificial magic ceremonies used the past in order to prove the evil of humans. It is a vortex of pleasure that melts all sentient beings, and digests their “life” in an instant. While in reality it all happens in an instant, but it feels like an eternity of pleasure. Regardless of how different the structure of a beings intelligence may be, this vortex affects all without exception. It is not a Noble Phantasm that is limited to affect only those on Earth. The damage increases if certain conditions are met. If the target is of the opposite sex, the damage increases. The more intelligent a being is, the more damage it does. As far as the setting and lore are concerned, the Servant who would be hit with the least damage would be Archer, and Gilgamesh would be damaged the most. The reason why this attack’s power, visuals, and scale are so huge is because reincarnated Kiara is a being similar to Saver. People who devote their life to bringing salvation to Sattvas (living things) are referred to as Bodhisattvas. People who have attained moksha (libertation/release) and become Buddhas are also Bodhisattvas. A Bodhisattva is a god of universe scale and power, and can easily manage things on the scale of the Solar System. Buddhism has a sense of scale of the universe that stands out from that of other religions. 
There are concepts like Nayuta (10^60) and "The Infinitely Vast Number" (10^68) which were created to represent the scale and size of a Buddha. The entire universe is the domain of a Buddha. In Buddhism, the universe is considered to be made up of three billion solar systems (a great trichiliocosm). A galaxy comprises 3,000 solar systems (a small trichiliocosm), and 1,000 small trichiliocosms make up a trichiliocosm. This actually aligns with the real scale of galaxies, galaxy clusters, and superclusters. India had this stuff figured out over 2,000 years ago? Holy crap! In the initial design phase of the game I submitted some paperwork describing this Noble Phantasm. I thought there would be no way this would get by CERO untouched, but surprisingly enough it did. I still have no idea how it was able to make it past them.
Cursed Cutting Crater (C. C. C.) An attack on the world by BB in control of the Moon Cell. Also called the spirit particle imaginary pit. A world purge by means of event rewriting, making maximum use of the EX skill "Potnia Theron" BB had acquired. The goddess who is the mother of the earth goddesses who created the earth is, in other words, the "root" that created all creation. This Noble Phantasm is an anti-world Noble Phantasm that outputs information like an ultra-precise 3D printer and crushes the present world with the world desired by the user. The space eroded by BB becomes imaginary space and a curse that consumes reality. CCC, as the name suggests, is a cursed pit that bores out reality. In addition to scattering all objects inside on an atomic level and reconstructing them, it is capable of writing over and rewriting information of fields such as luck and coincidence. While theoretical, it is also thought possible to distort the time axis by interfering with gravity fields and rewrite the law of cause and effect. When used in combat, BB's familiars, the shapeshifters, engulf the target, and after turning into a sphere, the target is wiped out of existence along with the dimension containing it. Normally, a structure in which value can be shared cannot be constructed in the far side of the world, which is imaginary space. What established BB's internal world as the Sakura labyrinth was the power of this Noble Phantasm.
Advanced Level AI An artificial intelligence configured according to its program. In this age, they are treated as things that "function as programs," but "do not exist as actual beings." In other words, things that "are" right in front of your eyes but "are not." They are merely mechanisms for the sake of allowing human lives to progress smoothly. As the same is true for the SE.RA.PH, they are virtual lifeforms that are created along with the commencement of the Holy Grail War and disposed of along with its completion. Within the SE.RA.PH., the humanoid virtual lifeforms are: the NPCs that perform only one task, AIs who have been granted the capability for self-judgement, and the advanced level AIs who have been entrusted with the capability for self-judgement and the management of a section. Advanced level AIs are things created to efficiently promote the Moon Cell's primary objective of "human observation." As they are perfect reproductions of humans, they have been programmed with at least a soul, but while the soul "is there," its contents are still colorless. The virtual lifeforms that the SE.RA.PH creates are reset with each round of the Holy Grail War.
The NPCs and AIs return to zero, while advanced level AIs retain just their persona and have their memories reset--made to pretend that they never existed. When the Holy Grail War is over, the one who is left alive is the Master standing at the summit. This is a fact that applies not just to the Masters but is also held in common by the virtual lifeforms that the SE.RA.PH created. It appears that BB is an advanced level AI entrusted with managing the health of the Masters, the same as Sakura. However, as she is now rampaging for reasons unknown, the methods she uses to "manage the Masters" have become extreme.
Khakkhara of Direction The teacher's pointer that BB carries. A device that allows her to use her authority as an advanced level AI to its fullest. There are restrictions from the far side of the moon, but with this, BB can revise the game rules (spirit particle laws within the SE.RA.PH). It is the ten crowns of direction worn by the Beast of Babylon converted into a teacher's cane. This is due to the theory that the seven-headed beast is male, which opposes the theory that it is female; its true nature is a phallus (an erect rod). It is the symbol of the king of the world who was granted power and authority by a great dragon and, during a period of 42 months, spat as many arrogant words as he desired and had received the right to desecrate all things. The beast's heads are the seven hills of the Roman Empire... Capitolium, Palatium, Aventinus, Esquilinus, Caelius, Quirinalis, Viminalis. In other words, it symbolized the Roman Empire itself, and those ten horns were symbols of... Augustus, Tiberius, Caligula, Claudius, Nero, Vespasianus, Titus, Galba, Otho, Vitellius... those emperors.
Golden Grail [EX] The golden grail that BB possesses. The Holy Grail. The Holy Grail that grants its owner's impudent and selfish desires. The grail held by the great enchantress who appears in The Revelation of St. John the Divine. Though it is a counterfeit Holy Grail, it is precisely because it is a counterfeit Holy Grail that it has become a "genuine" Holy Grail that grants the desires of humans without paying heed to right or wrong. To the Christians of those days, who considered honorable poverty a virtue, this grail, which was formed as a symbol of treasures such as gold, no doubt signified corrupt wealth, greed, and the vanity that leads one to adorn oneself. Given that this grail is held by the great enchantress of Babylon, who is the personification of the Roman Empire, it clearly symbolizes the wealth of the earth, which is the inversion of Christ's words, "Do not pile up the wealth of the earth." "I saw a single woman astride a red beast. This beast, whose entire body was full of numerous blasphemous names, had seven heads and ten horns." This is the description of the beast of destruction that appeared in the Holy City and the woman who was straddling this beast, according to the book of Revelation. A large beast with seven heads like a hydra came from the sea and disembarked toward the Holy City. This beast and woman, while two, are as one, and it is impossible to think of them separately. It is said that this woman was born from evil, and that those who are quick to anger and those who are greedy cannot resist her allure and become corrupted by that grail. This is a digression, but it is sad that the red Saber who claims the title of emperor was, after her death, seen by believers to be on par with this beast.
Self-Modification [EX] A skill that allows one to restructure oneself. AIs created by the Moon Cell are under an absolute rule, "Improving one's functionality is not allowed." However, BB, who had escaped that yoke due to her breaking down, began to expand her own functions. In order to boost her calculating power, she had that black noise prey on and dismantle NPCs, AIs, and finally, Servants, and utilized them as her own memory. Though they were appendix-like additions that won't spare her from the fate of collapsing on her own, BB has changed into a hyper level AI who holds an enormous amount of capacity. Like the towns on reclaimed land that continue to build architecture while knowing that it will be swallowed by water or Frakenstein's monster. Potnia Theron [EX] Potnia Theron (Conception of All Animals). The authority of the goddesses whom BB compiled and absorbed from the abyss of the mooncell. What it is is the embodiment of the power of creation possessed by the mother goddesses, originating from a forgotten goddess approximately 8000 years ago (the goddess of Çatalhöyük), and branching to Tiamat and Cybele, Ishtar, Inanna, Athena, Astarte, Gaia, Hera, Artemis, Aphrodite, Demeter, Athena, etc. Many are imaged with mural crowns. This is because many of the goddesses were also the guardian deities of cities. ◆ The symbols of this authority are the holes opening in the earth and the sky (caves and the moon), the whirlpools and typhoons of the sea, the craters of volcanos, and so on, and from these, evils bringing forth all manners of death are birthed. But once these evils have finished spreading death, they leave the promise of good harvest and fertility. This authority is none other than the power of bringing death as well as giving life. The earth mother is a guardian deity revered by the people and their protector, and at the same time she is the sacrifice that feeds them with the crops born from her body and the beasts of the fields and the forests. The earth mother feeds man with her blood and flesh, kills man over time and absorbs him as nutrition, and once again feeds man with her replenished blood and flesh. This process is the circle of the food chain, and the cycle of life and death can be said to be the true nature of the earth mother. With this authority, most of the goddesses gave birth to countless monsters and giants and posed a threat to the gods and man, or they gave birth to heroes and became protectors of man. Tiamat and Gaia are examples of those who became threats, and Hera is an example of a mother of heroes. ◆ Those born on this earth cannot defy the authority of the earth mother, as that would mean rebelling against the system of life itself. However, it is when they have left this earth to go into outer space and ended the infantile stage of intelligent lifeforms that they should no longer be bound to this authority.. The wish of Çatalhöyük is the coming of that day. Spirit Particle Imaginary Trap Cursed Cutting Crater (Spirit Particle Imaginary Trap) The official name of this stage. The fact that BB had reached the Moon Cell core established this “place”, formerly imaginary space for storing malicious information, as real space in the present, past, and future. It was reconstructed at her hands as a “far side of the moon” where even Masters can exist. BB attaining the power of the primordial goddess succeeded in quantifying the “nothingness” before the birth of life. She transformed into a master of imaginary space where time and space are indefinite. 
While the Moon Cell can't be hacked no matter how much time is spent, BB was able to take control of it by placing herself in "unlimited time".
02 The remnants of peaceful days After the constraints on her free will were removed by Kiara, she was unable to control her status abnormality and went into overload. As NPCs can only do routine work, they treated Sakura as "non-existent" and ignored her while she was crashing, and since the mooncell only performs a check at the end of the day in the preliminaries, Sakura was within an inch of termination. (MC name), witnessing this, called to her. "Are you okay?" he/she said. The observation of Sakura "being here" by a third party helped her recover her nearly vanished consciousness, and the subsequent nursing gave her a clear ego of "wanting to be here". Her self-destruction was averted. ◆ What happened afterwards need not be said. When the next day comes, everything will be erased by the mooncell. Sakura, wanting to continue this one-day miracle, used her higher class AI privilege to repeat the day she came to know the MC for 69 days. However, as time passed, Sakura came to suffer from her self-contradiction as an AI and from the fact that she had dragged (MC name) into this. "The Masters participating in the holy grail war will all die, barring one exception. No, even then (MC name)....." Sakura continued to suffer from her desire to continue the loop for eternity and her feelings for (MC name). The decision she ultimately chose was to seal the 69 days' worth of memory, the love she had acquired, so she could return to being a normal AI. An AI cannot delete records. Because of that, she moved the memory to a backup machine to reset herself, believing that to be the best choice for (MC name). ◆ But even if the memories in the mind are lost, the memories recorded by the body continue to live. Though she had forgotten everything and was back to being a normal AI, she was shaken by emotional values of unknown origin every time she talked with the MC. Meanwhile, the backup to which she moved her memory couldn't tolerate herself, and she decided her course of action and made her move as BB, the rebel plotting to destroy the system of the mooncell for a certain objective. The timing at which she took control of the far side of the moon was between the end of the fifth round and the beginning of the sixth in real time. The four Masters who had not yet been defeated, (MC name), Leo, Rin, and Rani, and the irregular exception Jinako were kidnapped to the far side during the war while they were still alive. Matou Shinji, Gato, and Julius were moved to imaginary space 0.00001 seconds before the firewall would have destroyed them after they were defeated, prolonging their flash of life. Regardless, CCC is a dream by BB. If the dreamer is no more, the world ought to return to a reality where nothing happened. ◆ By the way, the everyday scenery in the prologue that (MC name) saw in his nightmare was an artificial paradise modeled on the discarded 69 days. After pulling (MC name) to the near side, BB placed his digital body in the old school building, putting him into a deep sleep and sealing him in a never-ending dream. Normally, he would have lived peacefully in the looping everyday life, but the strength of his bond with his Servant, or his own will rejecting the peaceful days, caused cracks in the artificial garden, and (MC name) came to notice something was not right.
In order to place (MC name) into deep slumber once more, BB reset the school building and tried to redo the dream, but (MC name) escaped the reset and leaped into a deeper part of imaginary space. It was a depth deeper than the bottom of the Id. "The death of the collective unconscious" beneath the unconscious. What saved (MC name), who had leaped into a region even BB cannot affect, was the voice of the Servant whose fate was shared with his. Thus, the contract was sealed, and (MC name) woke up in the old school building left on the far side.
Alter Ego of the thirst for love. G means gigantism, growing, greed. It was formerly sealed at the ends of imaginary number space. Alter Egos are complexes made from several goddesses; however, Protea is created from the essence of the great Mother Goddess that is common throughout all mythologies. Her own self can be treated as a Noble Phantasm, as she does not possess a Noble Phantasm. As with the Golden White Face, she is a hazard that can only be overcome in battle through the combined strength of several Servants.
Class Skills ■ Huge Scale - EX A cheat skill that evolved from Self-Modification. There is no limit to how large she can get. Once the upper limit is reached, the upper limit is further increased. This is infinitely repeated. Basically, infinite growth of infinite growth. Anywho, she's a planet-destroying, universe-level disaster, but giant transformation makes her lose complex intelligence and function. This skill cannot bear common sense, so one completely self-destructs. ■ Grow Up Grow - EX A cheat skill evolving from Experience Point Bonus. Experience Point Bonus allows one to gain a bonus percentage of experience points after a battle. King Protea's skill cheats and changes it to "Always gaining Experience Points."
Personal Skills ■ Self-Suggestion - EX Suggestion applied to oneself. A skill that raises resistance against mental interference. At A rank, an assumption on the level of "I do not age = truly stops aging" is possible. At EX level, whatever is going on in that mental world is outside of understanding.
(a) Technical Field The present invention relates to an operation control method of a fuel cell system. More particularly, the present invention relates to an operation control method of a fuel cell system, which increases the output of a fuel cell stack by detecting and overcoming the cause of a performance limit when the fuel cell stack reaches the performance limit while a fuel cell vehicle is being operated at high altitudes, thereby achieving improved stack performance and vehicle power performance. (b) Background Art A fuel cell is an energy conversion device that converts chemical energy stored in fuel into electrical energy via an electrochemical reaction, rather than changing the chemical energy into heat via combustion. Fuel cells may be used to supply industrial, domestic, and vehicular electric power, and also to supply electric power for small electric/electronic products and portable appliances. The vehicular fuel cell currently being studied is a Polymer Electrolyte Membrane Fuel Cell (PEMFC) having a high electric power density. In the polymer electrolyte membrane fuel cell, a Membrane Electrode Assembly (MEA), which is the main constituent element thereof, is disposed at an innermost position. The membrane electrode assembly includes a solid polymer electrolyte membrane which is capable of moving hydrogen ions, and a cathode (i.e. air pole) and an anode (i.e. hydrogen pole) which are electrode layers on both surfaces of the electrolyte membrane and have a catalyst applied thereto to enable hydrogen and oxygen reactions. In addition, a Gas Diffusion Layer (GDL) is stacked on the exterior portion of the membrane electrode assembly, i.e. the exterior portion at which the cathode and the anode are located. In turn, bipolar plates are disposed at the exterior of the gas diffusion layer and the bipolar plates form a flow field, through which reaction gas (e.g., hydrogen as fuel gas and oxygen or air as oxidant gas) is supplied, and through which cooling water passes. For example, a gasket used to seal fluid is stacked to be interposed between the bipolar plates. After stacking a plurality of cells, end plates are coupled at an outermost position to support the cells therebetween. Since each unit cell generates a low voltage, scores to hundreds of cells are stacked in series to increase the voltage. Accordingly, the fuel cell manufactured in the form of a stack is used as a power generating device. A fuel cell system to be applied to a fuel cell vehicle is composed of a fuel cell stack and devices that are configured to supply reaction gas. FIG. 1 is a view illustrating the configuration of a fuel cell system according to the related art. As illustrated in FIG. 1, the fuel cell system includes a fuel cell stack 10 configured to generate electricity from the electrochemical reaction of reaction gas, a hydrogen supply device 20 configured to supply hydrogen as fuel to the fuel cell stack 10, an air supply device 30 configured to supply air including oxygen to the fuel cell stack 10, a heat and water management system 40 configured to adjust the operating temperature of the fuel cell stack 10 and perform a water management function, and a fuel cell system controller (not illustrated) configured to operate the fuel cell system. 
In the conventional fuel cell system, the hydrogen supply device 20 includes, for example, a hydrogen reservoir (e.g., a hydrogen tank) (not illustrated), a regulator (not illustrated), a hydrogen pressure adjusting valve 21, and a hydrogen recirculation device 22. The air supply device 30 includes, for example, an air blower (for low-pressure operation) or air compressor (for high-pressure operation) 32, a humidifier 33, and an air pressure adjusting valve 34. The heat and water management system includes, for example, an electric water pump (e.g., cooling water pump), a water tank, and a radiator, although not illustrated, as well as a water trap 41. In the hydrogen supply device 20, high-pressure hydrogen supplied from the hydrogen tank is decompressed to a particular pressure in the regulator, prior to being supplied to the fuel cell stack 10. Accordingly, the decompressed hydrogen is supplied to the fuel cell stack 10 in a controlled supply amount via pressure control based on the operating conditions of the fuel cell stack 10. In other words, hydrogen, having passed through the regulator from the hydrogen tank, is supplied to the fuel cell stack 10 after being adjusted in pressure by the hydrogen pressure adjusting valve 21 at the inlet side of the stack hydrogen pole. The hydrogen pressure adjusting valve 21 is adjusted to change the pressure of the hydrogen, decompressed by the regulator, to be suitable for the stack operating conditions. The controller is configured to operate the hydrogen pressure adjusting valve 21 upon receiving feedback values from two hydrogen pressure sensors 25 and 26, which are installed respectively at the inlet and outlet sides of the stack hydrogen pole. In addition, the hydrogen remaining after reaction inside the fuel cell stack 10 is discharged through the outlet of the stack hydrogen pole (anode), or is recirculated to the inlet of the stack hydrogen pole by the hydrogen recirculation device 22. The hydrogen recirculation device 22 is a device that increases the reliability of the hydrogen supply and improves the lifespan of the fuel cell. Although there are various recirculation methods, known exemplary methods include using an ejector 23, using a blower, and using both an ejector and a blower. The hydrogen recirculation device 22 contributes to the reuse of hydrogen by recirculating unreacted hydrogen, remaining after use in the hydrogen pole (anode) of the fuel cell stack 10, to the hydrogen pole through a recirculation pipe 24. In addition, as recirculation continues, the amount of impurities such as, for example, nitrogen, water, and vapor, which cross to the hydrogen pole through the electrolyte membrane inside the fuel cell stack, increases in the hydrogen pole; these impurities reduce the amount of hydrogen in the hydrogen pole, resulting in deterioration of reaction efficiency. Therefore, it is necessary to purge the impurities by opening a purge valve 27 at a predetermined period. In other words, by installing the purge valve 27 for purging hydrogen to a pipe at the outlet side of the hydrogen pole of the fuel cell stack 10 to periodically discharge the hydrogen from the hydrogen pole, impurities such as, for example, nitrogen and water are also discharged and removed, and the rate of use of hydrogen increases. Discharging the impurities from the fuel cell stack as described above advantageously increases the concentration of hydrogen, increases the rate of hydrogen use, and improves the diffusion and reactivity of gas.
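As a rough illustration only, the feedback-based anode pressure control and the periodic purge described above can be sketched as a minimal software loop. The following Python sketch is not taken from the patent; the simulated pressure response, the gain, the purge period, and the function names are hypothetical stand-ins for the hydrogen pressure sensors 25 and 26, the hydrogen pressure adjusting valve 21, and the purge valve 27.

# Hypothetical sketch of feedback pressure control with a periodic purge.
# All names, values, and the plant model are illustrative assumptions.
inlet_pressure_kpa = 120.0   # simulated hydrogen-pole inlet pressure (sensor 25 stand-in)
valve_opening = 0.5          # hydrogen pressure adjusting valve position, 0.0..1.0

def read_inlet_pressure():
    """Simulate the inlet pressure relaxing toward a level set by the valve opening."""
    global inlet_pressure_kpa
    inlet_pressure_kpa += 0.2 * (250.0 * valve_opening - inlet_pressure_kpa)
    return inlet_pressure_kpa

def set_h2_valve(opening):
    """Command the (simulated) hydrogen pressure adjusting valve."""
    global valve_opening
    valve_opening = max(0.0, min(1.0, opening))

def set_purge_valve(is_open):
    """Command the (simulated) purge valve; printing stands in for actuation."""
    if is_open:
        print("purge valve opened: discharging nitrogen/water from the hydrogen pole")

def run_anode_control(target_kpa=150.0, kp=0.002, dt_s=0.1, purge_period_s=5.0, steps=100):
    """Proportional feedback toward the target inlet pressure, purging at a fixed period."""
    opening = valve_opening
    next_purge = purge_period_s
    for step in range(steps):
        t = step * dt_s
        error = target_kpa - read_inlet_pressure()      # feedback from the inlet sensor
        opening = max(0.0, min(1.0, opening + kp * error))
        set_h2_valve(opening)
        if t >= next_purge:                             # purge at a predetermined period
            set_purge_valve(True)
            set_purge_valve(False)
            next_purge += purge_period_s
    print(f"final inlet pressure: {inlet_pressure_kpa:.1f} kPa, valve opening: {opening:.2f}")

if __name__ == "__main__":
    run_anode_control()

In a real controller, the single proportional gain would of course be replaced by the mapped target pressures and calibrated control laws the system relies on; the sketch only shows the feedback structure of sensing, adjusting the valve toward a target, and purging periodically.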
In addition, methods of operating the fuel cell system may be generally divided into a low-pressure operation method and a high-pressure operation method. In the respective operation methods, the operating pressure of the fuel cell stack is one of the factors having a predominant effect on the overall performance. In a low-pressure operating fuel cell system, an air blower is generally used to supply low-pressure air to the air pole (cathode) of the stack. In a high-pressure operating fuel cell system, the air compressor 32 is used to supply higher-pressure air to the air pole of the stack. In addition, in the high-pressure operating fuel cell system, the air having passed through a filter 31 is supplied to the air pole of the fuel cell stack 10 using the air compressor 32, and the outlet pressure of the air pole is controlled using a pressure adjustor at the rear end of the stack, i.e. the air pressure adjusting valve 34 mounted to a pipe at the outlet side of the air pole of the stack. In general, to adjust the operating pressure of the fuel cell system, target values of the inlet and outlet pressures of the hydrogen pole and the air pole are determined from an operating pressure control map based on the operating conditions of the fuel cell, and measured values of the hydrogen pressure sensors 25 and 26 and the air pressure sensors 35 and 36 are fed back so that the pressures are adjusted to the target values for the inlet and outlet of the hydrogen pole and the air pole. Meanwhile, the causes of deterioration in the performance of a fuel cell system equipped in a fuel cell vehicle at high altitudes may be divided into (1) deterioration in the overall performance of the fuel cell stack due to the reduced operating pressure of the air pole and (2) a reduction in the maximum output of the fuel cell stack. Among these causes, the deterioration of overall stack performance due to the reduced operating pressure of the air pole may be improved when the pressure of air supplied to the fuel cell stack is increased by supplying compressed air using a high-pressure operating system, i.e. an air compressor. However, the reduction of maximum output may not be improved by simply applying the high-pressure operating system. The cause of the stack output limit due to the reduction of maximum output may be divided into (a) an insufficient flow rate of air and (b) the introduction of a minimum acceptable voltage due to the deterioration of stack performance. In practice, the cause tends to be the insufficient flow rate of air when the stack is good (e.g., no deterioration), and the introduction of a minimum acceptable voltage due to the deterioration of stack performance when the deterioration of the stack has proceeded to some extent. Therefore, there is a demand for an operation control method which is capable of increasing the maximum output of a fuel cell stack by positively determining the state of the stack while the fuel cell system is being operated at high altitudes.
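The cause-determination logic outlined in the last two paragraphs can be summarized as a simple decision routine. The Python function below is only an illustrative reading of that logic, not the claimed control method; the input names, the boolean degradation flag, and the 0.6 V threshold are assumptions introduced for the example.

def diagnose_stack_output_limit(stack_degraded, min_cell_voltage_v, air_flow_deficit,
                                min_acceptable_voltage_v=0.6):
    """Classify the cause of a maximum-output limit during high-altitude operation.

    A sketch of the two causes described above; thresholds and inputs are hypothetical.
    """
    if not stack_degraded and air_flow_deficit:
        # Healthy stack: the limit is most likely the insufficient flow rate of air,
        # so the remedy would be supplying more compressed air (high-pressure operation).
        return "insufficient air flow rate"
    if stack_degraded and min_cell_voltage_v <= min_acceptable_voltage_v:
        # Deteriorated stack reaching the minimum acceptable voltage.
        return "minimum acceptable voltage due to stack deterioration"
    return "no output limit detected"

# Example: a non-degraded stack starved of air at altitude.
print(diagnose_stack_output_limit(stack_degraded=False,
                                  min_cell_voltage_v=0.68,
                                  air_flow_deficit=True))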
New York City's child custody laws are outlined in Section 240 of the state's Domestic Relations Law. Child custody issues are separated into two custody areas: legal and physical child custody. According to the child custody laws, no parent has an advantage because of their gender. For instance, a woman does not have an advantage in a child custody case simply because she is the child's mother.
What is Legal and Physical Child Custody in New York City?
When parents are no longer together, the decision-making rights they would normally share as a couple must be granted to them individually. Legal and physical child custody are the rights that must be granted to each parent by a judge. Legal child custody is the parent's right to make decisions about their child. These decisions range from where the child attends school to whether they are in an afterschool program or will have braces when they reach a certain age. A judge can grant one of two types of legal child custody arrangements: sole and joint. Joint legal child custody is granted to both parents. Together, they have the right to make these important decisions about how their child will be raised. If they cannot resolve an issue, they can seek help from the court by filing a motion. Sole legal child custody is granted to only one parent. For instance, the father may receive sole legal child custody. This means only that parent has the right to make these important decisions. Physical child custody involves where a child will live while growing up in New York City. The other term for physical child custody is residential custody. Like legal child custody, it is separated into sole and joint custody. Sole physical custody gives one parent the right to live with their child all the time. The other parent receives visitation rights. Visitation rights allow the parent to visit with the child according to a predetermined schedule, such as on weekends. Joint physical child custody is granted to both parents. The parents receive equal time with their child. The child lives with each parent 50 percent of the time. The exact schedule is also predetermined. Physical and legal child custody are granted separately. This means that a parent will not automatically receive legal custody if they receive sole physical custody. The child custody arrangements may vary according to the facts of the case. For instance, parents can receive joint physical and legal custody. In another case, a parent can receive joint legal custody, but sole physical custody.
A Judge Grants Physical and Legal Child Custody Based on the Best Interest of the Child
Which child custody arrangement is in the best interest of the child? That is the question a judge must answer. To answer the question, the judge must consider several factors. These factors include, but are not limited to:
• Each parent's ability to care for their child
• Any history of domestic violence
• Each parent's work schedule
• Each parent's physical wellbeing and mental health
• Each parent's ability to work with the other parent on parenting decisions
• The child's preference (if they are of an age to understand what is happening)
• Whether the child has any other siblings and where they reside
Need Help with Your Child Custody Issue in New York City? We Can Assist You.
Your child's needs are the focus of our case, but we will not forget about your needs either. You need to spend time with your child and make important decisions about their upbringing.
We will represent you while focusing on what’s in your child’s best interest. Contact us. When parents are unmarried and unable to agree on child custody issues, the courts can step in. The New York courts make decisions about child custody, parenting time and child support. If you’re facing a child custody case, our NYC child custody lawyers can help. There are two types of custody that the court decides when they rule on a child custody case. Legal custody is which parent makes important decisions regarding the child and their upbringing. A parent with legal custody decides where the child goes to school, what religion they practice and other major decisions. If the child needs important medical care, the parent with legal custody decides what to do. The court might award legal custody to one parent, or the parents may share legal custody. The other type of custody is physical custody. The parent with primary physical custody provides the most day-to-day care for the child. For most children, both parents spend some time with the child on a schedule, but physical custody refers to the parent who spends the most time with the child. Physical custody forms the basis for a child support determination. Even if the court awards primary physical custody to one parent, the court awards parenting time to both parents in nearly all cases. The court has flexibility when they decide what parenting time schedule to order in the case. The court may also place conditions on parenting time. They might determine transportation for parenting time, or they might require a parent to avoid alcohol or drug use during parenting time. The court may refuse parenting time where a parent’s ability to provide safe care is in serious question. When the courts decide custody, they look at the evidence in the case. They conduct a trial where they receive evidence and make conclusions of law. The court reviews the evidence in light of what’s best for the child. They look at what parent spends more time with the child, the child’s home environment and whether either parent has a substance abuse or mental health concern that might impede their ability to parent. The court may consider any evidence that’s relevant to the well-being of the children. To win your family court case, it’s important to present the evidence to the judge that favors your case. Whether it’s witness testimony, school or medical records, criminal records or other documentation, you have the burden of gathering the evidence and making arguments to the judge. To win your case, you must carefully determine what evidence you need to win. You must prepare the evidence for your hearing and present it in court. Finally, you must clearly tell the court what outcome you’re asking for. You must explain to them why it’s in the child’s best interests to make the order that you’re looking for. Our NYC child custody lawyers have experience helping our clients prepare and present evidence in child custody cases. We’re familiar with the rules of evidence, and we know what it takes to admit evidence in a court hearing. It’s often far more complicated than simply bringing copies of papers with you to court. We know how to ask the right questions and form the arguments to present your case in a way that’s clear and persuasive. In a child custody case, the court looks at the entire situation as a whole. There’s rarely one factor or characteristic that by itself decides a custody case. There’s no checklist for the judge to score points and total up who wins. 
Instead, you must present the court with the entire situation in order to win your case. You may need the help of our NYC child custody lawyers to establish your initial child custody order. You may also need help modifying a child custody order. Until the child reaches the age of majority, the court can make changes to the child’s custody order. Whether you’re looking to establish custody for the first time or modify an existing custody or parenting time order, our lawyers can help. If you have questions about child custody in New York, contact our NYC child custody lawyers. When it comes to child custody cases in New York City, the courts are tasked with making complex decisions about emotionally charged situations. Complicating matters is the fact that there are multiple forms of custody with each type bringing its own set of consequences to the family in question. By learning more about those kinds of custody, you can better prepare yourself for a custody battle that may or may not grant you the type of situation you’re seeking. Legal Custody Versus Physical Custody There are two primary types of custody under New York law: physical and legal. In legal custody, the responsibility for making decisions about the child’s care is granted to one or both parents. Those with legal custody are the only persons authorized to make decisions about the child’s education, medical care, and religious upbringing. In most cases, the court will give both parents legal custody, except where one parent has exhibited risky behavior that may seem irresponsible or dangerous. When both parents share legal custody, they must work together to arrive at a final determination for each issue that arises. This means the parents must be able to work together for the sake of the child. If the parents can’t set their differences aside to make these decisions, they may end up back in court to have legal custody reassessed. While both parents may still retain legal custody to a degree, the judge may grant veto power to one parent over the other. Physical custody determines where the child resides. One parent may have full custody, or the parents may have to share custody. While the needs and general welfare of the child is the court’s primary concern, there’s also an effort to keep the family together as much as possible. For that reason, establishing shared, or joint, custody is the preferred solution. In attempting to set up a situation of shared custody, the parents must again prove that they can communicate civilly for the sake of the child. They must maintain agreed upon custody rotations without unnecessary delays, so the child’s life will be uprooted as rarely as possible. If the parents find the situation is not working, they may have to return to court with their family court lawyers to have custody issues reassessed. Joint and Sole Custody in NYC Family Law In cases where one parent proves unsuitable as a parent, the other parent or another suitable guardian will be granted sole custody. There are a number of reasons that a parent may be denied legal and/or physical custody of their child. The individual may have exhibited risky behavior, such as drug abuse or multiple DUI arrests, or the parent may have a record of violent outbursts. If he or she has been in frequent bar fights or has been charged with domestic abuse, these factors may prohibit the court from granting custody. 
In cases where a parent has been denied custody, an experienced family law attorney may be able to convince a judge to grant parenting time, also known as visitation rights. This is a weekly visitation that allows the parent and child to spend time together. Parenting time can either be supervised or unsupervised, but, in either case, the parent is required to abide by court-ordered stipulations. For instance, the parent will likely have to have the child returned to the custodial parent at a predetermined time. Joint custody, on the other hand, gives each parent equal time parenting the child, which may involve raising the child on a day to day basis. Typically, the custody of the child will be divided up by days of the week with parents alternating weekends and vacation time. While each parent has custody of the child, he or she will be responsible for making sure the child goes to school, is treated for medical conditions, and has his or her basic needs met. Due to the requirements of parenthood, the court will look at each parent’s financial situation, before agreeing to grant joint custody. The parents must each be able to provide a quality standard of living for the child and be mentally capable of making decisions related to the child’s welfare. Additionally, the parents must each be reasonably available to the child, so parents’ work schedules will also play a factor in the court’s decision. Custody arrangements are never simple and require the courts in New York City to look at complicated issues. For that reason, parents should seek out their own legal advocates, experienced professionals who are educated in family law. This will help to make sure everyone’s rights are protected and the child receives the best possible care.
https://www.divorcelawyersnyc.com/nyc-child-custody-lawyers/
Geoengineering is one of the most controversial proposed responses to climate change currently on deck. Banking on the idea that one big fix can solve the entire climate dilemma, geoengineering proposes solutions that sound like they've emerged from a sci-fi novel, like launching mirrors into space, or that have the potential to be dangerous, like dumping iron into the ocean. There's also little current scientific backing for these solutions. In fact, earlier this week, in an opinion piece in Nature, environmental scientists raised concerns over geoengineering plans to level mountains and fill valleys to create new building land across China. Early implementations of these projects have already given cause for concern. As reported on io9, "Soil has run into local rivers, destroying wildlife there; and winds that once tore through the compact mountains with little harm are now stirring up massive dust storms in the loose sands of newly-filled valleys." However, there are some plausible geoengineering solutions without science-fiction elements -- planting more forests and installing cool roofs and reflective surfaces, for example. These ideas suffer because they demand patience and happen at city scale, which involves changing the behaviours of millions of people, essentially trying to encourage them to become more engaged with climate awareness. But do slow-acting plans such as these have a better chance of becoming reality because of their simplicity? Do you believe in a particular geoengineering solution, or are you skeptical? Are we on the right path with these proposals, or do we miss the boat entirely? Leave your thoughts below, or offer other solutions.
https://www.salon.com/2014/06/05/community_discussion_where_will_geoengineering_take_us/
189 P.3d 663 (2008) 2008-NMSC-047 STATE ex rel. The REGENTS OF EASTERN NEW MEXICO UNIVERSITY, Petitioner, v. Hon. Theresa M. BACA, Second Judicial District Court Judge, Respondent, and DKD Electric L.L.C., Real Party in Interest. No. 30,950. Supreme Court of New Mexico. July 30, 2008. *664 Frank C. Salazar, Christopher A. Holland, Sutin, Thayer & Browne, P.C., Albuquerque, NM, for Petitioner. Gary K. King, Attorney General, Scott Fuqua, Assistant Attorney General, David K. Thomson, Assistant Attorney General, Santa Fe, NM, for Respondent. Margaret C. Ludewig, Stephanie Landry, Landry & Ludewig, L.L.P., Sean R. Calvert, Calvert Menicucci, P.C., Albuquerque, NM, for Real Party in Interest. OPINION PER CURIAM. {1} This case requires us to address the restrictions on using a declaratory judgment action as an alternative to statutory procedures for judicial review of an agency action. The case began when DKD Electric, LLC (DKD), submitted an unsuccessful bid on a contract with Eastern New Mexico University (ENMU), controlled by the Procurement Code, NMSA 1978, §§ 13-1-28 to -199 (1984, as amended through 2006). DKD filed a protest, but instead of waiting for a decision on the protest and then seeking judicial review in the ninth judicial district, where ENMU is located, DKD filed a complaint in the second judicial district, its own place of residence, seeking a declaratory judgment and injunctive relief in the matter. We hold that because an established statutory procedure existed for seeking judicial review and DKD, the protestor, had initiated that appeals process, the protestor could not circumvent the restrictions on where an administrative appeal could be filed by filing a declaratory judgment action. Accordingly, we order the Honorable Theresa Baca in the second judicial district to transfer the case to the Ninth Judicial District Court in Roosevelt County. BACKGROUND {2} In September 2007, ENMU, whose principal location is in Roosevelt County, solicited bids for a campus electrical distribution upgrade project. The contract for the project was awarded to Wilson & Wilson General Contractors, Inc. (Wilson). On December 22, 2007, DKD filed a protest pursuant to Section 13-1-172 of the Procurement Code, challenging the legality of the award to Wilson and requesting a hearing. A hearing officer in the protest was appointed on January 14, 2008, and the administrative hearing was set for January 23, 2008. Following that *665 hearing, the hearing officer denied DKD's protest in a decision letter, dated February 4, 2008, and informed DKD of its right to judicial review pursuant to Section 13-1-183 of the Procurement Code. DKD did not appeal the decision. {3} Before the hearing officer had been appointed, however, DKD filed a complaint in the second judicial district on January 10, 2008, asking the court to declare the contract between ENMU and Wilson unlawful and to grant injunctive relief halting the project. The district court held a hearing on the complaint on January 18, 2008, and denied the application for a temporary restraining order but set a hearing on the declaratory judgment issue for February 13, 2008. ENMU moved to dismiss DKD's complaint for lack of jurisdiction and failure to exhaust administrative remedies. On February 11, 2008, DKD amended its complaint in the declaratory judgment action, to include additional claims for mandamus, inverse condemnation, civil rights violations, breach of contract, and unjust enrichment. 
The district court denied the motion to dismiss, and set a hearing to determine if a preliminary injunction or writ of mandamus should issue against ENMU. Before that hearing could occur, ENMU filed with this Court a motion for a stay of the proceedings in the second judicial district, which we granted, and a petition for an extraordinary writ to prevent the second judicial district from hearing the case. DISCUSSION {4} ENMU argues that DKD's complaint constitutes an administrative appeal, for which jurisdiction lies only in the ninth judicial district. DKD, on the other hand, makes two arguments for why the case was properly filed in the second judicial district: (1) that appellate jurisdiction over issues arising from the bid protest is not restricted to the Ninth Judicial District Court and (2) that the Second Judicial District Court had original jurisdiction over the matter. Appellate Jurisdiction {5} ENMU argues that the issues raised in DKD's initial complaint filed in the second judicial district were identical to those raised in the bid protest, and consequently, this case constitutes an administrative appeal. Accordingly, ENMU argues both Article VI, Section 13 of the New Mexico Constitution and Section 13-1-183 of the Procurement Code vest exclusive jurisdiction over an administrative appeal in the ninth judicial district. ENMU relies on the provision in Article VI, Section 13 stating that the district court shall have "appellate jurisdiction of all cases originating in inferior courts and tribunals in their respective districts." ENMU contends that because the appeal had its origins in an administrative tribunal in the ninth judicial district, the Ninth Judicial District Court, and not the Second Judicial District Court, had jurisdiction over the appeal. {6} Consistent with this constitutional provision vesting jurisdiction in the ninth judicial district, ENMU continues, the Procurement Code also requires the appeal to be heard there. The Procurement Code sets forth a clear procedure for protesting and appealing the solicitation or award of a public works project. Section 13-1-172 sets out how that procedure is initiated. Any bidder or offeror who is aggrieved in connection with a solicitation or award of a contract may protest to the state purchasing agent or a central purchasing office. The protest shall be submitted in writing within fifteen calendar days after knowledge of the facts or occurrences giving rise to the protest. Section 13-1-175 then requires "[t]he state purchasing agent, a central purchasing office or a designee of either" to issue a determination on the protest, which gives the reasons for the determination and informs the protestant of the right to judicial review, pursuant to Section 13-1-183. Finally, Section 13-1-183 states that "[a]ll actions authorized by the Procurement Code ... for judicial review of a determination shall be filed pursuant to the provisions of [NMSA 1978, Section 39-3-1.1 (1999)]." Under Section 39-3-1.1(C), "[t]he appeal may be taken to the district court for the county in which the agency maintains its principal office or the district court of any county in which a hearing on the matter was conducted." ENMU argues that, under this statutory scheme, any appeal taken *666 from the decision of the hearing officer should have been filed in the Ninth Judicial District Court because ENMU's principal office and the administrative hearing were in Portales, in the ninth judicial district. 
{7} In response, DKD sidesteps the specific limitations on the district courts' appellate jurisdiction set forth in Article VI, Section 13, and relies on the provision relating to the district courts' original jurisdiction, which is extremely broad. See N.M. Const. art. VI, § 13 ("The district court shall have original jurisdiction in all matters and causes not excepted in this constitution...."). Pointing out that Article VI, Section 13 confers jurisdiction over "special cases and proceedings as may be conferred by law" and grants power "to issue writs of habeas corpus, mandamus, injunction, quo warranto, certiorari, prohibition and all other writs, remedial or otherwise," DKD also contends that, consistent with Article VI, Section 13, the legislature has expanded the district courts' jurisdiction beyond reviewing "tribunals in their respective districts" by enacting NMSA 1978, Section 38-3-1.1 (1988), which provides that "[a]ll district courts have jurisdiction to review the action of any executive branch, agency or department in those cases in which a statute provides for judicial review." DKD suggests that this statutory provision does not conflict with the specific restrictions in Section 39-3-1.1(C) on where administrative appeals are to be filed, which it argues are permissive. {8} DKD then asserts that the jurisdiction described in Article VI, Section 13 is only limited by the venue statute, NMSA 1978, § 38-3-1(G) (1988), which states that "suits against the officers or employees of a state educational institution ... shall be brought in the district court of the county in which the principal office of the state educational institution is located or the district court of the county where the plaintiff resides." Accordingly, DKD states, it is permitted to file a lawsuit against ENMU in Bernalillo County, where its principal place of business is located. [Id.] {9} We are not persuaded that the appellate jurisdiction described in Article VI, Section 13 is as expansive as DKD suggests. State constitutions, unlike the federal constitution, do not grant power to the three branches of government, but provide limitations on power. See State ex rel. Clark v. Johnson, 120 N.M. 562, 570, 904 P.2d 11, 19 (1995) ("`[D]eeply rooted in American Jurisprudence is the doctrine that state constitutions are not grants of power to the legislative, to the executive and to the judiciary, but are limitations on the powers of each. No branch of the state may add to, nor detract from its clear mandate.'") (quoting State ex rel. Hovey Concrete Prods. Co. v. Mechem, 63 N.M. 250, 252, 316 P.2d 1069, 1070 (1957), overruled on other grounds by Wylie Corp. v. Mowrer, 104 N.M. 751, 726 P.2d 1381 (1986)). Thus, contrary to the argument made by DKD that the grant of jurisdiction in the New Mexico Constitution is limited only by the venue statute, the jurisdiction and venue authorized by statute are confined by the limitations of our state constitution, which restricts the district courts' appellate jurisdiction to "cases originating in inferior courts and tribunals in their respective districts." N.M. Const. art. VI, § 13. {10} Consistent with this constitutional mandate, Section 39-3-1.1(C), which creates a comprehensive scheme for appealing final decisions of certain administrative agencies, provides that such "[an] appeal may be taken to the district court for the county in which the agency maintains its principal office or the district court of any county in which a hearing on the matter was conducted."
See Hyden v. N.M. Human Servs. Dep't., 2000-NMCA-002, ¶¶ 2-3, 128 N.M. 423, 993 P.2d 740 (observing that the legislature adopted this comprehensive administrative appeals legislation to simplify and standardize the method for obtaining judicial review of final decisions of certain administrative agencies). We are not persuaded by DKD's argument that the use of the word "may" in Section 39-3-1.1(C) permits an administrative appeal to be brought in any district court. Instead, we read it to mean that the appeal itself is permissive and not mandatory. Reading the statute as DKD asks us to would render the provision unconstitutional. See Lovelace Med. Ctr. v. Mendez, 111 N.M. 336, 340, 805 P.2d 603, 607 *667 (1991) ("It is, of course, a well-established principle of statutory construction that statutes should be construed, if possible, to avoid constitutional questions."). Moreover, reading Section 39-3-1.1(C) as permissive would render it superfluous, and we refrain from reading statutes in a way that renders provisions superfluous. See State v. Rivera, 2004-NMSC-001, ¶ 18, 134 N.M. 768, 82 P.3d 939. Similarly, we are not persuaded that the legislature could have intended the general jurisdictional and venue statutes, Section 38-3-1.1 and Section 38-3-1, to unconstitutionally expand the appellate jurisdiction of the district court provided for in Article VI, Section 13. See Seidenberg v. N.M. Bd. of Med. Exam'rs, 80 N.M. 135, 138-39, 452 P.2d 469, 472-73 (1969) (stating that we presume the legislature acts within the bounds of the constitution in enacting statutes). {11} DKD further seeks to persuade us, however, that the provision in Article VI, Section 13, limiting the district courts' appellate jurisdiction to "all cases originating in inferior courts and tribunals in their respective districts," is inapplicable because the administrative hearing in Portales did not constitute an "inferior tribunal." DKD argues, without citation to legal authority, that a tribunal is "an established adjudicatory tribunal akin to a court" and that the hearing provided in Section 13-1-172 of the Procurement Code does not satisfy that standard. Specifically, DKD argues that the hearing officer, defined in Section 13-1-174 as "[t]he state purchasing agent, a central purchasing office or a designee of either," is neither qualified nor impartial and that the procedure does not provide for sworn testimony or discovery. {12} The term "administrative tribunal" is commonly used to describe a non-judicial adjudication. See Bd. of Educ. of Carlsbad Mun. Sch. v. Harrell, 118 N.M. 470, 479-80, 882 P.2d 511, 520-21 (1994). As we stated in Harrell, due process does not require that a decision-maker be a judge or an attorney. Id. at 479, 882 P.2d at 520. It simply requires "an impartial decision-maker." Id. And, as ENMU points out, DKD did not challenge the objectivity of the hearing officer before the hearing and has not produced evidence to overcome the presumption of administrative regularity. See Wing Pawn Shop v. Taxation & Revenue Dep't, 111 N.M. 735, 743, 809 P.2d 649, 657 (Ct.App. 1991). Moreover, this Court has recognized the legislature's authority to assign adjudicative authority to administrative tribunals when judicial review of the decisions of those bodies satisfies due process. See Harrell, 118 N.M. at 483-84, 882 P.2d at 524-25. 
{13} As we have discussed, in enacting the Procurement Code, the legislature created an administrative process that allows an aggrieved bidder or offeror to "protest to the state purchasing agent or a central purchasing office," to receive a reasoned decision from the hearing officer, and to obtain judicial review of the administrative decision pursuant to Section 39-3-1.1, which in this case is in the Ninth Judicial District Court. See §§ 13-1-172, -174, and -183. In this case, the proceeding complied with that process and constituted an administrative tribunal, making the jurisdictional limits in Article VI, Section 13 applicable to this case. {14} For the foregoing reasons, we agree with ENMU that Article VI, Section 13 and Section 39-3-1.1 control the appellate jurisdiction of the district court in this case. Thus, an appeal from the hearing officer's decision should have been brought in the Ninth Judicial District Court, and the Second Judicial District had no appellate jurisdiction over this matter. Original Jurisdiction {15} Having determined that the Second Judicial District Court had no appellate jurisdiction in this matter, we consider DKD's alternate argument that the Second Judicial District Court had original jurisdiction over the case and that it was not required to exhaust its administrative remedies before filing its lawsuit in the district court.[1] *668 {16} The requirement that a party exhaust its administrative remedies states that "[w]here relief is available from an administrative agency, the plaintiff is ordinarily required to pursue that avenue of redress before proceeding to the courts; and until that recourse is exhausted, suit is premature and must be dismissed." Smith v. City of Santa Fe, 2007-NMSC-055, ¶ 26, 142 N.M. 786, 171 P.3d 300 (quoted authority omitted). ENMU argues that under the facts of this case, before resorting to the courts by filing a declaratory judgment action, DKD was required to exhaust its statutory administrative remedies and is not permitted to make an end-run around the appeals process set forth in the Procurement Code. See Associated Petroleum Transp., Ltd. v. Shepard, 53 N.M. 52, 55, 201 P.2d 772, 774-75 (1949). Associated Petroleum Transport involved a tax protest that was not filed in a timely manner. Id. at 54-55, 201 P.2d at 774. Subsequently, the protestors in that case attempted to file a declaratory judgment action, which was dismissed for failure to exhaust administrative remedies. Id. at 53, 55, 201 P.2d at 773, 775. {17} While DKD does not directly address that case, it argues either that its administrative remedies were exhausted when it obtained a final ruling from the hearing officer, or that it was not required to exhaust its administrative remedies. DKD relies on Takhar v. Town of Taos, 2004-NMCA-072, ¶ 1, 135 N.M. 741, 93 P.3d 762, in which the plaintiff was denied a special use permit by the zoning commission, but did not appeal and subsequently filed an action in district court for estoppel and inverse condemnation. Our Court of Appeals observed that the district court action raised different issues from the one before the hearing officer and noted that these questions could not have been ruled on by the administrative body. Id. ¶¶ 17-19. Thus, the Court held that the plaintiff was not precluded from filing an action in district court for estoppel and inverse condemnation. Id. ¶ 19. 
{18} We agree, as a general principle, that administrative remedies do not have to be exhausted when it would be futile to do so or when a question of law is at issue. See Smith, 2007-NMSC-055, ¶ 27, 142 N.M. 786, 171 P.3d 300. We also agree that the original jurisdiction of the district court is broad. Article VI, Section 13 provides that "[t]he district court shall have original jurisdiction in all matters and causes not excepted in this constitution, and such jurisdiction of special cases and proceedings as may be conferred by law. ..." And, as DKD points out, the venue statute, Section 38-3-1(G), provides that suits against the officers or employees of a state educational institution as defined in Article 12, Section 11 of the constitution of New Mexico, as such, shall be brought in the district court of the county in which the principal office of the state educational institution is located or the district court of the county where the plaintiff resides. Because DKD's principal place of business is in the second judicial district, DKD is generally correct in stating that jurisdiction and venue over a lawsuit brought by DKD against ENMU would lie in the Second Judicial District Court. Indeed, ENMU acknowledges that if, for example, this were a case brought under the New Mexico Tort Claims Act, NMSA 1978, §§ 41-4-1 to -27 (1976, as amended), the lawsuit could have been filed in the Second Judicial District Court. {19} The lawsuit originally filed in the Second Judicial District Court, however, was a declaratory judgment action seeking resolution *669 of a matter arising from an administrative matter. We are presented, therefore, with a similar question to one we recently addressed in Smith: the interaction between administrative proceedings and declaratory judgment actions. 2007-NMSC-055, ¶¶ 12-27, 142 N.M. 786, 171 P.3d 300. Smith involved two separate groups of plaintiffs who filed declaratory judgment actions challenging the legality of a city ordinance addressing well-drilling, the type of purely legal question specifically permitted under NMSA 1978, Section 44-6-4 (1975) of the Declaratory Judgment Act. Smith, 2007-NMSC-055, ¶¶ 2-5, 14, 142 N.M. 786, 171 P.3d 300. The first group of plaintiffs, however, had applied for a city permit to drill a well, and when the permit was denied, initiated the appeals process. Id. ¶¶ 3-4. Instead of seeking judicial review of the denial of the permit, and after the time had run to do so, those plaintiffs filed a separate declaratory judgment action challenging the legality of the city ordinance requiring a city permit. Id. ¶ 5. The second group of plaintiffs did not apply for a city permit, but joined with the first plaintiffs in filing the declaratory judgment action. Id. ¶ 4. {20} We acknowledged in Smith that "[t]he Declaratory Judgment Act is a special proceeding that grants the district courts the `power to declare rights, status and other legal relations whether or not further relief is or could be claimed.'" Id. ¶ 13 (quoting NMSA 1978, § 44-6-2 (1975)). And we recognized that challenging the legality of an ordinance appeared to be a claim that fell within the Act's authority. Id. ¶ 15. However, while we acknowledged the role of declaratory judgment actions, we stressed that "a declaratory judgment action challenging an administrative entity's authority to act ordinarily should be limited to purely legal issues that do not require fact-finding by the administrative entity." Id. ¶ 16 (citing Grand Lodge of Masons v. 
Taxation & Revenue Dep't, 106 N.M. 179, 180-83, 740 P.2d 1163, 1164-67 (Ct.App.1987)). And we stated that "we must remain mindful of some important limitations on the use of declaratory judgment actions to review the propriety of administrative actions." Id. ¶ 15, 740 P.2d 1163. We specifically cautioned against using a declaratory judgment action to challenge or review administrative actions if such an approach would foreclose any necessary fact-finding by the administrative entity, discourage reliance on any special expertise that may exist at the administrative level, disregard an exclusive statutory scheme for the review of administrative decisions, or circumvent procedural or substantive limitations that would otherwise limit review through means other than a declaratory judgment action. Id. Therefore, even though the issues raised in Smith were legal and fell within the scope of the Declaratory Judgment Act, we held that the first group of plaintiffs, who had initiated the appeals process, were required to comply with the procedures "that would otherwise govern judicial review of the administrative decision," including the specific time limits within which to file an appeal. Id. ¶ 23. Although we held that the second group of plaintiffs, who had not applied for a city permit or initiated any administrative process, were not bound by those time limits in filing a declaratory judgment action, we did not address the requirements of where such an action should be filed. See id. ¶ 25. {21} In the case before us, DKD asserts that the issues it raised were purely legal, requiring a decision on the validity of a contract. Section 44-6-4 of the Declaratory Judgment Act states that "[a]ny person interested under a ... written contract ... or whose rights, status or other legal relations are affected by a ... contract ..., may have determined any question of construction or validity arising under the ... contract ... and obtain a declaration of rights, status or other legal relations thereunder." Thus, the legal question of the validity of a contract appears to be a proper matter for a declaratory judgment action. We note, however, that DKD's complaint and its response to ENMU's petition filed in this Court contained factual allegations that had been made in the bid protest and which were addressed by the hearing officer. Moreover, it appears that DKD's concern is not with the construction or the validity of the contract itself, but with the construction of NMSA 1978, Section 60-13-12(B) (1989, repealed effective July 1, *670 2006) (addressing the requirement that contractors be licensed to perform the work they contract for), which would also appear to be a proper matter for a declaratory judgment action. {22} Even if DKD is correct in claiming it raises only legal issues and a declaratory judgment action was authorized, DKD initiated the administrative review process by filing a bid protest before pursuing a declaratory judgment action. And in this case, unlike in Smith, DKD had a specific statutory right to judicial review of the decision on the bid protest. Under those circumstances, DKD could not circumvent the established procedures for judicial review and was thus obligated either to pursue its right to judicial review or to file its declaratory judgment action in compliance with the procedures for administrative appeal set out in Section 39-3-1.1, which included filing in the Ninth Judicial District Court, not in the Second Judicial District Court. 
{23} In this case, unlike in Smith, DKD's declaratory judgment action was timely filed, albeit in the wrong judicial district. Therefore, we will treat this case as if it were an action filed consistent with the procedures set out in the Procurement Code and the administrative appeal statute and exercise our power of superintending control to order the Second Judicial District Court to transfer the case to the Ninth Judicial District Court. See Marsh v. State, 95 N.M. 224, 227, 620 P.2d 878, 881 (1980) (exercising superintending control to transfer a case from Valencia County to McKinley County). CONCLUSION {24} For the foregoing reasons, the Second Judicial District Court is ordered to transfer this case to the Ninth Judicial District Court in Roosevelt County. {25} IT IS SO ORDERED. Chief Justice EDWARD L. CHÁVEZ, Justices PATRICIO M. SERNA, PETRA JIMENEZ MAES, RICHARD C. BOSSON, and CHARLES W. DANIELS. NOTES [1] DKD also argues that the Second Judicial District Court had primary jurisdiction over the case. Primary jurisdiction and original jurisdiction are separate matters. The doctrine of primary jurisdiction, related to the requirement of exhaustion of administrative remedies, "arises when both a court and an administrative agency have concurrent jurisdiction." Eldridge v. Circle K Corp., 1997-NMCA-022, ¶ 21, 123 N.M. 145, 934 P.2d 1074. The doctrine is not technically a matter of jurisdiction. See MFS Sec. Corp. v. N.Y. Stock Exch., Inc., 277 F.3d 613, 621-22 (2d Cir.2002). "It is a prudential rule used by courts to allocate between courts and agencies the initial responsibility for resolving a dispute when their jurisdictions overlap." Eldridge, 1997-NMCA-022, ¶ 21, 123 N.M. 145, 934 P.2d 1074. It is "concerned with promoting proper relationships between the courts and administrative agencies charged with particular regulatory duties." United States v. W. Pac. R.R. Co., 352 U.S. 59, 63, 77 S.Ct. 161, 1 L.Ed.2d 126 (1956). We decline to address the issues of whether the district court abused its discretion and should have recognized the primary jurisdiction of the agency because this issue was not specifically argued below.
George Dimitropoulos has more than 20 years of experience in the semiconductor and consumer electronics industry, ranging from small startups and mid-sized companies to multinational corporations. Mr. Dimitropoulos currently holds the position of Director at Adveos, a fabless semiconductor company that specializes in the development of wireless connectivity solutions. Mr. Dimitropoulos' business background in multi-cultural and international environments and his broad knowledge of silicon ecosystems and market trends enabled him to co-found Adveos and drive it to substantial growth, which led to the company’s acquisition by the Chinese multinational Beken Corporation in 2020. Prior to co-founding Adveos, Mr. Dimitropoulos held various engineering and management positions at Nortel Networks (UK), Bookham Technology (UK), Theon Sensors (Greece) and Crystal Audio (Greece / China). Throughout his professional career, he has been involved in the product development of semiconductor components and electronic devices for the optical communications and consumer electronics industries, and he has significant project management experience in technology and process transfers to foundries as well as low-cost manufacturing locations in SE Asia. Mr. Dimitropoulos holds a BSc degree in Physics from the University of Patras, Greece, an MSc degree in Optoelectronics from the University of Essex, UK, and an MBA degree from Aston Business School, UK.
https://forum.hetia.org/speaker/george-dimitropoulos/
You get a double feature for the latest Annotated Volcano. Why two volcanoes? I wanted to explain the difference between two different shapes of volcanoes: the archetype stratovolcano and the massive shield volcano. Why would two volcanoes take such different forms? It boils down to the flavor and form of the stuff that comes out the top. Mexico's Popocatépetl is an example of what volcanologists call a stratovolcano*, with a well-formed conical shape and a crater/vent at the top of the volcano. You can see that shape in the profile shot of the volcano (taken from the Earth's surface; below) with steep slopes. The average slope heading up the sides of the volcano is ~25 percent, with a maximum of 63 percent in the steepest bits. From the "base" level of ~3500 meters (11,400 feet) to the summit you gain 1635 meters (~5,300 feet). All in all, hiking this volcano would definitely be a slog, especially as you approach the summit and the slopes get even steeper. You can also notice a nearly circular shape to the volcano (above), with paths of debris heading in all directions from the summit crater. So, why does Popocatépetl have such a shape? The volcano erupts mainly andesite and dacite, which are relatively high in silica. This means that when it erupts, you tend to get sticky domes of lava at the summit that can collapse to create pyroclastic flows and volcanic mudflows (lahars)---these flow channels are clearly visible on the satellite image. You also get explosive eruptions that rain ash and bombs onto the slopes of the edifice. All in all, you get lots of relatively loose debris that piles up on the slopes, with the occasional lava flow. In short, stratovolcanoes are built like poorly constructed layer cakes of volcanic debris (tephra). These piles of loose debris will form a mound with slopes that are roughly at the angle of repose---the angle to which you can stack a material before gravity's pull stops you from stacking any more. Think about trying to build a sand castle on the beach: as you pile up dry sand, you find that you can't get the slopes any steeper than ~30-35º before the sand runs down the sides of your attempted castle. Same thing happens on a volcano like Popocatépetl, with the additional fun of erosion due to weather and glaciers that further reduce the angle of the slopes. That's why if you do the math, the average slope on Popocatépetl is ~14-16º, although you get close to the angle of repose when you're at the summit closest to the newest deposits. Now, let's take a look at a volcano on the other side of the planet. Ethiopia's Erta Ale is a shield volcano whose profile is large and low. If you look at the Earth-based image of Erta Ale (below), you can see that it is almost hard to tell there is a volcanic edifice there (unlike that Popocatépetl shot). The slopes on Erta Ale average about 8.1 percent, so over three times shallower than our Mexican volcano. If you make the same calculation of the angle of the slopes, you get 4.3º as you rise only 282 meters (925 feet) over 3.75 kilometers (~2.3 miles). You can also tell that the volcano's base isn't a circle---it is much more like a, well, shield, with an elongate shape. This is due to the fissures formed by regional stresses that cause the volcano to be where it is (that is, the East African Rift).
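A quick way to check the slope arithmetic above: a percent grade is just vertical rise divided by horizontal run, and the slope angle in degrees is the arctangent of that same ratio. Here is a minimal sketch in Python; the horizontal run values are back-calculated from the article's stated elevation gains and grades, so treat them as assumptions rather than measured distances.

```python
import math

def percent_grade(rise_m: float, run_m: float) -> float:
    """Grade as a percentage: vertical rise over horizontal run."""
    return 100.0 * rise_m / run_m

def slope_degrees(rise_m: float, run_m: float) -> float:
    """Slope angle in degrees: arctangent of rise over run."""
    return math.degrees(math.atan2(rise_m, run_m))

# Assumed run lengths, back-calculated from the article's figures.
volcanoes = {
    "Popocatepetl (average)": (1635.0, 6540.0),  # ~25% grade up to the summit
    "Erta Ale (average)":     (282.0,  3750.0),  # 282 m gained over ~3.75 km
}

for name, (rise, run) in volcanoes.items():
    print(f"{name}: {percent_grade(rise, run):.1f}% grade, "
          f"{slope_degrees(rise, run):.1f} degrees")

# The steepest ~63% stretches on Popocatepetl work out to about 32 degrees,
# which is right around the angle of repose for loose tephra.
print(f"63% grade = {math.degrees(math.atan(0.63)):.1f} degrees")
```

Run as-is, this reproduces the roughly 14º average slope for Popocatépetl and the 4.3º figure for Erta Ale, and shows that the steepest stretches sit near the 30-35º angle of repose described above.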
https://www.wired.com/2016/07/annotated-volcano-volcanos-shape-can-tell-will-erupt/
Melissa Laird Smith discussed how the Icahn School of Medicine at Mount Sinai uses long-read sequencing for translational research. She gave several examples of targeted sequencing projects run on the Sequel System including CYP2D6, phased mutations of GLA in Fabry’s disease, structural variation breakpoint validation in glioblastoma, and full-length immune profiling of TCR sequences. ASHG PacBio Workshop: Amplicon SMRT Sequencing applications in human genetics In this ASHG workshop presentation, Stuart Scott of the Icahn School of Medicine at Mount Sinai, presented on using the PacBio system for amplicon sequencing in pharmacogenomics and clinical genomics workflows. Accurate, phased amplicon sequence for the CYP2D6 gene, for example, has allowed his team to reclassify up to 20% of samples, providing data that’s critical for drug metabolism and dosing. In clinical genomics, Scott presented several case studies illustrating the utility of highly accurate, long-read sequencing for assessing copy number variants and for confirming a suspected medical diagnosis in rare disease patients. He noted that the latest Sequel System… Characterization of Reference Materials for Genetic Testing of CYP2D6 Alleles: A GeT-RM Collaborative Project. Pharmacogenetic testing increasingly is available from clinical and research laboratories. However, only a limited number of quality control and other reference materials currently are available for the complex rearrangements and rare variants that occur in the CYP2D6 gene. To address this need, the Division of Laboratory Systems, CDC-based Genetic Testing Reference Material Coordination Program, in collaboration with members of the pharmacogenetic testing and research communities and the Coriell Cell Repositories (Camden, NJ), has characterized 179 DNA samples derived from Coriell cell lines. Testing included the recharacterization of 137 genomic DNAs that were genotyped in previous Genetic Testing Reference Material Coordination… Single-Molecule Sequencing: Towards Clinical Applications. In the past several years, single-molecule sequencing platforms, such as those by Pacific Biosciences and Oxford Nanopore Technologies, have become available to researchers and are currently being tested for clinical applications. They offer exceptionally long reads that permit direct sequencing through regions of the genome inaccessible or difficult to analyze by short-read platforms. This includes disease-causing long repetitive elements, extreme GC content regions, and complex gene loci. Similarly, these platforms enable structural variation characterization at previously unparalleled resolution and direct detection of epigenetic marks in native DNA. Here, we review how these technologies are opening up new clinical avenues that… A Look to the Future: Pharmacogenomics and Data Technologies of Today and Tomorrow The ability to measure chemical and physiologic states in tandem with good experimental design has enabled the discovery and characterization of a plethora of gene–drug interactions. Recent advances in methods to measure organic molecules and phenotypes, describe clinical states, and reason across federated data offer an increasingly precise set of technologies for pharmacogenomics discovery and clinical translation. Single molecule real-time (SMRT) sequencing comes of age: applications and utilities for medical diagnostics. Short read massive parallel sequencing has emerged as a standard diagnostic tool in the medical setting. 
However, short read technologies have inherent limitations such as GC bias, difficulties mapping to repetitive elements, trouble discriminating paralogous sequences, and difficulties in phasing alleles. Long read single molecule sequencers resolve these obstacles. Moreover, they offer higher consensus accuracies and can detect epigenetic modifications from native DNA. The first commercially available long read single molecule platform was the RS system based on PacBio’s single molecule real-time (SMRT) sequencing technology, which has since evolved into their RSII and Sequel systems. Here we capsulize how SMRT… Long-read Single-Molecule Real-Time (SMRT) full gene sequencing of cytochrome P450-2D6 (CYP2D6). The CYP2D6 enzyme metabolizes ~25% of common medications, yet homologous pseudogenes and copy-number variants (CNVs) make interrogating the polymorphic CYP2D6 gene with short-read sequencing challenging. Therefore, we developed a novel long-read, full gene CYP2D6 single-molecule real-time (SMRT) sequencing method using the Pacific Biosciences platform. Long-range PCR and CYP2D6 SMRT sequencing of 10 previously genotyped controls identified expected star (*) alleles, but also enabled suballele resolution, diplotype refinement, and discovery of novel alleles. Coupled with an optimized variant calling pipeline, CYP2D6 SMRT sequencing was highly reproducible as triplicate intra- and inter-run non-reference genotype results were completely concordant. Importantly, targeted SMRT sequencing… De novo assembly and phasing of a Korean human genome. Advances in genome assembly and phasing provide an opportunity to investigate the diploid architecture of the human genome and reveal the full range of structural variation across population groups. Here we report the de novo assembly and haplotype phasing of the Korean individual AK1 (ref. 1) using single-molecule real-time sequencing, next-generation mapping, microfluidics-based linked reads, and bacterial artificial chromosome (BAC) sequencing approaches. Single-molecule sequencing coupled with next-generation mapping generated a highly contiguous assembly, with a contig N50 size of 17.9 Mb and a scaffold N50 size of 44.8 Mb, resolving 8 chromosomal arms into single scaffolds. The de novo assembly, along with… Sequencing the CYP2D6 gene: from variant allele discovery to clinical pharmacogenetic testing. CYP2D6 is one of the most studied enzymes in the field of pharmacogenetics. The CYP2D6 gene is highly polymorphic with over 100 catalogued star (*) alleles, and clinical CYP2D6 testing is increasingly accessible and supported by practice guidelines. However, the degree of variation at the CYP2D6 locus and homology with its pseudogenes make interrogating CYP2D6 by short-read sequencing challenging. Moreover, accurate prediction of CYP2D6 metabolizer status necessitates analysis of duplicated alleles when an increased copy number is detected. These challenges have recently been overcome by long-read CYP2D6 sequencing; however, such platforms are not widely available.
This review highlights the genomic… Long-read sequencing offers path to more accurate drug metabolism profiles In the complex drug discovery process, one of the looming questions for any new compound is how it will be metabolised in a human body. While there are several methods for evaluating this, one of the most common involves CYP2D6, the enzyme encoded by the cytochrome P450-2D6 gene. This enzyme is involved in metabolising a quarter of all commonly used medications, making it an important target for ADME and pharmacogenomics studies. It is known to activate some drugs and to play a role in the deactivation or excretion of others. Copy number variation probes inform diverse applications A major contributor to inter-individual genomic variability is copy number variation (CNV). CNVs change the diploid status of the DNA, involve one or multiple genes, and may disrupt coding regions, affect regulatory elements, or change gene dosage. While some of these changes may have no phenotypic consequences, others underlie disease, explain evolutionary processes, or impact the response to medication. Institutional profile: translational pharmacogenomics at the Icahn School of Medicine at Mount Sinai. For almost 50 years, the Icahn School of Medicine at Mount Sinai has continually invested in genetics and genomics, facilitating a healthy ecosystem that provides widespread support for the ongoing programs in translational pharmacogenomics. These programs can be broadly cataloged into discovery, education, clinical implementation and testing, which are collaboratively accomplished by multiple departments, institutes, laboratories, companies and colleagues. Focus areas have included drug response association studies and allele discovery, multiethnic pharmacogenomics, personalized genotyping and survey-based education programs, pre-emptive clinical testing implementation and novel assay development. This overview summarizes the current state of translational pharmacogenomics at Mount Sinai, including a future… Precision medicine and rare genetic variants. Interindividual variability in drug metabolism and drug toxicity persists as a major problem for drug development and treatment. Increased or decreased capacity for drug elimination or drug action reduces drug efficacy and places substantial economic burdens on society (e.g., due to treatment of adverse drug reactions). To a great extent this variation is based on genetic differences, and indeed many drugs now carry pharmacogenomic labels regarding mandatory or informative genetic tests that have to/can be performed before prescription (http://www.fda.gov/drugs/scienceresearch/researchareas/pharmacogenetics/ucm083378.htm). These labels are based on the most common allelic variants in germline or somatic genes with importance for drug metabolism that encode phase I or… Implementation of pharmacogenomics in everyday clinical settings.
https://www.pacb.com/asset_tags/cyp2d6/
What if Napoleon Bonaparte had won the Battle of Waterloo? A picture of Napoleon Bonaparte. The war between France and Prussia in 1870 was all but decided at the Battle of Sedan on September 1. As Napoleon III was led through the French countryside toward the nearest port, he knew that this battle would spell the end of the Empire. As he sailed across to England for exile, a unified Germany was created off the back of French territory - and the landscape of Europe would be forever changed. Ironically, for a ‘What If?’ scenario, this version of history is not unlike our own. Whilst Napoleon Bonaparte did not win the Battle of Waterloo on June 18, 1815, his nephew did eventually become Emperor of France as a result of the 1848 revolutions that sprang up around Europe. His last act as Emperor was to lead French forces against Prussia in the War of 1870. He was captured at the Battle of Sedan and forced into exile in Britain, where he was forever haunted by the destruction of his Empire. His actions that year effectively allowed the creation of a unified Germany that was, in no small part, responsible for much of the tension between the two countries over the next seventy-five years. And yet, this event in history may well have occurred regardless of whether Bonaparte had won the Battle of Waterloo. If we suppose, for a moment, that Napoleon had managed to defeat the British and Prussian forces at the battle and maintain control of France thereafter, it is not beyond reason to suppose that, as Bonaparte’s nephew and heir, Napoleon III would have inherited the Empire anyway. Had that happened, the Battle of Sedan would almost certainly have occurred in the same way, leading to his downfall and the beginning of the tensions that would contribute to the outbreak of the First World War. But what scenario would allow such a divergence from historical fact and yet still arrive at the same point fifty-five years later? Rather than looking to Napoleon III, our attention must be drawn to Bonaparte, the man whose decisions would ultimately determine the future of France and the rest of Europe. First, we must address Bonaparte’s character. The Bonaparte of 1796, the year that he began his conquest of Europe, was a war leader to the greatest degree. Had he managed to defeat Wellington and Blucher at Waterloo, he would almost certainly have urged his officers to press after Wellington and Blucher’s scattered armies until every last one of them had been captured or killed. He would then have turned his attention to the armies of Russia and Austria who, whilst not involved in Waterloo, were slowly advancing across Europe to address this resurgence of power. This would have presented Bonaparte with a serious problem. In the first instance, Austria and Russia had armies of approximately 200,000 men working their way across Europe. In the second, Alexander I, the Tsar of Russia, was particularly keen to eliminate Bonaparte, as he believed that Europe would never remain at peace with him alive. Finally, French conscription, from which Bonaparte had been gathering troops during his previous campaigns, was no longer in place in France. This meant that he didn’t have access to the same reserves of manpower that he had previously. In this scenario, Bonaparte would probably not have enjoyed any significant success for more than a week or two.
With the arrival of the Austrians and Russians, Bonaparte’s armies would have stood little chance at all, and history would certainly have continued down the path that we are most familiar with. The Bonaparte of 1813 may have lasted even less time. In 1813, Bonaparte had refused any kind of settlement at all, even though he had been completely defeated at the Battle of Leipzig that year. In that battle, Bonaparte’s armies were effectively expelled from the rest of Europe and forced to retreat back into France. Had Bonaparte sued for peace at that time, he might well have retained his title and control over France. The result of his failure to do so was the invasion of France by the Coalition of Russia, Austria and Prussia and his own removal from the throne. Had he treated his victory at Waterloo with the same refusal to negotiate, Bonaparte would probably have attempted to retake parts of Central Europe immediately following the Battle of Waterloo. Once again, Bonaparte’s failure would have been signaled by the arrival of Russian and Austrian troops, which would have led to yet another disastrous retreat back into France, if not the destruction of his entire army. There is, however, one scenario by which Bonaparte may have been able to win at Waterloo and still maintain control of France. If Bonaparte had granted clemency to the retreating British forces of Wellington, history could have taken a completely different turn. The British forces had granted something similar seven years previously at Sintra, where French forces had been allowed to evacuate from Portugal after several disastrous battles. Such an act of honor, whilst completely removed from Bonaparte’s character, may well have been enough to convince the British that there might be a peaceful solution to the French problem. In the event that Bonaparte had sued for some sort of peace before the arrival of the Russian and Austrian armies, the French may well have found a new ally in the form of Britain. With the two former enemies working together to bring about a new era of peace, it is not beyond reason to suggest that the rest of Europe might have been tempted to follow suit. The Congress system that was prevalent in Europe in the years following Bonaparte’s downfall may well have still existed, but with a stronger leader speaking on behalf of France. However, all of this would rely heavily on Bonaparte being able to disregard all the previous behaviors that had come to define his reign. In order for this scenario to work, Bonaparte would have had to cease behaving like some sort of power-hungry megalomaniac and become a reasonable diplomatic presence in Europe. One can even imagine that, had Bonaparte become the diplomat that Europe needed him to be, the rise of Germany might have been significantly delayed. The revolutions of 1848 might have been a significantly smaller affair, as there would have been no antagonism towards a French monarchy, which would have been dissolved with Bonaparte’s renewed rise to power, and therefore no revolution in France. The French revolution of 1848, which was one of the larger and more explosive of that year’s uprisings, would not have taken place to encourage the others across Europe. Without the discontent across Europe, we can easily see a scenario in which a united Germany never comes into being, effectively removing the threat of World War One in 1914 and, therefore, the subsequent World War twenty-five years later.
Unfortunately, Bonaparte’s actions were, by and large, a result of his psychological compulsions and the environment in which he came to power. He was very much a child of the French Revolution; his rise to power had come as a result of one of the bloodiest events in French history. The idea that a man who owed so much of his power to man’s compulsion towards war would be content to sit around a conference table with the other leaders of Europe is improbable at best. Had he been given the opportunity to make this decision, it is unlikely that he would have taken it, opting instead for the allure of battle. In the event that he had sued for peace, it would almost certainly have been a blind to allow himself time to build up his armies before making another attempt at conquering the continent. In all likelihood, rather than delaying the onset of a World War in Europe, he would almost certainly have caused one in his own right. You can also read Nick’s previous articles on what if D-Day did not happen in 1944, what if Hitler had been assassinated in July 1944, and what if the Nazis had not invaded Crete in World War Two. This week’s image of the week features one of the greatest British heroes of them all, the Duke of Wellington, alongside the Prussian Field Marshal von Blucher and Napoleon Bonaparte. The cartoon has the leaders of two European armies literally putting the lid on another failed European attempt to dominate the continent. After controlling much of Europe just a few short years before, by 1815 Napoleon’s France had been defeated. This cartoon goes some way to commemorating that. We see Field Marshal Gebhard Leberecht von Blucher, who led Prussian forces at the 1815 Battle of Waterloo, alongside his British counterpart, Arthur Wellesley, the Duke of Wellington. In the center we see them putting the lid on top of France’s Napoleon Bonaparte. Napoleon is trying to escape but can’t. His face looks distressed in a comical way. Indeed, Napoleon was exiled to the remote Atlantic island of Saint Helena by the British after the Battle of Waterloo. There is an article about a very significant battle involving the Duke of Wellington and Napoleon’s France in the new issue of History is Now Magazine.
http://www.historyisnowmagazine.com/blog/tag/Battle+of+Waterloo
Time and time again the question is asked: What function does Stabilised Oxygen carry out in our body, and why are electrolytes of oxygen so effective in helping the body to rid itself of a variety of ailments or detrimental conditions? To answer these questions one must first understand the importance of oxygen and its multitude of beneficial functions and effects within our bodies. For almost 50 years, researchers and health practitioners have observed that patients using all forms of oxygen-based therapies (including oxygen supplements) have experienced improved health and well-being. How can the most abundant element on earth provide, as many professionals have reported, such remarkable physiological benefits? To answer this question, we first have to understand just what oxygen is and how important oxygen is to a healthy body. Oxygen is one of the five basic elements of all life (oxygen, hydrogen, carbon, nitrogen and sulfur) and is colorless, tasteless and odorless. None of these five basic elements, or any other element for that matter, is as abundant as oxygen. In addition, only oxygen is capable of combining with almost every other element and is essential in combustion. The earth’s crust is estimated to be 49.2% oxygen by weight; almost 20% of our atmosphere is oxygen and most of the rest is nitrogen; oxygen constitutes almost 85% of sea water, 47% of dry soil, 42% of all vegetation, 46% of igneous rocks and over 65% of the human body. The two men credited with the discovery of oxygen in 1773 are the Swedish chemist Karl Wilhelm Scheele and Englishman Joseph Priestley. But it wasn’t until 1777 that the French scientist Antoine Laurent Lavoisier demonstrated that oxygen was a pure substance and a component of air. No other element is as important as atomic oxygen (with its eight electrons per atom). Oxygen is absolutely critical to the life processes of all living creatures. Oxygen is brought to the lungs by respiration, where it diffuses from the air into the blood stream through over 140 square meters of internal lung surface area called the alveolar membrane. The average individual takes in approximately 6 litres of air per minute (about 12 breaths per minute of about 500 ml each) during what is called a resting state. Under heavy exertion or stress, this rate can increase to more than 125 litres per minute. Oxygen from the lungs is rapidly absorbed by the hemoglobin in the red blood cells. The hemoglobin becomes over 95% saturated with the available oxygen, which is then carried and eventually transferred to every one of the billions of cells in our bodies. The oxygen, once in these recipient cells, is consumed as the sugars are converted to energy and heat. The more energy or warmth our body needs, the more oxygen is consumed. This process is called oxidation, and carbohydrates (sugars) are what are oxidised (or burned) for the body’s fuel. Therefore the cells, to remain healthy and to perform their function of providing energy for the body, must have an adequate and continuous supply of oxygen. Do we get enough oxygen into our blood stream? And if we don’t, what happens to our cells and vital organs when they are denied an adequate supply of oxygen? These are vital questions that scientists and researchers have debated for many years. Let’s look at some of their conclusions. Human blood has three main components: plasma, white blood cells and red blood cells. Our blood will settle into three distinct layers if left in a test tube.
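The resting-ventilation figure quoted above is simple arithmetic: the volume of air moved per minute is the breathing rate multiplied by the volume of each breath. A minimal sketch in Python (the breathing rates shown are illustrative values around the commonly quoted resting range, not measurements from this piece) makes the relationship explicit.

```python
def minute_ventilation_litres(breaths_per_min: float, tidal_volume_ml: float) -> float:
    """Air moved per minute: breathing rate times the volume of each breath."""
    return breaths_per_min * tidal_volume_ml / 1000.0

# Illustrative resting values; 500 ml per breath is the figure used above.
for rate in (12, 14, 16):
    print(f"{rate} breaths/min x 500 ml = "
          f"{minute_ventilation_litres(rate, 500):.1f} litres/min")
```

At 12 breaths per minute this reproduces the roughly 6 litres per minute quoted above; faster resting rates push the figure toward 7 or 8 litres.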
The red blood cells, the most numerous cells in the blood stream, will settle on the bottom. White blood cells and platelets, including lymphocytes, monocytes, eosinophils, basophils and neutrophils, will form a thin white line in the middle. A deep yellowish, watery plasma will float to the top. Plasma is the actual ‘river’ in which all the working constituents of our blood are carried in our remarkable circulatory system. Among the many constituents in this river is dissolved oxygen - up to 5% can be dissolved in this fluid. Unlike fish, which can exist on this dissolved oxygen in the plasma stream, the human body requires more oxygen than the plasma can provide. Hence the great importance of the red blood cells, which can carry tremendous amounts of oxygen to the tissues in our body. But these red blood cells get their oxygen from the plasma. The body, in its own remarkable way, does not allow all of the red blood cells’ oxygen to be consumed as it travels through our bodies. Under normal conditions, 70% to 75% of the oxygen that started the journey in the red cells completes the return trip to the lungs. Thus, from 25% to 30% of the oxygen is consumed by normal cellular metabolism. If, however, the body undergoes exertion, stress, or any other prolonged physical activity, this "reserve" can drop to 20% to 25%. Thus, if you can raise the amount of oxygen dissolved in the plasma, you will increase the amount of oxygen that gets to the cells and that can become a part of the oxygen ‘reserve’. The red blood cells, as carriers, gather oxygen from the plasma and transport this oxygen to the capillaries, where it is released again into the plasma for the cells to use for metabolism. An amazing statement was included in the research by Dr. Arthur C. Guyton, M.D., in his medical text The Textbook of Medical Physiology. He wrote: ‘all chronic pain, suffering and diseases are caused from a lack of oxygen at the cell level.’ In his research, Dr. Guyton discovered that in order for our cells to get their oxygen from the blood stream, the cells must be in what he described as a ‘dry state’. In this condition there will be no excess fluid around the cells. There is only enough fluid to fill the crevices around the cells. Dr. Guyton, like most of the medical profession, believed that as long as the fluid around the cells was at the proper water level, contained the right mineral balance, and was free of toxic wastes, the cells would continue to live, function and grow in a healthy manner. Dr. Guyton’s research revealed that blood proteins in our blood stream (which is 91% water) make sure that the water is kept in our blood stream so that no excess fluids seep around the cellular walls. If our blood proteins escape out of our blood stream and find their way into spaces between the cells, then our lymphatic system must immediately remove these proteins from these spaces. The proteins present in this altered condition will pull both sodium and water out of the blood stream. This reverse transfer causes a water and sodium-potassium imbalance, alters the cells’ ‘dry state’, inflames the cells and reduces the cells’ ability to produce energy. When this situation occurs, the presence of excess water between the cells pulls oxygen from the blood stream as well. This reduces the amount of oxygen that can get to the cells through the blood stream. A prolonged blood protein imbalance will cause pain, sickness and disease and, if serious enough, could produce death in just a few hours.
Blood proteins are not the same as the proteins we eat. We eat proteins to obtain the amino acids from them. From these amino acids, the body manufactures its own proteins, especially those for the blood stream (albumins, globulins and fibrinogens). Until Dr. Guyton's discoveries, it was believed that our blood proteins were simply too large to escape through the blood capillary membranes into the cellular spaces. Dr. Guyton proved that this assumption was untrue. He wrote: 'The importance of this function of the lymphatics cannot be stressed too strongly, for there is no other route besides the lymphatics through which excess proteins can return to the circulatory system.' Fluids in the lymphatic system move up the legs into the large thoracic duct in the chest, where the fluids eventually empty into the subclavian vein at the base of the neck. At this point the fluids return to the blood stream with the "captured" blood proteins to rebalance the system. Dr. Guyton showed that the lymphatic vessels have one-way check valves in them. These valves keep the fluids in the system flowing in only one direction. Approximately three litres of blood pulse through the capillaries every minute of every day we are alive. During those same sixty seconds, the heart will beat eighty times and diffuse (pump) water through the tiny capillary pores to supply cells with minerals, nutrients and oxygen. HOW DO CELLS GET THEIR ENERGY? Dr. Otto Warburg did considerable research on why healthy cells need oxygen to create the "energy" for the life process. In fact, he received the Nobel Prize for Medicine based on his research findings on the importance of oxygen to cellular life. Healthy cells in the body break down the carbohydrates we eat into simple glucose sugars. The glucose is then stored in the cells. The cells, when they need energy to perform their functions (reproduction, heat, etc.), take the stored glucose and, in a chemical reaction with oxygen, create ATP (adenosine triphosphate), which becomes, as described by Dr. Warburg, "the pure energy of the cell." If there is a lack of oxygen at the cellular level, no life process can take place and the cell dies. But Dr. Warburg also discovered that even a poor supply of oxygen is detrimental to the cell. When cells lack the right amount of oxygen, the glucose in these cells begins to ferment and a chain reaction starts taking place. Instead of living off the ATP, the cells begin to live off the fermentation of the stored glucose. Dr. Stephen Levine, a well-respected molecular biologist and geneticist, and Dr. M. Kidd completed research that confirmed that '...oxygen is the source of life to all cells...' The constant abuse we subject our bodies to because of our eating and drinking habits, as well as a lack of exercise, robs precious oxygen from our bodies. This situation is further complicated by pollutants and toxic preservatives in our water, food and the air we breathe. Dr. Kidd wrote: "Oxygen plays a pivotal role in the proper functioning of the immune system," especially as it relates to the system's resistance to disease, bacteria and viruses. Dr. Levine added: "We can look at oxygen deficiency as the single great cause of all diseases." It is believed, and supported by a great deal of research, that a shortage of oxygen in the blood could very well be the starting point for the breakdown of the immune system. Dr. Warburg's research adds further emphasis to these findings. 
He stated that the sub-optimal oxygenation of tissues and cells seen in cellular hypoxia is not only the underlying cause of diseases like cancer, but also results in a predisposition towards degenerative diseases. The lack of oxygen is the outstanding factor in immuno-depressive illnesses. Thus, all three researchers conclude, increased oxygenation of the bloodstream and cells will enhance, and may restore, overall health. Oxygen is used by the cells in many processes that break down excesses of toxic substances in the body. This process of combining a substance with oxygen at the cellular level is called "oxidation". Dr. Levine describes oxidation this way: "Oxygen provides the spark of life. Nutrients provide the fuel for burning. The correct fuel/oxygen mixture is required for the best of health." Dr. C. Samuel West, a specialist in the science of lymphology and a distinguished member of the International Society of Lymphology, has proven that food present in cells without enough oxygen will turn into toxic waste and fat. The less oxygen present in the cells, the more pain we experience. Dr. West is a strong advocate of exercise, since a lack of exercise reduces circulation and thus the transfer of oxygen to the cells. This leads to high blood pressure and fluid retention. As mentioned previously, it is the lack of ATP (adenosine triphosphate) that causes the glucose in the cells to ferment, creating an anaerobic (without oxygen) condition. This upsets the metabolic process of the cell. These cells, lacking sufficient oxygen, start manufacturing improper chemicals, and soon these cells and their surrounding cells become weak and unhealthy. If prolonged, the entire immune system may start breaking down. A lack of cellular ATP drastically alters the body's sodium-potassium balance in the individual cells, in the bloodstream, and in the fluid that surrounds the cells. This chemical change also alters and reduces the "electrical fields" in the cells and the blood stream. Once this electrical change occurs, minerals begin to "fall out" of the fluids surrounding the cells and the bloodstream and start sticking together in what are called mineral deposits. If these minerals settle in the joints, arthritis occurs; in the eyes, cataracts occur. When they settle in the arteries, we describe the process as hardening of the arteries. Our muscles also respond to electrical charges sent by the brain. These messages tell the muscles to contract or release. Anything that upsets this delicate and intricate electrical transfer of energy, as does a lack of adequate ATP, which in turn is caused by a lack of oxygen in the cells, will cause the muscles to spasm and to work or respond poorly. We live in a world of fast foods and instant gratification. In addition, Australians consume a large amount of red meat and dairy products, and research has shown that a considerable portion of the average diet consists of fat. Unfortunately, animal and dairy products contain high concentrations of cholesterol. While the body needs some "good" cholesterol to run efficiently, the cholesterol from meats and dairy products passes directly into the bloodstream and begins to trap blood proteins, which are so important for stabilising and regulating the flow of oxygen to the cells. Excessive consumption of cholesterol- and fat-ridden foods robs oxygen from the bloodstream. Fats easily combine with oxygen and form free radicals. These free radicals use more oxygen to form peroxides that damage and destroy cells. 
As far back as 1977, it was reported in Scientific American (February issue) that cholesterol epoxide (peroxide) and other substances formed from cholesterol will cause cells to mutate, which can lead to cancer. This is how fatty foods, which are consumed by many, cause an oxygen deficiency that helps promote cancer. Thus cells that lose their ability to utilise oxygen can and do become cancerous. These cells have been oxygen-starved so long that they have undergone a metabolic shift and revert to a metabolism without oxygen, called anaerobic metabolism. This is, of course, the ultimate and last stage of degeneration caused by a low-oxygen lifestyle. It is also interesting that research shows that people who consume very large quantities of fat have a far greater incidence of cancer as well as other degenerative diseases. Thus cholesterol-rich foods are converted into toxic waste and fat in the cells. This toxic waste and fat in the cells impede the nutrient supply to the cells from the bloodstream. In addition, all the current nutritional research clearly indicates that foods high in sugars, salts, fats and cholesterol are directly responsible for high blood pressure, hardening of the arteries, obesity and strokes. Dr. Levine has described the consequences of cellular oxygen deficiency this way: it is an acidic condition, caused by the accumulation of acidic by-products, that occurs in poorly oxygenated cells. Soft drinks, caffeine, alcohol, and red meats are among the substances that cause systemic (whole-body) acidity, where there is an excess of positively charged hydrogen ions (H+). When excessive numbers of hydrogen ions are in the tissue environment they will combine with, and thus utilise, oxygen, resulting in an oxygen deficiency state. When cells are deprived of oxygen, lactic acid accumulates and the cellular environment becomes acidic. This reduces the oxygen available for the primary function of metabolism, because more oxygen is needed to neutralise the acid. As meat and dairy foods, which are high in cholesterol, break down (dissolve) in our stomach and intestine, poisonous by-products are manufactured. These by-products dilate (widen or expand) the capillaries, making the distance that the water has to travel to and from the cells much greater. The greater the distance that the fluids must travel in the same amount of time, the fewer toxins, waste products, poisons, etc. are transferred back out of the cells and into the blood stream. The waste products, toxins, etc. start to accumulate in the cells. The stretched capillary walls allow the blood proteins to escape and lodge themselves between the cells. The trapped proteins permit the accumulation of excess fluids around the cells. This prevents the cells from getting the oxygen they need, and so the glucose begins to ferment, resulting in electrical changes as the mineral and salt balances change. The oxygen-rich red blood cells carry a negative electrical charge. They travel through the capillaries single file. As the capillaries dilate and the electrical balance of our body changes because of a lack of oxygen, these red blood cells are dispersed over a greater volume in the capillaries and are unable to travel the now greater distances to the cells. 
To further complicate matters, the red blood cells, because of the electrical change, begin to clump together, causing a microscopic traffic jam in the capillaries, which in turn further dilates the capillary pores and allows more blood proteins to escape and lodge in the spaces around the cells. This produces more fluid around the cells, and the process repeats itself until the cells begin to die because of a lack of oxygen. Most individuals, when they are sick, turn to conventional medicines to correct the problem. Researchers have learned, however, that many effective cures for illnesses have side effects. Antibiotics, for example, are effective for removing pathogenic bacteria from the body. Many thousands of lives have been saved from bacterial infections by using penicillin as well as other antibiotics. Yet many studies indicate the possibility that the recent drastic increase in Candida albicans yeast infections is primarily due to the widespread use of antibiotics as prescriptions for humans. Statistics further indicate that the majority of antibiotics produced and used today are also purchased by the poultry, dairy and beef industries to prevent animal diseases. These antibiotics are eventually stored in the tissues of these "food" animals. When the animals are slaughtered and prepared for market, the stored antibiotics are eventually consumed by each of us and added to those that are already in our bodies. The problem is that antibiotics may remove beneficial as well as pathogenic bacteria. This situation could create an imbalance which could lead to an overgrowth of candida as well as many other problems. Therefore, when we use antibiotics, we make a trade-off. We often sacrifice our beneficial bacteria to rid ourselves of the pathogenic bacteria. While antibiotics have importance in life-threatening situations, to use them routinely for every illness is considered by many practitioners to be inappropriate and could even be dangerous to our health. When the body encounters pathogens (like viruses, fungi and bacteria) in the bloodstream or in tissues, the immune system cells will surround or engulf these invaders. They then bombard these pathogens with self-generated free radicals called superoxides, which scientists write as O2⁻. These superoxides are manufactured in the cells during normal cell metabolism using the oxygen they get from the red blood cells in the capillaries. If the immune system is working properly, as described earlier, it will generate more antioxidant enzymes to remove the free radicals and protect the surrounding tissues. The importance of this antioxidant defense system cannot be overstated. Without it, the immune system actually works against the body by generating too many free radicals, which go about doing damage to the surrounding tissue and our bodies. Although there are other factors, this breakdown in the immune system in many people has helped to create an epidemic of auto-immune diseases like lupus and rheumatoid arthritis. Oxygen is both a life giver and a "killer". It is one of the body's primary guardians and protectors against unfriendly bacteria and other disease organisms. In fact, one of oxygen's major functions is disintegration. Brian Goulet, a certified herbologist and nutritional consultant, wrote in his article The Magic of Aerobic Oxygen: "...Rubble, garbage, toxins, refuse, debris, and anything useless are destroyed by oxygen and carried out of the system." 
Just as a clean house holds little interest for passing flies, an oxygen-rich body is a difficult fortress to assail. Harmful anaerobic bacteria and viruses cannot survive in the presence of oxygen. But oxygen is also naturally selective in what it kills. Unlike drugs and antibiotics, which may and usually do kill all bacteria in the body, oxygen kills only harmful bacteria while allowing beneficial bacteria to thrive, thus ensuring good health. Oxygen molecules, like those found in stabilised oxygen supplements such as Di-Atomic O2xygen, are missing electrons in each molecule's outer shell. Because of this, the oxygen molecules will pull electrons from anything that will give them up. Anaerobic bacteria and anaerobic viruses (as do all living things) possess an electrical charge. These anaerobic organisms cannot prevent the oxygen molecule from stealing electrons from them. As soon as this process occurs, the organisms die. Aerobic beneficial bacteria will not give up their electrons to the oxygen molecule. Nor will beneficial bacteria which live under both anaerobic and aerobic conditions, as in the intestines, be adversely affected by oxygen. In fact, these organisms are actually stimulated in the presence of oxygen. In view of all the scientific evidence available today, an excellent oxygen ratio in the bloodstream is a prime requirement for good health. Stabilised oxygen supplements are packed with nutrient oxygen - 'Vitamin O', as Dr. Levine calls it. This oxygen certainly fits the definition of a vitamin, which is a substance found in foods (or the environment) that is necessary for life but not necessarily manufactured by the body. Vitamin O, the oxygen nutrient, is what Dr. Levine calls 'the very spark of life!' The current literature explains that nutritional oxygen can be either dissolved molecules of oxygen (in various molecular configurations including O1, O2, O3, O4, etc.) or bio-available oxygen molecules electrically bound to various salts. These oxygen salts include those of magnesium, potassium and sodium, provided their molecular bonds are easily weakened so that the oxygen molecules may be absorbed by the body into the bloodstream. The literature further explains that stabilised oxygen may be in either liquid or dry form, provided that in the dry form the oxygen molecules can be easily and readily made available to the body. Many oxygen supplements use oxychlorine or hypochlorite compounds to stabilise the oxygen molecules. These compounds include chlorite (ClO2⁻) and chlorate (ClO3⁻). There is evidence to indicate that chlorite and chlorate may indeed release their oxygen in nascent form as O2 into the body as these ions are broken down during the digestive process by hydrochloric acid (HCl) in the stomach. THE EXPERTS SPEAK OUT ON THE BENEFITS OF STABILISED OXYGEN. Rodrigo Rodriguez, M.D. (Medical Director of American Biologics Hospitals), in a report for the prestigious Robert W. Bradford Research Institute, stated: "Many substances are capable of releasing nascent oxygen, such as chlorine oxides, ozone, hydrogen peroxide and iodine compounds." A comparison may be made of these substances on the basis of their stability and toxicity as well as their pharmacokinetic properties (how they react with the body). Using the above examples, ozone releases its oxygen very rapidly (short half-life), while chlorine oxides remain effective over a much longer period of time. Hydrogen peroxide has a relatively short half-life as it stimulates free radical activity. 
Sodium periodate is a rich source of oxygen; however, the by-product is iodine, which is highly toxic to the cells. Commonly used oxidants, as described by Brian Goulet, certified herbologist, in the newsletter Alive: Focus on Nutrition (published by the Canadian Journal of Health and Nutrition), include oxides of chlorine (ClO2, ClO, ClO3, ClO4, etc.), ozone (O3) and hydrogen peroxide (H2O2). James Lembreck, D.C.H., C.M.P., wrote in Natural Physique: "Stabilised oxygen is often confused with hydrogen peroxide, but has a very different action and is very safe to use." The product was originally developed by Dr. William F. Koch, M.D. and Ph.D., and was later used by NASA for the space research program. It was originally designed to destroy any known bacteria or virus and not harm the host when taken internally or used externally. Di-Atomic O2xygen is based on a dissolved molecular oxygen (O2) formulation instead of the salts-of-oxygen formulations. It is very well established that in normal standing or free-flowing water, oxygen molecules (O2) may also be dissolved. While the concentration of O2 in these instances is small (normally from 7-20 parts per million, or ppm), water is nevertheless capable of holding these molecules in their bio-available state. During its proprietary manufacturing process, Di-Atomic O2xygen actually creates stabilised oxygen molecules rather than bonding oxygen molecules to various salts. This process is a completely different approach to creating a stabilised oxygen supplement from all other oxygen supplements on the market today. Di-Atomic O2xygen is packed with dissolved oxygen in both its diatomic form (O2) and as ozone (O3). This separates Di-Atomic O2xygen as a solution from traditional oxygen supplementation applications. Di-Atomic O2xygen supplies the bloodstream and the cells with additional oxygen. We have already shown that without a dependable supply of oxygen, the cells in our bodies cannot function properly. Nutrients, occurring in our diets as proteins, carbohydrates, and fats, must have oxygen present to convert their potential energy into usable energy. In order for new cells to be formed, hundreds of amino acids must link together using oxygen as the source of their building energy. Toxic Stress: Whether derived from the water we drink, the air we breathe or the food we eat, we are now subject to over 70,000 different toxic contaminants, many of which did not exist a decade ago. Oxygen is required for the body to even attempt to metabolise and eliminate these lethal chemicals from the body. Emotional Stress: Adrenaline and adrenaline-related hormones are created by the body during emotionally stressful times (like every day, for many of us). The body must use its available oxygen to metabolise those chemicals back out of the body to re-establish metabolic balance. Physical Trauma & Infections: Bacteria and viruses can put tremendous stress on the body's immune system. When this occurs, the immune system is robbed of the oxygen that is necessary for the body's normal metabolic function. Reduction in Available Atmospheric Oxygen: Studies reveal that increased environmental pollution and the destruction of green plants have reduced the amount of oxygen in our atmosphere (by as much as 50%!) over the last 200 years. Improper Diets: Saturated fats can reduce oxygen in the blood stream. Foods with high fat content and low nutritional value (i.e. 
junk and highly processed foods) have less than half the oxygen content of foods containing complex carbohydrates. Lack of Exercise: Exercising increases the body's metabolic rate as well as the intake of oxygen, helping to cleanse the body of built-up toxins. A sedentary lifestyle reduces the body's ability to process out toxic contaminants and to perform normal metabolic functions. For years alternative health care professionals have developed and provided natural therapies for healing and well-being. Oxygen-based alternative approaches to health have been used effectively by these professionals, as reported in over 5,000 published articles, for over 100 years. Di-Atomic O2xygen has one of the highest concentrations of bio-available stabilised oxygen available today. It is non-chemically formulated (no mixing of potentially toxic chemicals), colorless, non-toxic, hypoallergenic, and completely safe to use as directed. The normal dose is 15 drops of Di-Atomic O2xygen in 250ml of water. Di-Atomic O2xygen is primarily absorbed into the blood stream through the stomach and the intestinal lining. Once in the blood stream, the oxygen will make its normal journey to the cells in the body. Each 50ml bottle is a 30-day supply of activated stabilised oxygen. WHO SHOULD USE Di-Atomic O2xygen? Di-Atomic O2xygen as a supplement is every bit as important as vitamins A, E, C, etc. Our bodies require a constant supply of oxygen to function properly. The oxygen in Di-Atomic O2xygen works hand in hand with vitamins, minerals and amino acids to help obtain a higher level of health. The oxygen in Di-Atomic O2xygen is a powerful topical and internal bactericide, fungicide and viricide. Tests show that the powerful oxygen present in Di-Atomic O2xygen renders most pathogenic anaerobic organisms harmless after direct application. Di-Atomic O2xygen is also a remarkable blend of essential and trace minerals in combination with its bio-available, stabilised dissolved oxygen. Calcium: Calcium is essential for developing and maintaining healthy bones and teeth. It also assists in blood clotting, muscle contraction, nerve transmission, oxygen transport, cellular secretion of fluids, and enzyme activity. Chromium: Chromium aids in glucose metabolism, which also requires oxygen, and helps regulate blood sugar by potentiating insulin and serving as a component of the glucose tolerance factor. Cobalt: Cobalt promotes the formation of red blood cells, which carry oxygen to the cells and tissues. Copper: Copper is essential to normal red blood cell and connective tissue formation. It acts as a catalyst to store and release iron to help form hemoglobin, which is the key oxygen-carrier system for the body. Copper also contributes to central nervous system function. Iodine: Iodine is required as an essential part of the thyroid hormone to support normal metabolic functions, which require large amounts of oxygen in the body. Iron: Iron is necessary for red blood cell formation and function. It is essential for the transportation of oxygen throughout the body and for normal brain function. Magnesium: Magnesium activates over 100 enzymes and assists in the functioning of nerves and muscles. It also helps maintain the integrity of cell membranes and stabilises the electro-chemical balance of the cells, which require energy based on the "burning" of oxygen and glucose. It is a critical mineral for proper heart function. Manganese: Manganese is a key component of enzyme systems, especially the oxygen-handling enzymes. 
It supports brain function and is required for blood sugar regulation. Manganese is essential to the formation and health of bone structure. Molybdenum: Molybdenum contributes to the normal growth and development of the body and organs. It is a key component in many enzyme systems, including those involved in body detoxification, which require large amounts of oxygen. Phosphorus: Phosphorus works directly with calcium to develop strong bones and teeth. It also enhances the use of other key minerals and nutrients. Phosphorus is a key mineral for cell membrane integrity and intercellular communication and, together with oxygen, for proper energy production and the functioning of the metabolic processes in the body. Selenium: Selenium is the key component of one of the main antioxidant enzymes and is necessary for normal growth and physiological development. It plays an important role in the body's detoxification of heavy metals such as mercury, in combination with oxygen. Selenium also plays an important role in the production of antibodies by the immune system. Sulfur: Sulfur is required for structuring body proteins, including muscles and hair. It plays a critical role in liver detoxification, as well as a key role in many antioxidant nutrients and in oxygen handling.
http://vitalo2.com/diatomicO2xygeninfo.htm
Last week, I took part in a discussion with an amazing group of practitioners and change agents in the experiential learning world facilitated by Gensler Architects in San Francisco. The group included folks from the Stanford d.school, the Alt School, Singularity University, The Story Center, and the California College for the Arts. We talked about what we thought experiential learning meant to us, where we thought the pedagogical approach was heading into the future, and the implications of all this on our built spaces and places (the reason why Gensler convened us all in the first place). 1. There was widespread agreement that the disruptive forces in education would continue and strengthen for the foreseeable future. The people in the room were convinced that the abundance and relative accessibility of information has forever changed the way we think about teaching and learning. If information is “everywhere” then learning can be too. Pop-up classrooms, micro-labs, maker spaces, and community-based learning all point to the ways in which the dominance of “seat time” (with an occasional field trip) needs to become a relic of the past as we shift toward a pedagogy that more artfully integrates places, experiences, and people. The “primacy of content” educational paradigm is changing into the “primacy of experience” paradigm. Randall Bass at Georgetown talks about this as the shift in flow from “content mastery-to-practice” to a new paradigm of “practice-through-content.” I have described it as teachers becoming “curators of experience.” Teachers in this new paradigm will continue to be content and subject mastery specialists but with a different suite of skills beyond the lecture to employ with and alongside students (see #3 below). 2. There was also agreement that, despite these disruptive forces, the dominant structures of education (classrooms, courses, grades, etc.) remain mostly unchanged. Despite all this, educators of all shapes and sizes are experimenting and innovating. They are finding ways around these obstacles and structures and, most importantly, they are achieving some incredible results and projects. Architects are starting to pay attention too. Learning spaces are being designed for more flexibility, more collaboration, and more dynamism. One participant asked: “why would any university want to build another lecture theater… ever?” Interestingly amongst the group, technology, while not viewed negatively, was seen as a potential distraction. Start with your vision and your learning outcomes, the group agreed, and only bring in technology if it can serve as a tool to meet your broader aims. 3. Teachers need to learn a new suite of skills to help their students succeed in this environment. The experiential teacher of the future was most decidedly not organizing one or two field trips per quarter or facilitating a few interactive games to liven up direct instruction. The group agreed that a new set of knowledge, skills, and aptitudes were needed to thrive in this disruptive space. Skills like project management, teaching what you don’t know, design thinking, reflective practice, formative assessment, integrative learning, content curation, and networking. “Lesson planning” may be necessary but it will be insufficient. A firm understanding of design in philosophy, structure, and function will be critical. As one participant said, “the teacher is the user interface to the world” for students. She must make it accessible and intuitive for her students. 
And importantly (as one participant noted), we need to learn when to get out of the way. While we are certainly not yet at this future place for education (K-16), there are more than enough signs that this future is coming. It is why Gensler invited us to talk with them. As architects, they have to be mindful of these trends in their design and consultation work. Readers, if you made it this far, what do you see as the future of experiential learning?
https://jaywroberts.com/2016/08/05/the-future-of-experiential-learning/
We are soliciting chapter proposals for an edited collection entitled The Globalization of Musics in Transit: Musical Migration and Tourism to be published by Routledge in 2013 in its Research in Ethnomusicology Series. (Please note that the book is contractually agreed.) The book studies musical transformations as they occur across time and space, exploring contemporary concerns about the impact of globalization on musics and peoples as they transit across the globe. The book’s focus is on two main themes: musical tourism and travel; musical migration and diaspora. We invite authors working in all areas of ethnomusicology,anthropology, sociology, popular music studies, media and cultural studies, social/cultural geography, and other relevant disciplines and fields of study to propose a book chapter on either theme. Both single and multiple-authored works will be considered. The book will not be limited to specific musics, peoples or geographical areas; however, an element of ethnographic approaches would be anticipated. Contributors are particularly invited to engage with specific sub-themes and issues under each theme: Musical migration and diaspora. Under this theme, we invite book chapters that engage with one or more of the following sub-themes: diasporic production; the circulation and consumption of musics; the re-writing of the concept of diaspora through cultural production in relation to, but also outside of, national culture; the formation of transnational tastes and cosmopolitan imaginations; the liquidity of globalized cultures; the role of music in creating symbolic geographies and diasporic double-consciousness; specific instances of cultural translation and adaptation. Musical tourism and travel. Authors are invited to explore one or more of the following sub-themes: musical pilgrimage; the material and social flows of travellers and musicians; the cultural and economic policies that promote music tourism; festivals and performances for tourists; ethnographies of touristic encounters with music; the place of music in the representation of tourism destinations; the role of music in the construction of tourist discourses; narratives and memories. Anyone interested in submitting a chapter for consideration should in the first instance send a proposal to the editors (see below). The proposal should include: (1) a summary of the chapter (abstract, max. 500 words), addressing its aim and objectives, research methods, and contribution to the existing literature, specifically in relation to a chosen theme and sub-theme(s); (2) an indication of the number and type of images or illustrations to be included, and whether copyright will need to be sought; (3) a brief biographical note. Deadlines for submission of: Proposal: 20 October 2011 Chapter (ca. 7,000 words): 30 March 2012 Please e-mail your abstracts to the editors: [email protected] and [email protected] Further particulars: Title: Routledge needs their titles to be as explanatory as possible, as well as keyword heavy for searches and the like. Word count: 7,000 per chapter (110,000 in total) Images: ca. 2 per chapter (20 in total) Permissions: Please note that authors are responsible for obtaining permission for any copyrighted materials (all images, prose exceeding 500 words, more than 2 lines of poetry). This is often a very drawn out process, so it’s best to begin as soon as possible. Gratis copies: 1 per contributor Cover: To make things uniform, all books in this series have the same template cover design. 
Series: Routledge Research in Ethnomusicology Publishing plan: Initially, the book will be published as hard cover, library standard editions, which will be marketed primarily to research and university libraries (though of course it will also be marketed in Routledge catalogues, and available at relevant conferences where Routledge is represented). The initial print run will be fairly small-250 copies or so, and pricing ($110+) will reflect our focus on the library market. Marketing: Routledge will begin marketing the book on their website and catalogues shortly as follows: This book studies musical transformations in transit as they occur across time and space, exploring contemporary concerns about some of the impacts of globalization of musics and peoples as they transit across the globe. While taking diverse perspectives in embracing the contemporary musical landscape, the collection offers a range of research methods and theoretical approaches from ethnomusicology, anthropology, sociology, popular music studies, and media and cultural studies, thus providing interdisciplinary contributions to ethnomusicological studies of contemporary musical processes inspired by musical globalizations. Discussions on migration and diaspora advance discourse on new symbolic geographies mapped by the production, adaptation, and consumption of musics in transit. Elsewhere, essays on cultural travel and tourism raise concerns about the commodification of tradition and heritage, asking questions about cultural gain and loss, consumption, and performance. Tourism and migration are two key globalizing elements, and the book’s combination of migration/diaspora and tourism allows for an exploration of the juxtaposition of phenomena like cosmopolitan empathy, advocacy, and responsibility versus the commodification of the other, capitalism, and inequality. While the contributions reach across disciplines, they are united by an ethnomusicological concern with local experience and new community formation. Departing from established theories (e.g. globalization), this book questions current conceptualizations of musical phenomena while focusing on sub-themes such as the post-national or the cosmopolitan imagination, which are recent and little explored additions to ethnomusicology’s theoretical portfolio.
https://www.iaspm.net/the-globalization-of-musics-in-transit-musical-migration-and-tourism/
How Can You Discover a Page’s PageRank? To discover a web page’s PageRank, you could install the Google toolbar into your browser. If it’s not available for your browser, you might install a different toolbar; you can find various toolbars that provide PageRank in your browser’s add-ons library. Each time you open a page, the toolbar will load the page’s PageRank. In the case of the Google toolbar, if the bar is all gray, the PageRank is 0. If it’s all green, the PageRank is 10. You can estimate PageRank simply by looking at the position of the green bar, or you can mouse over the bar and a pop-up appears with the PageRank. If the PageRank component isn’t on your toolbar, click the Options button on the right of the bar and then select Options from the menu that appears. In the Toolbar Options dialog box, click the Privacy tab, click the Use PageRank check box, and then click the Save button. If you don’t want to install a toolbar, you can still check PageRank. Search for the term pagerank tool to find various sites that allow you to enter a URL and get the PageRank. It’s important to understand that as far as PageRank goes, zero is not zero, and ten is not ten. Although commonly referred to as PageRank, and even labeled as such, the PageRank value provided by PageRank tools and services is not the page’s actual PageRank. It’s simply a number indicating the approximate position of the page on the PageRank range. Therefore, pages never have a PageRank of 0, even though most pages show 0 on the toolbar, and a page with a rank of, say, 2 might actually have a PageRank of 25 or 100. The true PageRank scale is probably logarithmic. Thus, the distance between PageRank 5 and 6 is much greater than the distance between 2 and 3. The consensus among people who like to obsess over such things is that PageRank is a logarithmic scale with a base of around 5 or 6; some people believe it’s more likely base 8, or perhaps higher. Suppose for a moment that the base is actually 5. That means that a page with a PageRank of 0 shown on the toolbar may have an actual PageRank somewhere between a fraction of 1 and just under 5. If the PageRank shown is 1, the page may have a rank between 5 and just under 25; if 2 is shown, the number may be between 25 and just under 125; and so on. With base 5, the toolbar PageRank would map to a true PageRank roughly as follows:
|Toolbar PageRank|True PageRank|
|0|0–5|
|1|5–25|
|2|25–125|
|3|125–625|
|4|625–3,125|
|5|3,125–15,625|
|6|15,625–78,125|
|7|78,125–390,625|
|8|390,625–1,953,125|
|9|1,953,125–9,765,625|
|10|9,765,625–48,828,125|
The maximum possible PageRank, and thus this scale, continually changes as Google recalculates PageRank. As pages are added to the index, the maximum PageRank has to go up. Periodically, Google recalculates PageRank web-wide, and PageRank drops for many sites, perhaps most. How can you be sure that the numbers on the toolbar are not the true PageRank? The PageRank algorithm simply couldn’t work on a scale of 0 to 10 on a web that contains billions of web pages; it just wouldn’t make sense. Here are two important points to remember about PageRank values provided by these toolbars and services:
- Two pages with the same PageRank may actually have a very different true PageRank. One may have a PageRank that is a quarter or a fifth of the other, perhaps less. 
- It gets progressively harder to push a page to the next PageRank on the toolbar. Getting a page to 1 or 2 is pretty easy, but to push it to 3 or 4 is much harder (though certainly possible). To push it to the higher levels is very difficult indeed — 8 or above is rare.
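To make the logarithmic relationship concrete, here is a small Python sketch (my own illustration, not part of the original article) that converts a toolbar value into its estimated true-PageRank range. The base of 5 is an assumption carried over from the discussion above; Google has never published the real figure.

```python
def true_pagerank_range(toolbar_pr, base=5):
    """Estimated (low, high) true-PageRank range for a toolbar value.

    Assumes toolbar PageRank is roughly floor(log_base(true PageRank)),
    with base 5 by default (an assumption, since the real base is unknown).
    """
    if not 0 <= toolbar_pr <= 10:
        raise ValueError("toolbar PageRank runs from 0 to 10")
    low = 0 if toolbar_pr == 0 else base ** toolbar_pr
    high = base ** (toolbar_pr + 1)
    return low, high

# Reproduce the table above for an assumed base of 5.
for pr in range(11):
    low, high = true_pagerank_range(pr)
    print(f"toolbar {pr:2d}: roughly {low:,} to just under {high:,}")
```

Changing the assumed base to 6 or 8 shifts every range upward, which is why two pages showing the same toolbar value can sit far apart on the true scale.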
https://www.dummies.com/web-design-development/search-engine-optimization/how-can-you-discover-a-pages-pagerank/
Previously we have used quadgrams to see how similar a piece of text is to English. This page will deal with determining how similar to English a piece of text is based on the English words it contains. To calculate the probabilities of words, we need a very large corpus of text so that we can count the occurrences of each word. The probability of a word is then its count divided by the total number of words encountered. To get accurate counts we need a truly huge database; fortunately for us, Google has done this work with their trillion-word dataset built from data scraped from the web. To calculate the likelihood of a sentence we need to extract all the words, then multiply the probabilities of each word together to get the total sentence probability. When trying to break ciphers, though, we often have the problem that we don't know the word divisions, so extracting the words can be difficult. In this case we consider all possible segmentations, of which there are 2^(L-1) for a sentence of L characters, and take the probability of the best segmentation as the probability of the sentence. Fortunately there are efficient algorithms for doing this which don't require us to consider all possible segmentations. Peter Norvig has provided a very elegant solution to this problem in chapter 14 of the book 'Beautiful Data'; I recommend reading it, as it explains things better than I can. Python code, and the book chapter to read, can be found on Peter Norvig's website. He provides a memoized recursive solution that can use an awful lot of memory very quickly. Because we'll need to be calling this function a lot, this memory footprint makes it unsuitable. I have implemented an iterative version based on the Viterbi decoder that does the same job and uses very little memory, available below. Note that there is a slight error in Norvig's implementation: it treats e.g. 'how are' and 'How are' as different tokens, when they should be treated as one. An Example
Let us take the sentence HELLOHOWAREYOUTODAY and try to determine the most likely segmentation. Possible segmentations include:
HELLOHOWA REYOUTODAY
HELLO HOW ARE YOU TODAY
HE LL OH OW AR EY OU TO DA Y
H ELLOH OW AREYOU TOD AY
HELL OHO WARE YOU TODAY
etc. To compute the probabilities of these segmentations we get the probability of each word individually and multiply them together. If we do this we find that HELLO HOW ARE YOU TODAY is the segmentation that gives the highest total probability, and it certainly looks like the best segmentation. Python code
Searching through all possible segmentations is efficiently performed by the Viterbi decoder, an algorithm more commonly used for determining the most likely sequence of phonemes in automatic speech recognition. This implementation is slightly modified, as our states now have varying lengths. Its time complexity is O(LN^2), where L is the length of the sentence and N is the maximum word length under consideration. Example use:
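The page's own code listing is not reproduced above, so here is a minimal, self-contained Python sketch of the same idea (my illustration, not the site's implementation): a dynamic-programming segmenter that scores candidate words with log probabilities. The toy WORD_COUNTS table and the length-based penalty for unseen words are assumptions made purely for the example; a real fitness measure would load counts derived from a large corpus such as the Google n-gram data.

```python
import math

# Toy unigram counts standing in for a large corpus (purely illustrative).
WORD_COUNTS = {
    "hello": 120, "how": 900, "are": 850, "you": 1000, "today": 300,
    "hell": 40, "oho": 2, "ware": 5, "to": 1500, "day": 400, "a": 2000,
}
TOTAL = sum(WORD_COUNTS.values())

def word_logprob(word):
    """Log probability of a single word.

    Unknown words get a probability that shrinks with their length, so a
    long unrecognised string cannot outscore a split into common words
    (similar in spirit to the length penalty Norvig describes).
    """
    count = WORD_COUNTS.get(word.lower())
    if count is None:
        return math.log(1.0 / (TOTAL * 10 ** len(word)))
    return math.log(count / TOTAL)

def segment(text, max_word_len=20):
    """Best segmentation of `text` by total log probability.

    best[i] is the score of the best segmentation of text[:i]; back[i]
    records where the last word of that segmentation starts.  Each
    position only looks back at most max_word_len characters.
    """
    L = len(text)
    best = [0.0] + [float("-inf")] * L
    back = [0] * (L + 1)
    for i in range(1, L + 1):
        for j in range(max(0, i - max_word_len), i):
            score = best[j] + word_logprob(text[j:i])
            if score > best[i]:
                best[i], back[i] = score, j
    # Walk the back-pointers from the end to recover the words.
    words, i = [], L
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words)), best[L]

if __name__ == "__main__":
    words, logprob = segment("HELLOHOWAREYOUTODAY")
    print(words, logprob)  # -> ['HELLO', 'HOW', 'ARE', 'YOU', 'TODAY']
```

With a realistic word-count table loaded in place of the toy one, the same loop still only scans back over at most max_word_len characters per position, so it scales linearly with the length of the text.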
http://practicalcryptography.com/cryptanalysis/text-characterisation/word-statistics-fitness-measure/
Joannie Savoie is the owner of Busyfeet Holistic Centre in Moncton, N.B. She graduated from the University of Moncton in Kinesiology and pursued a career in that field for a few years while studying the many other natural healing modalities that she now provides. Foot and Hand Reflexology, Chios Energy Healing, Reiki, Raindrop Technique and Aromatherapy with Essential Oils are some of the skills that were developed throughout those years, the most recent modality being Theta Healing (Basic, Advanced, Manifesting Abundance, Family Ties & Intuitive Anatomy). She personalizes each healing with a blend of modalities, depending on what the individual needs at that moment, using her intuitive skills and abilities to empower clients on their highest and best life journey. She also specializes in teaching professional and personal self-development courses and workshops.
https://www.busyfeetholistic.com/about-us
All holistic alternative approaches have in common that the healing process is based on activating the body's own inherent healing abilities. Bio Food Supplements: Glucosamin++ Chondroitin is a 100% European formula made from natural ingredients of the highest quality, efficiently processed and of biological origin. Careful international transport: international transport operations are handled with care by companies specialized in the transport of sensitive and biodegradable goods. Expert advice on alternative medicine: our experts have many years of practical experience in alternative medicine and provide professional, scientific information on our products.
https://www.eurovedahealthcare.com/copie-de-men-s-health-1
RELATED APPLICATIONS
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE EMBODIMENTS
EXAMPLES
Example 1 Obtaining VRE Isolates
Example 2 Isolation of VRE Phage
Example 3 A Phage Strain is Active Against Over 200 VRE Isolates
Example 4 Producing Bacteriophage-Containing Compositions
Example 5 Determination of a Protective Dose of Bacteriophage
Example 6 Reduction of Colonization in Humans
The present application is related to U.S. Provisional Patent Application Nos. 60/175,415 and 60/175,416, filed Jan. 11, 2000, and 60/205,240, filed May 19, 2000. In addition, the present application is related to U.S. Provisional Patent Application No. 60/175,377, filed Jan. 11, 2000. The disclosures of these provisional applications are incorporated herein, by reference, in their entirety. 1. Field of the Invention The present invention is directed to the field of medical treatment and prevention of infectious diseases; in particular, the use of therapeutic compositions containing bacteriophage to reduce or eliminate colonization with potentially pathogenic bacteria (including bacterial strains resistant to many or most commonly used antimicrobial agents), thereby reducing the risk of subsequent disease occurrence. 2. Description of Related Art Vancomycin-Resistant Enterococcus Over the last ten years there has been an emergence of bacterial pathogens which demonstrate resistance to many, if not all, antimicrobial agents. This is particularly relevant in the institutional environment, where nosocomial pathogens are under selective pressure due to extensive antimicrobial usage. A particular problem in this regard has been vancomycin-resistant enterococci (VRE), which are not treatable with standard classes of antibiotics. Despite the recent release of two drugs to which VRE are susceptible (quinupristin/dalfopristin and linezolid) (Plouffe J F, Emerging therapies for serious gram-positive bacterial infections: A focus on linezolid. Clin Infect Dis 2000; Suppl 4:S144-9), these microorganisms remain an important cause of morbidity and mortality in immunocompromised patients. Enterococci are gram-positive facultatively anaerobic cocci found in a variety of environmental sources including soil, food and water. They are also a common colonizing bacterial species in the human intestinal tract (i.e., the intestinal tract serves as a reservoir for the microorganism). Although the taxonomy of enterococci has not been finalized, it is generally accepted that the genus consists of 19 species. Antibiotic management of serious enterococcal infections has always been difficult due to the intrinsic resistance of the organisms to most antimicrobial agents (Arden, R. C., and B. E. Murray, 1994, "Antimicrobial resistance." In: Principles and Practice of Infectious Diseases Update, volume 2, number 4 (February, 1994). New York: Churchill Livingstone, Inc. 15 pps; Landman, D., and J. M. Quale, 1997, "Management of infections due to resistant enterococci: a review of therapeutic options." J. Antimicrob. Chemother. 40:161-70; Moellering, R. C., 1998, "Vancomycin-resistant enterococci." Clin. Infect. Dis. 26:1196-9). In the 1970's enterococcal infections were treated with the synergistic combination of a cell wall active agent such as penicillin and an aminoglycoside (Moellering, et al. 
(1971), "Synergy of penicillin and gentamicin against enterococci." J. Infect. Dis. 124:S207-9; Standiford, et al. (1970), "Antibiotic synergism of enterococci: relation to inhibitory concentrations." Arch. Intern. Med. 126:255-9). However, during the 1980's enterococcal strains with high levels of aminoglycoside resistance and resistance to penicillin, mediated both by a plasmid-encoded β-lactamase and by changes in penicillin binding proteins, appeared (Mederski-Samoraj, et al. (1983), "High level resistance to gentamicin in clinical isolates of enterococci." J. Infect. Dis. 147:751-7; Uttley, et al. (1988), "Vancomycin resistant enterococci." Lancet i:57-8). In 1988 the first VRE isolates were identified (Leclercq, et al. (1988), "Plasmid mediated resistance to vancomycin and teicoplanin in Enterococcus faecium." N. Engl. J. Med. 319:157-61). Such organisms, called VRE because of resistance to vancomycin, are also resistant to the penicillin-aminoglycoside combination. VRE includes strains of several different enterococcal species, with clinically significant VRE infections caused by Enterococcus faecium and Enterococcus faecalis. Enterococci can cause a variety of infections including wound infection, endocarditis, urinary tract infection and bacteremia. After Staphylococcus aureus and coagulase-negative staphylococci, enterococci are the most common cause of nosocomial bacteremia. Among immunocompromised patients, intestinal colonization with VRE frequently precedes, and serves as a risk factor for, subsequent VRE bacteremia (Edmond, et al. (1995), "Vancomycin resistant Enterococcus faecium bacteremia: Risk factors for infection." Clin. Infect. Dis. 20:1126-33; Tornieporth, N. G., R. B. Roberts, J. John, A. Hafner, and L. W. Riley, 1996, "Risk factors associated with vancomycin-resistant Enterococcus faecium infection or colonization in 145 matched case patients and control patients." Clin. Infect. Dis. 23:767-72). By using pulsed-field gel electrophoresis as a molecular typing tool, investigators at the University of Maryland at Baltimore and the Baltimore VA Medical Center have shown that VRE strains causing bacteremia in cancer patients are almost always identical to those which colonize the patient's gastrointestinal tract (Roghmann M C, Qaiyumi S, Johnson J A, Schwalbe R, Morris J G (1997), "Recurrent vancomycin resistant Enterococcus faecium bacteremia in a leukemia patient who was persistently colonized with vancomycin resistant enterococci for two years." Clin Infect Dis 24:514-5). The risk of acquiring VRE increases significantly when there is a high rate of VRE colonization among patients on a hospital ward or unit (i.e., when there is high "colonization pressure"). In one study in the Netherlands, colonization pressure was the most important variable affecting acquisition of VRE among patients in an intensive care unit (Bonten M J, et al, "The role of "colonization pressure" in the spread of vancomycin-resistant enterococci: an important infection control variable." Arch Intern Med 1998; 25:1127-32). Use of antibiotics has been clearly shown to increase the density, or level of colonization, in an individual patient (Donskey C J et al, "Effects of antibiotic therapy on the density of vancomycin-resistant enterococci in the stool of colonized patients." N Engl J Med 2000; 343:1925-32); this, in turn, would appear to increase the risk of subsequent infection and the risk of transmission of the organism to other patients. Multi-Drug Resistant Staphylococcus aureus (MDRSA) 
S. aureus is responsible for a variety of diseases ranging from minor skin infections to life-threatening systemic infections, including endocarditis and sepsis [Lowy, F. D., 1998, "Staphylococcus aureus infections." N. Engl. J. Med. 339:520-532]. It is a common cause of community- and nosocomially-acquired septicemia (e.g., of approximately 2 million infections nosocomially acquired annually in the United States, approximately 260,000 are associated with S. aureus [Emori, T. G., and R. P. Gaynes, 1993, "An overview of nosocomial infections, including the role of the microbiology laboratory," Clin. Microbiol. Rev. 4:428-442]). Also, approximately 20% of the human population is stably colonized with S. aureus, and up to 50% of the population is transiently colonized, with diabetics, intravenous drug users, patients on dialysis, and patients with AIDS having the highest rates of colonization [Tenover, F. C., and R. P. Gaynes, 2000, "The epidemiology of Staphylococcus aureus infections," p. 414-421, In: V. A. Fischetti, R. P. Novick, J. J. Ferretti, D. A. Portnoy, and J. I. Rood (ed), Gram-positive pathogens, American Society for Microbiology, Washington, D.C.]. The organism is responsible for approximately one-half of all skin and connective tissue infections, including folliculitis, cellulitis, furuncles, and pyomyositis, and is one of the most common causes of surgical site infections. The mortality rate for S. aureus septicemia ranges from 11 to 48% [Mortara, L. A., and A. S. Bayer, 1993, "Staphylococcus aureus bacteremia and endocarditis. New diagnostic and therapeutic concepts." Infect. Dis. Clin. North. Am. 1:53-68]. Methicillin was one of the first synthetic antibiotics developed to treat penicillin-resistant staphylococcal infections. However, the prevalence of methicillin-resistant S. aureus strains, or "MRSA" (which also are resistant to oxacillin and nafcillin), has drastically increased in the United States and abroad [Panlilio, A. L., D. H. Culver, R. P. Gaynes, S. Banerjee, T. S. Henderson, J. S. Tolson, and W. J. Martone, 1992, "Methicillin-resistant Staphylococcus aureus in U.S. hospitals, 1975-1991." Infect. Control Hosp. Epidemiol. 10:582-586]. For example, according to the National Nosocomial Infections Surveillance System [National Nosocomial Infections Surveillance (NNIS) report, data summary from October 1986-April 1996, issued May 1996, "A report from the National Nosocomial Infections Surveillance (NNIS) System." Am. J. Infect. Control. 5:380-388], approximately 29% of 50,574 S. aureus nosocomial infections from 1987 to 1997 were resistant to the β-lactam antibiotics (e.g., oxacillin, nafcillin, methicillin), and the percentage of MRSA strains among U.S. hospitals reached approximately 40% by the end of the same period. At the University of Maryland Medical Center, >50% of all S. aureus blood isolates are now methicillin resistant. In this setting, there is great concern about the possible emergence of methicillin-resistant/multi-drug resistant S. aureus strains which are vancomycin resistant, and which would be essentially untreatable. Although overt resistance to vancomycin has not yet been documented in clinical isolates, there have been several reports of clinical infections with S. aureus strains having intermediate resistance to vancomycin (MICs=8 μg/ml), which suggests that untreatable staphylococcal infections may not be too far away [Tenover, F. C., and R. P. Gaynes. 2000]. Given the virulence of S. aureus, the emergence of such untreatable strains would be devastating and have a major impact on the way in which medicine is practiced in this country. 
Staphylococcal species, including MDRSA, are common colonizers of the human nose; in one community-based study, 35% of children and 28% of their guardians had nasal colonization (Shopsin B, et al, "Prevalence of methicillin-resistant and methicillin-susceptible Staphylococcus aureus in the community." J Infect Dis 2000; 182:359-62). Persons who are nasally colonized with MRSA have an increased risk of developing serious systemic infections with this microorganism, and, in particular, colonization or prior infection with MDRSA significantly increases the risk of subsequent bacteremia with MDRSA (Roghmann M C, "Predicting methicillin resistance and the effect of inadequate empiric therapy on survival in patients with Staphylococcus aureus bacteremia." Arch Intern Med 2000; 160:1001-4). As seen with VRE, the rate of colonization of persons with MDRSA on a unit (the colonization pressure) significantly increases the risk of acquisition of MDRSA for other patients on the unit (Merrer J, et al, ""Colonization pressure" and risk of acquisition of methicillin-resistant Staphylococcus aureus in a medical intensive care unit." Infect Control Hosp Epidemiol 2000; 21:718-23). Multi-Drug Resistant Pseudomonas aeruginosa Pseudomonas aeruginosa is a highly virulent gram-negative bacterial species that is responsible for bacteremia, wound infections, pneumonia, and urinary tract infections. Increasing problems with multi-antibiotic resistance in Pseudomonas have been noted in hospitals, with particular concern focusing on strains which are generally designated as "Imipenem-resistant Pseudomonas", reflecting the last major antimicrobial agent to which they have become resistant. Many of these strains are resistant to all major antibiotic classes, presenting substantive difficulties in the management of infected patients. As seen with other Gram-negative microorganisms, Pseudomonas strains often emerge as the primary colonizing flora of the posterior pharynx during hospitalization. Strains present in the posterior pharynx, in turn, are more likely to be aspirated into the lungs and cause pneumonia. In this setting, colonization with multi-drug resistant Pseudomonas represents a potentially serious risk factor for the development of multi-drug resistant Pseudomonas pneumonia. Bacteriophage Bacteriophage have been used therapeutically for much of this century. Bacteriophage, which derive their name from the Greek word "phago" meaning "to eat" (hence "bacteria eaters"), were discovered independently by Twort and by D'Herelle in the first part of the twentieth century. Early enthusiasm led to their use as both prophylaxis and therapy for diseases caused by bacteria. However, the results from early studies to evaluate bacteriophage as antimicrobial agents were variable, due to uncontrolled study designs and the inability to standardize reagents. Later, in well-designed and controlled studies, it was concluded that bacteriophage were not useful as antimicrobial agents (Pyle, N. J. (1936), J. Bacteriol. 12:245-61; Colvin, M. G. (1932), J. Infect. Dis. 51:17-29; Boyd et al. (1944), Trans. R. Soc. Trop. Med. Hyg. 37:243-62). This initial failure of phage as antibacterial agents may have been due to the failure to select for phage that demonstrated high in vitro lytic activity prior to in vivo use. 
For example, the phage employed may have had little or no activity against the target pathogen, may have been used against bacteria that were resistant due to lysogenization, or the phage itself might have been lysogenic for the target bacterium (Barrow, et al. (1997), "Bacteriophage therapy and prophylaxis: rediscovery and renewed assessment of potential." Trends in Microbiology, 5:268-71). However, with a better understanding of the phage-bacterium interaction and of bacterial virulence factors, it was possible to conduct studies which demonstrated the in vivo anti-bacterial activity of bacteriophage (Asheshov, et al. (1937), Lancet, 1:319-20; Ward, W. E. (1943), J. Infect. Dis., 72:172-6; Lowbury, et al. (1953), J. Gen. Microbiol., 9:524-35). In the U.S. during the 1940s, Eli Lilly commercially manufactured six phage products for human use, including preparations targeted towards staphylococci, streptococci and other respiratory pathogens. With the advent of antibiotics, the therapeutic use of phage gradually fell out of favor in the U.S. and Western Europe, and little subsequent research was conducted. However, in the 1970s and 1980s there were reports of bacteriophage therapy continuing to be utilized in Eastern Europe, most notably in Poland and the former Soviet Union.

Phage therapy has been used in the former Soviet Union and Eastern Europe for over half a century, with research and production centered at the Eliava Institute of Bacteriophage in Tbilisi, in what is now the Republic of Georgia. The international literature contains several hundred reports on phage therapy, with the majority of the publications coming from researchers in the former Soviet Union and eastern European countries. To give but a few examples, phages have been reported to be effective in treating (i) skin and blood infections caused by Pseudomonas, Staphylococcus, Klebsiella, Proteus and E. coli [Cislo, M., M. Dabrowski, B. Weber-Dabrowska, and A. Woyton, 1987, "Bacteriophage treatment of suppurative skin infections," Archivum Immunol. Therapiae Experimental, 35(2):175-183; Slopek, S., I. Durlakowa, B. Weber-Dabrowska, A. Kucharewicz-Krukowska, M. Dabrowski, and R. Bisikiewicz, 1983, "Results of bacteriophage treatment of suppurative bacterial infections. I. General evaluation of the results," 31:267-291; Slopek, S., B. Weber-Dabrowska, M. Dabrowski, and A. Kucharewicz-Krukowska, 1987, "Results of bacteriophage treatment of suppurative bacterial infections in the years 1981-1986," 35:569-83], (ii) staphylococcal lung and pleural infections [Meladze, G. D., M. G. Mebuke, N. S. Chkhetia, N. I. Kiknadze, G. G. Koguashvili, I. I. Timoshuk, N. G. Larionova, and G. K. Vasadze, 1982, "The efficacy of staphylococcal bacteriophage in treatment of purulent diseases of lungs and pleura," Grudnaya Khirurgia, 1:53-56 (in Russian, summary in English)], (iii) P. aeruginosa infections in cystic fibrosis patients [Shabalova, I. A., N. I. Karpanov, V. N. Krylov, T. O. Sharibjanova, and V. Z. Akhverdijan, "Pseudomonas aeruginosa bacteriophage in treatment of P. aeruginosa infection in cystic fibrosis patients," abstr. 443, In: Proceedings of IX International Cystic Fibrosis Congress, Dublin, Ireland], (iv) neonatal sepsis [Pavlenishvili, I., and T. Tsertsvadze, 1985, "Bacteriophage therapy and enterosorbtion in treatment of sepsis of newborns caused by gram-negative bacteria," In: abstracts, p. 104, Prenatal and Neonatal Infections, Toronto, Canada], and (v) surgical wound infections [Peremitina, L. D., E. A. Berillo, and A. G. Khvoles, 1981, "Experience in the therapeutic use of bacteriophage preparations in suppurative surgical infections." Zh. Mikrobiol. Epidemiol. Immunobiol., 9:109-110 (in Russian)]. Several reviews of the therapeutic use of phages were published during the 1930s-40s [Eaton, M. D., and S. Bayne-Jones, 1934, "Bacteriophage therapy: review of the principles and results of the use of bacteriophage in the treatment of infections," J. Am. Med. Assoc., p. 103; Krueger, A. P., and E. J. Scribner, 1941, "The bacteriophage: its nature and its therapeutic use," J. Am. Med. Assoc., p. 116] and more recently [Barrow, P. A., and J. S. Soothill, 1997, "Bacteriophage therapy and prophylaxis: rediscovery and renewed assessment of potential," Trends in Microbiol., 5(7):268-271; Lederberg, J., 1996, "Smaller fleas . . . ad infinitum: therapeutic bacteriophage," Proc. Natl. Acad. Sci. USA, 93:3167-3168]. In a paper published in the Journal of Infection (Alisky, J., K. Iczkowski, A. Rapoport, and N. Troitsky, 1998, "Bacteriophages show promise as antimicrobial agents," J. Infect., 36:5-15), the authors reviewed Medline citations (published during 1966-1996) of the therapeutic use of phages in humans. There were twenty-seven papers from Britain, the U.S.A., Poland and the Soviet Union, and the authors found that the overall reported success rate for phage therapy was in the range of 80-95%.

There are several British studies describing controlled trials of bacteriophage raised against specific pathogens in experimentally infected animal models such as mice and guinea pigs (see, e.g., Smith, H. W., and M. B. Huggins, "Successful treatment of experimental Escherichia coli infections in mice using phages: its general superiority over antibiotics," J. Gen. Microbiol., 128:307-318 (1982); Smith, H. W., and M. B. Huggins, "Effectiveness of phages in treating experimental E. coli diarrhea in calves, piglets and lambs," J. Gen. Microbiol., 129:2659-2675 (1983); Smith, H. W., and R. B. Huggins, "The control of experimental E. coli diarrhea in calves by means of bacteriophage," J. Gen. Microbiol., 133:1111-1126 (1987); Smith, H. W., R. B. Huggins and K. M. Shaw, "Factors influencing the survival and multiplication of bacteriophages in calves and in their environment," J. Gen. Microbiol., 133:1127-1135 (1987)). These trials measured objective criteria such as survival rates. Efficacy against Staphylococcus, Pseudomonas and Acinetobacter infections has also been observed. These studies are described in more detail below.

One U.S. study concentrated on improving the bioavailability of phage in live animals (Merril, C. R., B. Biswas, R. Carlton, N. C. Jensen, G. J. Greed, S. Zullo, S. Adhya, "Long-circulating bacteriophage as antibacterial agents," Proc. Natl. Acad. Sci. USA, 93:3188-3192 (1996)). Reports from the U.S. relating to bacteriophage administration for diagnostic purposes have indicated that phage have been safely administered to humans in order to monitor the humoral immune response in adenosine deaminase deficient patients (Ochs, et al. (1992), "Antibody responses to bacteriophage phi X174 in patients with adenosine deaminase deficiency." Blood, 80:1163-71) and to analyze the importance of cell associated molecules in modulating the immune response in humans (Ochs, et al. (1993), "Regulation of antibody responses: the role of complement and adhesion molecules." Clin. Immunol. Immunopathol., 67:S33-40).
Additionally, Polish, Georgian, and Russian papers describe experiments where phage was administered systemically, topically or orally to treat a wide variety of antimicrobial resistant pathogens (see, e.g., Shabalova, I. A., N. I. Karpanov, V. N. Krylov, T. O. Sharibjanova, and V. Z. Akhverdijan, "Pseudomonas aeruginosa bacteriophage in treatment of P. aeruginosa infection in cystic fibrosis patients," Abstr. 443, In: Proceedings of IX International Cystic Fibrosis Congress, Dublin, Ireland; Slopek, S., I. Durlakowa, B. Weber-Dabrowska, A. Kucharewicz-Krukowska, M. Dabrowski, and R. Bisikiewicz, 1983, "Results of bacteriophage treatment of suppurative bacterial infections. I. General evaluation of the results." Archivum Immunol. Therapiae Experimental, 31:267-291; Slopek, S., B. Weber-Dabrowska, M. Dabrowski, and A. Kucharewicz-Krukowska, 1987, "Results of bacteriophage treatment of suppurative bacterial infections in the years 1981-1986," Archivum Immunol. Therapiae Experimental, 35:569-83).

Infections treated with bacteriophage included osteomyelitis, sepsis, empyema, gastroenteritis, suppurative wound infection, pneumonia and dermatitis. Pathogens involved included Staphylococci, Streptococci, Klebsiella, Shigella, Salmonella, Pseudomonas, Proteus and Escherichia. These articles reported a range of success rates for phage therapy between 80-95%, with only rare reversible allergic or gastrointestinal side effects. These results indicate that bacteriophage may be a useful adjunct in the fight against bacterial diseases. However, this literature does not describe, in any way anticipate, or otherwise suggest the use of bacteriophage to modify the composition of colonizing bacterial flora in humans, thereby reducing the risk of subsequent development of active infections.

According to one embodiment of the present invention, a lytic bacteriophage which infects Enterococcus, wherein no more than 30% of the Enterococcus strains in a collection of more than 100 genetically diverse vancomycin resistant Enterococcus (VRE) strains are resistant to infection by said bacteriophage, is disclosed. The bacteriophage may produce lytic infection in at least 200 genetically diverse VRE isolates. The bacteriophage preparation may be administered orally to patients who are colonized with VRE and who are admitted to a medical facility. These patients may include, inter alia, leukemia patients, lymphoma patients, transplant patients and AIDS patients. The bacteriophage may also be administered to all patients admitted to the medical facility. Administration of this bacteriophage will reduce or eliminate colonization with VRE, thereby reducing the risk that these patients will subsequently develop serious systemic infections with this highly resistant microorganism. Reduction or elimination of colonization will also decrease the VRE "colonization pressure" in the hospital or specific units of the hospital, thereby reducing the risk that VRE will be acquired by persons who currently are neither colonized nor infected with the pathogen.

In one embodiment, the present invention provides a method for reducing the risk of bacterial infection or sepsis in a susceptible patient by treating the susceptible patient with a pharmaceutical composition containing bacteriophage of one or more strains which produce lytic infections in pathogenic bacteria. Preferably, treatment of the patient reduces the level of colonization with pathogenic bacteria susceptible to the bacteriophage by at least one log.
In a typical embodiment, the susceptible patient is an immunocompromised patient selected from the group consisting of leukemia patients, lymphoma patients, carcinoma patients, sarcoma patients, allogeneic transplant patients, congenital or acquired immunodeficiency patients, cystic fibrosis patients, and AIDS patients. In a preferred mode, the patients treated by this method are colonized with the pathogenic bacteria subject to infection by said bacteriophage.

In a preferred embodiment of this invention, the risk of infection is reduced by administering a composition comprising bacteriophage which produce lytic infections in pathogenic bacteria selected from vancomycin-resistant enterococcus (VRE), pneumococcal species, methicillin-resistant Staphylococcus aureus, multi-drug resistant Staphylococcus aureus (MDRSA), multi-drug resistant Pseudomonas species, Neisseria sp., Hemophilus sp., Proteus sp., Klebsiella sp. and Escherichia coli. Preferably, the pathogenic bacteria are selected from VRE, MDRSA, and multi-drug resistant Pseudomonas.

In a preferred embodiment of this invention, the bacteriophage composition is in a form selected from a parenteral composition, an oral tablet, capsule or liquid, a nasal aerosol, a throat wash, a toothpaste, and a topical ointment. Preferably, the pharmaceutical composition contains a plurality of bacteriophage strains. More preferably, the pharmaceutical composition contains bacteriophage strains which produce lytic infections in pathogenic bacteria of a plurality of bacterial strains, or bacteriophage strains which produce lytic infections in pathogenic bacteria of a plurality of bacterial species.

In a preferred embodiment, the present invention provides a method for reducing the risk of bacterial infection or sepsis in a patient having a wound selected from an ulcer, a laceration, a deep penetrating wound and a surgical wound, by treating the patient with a pharmaceutical composition containing bacteriophage of one or more strains which produce lytic infections in pathogenic bacteria capable of infecting these wounds. Preferably, the composition is a topical ointment, an irrigation solution or a component of a wound dressing.

In another embodiment, this invention provides a method for reducing the incidence of infection by selected bacteria in a medical facility by administering to patients who are admitted to said medical facility a bacteriophage preparation which reduces the colonization level by the selected bacteria in patients at risk for infection by the selected bacteria. In a typical embodiment, the patients at risk for infection are selected from the group consisting of leukemia patients, lymphoma patients, carcinoma patients, sarcoma patients, allogeneic transplant patients, congenital or acquired immunodeficiency patients, cystic fibrosis patients, and AIDS patients. In another embodiment, the bacteriophage preparation is administered to substantially all patients admitted to said medical facility. In a preferred embodiment, the bacteriophage preparation is administered to substantially all patients colonized with the selected bacteria who are admitted to said medical facility. In another preferred embodiment, the selected bacteria is VRE, MDRSA, or multi-drug resistant Pseudomonas.

According to another embodiment of the present invention, a bacteriophage preparation which reduces the number of VRE in experimentally infected mice by at least 1 log is disclosed.
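The quantitative thresholds recited in these embodiments (a panel of more than 100 genetically diverse strains, no more than 30% of which are resistant to the phage, and a reduction in colonization of at least 1 log) can be checked mechanically. The following minimal Python sketch is illustrative only; the panel size, counts, and function names are hypothetical and are not taken from the specification.

```python
import math

def meets_host_range_criterion(susceptible_flags, max_resistant_fraction=0.30, min_panel_size=100):
    """susceptible_flags: one boolean per strain in the panel (True = lysed by the
    candidate phage). Returns True if the panel exceeds the minimum size and no
    more than `max_resistant_fraction` of strains are resistant."""
    if len(susceptible_flags) <= min_panel_size:
        return False  # the embodiment calls for more than 100 genetically diverse strains
    resistant = sum(1 for lysed in susceptible_flags if not lysed)
    return resistant / len(susceptible_flags) <= max_resistant_fraction

def log_reduction(control_cfu_per_g, treated_cfu_per_g):
    """Log10 reduction in colonization relative to an untreated control."""
    return math.log10(control_cfu_per_g) - math.log10(treated_cfu_per_g)

# Hypothetical example: a 112-strain panel with 24 resistant strains, and fecal
# counts of 1e6 CFU/g (control) vs 5e4 CFU/g (phage-treated).
panel = [True] * 88 + [False] * 24
print(meets_host_range_criterion(panel))   # True (24/112, about 21% resistant)
print(log_reduction(1e6, 5e4) >= 1.0)      # True (about a 1.3 log reduction)
```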
According to another embodiment of the present invention, a lytic bacteriophage which infects Staphylococcus aureus, wherein no more than 30% of the Staphylococcal strains in a collection of more than 100 genetically diverse multi-drug resistant Staphylococcus aureus (MDRSA) strains are resistant to infection by said bacteriophage, is disclosed. The bacteriophage may produce lytic infection in at least 200 genetically diverse MDRSA isolates. The bacteriophage preparation may be administered via nasal spray to individuals who are nasally colonized with MDRSA, particularly to a subpopulation made up of all such individuals who are admitted to a medical facility. The bacteriophage may also be administered to all patients admitted to the medical facility. Administration of this bacteriophage will reduce or eliminate colonization with MDRSA, thereby reducing the risk that these patients will subsequently develop serious systemic infections with this highly resistant microorganism. Reduction or elimination of colonization will also decrease the MDRSA "colonization pressure" in the hospital or specific units of the hospital, thereby reducing the risk that MDRSA will be acquired by persons who currently are neither colonized nor infected with the pathogen.

According to one embodiment of the present invention, a lytic bacteriophage which infects Pseudomonas aeruginosa, wherein no more than 30% of the Pseudomonas strains in a collection of more than 100 genetically diverse multi-antibiotic resistant Pseudomonas aeruginosa strains are resistant to infection by said bacteriophage, is disclosed. The bacteriophage may produce lytic infection in at least 200 genetically diverse Pseudomonas aeruginosa isolates. The bacteriophage preparation may be administered by mouth wash or gargle to individuals who are colonized with multi-drug resistant Pseudomonas, particularly to a subpopulation made up of all such individuals who are admitted to a medical facility. The bacteriophage may also be administered to all patients admitted to the medical facility. Administration of this bacteriophage will reduce or eliminate colonization with multi-drug resistant Pseudomonas strains, thereby reducing the risk that these patients will subsequently develop serious systemic infections with this highly resistant microorganism. This may also be expected to reduce the "colonization pressure" of this microorganism, thereby reducing the risk that it will be acquired by persons who are currently neither colonized nor infected.

Bacteriophage technology can be of value in managing a large variety of bacterial infections because: (i) bacteriophages are highly specific and very effective in lysing targeted pathogenic bacteria, (ii) bacteriophages are absolutely specific for prokaryotes and do not affect humans or animals, (iii) bacteriophages are safe, as underscored by their extensive clinical use in Eastern Europe and the former Soviet Union and by the commercial sale of phages in the 1940s in the United States, (iv) phage preparations can rapidly be modified to combat the emergence of newly arising bacterial threats, and (v) phage production is expected to be cost-effective for large-scale applications in a variety of medical settings. Of particular relevance, bacteriophage will not kill non-pathogenic, "normal flora" bacteria, thereby retaining the "colonization resistance" of reservoirs such as the human intestinal tract, the nose, and the posterior pharynx.
Accordingly, the present invention envisions using lytic phages (in combination with antibiotics or alone) to prophylactically or therapeutically eliminate various bacteria capable of causing diseases of the gastrointestinal, genitourinary, and respiratory tracts, the skin, the oral cavity, and the bloodstream. In accordance with this invention, therapeutic phages can be administered in a number of ways and in various formulations, including: (i) orally, in tablets or liquids, (ii) locally, in tampons, rinses or creams, (iii) as aerosols, and (iv) intravenously.

One benefit of bacteriophage therapy when compared to antibiotic therapy relates to the relative specificity of the two therapeutic modalities. Bacteriophage are specific for particular bacterial strains or species, while antibiotics typically are broadly effective against a large multiplicity of bacterial species or genera. It is well known that normal individuals are colonized with innocuous bacteria, and this colonization may be beneficial to the colonized individual (see U.S. Pat. No. 6,132,710, incorporated herein by reference). Antibiotic therapy can severely alter colonization or even eliminate beneficial colonization completely. This may have adverse effects, such as the outgrowth of opportunistic species such as Clostridium difficile, which then leads to an antibiotic-associated colitis. In contrast, bacteriophage therapy specifically affects the bacterial strains that are sensitive or susceptible to lytic infection by the particular bacteriophage in the therapeutic composition, but leaves other (innocuous or beneficial) bacteria unaffected. Thus, bacteriophage therapy is preferable for prophylactic treatment where alteration of the normal microflora should be minimized.

In a preferred mode of this invention, phage technology is focused on two important human pathogens, VRE and MDRSA, and the value of VRE- and MDRSA-specific lytic phages in different settings: (i) oral administration of phages for prophylaxis against septicemia, (ii) local application of phages for prophylaxis/treatment of skin and wound infections, (iii) intravenous administration of phages for therapy of septicemia, and (iv) the use of aerosolized phages against respiratory pathogens.

VRE infection has become a particularly serious problem among immunocompromised and/or seriously ill patients in intensive care units, cancer centers and organ transplant units. Since VRE are resistant to all currently used antimicrobials, alternate approaches to reducing or eliminating VRE gastrointestinal colonization in immunocompromised patients must be found in order to reduce the prevalence of VRE bacteremia. Oral administration of lytic bacteriophage active against VRE is one such approach. The general rule is that patients first become colonized by pathogenic bacteria present in their immediate environment before developing illness due to those bacteria. Serious VRE infections, including septicemia, usually are preceded by intestinal colonization with the infecting organisms; therefore, the risk of septicemia is likely to be decreased by reducing colonization prior to periods when patients are severely neutropenic or otherwise immunosuppressed (i.e., reducing intestinal colonization may also reduce the risk of bloodstream invasion). The present inventors have discovered that certain strains of bacteriophage are particularly effective at lysing VRE.
By administering these VRE-active bacteriophage to persons colonized with VRE, it is possible to substantially reduce or even eliminate VRE from the colonized person. Thus, the present invention provides strains of phage which are particularly effective against VRE, methods for obtaining additional strains of VRE-active phage, methods for treating patients colonized with VRE by administering VRE-active phage, and methods of reducing the nosocomial infection rate by administering VRE-active phage in vivo, ex vivo, or both, to selected locations, areas, objects and/or persons.

Analogous approaches using bacteriophage targeted to other pathogenic bacteria are also contemplated by this invention. S. aureus phage preparations can reduce contamination of skin and wounds with S. aureus, which in turn may prevent the development of serious surgical site infections and septicemia. Phage active against Pseudomonas species can be used to reduce colonization that threatens to develop into pneumonia in immunocompromised patients or in individuals suffering from cystic fibrosis.

Isolation of Bacteriophage

The present inventors have isolated several lytic phages active against genetically diverse (as assessed by pulsed field gel electrophoresis and/or arbitrarily primed polymerase chain reaction or other nucleic acid amplification techniques) VRE strains. In vitro susceptibility tests involving 234 VRE strains (184 E. faecium, 41 E. faecalis and 6 E. gallinarum isolated from patients at the University of Maryland and the Baltimore VA Medical Center, and 3 ATCC strains) showed that the Intralytix phage collection was able to cumulatively lyse all VRE strains in the collection, with one particular phage being able to lyse 95% of VRE strains. Furthermore, mice whose gastrointestinal tracts were colonized with VRE under selective pressure of antibiotic administration were orogastrically administered VRE-active phages, which resulted in a 1 to 3 log reduction of VRE gastrointestinal colonization compared to a control group of animals not given phage. This occurred within a 48 to 72 hour time frame. No side effects due to the phage were observed.

Bacteriophage strains may be isolated by procedures analogous to those used to isolate the VRE-active strains described herein. Suitable bacteriophage may be isolated from any sample containing bacteriophage, which typically are found in association with their host bacteria. Thus, any source that might be expected to contain VRE is suitable for use as a source of VRE-active bacteriophage. Such samples include fecal, urine, or sputum samples from patients, particularly patients undergoing acute or prophylactic antibiotic therapy, patients in intensive care units, or immunocompromised patients. Such patients may include, but are not limited to, burn patients, trauma patients, patients receiving bone marrow and/or organ transplants, cancer patients, patients with congenital or acquired immunodeficiency diseases, dialysis patients, liver disease patients, and patients with acute or chronic renal failure. Suitable samples also include body fluids such as ascites, pleural effusions, joint effusions, abscess fluids, and material obtained from wounds. While humans are the primary reservoir for VRE, the organism also can be readily found in the immediate environment of infected/colonized patients, such as bedrails, bed sheets, furniture, etc. (Bodnar, U. R., et al. (1996), "Use of in-house studies of molecular epidemiology and full species identification for controlling spread of vancomycin resistant Enterococcus faecalis isolates," J. Clin. Microbiol., 34:2129-32; Bonten, M. J. M., et al. (1996), "Epidemiology of colonization of patients and the environment with vancomycin resistant enterococci." Lancet, 348:1615-19; Noskin, G. A. (1995), "Recovery of vancomycin resistant enterococci on fingertips and environmental surfaces." Infect. Control Hosp. Epidemiol., 16:577-81). Consequently, samples for bacteriophage isolation may also be obtained from nonpatient sources, including sewage, especially sewage streams near intensive care units or other hospital venues, or by swab in hospital areas associated with risk of nosocomial infection, such as intensive care units. Other suitable sampling sites include nursing homes, rest homes, military barracks, dormitories, classrooms, and medical waste facilities. Phages also can be isolated from rivers and lakes, wells, and water tables, as well as other water sources (including salt water). Preferred sampling sites include water sources near the likely sites of contamination listed above.

Suitable methods for isolating pure bacteriophage strains from a bacteriophage-containing sample are well known, and such methods may be adapted by the skilled artisan in view of the guidance provided herein. Isolation of VRE-active bacteriophage from suitable samples typically proceeds by mixing the sample with nutrient broth, inoculating the broth with a host bacterial strain, and incubating to enrich the mixture with bacteriophage that can infect the host strain. An Enterococcus sp. strain will be used as the host strain, preferably a VRE strain. After the incubation for enrichment, the mixture is filtered to remove bacteria, leaving lytic bacteriophage in the filtrate. Serial dilutions of the filtrate are plated on a lawn of VRE, and VRE-active phage infect and lyse neighboring bacteria. However, the agar limits the physical spread of the phage throughout the plate, resulting in small, visibly clear areas called plaques on the plate where bacteriophage has destroyed VRE within the confluent lawn of VRE growth. Since one plaque with a distinct morphology represents one phage particle that replicated in VRE within that area of the bacterial lawn, the purity of a bacteriophage preparation can be ensured by removing the material in that plaque with a pasteur pipette (a "plaque pick") and using this material as the inoculum for further growth cycles of the phage. The bacteriophage produced in such cycles represent a single strain or "monophage." The purity of a phage preparation (including confirmation that it is a monophage and not a polyvalent phage preparation) is assessed by a combination of electron microscopy, SDS-PAGE, DNA restriction digest and analytical ultracentrifugation. In addition, each phage is uniquely identified by its DNA restriction digest profile, protein composition, and/or genome sequence. Individual VRE-active bacteriophage strains (i.e., monophages) are propagated as described for enrichment culture above, and then tested for activity against multiple VRE strains to select broad-spectrum VRE-active bacteriophage. Efforts are made to select phages that (i) are lytic, (ii) are specific to enterococci, (iii) lyse more than 70% of the VRE strains in our VRE strain collection, and/or (iv) lyse VRE strains resistant to other VRE phages previously identified.
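As an illustration of how the selection criteria above might be applied to a phage-by-strain susceptibility matrix, the following Python sketch keeps monophages that individually lyse more than 70% of the panel (criterion (iii)) and then greedily adds phages that lyse strains still resistant to everything already selected (criterion (iv)). It is a sketch under assumed data structures; the phage and strain names are hypothetical and no particular laboratory workflow is implied.

```python
def select_monophages(panel, susceptibility, min_single_coverage=0.70):
    """panel: set of VRE strain IDs in the collection.
    susceptibility: dict mapping phage name -> set of strain IDs it lyses.
    Returns (selected phages, fraction of the panel lysed by the selection)."""
    # criterion (iii): keep phages that individually lyse >70% of the panel
    selected = [p for p, hits in susceptibility.items()
                if len(hits & panel) / len(panel) > min_single_coverage]
    covered = set().union(*(susceptibility[p] for p in selected)) if selected else set()
    # criterion (iv): add phages that lyse strains resistant to everything selected so far
    while (covered & panel) != panel:
        best = max(susceptibility, key=lambda p: len((susceptibility[p] & panel) - covered))
        gain = (susceptibility[best] & panel) - covered
        if not gain:
            break  # remaining strains are resistant to every phage in hand
        selected.append(best)
        covered |= gain
    return selected, len(covered & panel) / len(panel)

# Hypothetical data: one broad-spectrum phage plus two narrow-host-range phages.
panel = {f"VRE{i}" for i in range(1, 11)}
susceptibility = {
    "phage_A": {f"VRE{i}" for i in range(1, 9)},   # lyses 8 of 10 strains
    "phage_E2": {"VRE9"},
    "phage_E3": {"VRE10"},
}
print(select_monophages(panel, susceptibility))
# (['phage_A', 'phage_E2', 'phage_E3'], 1.0) with these hypothetical inputs
```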
It is also possible to select appropriate phages based upon the sequences of DNA or RNA encoding proteins involved in the binding and/or entry of phage into their specific host, or based upon the amino acid sequences or antigenic properties of such proteins.

Quantities of broad-spectrum VRE-active bacteriophage needed for the therapeutic uses described below may be produced by culture on a suitable host strain in the manner described above for enrichment culture. When performing an enrichment culture to produce bacteriophage for therapeutic use, a host strain is selected based on its ability to give a maximum yield of phage, as determined in pilot experiments with several different host VRE strains. If two or more host strains give similar yields, the strain most sensitive to antibiotics is selected.

The techniques described herein for isolation of VRE monophages are applicable to isolation of bacteriophages that are lytic for other pathogenic bacteria. It is within the skill in the art to substitute host strains of other bacteria in the methods described herein in order to isolate phage specific for those bacteria. Starting the phage isolation process with samples selected from environments that also contain bacteria of the host species will accelerate the process.

Patient Population

Any patient who is at risk for colonization with VRE or who has proven VRE colonization is a candidate for treatment according to the method of this invention. Intestinal colonization with VRE is relatively common in institutionalized patients undergoing antimicrobial therapy. In studies conducted in 1993-94, 17-19% of a random sample of all patients at the University of Maryland Hospital were colonized with VRE (Morris, et al. (1995), "Enterococci resistant to multiple antimicrobial agents including vancomycin." Ann. Int. Med., 123:250-9), while in an identical study conducted in 1996 this figure increased to 23.8%. Once colonized with VRE, a patient may remain colonized for life; however, once off antimicrobial therapy, VRE colonization may drop to levels not detectable in routine stool culture. Colonized persons who subsequently become immunocompromised, though, are at risk for developing bacteremia (Edmond, et al., 1995; Tornieporth, et al. (1996), "Risk factors associated with vancomycin resistant Enterococcus faecium colonization or infection in 145 matched case patients and control patients." Clin. Infect. Dis., 23:767-72).

VRE infection is a particularly serious problem among immunocompromised and/or seriously ill patients in cancer centers, intensive care units, and organ transplant centers. In case control studies, VRE has been linked to antimicrobial use and severity of illness (as measured by APACHE score) (Handwerger, et al. (1993), "Nosocomial outbreak due to Enterococcus faecium highly resistant to vancomycin, penicillin and gentamicin." Clin. Infect. Dis., 16:750-5; Montecalvo, et al. (1996), "Bloodstream infections with vancomycin resistant enterococci." Arch. Intern. Med., 156:1458-62; Papanicolaou, et al. (1996), "Nosocomial infections with vancomycin-resistant Enterococcus faecium in liver transplant patients: Risk factors for acquisition and mortality." Clin. Infect. Dis., 23:760-6; Roghmann, et al. (1997), "Recurrent vancomycin resistant Enterococcus faecium bacteremia in a leukemic patient who was persistently colonized with vancomycin resistant enterococci for two years." Clin. Infect. Dis., 24:514-5).
Investigators at the University of Maryland at Baltimore and the Baltimore VA Medical Center have demonstrated by pulsed field gel electrophoresis that VRE strains causing bacteremia in cancer patients are almost always identical to those that colonize the patient's gastrointestinal tract.

Three categories of immunocompromised patients subjected to prolonged antimicrobial administration in an institutionalized setting, and who would be susceptible to VRE gastrointestinal colonization, are: 1) leukemia (30,200 patients per year in the U.S.) and lymphoma patients (64,000 patients per year in the U.S.), 2) transplant patients (20,961 per year in the U.S.), and 3) AIDS patients (66,659 patients per year in the U.S.). The total number of patients in the immunocompromised category is thus approximately 181,800 per year in the U.S. Pfundstein, et al., found that the typical rate of enterococcal gastrointestinal colonization among renal and pancreas transplant patients receiving antibiotics in an institutional setting was 34% (38/102), with 4 (11%) of these isolates being VRE (Pfundstein, et al. (1999), "A randomized trial of surgical antimicrobial prophylaxis with and without vancomycin in organ transplant patients." Clin. Transplant., 13:245-52). Therefore the rate of gastrointestinal colonization by VRE in this immunocompromised population would be 0.34×0.11=0.04, or about 4%, of the total patient population. One can therefore estimate VRE gastrointestinal colonization to be 181,800×0.04=7,272 patients per year.

Formulation and Therapy

According to this invention, VRE-active bacteriophage are preferably formulated in pharmaceutical compositions containing the bacteriophage and a pharmaceutically acceptable carrier, and can be stored as a concentrated aqueous solution or lyophilized powder preparation. Bacteriophage may be formulated for oral administration by resuspending a purified phage preparation in an aqueous medium, such as deionized water, mineral water, 5% sucrose solution, glycerol, dextran, polyethylene glycol, sorbitol, or such other formulations that maintain phage viability and are non-toxic to humans. The pharmaceutical composition may contain other components so long as the other components do not reduce the effectiveness of the bacteriophage so much that the therapy is negated. Pharmaceutically acceptable carriers are well known, and one skilled in the pharmaceutical art can easily select carriers suitable for particular routes of administration (Remington's Pharmaceutical Sciences, Mack Publishing Co., Easton, Pa., 1985).

The pharmaceutical compositions containing VRE-active bacteriophage may be administered by the parenteral (subcutaneous, intramuscular, intravenous, intraperitoneal, intrapleural, intravesicular or intrathecal), topical, oral, rectal, inhalation, ocular, otic, or nasal route, as necessitated by choice of drug and disease. Injection of specific lytic phages directly into the bloodstream can eliminate or significantly reduce the number of targeted bacteria in the blood. If, after either oral or local administration, phages get into the bloodstream in sufficient numbers to eliminate bacteria from the bloodstream, septicemia may be treated by administering phages orally (or locally). If the phages do not get into the bloodstream in sufficient numbers by these routes, direct i.v. injection of phages can be used to treat bloodstream infections caused by VRE and other pathogenic bacteria, and can provide an urgently needed means for dealing with currently untreatable septicemic infections.

Dose and duration of therapy will depend on a variety of factors, including the patient's age, the patient's weight, and tolerance of the phage. Bacteriophage may be administered to patients in need of the therapy provided by this invention by oral administration. Based on previous human experience in Europe, a dose of phage between 10⁷ and 10¹¹ PFU will be suitable in most instances. The phage may be administered orally in, for example, mineral water, optionally with 2.0 grams of sodium bicarbonate added to reduce stomach acidity. Alternatively, sodium bicarbonate may be administered separately to the patient just prior to dosing with the phage. Phages also may be incorporated in a tablet or capsule which will enable transfer of phages through the stomach with no reduction of phage viability due to gastric acidity, and release of fully active phages in the small intestine. The frequency of dosing will vary depending on how well the phage is tolerated by the patient and how effective a single versus multiple dose is at reducing VRE gastrointestinal colonization. The dose of VRE-active bacteriophage and the duration of therapy for a particular patient can be determined by the skilled clinician using standard pharmacological approaches in view of the above factors. The response to treatment may be monitored by analysis of blood or body fluid levels of VRE, or of VRE levels in relevant tissues, or by monitoring the disease state in the patient. The skilled clinician will adjust the dose and duration of therapy based on the response to treatment revealed by these measurements.

One of the major concerns about the use of phages in clinical settings is the possible development of bacterial resistance against them. However, as with antimicrobial resistance, the development of resistance to phages takes time. The successful use of phages in clinical settings will require continual monitoring for the development of resistance, and, when resistance appears, the substitution of other phages to which the bacterial mutants are not resistant. In general, phage preparations may be constructed by mixing several separately grown and well-characterized lytic monophages, in order to (i) achieve the desired, broad target activity of the phage preparation, (ii) ensure that the preparation has stable lytic properties, and (iii) minimize the development of resistance against the preparation.

The development of neutralizing antibodies against a specific phage also is possible, especially after parenteral administration (it is less of a concern when phages are administered orally and/or locally). However, the development of neutralizing antibodies may not pose a significant obstacle in the proposed clinical settings, because the kinetics of phage action is much faster than the host production of neutralizing antibodies. For VRE, for example, phages will be used for just a few days, sufficient to reduce VRE colonization during the time period when immunocompromised patients are most susceptible to the development of potentially fatal VRE septicemia, but not long enough for phage-neutralizing antibodies to develop. If the development of antiphage antibodies is a problem, several strategies can be used to address this issue.
For example, different phages having the same spectrum of activity (but a different antigenic profile) may be administered at different times during the course of therapy. On a more sophisticated level, therapeutic phages may be genetically engineered to have a broad lytic range and/or to be less immunogenic in humans and animals.

Environmental Therapy

In the 1980s a number of British studies were conducted which demonstrated the efficacy of bacteriophage prophylaxis and therapy in mice and farm animal models. These studies were significant because the titers of the phage preparations administered were significantly less than the bacterial inoculum, indicating in vivo bacteriophage multiplication. For example, Smith et al. (Smith, et al. (1982), "Successful treatment of experimental Escherichia coli infections in mice using phage: its general superiority over antibiotics." J. Gen. Microbiol., 128:307-318) found that intramuscular inoculation of mice with 10⁶ CFU of E. coli with the K1 capsule killed 10/10 mice. However, when mice were simultaneously intramuscularly inoculated with 10⁴ PFU of phage at a separate site, 10/10 mice survived. Smith and coworkers also demonstrated that administration of a mixture of two phages resulted in high levels of protection of calves with diarrhea induced by E. coli with K88 or K99 fimbriae (Smith, et al. (1983), "Effectiveness of phages in treating experimental Escherichia coli diarrhea in calves, piglets and lambs." J. Gen. Microbiol., 129:2659-75; Smith, et al. (1987), "The control of experimental E. coli diarrhea in calves by means of bacteriophage." J. Gen. Microbiol., 133:1111-26; Smith, et al. (1987), "Factors influencing the survival and multiplication of bacteriophages in calves and in their environment." J. Gen. Microbiol., 133:1127-35). If the phage was administered before or at the same time as the E. coli, no deaths occurred and complete protection was attained. Control animals developed watery diarrhea and died within 2 to 5 days. If phage administration was delayed until the onset of diarrhea, protection was not complete, although the severity of infection was greatly reduced and no deaths were observed. Berchieri, et al., found that fewer chicks orally infected with 10⁹ CFU of Salmonella typhimurium died when 10⁹ PFU of specific phage was orally administered soon after initiation of the bacterial infection (Berchieri, et al. (1991), "The activity in the chicken alimentary tract of bacteriophages lytic for Salmonella typhimurium." Res. Microbiol., 142:541-49). They also found that the phage readily spread between the different infected birds.

Environmental applications of phage in health care institutions could be most useful for equipment such as endoscopes, and for environments such as ICUs, which may be potential sources of nosocomial infection due to pathogens such as VRE but which may be difficult or impossible to disinfect. Phage would be particularly useful in treating equipment or environments inhabited by bacterial genera such as Pseudomonas, which may become resistant to commonly used disinfectants. In the Soviet Union there has been a report that application of phage to the hospital environment resulted in killing of targeted bacteria such as Staphylococci and P. aeruginosa within 48-72 hours.
Phage persisted in the environment as long as there were target bacteria present, and upon elimination of the target bacteria, phage became undetectable within 6-8 days (Alavidze, et al., 1988, "Use of specific bacteriophage in the prophylaxis of intrahospital infections caused by P. aeruginosa," in Abstracts, All-Soviet Union Conference "Modern biology at the service of public health," Kiev, Ukraine).

Phage compositions used to disinfect inanimate objects or the environment may be sprayed, painted, or poured onto such objects or surfaces in aqueous solutions with phage titers ranging between 10⁷ and 10¹¹ PFU/ml. Alternatively, phage may be applied by aerosolizing agents that might include dry dispersants which would facilitate distribution of the phage into the environment. Such agents may also be included in the spray if compatible with phage viability and nontoxic in nature. Finally, objects may be immersed in a solution containing phage. The optimal number and timing of applications of phage compositions remains to be determined and would be predicated on the exact usage of such products. Since phage are normally widely present in the environment and are found even in food or drugs, there is minimal safety concern with regard to applying phage preparations to the environment.

As reported above, Smith and Huggins in England found that E. coli-induced diarrhea in calves could be prevented by simply spraying the litter in the calf rooms with an aqueous phage preparation, or even by keeping the calves in uncleaned rooms previously occupied by calves whose E. coli infections had been treated with phage. There is also data from the Soviet Union indicating the efficacy of phage in ridding chicken houses of Staphylococci (Ponomarchuk, et al. (1987), "Strain phage Staphylococci applicable for prophylaxis and therapy of poultry Staphylococcus." Soviet patent N1389287, Dec. 15, 1987).

In the future, application of VRE phage to the environment of farm animals such as chickens or cattle may be necessary to reduce VRE in this setting, if VRE become prevalent in such environments and such animal VRE are capable, upon being consumed in contaminated food, of transiently colonizing the human gastrointestinal tract long enough to transfer antibiotic resistance gene transposons to normal gut flora (Latta, S. (1999), "Debate heats up over antibiotic-resistant foodborne bacteria." The Scientist, 13(14):4-5).

Alternatively, colonization in the farm animals may be reduced by administering bacteriophage to the animals (or treating the animals' environment) using phage that produce lytic infections in target bacteria which colonize the animals. Such unwanted colonization may be a particular problem with intensive agricultural techniques, such as those used for chickens or veal calves. Target organisms include Salmonella sp. and E. coli O157:H7.

Bacteriophage Cocktails

This invention also contemplates phage cocktails which may be custom tailored to the pathogens that are prevalent in a certain situation. Typically, pathogenic bacteria would be initially isolated from a particular source (e.g., a patient or location contaminated with VRE) and susceptibility testing of the pathogens to various bacteriophage strains would be performed, analogous to antimicrobial susceptibility testing.
Once each pathogen's phage susceptibility profile is determined, the appropriate phage cocktail can be formulated from phage strains to which the pathogens are susceptible and administered to the patient. Since phage would often be used in institutional settings where pathogens are resistant to many antimicrobial agents, phage cocktails would often consist of phage lytic for the most prevalent institutional pathogens which, in addition to enterococci, are Staphylococcus aureus, Staphylococcus epidermidis, E. coli and Pseudomonas aeruginosa. Also, since enterococci are often involved in polymicrobial infections along with other gastrointestinal commensals, such as in pelvic wound infections, the approach of therapeutically using cocktails of phage lytic against different bacterial species would be most appropriate. Since phage cocktails would be constructed of phage against institutional pathogens, isolation of such phage would be most successful from the sewage of such institutions. Typically, the phage cocktail will include one or more VRE-active bacteriophage according to this invention.

It may be appropriate to use certain phage cocktails in agricultural settings where there are certain human pathogens, such as Salmonella and Campylobacter, inherent to poultry or livestock and which contaminate the environment of such animals on an ongoing basis. The result is a continuing source of infection by such pathogens. Bacteriophage cocktails may be applied contemporaneously, that is, they may be applied at the same time (e.g., in the same application), or may be applied in separate applications spaced in time such that they are effective at the same time. The bacteriophage may be applied as a single application, periodic applications, or as a continuous application. Other bacteria within the contemplation of the present invention include, inter alia, Campylobacter, E. coli O157:H7, Listeria, and Staphylococcus.

Isolation of VRE

VRE were isolated by standard methods from patients in the surgical intensive care and intermediate care units of the University of Maryland Medical Center in Baltimore. Trypticase Soy Agar supplemented with 5% sheep blood (BBL, Cockeysville, Md.) was used to isolate enterococci from urine, wounds and sterile body fluids. VRE were isolated from stool specimens on Colistin Nalidixic Acid (CNA) agar (Difco Labs, Detroit, Mich.) supplemented with defibrinated sheep blood (5%), vancomycin (10 µg/ml) and amphotericin (1 µg/ml). See Facklam, R. R., and D. F. Sahm, 1995, Enterococcus, In: Manual of Clinical Microbiology, 6th edition, American Society for Microbiology, Washington, D.C., pp. 508-312.

Identification of VRE

Enterococci were identified by esculin hydrolysis and growth in 6.5% NaCl at 45° C. Identification to the species level was done using conventional testing as indicated in Facklam and Collins (Facklam, et al. (1989), "Identification of Enterococcus species isolated from human infections by a conventional method test scheme." J. Clin. Microbiol., 27:731-4).

Antimicrobial Susceptibility Testing of VRE

Antimicrobial susceptibilities to ampicillin, vancomycin, streptomycin, and gentamicin were determined using the E test quantitative minimum inhibitory concentration procedure (AB Biodisk, Solna, Sweden). Quality control strains of E. faecium (ATCC 29212, 51299) were used to ensure the potency of each antimicrobial agent tested.
With the exception of vancomycin, susceptibility interpretations from the National Committee for Clinical Laboratory Standards were adhered to (National Committee for Clinical Laboratory Standards (1993), "Methods for Dilution Antimicrobial Susceptibility Tests for Bacteria that Grow Aerobically," 3rd Edition, National Committee for Clinical Laboratory Standards, Villanova, Pa.; National Committee for Clinical Laboratory Standards (1993), "Performance Standards for Antimicrobial Disk Susceptibility Tests," 5th Edition, National Committee for Clinical Laboratory Standards, Villanova, Pa.). A VRE isolate was defined as one which had a minimum inhibitory concentration to vancomycin of at least 16 µg/ml.

Defining Genetically Distinct VRE Strains

Distinct VRE isolates were characterized as such by contour-clamped homogeneous electric field electrophoresis after digestion of chromosomal DNA with SmaI (Verma, P., et al. (1994), "Epidemiologic characterization of vancomycin resistant enterococci recovered from a University Hospital" (Abstract), In: Abstracts of the 94th General Meeting of the American Society for Microbiology, Las Vegas, Nev.; Dean, et al. (1994), "Vancomycin resistant enterococci (VRE) of the vanB genotype demonstrating glycopeptide (G) resistance inducible by vancomycin (V) or teicoplanin (T)," In: Abstracts of the 94th General Meeting of the American Society for Microbiology, Las Vegas, Nev.). Electrophoretic studies were also performed using ApaI digestion for VRE strains which differed only by 1-3 bands after the initial analysis (Donabedian, S. M., et al. (1992), "Molecular typing of ampicillin-resistant, non-beta-lactamase producing Enterococcus faecium isolates from diverse geographic areas." J. Clin. Microbiol., 30:2757-61). The vancomycin-resistance genotype (vanA, vanB or vanC) was defined by polymerase chain reaction analysis using specific primers selected from published gene sequences (Goering, R. V., and the Molecular Epidemiological Study Group (1994), "Guidelines for evaluating pulsed field restriction fragment patterns in the epidemiological analysis of nosocomial infections." (Abstract), Third International Meeting on Bacterial Epidemiological Markers, Cambridge, England).

500 ml of raw sewage from the University of Maryland is mixed with 100 ml of 10-times concentrated LB broth (Difco Laboratories). This sewage-broth mixture is inoculated with an 18-24 hour LB broth culture (1 ml) of a VRE strain and incubated at 37° C. for 24 hours to enrich the mixture for bacteriophage which can infect the VRE strain added. After incubation, the mixture is centrifuged at 5000 g for 15 minutes to eliminate matter which may interfere with subsequent filtration. The supernatant is filtered through a 0.45 µm Millipore filter. The filtrate is assayed using the Streak Plate Method and/or the Appelman Tube Turbidity Test to detect lytic activity against different strains of VRE.

Method for Testing Phage Against VRE Isolates

Three methods are employed: the Plaque Assay, the Streak Plate Method, and the Tube Turbidity Method; the procedures for each follow.

Plaque Assay: An 18-24 hour nutrient broth culture of the VRE strain to be tested for susceptibility to infection (0.1 ml) and dilutions of a VRE phage preparation (1.0 ml) are mixed and then added to 4.5 ml of 0.7% molten agar in nutrient broth at 45° C. This mixture is completely poured into a petri dish containing 25 ml of nutrient broth solidified with 2% agar.
During overnight incubation at 37° C., VRE grow in the agar and form a confluent lawn, with some VRE cells being infected with phage. These phages replicate and lyse the initially infected cells and subsequently infect and lyse neighboring bacteria. However, the agar limits the physical spread of the phage throughout the plate, resulting in small, visibly clear areas called plaques on the plate where bacteriophage has destroyed VRE within the confluent lawn of VRE growth. The number of plaques formed from a given volume of a given dilution of bacteriophage preparation is a reflection of the titer of the bacteriophage preparation. Also, since one plaque with a distinct morphology represents one phage particle that replicated in VRE in that area of the bacterial lawn, the purity of a bacteriophage preparation can be ensured by removing the material in that plaque with a pasteur pipette (a "plaque pick") and using this material as the inoculum for further growth cycles of the phage. On this basis, when doing further plaque assays on preparations of phage grown from this plaque pick, one would expect all plaques to have a single appearance or plaque morphology which is the same as that of the plaque picked, a further indication of purity. Therefore this technique can be used to test not only bacteriophage potency but also bacteriophage purity.

Streak Plate Method: Eighteen-hour LB broth cultures of the different enterococcal strains to be tested are grown at 37° C. (resulting in approximately 10⁹ CFU/ml), and a loopful of each culture is streaked across a nutrient agar plate in a single line. This results in each plate having a number of different VRE streaked across it in single straight lines of growth. Single drops of the phage filtrates to be tested are applied to the streaks of each VRE growth, and the plate is incubated 6 hours at 37° C., at which time the streaks of the different VRE strains are examined for the ability of the phage to form clear areas devoid of bacterial growth, indicating lysis of that particular VRE strain by that particular phage. The VRE host range for a given phage filtrate can be ascertained from which VRE streaks show a clear area devoid of growth and which strains of VRE show no such clearing.

Appelman Tube Turbidity Test (from Adams, M. H., 1959, Bacteriophages, Interscience Publ., New York, N.Y.): 18-hour LB broth cultures of different VRE strains are prepared. 0.1 ml of phage filtrate, or a dilution thereof, is added to 4.5 ml of a VRE broth culture and incubated at 37° C. for 4 hours (monophages) or 4-18 hours (polyvalent phages). Phage-free VRE broth cultures are used as controls. Broth cultures, which are normally turbid due to bacterial growth, are examined for the ability of the phage to lyse the VRE strain as indicated by clearing of the culture turbidity. The host range of a given phage can be ascertained from which VRE broth cultures the phage is capable of clearing and which broth cultures it cannot clear.

A collection of 234 VRE isolates (187 E. faecium, of which 3 strains are from ATCC, 41 E. faecalis, and 6 E. gallinarum), as well as 6 E. faecium strains which are vancomycin sensitive, was tested for susceptibility to infection by 7 monophages isolated as described in Example 2. Susceptibility to infection was determined by the 3 techniques described. The majority of VRE strains in this collection were isolated from patients at the University of Maryland and Baltimore VA Medical Centers as indicated in Example 1. Such VRE isolates were determined to be distinct and genetically diverse by pulsed field gel electrophoresis typing. Of the 7 monophages, VRE/E2 and VRE/E3 have a relatively narrow host range compared to the other VRE phages, but are able to infect the small proportion of VRE strains which were resistant to the other phages collected. A phage cocktail containing the above 7 VRE monophages lysed 95% of the VRE strains in the collection.
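Since the number of plaques formed from a known volume of a known dilution reflects the titer of the preparation, the titer calculation itself is a short piece of arithmetic. The Python sketch below shows that calculation; the plaque count and dilution used in the example are hypothetical and are not taken from the specification.

```python
def titer_pfu_per_ml(plaque_count, dilution_factor, volume_plated_ml):
    """Estimate phage titer from a plaque assay: plaques counted on a plate that
    received `volume_plated_ml` of a `dilution_factor`-fold dilution of the stock."""
    return plaque_count * dilution_factor / volume_plated_ml

# Hypothetical plate: 85 plaques from 1.0 ml of a 10^7-fold dilution
print(f"{titer_pfu_per_ml(85, 1e7, 1.0):.1e} PFU/ml")   # 8.5e+08 PFU/ml
```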
0.1 ml amounts of an 18-24 hour LB broth culture (Bacto LB Broth, Miller (Luria-Bertani, dehydrated), reconstituted according to the instructions of Difco Laboratories, Detroit, Mich.) of a strain of VRE, which has been previously selected on the basis of being able to produce a maximum yield of bacteriophage, are mixed with 1.0 ml of a VRE monophage filtrate and then mixed with 4.5 ml of 0.7% molten agar in nutrient broth at 45° C. This mixture is completely poured into a petri dish containing 25 ml of nutrient broth solidified with 2% agar. After overnight incubation at 37° C., the soft top agar layer with the phage is recovered by gently scraping it off the plate, and this recovered layer is mixed with a small volume of broth (1 ml per plate harvested). This suspension is centrifuged at 5,000-6,000 g for 20 minutes at 4° C. and the phage-containing supernatant is carefully removed. The supernatant is filtered through a 0.45 µm filter and centrifuged at 30,000 g for 2-3 hours at 4° C.

The phage-containing pellet is suspended in 1-5 ml of phosphate buffer and is further purified by ion exchange chromatography using a Resource Q ion exchange column (Pharmacia Biotech, Piscataway, N.J.) and a 0-1 M NaCl gradient in the start buffer. Phage tend to be eluted from the column between 150-170 mM NaCl, with each fraction being assessed for the presence of phage by the standard plaque assay technique. Fractions collected and assayed are pooled if the phage titer by the plaque assay is no greater than 3 logs lower than that of the phage preparation put onto the column (e.g., if 10¹⁰ PFU/ml is put onto the column, then only those fractions with titers >10⁷ PFU/ml are pooled). Pooled fractions are tested for endotoxin by the Limulus Amebocyte Lysate Assay (BioWhittaker Inc., Walkersville, Md.). Pools demonstrating >50 EU/ml of endotoxin are passed through an Affi-Prep polymyxin support column (Bio-Rad Labs, Hercules, Calif.) to remove residual endotoxin. The phage pool is buffer exchanged against 100 mM ammonium bicarbonate using size exclusion chromatography with Sephadex G-25 (Pharmacia Biotech). 1 ml aliquots of the purified phage are freeze dried in the presence of gelatin and stored at room temperature. The purity of the phage preparation is assessed by a combination of electron microscopy, SDS-PAGE, DNA restriction digest and analytical ultracentrifugation.
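The pooling rule described above (keep only fractions whose titer is within 3 logs of the titer loaded onto the column, and polish any pool exceeding 50 EU/ml of endotoxin over a polymyxin column) can be expressed compactly. The Python sketch below is illustrative only; the fraction names and titers are hypothetical.

```python
def fractions_to_pool(input_titer_pfu_per_ml, fraction_titers, max_log_drop=3.0):
    """Return the column fractions whose plaque-assay titer is no more than
    `max_log_drop` logs below the titer of the preparation loaded onto the column
    (e.g., 1e10 PFU/ml loaded means pooling only fractions above 1e7 PFU/ml)."""
    cutoff = input_titer_pfu_per_ml / 10 ** max_log_drop
    return {name: titer for name, titer in fraction_titers.items() if titer > cutoff}

def needs_polymyxin_column(endotoxin_eu_per_ml, limit=50.0):
    """Pools above the endotoxin limit are passed over a polymyxin support column."""
    return endotoxin_eu_per_ml > limit

# Hypothetical run: 1e10 PFU/ml loaded; fractions F4 and F5 make the cut.
fractions = {"F3": 4e6, "F4": 2e8, "F5": 9e7, "F6": 5e5}
print(fractions_to_pool(1e10, fractions))   # keeps F4 and F5 only
print(needs_polymyxin_column(72.0))         # True, so the pool gets the polymyxin step
```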
VRE colonization may persist for 4 weeks by this method. Mice are given drinking water containing the above mixture of antibiotics throughout the duration of the experiment. Use of an In Vivo Mouse Model to Demonstrate Efficacy of Lytic Bacteriophage in Reducing VRE Gastrointestinal Colonization: Twenty-four hours after detecting >10³ CFU VRE/gram of feces, mice were administered VRE phage (by having them consume one food pellet inoculated with 10⁹ PFU of phage). Control groups consisted of (1) non-VRE-colonized mice sham dosed (no phage in dose), (2) VRE-colonized mice which were sham dosed, and (3) non-VRE-colonized mice dosed with phage. Five mice were used in each group. The efficacy of phage treatment in reducing VRE gastrointestinal colonization was determined by quantitating VRE, on a daily basis, in weighed fecal samples from the mice in the different groups. In addition, at the end of the experiment, mice were sacrificed and the number of VRE and phage in their liver, spleen, and blood determined. If administration of phage reduced VRE gastrointestinal colonization/overall load in mice by at least 1 log as compared to the control groups within 48-98 hours after phage administration, then this dose of the particular phage was deemed efficacious. More preferably, colonization was reduced by at least 3 logs. The primary objective of this study is to (i) determine the efficacy of a candidate phage preparation in transiently eliminating or reducing VRE colonization in humans, and (ii) further assess the kinetics of turnover and the safety of the phages in immunocompromised patients, who are at greatest risk for VRE infections. The study is a double-blinded, placebo-controlled trial of oral phage administration in hospitalized patients colonized with VRE. VRE-colonized patients are enrolled in the study. The patients are randomized to receive VRE-specific phages or a placebo. Stool samples are collected immediately before administration of the phages or placebo and 1, 2 and 3 days after administration of the phages or placebo; for patients who remain hospitalized, additional stool samples may be obtained 7 and 10 days after phage/placebo administration. The amount of VRE and VRE-specific phages in the stools is quantitated, and data are recorded on patient diagnosis, level of immunosuppression (as reflected by the degree of neutropenia or administration of immunosuppressive medications), and concurrent antibiotic therapy, if any. Side effects of phage administration, and changes in blood counts and renal and liver function, are noted. Sufficient patients should be enrolled in each arm of the study to enable detection of a significant difference between groups (95% confidence, 80% power) if 20% of the group receiving phages are VRE positive 3 days after phage administration, vs. 50% of the group receiving a placebo. For these early Phase II efficacy studies, VRE cases are selected which are susceptible to the phage preparation in vitro; a “broad spectrum” VRE phage preparation may be tested during subsequent, more randomized clinical trials (i.e., phase III clinical trials). VRE counts will be compared before and after phage/placebo administration, in order to determine whether phage administration, even if not eradicating carriage, results in a significant (>1 log) decrease in VRE levels in stools. All VRE isolates will be screened for susceptibility to the phage preparation. 
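The trial paragraph above states the design target (95% confidence, 80% power, 20% vs. 50% VRE-positive at day 3) but not a sample-size formula. For illustration only, the standard two-proportion calculation with pooled variance gives roughly 39 patients per arm under those assumptions; the authors may well have used a different method or added allowances for dropout:

```python
import math
from scipy.stats import norm

# Standard two-proportion sample-size formula (pooled variance under H0),
# shown only to illustrate the stated design target of the trial above.

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.20, 0.50))  # roughly 39 patients per arm under these assumptions
```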
Most patients are expected to be colonized with only a single strain of VRE, but some may have multiple strains; therefore, for a minimum number of patients, 10-20 VRE colonies should be picked from the primary isolation plate, in order to assess clonality (by PFGE) and for screening for phage susceptibility. A successful outcome for the studies consists of the demonstration that (i) significantly more patients receiving phages became VRE culture-negative than did patients receiving the placebo, or (ii) there was a significantly greater decrease (>1 log) in VRE levels in the stools of persons receiving phages as compared with persons receiving the placebo. From a clinical standpoint, there would be great value in reducing the levels of intestinal colonization during periods of severe neutropenia/immunosuppression, when the risk of bacteremia is greatest. For purposes of clarity of understanding, the foregoing invention has been described in some detail by way of illustration and example in conjunction with specific embodiments, although other aspects, advantages and modifications will be apparent to those skilled in the art to which the invention pertains. The foregoing description and examples are intended to illustrate, but not limit, the scope of the invention. Modifications of the above-described modes for carrying out the invention that are apparent to persons of skill in medicine, bacteriology, infectious diseases, pharmacology, and/or related fields are intended to be within the scope of the invention, which is limited only by the appended claims. All publications and patent applications mentioned in this specification are indicative of the level of skill of those skilled in the art to which this invention pertains. All publications and patent applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
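The “>1 log decrease” outcome criterion used above reduces to a log10 subtraction on before/after counts. A minimal sketch, with hypothetical CFU-per-gram values rather than study data:

```python
import math

# Sketch of the ">1 log decrease" outcome check. Counts are hypothetical
# CFU-per-gram values for one subject before and after dosing.

def log10_reduction(before_cfu_per_g: float, after_cfu_per_g: float) -> float:
    # A culture-negative follow-up would be entered at the assay's detection limit.
    return math.log10(before_cfu_per_g) - math.log10(after_cfu_per_g)

before, after = 5e6, 2e4
drop = log10_reduction(before, after)
print(f"{drop:.1f} log10 reduction -> {'meets' if drop > 1 else 'does not meet'} the >1 log criterion")
```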
1. Find your funny. In what way are you funny? What's funny about you? Being aware of what makes up your personal "funny"—finding the comedy in yourself and your everyday life—is the first step to becoming a successful comedy actor. Your sense of humor is your innate ability to be funny, what I call your funny gene. It has a number of influences, especially your upbringing. So, who tickled your funny bone growing up—a relative, a teacher, a best friend? Or did you learn how to be funny by watching great comedic actors? Also remember that comedy comes from drama, and drama comes from pain. Thus, comedy comes from pain. Are you able to draw humor from your pain? Once you've found your funny, you need to identify where your comedic strengths lie. 2. Identify your comedy character. Is your sense of humor dry and sarcastic? Silly and lighthearted? Shameless and quirky? Knowing your comedic qualities will help you identify your comedy character. I've discovered eight specific comedic character archetypes that have been used since the advent of television. In my book "The Eight Characters of Comedy," I break down each one and its unique personality traits. Are you patient and brutally honest like the Logical Smart One? Are you the desperate but hopeful Lovable Loser? Are you one of those unpredictable, unflappable eccentrics living In Their Own Universe? Each character has its own comedic attributes, and inevitably you'll find one with which you'll experience a natural, organic merge. Focusing on your comedy character will help you find your all-important comedic niche. 3. Explore your comedy. To be a successful comedy actor, you have to study the art form, and the best way to do that is in class. In comedy class, not only will you learn how to properly execute comedic technique; you'll also get an opportunity to hone this new, powerful career tool: your comedy character. You'll get to work that character in various scenes, exploring its history, motivations, actions, and comedic strengths. You'll get to interact with other characters and possibly explore new secondary characters for yourself. Most important, comedy class will help you see where you're getting laughs, which will help you build your comedic repertoire. 4. Stick to the script. Comedy is all about rhythm, timing, and pace, and it's your job as a comedic actor to identify those things in each and every piece of scripted comedy you perform. Comedy, when done right, is like a good song. Just as a musician plays the melody as composed, a comedic actor must stick to the script. That means following the words exactly as written. Don't add or drop words or attach handles to the beginnings of sentences, like "Look," "I mean," "Well," and "So." And don't change the punctuation! Always remember that a period is not just the end of a sentence; it's the end of a thought. Blowing past it or changing it to a question mark will not only change the intention of the line but also the "music" in the dialogue. 5. Learn to break down comedy scripts. Comedy is made up of two things: desperation and the unpredictable. These themes are found in story lines, jokes, and characters. And keep in mind that comedy consists of positive and negative forces that work against each other to create the humor in the script, especially in the dialogue. Breaking down comedy scripts will help you identify classic jokes like "the turnaround," in which a character says something with 100 percent conviction, then turns it around with equal conviction. 
Try it yourself: "You're going to Disneyland! (Beat.) Not." There are also "triplets": The character sets up with two simple and similar lines of dialogue, then on the third line hits the joke with something different and unpredictable. 6. Perfect your funny. As you develop your character and learn your comedic technique, you'll also start to examine more-subtle but important tools for your comedy, especially in the script itself. You need to identify the operative word (a word used over and over in a scene), callback jokes (jokes repeated over the course of a script), the hard consonants (k, t, and p) used by the writer to maximize the effect of comedic dialogue, and the funny words (Vitameatavegamin) used to enhance the punch line. Some writers will clue you in to important words by underlining, italicizing, or bolding them, but often that discovery will be up to you. 7. Stay committed. "Dying is easy; comedy is hard." It's an old adage, but so true. What makes comedy so difficult? Commitment. As a comedy actor, you need to be 100 percent committed to the dialogue, physical actions, jokes, technique, and especially the characters. It takes just as much commitment to do comedy as it does to do drama, perhaps even more. Think of all the comedy greats and how willing they were to "go there" for their comedy. There is no shame, no embarrassment, and definitely no faking it when it comes to comedy. You and your character need to believe in everything you're doing and commit to it wholeheartedly. 8. Be still. One of the biggest things that can kill a comedic scene is unscripted movement—what I call physical static—from an actor. That includes adjusting your clothes, scratching your nose, stretching your neck, waving your arms, tapping your thigh, rolling your eyes, and so on. Unless movement is specified in the stage directions, you should stay still, as any extra movement will distract from the comedic rhythm of the piece. I'm not saying you have to be a statue, but as a comedic actor it's your responsibility to be aware of your space and control your actions so you don't pull focus. Along the same lines, you must also learn when to hold for laughs, keeping still and holding your intention while the audience laughs and waiting for the laughter to die down before continuing with your dialogue. 9. Watch the comedy masters. There are so many legends old and new in this business, and watching them work is vital to building your comedy career. From Lucille Ball to Steve Carell, Jackie Gleason to Bea Arthur, the cast of "All in the Family" to the cast of "Friends," you need to know your comedy history. Rent or DVR as many of these classic shows as you can, as well as popular new shows. Watch them first for the sheer entertainment, then watch them again as a student of comedy learning from everything the actors do. 10. Have fun. When performing comedy, you have to enjoy what you're doing. You need to do your homework, find your character, perfect your technique, commit to the text, then go into class, an audition, or a job and have fun. Take some risks. Be confident and bring on the funny! If you're having fun, you'll feel a surge of energy inside you that will bring out your best, most positive, most energized performance. It will get you to the "louder, faster, funnier" that defines comedy. And if you're having fun, the casting director, the writer, the producer, and ultimately the audience will have fun with you, and that will help you become a successful comedy actor. Want more acting tips? 
Acting coach Scott Sedita, winner of Back Stage's 2008 Readers' Choice Award for Favorite L.A. Acting Teacher, is the author of the book "The Eight Characters of Comedy: A Guide to Sitcom Acting & Writing." The book has become a textbook in colleges and universities as well as a resource for comedy writers and producers. He also consults on comedies for the networks.
https://www.backstage.com/magazine/article/top-tips-becoming-successful-comedy-actor-63688/
Q: Chairman Kim Jong Un left Beijing yesterday after wrapping up his visit to China. What did he talk about with the Chinese side? Was China's role in lifting the sanctions against the DPRK mentioned? A: The Chinese side has released the readout of Chairman Kim Jong Un's visit to China. I would refer you to that. I have nothing further to add. Q: It is reported that a special UNISPACE+50 High-level Segment to celebrate the 50th anniversary of the first United Nations Conference on the Exploration and Peaceful Uses of Outer Space was inaugurated yesterday in Vienna. The outcome document of this meeting calls on intensifying international cooperation to realize the vision for a shared future in outer space exploration and serve the benefits and interests of the international community, as proposed by the Chinese side. What is your comment? A: UNISPACE+50, which is underway in Vienna, is a grand gathering for all to focus on the global governance of outer space by jointly reviewing the path traveled and planning for the future. The attendees at the High-level Segment agreed by consensus to incorporate for the first time into the outcome document the important concept of building a community with a shared future for all mankind in peaceful uses of outer space, which will be presented to the General Assembly in the form of a resolution for consideration. This incorporation, which represents a contribution of China's wisdom and proposal to strengthen international cooperation in outer space and address an array of complex challenges in the new era, complies with the purposes and principles of the Outer Space Treaty, meets the practical needs of outer space governance, and corresponds with the shared aspiration of the international community. China will work with all other countries to earnestly and fully implement the resolution and make greater contributions to building a shared future in outer space exploration. Q: Pope Francis has voiced optimism for improving the ties between the Vatican and China in an interview with Reuters. What is your response? A: China and the Vatican have been in effective contacts. China is always sincere about improving its ties with the Vatican and has been making unremitting efforts to that end. We stand ready to work with the Vatican to ensure that our bilateral constructive dialogues and the improvement of bilateral ties could continue to achieve new progress. Follow-up: When will China and the Vatican conduct the next round of negotiations? Will the two sides reach any agreement on the appointment of bishops this year? A: To answer it succinctly, China and the Vatican have been in effective contacts. Q: A lot of details have been provided about the meeting between President Xi Jinping and the Nepali Prime Minister K.P. Sharma Oli, who is currently on a visit here. It is reported that China and Nepal have inked a series of agreements. Can you provide us some details on those agreements? Does it include the railway line connecting China and Nepal? A: Yesterday, President Xi Jinping met with Nepali Prime Minister K.P. Sharma Oli. The Chinese side has issued the press release. According to my information, Prime Minister Oli will attend other events in China. Premier Li Keqiang and other Chinese leaders will hold talks and meet with him. Please keep following the relevant information. Follow-up: Can you give us some details on the agreements signed? A: We have issued the press release regarding the event yesterday. 
You may wait for the press releases of other events, for example, Premier Li Keqiang's talks with Prime Minister Oli. I hope that you will get more information then. Q: On June 20, Eritrean President Isaias Afwerki said that his country will dispatch a delegation to Addis Ababa, the capital of Ethiopia, to gauge current developments and chart out a plan for actions going forward so as to achieve the common interests and prosperity of the two countries. The Ethiopian Prime Minister has made a public response to it, saying that the constructive statement is of vital importance to realizing the common interests of the two countries. What is your comment? A: China supports Ethiopia and Eritrea in properly resolving the relevant dispute through dialogue and negotiation because it serves the fundamental interests of the two countries and two peoples. We welcome the relevant statement made by the Eritrean side and commend the efforts made by Ethiopia and Eritrea to normalize their relations. Both Ethiopia and Eritrea are China's friendly partners. China stands ready to work with the international community to continue to play a constructive role in promoting the peace, stability and development of the Horn of Africa.
http://dubai.chineseconsulate.org/eng/fyrth/t1570647.htm
BREAKING NEWS: Government publishes response to Fire Futures report Fire Minister Bob Neill has set out how local decision making and restoring a focus on local communities will be at the heart of Fire and Rescue Service policy as the Government published its response to the sector's Fire Futures reports. The Fire Futures Review, at the invitation of central government, gave those working within the Fire Service an exceptional opportunity to identify ideas for their future. This resulted in four independent reports published in December 2010. They looked into the role, efficiency and accountability of the Fire and Rescue Service in England. The Government's response, published today, provides a clear statement of the principles that are central to the future of the delivery of fire and rescue services. The response document outlines that Government is responsible for enabling and empowering the service and for ensuring appropriate funding to deliver reform, and in particular expresses the Government's view that: - Decisions on local service delivery are best made locally, not top down from Whitehall - Local fire and rescue services are accountable to local people with local responsibility for decisions and performance. National targets and monitoring are a thing of the past, the focus is on serving local communities - Clarity on the national and local roles in resilience arrangements will be provided within the next statutory National Framework. The Fire Futures reports also provide an 'ideas bank' and for many of these ideas the next steps will be for the sector to determine and take forward as appropriate. Fire Minister Bob Neill said: "From the start of the Fire Futures Review I was clear that I wanted this to be led by the fire and rescue services themselves, providing the sector for the first time with a real opportunity to identify ideas for the future direction of fire and rescue. "I am extremely grateful to those who contributed: members of Fire and Rescue Authorities, frontline firefighters, control room and other staff, professional representatives and the wider industry, for their time and expertise to undertake this important review. "Ending the culture of top-down Whitehall management of the fire and rescue service and restoring a focus on communities and local accountability rather than targets is at the heart of the Government's approach. The 'ideas bank' offers the sector the freedom to determine whether these ideas fit local needs without micro-management by the Government. "We will work with the sector to develop a new National Framework, and will re-set the relationship between fire and rescue authorities and central government. We will help 'barrier bust' where fire and rescue authorities find unnecessary restrictions are stopping them from getting on with things that clearly make sense. "The Government's focus in the future will be on providing the right framework for national resilience and to enable local Fire and Rescue Authorities and the wider sector to deliver the reform agenda and increased accountability to local communities for the delivery of their services."
https://www.fire-magazine.com/breaking-news-government-publishes-response-to-fire-futures-report
The developmental differences in kids of parents with schizophrenia have been discovered by Wayne State University School of Medicine scientists. The study, led by Vaibhav Diwadkar, Ph.D., assistant professor of psychiatry and behavioral neurosciences and co-director of the Division of Brain Research and Imaging Neuroscience, was published in the March 2012 issue of the American Medical Association journal Archives of General Psychiatry and is titled, "Disordered Corticolimbic Interactions During Affective Processing in Children and Adolescents at Risk for Schizophrenia Revealed by Functional Magnetic Resonance Imaging and Dynamic Causal Modeling." The results demonstrate sophisticated analyses of functional magnetic resonance imaging (fMRI) data and clarify the understanding of developmental mechanisms in normal versus vulnerable brains. The resulting information can provide unique information to psychiatrists. The study took place over three years, using MRI equipment at Harper University Hospital in Detroit. Using fMRI the researchers studied brain function in young individuals (8 to 20 years of age) as they observed pictures of human faces depicting positive, negative and neutral emotional expressions. Participants were recruited from the metropolitan Detroit area. Because children of patients are at highly increased risk for psychiatric illnesses such as schizophrenia, the team was interested in studying brain network function associated with emotional processing and the relevance of impaired network function as a potential predictor for schizophrenia. To investigate brain networks, the researchers applied advanced analyses techniques to the fMRI data to investigate how brain regions dynamically communicate with each other. The study demonstrated that children at risk for the illness are characterized by reduced network communication and disordered network responses to emotional faces. This suggests that brain developmental processes are going awry in children whose parents have schizophrenia, suggesting this is a subgroup of interest to watch in future longitudinal studies. "Brain network dysfunction associated with emotional processing is a potential predictor for the onset of emotional problems that may occur later in life and that are in turn associated with illnesses like schizophrenia," Diwadkar said. "If you clearly demonstrate there is something amiss in how the brain functions in children, there is something you can do about it. And that's what we're interested in." The results don't show whether schizophrenia will eventually develop in the subjects. "It doesn't mean that they have it, or that they will have it," he said. "The kids we studied were perfectly normal if you looked at them," he said. "By using functional brain imaging we are trying to get underneath behavior." "We are able to do this because we can investigate dynamic changes in brain network function by assessing changes in the fMRI signal. This allowed us to capture dramatic differences in how regions in the brain network are interacting with each other," he said. According to the National Alliance on Mental Illness, schizophrenia affects men and women with equal frequency, but generally manifests in men in their late teens or early 20s, and in women in their late 20s or early 30s. Results from a six-month pilot study show that people with schizophrenia report improved functioning after participating in a new, evidence-based clinical program. 
A specific type of computerized cognitive training can lead to significant neural and behavioral improvements in individuals with schizophrenia, a new study has suggested.
https://www.medindia.net/news/insight-into-plausible-origins-of-schizophrenia-98846-1.htm
The target audience is the cornerstone of integrated Internet promotion. The effectiveness of an Internet marketing campaign depends on a correct assessment of the target audience. Audience research makes it possible to assess the boundaries of an internet project’s audience, both real and potential, while considering the possibility of growth and change. Knowing the target audience is useful for attaining the following objectives: The goal of our research is to develop a complete portrait of the site’s target audience. In the process of conducting our research, we collect and analyze data from social networks and other public sources, conduct sociological research (both quantitative and qualitative), traffic analysis, and more. In the summary we provide of the results of our target audience research, we include the following information:
http://en.ashmanov.com/consulting/research
N.H. wasting $5.4 million to design obsolete commuter rail line Amid a historic collapse in transit ridership, the Executive Council has approved a $5.4 million contract to design a commuter rail line from New Hampshire to Boston. The contract is financed entirely with federal money, so New Hampshire taxpayers could choose to take some comfort in knowing that the state is throwing away what is mostly other people’s money. Nonetheless, it’s a waste of taxpayer dollars. Americans have in the past year avoided mass transit like the plague, largely because of, well, a plague of sorts. But the trends before the rise of the coronavirus show a longer decline in ridership. In 2020, mass transit ridership fell by 50%, according to data kept by the American Public Transit Association. Commuter rail ridership fell by 62%. Transit ridership nationwide has been falling for years, according to federal data. (Commuter rail ridership has increased in the last decade, though it’s leveled off in recent years.) In Boston, however, Massachusetts Bay Transportation Authority (MBTA) commuter rail ridership has been in steady decline. The Pioneer Institute reported last year that MBTA commuter rail ridership fell by 11% (or about 4 million riders) from 2012-2018. In November, the MBTA reported that commuter rail was down to 13% of its normal ridership level. Whether transit ridership will rebound to anything near its pre-COVID levels is an open question. It might. But commercial real estate rents, along with announcements by large and small companies that they are preparing to permanently switch portions of their workforce to remote work, suggest that urban work and commute patterns might forever be altered. Again, even before the arrival of the coronavirus, technological advancements were driving declines in public transit. Ride-sharing companies have given people another, more convenient way to move around cities and suburbs without relying on government-provided vehicles that travel pre-set, government-chosen routes. Those services are drawing riders away from mass transit, as this University of Kentucky study shows. Rail is a 19th century technology that is ill-suited to solving 21st century transportation and environmental issues. The way forward is through innovation. Electric vehicles and autonomous vehicles will get people where they need to go while reducing greenhouse gas emissions and turning commute time into productive work time. They are far more versatile than trains and will serve people’s travel needs better. That transition is already underway. And flying cars might follow, further changing the way we travel. New Hampshire doesn’t need to spend hundreds of millions of dollars to build a train to serve a declining number of commuters when tech companies are already working on alternatives that will better serve everyone.
https://jbartlett.org/2020/12/n-h-wasting-5-4-million-to-design-obsolete-commuter-rail-line/
The increasing diversity, globalization and aging of the U.S. workforce have put pressure on organizations to better understand and manage diversity (Perry & Li, 2019). While there are various ways in which companies can “manage” this diversity, increasing attention is being paid to the role that leaders play (Avery & McKay, 2010). More specifically, a number of authors have begun to discuss the potential benefits of inclusive leadership in managing increasingly diverse employees (Randel et al., 2018). Despite discussions about the potential benefits of inclusive leadership, there is relatively little consensus about what inclusive leadership is and how best to measure it (Randel, Dean, Ehrhart, Chung, & Shore, 2016). The purpose of the current research is to highlight the limitations of previous measures and to begin to develop a more theoretically grounded and empirically valid measure of inclusive leadership. Based on a review of the inclusion and inclusive leadership literature, a measure of inclusive leadership was designed, evaluated by six subject matter experts, and administered to a large MTurk sample (N = 529). The current research found that inclusive leadership is a multi-dimensional construct that includes leadership behaviors of treating all work unit members with fairness, equality, and respect; encouraging integration of and synergy among all work unit members; and translating (i.e., complying with and implementing) organizational diversity and inclusion related policies and programs that support inclusion and prevent exclusion into explicit diversity and inclusion practices in the work unit. The current research established a reliable measure of inclusive leadership with content, construct, convergent, discriminant, and criterion validity, the Inclusive Leadership Questionnaire (ILQ), that can be used to help academics better understand what inclusive leadership is and how best to measure it, and to assess its impact on work-related processes and outcomes. Additionally, practitioners can use this measure to assess the strengths and development opportunities of leaders; develop inclusive leadership capabilities; facilitate the incorporation of behavior-based inclusive leadership criteria into the talent management system; and continuously make progress toward the goal of creating an inclusive workplace and gaining long-term organizational health and growth.
https://academiccommons.columbia.edu/doi/10.7916/d8-y4d9-yy69
Physics lessons teach us that a pendulum is a device that is surprisingly sophisticated and has a somewhat mysterious origin. There are plenty of topics of interest, both plausible and factual, connected with the term. When we consider the process of a rotating body, for example, the force exerted by the mass on the angle of the inclined plane, which forms the base of the pendulum, determines the movement of that rotating mass. That means that, in essence, the mass is always pushing down on the angle. If we consider how the opposing forces act on the pendulum, we will see that they cancel. In this case, the force of gravity does not apply. It looks as though Newton’s law of gravity, as we would usually state it, is really an approximation. This would suggest that the ‘x’ in Newton’s law of gravity should really be ‘x-phi.’ Another important point is that people seem to forget that, even for those with traditional physics degrees, there is a simple way to find the answers if we consider the mathematics used by scientists and engineers alongside modern physics. Physicists with physics degrees know what depth physics is. As an example, let us consider a pendulum with an inclined plane, and let us assume that the mass and the inclined plane are symmetrical in the sense that the angles are the same. Let us also assume that the length of the pendulum is equal to the diameter of the inclined plane. Then half of the plane (the period) is half the length of the pendulum (the diameter). In other words, the depth of the inclined plane is equal to the apparent depth of the pendulum. The speed with respect to the middle of the swing is only half the speed of the mass. There are two problems here. One is obvious; the other is not. Both Newton’s and Einstein’s formulations of the theory of relativity allow us to make progress. Let us go through the second assumption. We know there is a normal to the velocity with respect to the object we are discussing. What we are required to assume is that this normal is stable throughout the world, and we will simply grant that premise. It must be said that the normal is a standard and specific phenomenon of special relativity. This means that, in general, the normal must be found to be different. This resembles a problem that is easily resolved. Now, let us consider the question of velocity with respect to the mass. The problem has to do with the existence of the normal. It is not difficult to see that the normal is going to be different, so it does not appear to be a challenge to find the speed, and we can proceed. That brings us to a static case with an inclined plane. What is depth physics? The depth of the plane is going to be different, since we know the normal will differ everywhere. We will assume, though, that it is the same everywhere.
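For reference, the standard textbook relations that this passage gestures at (stated here as general results, not as anything derived in the passage itself) are the pendulum's equation of motion with its small-angle period, and the acceleration of a mass on a frictionless incline:

\[
\ddot{\theta} + \frac{g}{L}\sin\theta = 0, \qquad T \approx 2\pi\sqrt{\frac{L}{g}} \quad (\theta \ll 1), \qquad a_{\text{incline}} = g\sin\alpha,
\]

where L is the pendulum length, g the gravitational acceleration, and α the angle of the incline.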
https://landing.thammyvienngocdung.com/what-is-a-pendulum-in-physics
A new book draws on experiences of men and women farmers across eastern and southern Africa who bravely embraced change to improve their farming methods and the lives and livelihoods of their families. Research busts common myths about agricultural labor in Africa, suggests a shift in mechanization policy New study shows African farming households are far more dependent on labor markets, and much more inclined to hire mechanization services, than previously assumed. Exploring young Africans’ role and engagement in the rural economy How important is farming relative to non-farm activities for the income of young rural Africans? Sustainable intensification practices build resilience in Bangladesh’s charlands By reducing drudgery, irrigation and costs, conservation agriculture enables the soil of the charlands to produce rice and maize yields consecutively. Is a world without hunger possible, asks Germany’s minister Gerd Müller during his visit to CIMMYT Minister Müller explored the campus and heard about CIMMYT’s latest innovations in maize and wheat research. Support groups open women’s access to farm technologies in northeast India Self-help groups in Bihar are putting thousands of rural women in touch with agricultural innovations, benefiting households and the environment. Breaking Ground: Rahma Adam unleashes the agricultural productivity of Africa’s women and youth CIMMYT sociologist believes there is one vital resource that remains untapped to increase food security and boost livelihoods. Women empowerment vital to achieve food security: Australian envoy Source: Bangladesh Sangbad Sangstha (9 Mar 2019) Australian High Commissioner to Bangladesh visited female farmers adopting conservation agriculture for sustainable intensification technologies through CIMMYT project funded by ACIAR. Workshops, seminars mark Women’s Day in city Source: Times of India (9 Mar 2019) CIMMYT and Icar-Atari Ludhiana organized a workshop on the eve of International Women’s Day titled ‘Empowering rural women for addressing agricultural air pollution’. How gender equity and social inclusion are improving the lives of rural families in Africa Women have the potential to be drivers of agricultural transformation. Tribal women in India find value in maize cultivation This year the group harvested more than 3,300 kg from seven acres of land. International Women’s Day 2019: Women in seed systems in Africa These entrepreneurs are breaking social barriers while improving household nutrition and livelihoods. New publication: Gender and agricultural innovation in Oromia Region, Ethiopia Source: Addis Standard (1 Mar 2019) Drawing on GENNOVATE case studies from two wheat-growing communities, the authors examine how women and men smallholders innovate with improved wheat seed, row planting and the broad bed maker. The missing seed market A new guidebook promotes improved seed and farming technologies for men and women, with the goal of increasing adoption rates. Women’s equality crucial for Ethiopia’s agricultural productivity Source: The Reporter (16 Feb 2019) A new report shows that gender inequality in Ethiopia is hampering efforts to reach the government’s goal of wheat self-sufficiency by 2022.
https://www.cimmyt.org/theme/gender-and-social-inclusion/page/2/?category=news%2Cmultimedia&theme=gender-and-social-inclusion&location=0&research=0&year=0&order=0&orderby=date
Is txid a required field on all transactions? What is TXID used for? A txid or Transaction ID is a string of letters and numbers that identifies a specific transaction on the blockchain. The string is simply the double SHA-256 hash of a transaction. This hash can be used to look up a transaction on a node or block explorer. Is transaction ID same as TXID? A transaction ID (TXID) or transaction hash is a unique string of characters given to every transaction that’s verified and added to the blockchain. In other words, a TXID is an identification number that labels each transaction on the blockchain. Can two bitcoin transactions have same TXID? Per BIP 30, transactions are not allowed to have the same TXID as another transaction which has unspent outputs. This means that there can be transactions that have the same TXID, but only after the first transaction’s outputs are already spent. So in this case, the duplicate TXID does not matter. What is BTC TXID? A Transaction ID or TXID is an identification number for a Bitcoin transaction. It is an alphanumeric string that labels each transaction within the coin’s blockchain. This differs from trade IDs, which are only associated with the trades made within the Paxful marketplace for receiving cryptocurrencies only. What is TXID Coinbase? A transaction hash/id is a unique string of characters that is given to every transaction that is verified and added to the blockchain. In many cases, a transaction hash is needed in order to locate funds. What is TXID of gas ETH? What is TXID (Transaction ID): Find Transaction ID on the Blockchain. TXID, short for “transaction ID” and also known as TxID, is a hash number that records a transaction on the blockchain. All transactions on the Bitcoin (BTC) and Ethereum (ETH) blockchains are recorded in a string of hashes — that is, transaction IDs How do I get a Bitcoin TxID? How to find a transaction ID (txid) - Open your Bitcoin.com wallet. - Select your BCH or BTC wallet, depending on what transaction you are looking for. - You should now see all the sent and received transactions from the selected wallet. - Tap/click on the transaction you need the transaction ID for. What is TxID trust wallet? What is a Trust Wallet Transaction Id? Essentially a transaction id is a unique identifier code given to any transaction. It ensures that every exchange is verified and added to the blockchain. Every time you make an exchange on the Trust Wallet, you will get a transaction id. What TxID look like? Transaction hash (txid) is an identifier used to uniquely identify a particular transaction. All on-chain transactions (the transactions from or to external addresses) have a unique txid that can be seen in transaction details. A transaction hash usually looks like a random set of letters and numbers. Is TXID public? Adding to that, it is safe to share the TXID as it won’t disclose any personal information. Thanks to the relative anonymity of blockchain transactions, the only public information is the amount transferred, the date of the transfer, and the addresses involved in the transaction. How does TXID track crypto transfer? The easiest way to navigate to your particular transaction is to use your browser’s ‘Find’ feature to search for the exact crypto amount you sent. This is usually under ‘Edit > Find’, or you can use the keyboard shortcut Command + F (macOS) or Control + F (Windows). How do I get Coinbase TXID? Method 1: Check the Tx Hash ID from Coinbase - Step 1: Open Your Coinbase Accounts. 
Open your Coinbase profile as you usually do. … - Step 2: Locate Crypto Assets. … - Step 3: Find the Specific Transaction You Want to Trace. … - Step 4: Tap on “View Transaction” What is TXID Metamask? Every single transaction on the blockchain is assigned a unique transaction ID, also known as a transaction hash. Often, the terms can be found with the abbreviations tx or txn substituted for transaction. Explaining the cryptography behind this number in full would require a few more articles. How do I copy a TXID? To copy the whole txid number, just click on the transaction you want and then click on the small “clipboard” icon at the end of the transaction ID. Transaction IDs can be very handy to use as they contain all the information related to that transaction, everything from: How many confirmations the transaction has. Can Bitcoin transactions be tracked? Is bitcoin traceable? Yes. Bitcoin is traceable. While bitcoin can be minted, moved around, and stored without the oversight of any central authority like the government, each bitcoin transaction is recorded on a permanent publicly available ledger known as the blockchain. Is it safe to give out wallet address? Your crypto addresses are safe to display anywhere you would like to accept tips, payments, or donations. It is not possible to steal digital currency with a public address alone. Can wallet address be hacked? If a cryptocurrency Trojan gains access to your device and identifies your keys, a hacker can easily wipe out your BTC address in a matter of minutes without you even noticing. What is the difference between wallet ID and wallet address? The short answer is that they are two different things, with the address being the ID of where the Bitcoins are being held, while the wallet is the program controlling the address and facilitating transfers to and from the address. Can someone steal my Bitcoin wallet? Private keys are like passwords for securing the digital wallet from unauthorized access. That means that anyone who gets those keys can easily access your wallet and withdraw funds. Your private keys are one of the things criminals will try to acquire in order to steal your Bitcoin. What is the best crypto wallet? The 6 Best Crypto Wallets - Best Crypto Wallet for Beginners. Coinbase Wallet. … - Best Bitcoin Wallet. Electrum. … - Best Crypto Wallet for Altcoins. Exodus. … - Best Crypto Wallet for Security. Trezor Model T. … - Best Cold Wallet That Costs Less. Ledger Nano X. … - Best Hot Wallet for Ethereum and Web3 Compatibility. MetaMask. Will Coinbase refund if hacked? Here’s the good news up front: Coinbase secures the majority of its digital assets with crime insurance, and will refund you if your account has been compromised. Even if your funds were lost due to negligence or personal error, they might still refund your account. Has blockchain ever been hacked? Hackers allegedly breached gaming-focused blockchain platform Ronin Network last week and extracted cryptocurrencies now valued at more than $600 million, the company announced on Tuesday, marking the second-biggest hack ever in the burgeoning cryptocurrency space. Who is the owner of blockchain? Blockchain.com
| Industry | Cryptocurrency |
| Founder | Benjamin Reeves, Nicolas Cary, Peter Smith |
| Headquarters | Luxembourg City, Luxembourg |
| Key people | Peter Smith (CEO), Jim Messina (Director) |
| Products | Cryptocurrency wallet; cryptocurrency exchange; blockchain explorer; lending |
Who stole all the crypto? 
The most famous of those was the 2016 breach of Bitfinex, through which hackers stole Bitcoin valued recently at an astonishing $4.5 billion. In February, the Justice Department announced it had recovered $3.6 billion of that. One analyst has counted at least 46 exchange hacks since 2012. What is the most secure blockchain? The bitcoin blockchain is known as one of the most secure blockchains. What is the next big cryptocurrency? 1. Ethereum (ETH) Ethereum, commonly known as ether, is the world’s second-largest cryptocurrency behind bitcoin, even outperforming bitcoin at times. Which cryptocurrency is not traceable? 1. Bytecoin (BCN) Bytecoin, which is based on the CryptoNote technology, claims to be the “first private untraceable currency.” CryptoNote was created with the goal of making transactions both a) untraceable and b) un-linkable.
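As the Q&A above notes, a Bitcoin txid is the double SHA-256 hash of the serialized transaction, displayed in reversed byte order by wallets and block explorers. A minimal sketch of that computation (the raw hex used here is a dummy placeholder, not a real transaction):

```python
import hashlib

# Minimal sketch: a Bitcoin txid is SHA-256 applied twice to the raw serialized
# transaction, then byte-reversed for the conventional display form. (For SegWit
# transactions the txid is computed over the serialization without witness data.)

def txid_from_raw_tx(raw_tx_hex: str) -> str:
    raw = bytes.fromhex(raw_tx_hex)
    digest = hashlib.sha256(hashlib.sha256(raw).digest()).digest()
    return digest[::-1].hex()  # reversed byte order, as shown on block explorers

dummy_raw_tx = "0100000001" + "00" * 40  # placeholder bytes only, for demonstration
print(txid_from_raw_tx(dummy_raw_tx))
```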
https://aicrypto.blog/is-txid-a-required-field-on-all-transactions/
New Tech men’s basketball head coach Josh Pastner will have to wait until 2017 to bring his first solo recruiting class to Atlanta. By the time Pastner accepted the Tech job, the recruiting period was mostly over. Almost every top player had committed already, and Tech themselves had a three person class. However, only two of the commits from the Gregory class signed, and with the transfer of former player Travis Jorgenson, Tech had some open scholarships for this season. With the addition of two graduate transfers, guard and forward Jodan Price and forward Kellen McCormick, Pastner has demonstrated that he is willing to search in unlikely places for talented players. Price, who is originally from Indianapolis, Ind. has one year of NCAA eligibility remaining. Price is familiar with the transfer process, but he will be able to play immediately this time. Price ranked as one of the top 150 recruits of his class by Rivals and initially enrolled at DePaul University. After a quiet year, Price left Chicago for Ypsilanti, joining the Eastern Michigan Eagles. NCAA rules mandated that he sit out the following season, one that saw the Eagles notch 22 wins, a mark the team had not reached since their 1996–97 campaign. At Eastern Michigan, Price saw significant playing time off the bench. He averaged 15.6 minutes-per-game over his two seasons there, taking most of his shots from three-point range. He never achieved consistent success beyond the arc, however, making only 28.5 percent of his shots from that distance in his Eagles career. Price comes from a major basketball pedigree. Price’s brother, Austin plays at Lehigh. Price’s uncle played at Notre Dame and another uncle at Purdue. As a high schooler, Price attended Detroit Country Day, an institution that has produced such talent as former Duke and NBA guard and forward Shane Battier and Chris Webber, a member of Michigan’s famed “Fab Five” and the first overall pick of the 1993 NBA draft. Price was an integral part of the team’s success, leading it to a 44-10 record over his last season there, courtesy of Eastern Michigan’s team website. Given Tech’s depth issues at the shooting guard and small forward positions, Price should be able to step in immediately and play big minutes. Tech has had issue the last few years with shooting at those positions. There will be minutes for whoever can put the ball in the hoop. Coach Pastner traveled to the other side of the Wolverine State to court his second transfer, Western Michigan redshirt senior Kellen McCormick. In his final season with the Broncos, he shot at a 45.8 percent clip from that distance, a team best. McCormick stands at 6-foot-8-inches and should be able to provide consistent shooting for the Jackets off the bench. After losing Adam Smith, Tech will need a sharpshooter from beyond the arc to carry the load. McCormick was Western Michigan’s third-most prolific scorer per minute, with the team’s best shooting percentage. His mere 5.2 rebounds per 40 minutes, however, suggest that he will not provide a physical presence near the basket for the Jackets, but that is expected given 80 percent of his FGA in 2015-16 were from beyond the arc. Despite his excellent field point percentage and three point percentage, McCormick struggled from the free-throw line, only converting 68 percent of his free-throw attempts. Free throw shooting has haunted Tech of late, and Tech is not in a position to give points away this season. 
For a team looking to right the ship under a new helmsman, though, McCormick and Price both provide precisely the qualities Coach Josh Pastner is looking for: veteran leadership and added depth to a young and inexperienced Tech team. Pastner’s coaching system is very different from that of former head coach Brian Gregory. Pastner has to find players he thinks will fit his system, and he believes he has that with Price and McCormick. There are no guarantees for the starting lineup either, but every player will get their chance to prove themselves. The current players have bought into what Pastner is saying, and that is obvious from the small attrition Tech had over the offseason. Only point guard Travis Jorgenson transferred. That move appears to be one that will benefit both parties. Jorgenson was not a scheme fit for Pastner’s up-tempo offense, and transferring will give him the opportunity to play Division I basketball with a bigger role. This season will not be easy for the Jackets. They are replacing a lot of players from a solid NIT team. The goal for the Jackets is to compete at a high level, improve as the season goes on and for Coach Pastner to take a step in the right direction on the 2017 recruiting trail.
http://nique.net/sports/2016/07/29/tech-gains-two-graduate-transfers/
Dr. Hoagland’s research focuses on soil microbes and their interactions with plants to increase crop productivity and reduce negative impacts of farming systems on the environment. Current projects include efforts to identify management practices that stimulate the activity of microbes that efficiently cycle nutrients, suppress soil-borne pathogens, and/or help plants tolerate drought stress, and select carrot and tomato varieties that actively recruit and host beneficial microbes that help plants resist pathogen infection. Micaela Colley, Program Director, Organic Seed Alliance In addition to being the executive director, Micaela Colley manages participatory plant breeding, research and education projects with farmers, university researchers and other seed professionals. She has authored several educational publications covering topics on organic seed production, on-farm crop improvement and variety trialing, and teaches workshops on seed related topics. Jeanine Davis, Associate Professor and Extension Specialist, North Carolina State University Dr. Jeanine Davis is an associate professor and extension specialist in the Department of Horticultural Science at North Carolina State University. She is stationed at the Mountain Horticultural Crops Research and Extension Center in Mills River, near Asheville, NC. For over 25 years, her program has focused on helping farmers diversify into new crops and organic agriculture. She has led and cooperated on many applied and basic research projects that include Echinacea, hops, goldenseal, bloodroot, black cohosh, broccoli, stevia, garlic, and heirloom tomatoes. She has published over 120 refereed research and extension publications and given over 500 invited presentations in the U.S., Canada, and Chile. She recently revised and expanded the book Growing and Marketing Ginseng, Goldenseal and Other Woodland Medicinals with coauthor and ginseng expert Scott Persons. Jeanine is a founding board member of the Organic Growers School and an advisor for the NC Herb Association, the NC Tomato Growers Association, and the NC Natural Products Association. Julie Dawson, Assistant Professor, University of Wisconsin-Madison Julie Dawson is an Assistant Professor in the Department of Horticulture at the University of Wisconsin-Madison. Her background is in organic plant breeding and participatory research. Before arriving at UW-Madison, she worked on wheat breeding for artisanal bread making quality with farmers in Washington, France, and New York. In Wisconsin, she is working with other plant breeders to test varieties with organic farmers and local chefs, particularly related to flavor and quality in direct market vegetables. She also is working on tomato and carrot variety trialing for flavor and adaptation to organic conditions, including season extension using hoop-houses. Daniel S. Egel, Clinical/Engagement Associate Professor, Botany and Plant Pathology, Purdue University Dr. Egel earned his bachelor’s degree from Miami University, a master’s degree from Purdue University, and his doctorate from the University of Florida. Dan is responsible for vegetable disease extension and research throughout Indiana. His current research interests include: Host resistance to anthracnose and Fusarium wilt of watermelon; managing fungicide resistance in foliar pathogens, and; management of vegetable diseases in greenhouses. 
Dan's extension programs include: MELCAST, a weather-based disease forecasting system for cantaloupe and watermelon; accurate vegetable disease diagnosis including the Purdue Tomato Doctor, and; the Midwest Vegetable Production Guide for Commercial Growers of which Dan is the lead author. Dan’s extension mission is to encourage the sustainable production of healthy vegetables through the use of integrated pest management and organic systems. He works at the Southwest Purdue Agricultural Center in Vincennes. Sanjun Gu, Horticulture Specialist, the Cooperative Extension Program of North Carolina A&T State University Dr. Gu comes to N.C. A&T State University from Lincoln University of Missouri, where he served as the State Horticulture Specialist and as an Assistant Professor. Prior to that, Dr. Gu was the Viticulture Program Leader at the Kentucky State University. Dr. Gu’s areas of expertise and interest include organic and conventional vegetable production, vegetable grafting, small fruit production, season extension with high tunnels, plant tissue culture, and plant breeding. He also serves as the Sustainable Agriculture Research and Education (SARE) Co-coordinator for N.C. A&T State University. Gu’s research goal is to increase on-farm efficiency and profitability while maintaining environmental sustainability for the target audience- small, limited-resource farmers in North Carolina. Gu’s current focus is on vegetable grafting and season extension techniques, both organic and conventional, for vegetable and small fruit production. He also conducts applied research on cultivar evaluations such as for heirloom tomato, bell pepper, sweet corn, and salad greens. Cathleen McCluskey, Communications and Outreach Associate, Organic Seed Alliance Cathleen McCluskey supports Organic Seed Alliance’s communications, leads the organization’s design work, and organizes the biennial Organic Seed Growers Conference. She is a graduate of The Evergreen State College in Olympia, Washington where she studied Agricultural Systems Analysis, coordinated food sovereignty campaigns, and organized for federal-level public engagement regarding issues of market concentration in the agriculture sector. Laurie McKenzie, Research & Education Assistant, Organic Seed Alliance Laurie McKenzie is the research and education assistant for Organic Seed Alliance (OSA). She holds a master’s degree in plant breeding and genetics from Oregon State University, where her research focused on breeding for organic systems. Laurie has over 10 years of experience in the organic farming and seed world, having spent considerable time doing both production and direct marketing. At OSA Laurie is involved in coordinating field work for breeding and variety trial projects, writing educational guides and materials, and co-teaching on-farm workshops. James R. Myers, Professor of Vegetable Breeding and Genetics, Department of Horticulture, Oregon State University Dr. Myers holds the Baggett-Frazier Endowed Chair of Vegetable Breeding and Genetics in the Department of Horticulture at Oregon State University. He works on a number of crops including dry and snap bean, edible podded pea, broccoli, pepper, tomato, winter and summer squash, and sweet corn. Prior to employment at OSU, he worked as a dry bean breeder at University of Idaho. His main interest has been to improve vegetable varieties for disease resistance and human nutrition while maintaining quality and productivity in improved varieties. 
Myers is the project director of NOVIC (Northern Organic Vegetable Improvement Collaborative) and is also breeding tomatoes, broccoli, and summer squash for organic systems. A new venture is to breed for taste and quality through the Culinary Breeding Network. His latest variety release is the high anthocyanin tomato 'Indigo Rose' with two more cherry types on the way. Jared Zystro, Research and Education Assistant Director, Organic Seed Alliance Jared Zystro has a master’s degree in plant breeding and plant genetics from the University of Wisconsin and has worked in the organic seed industry for over 10 years, managing seed production at two farms and conducting research and education projects with Organic Seed Alliance (OSA). In his work at OSA, he manages regional development in California, conducts participatory breeding projects and variety trials, and teaches farmers about seed production and plant breeding at workshops, conferences, and field days. Funding - This project was funded in 2014 by the Organic Research and Extension Initiative grant, part of the USDA National Institute of Food and Agriculture. Award # 2014-51300-22267. Period of funding 2015-2019.
http://eorganic.info/node/13000
Know your garden and be water-wise
All the nutrients plants need to grow, colour, flower, multiply, and defend their tissues against pests are produced by the solar-powered chemical reactions that take place in the watery soup within the cells. That's why I make watering a priority every time I'm in a garden. The first step when determining a watering regimen is to test your soil's water-holding capacity. This easy test is a much better way to figure out when to water than waiting for plants to wilt. Lacking the raw materials in water that become life-giving nutrients, a wilted plant burns its own starches to stay alive. It literally becomes smaller because any new tissues that do manage to form during a wilt are malnourished and deficient. It's best to conduct a capacity test when a rainless period is on the cards. First, soak a 30 cm-diameter spot with a hose for about two minutes. After the water has had a chance to settle, thrust the head of a trowel into the spot so it reaches 6 to 8 cm below ground level. Pull the trowel toward you to make an opening, then reach in with your hand to feel the soil at the bottom of the opening. When watered well, soil should feel cool and damp at the bottom of the hole. Dig a new hole in the same spot every day and note the number of days that elapse until the soil at the bottom of the hole feels warm and dry. That's the number of days you can go without watering during a rainless period. Weather conditions also play a part in determining a watering schedule, so be prepared to adjust your established schedule throughout the growing season. A bed that remains moist for seven to eight days in the cool, slow-growing days of early spring or early autumn may dry out in half that time when large trees are leafing out. While a lack of water can pose problems for some plants, irrigating more often than necessary can be harmful, too. Tightly packed clay soils hold water so well that any excess displaces what little air there is in the root zone, and adding more water at that time will only raise the level of root-rotting puddles. In these beds, I pay closer attention to water levels than I do in other beds. A quick test can determine whether your soil has a drainage problem. Dig a 30 cm-wide by 30 cm-deep hole in your garden and fill it to the top with water. Let the hole drain and fill it again. If the water drains away within a few hours, your soil is excessively well drained, apt to dry out quickly unless watered frequently and lightly. If the water drains out in 12 to 24 hours, your soil is well drained, and you can grow almost anything without a problem. If the hole still has water in it after 24 hours, your soil is poorly drained. Water can be sprinkled overhead, trickled at a plant's feet, or allowed to weep on the soil throughout the bed. Since there are advantages and disadvantages to each method, you need to weigh them for your given situation. My own preferred method of watering is an automatic, in-ground system for lawn areas and separate in-ground zones for beds. The in-ground lines in the beds end in above-the-ground spigots to which I can attach a short hose outfitted with a watering wand to attend to the thirstiest species.
Direct-to-soil delivery with soaker hoses
Advantage: Less water is lost to wind evaporation, and water reaches odd corners not covered by overhead sprinklers.
Disadvantage: Weeper hoses are often covered with mulch for aesthetic reasons and become invisible to sharp garden tools.
No cleansing water hits foliage to rinse dust away and keep mite populations down.
Point irrigation direct to soil with watering can or wand hose attachment
Advantage: Water is applied only where and when needed. Water can be directed to plants that need it most.
Disadvantage: This method can be time consuming. Hose wrestling is required. Watering system breaks down in the gardener's absence.
Overhead delivery by in-ground sprinkler system on timing device
Advantage: No hose wrestling required, and water keeps flowing even when the gardener is absent.
Disadvantage: The gardener falls out of the watering loop and fails to notice a watering problem until it becomes a crisis situation.
Location of plants
When designing a garden, you should group plant species according to water needs.
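For gardeners who like to keep notes on a computer, the two tests above boil down to simple rules of thumb. The Python sketch below is only an illustration of that arithmetic and is not part of the original article's advice; in particular, the six-hour cutoff for "a few hours" is my own assumption.

```python
def classify_drainage(hours_to_drain):
    """Classify soil drainage from the refill test described above.

    hours_to_drain: hours for the refilled 30 cm hole to empty.
    Thresholds follow the article: a few hours = excessively well drained,
    12-24 hours = well drained, more than 24 hours = poorly drained.
    """
    if hours_to_drain <= 6:   # "a few hours" - assumed cutoff
        return "excessively well drained: water frequently and lightly"
    if hours_to_drain <= 24:
        return "well drained: most plants will be happy"
    return "poorly drained: watch for waterlogging before adding more water"


def watering_interval(days_until_dry):
    """The capacity test: you can wait as many rainless days as it took
    for the soil at the bottom of the test hole to feel warm and dry."""
    return max(1, days_until_dry)


if __name__ == "__main__":
    print(classify_drainage(4))     # excessively well drained
    print(classify_drainage(18))    # well drained
    print(classify_drainage(30))    # poorly drained
    print(f"Water roughly every {watering_interval(7)} days in a dry spell.")
```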
https://www.home-dzine.co.za/garden/garden-know-your-garden.htm
From score to floor: opera from the ground up
By the time an opera production hits the stage, composers and performers have spent years transforming theatrical ideas and music. The interactive CU New Opera Workshop (CU NOW)—part of the Eklund Opera Program—features CU Boulder singers in new works on their way to professional world premieres. Specifically, CU NOW is a three-week (June 1-19, 2022) session for student singers and composers to collaborate and workshop both their own pieces and works by new opera professionals. Composer Kamala Sankaram—who has made a name for herself composing unique works using unconventional methods and ideas, including exploring the interplay of singers with electronic and digital music—will join CU NOW this summer. A public performance of Sankaram's "Joan of the City" will take place on June 19 at 3 p.m. in the Music Theatre. "Joan of the City" is a site-specific, mixed reality opera for five voices, four instrumentalists and a software app that uses augmented and mixed reality to tell the story of a modern-day Joan of Arc. Expect exciting insights into the creative process when you experience opera from the ground up at CU NOW! And be sure to check out the CU NOW Composer Fellows' Initiative.
https://colorado.edu/music/summer-college-music/cu-new-opera-workshop-cu-now
This application is a Continuation-In-Part (CIP) of application Ser. No. 09/994,233 filed on Nov. 26, 2001, now U.S. Pat. No. 6,819,156. This invention relates generally to logic circuits commonly known as flip-flops, which are also known as latches and bi-stable multi-vibrators, and more specifically to a differential flip-flop capable of operating at both very high and very low frequencies. Logic circuits can be classified into two broad categories, combinational logic circuits and sequential logic circuits. The basic building block of sequential logic circuits is the flip-flop, also called a bi-stable multi-vibrator or latch. In most cases, logic circuits employ both sequential and combinational logic. FIG. 1 (prior art) depicts an exemplary logic circuit 100 that includes both combinational and sequential logic elements. Logic circuit 100 is a divide-by-five counter with a NAND gate 105 and a NOR gate 106 in a feedback path of a series of differential-input flip-flops 110. Logic circuit 100 receives a pair of complementary clock signals C and Cb, which extend to clock input terminals of each of the flip-flops 110. Circuit 100 produces a pair of complementary clock signals C/5 and Cb/5 with a frequency one fifth that of the input clock signals. The differential nature of circuit 100 allows for higher clock frequencies than would a similar divide-by-five circuit using single-ended sequential logic elements. FIG. 2 (prior art) depicts an embodiment of a differential-input flip-flop 110 for use in circuit 100 of FIG. 1. The operation of flip-flop 110 is commonly understood by those of skill in the art, so a detailed description of flip-flop 110 is omitted here for brevity. If manufactured using commonly available CMOS processes, flip-flop 110 can perform with clock frequencies as high as about 2 GHz. Unfortunately, modern high-speed digital communication systems employ clock and data recovery circuits operating in the 10 Gb/s range. The frequency response of flip-flop 110 is therefore insufficient to meet the needs of some modern systems. Differential return-to-zero-type flip-flops have been developed for high-frequency operation, but some embodiments do not work well at lower frequencies (below about 1 GHz), particularly when the manufacturing process or operating environment (e.g. temperature) changes. Therefore, a high-speed differential flip-flop that overcomes these disadvantages is desirable. A differential flip-flop in accordance with one embodiment has an output stage with a first input terminal, a second input terminal, a first output terminal, a second output terminal, a first voltage supply terminal, a first transistor having a first current-handling terminal connected to the first output terminal, a second current-handling terminal connected to the second output terminal, and a first control terminal connected to a clock signal. A second transistor has a third current-handling terminal connected to the first output terminal, a fourth current-handling terminal connected to the voltage supply terminal, and a second control terminal connected to a first input terminal of the output stage. A third transistor has a fifth current-handling terminal connected to the first output terminal, a sixth current-handling terminal connected to the voltage supply terminal, and a third control terminal connected to the second output terminal.
FIG. 3 depicts a divide-by-five circuit 300 that divides a pair of complementary clock signals C and Cb ("C-bar") by five. Like circuit 100 of FIG. 1, circuit 300 employs differential signaling to improve performance. Circuit 300 differs from circuit 100, however, in that the logic associated with NOR gates 105 of FIG. 1 is incorporated into a single flip-flop 305 adapted to receive two pairs of complementary inputs D0, D0b and D1, D1b. The operation of an embodiment of flip-flop 305 is detailed below in connection with FIG. 6. Flip-flop 305 is described in further detail in U.S. patent application Ser. No. 09/994,233, entitled HIGH-SPEED DIFFERENTIAL FLIP-FLOP, by Michael A. Nix, filed Nov. 26, 2001, the application of which is hereby incorporated by reference in its entirety. Circuit 300 also includes a number of flip-flops 310 that are modified in accordance with the invention to improve speed performance. Finally, circuit 300 includes a differential-to-single-ended converter 315. Flip-flops 310 and converter 315 are described below in connection with FIGS. 4, 5, and 7. FIG. 4 depicts a flip-flop 400 that is an embodiment of flip-flop 310 of FIG. 3. Flip-flop 400 includes an input stage 401, also known as a "master stage," and an output stage 402, also known as a "slave stage." The input stage 401 includes a pair of differential transistors 403 and 405, the control inputs of which are connected to respective complementary data inputs D and Db. The output terminals X1 and X2 of the input stage, which are internal nodes, connect to respective control terminals of a pair of differential transistors 410 and 415 in the output stage 402. Hence, the output terminals X1, X2 of the input stage 401 are the input terminals of the output stage 402. The input and output stages include respective cross-coupled PMOS transistor loads 420 and 425. The input stage 401 includes an n-channel metal-oxide-semiconductor ("NMOS") transistor 430 having one current-handling terminal (e.g. drain) connected to output terminal X1 and the other current-handling terminal (e.g. source) connected to output terminal X2. The control terminal (i.e. gate) of the NMOS transistor 430 is connected to the clock signal Cb. The output stage 402 likewise includes an NMOS transistor 435, one current-handling terminal of which is connected to output terminal Q. The other current-handling terminal is connected to output terminal Qb. The control terminal of the NMOS transistor 435 is connected to clock signal C. In another embodiment, one of transistors 430 and 435 is substituted with a p-channel metal-oxide-semiconductor ("PMOS") transistor, allowing both control terminals associated with transistors 430 and 435 to be controlled by the same clock signal. The output stage 402 also includes cross-coupled NMOS transistors 440, 445. The control terminal of NMOS transistor 440 is coupled to output terminal Qb, and the current-handling terminals are connected to Q and voltage supply terminal Vss. Similarly, the control terminal of NMOS transistor 445 is coupled to output terminal Q, and the current-handling terminals are connected to Qb and Vss. These NMOS transistors add additional hysteresis, and have a very low threshold, typically about 0.15 VT to about 0.2 VT (compared to about 0.5 VT for a standard NMOS transistor), providing a current path to hold the output values Q, Qb down to Vss or up to the voltage generated by the cross-coupled PMOS transistors 425. When Cb is high, X1 and X2 go to almost the same voltage value.
This value (also called the common-mode) can be a level that leaves transistors 410 and 415 almost off. Without the cross-coupled NMOS transistors 440, 445, Qb is held up by one of the cross-coupled PMOS transistors 425, which leaves Q essentially floating, and Qb is pulled high. Having these terminals float is not overly problematic if the "strength" (transconductance) of the PMOS transistors is essentially even with the strength of the NMOS transistors. However, the relative strengths of the n-channel and p-channel devices change with operating temperature and as a result of process variables that arise when the integrated circuits are fabricated. In one instance, a design for a differential flip-flop worked well in integrated circuits fabricated in a 180-micron process (i.e. where the critical dimension for the manufacturing process is 180 microns), but did not work well under some circumstances when the design was adapted for fabrication in a 90-micron process. In some cases, the p-channel devices became stronger than the n-channel devices, and in other cases the n-channel devices became stronger than the p-channel devices. Either case produces common-mode voltage variations. If the common-mode voltage variation becomes too large, an unintentional and undesirable downstream bit flip may occur. The problem becomes more pronounced if serial differential flip-flops are used because each stage can act like a D-C amplifier, increasing the variation of the common-mode voltage. The cross-coupled NMOS transistors 440, 445 slightly increase the input capacitance of the output stage 402, which slows down the high-frequency operation slightly. The improved reliability of operation over temperature and improved yield are more desirable than the slight decrease in maximum operating speed. In a further embodiment, similar cross-coupled NMOS transistors are added to the input stage 401. FIG. 5 is a waveform diagram 500 depicting exemplary signals associated with the operation of flip-flop 400 of FIG. 4. Diagram 500 depicts complementary clock signals C and Cb, data signals D and Db, input-stage output signals X1 and X2, and output terminals Q and Qb. The various node labels refer to both the signal and the corresponding circuit node. Whether a given designation refers to a node or a signal will be clear from the context. Understanding of the waveform diagram 500 is facilitated with reference to the differential flip-flop 400 shown in FIG. 4. Prior to time T0, clock signal Cb is high, so transistor 430 connects output terminals X1 and X2 of the input stage of flip-flop 400. The logic 0 input on the differential terminals D and Db consequently produces only a relatively small voltage difference across terminals X1 and X2. Though limited by the ON resistance of transistor 430, the voltage across terminals X1 and X2 does reflect a logic 0 (i.e. X1>X2). At time T0, clock signal Cb goes low, turning off transistor 430 to disconnect terminals X1 and X2. The voltage between terminals X1 and X2 thus increases, better representing the difference between input signals on terminals D and Db. Also at time T0, clock signal C goes high, causing transistor 435 to connect output terminals Q and Qb. The voltage difference between signals Q and Qb therefore diminishes (i.e. returns toward zero volts difference). Though limited by the ON resistance of transistor 435, the voltage across terminals Q and Qb continues to reflect a logic 0 (i.e. Q<Qb).
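The two-phase behaviour just described can also be summarised at a purely logical level. The Python sketch below is my own behavioural abstraction, not the transistor-level circuit of FIG. 4: the node names follow the figure, and the return-to-zero "shorting" of each stage is reduced to a simple track-then-resolve rule.

```python
class DifferentialFlipFlop:
    """Behavioral sketch of the two-phase operation described above.

    While C is high (Cb low), the input stage tracks D onto the internal
    nodes X1/X2 and the output stage's Q/Qb are pulled toward each other.
    When C falls (Cb rises), the output stage resolves the X1/X2 difference,
    so Q takes the value D held at the falling edge of C.
    """

    def __init__(self):
        self.q = 0          # resolved logic value on Q (Qb is its complement)
        self.x = 0          # logic value on the internal nodes X1/X2
        self.prev_c = 0

    def step(self, c, d):
        cb = 1 - c                         # complementary clock
        if cb == 0:                        # C high: master tracks D
            self.x = d
        if self.prev_c == 1 and c == 0:    # falling edge of C: slave resolves
            self.q = self.x
        self.prev_c = c
        return self.q, 1 - self.q          # Q, Qb


if __name__ == "__main__":
    ff = DifferentialFlipFlop()
    data = [1, 1, 0, 1, 0, 0, 1, 0]
    clock = [1, 0] * len(data)             # one full C period per data bit
    bits = iter(data)
    d = next(bits)
    for c in clock:
        q, qb = ff.step(c, d)
        if c == 0:                         # fetch the next bit after C falls
            print(f"C={c} D={d} -> Q={q} Qb={qb}")
            d = next(bits, d)
```

In this abstraction Q simply follows the value of D sampled at the falling edge of C, which is the net effect of the master and slave phases described in the surrounding paragraphs.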
Next, at time T1, clock signal C returns low and complementary clock signal Cb returns high. Transistors 430 and 435 consequently change states, so that terminals X1 and X2 are once again connected, and terminals Q and Qb are disconnected. In this new state, terminals X1 and X2 begin to approach one another and output terminals Q and Qb swing away from one another to reflect the differential input signals to transistors 410 and 415. Before the differential input terminals D and Db receive a new data bit, the pairs of output terminals X1, X2 and Q, Qb approach one another to limit the maximum amount of voltage swing required to move the differential output signal to the next logic bit. For example, the logic level expressed on output terminal Q from time T2 to time T4 switches from a logical 0 to a logical 1, and the logic level expressed on output terminal Qb from T2 to T4 switches from a logical 1 to a logical 0. These transitions are shown as a maximum voltage swing for each of the output terminals Q and Qb. The time required to make this transition is expedited by beginning to bring Qb lower and Q higher prior to receipt of the data signal indicating the logic transition, shown within the dashed circle 505. The resulting reduction in the maximum voltage swing required to change the logic level expressed on terminals Q and Qb reduces the amount of time required to make logic transitions on terminals Q and Qb. This speeds the logic transitions on the outputs of flip-flop 400 (ref. FIG. 4). As illustrated in the dashed circle 510 between times T4 and T6, the voltage difference between terminals Q and Qb is reduced even if the next data bit turns out to be the same logic level as the one presently represented. This is because flip-flop 400 cannot anticipate the next logic level, and consequently must prepare for either of the two alternatives. Flip-flop 400 therefore requires some amount of time to transition between two logic zeroes or two logic ones. The overall speed of flip-flop 400 increases because the time required to transition between different logic levels is reduced. Reducing the time required for flip-flop 400 to transition between different logic levels translates directly into improved speed performance. Moreover, as compared with flip-flop 110 (FIG. 2), flip-flop 400 has fewer transistors, and can therefore be implemented using less die area. These changes also result in significantly reduced power consumption for a given level of speed performance. However, it was found that some differential flip-flops optimized for high-speed operation sometimes did not work reliably at lower clock rates. In a particular instance, a differential flip-flop that worked at clock rates up to several GHz did not always work reliably (provide the correct output for a known input) at clock rates below about 1 GHz. Typically, a flip-flop is designed to have a nominal common-mode voltage about halfway between VDD and ground. It is desirable to avoid shifting the common-mode voltage, otherwise the common-mode voltage may exceed a voltage associated with a logic state ("trigger voltage") by rising above a trigger voltage for a logical 1 when the true data value is a logical 0, for example, or vice versa. Referring to FIG. 5, Q and Qb are not supposed to respond to input on D and Db until C goes low. At around T2, X1 and X2 cross within the dotted circle 515 and stay close together until a short time after T2. After that, X1 and X2 start to diverge.
At a high-frequency clock rate, X1 and X2 do not have much time to diverge, and hence the difference between them, delta X, remains small. At a lower-frequency clock rate, X1 and X2 have more time to diverge before C goes low, which can cause Q and Qb to change logical states (flip) before T3. The cross-coupled NMOS transistors 440, 445 illustrated in FIG. 4 add additional hysteresis, increasing the time after X1 and X2 cross before they diverge sufficiently to prematurely flip the outputs Q, Qb. As discussed above in reference to FIG. 4, the cross-coupled NMOS transistors 440, 445 provide current paths to hold the outputs Q, Qb up to VDD or down to Vss, respectively. FIG. 6 depicts a differential buffer circuit 600 that is optionally used with high-speed return-to-zero differential flip-flops or other circuits, such as differential clock circuits. A flip-flop might not provide sufficient voltage swing and/or current to drive downstream logic. A buffer circuit is often used to interface with downstream logic. In a particular embodiment, a differential flip-flop operates at about 1.5 V and a differential buffer operates at about 2.5 V. The differential buffer provides essentially rail-to-rail output. The symmetry of the differential flip-flop 400 shown in FIG. 4 maintains the phase relationship of the differential inputs D, Db to the outputs Q, Qb, with respect to the clock signal(s). In many applications it is desirable to maintain differential signals through the buffer. The buffer 600 shown in FIG. 6 is symmetrical and provides differential outputs. For example, the output does not go through an inverter to obtain a complementary output, as is often done in non-symmetrical buffer circuits. The buffer 600 includes cross-coupled NMOS transistors 640, 645 that provide a similar function as the cross-coupled NMOS transistors 440, 445 shown in FIG. 4, in that these transistors 640, 645 provide additional hysteresis and improved low-frequency operation of the buffer 600. In one embodiment, the cross-coupled NMOS transistors 640, 645 allow the buffer 600 to operate down to DC. The buffer 600 also has NMOS transistors 650, 655 that operate as active loads. These transistors 650, 655 are low-threshold devices, and their gates are connected to the primary inputs VI_N, VI_P through capacitors 660, 665. For example, a first primary input VI_N is connected to a first output of a differential flip-flop (e.g. Q of differential flip-flop 400 shown in FIG. 4) and a second primary input VI_P is connected to a second output of the differential flip-flop (e.g. Qb of differential flip-flop 400 shown in FIG. 4), or vice versa, depending on the desired phase relationships. These capacitors are relatively small, each having a capacitance of about 4 femto-Farads, and speed up the operation of the active load NMOS transistors 650, 655 by transferring charge to the gates of the transistors when the primary inputs change state. The gates of the active load NMOS transistors 650, 655 are also biased through resistors 670, 675 to Vs through an optional power-down circuit 680. The optional power-down circuit 680 brings the resistors 670, 675 to ground in response to a power down signal PD to avoid static currents in the buffer 600. The combination of the active load transistors 650, 655 with the speedup capacitors 660, 665 greatly improves the performance of the buffer 600 for a given power consumption.
In other words, the buffer 600 can operate at a higher speed for a fixed power, or can operate on less power at a fixed speed. FIG. 7 shows a flow chart of a method 700 of operating a flip-flop according to an embodiment of the invention. Differential input signals are provided to differential input terminals of a stage of a flip-flop (step 702). A first clock signal is provided to the stage of the flip-flop to couple a first differential output terminal to a second differential output terminal (step 704). A second clock signal is provided to the stage of the flip-flop to de-couple the first differential output terminal from the second differential output terminal (step 706). Current paths from both differential output terminals are provided to both voltage supply terminals of the flip-flop (step 708), so as to avoid floating one or both of the differential output terminals. While the present invention has been described in connection with specific embodiments, variations of these embodiments will be obvious to those of ordinary skill in the art. For example, the invention can be adapted for use with other types of sequential logic elements, such as single-stage latches, toggle flip-flops, J-K flip-flops, AND-input flip-flops, or XOR-input flip-flops. Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance, the method of interconnection establishes some desired electrical communication between two or more circuit nodes, or terminals. Such communication may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 (prior art) depicts an exemplary logic circuit 100 that includes both combinational and sequential logic elements.
FIG. 2 (prior art) depicts an embodiment of a differential-input flip-flop 110 for use in circuit 100 of FIG. 1.
FIG. 3 depicts a divide-by-five circuit 300 that divides a pair of complementary clock signals C and Cb by five.
FIG. 4 depicts a flip-flop 400 that is an embodiment of flip-flop 310 of FIG. 3.
FIG. 5 is a waveform diagram 500 depicting exemplary signals associated with the operation of flip-flop 400 of FIG. 4.
FIG. 6 depicts a differential buffer circuit 600 that is optionally used with high-speed return-to-zero differential flip-flops.
FIG. 7 shows a flow chart of a method of operating a flip-flop according to an embodiment of the invention.
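To build intuition for the divide-by-five counters of FIGS. 1 and 3 without transistor-level detail, here is a hedged behavioural sketch in Python. It is not the patented differential circuit: it simply models three edge-triggered flip-flops plus next-state ("feedback") logic as a textbook modulo-5 counter, so that one output pulse appears for every five input clock periods.

```python
class DFF:
    """Rising-edge-triggered D flip-flop (behavioral)."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def step(self, clk, d):
        if self._prev_clk == 0 and clk == 1:   # rising edge
            self.q = d
        self._prev_clk = clk
        return self.q


def divide_by_five(clock_samples):
    """Yield the output of a mod-5 counter built from three DFFs.

    The three flip-flop outputs encode a state 0..4; combinational
    next-state logic advances the state on each rising clock edge, and
    the output is high for exactly one of every five clock periods.
    """
    ffs = [DFF(), DFF(), DFF()]
    for clk in clock_samples:
        state = ffs[0].q | (ffs[1].q << 1) | (ffs[2].q << 2)
        nxt = (state + 1) % 5                  # next-state ("feedback") logic
        for i, ff in enumerate(ffs):
            ff.step(clk, (nxt >> i) & 1)
        state = ffs[0].q | (ffs[1].q << 1) | (ffs[2].q << 2)
        yield 1 if state == 0 else 0


if __name__ == "__main__":
    # 0/1 samples of a square-wave clock: 20 full periods
    clk = [v for _ in range(20) for v in (0, 1)]
    out = list(divide_by_five(clk))
    # Count rising edges of the divided output: should be 20 / 5 = 4
    edges = sum(1 for a, b in zip(out, out[1:]) if a == 0 and b == 1)
    print("input clock periods:", 20, "output pulses:", edges)
```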
32Si Dating of Marine Sediments from Bangladesh
Appropriate dating tools are essential for paleoenvironmental studies. Cosmogenic 32Si with a half-life of about 140 years is ideally suited to cover the dating range 30-1000 years. Here we have applied scintillation spectrometry for detection of natural 32Si to date marine shelf sediments. High detection efficiency, combined with stable background, allows for the detection of extremely low 32Si specific activities found in such sediments with counting rates below one count per hour. For a sediment core from the Ganges-Brahmaputra delta 32Si dating yields mean sedimentation rates of 0.7 +/- 0.2 cm/yr for 50 to several hundred years BP and 3.1 +/- 0.8 cm/yr for the past 50 years. The four-fold increase of the sedimentation rate over the past 50 years may reflect increased sediment loads in the rivers due to increasing human colonization within the rivers' drainage basins.
- Calibration of Lacustrine Sediment Ages Using the Relationship between 14C Levels in Lake Waters and in the Atmosphere: The Case of Lake Kinneret
The source of endogenic organic and inorganic carbon in lacustrine sediments is the dissolved inorganic carbon (DIC) in the lake water. The relation between the radiocarbon levels of DIC in Lake Kinneret and of CO2 in the atmosphere has been investigated. The ratio of the former to the latter was found to be 0.814 +/- 0.013. This ratio is used for calibrating the age of the sediment according to the natural fluctuations in the atmospheric levels of 14C that occurred during the past 10,000 years.
- Dating of Biodeposits of Oxalates at the Arc de Berà in Tarragona, Spain
This research stems from an earlier study of the lichen covering with oxalate deposits at the Arc de Berà monument. The initial objective of dating these biodeposits opened up other questions concerning the structure of these deposits and how they were formed. Some dating results gave an absolute age greater than the monument itself, which posed various hypotheses on the possible ageing mechanisms.
- Development of Accurate and Reliable 14C Chronologies for Loess Deposits: Application to the Loess Sequence of Nussloch (Rhine Valley, Germany)
Due to very high accumulation rates, loess sequences are best suited archives for the continental paleoclimate of glacial periods. Accurate chronologies cannot be easily established by radiocarbon dating, because of the lack of organic macrorests, the only material for reliable 14C dating so far. A chemical protocol is reported to extract the organic matter of loess (organic carbon content lower than 0.1% by weight) for accelerator mass spectrometry (AMS) 14C dating. Sediments were taken from the loess sequence of Nussloch, for which a large dataset of luminescence ages (TL, IRSL/OSL) is available. The 14C chronology of the organic matter extracted from loess is in good agreement with the corresponding luminescence ages. It allows high resolution correlations with climatic proxy signals (magnetic susceptibility, malacological assemblages, delta-13C on organic matter, etc.) derived from the loess sequence and global environmental proxy records.
- The Cave of Theopetra, Kalambaka: Radiocarbon Evidence for 50,000 Years of Human Presence
The cave of Theopetra is located on the northeast side of a limestone rock formation, 3 km south of Kalambaka (21 degrees 40'46"E, 39 degrees 40'51"N), in Thessaly, central Greece.
It is a unique prehistoric site for Greece, as the Middle and Upper Paleolithic, Mesolithic, and Neolithic periods are present here, bridging the Pleistocene with the Holocene. Several alternations of the climate during the Pleistocene are recognized in its stratigraphy. Among the most striking finds, two human skeletons, one from the Upper Paleolithic period after the Last Glacial Maximum and one from the Mesolithic period, should be emphasized, while in a deep Middle Paleolithic layer, the oldest human footprints, with remains of fire, were uncovered. During the 13 years of excavation, evidence of human activity suitable for radiocarbon dating was collected, such as charcoal samples from hearths and bones from the two human skeletons. The use of proportional counters for the measurement of 14C in combination with the recent improvement of the calibration curve has enabled the production of high-precision reliable ages. Sixty 14C-dated samples, originating from 19 pits and from depths ranging from 0.10 m to 4.20 m, have already provided an absolute time framework for the use of the cave. The earliest limit of human presence probably exceeds 48,000 BP and the latest reaches World War II. Within these limits the 14C dating of samples from consecutive layers, in combination with the archaeological data, permits the resolution of successive anthropogenic and environmental events.
- The Kennewick Skeleton: Chronological and Biomolecular Contexts
A human skeleton recovered near Kennewick, Washington, USA in 1996 has been dated to the early Holocene on the basis of multiple radiocarbon determinations, an analysis of a style of a temporally diagnostic projectile point found embedded in the ilium of the skeleton, and geological investigations of the locality where the skeleton was recovered. Based on morphological criteria, the Kennewick skeleton, which is one of the most complete early Holocene human skeletons recovered so far in the Western Hemisphere, appears to be more similar to those of modern South Asians and Europeans than to modern Native Americans or to contemporary indigenous populations of Northeast Asia.
- Towards a Radiocarbon Chronology of the Late-Glacial: Sample Selection Strategies
This paper outlines a dating program designed to test the reproducibility of radiocarbon dates on different materials of Late-Glacial age (plant macrofossils, fossil beetle remains, and the "humic" and "humin" chemical fractions of limnic sediments) using a combination of radiometric (beta counting) and accelerator mass spectrometry (AMS) techniques. The results have implications for the design of sampling strategies and for the development of improved dating protocols, both of which are important if a high-precision 14C chronology for the Late-Glacial is to be achieved.
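All of the methods summarised above rest on the same first-order decay law: the age follows from the ratio of the present specific activity to the activity at deposition, t = t_half * log2(A0/A). The Python sketch below only illustrates that arithmetic; the 140-year half-life is the approximate value quoted in the first abstract, and the activity and depth values are invented for the example.

```python
import math

HALF_LIFE_SI32_YEARS = 140.0   # approximate value quoted in the 32Si abstract


def decay_age(initial_activity, measured_activity, half_life=HALF_LIFE_SI32_YEARS):
    """Age implied by first-order radioactive decay.

    Solves A = A0 * 2**(-t / t_half) for t.  Works for any radionuclide
    if you pass the matching half-life (e.g. about 5730 years for 14C).
    """
    if not 0 < measured_activity <= initial_activity:
        raise ValueError("measured activity must be positive and <= initial activity")
    return half_life * math.log2(initial_activity / measured_activity)


def sedimentation_rate(depth_cm, age_years):
    """Mean sedimentation rate for a layer of known depth and age."""
    return depth_cm / age_years


if __name__ == "__main__":
    # Hypothetical numbers, only to show the arithmetic:
    age = decay_age(initial_activity=1.0, measured_activity=0.78)
    print(f"age ~ {age:.0f} years")
    print(f"rate ~ {sedimentation_rate(150, age):.1f} cm/yr")
```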
https://repository.arizona.edu/handle/10150/635108/browse?type=subject&value=accuracy
These bacteria are known as extremophiles. This means that they thrive in conditions that are too hostile for most other species. It's not just bacteria at the bottom of the ocean. Weird and wonderful animals feed on the bacteria, and on each other! Tube worms, found surrounding hydrothermal vents, let the extremophile bacteria live inside them. These bacteria provide nutrients for the tube worm and the tube worm provides hydrogen sulphide for the bacteria. It's a good example of symbiosis: a relationship between two organisms where both of them benefit. Loads more species have been discovered in the deep ocean since these hydrothermal vents were found. These include crabs, eyeless shrimp and even an "Eel City". Methanopyrus kandleri is a heat- and salt-loving species of Archaea that makes its home on the chimney walls of smokers. It harvests energy from hydrogen gas and releases methane, a process known as methanogenesis. Hydrogen sulfide is highly toxic to most animals, including people. However, animals at hydrothermal vents have special biochemical adaptations that protect them from hydrogen sulfide. One of these hydrogen sulfide-making species is Pyrolobus fumarii, or "fire lobe of the chimney", which was first isolated from a hydrothermal vent at the Mid-Atlantic Ridge. Pyrodictium abyssi are disc-shaped cells that grow attached to networks of hollow tubes that resemble tree roots. Green sulfur bacteria are unique among hydrothermal vent bacteria because they require both chemical energy from hydrogen sulfide and light energy to survive. Green sulfur bacteria contain chlorosomes, organelles that are so efficient at harvesting light that green sulfur bacteria can grow at much lower light intensities than other light-requiring microbes. There is no sunlight at hydrothermal vents, and instead they capture energy from the faint glow emitted by geothermally heated rock. Extremophiles might have been among the earliest life forms on earth and have possible uses in industry. The oxygen produced is released into the air from the leaves. The glucose produced can be turned into other substances, such as starch and plant oils, which are used as an energy store. This energy can be released by respiration. What is photosynthesis? These are the things that plants need for photosynthesis: carbon dioxide, water and light (a source of energy). These are the things that plants make by photosynthesis: glucose and oxygen. Here is the word equation for photosynthesis: carbon dioxide + water -> glucose + oxygen (in the presence of light and chlorophyll).
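For reference, the word equation corresponds to the familiar balanced equation for photosynthesis, and a commonly quoted textbook summary of hydrogen-sulfide chemosynthesis has a similar overall form. Both are added here as standard simplifications rather than anything stated on the original page:

```latex
% Photosynthesis (light-driven)
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light energy}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}

% Chemosynthesis at hydrothermal vents (one common textbook summary)
6\,\mathrm{CO_2} + 12\,\mathrm{H_2S} \;\longrightarrow\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} + 12\,\mathrm{S}
```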
https://axafiloj.tk/what-plants-use-chemosynthesis.php
is more like CLEAR and WHITE, since our black is the absence of white, which is the relative color of the laser etch point in a typical lit room. In the real world, printing is done on white paper, ink is black, shadows are dark, highlights are light and the world is round. In our world, our paper is clear, the ink is white, shadows don't exist, and white is viewed as substance (the world is still round, just upside down and backwards). Here is an example swatch of how tone is rendered by a laser – remember that the BLACK is clear: continuous tone on top, segmented in the middle, laser dithering below. While we don't have shadows to define elements, we do have contrast between our "colors" – in the example above we can clearly discern the difference between segments on the bottom row, more so the farther apart they are. This is a good example of how textures in an image or model can best be used to enhance the result using our "inks".
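Because the etch "ink" reads as white on a clear substrate, artwork is often prepared by inverting the tones of a normal image before it goes to the laser. The sketch below uses the Pillow imaging library purely as an illustration of that idea (the gradient is stand-in artwork, and the dither step is only an analogy to the bottom row of the swatch described above):

```python
from PIL import Image, ImageOps

# Stand-in artwork: a 256x256 grayscale gradient (black to white).
art = Image.linear_gradient("L")

# On paper, dark pixels are ink; on a clear substrate the laser's marks
# read as white, so the tones are inverted before etching.
etch_ready = ImageOps.invert(art)

# Optional: reduce continuous tone to a 1-bit dither, similar to the
# "laser dithering" row in the swatch described above.
dithered = etch_ready.convert("1")   # Floyd-Steinberg dithering by default

etch_ready.save("etch_ready.png")
dithered.save("etch_dithered.png")
```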
http://www.sharpeawards.com/portfolio/wisdom/black-and-white
According to the National Crime Information Center (NCIC), there were a total of 627,911 missing person records in 2013. By the end of the year, 84,136 of these cases were still unsolved (active). Statistics also indicate that in 2013, there were 5,000 reported unidentified persons in the United States. A century ago most of these cases would have gone unsolved. However, modern advances in the fields of science and technology make it possible to solve missing or unidentified person cases. Today, law enforcement agencies use DNA testing to identify individuals or even as forensic evidence in tracking down missing persons.
Types Of DNA Tests Used To Obtain Forensic Evidence
i. Short Tandem Repeats (STRs) STRs are present in genomes – the complete set of genetic material in a cell or organism. Analysis of 15 or more of these STRs located in the non-sex chromosomes is used to ascertain family relationships with greater precision and accuracy. STR analyses are not always successful, particularly after the degradation of human remains. However, it is possible to use mini-STRs with more success when dealing with degraded DNA.
ii. Mitochondrial DNA Mitochondrial DNA testing uses DNA contained in the mitochondria. It is inherited from the mother alone. This testing makes it possible for experts to match remains of unidentified persons with their maternal grandmothers, aunts or uncles as well as with siblings.
iii. Sex Chromosome DNA Tests These are used to make connections between unidentified persons and relatives from the paternal line of the family. Experts use STRs located on the Y chromosome. However, just like mitochondrial DNA profiling, this kind of test is not very accurate because the Y chromosome is not unique.
iv. Single Nucleotide Polymorphisms (SNPs) DNA tests that use SNPs are useful in situations when there is a severe degradation of forensic evidence.
Standard Process Of DNA Analysis
There are six standard steps involved in a conventional DNA analysis process. These include:
i. Retrieval, which involves the collection of forensic evidence, storage of remains and extraction of DNA from human remains or the forensic evidence collected.
ii. Secondly, the DNA is quantified.
iii. Multiple copies of the DNA are then produced so that it can be characterized. This step is known as amplification of the DNA.
iv. Once the DNA is amplified, it is separated to allow for identification.
v. DNA samples are compared with known DNA profiles in a qualitative and quantitative manner. This was the purpose behind the establishment of the National DNA Index System (NDIS) in 1998. NDIS is part of a larger database known as the Combined DNA Index System (CODIS) used by all the different law enforcement agencies in the United States. Experts can use DNA profiles such as those from convicted offenders, unknown individuals from other forensic evidence, missing persons, biological relatives of the disappeared (contributed voluntarily) and remains of unidentified persons. The information from the analysis is then used to solve missing person or unidentified person cases.
vi. Finally, the final report is reviewed for technical accuracy.
Interpretation Of Results
DNA testing can yield three types of results.
Inclusion is when the DNA profile of a suspect or an unidentified person is consistent with forensic evidence from the crime scene or with known relatives. Exclusion is when the DNA profile is not consistent with DNA from crime scene evidence or the purported relative. Lastly, DNA testing could yield inconclusive results for several reasons, including when the DNA sample is mixed.
Advantages Of Using DNA As Evidence
Using DNA testing to identify missing persons or solve active missing person cases has several benefits:
i. Firstly, experts can test any DNA-containing biological trace evidence. This is because the composition of DNA is the same in all the cells of the body. It is, therefore, possible to test evidence such as hair, semen, skin, bone marrow, blood, teeth or even saliva among other biological materials.
ii. Secondly, DNA testing is made effective by the fact that everyone is genetically different except for identical twins. This makes it more precise than tests such as blood typing. It also makes it possible for law enforcement to link crime scenes with specific suspects.
iii. DNA tests do not require a huge amount of biological material for them to work. A single strand of hair or blood stain is enough as forensic evidence.
iv. DNA degrades particularly slowly in hard tissues such as bones or teeth. This means that in cases where the only available specimens are old or degraded, it is still possible to use DNA testing to identify a missing person. This is a significant benefit especially in situations such as war or catastrophes.
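As a purely illustrative sketch of the inclusion/exclusion/inconclusive outcomes above (real forensic matching uses validated loci and statistical weighting, not a simple count), an STR profile can be treated as a mapping from each locus to a pair of allele repeat counts and compared locus by locus. The locus names below are real CODIS loci, but every allele value and threshold is made up for the example.

```python
# Toy STR profiles: locus name -> unordered pair of allele repeat counts.
reference = {"D3S1358": (15, 17), "vWA": (14, 18), "FGA": (21, 24), "TH01": (6, 9)}
evidence  = {"D3S1358": (15, 17), "vWA": (14, 18), "FGA": (21, 24), "TH01": (6, 9.3)}


def compare_profiles(a, b):
    """Return (matching loci, compared loci) for two STR profiles."""
    shared = [locus for locus in a if locus in b]
    matches = sum(1 for locus in shared if set(a[locus]) == set(b[locus]))
    return matches, len(shared)


def interpret(matches, compared, min_loci=4):
    """Map a naive comparison onto the three outcomes described above."""
    if compared < min_loci:
        return "inconclusive (too few shared loci)"
    if matches == compared:
        return "consistent with inclusion"
    return "exclusion at one or more loci"


if __name__ == "__main__":
    m, n = compare_profiles(reference, evidence)
    print(f"{m}/{n} loci match ->", interpret(m, n))
```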
https://pranadevi.net/finding-missing-or-unidentified-persons-using-dna-testing/
Financial statements provide information about the financial performance and position of an organization. Stakeholders generally use four major financial statements, found in the Annual Report of a company.
- The Statement of Financial Position
- The Statement of Comprehensive Income
- The Statement of Changes in Equity
- The Cash Flow Statement
In this article, we will examine these separately.
1. The Statement of Financial Position
Imagine you are considering investing in some common shares of a company called ABC Corporation. How do you know whether this is a good investment? The first thing you do is check its Statement of Financial Position, also known as the Balance Sheet. In short, it summarizes what ABC Corporation owes and owns at a certain date. Think of it as a "snapshot" of the company's financial position as at a given point in time. Most firms prepare their Statement of Financial Position as at 31st of December each year. Normally, such statements are prepared for two consecutive periods – the current year and the comparative previous year. Any Balance Sheet consists of three main elements. First, you see Total Assets, which represent the resources controlled by the enterprise. Second, there are Total Liabilities, or the amount of money owed to lenders and other creditors. This is the debt of the company, resulting from past events and business activities. And third, you notice an item called Total Equity. This is the owners' residual interest in a company's assets after deducting its liabilities. The Statement of Financial Position is based on the basic accounting equation: Total Assets – Total Liabilities = Owners' Equity. The difference between Assets and Liabilities is sometimes referred to as net assets or net worth. In all instances, Total Assets must reconcile with Total Equity and Liabilities. This holds true in ABC Corporation's Balance Sheet. Total Assets are $15,078 million; Total Equity, or net worth, is $10,162 million; and Total Liabilities are $4,976 million. There are many line items included in any Balance Sheet that you can read more about here.
2. The Statement of Comprehensive Income
The Statement of Comprehensive Income, commonly known as the Profit and Loss Statement, tells us whether an organization generates a profit or a loss for a period of time. Furthermore, it allows for an indication of important trends, such as revenue growth and the incidence of gross profit, operating profit, and net profit on revenues. Thus, it helps us understand whether the operations of a firm created economic value over a certain timeframe. Typically, the Statement of Comprehensive Income is prepared on an annual basis. However, most publicly-traded companies produce a Profit & Loss Statement quarterly or for a 6-month period. In general, it has two primary elements. First – the Income Statement. All revenues and expenses that stem from the normal course of business operations are recorded here. The bottom line of the Income Statement is the Net Income for the period. And second – the Statement of Other Comprehensive Income, which reports all income and expense items that are not recorded in the Income Statement but affect Owners' Equity.
3. The Cash Flow Statement
Given that "profit" and "cash generation" are two different things, you need a statement that illustrates any changes in the cash balances during a given period. In this way, you can evaluate a company's liquidity more precisely.
The Cash Flow Statement compiles cash and cash equivalents that have been generated (cash inflows) and spent (cash outflows) within a fiscal period. Based on the specific business activities for which a firm pays or receives cash, the Cash Flow Statement consists of three major sections. These are operating activities, investing activities, and financing activities. Cash flow statements are prepared using either the Direct or Indirect Method. For more information, please read here. Firms that seemingly generate profits but are not liquid enough to maintain their operations are at risk of going out of business. Therefore, analyzing a company's Cash Flow Statement is an inextricable part of thorough financial statement analysis.
4. The Statement of Changes in Equity
Finally, the Statement of Changes in Equity summarizes the changes in Owners' Equity that have occurred during a certain period. If ABC Corporation issued some share capital or distributed dividends to its stockholders, one can see that in the Statement of Changes in Equity. A word of caution! To understand why Equity has changed this way, we should take a step back and review the Statement of Comprehensive Income again.
Interrelationship of Financial Statements
All 4 financial statements are interrelated! Users must look at them jointly. Business transactions are intricate, and they influence many items in the financial reports simultaneously. For example, the profit figure for the year appears in both the Income Statement and the Statement of Changes in Equity. In another instance, if an enterprise raises additional share capital, this will modify its Statement of Changes in Equity and its Statement of Financial Position concurrently.
Users of Financial Statements
Financial statements are broadly used by many stakeholders. Customers, suppliers, potential investors, employees, and creditors are just a few of them. Those directly concerned with the financial health of a particular business analyze its Annual Report to obtain relevant data. Scholars incorporate essential information in their business studies and economic analysis; governments examine the reports when tailoring regulations; even authorities turn to them when dealing with labor disputes. From a broader perspective, financial statements provide for a more accurate assessment of political and social policies. All in all, a well-prepared set of financial reports is at the heart of a high-quality financial analysis! Stay with us for more useful articles! If you want examples of some of the major financial statements in Excel, take a look at our Statement of Financial Position Excel template and Statement of Comprehensive Income Excel template.
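The basic accounting equation described above can be sketched in a few lines of code. The figures below are invented for illustration and are not ABC Corporation's; the point is only that a balance sheet reconciles when total assets equal total liabilities plus total equity.

```python
def balances(total_assets, total_liabilities, total_equity, tolerance=0.0):
    """Check the basic accounting equation:
    Total Assets = Total Liabilities + Total Equity
    (equivalently, Assets - Liabilities = Owners' Equity, the net worth)."""
    return abs(total_assets - (total_liabilities + total_equity)) <= tolerance


if __name__ == "__main__":
    # Invented figures in millions, purely to show the arithmetic.
    assets, liabilities, equity = 15_000, 5_000, 10_000
    net_worth = assets - liabilities
    print("net worth:", net_worth)                       # equals equity here
    print("balance sheet reconciles:", balances(assets, liabilities, equity))
```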
https://365financialanalyst.com/knowledge-hub/accounting/the-four-major-financial-statements/
Quantifying the Economic Impact of the Grand Ethiopian Renaissance Dam on the Nile River Basin; Politics of the Grand Ethiopian Renaissance Dam: The Role of Egyptian Farmers.
O'Neill, Adam, School of Engineering and Applied Science, University of Virginia
Lakshmi, Venkataraman, EN-Eng Sys and Environment, University of Virginia
The danger of water scarcity and subsequent human harm in Egypt and Sudan due to the filling of Ethiopia's Grand Ethiopian Renaissance Dam (GERD) motivates this research. The project team reviewed the conditions of the Nile through data analysis to determine potential negative effects of the GERD on downstream economies and well-being. The dam affects a complex transboundary political arena, which necessitates careful consideration of the potential for unrest resulting from the human consequences of water scarcity. The region is already experiencing volatility from civil wars and political unrest, which the dam could serve to exacerbate as it is used as a political pawn. The dam is a highly political asset to Ethiopia, so the understanding of conflict and assessment of harm to Egypt is better viewed through the lens of techno-politics. Techno-politics addresses the inherent power dynamics imputed to technologies. The method employed to research this conflict was a case study on Egyptian farmers as a social group that has a unique positioning within the water-management community and Egyptian political balance. Findings illustrate the vulnerabilities of Egyptian farmers' livelihoods to the operation of the GERD and its inherent politics, as well as illuminate the specific implicit political power afforded to Egyptian farmers within the broader Nile system. With data to show the real-world effects of the filling of the GERD coinciding with the social and political fallout of those effects, the impact of this research is to show the benefits of cooperative transboundary river management, as well as the danger of non-cooperation to marginalized social groups such as Egyptian farmers.
https://libraetd.lib.virginia.edu/public_view/z316q2535
Fire remains at 6 acres in Mount Zirkel Wilderness Area
Steamboat Springs — The South Fork Fire burning in the Mount Zirkel Wilderness Area this week still was an estimated 6 acres in size Friday. A helicopter that was ordered to help contain the fire was released so it could be used at a higher priority incident. Other aircraft still are being used to monitor the fire. Closures remain in effect for the Wolverine Basin Area and North Lake Trail, which is Forest Trail 1164. The potential for the fire to grow in size is low. To reach Matt Stensland, call 970-871-4247, email [email protected] or follow him on Twitter @SBTStensland
https://www.steamboatpilot.com/news/fire-remains-at-6-acres-in-mount-zirkel-wilderness-area/
That was my recurring thought in my 30s as I looked at the way my life was unfolding. I felt stuck, stifled and oh-so-aware of the passing of time, but I was unsure how to bring about the changes I needed to make to my life. Does that sound familiar? And I was confused too, because I had done everything "right". I studied hard, got my degree and a "proper" job, ditching any notions I'd previously had about being a writer or owning my own business. That was something that "other people" did; there's no way I would be able to do it… I stuck to the sensible plan, assumed that one day it would all feel right and that everything would fall into place. But it never really did. After I had my twin boys, I knew my life wasn't set up in a way that allowed me to do what mattered to me most - look after my family, build something worthwhile for my children and do something I enjoyed at the same time. It's not that becoming a mum changed my priorities, it just gave me a reason to live them. The penny finally dropped with me that if I wanted to change something, I could learn how to change it. I took writing courses and workshops to set me on the path to writing fiction; meanwhile, I turned to a blank notebook to write about my real life too. Call it what you will – a notebook, a journal, a diary – it was the place I worked out what I wanted to do and then planned a way to take action. Writing in this way helped me gain clarity and inspiration, and then, more importantly, take positive actions based on what mattered to me most. It was this work that enabled me to change the path I was on, focus on writing and start my own business. I developed an ongoing practice of journaling for self-development and now my journal is my lifelong, life-enhancing companion for clarity, creativity, support and success. As a qualified HR specialist with nearly 20 years' experience, I know and I've seen the benefits when people learn, grow and change, not only for themselves. When we achieve our potential, we can give so much more to our family, our work, our relationships and the wider world. If you have the feeling that turning the spotlight on yourself for a while is selfish or self-indulgent, please let go of it now. It really is a win-win. In fact, I'd even go further than that. I believe that trying to achieve our personal vision of success and be happy is the best way we can demonstrate gratitude for the life we've been given. I'm not going to pretend for a second that it's easy (I've shed lots of tears), nor that I'm exactly where I want to be yet or have it all figured out (nope), but I've come a long way. Making these positive changes in my life is definitely up there as one of the better decisions I've made and I've finally figured out that this is what I'm meant to be doing. Hooray for that! When I left my day job in 2016, I was ready to take control of my career and build my business round my family but, as so often happens, life had other plans. Around six months into my business, I lost my wonderful dad. He died one night unexpectedly and his passing left a massive hole in our lives. I thought I was doing OK. My practical side kicked into gear. My dad had always been the capable one and I was determined to pick up the mantle. "I'll do it, that's no problem," became my overused phrase. It took me a year to hit a metaphorical wall. All the time I thought I was doing well but, really, I wasn't.
I was trying to make things better for my mum and my children, trying to work on my business, trying to get back to "normal," but all the time I felt that I was operating from under a pile of bricks. It took me too long, but I finally went to my doctor and got some help by way of anxiety medication. It's given me the breathing room to start to mainstream healthier practices in my life again, and for me that included a return to writing and journaling. It's easy to become jaded and get lost in the day-to-day minutiae, whether it's work or life in general, but I reckon the only way to extract something positive from a sad life event is not to lose the clarity about what's important that accompanies it. To me what's important is the people around me and how I can positively impact their lives. I decided that the best way to show my gratitude for the life I've been given is to learn, grow, develop and try to be the best I can be, so that I have the capacity to give more to those around me and, if possible, the wider world. I've always believed "people are the most important thing" (you don't – or shouldn't – spend 20 years in HR and not believe that) and I wanted to combine my knowledge, skills, loves and experience to see what I could create that would be useful to the people with whom my story resonates. And here we are! Am I going to tell you that journaling solves everything for me? No, of course not. For example, did journaling help me when Dad died? Honestly, no, not immediately. I couldn't face what was happening nor how I felt about it, so the last thing I wanted to do was write it down. It was about six months before I mentioned Dad in my journal other than obliquely, but this is common. It's well documented that journaling best helps us process difficult feelings and events once we have some distance from them. But, here's the thing. It's not all about easing the difficult times. Since I started focusing on writing, both fiction and journaling, I've experienced so many positive changes that I now take every opportunity to find out more about how these activities can be used to enhance our daily lives. I started Elle Turner Creative because I'm on a mission to help us create a life round what matters to us most, achieve our own personal vision of success and fulfillment and have a positive impact on our world, all while bringing more fun and creativity into our lives. I see this as a lifelong practice that should be FUN in its progress. It's about working to continually achieve a vision of personal success that will likely keep evolving. That we can keep changing, growing and developing is one of the joys in life. And isn't it even better to know that by doing so, we can spread that joy around? That's what I'm constantly aiming for and it's writing that helps me progress. If this sounds like something you're aiming for too, then I'm glad we've found each other. I understand that striving for personal growth can be a challenging process, which is why much of my Success Journaling practice focuses on internal work - our mindsets, what we want that's personal to us, where we want to place and preserve our energy and how we can actively support ourselves from the inside out. If that sounds good to you, get your FREE copy of How to stop feeling guilty for wanting something more from life. This guide and the journaling exercises inside are designed to help you make an important mindset shift that will help you as you embark on your personal development.
https://elleturnercreative.com/pages/my-story-2
Trees:
- Pest Control: Continue to spray trees with a lightweight horticultural oil (dormant oil) if they are infested with scale. Spray several times, about 10 days apart.
- Pruning: Trees planted a year or more ago generally need some pruning. Do not prune pines in March, as they tend to bleed more at this time of the year.
Lawns:
- Planting: It is still too early to plant warm-season grass seed, but you can lay sod.
- Fertilizing: Warm-season grasses are just waking up and reestablishing a strong root system. Wait until early April to apply the first fertilizer application.
- Pest Control: Apply a weed-preventing pre-emergent to help prevent summer weeds.
- Mowing: If sufficient growth occurs, you should mow. Do not mow the grass especially short the first few times. Scalping the lawn is not recommended and can lead to a weakened root system.
- Watering: If needed, water thoroughly. This encourages roots to grow deep.
Annuals:
- Planting: The weather has warmed up and you want to plant annuals. It is a tricky time of year – a last freeze or heavy frost can damage any warm-weather annuals you plant. Frost cloth can help.
- Fertilizing: If it has been over 6 weeks since your last application of granular fertilizer, or over 2 weeks since you last used a soluble fertilizer, you might want to do so now.
- Pest Control: Slugs and snails can be a problem – use an aid to keep them under control.
- Watering: Dry conditions can occur during this month; if so, water deeply and thoroughly whenever irrigation is needed.
Perennials:
- Planting: Perennials may not look like much when you purchase them this month. Just imagine how they will look when they bloom in 2 to 3 months – or even at the end of the summer.
- Transplanting: Finish up transplanting and dividing perennials as soon as possible.
- Fertilizing: Established perennials should be fertilized this month.
- Pest Control: Snails and slugs may be active; get an early start on control and don't let their populations build. Control aphids and caterpillars with approved aids.
- Pruning: Not much pruning is necessary during active growing periods.
- Watering: Apply water slowly over time with a sprinkler or soaker hose to ensure a deep, thorough watering when adequate rain is not received.
Bulbs:
- Planting: Wait until April to plant summer-flowering bulbs.
- Care: Remove faded flowers and developing seedpods from spring-flowering bulbs that are to be kept for bloom next year. Do not remove any of the green foliage, and fertilize them.
- Watering: It is usually not necessary to do much watering of spring-flowering bulbs, but dry weather occasionally makes it necessary.
- Pest Control: Snails, slugs, and caterpillars may chew holes in the leaves of flowers – use pest control aids. If you see toads in the garden, remember they are excellent predators of slugs.
Roses:
- Planting: Continue to plant roses purchased in containers. It is too late to plant bare-root roses after the first of this month.
- Transplanting: This is not the best time to transplant roses.
- Fertilizing: Fertilize when new growth begins. Use a premium-quality rose fertilizer.
- Pest Control: Regular applications of blackspot treatment should begin as soon as new leaves begin to open.
- Pruning: Prune as early in the month as possible if you did not do so in February. Pruning late will not hurt your bushes, but your roses will bloom later.
- Watering: Watch the rainfall and water plants if necessary.
Shrubs:
- Planting: Spring months are fun months to plant new shrubs in the garden. It is a good time to replace any shrubs that were damaged through the winter.
- Transplanting: Although most shrubs are coming out of dormancy, you can still safely transplant since the temperatures are still on the mild side.
- Fertilizing: This is the month to fertilize shrubs. Apply fertilizer evenly around the plants. It can be applied directly over the mulch, or you can pull back the mulch, apply the fertilizer, and replace the mulch.
- Pest Control: Aphids make their appearance on the new growth of a variety of shrubs this time of year. Also, leaf spot diseases may attack new growth on some shrubs.
- Pruning: Finish up pruning summer-flowering shrubs by the end of this month. Do not be too hasty to prune growth that has been damaged by winter freezes. Wait and watch shrubs as they begin to grow, and look at which branches are sprouting. Some branches that lost all of their leaves due to freezing weather may look dead but aren't.
- Watering: Newly planted shrubs will require watering in. Use of a root stimulator when planting shrubs helps them establish faster prior to the arrival of the heat of summer.
https://thgc.net/march/
Merry Christmas! It has been a fun and special Christmas Eve and Christmas morning at our house, especially since Brielle is into Christmas this year and understands the holiday. Of course the twins loved getting new toys too! The past two nights have been late ones for Justin and me, as assembly is required for EVERYTHING these days. If Santa could send some elves for assembly work next year, that would be awesome. It was worth it all though, to see the look on their surprised and excited faces this morning! The kids are now having fun this morning playing outside. It has been very warm here in Texas this Christmas season. Yesterday it was in the mid-seventies, which was quite nice for us, since trampolines can't be assembled inside. Last night Justin grilled steaks outside while the kids played and we sat in the outdoor patio watching Christmas movies. After dinner and baths we sat around the Christmas tree and each child opened one gift before bed. The boys opened their dump truck and digger truck. They wouldn't let go of them, so they slept with them last night. Before they went up to their beds, Justin read the Christmas story from the Bible. He read it right from the ornament I had made that had all the scriptures printed on it: http://livingjoydaily.com/2015/12/16/holiday-ornament-with-a-purpose/ Our church (Gateway Church) does their Christmas candlelight service the weekend before Christmas, so we had already attended that last weekend. Last year, we went to a candlelight Christmas Eve service at White's Chapel (the church where I attend MOPS). We decided not to go to it this year because there is no childcare and kids must be able to sit through the 1-hour service. Our twins are not that patient, although how many two-year-olds are patient? We will try to take them next year on Christmas Eve. At least we got to attend the candlelight service at our church last weekend. Here are some of the photos from Christmas Eve. Of course the kids all had matching new PJs too. Brielle set out cookies and eggnog for Santa after the twins went to bed. We ate all the homemade cookies we had, so Santa got Vanilla Wafers we found in the pantry. He didn't mind. They were all eaten by morning. Christmas morning came early. Justin was up and showered at 6 AM. We had the Christmas music playing and everything set up for the kids before we went upstairs to get them. They were already awake and ready to celebrate Christmas. We said a prayer together thanking God for the birth of Jesus, which is the best gift of all. Then we headed downstairs and the fun of new toys began! Charlie and Alex's favorite gift? The train set and train table. I glued the set down to the train table using E6000 glue, so the tracks are now permanently in place. If I hadn't done that, every piece from the set would now be on the floor. We didn't want to see that happen, since it took 2 hours to assemble the set. Great instructions were provided, just tedious work. We plan to enjoy the rest of our Christmas day outside. Although I know the twins will want to get back inside at some point to play with the trains. For now they can wear themselves out on the trampoline and chasing behind Brielle's car as she races around the yard.
https://livingjoydaily.com/2015/12/25/christmas-at-our-house-traditions/
Inflation has emerged as one of the hottest topics in economics and real estate this year as consumer prices continue to rise at the fastest pace in decades. Elevated costs are evident across nearly every corner of the U.S. economy, raising the level of anxiety among businesses, households and policymakers about the prospects for economic growth. Higher input costs incurred during the production process are passed on to wholesalers and ultimately consumers, who can end up bearing the brunt of more costly goods. The average American currently spends an additional $276 per month on everyday items such as groceries and gas compared to what they would be spending if inflation were still trending at the 2018-2019 rate.1 Sharp increases in the price of goods and services can reduce consumer demand and set off a chain of events that lead to a broader economic recession, especially given that U.S. consumer spending accounts for about 70% of GDP. Retailers are at the forefront of this potential cycle, so they will want to keep a close eye on inflation and its impact on consumer spending. The notion that inflation is inherently bad for the economy is somewhat of a misconception; the Federal Reserve and most other central banks consider moderate and stable levels of inflation a feature of a healthy functioning economy in which supply and demand expand roughly in tandem. An inflation rate of around 2% in the U.S. is considered the sweet spot and is unlikely to gain attention of the media or public. However, too little inflation — the likes of which we saw in the years following the Global Financial Crisis — could be a signal of weak demand and suboptimal economic growth. By contrast, the high-inflation situation we are in suggests that the economy is too “hot” for production to keep up, leading to a scarcity of goods and services. As of January, inflation as measured by the Consumer Price Index (CPI) is currently running at 7.5% on a year-over-year basis, the highest rate in 40 years (see chart below: Consumer Price Index).
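As a back-of-the-envelope illustration of where figures like these come from, here is a short sketch (Python) of how a year-over-year CPI inflation rate and an "extra monthly spend" estimate are typically calculated. The index levels and the household budget in the example are hypothetical placeholders, not the data behind the numbers cited above.

```python
# Illustrative only: generic year-over-year inflation and "extra spend"
# arithmetic. The index values and budget below are hypothetical and are
# not the article's underlying data.

def yoy_inflation(cpi_now: float, cpi_year_ago: float) -> float:
    """Year-over-year inflation rate, in percent."""
    return (cpi_now / cpi_year_ago - 1) * 100

def extra_monthly_spend(monthly_budget: float, actual_rate: float, trend_rate: float) -> float:
    """Extra monthly cost versus what the same basket would cost if prices
    had risen only at a lower trend rate over the past year."""
    return monthly_budget * (actual_rate - trend_rate) / 100

if __name__ == "__main__":
    # Hypothetical index levels chosen to give roughly a 7.5% reading.
    print(f"YoY inflation: {yoy_inflation(281.1, 261.5):.1f}%")
    # Hypothetical $5,000/month budget, 7.5% actual vs. a ~2% trend rate.
    print(f"Extra monthly spend: ${extra_monthly_spend(5000, 7.5, 2.0):.0f}")
```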
https://cw-prod-emeagws-a-cd.azurewebsites.net/en/united-states/insights/how-inflation-is-impacting-us-consumers
Let's now begin to focus more specifically on the role and importance of an individual's personality as it relates to work outcomes and their various forms. Personality is the relatively stable set of psychological attributes that distinguish one person from another. A longstanding debate among psychologists—often expressed as "nature versus nurture"—concerns the extent to which personality attributes are inherited from our parents (the "nature" argument) or shaped by our environment (the "nurture" argument). In reality, both biological and environmental factors play important roles in determining our personalities. Although the details of this debate are beyond the scope of our discussion here, managers should strive to understand basic personality attributes and how they can affect people's behavior and fit in organizational situations, not to mention their perceptions of and attitudes toward the organization.
The "Big Five" Framework
Psychologists have identified literally thousands of personality traits and dimensions that differentiate one person from another. But in recent years, researchers have identified five fundamental personality traits that are especially relevant to organizations. These traits, illustrated in Figure 3.1, are now commonly called the "Big Five" personality traits. As suggested by the figure, the personality of any given person can fall anywhere along each of these five traits.
Figure 3.1: "Big Five" personality traits
The "Big Five" personality framework is currently very popular among researchers and managers. These five dimensions represent fundamental personality traits presumed to be important in determining the behaviors of individuals in organizations. In general, experts agree that personality traits closer to the left end of each dimension are more positive in organizational settings, whereas traits closer to the right are less positive. Agreeableness refers to a person's ability to get along with others. Agreeableness causes some people to be gentle, cooperative, forgiving, understanding, and good-natured in their dealings with others. A lack of it results in others being irritable, short-tempered, uncooperative, and generally antagonistic toward other people. Researchers have not yet fully investigated the effects of agreeableness, but it seems likely that highly agreeable people are better at developing good working relationships with coworkers, subordinates, and higher-level managers, whereas less agreeable people are not likely to have particularly good working relationships. The same pattern might extend to relationships with customers, suppliers, and other key organizational constituents. Conscientiousness refers to the extent to which a person can be counted on to get things done. Some people, for example, are organized, detail-oriented, responsible, dependable, and plan carefully in order to meet deadlines. These individuals can be characterized as being strong on conscientiousness. Less conscientious people may be prone to missing deadlines, overlooking various tasks, being unorganized, and being generally less dependable. In general, research suggests that being strong on conscientiousness is often a good predictor of job performance for many jobs. Photo caption: Conscientiousness refers to the extent that a person can be counted on to get things done. This group is acknowledging the work of one of their colleagues and his efforts to help them complete a project on time. He most likely has a high level of conscientiousness.
Further, given how his colleagues seem to genuinely like him, he most likely also has a high degree of agreeableness. The third of the Big Five personality dimensions is neuroticism. People who are relatively more neurotic tend to experience unpleasant emotions such as anger, anxiety, depression, and feelings of vulnerability more often than do people who are relatively less neurotic. People who are less neurotic are relatively poised, calm, resilient, and secure; people who are more neurotic are more excitable, insecure, reactive, and subject to extreme mood swings. People with less neuroticism might be expected to better handle job stress, pressure, and tension. Their stability might also lead them to be seen as being more reliable than their less stable counterparts. Extraversion reflects a person's comfort level with relationships. Extroverts are sociable, talkative, assertive, and open to establishing new relationships. Introverts are much less sociable, talkative, and assertive, and more reluctant to begin new relationships. Research suggests that extroverts tend to be higher overall job performers than introverts and that they are more likely to be attracted to jobs based on personal relationships, such as sales and marketing positions. For this particular trait, the opposite version is also given a name—introversion. An introvert tends to be less comfortable in social situations. Finally, openness reflects a person's rigidity of beliefs and range of interests. People with high levels of openness are willing to listen to new ideas and to change their own ideas, beliefs, and attitudes in response to new information. They also tend to have broad interests and to be curious, imaginative, and creative. On the other hand, people with low levels of openness tend to be less receptive to new ideas and less willing to change their minds. Further, they tend to have fewer and narrower interests and to be less curious and creative. People with more openness might be expected to be better performers due to their flexibility and the likelihood that they will be better accepted by others in the organization. Openness may also encompass a person's willingness to accept change; people with high levels of openness may be more receptive to change, whereas people with little openness may resist change. The Big Five framework continues to attract the attention of both researchers and managers. The potential value of this framework is that it encompasses an integrated set of traits that appear to be valid predictors of certain behaviors in certain situations. Thus, managers who can both understand the framework and assess these traits in their employees are in a good position to understand how and why they behave as they do. On the other hand, managers must be careful not to overestimate their ability to assess the Big Five traits in others. Even assessment using the most rigorous and valid measures is likely to be somewhat imprecise. There are also times when using more specific personality traits to predict outcomes such as turnover or performance is more useful than relying on the more general Big Five traits, because the more specific trait more directly influences the intended outcome.
For example, if you are trying to hire a strong team player for a diverse creative team, individual differences such as a preference for teamwork and other-group orientation (a preference for working with diverse others) may outperform any of the Big Five traits in predicting performance. Another limitation of the Big Five framework is that it is primarily based on research conducted in the United States. Thus, generalizing it to other cultures presents unanswered questions. Even within the United States, a variety of other factors and traits are also likely to affect behavior in organizations.
The Myers-Briggs Framework
The Myers-Briggs framework is another popular approach that some people use to characterize personality. Many people know of this framework through a widely used questionnaire called the Myers-Briggs Type Indicator, or MBTI. More than 2 million people worldwide take the self-assessment inventory every year. It is based upon Carl Jung's work on psychological types. Psychologist Carl Jung was a contemporary of Sigmund Freud and a leading exponent of analytical psychology. The MBTI was first developed by Isabel Briggs Myers (1897–1979) and her mother, Katharine Cook Briggs, to help people understand themselves and each other so that they could find work that matches their personality. They put Jung's concepts into everyday language. Isabel Myers's 1980 book Gifts Differing, and her philosophy of celebrating individual differences, encouraged the workplace diversity movement. The MBTI uses four scales with opposite poles to assess four sets of preferences. The four scales are:
1. Extroversion (E)/Introversion (I): Extroverts are energized by things and people. They are interactors and "on the fly" thinkers whose motto is, "ready, fire, aim." Introverts find energy in ideas, concepts, and abstractions. They can be social, but also need quiet time to recharge their batteries. They are reflective thinkers whose motto is, "ready, aim, aim." Do you like to focus on the outer world (extroversion) or on your own inner world (introversion)?
2. Sensing (S)/Intuition (N): Sensing people are detail oriented. They want and trust facts. Intuitive people seek out patterns and relationships among the facts they have learned. They trust their intuition and look for the "big picture." Do you prefer to focus on the information you take in (sensing) or do you like to interpret and add meaning (intuition)?
3. Thinking (T)/Feeling (F): Thinkers value fairness, and decide things impersonally based on objective criteria and logic. Feelers value harmony, and focus on human values and needs as they make decisions or judgments. When you make decisions, do you like to first look at logic and consistency (thinking) or at the people and special circumstances involved (feeling)?
4. Judging (J)/Perceiving (P): Judging people are decisive and tend to plan. They focus on completing tasks, take action quickly, and want to know the essentials. They develop plans and follow them, adhering to deadlines. Perceptive people are adaptable, spontaneous, and curious. They start many tasks, and often find it difficult to complete them. Deadlines are meant to be stretched. In dealing with the world, do you like to get things decided quickly (judging) or do you prefer to stay open to new information and options (perceiving)?
The possible combinations of these preferences result in sixteen personality types, which are identified by the four letters that represent one's tendencies on the four scales.
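As a purely illustrative aside, the short sketch below (Python; it is not an assessment tool and not part of the MBTI itself) simply enumerates how four two-pole scales combine into sixteen four-letter type codes.

```python
# A minimal sketch of how four preference pairs combine into sixteen
# four-letter type codes. It only enumerates labels; it measures nothing
# and implies nothing about the validity of the MBTI.
from itertools import product

scales = [("E", "I"),   # Extroversion / Introversion
          ("S", "N"),   # Sensing / Intuition
          ("T", "F"),   # Thinking / Feeling
          ("J", "P")]   # Judging / Perceiving

types = ["".join(combo) for combo in product(*scales)]
print(len(types))   # 16
print(types[:4])    # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```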
For example, ENTJ reflects extraversion, intuition, thinking, and judging. You can complete a brief Myers-Briggs type self-assessment online at http://www.humanmetrics.com/cgi-win/JTypes2.asp. Although the framework and Myers-Briggs instrument were not developed or intended to be used to identify personality profiles and label people, too often this is what is done with the results. This is problematic as it can lead to discrimination and poor career counseling. Employers should not hire, fire, or assign employees by personality type, because the MBTI is not even reliable at identifying a person’s type. When retested, even after intervals as short as five weeks, as many as 50 percent of people are classified into a different type. There is little support for the claim that the MBTI can justify job discrimination or be a reliable aid to someone seeking career guidance. Jung never intended for his work to be applied to a personality inventory. He noted, “My scheme of typology is only a scheme of orientation. There is such a factor as introversion, there is such a factor as extraversion. The classification of individuals means nothing, nothing at all. It is only the instrumentarium for the practical psychologist to explain, for instance, the husband to a wife or vice versa.” Nonetheless, the MBTI has become so popular that it is likely that you will encounter it during your career. It can be a fun team-building tool for illustrating some of the ways that people differ, but it should not be used in making organizational decisions including hiring and promotions. Other Important Personality Traits Besides these complex models of personality, several other specific personality traits are also likely to influence behavior in organizations. Among the most important are locus of control, self-efficacy, self-esteem, authoritarianism, Machiavellianism, tolerance for risk and ambiguity, Type A and Type B traits, and tendencies to bully. The role of the situation is also important. Locus of Control Locus of control is the extent to which people believe that their behavior has a real effect on what happens to them. Some people, for example, believe that if they work hard they will succeed. They may also believe that people who fail do so because they lack ability or motivation. People who believe that individuals are in control of their lives are said to have an internal locus of control. Other people think that fate, chance, luck, or other people’s behavior determines what happens to them. For example, an employee who fails to get a promotion may attribute that failure to a politically motivated boss or just bad luck, rather than to her or his own lack of skills or poor performance record. People who think that forces beyond their control dictate what happens to them are said to have an external locus of control. Table 3.2 summarizes the effects of locus of control on important organizational factors. This chapter’s Understand Yourself feature gives you the opportunity to evaluate your locus of control when it comes to work. 
Effects of Locus of Control on Organizational Outcomes

| Organizational Outcome | Internal versus External Locus of Control |
| --- | --- |
| Job satisfaction | Internals are generally more satisfied with their job, pay, supervisor, and coworkers. |
| Commitment | Internals are more committed and have lower absenteeism. |
| Job motivation | Internals have greater task motivation, job involvement, and self-confidence than do externals. |
| Job performance | Internals tend to have higher job performance than externals. |
| Career success | Internals tend to earn a higher salary than do externals. |
| Conflict and stress | Internals report lower role conflict, work-family conflict, burnout, and stress than do externals. |
| Social integration | Internals tend to be more socially integrated at work and report more favorable relationships with their supervisors. |

Source: See Ng, T.W.H., Sorensen, K.L., & Eby, L.T. (2006). Locus of Control at Work: A Meta-Analysis. Journal of Organizational Behavior, 27, 1057–1087.

Understand Yourself: Work Locus of Control
Using the scale below, write the number from 1 to 7 that reflects your agreement or disagreement with the statements below. When you are finished, follow the scoring instructions at the bottom to interpret your score.
1. A job is what you make of it.
2. On most jobs, people can pretty much accomplish whatever they set out to accomplish.
3. If you know what you want out of a job, you can find a job that gives it to you.
4. If employees are unhappy with a decision made by their boss, they should do something about it.
5. Getting the job you want is mostly a matter of luck.
6. Making money is primarily a matter of good fortune.
7. Most people are capable of doing their jobs well if they make the effort.
8. In order to get a really good job, you need to have family members or friends in high places.
9. Promotions are usually a matter of good fortune.
10. When it comes to landing a really good job, who you know is more important than what you know.
11. Promotions are given to employees who perform well on the job.
12. To make a lot of money you have to know the right people.
13. It takes a lot of luck to be an outstanding employee on most jobs.
14. People who perform their jobs well generally get rewarded for it.
15. Most employees have more influence on their supervisors than they think they do.
16. The main difference between people who make a lot of money and people who make a little money is luck.
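The official scoring instructions are not reproduced above, so the sketch below (Python) is only a hedged illustration of one common convention for 1–7 scales of this kind: the externally worded items (assumed here to be 5, 6, 8, 9, 10, 12, 13, and 16) are summed as-is and the internally worded items are reverse-scored, so that a higher total points toward a more external work locus of control. The item grouping and scoring direction are assumptions inferred from the item wording, not the textbook's key.

```python
# Hedged illustration only: one common way to score a 16-item, 1-7 Likert
# scale like the one above. The split between "external" and "internal"
# items and the scoring direction are assumptions based on item wording,
# not the textbook's official scoring instructions.

EXTERNAL_ITEMS = {5, 6, 8, 9, 10, 12, 13, 16}  # assumed externally worded items

def score_work_locus(responses: dict[int, int]) -> int:
    """responses maps item number (1-16) to a rating from 1 (disagree) to 7 (agree)."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 7:
            raise ValueError(f"Item {item}: rating must be between 1 and 7")
        # Reverse-score the internally worded items so every item points the same way.
        total += rating if item in EXTERNAL_ITEMS else 8 - rating
    return total  # Under these assumptions: 16 = most internal, 112 = most external.

# Example: strongly agreeing with every internal item and strongly
# disagreeing with every external item yields the most "internal" score.
example = {i: (1 if i in EXTERNAL_ITEMS else 7) for i in range(1, 17)}
print(score_work_locus(example))  # 16
```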
https://www.whizwriters.com/business-1079/
Welcome, Strawberry Point Stewards! On Native Land: ~Strawberry Point School is located on the ancestral lands of the Coast Miwok, Ohlone, and Southern Pomo Peoples~ These groups continue to be the first stewards of our surrounding animal/plant communities. What is a Steward? Stewards are people who care for the land they are on. - Because of our close proximity to natural spaces such as the Richardson Bay Wildlife Preserve, as well as our PTA-funded garden spaces/outdoor classroom, our students get the chance to learn from our planet during the school day. Each student will get to participate in a "garden and nature" class twice a month where they build a relationship to the land, to the food they grow, and to the animals who live here. - Being a good steward is like being a good friend. We must listen to and watch for the needs of the plants, animals, and water of our community, just like we would do for a friend. - Our goal is to grow a community of future stewards who involve our plant, animal, and water communities in their decision-making processes, for a healthier planet. Why Environmental Education? When we get to know something, we can begin to care about it: Our garden/nature classes and green team clubs offer students the opportunity to get to know our planet again. As we continue to face many challenges connected to a changing earth, we can prepare our future generations by fostering relationships back to our animal/plant/water communities. An investment in the curious mind: Investing a portion of our students' school time in environmental education is an investment in making time for discoveries, creativity, community building, and place-based learning. During their outdoor classes, students get the chance to investigate their own questions and follow their curiosity. Social Emotional Learning: During our classes students will work together and build relationships. The garden is also a therapeutic space for students to unwind. Whether students need to rest or to exhaust some energy after sitting at a desk, the garden is a great place for students to evaluate their own needs and to decide how they can make the most of their time outdoors (self-regulation). How did this program come about? This program, supported by the School District and the PTA, is a requirement for maintaining our playing fields. A little over a year ago, the school had no playing field. Here's a brief history: In the 1990s the playing field fell into disuse when the school temporarily closed, and it was classified as a degraded seasonal wetland. In order to re-open the playing field, a very complicated permitting process and a comprehensive mitigation program were required. This resulted in seven years of volunteer work by dozens of PTA members, and culminated in the restoration of the playing field and the creation of 0.7 acres of new tidal and seasonal wetlands, at a total project cost of just under $500,000. Funds were raised by the PTA and the community. In order to obtain approval from various agencies, the field and wetlands project included a comprehensive environmental education program for the students. It's not just a benefit to our students, it's a requirement for keeping our field, and the environmental education program must be supported by you, the parents. We also need parent volunteers in order to continue our wetland mitigation efforts, maintain the wetlands, and introduce native plant species. What does the environmental education program offer?
- Naturalist Educator: Melicca Gayle is on campus 20 hours a week conducting hands-on weekly sessions with each class. Funded by the PTA. - Field Activities: Coastal plant ecologist Peter Baye, our regular teachers, and parent volunteers conduct field classes in the wetlands and garden. Worm bins, composting, vegetable gardening, and plant and animal identification are only a few of the components of these outings. - The Green Team Recycling Program: This is a comprehensive recycling effort run by students and parent volunteers to recycle and compost all lunch materials. - Field Trips: Visits to the Richardson Bay Audubon Center, Marin Headlands, the Bay Model, and Ring Mountain are a few of the activities that are conducted off site. - The Lunch Bunch Salad Program: Students learn to grow (and eat!) their own food in the garden, and kindergarteners grow lettuce to create a schoolwide salad in the spring. What can I do to support this program? Everyone is encouraged to participate in our program and help us be stewards of our environment! Parent volunteers are needed to assist with any of the education programs listed above, to help maintain the wetlands, and to plant native species as part of the wetlands restoration. Parent volunteers are needed for each classroom to assist Kiea and the weekly environmental field class. Wetlands and garden work days are scheduled every few months to help maintain the garden, remove invasive species and improve our wetlands. Over the past five years, our wetlands have been planted with hundreds of seedlings to provide a diverse and thriving habitat for our local species.
https://www.mvschools.org/domain/288
August 40-Hour Domestic Violence Training. August 5, 2019 @ 9:00 am – 5:00 pm. An event every week that begins at 9:00 am on Monday and Wednesday, repeating until August 21, 2019. The 40-Hour Domestic Violence Training is the state-required certificate training for new domestic violence professionals, including volunteers and interns. It is a stepping-stone for individuals who will be working directly with victims and survivors, and is the necessary first step to becoming an Illinois Certified Domestic Violence Professional (ICDVP). The Network is an ICDVP-approved training and supervision site. The Network's 40-Hour training is considered a model training program. Evidence-based popular education is practiced throughout, and all sessions are led by a team of local experts who use strengths-based and trauma-informed approaches. The training begins by grounding professionals in the philosophical frameworks of the gender-based violence movement. Skill-building activities help participants engage not only with intervention strategies, but also with models of prevention and transformation. Participants will leave with a comprehensive understanding of domestic violence and the best practices to safely and appropriately respond to victims and survivors. Individuals must attend all required sessions in order to receive a Certificate of Completion. Topics covered include:
- Foundations & Dynamics of Domestic Violence
- IDVA & other Legal Concerns
- Empowerment Advocacy & Counseling
- Safety Planning & Basic Crisis Intervention
- Partner Abuse Intervention Strategies
- Impact of Domestic Violence on Children
- Social justice topics including: reproductive rights, housing instability, disability justice and criminalization
- Teen Dating Violence
- Mental Wellness & Substance Use Issues
- Heterosexism & Domestic Violence in LGBTQI+ Relationships
- Vicarious Trauma & Compassion Fatigue
- And much more!
https://the-network.org/event/august-40-hour-domestic-violence-training/2019-08-05/
Glaucoma refers to an eye condition that damages the optic nerve, which is critical to good vision. Research has shown that this optic nerve damage usually results from abnormally high pressure inside the eye. Statistics show that glaucoma is one of the leading causes of blindness for people over the age of 60. It can occur at a younger age, but that is less common. In most cases, the condition gives no warning signs or symptoms until an advanced stage, when you will start experiencing difficulties with your vision. Experts and optometrists usually advise regular eye checkups, including eye pressure measurement, so the disease can be caught at an early stage. The damage caused by glaucoma cannot be reversed once it occurs, but the condition can be managed when caught early. Experts advise that when glaucoma gets specialist attention at an early stage, it is possible to slow down the loss of vision or, in some cases, prevent further loss altogether. If you are diagnosed with glaucoma, you will require treatment to manage it for the rest of your life. Glaucoma Signs and Symptoms: This condition comes with various signs and symptoms, depending largely on its type and stage. Open-Angle Glaucoma: This type of glaucoma is accompanied by the development of patchy blind spots in the peripheral or central vision, often in both eyes. Another sign is tunnel vision, which appears at an advanced stage of the condition. Acute Angle-Closure Glaucoma: This type is characterized by severe headaches and frequent eye pain, and some people also experience nausea and vomiting. It also brings blurred vision, pronounced eye redness, and halos around lights. Acute angle-closure glaucoma is quite severe, and when it is left untreated it can cause blindness. Even with treatment, some people with glaucoma eventually become blind in at least one eye within about 20 years. The moment you start experiencing symptoms like those described above, do not waste time; book an appointment with an eye specialist right away. Causes: Beyond the damage caused to the optic nerve by pressure in the eye, glaucoma is partly genetic and runs in some families. Some people carry genes linked to the pressure buildup that eventually damages the optic nerve. Scientists have also identified certain risk factors that can lead to glaucoma. These include high pressure inside the eye, known as intraocular pressure, and age, since people over 60 are more susceptible to the condition. A family history of glaucoma is another recognized risk factor. Lastly, conditions such as diabetes, high blood pressure, and heart disease can increase your risk. Call (714) 338-1603 for an appointment!
https://mkhoangod.com/diagnose-and-treat-glaucoma/
AMELIA Pettitt rubbed shoulders with double Olympic champion Mo Farah after being earmarked as a running star of the future. The 17-year-old, from Haslington, was invited to the London Marathon Young Athlete Camp at St Mary’s University College in recognition of her stunning cross country season. There she met 10,000m and 5,000m London 2012 gold winner Farah and received tips from Olympians Ross Murray and Leo Manzano and world indoor medallist Andrew Osagie. Amelia said: “It’s unbelievable to think that I was among our country’s next generation of Olympic athletes. “We stayed and trained at the university and received talks on nutrition, health, training, psychology, race preparation and injury prevention which has really helped me. “It was a privilege to listen to the guest speakers and to meet Mo while he was training on the university’s track. The experience was truly inspirational.” Amelia, a member of Vale Royal AC, has enjoyed a remarkable rise to prominence in youth cross country circles. When she started in 2006, she struggled to make the county team, but under the leadership of Shaun McGrath and fellow coaches she has developed into an outstanding competitor on the national stage. This year she became Cheshire champion, finished fifth at the Northern Championships and notched 16th place at the National Championships. Amelia, a sixth-former at Sandbach High, added: “My next big competition is the National Road Relay Championships at Birmingham on October 13. “Unfortunately I’ll be too young for next year’s London Marathon but I plan to try a half-marathon before Christmas.”
https://www.crewechronicle.co.uk/sport/other-sport/running-amelia-pettitt-tipped-next-5599927
By Aaniqa Qayoom
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The butterfly effect is the idea that small things can have non-linear impacts on a complex system. The concept is pictured as a butterfly flapping its wings and causing a typhoon. Of course, a single act like the butterfly flapping its wings cannot cause a typhoon directly. Tom Breuer writes: simple systems, with few variables, can nonetheless show unpredictable and sometimes chaotic behavior. Libchaber conducted a series of seminal experiments. He created a small system in his lab to study convection (chaotic system behavior) in a cubic millimeter of helium. By gradually warming this up from the bottom, he could create a state of controlled turbulence. Even this tightly controlled environment displayed chaotic behavior: complex, unpredictable disorder that is paradoxically governed by ‘orderly’ rules. A seemingly stable system (such as a one-cubic-millimeter cell of helium) can be exposed to very small influences (like heating it up a mere 0.001 degree) and transform from orderly convection into wild chaos. Although such systems are governed by deterministic phenomena, we are nonetheless unable to predict how they will behave over time. Benjamin Franklin offered a poetic perspective in his variation of a proverb that’s been around since the 14th century in English and the 13th century in German, long before the identification of the butterfly effect:
For want of a nail the shoe was lost,
For want of a shoe the horse was lost,
For want of a horse the rider was lost,
For want of a rider the battle was lost,
For want of a battle the kingdom was lost,
And all for the want of a horseshoe nail.
The lack of one horseshoe nail could be inconsequential, or it could indirectly cause the loss of a war. There is no way to predict which outcome will occur. “If a single flap of a butterfly’s wing can be instrumental in generating a tornado, so also can all the previous and subsequent flaps of its wings, as can the flaps of the wings of the millions of other butterflies, not to mention the activities of innumerable more powerful creatures, including our own species.” “If a flap of a butterfly’s wing can be instrumental in generating a tornado, it can equally well be instrumental in preventing a tornado.” This is how we can think about the current situation arising from the pandemic. Extreme chaos and crisis may yet arise from all the loopholes we created in the initial stages of pandemic control; that is what we have seen happening in many nations, the US being one example. It was their recklessness or negligence in the initial stages of the outbreak that has got them to this point, and they still do not know where the end lies; the situation can take an even larger and more terrible form, staying consistent with the phenomenon of the butterfly effect. Conversely, a small change or a small step today towards controlling or managing the pandemic can prove to be gigantic in dealing with the issue in the times to come. Similarly, a butterfly effect might be witnessed in humanitarian crises, economic crises and threats to civilization. With many sustainability challenges like climate change, the effects might already be visible—for example, rising sea levels, changing weather, and an increase in wildfires.
However, the major and potentially more disastrous consequences may only become apparent in years to come. In contrast, the accelerated impacts of COVID-19 have had an immediate and profound effect. We believe this stark warning will serve as a wake-up call, driving a further shift towards sustainable investing and a renewed focus on sustainability risks and opportunities. This, in turn, should help accelerate the global recovery from the pandemic and build a more resilient society and economy for the future. Hope lies in the fact that whatever has happened, or is happening, has a way out. In other words, we can fix it: we do not have as many problems as everybody thinks; rather, we have many symptoms of the same problem. In this regard, one useful approach is holism. Holism is the practice of recognizing that the parts of any whole, or system (from man-made to natural systems), are in intimate interconnection, such that they cannot exist independently of the whole or be understood without reference to it. Thus, if we understand the right thing at the right time, we can avoid the wrong outcome later. The wings of the butterfly need to be clipped right now, so that it does not flap them and cause any more tornadoes. This analogy seems relevant, at least for now. The author is a research scholar at Kashmir University.
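For readers who would like to see the sensitivity described in this article expressed in numbers, here is a minimal illustrative sketch (Python) using the logistic map, a standard textbook toy model of chaos. It is not a simulation of Libchaber's helium cell, of the weather, or of the pandemic; it simply shows how a one-in-ten-million difference in starting conditions quickly grows until it dominates the outcome.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 40) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)  # the "butterfly flap": a one-in-ten-million nudge

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.7f}")
# The gap starts microscopically small and, within a few dozen steps,
# becomes as large as the values themselves.
```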
http://muslimmirror.com/eng/pandemic-and-the-butterfly-effect-why-didnt-we-realise-that/