Some of you might remember Susana from the Repub Convention. I was impressed with her then. Is there something NOT to like about this Governor? Am I missing something? http://nation.foxnews.com/susana-mar...nez-future-gop She came into office thinking she had to face a $250 million deficit in the state budget, but the outgoing Gov had been stonewalling. The deficit was actually $450 million, she discovered the day after the election (about 8% of the total $5.6 billion state budget). How did she handle that? She started cutting expenses. Where did she start? With appointees! Those employees who are NOT in civil service jobs ... like the employees who report directly to the Gov. The previous Gov exited with 340 of them. Susana has cut that down to 212. She got rid of the two chefs at the Gov's residence. She sold the jet the previous Gov had purchased, as well as four other planes. She limited cabinet secretary positions to a salary of $125,000 and put a 2-year moratorium on the purchase of brand-new cars. Meanwhile, she increased spending for education and Medicaid, with no new taxes ... and ended up with a surplus. And she did this with a Democratic house & senate. Sure could use some of this common sense in DC.
http://www.retrievertraining.net/forums/showthread.php?91925-Susana-Martinez-Gov-of-New-Mexico&p=1043855&viewfull=1
Jerry’s Apple Crisp

Ingredients:
- 5 c. cooking apples, sliced
- Sugar to taste (3 TBS)
- Butter, for dotting
- 1/4 tsp cinnamon
- 1/4 tsp nutmeg
- Juice of 1 lemon

Topping:
- 2/3 c. flour
- 2/3 c. sugar
- 1/4 c. margarine

Sprinkle the sugar, cinnamon, nutmeg, and lemon juice over the sliced apples and dot with butter. Mix the topping ingredients into crumbs with a fork and put on top of the sliced apple mixture. Bake at 350° F about 45 minutes or until brown. Serve warm with ice cream. Honey on top is good. Just a few swirls back and forth over the crumb topping.
https://panoramaorchards.com/jerrys-apple-crisp/
Fragility creates the conditions for violent intrastate conflict. Its consequences contribute to global disorder and mounting threats to U.S. national security. Significant impediments to effective action in fragile states persist today, even with many years of policy attention and an emerging consensus about its centrality in causing armed conflict. Policy-makers across the U.S. interagency have yet to arrive at a shared consciousness about the challenge of fragility, a shared understanding of the nature of the problem, and the types of capacities that can be comprehensively deployed to address it effectively. This essay describes recent advances in the development sector with regard to fragile states that suggest a way forward for stronger results. The steep challenges of tackling the complex causes of fragility tell us to be measured in our actions, but the experiences of recent progress and the urgency to alleviate human suffering tell us the time is right for greater ambition. In the wake of the two world wars, the world experienced significant progress: an increase in the number of democratic states, heartening advances toward eliminating global poverty, and significant decreases in violent conflict. But those positive trends have abruptly reversed in the last decade. Now, a new wave of civil wars, historic levels of migrants and refugees, global pandemics, and increases in violent extremism are fueling a sense of global disorder. One critical cause for this increase in civil wars and violence can be traced to the challenges of fragile states. Several decades of scholarship and experience have identified the strong correlation between state fragility and higher levels of violent conflict, extreme poverty, violent extremism, and vulnerability to the predations of regional and international powers. In an increasingly interconnected world, fragility poses a greater threat to national and international security than ever before. It also presents pressing moral challenges. However, we have yet to effectively organize either the collective resources of the U.S. government or international institutions to address this challenge. Doing so within U.S. government institutions will require a significant shift in the way U.S. defense, diplomatic, and development capabilities operate, moving away from deeply stovepiped bureaucracies that work without a shared framework to what General Stanley McChrystal has called a “shared consciousness” that enables more cohesive joint action.1 This means moving from vertical structures that inhibit effective action on complex, interrelated challenges to horizontal approaches that can more nimbly work to prevent the crises associated with states in which the state-society relationship has become dangerously frayed. As noted by Jean-Marie Guéhenno, in the search for effective means to prevent and end civil wars, “intelligent orchestration is the most important strategic variable, and … isolated policies, even well-executed ones, are unlikely to produce lasting results unless they are part of an overall coherent and consistent strategy.”2 Promising approaches for addressing fragility have emerged from the development sector, which is grappling with how to prevent significant investments from being overturned by repeated shocks from conflict and disaster. 
Development is arguably undergoing a paradigm shift, moving from narrowly focused investments designed to spur economic growth and isolated, sector-based programming, to a more systemic approach of managing risk and building resilience to the effects of disaster and conflict. However, unless development, diplomatic, and defense approaches align more consistently to adopt a shared understanding of how to address fragility, development efforts alone will not be successful. This essay explores the challenge of fragility and its prominent role in fueling “unpredictable instability” and increasing threats to regional, national, and international security; notes critical obstacles to applying these approaches more effectively; and identifies promising approaches to addressing fragility that are emerging from the development community. It concludes with both recommendations and a call to action that acknowledge that while anxiety about state fragility and its consequences may be rising, we have the opportunity to pursue new models for a positive future. Informed by recent conflict research, many policy-makers, especially development policy-makers, agree that nearly all outbreaks of violent intrastate conflict can be traced back to the absence or breakdown of the social contract between people and their government, a condition that policy-makers often refer to as fragility. By enabling violent intrastate conflict and other transnational threats, the consequences of fragility pose serious challenges to U.S. national security. The source of fragility can be an absence of state legitimacy in the eyes of a state's citizens, an absence of effectiveness, or both. Legitimacy is weakened wherever societal and governing institutions are not inclusive or responsive to all identity groups, including minority and marginalized populations. Legitimacy may also be undermined when the mechanisms by which populations can hold governing institutions accountable for performance are weak. Effectiveness is diminished when state-society interactions fail to produce adequate public goods to respond to citizens' needs for security, health, economic well-being, and social welfare. High levels of fragility–whether caused by illegitimacy, ineffectiveness, or both–create conditions for armed conflict and political instability. While policy-makers use fragility as a helpful concept for framing a complicated set of problems relating to the state-society relationship, conflict researchers do not test hypotheses about the singular influence of fragility on the risks of conflict. Fragility refers to multiple dimensions of the state-society relationship, which would typically be represented in a regression-based model for the outbreak of violent conflict with separate independent variables. However, we think that conflict researchers have successfully made the case that fragility enables the conditions for violent conflict, based on the accumulated evidence from many conflict studies that examine the influence of different structural attributes of the state-society relationship on combined conflict risks.
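To make the modeling point concrete, the sketch below shows in Python how the separate dimensions of fragility might enter a conflict-onset regression as distinct independent variables, drawing its covariate names from the structural attributes discussed in this essay. It is a minimal illustration on simulated data; the coefficients and dataset are hypothetical and do not come from this essay or from any study it cites.

```python
# Minimal sketch: fragility dimensions as separate covariates in a
# conflict-onset logistic regression. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # hypothetical country-year observations

# Legitimacy-related attributes (ethnic discrimination, factionalism) and
# effectiveness-related attributes (infant mortality, GDP per capita growth).
X = np.column_stack([
    rng.random(n),          # ethnic discrimination index, 0-1
    rng.random(n),          # factionalized political competition index, 0-1
    rng.normal(50, 20, n),  # infant mortality per 1,000 live births
    rng.normal(2, 3, n),    # GDP per capita growth, percent
])

# Simulated conflict-onset outcome; coefficients are arbitrary.
logit = -3 + 2 * X[:, 0] + 1.5 * X[:, 1] + 0.02 * X[:, 2] - 0.1 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Each fragility dimension enters as its own independent variable,
# rather than as a single composite "fragility" regressor.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())
```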
As Charles Call and Susanna Campbell note, the literature from the past decade is replete with studies presenting robust evidence on the relationship between structural attributes of society and future armed conflict.3 Many of those structural attributes are directly tied to elements of fragile state-society relationships, including variables that align with fragility in terms of low legitimacy, like the presence of factionalized zero-sum political competition, past ethnic conflict, ethnic discrimination, or weak justice systems. In other cases, there are variables that track with fragility in terms of poor effectiveness, such as high infant mortality rates, high youth unemployment rates, low GDP per capita growth rates, or high poverty rates. But with protection from two oceans, peaceful neighbors, and overwhelming military capabilities, is the United States immune to fragility? In today's world, people, states, and economies are deeply interconnected, and threats quickly cross boundaries and easily spread over large geographic distances. Fragility has already tested U.S. national security and will continue to do so if left unaddressed. Fragility is the common denominator running through some of the steepest security challenges the United States faces. A growing number of composite indices that directly measure state-society dysfunction have made it possible to track and rank key elements of fragility at the national and subnational levels.4 The combined insights from these efforts have clarified the nexus between fragility and multiple challenges to U.S. national security as well as international security: the top seven states responsible for refugees and migrants rank at the top of nearly every index on fragility;5 five of the top seven most fragile states also represent the top five sources of terrorist attacks;6 the fifty most fragile states on earth are home to 43 percent of the world's most impoverished people, or roughly three billion people;7 and a majority of the unprecedented sixty-five million people currently displaced by violent conflict around the globe are fleeing the forty ongoing internal conflicts worldwide.8 These conflicts have become increasingly internationalized, as fragile states in turmoil are more vulnerable to the predations of regional and international powers. Internationalized internal conflicts, like those unfolding in Syria, Iraq, Yemen, and Ukraine, were a rarity twenty-five years ago, accounting for approximately 3 percent of the world's conflicts. Today, internationalized internal conflicts account for one-third of all global conflicts, have contributed to the 500 percent increase in global battle deaths over the past ten years, and have pushed conflict deaths to a twenty-five-year high.9 Civil war in Syria alone has taken a staggering toll on human life; estimates range from 250,000 to 470,000 lost in the conflict since 2011. Further, these internationalized conflicts have become much harder to solve, providing proxy ground for external powers to manipulate fragile institutions, exercise their own interests, and flex their muscles, thereby raising concerns about the potential for renewed great-power conflicts playing out in highly vulnerable fragile states. These conflicts are lasting longer and costing more; various estimates of the costs of global conflict range from $9 to $13.6 trillion per year.10 Finally, these dynamics are playing out in a world that changes faster, is more complex, and is more inextricably connected than at any time in history. 
Fifteen billion devices were connected to the Internet in 2015; that is more than two devices for every person in the world and more than double the seven billion devices connected in 2011. However, this greater connectivity has cut both ways, and access is infamously being exploited by organizations like Al Qaeda and the Islamic State to spread radical and violent ideologies and recruit foreign fighters. Fragile states often lack the capacity to extend the reach of government over the entirety of their respective territories. As a result, illicit transnational forces (such as terrorist and organized criminal groups) often hold territory in fragile states.11 Transnational flows of illicit arms, drugs, and people are increasingly sophisticated and intertwined. And, driven out of their homes by violent conflict and poverty, historic levels of refugees and migrants have reached the shores of Europe, contributing to the political destabilization of key U.S. allies in Europe. Faced with the threat of pandemic disease, fragile states often lack the institutional capacity to respond quickly and effectively to control the spread of new outbreaks.12 With the experience of an outbreak of Ebola in three fragile states of West Africa in 2015 and the more recent outbreak of Zika in parts of Latin America, the specter of uncontrolled pandemics has never loomed larger. In the context of a highly interconnected world, fragility compounds the threat of the spread of pandemic disease to the United States. In the previous issue of Dædalus, Stewart Patrick argues that the threats emanating from fragile or failed states typically lack the potential to pose a significant or existential threat to the United States–we do not disagree.13 However, the many challenges emanating from fragile states do create circumstances that test U.S. national security interests. They impede the ability of the United States to attain foreign-policy objectives pertaining to the security of allies, the stability of key regions, and the promotion of a liberal international order that ultimately serves U.S. security interests. Whenever major civil wars or other types of crises erupt in fragile states, the deleterious results only steepen the ongoing uphill challenge for U.S. leadership to strengthen international security arrangements that serve to protect human rights and dignity for all global citizens. In addition to the security challenges presented by fragility, the moral challenge also looms large. In late 2017, four of the most fragile states on any index–Somalia, South Sudan, Yemen, and Northeastern Nigeria–were still teetering on the edge of famine, putting twenty million people at risk of severe malnutrition or starvation. From a moral standpoint, the human suffering engendered by dysfunctional interactions between governments and their people places a responsibility on the international community to respond. Whether fragility compounds the spread of a pandemic disease, contributes to famine, or enables the conditions for armed violence, the devastating toll on human life demands a remedy. In this respect, we wholeheartedly echo Patrick's highlighting of the moral dimension of addressing fragility. We would only emphasize that the moral challenge of fragility extends beyond the humanitarian response to crises. 
As these crises emerge from fragile settings not because of bad luck, but because of structural attributes, the moral imperative to address fragility extends to responding to its root causes, not just to the crises and human suffering that are often its consequence. Given the significant threats and costs of fragility, why has effective policy for supporting country transitions out of fragility remained elusive? On paper, Republican and Democratic administrations alike have made “weak,” “failed,” and “fragile” states a priority in their national security strategies.14 In the late 1990s, the Clinton administration recognized that states “unable to provide basic governance, safety and security, and opportunities for their populations” could potentially “[generate] internal conflict, mass migration, famine, epidemic diseases, environmental disasters, mass killings and aggression against neighboring states or ethnic groups–events which can threaten regional security and U.S. interests.”15 After 9/11, the Bush administration was primarily concerned about the exploitation of weak states by terrorists. And before the transfer of power to President Trump, the Obama administration's national security strategy stated: “fragile and conflict-affected states incubate and spawn infectious disease, illicit weapons and drug smugglers, and destabilizing refugee flows. Too often, failures in governance and endemic corruption hold back the potential of rising regions.”16 But the United States has not gotten measurably better at achieving its desired outcomes in these environments. In practice, fragility rarely becomes the focused area of effort, despite receiving significant attention in foundational strategic documents. Each situation is different, but there are some common reasons for this difficult reality. A crisis-driven focus. First, administrations inevitably become hostage to the latest terrible crisis and, by necessity, focus energy and resources on responding to rather than preventing crisis. The cost of this approach has become increasingly untenable, with an ever greater reliance on reactive tools, including military action, deployment of peacekeeping missions, and increasingly higher levels of humanitarian assistance. The result is a persistent focus on fragile states, but only after crisis hits, when action is more urgent and expensive, options are more limited, and problems are harder to solve. For example, the 2014 Ebola outbreak quickly spread from West Africa to the United States and resulted in Congress passing a significant package of postcrisis assistance intended to build greater, longer-term global health security in the region.17 These are, unfortunately, the kind of investments that rarely occur until after an attention-grabbing threat has landed. Bureaucratic impediments. Second, the vertical structures of government bureaucracies remain a significant impediment. The U.S. government is organized to divide security, development, and political action, each with its own frameworks, theories of change, and time horizons, precluding more effective joint approaches. A confusing web of authorities and areas of responsibility serve to ignite turf battles and create incentives for competition rather than collaboration. In addition, agencies are geographically organized in inconsistent ways, making it harder to have a shared analysis. 
The Department of Defense (DOD) is organized regionally, the Department of State is organized to operate via government-to-government interaction, and the United States Agency for International Development (USAID) has a hybrid approach, with both state-based and regional operations. Those differences, coupled with the different capabilities that each bring to bear in fragile environments, can lead the three D's (diplomacy, development, and defense) to analyze fragile contexts within different frameworks. The results are often cast in terms of the analyzing agency's set of capabilities, which can undermine the potential for coordinated action. Efforts are further hampered by congressional constraints that impose budget inflexibility through earmarks and competitive congressional committee jurisdictions. For example, in 2010, the State Department, DOD, and USAID brought a carefully crafted joint action plan for Iraq to Congress that required presentation to two different appropriations committees. The Armed Services Committee fully funded the DOD plan, while the State Department and USAID were allotted only a fraction of the necessary funding by their committee, invalidating the core assumptions and effectiveness of the plan.18 Lack of a shared consciousness. The most important challenge, however, is the absence of a “shared consciousness,” as termed by General Stanley McChrystal, among executive branch agencies about exactly why, what, how, and when to engage collectively in fragile states. The result is that each branch essentially operates with blinders on, limiting its ability to see the larger ecosystem of the challenge. A recent study by Stanford University, Chatham House, and the United States Institute of Peace underlined this challenge in a retrospective look at coalition efforts in Afghanistan over the past decade.19 The study found that there were essentially three separate, simultaneous lines of effort during this period: intelligence efforts, which sought information on Al Qaeda; military units, which fought the Taliban; and development actors that helped the Afghan state and society to rebuild. However, the methods employed by the intelligence and military actors served to exacerbate corruption and undermine the trust of the people in their state, undercutting the significant investments into rebuilding the state that were meant to strengthen the confidence of the Afghan people in the first place. This example is a stark illustration of how each effort was pursued with a different definition of the problem, with differing timelines and frameworks for actions and fundamentally different goals. Typically, the development community looks at longer-term change, while defense and diplomatic efforts address more immediate security and political problems. However, without more closely aligned goals, progress on the issues of fragility will remain limited, and, too frequently, short-term gains will result in longer-term crises. Meaningful progress will require a concerted effort to transform the business model of government, making it more proactive, adaptive, and integrated. A new approach requires a shared consciousness among the U.S. government interagency about how best to deploy the tools of U.S. foreign policy, and the horizontal effectiveness to work with one another: diplomacy and security must be achieved locally; development and security are political concerns; and diplomacy and development cannot be separated from security and stability. 
This type of cohesive framework for putting states back together after a major conflict was articulated in the Commission on Post-Conflict Reconstruction's 2003 report, Play to Win.20 The Commission stated that the priority areas requiring substantive local, U.S., and international community effort were security, justice and reconciliation, economic and social well-being, and governance and participation, and the report enumerated specific goals and tasks for short-, medium-, and long-term transition. The Commission also cautioned that a successful approach required mutually reinforcing and coherent action across all four pillars of engagement and that success would be jeopardized if security, justice, economic, or governance issues were addressed in isolation from one another. The Commission drew heavily upon the key lessons learned during the Balkans conflict and its aftermath. Unfortunately, by the time of its release in 2003, attention had already shifted to new imperatives imposed by the 9/11 attacks, underscoring the perennial problem of lessons lost as administrations and priorities change. In the very recent past, three important changes have emerged within the development sector that demonstrate the potential for overcoming some of the obstacles described above. These changes signal a paradigm shift in strategy away from more traditional humanitarian and development approaches to a more integrated approach for working in fragile states. Traditional development efforts have long focused on investing in productive economic growth and advancing key objectives in health, agriculture, or education with a steady determination to steer clear of politics. This approach was mirrored in the Millennium Development Goals (MDGs) announced by the UN in 2000. The MDGs comprised a fifteen-year plan for realizing eight global goals to end extreme poverty, including achieving universal primary education, promoting gender equality, reducing child mortality, improving maternal and child health, and developing a global partnership for development. Despite these ambitious objectives, the MDGs conspicuously avoided any of the challenges posed by conflict, inequity, or lack of human rights and justice. By their conclusion in 2015, poverty had become increasingly concentrated in the most fragile countries. This result did not come as a surprise to many. As early as the late 1990s, USAID sought to address the need to understand the political dynamics of development and instituted a pioneering initiative to include democracy promotion and, later, conflict analysis as part of its development agenda. USAID also released its Fragile States Strategy in early 2005. Then, in 2011, the World Bank released its landmark World Development Report: Conflict, Security and Development, calling for a different approach to help conflict-affected states emerge from cycles of conflict by investing in an integrated set of activities emphasizing citizen security, access to justice, and job creation. The report proposed an evidence-based framework that emphasized institutional legitimacy as fundamental to stability.
More recent reports in 2016 and 2017 on states of fragility from the Institute for Economics and Peace and the Organisation for Economic Co-operation and Development have advanced this work to develop further evidence for frameworks that address the challenge of fragility.21 Finally, both the UN and World Bank have recently adopted conflict prevention as a core priority, a commitment highlighted by the release in October 2017 of Pathways for Peace–an unprecedented joint report that presents a comprehensive overview of global evidence pertaining to conflict prevention.22 These reports were key in articulating the evidence base and developing the frameworks for addressing fragility. However, in the U.S. government, real change has remained hampered by chronic underfunding and a lack of full acceptance by many humanitarian and development professionals, especially those skittish about becoming too engaged with “politics.” Nevertheless, three key developments have helped catalyze an accelerated shift from more traditional relief and development approaches to a greater focus on fragility. Fragile states self-identify for the first time. First, in 2011, the International Dialogue on Peacebuilding and Statebuilding announced the New Deal for Engagement in Fragile States at the Fourth High-Level Forum on Aid Effectiveness (HLF-4) held in Busan, South Korea, the quadrennial gathering of international development actors to forge key agreements and chart global development progress. The New Deal–based on an agreement between self-identified fragile-state governments (the g7+), international donors, and civil society organizations and designed explicitly to create more inclusive, accountable systems of governance–called for new ways to invest financially and politically in fragile states.23 The New Deal's five peace-building and state-building goals build on a growing collective wisdom on the most effective ways to help fragile countries move toward greater peace: foster inclusive political settlements and conflict resolution; establish and strengthen people's security; address injustices and increase people's access to justice; generate employment and improve livelihoods; and manage revenue and build capacity for accountable and fair service delivery.24 Though the New Deal was not officially incorporated into the main platform of HLF-4, it was included as one of eight streams of activity, representing a significant shift in the mainstream development world. Unfortunately, support and engagement of the New Deal among G7 countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) has been limited to development agencies. To realize its full transformative potential, support for the New Deal will have to be expanded in both donor governments and fragile states to include security, political, and development departments and be championed by civil society with more extensive community engagement.25 The potential of the New Deal is further limited by the inability of most g7+ countries thus far to demonstrate proof of concept; instead, many member states have continued to descend into further conflict. However, it retains promise as a model for the kind of compact that could create greater coherence and effectiveness in providing a carrot-and-stick approach to those states trapped in fragility and conflict. Sustainable development goals prioritize inclusivity and accountability.
Second, as the MDGs approached their conclusion in 2015, UN member states began negotiating the Global Goals for the next fifteen years. The MDGs' track record demonstrated that the elimination of extreme poverty could not advance without tackling the messy dynamics of exclusion, conflict, and fragility, thus opening the door for change. Despite initial opposition from member states reluctant to introduce politics into the development agenda, the Sustainable Development Goals (SDGs) adopted in 2015 recognize that development investments cannot be sustained unless states and societies are inclusive, accountable, and just. Significantly, SDG Goal 16 seeks to promote peaceful and inclusive societies for sustainable development, provide access to justice for all, and build effective, accountable, and inclusive institutions at all levels. The g7+ countries were among those most active in advocating for this goal. In this way, the SDGs represent a deep shift in the collective mindset of development practitioners and have already ignited a new approach. Refugee crises fuel rethinking of humanitarian architecture. Third, just as the Global Goals were adopted, the refugee and migrant crisis of 2015 began breaking on the shores of Europe. The protracted conflicts of Africa and Afghanistan were suddenly overlaid with new wars in Syria, Libya, and Yemen, and a renewed conflict in Iraq. As both refugees and migrants overflowed beyond the saturated frontline states, they sought refuge and a better life in Europe. As Sarah Kenyon Lischer has detailed, the global humanitarian system strained to address these multiple crises simultaneously, revealing cracks in the long-standing system of safety nets and necessarily prompting a rethinking of the business model of humanitarian assistance.26 In May 2016, the first-ever World Humanitarian Summit was held in Istanbul, Turkey, where more than nine thousand humanitarian, development, and political participants and fifty-five heads of state from 173 countries convened to seek solutions to the human suffering created by acute violent conflict and historic displacement. Key agreements focused on breaking down the stovepipes between humanitarian and development activities, with a greater emphasis on understanding and addressing the drivers of violent conflict. As a result, the World Bank is opening new windows of concessionary funding for states like Jordan and Lebanon to better address the strain from the massive influx of refugees and to keep them, too, from collapsing into crisis. The World Bank's International Development Association's IDA18 is the largest replenishment of IDA resources by donors in the organization's fifty-six-year history, and has a bold, new focus on increasing attention and investment in fragile states, acknowledging the core development challenge they represent. The promising developments described above have helped codify the international community's collective wisdom both on what to do and, increasingly, on how to prevent fragility or mitigate state failure.
At least five important principles have emerged for guiding policy and programs in fragile states: 1) invest in sustainable security that entitles civilians to justice; 2) support legitimate governments, characterized by inclusive politics, accountable institutions, and reconciliation; 3) create conditions for inclusive, equitable economic growth; 4) enable locally led change by training and equipping local partners and investing in country systems; and 5) sustain efforts over time, since change can take a generation or more to reveal itself. The way forward for supporting fragile state transitions to resilience depends on putting these principles into practice. Many promising initiatives for addressing fragility were instituted in the Obama administration, both within USAID and across the interagency. For example, the U.S. government established and provided active support for values-based institutions that continue to provide normative support for more resilient democracies, including the Community of Democracies, Open Government Partnership, Inter-American Democratic Charter, and SDG Goal 16. And within the U.S. government, many efforts have focused on breaking down internal stovepipes and linking early warning with early action, such as the Atrocities Prevention Board and a new Center for Resilience within USAID. The State Department has sought to recognize the role of the private sector, faith leaders, and civil society in a world that is no longer simply the domain of diplomats. The National Security Council sought to establish a regular series of deputies' meetings to take up the issue of those fragile countries that warrant increased focus and attention. The Obama administration also negotiated critical new presidential directives to create greater interagency coherence, including Presidential Policy Directive 6 on Global Development and Presidential Policy Directive 23 on U.S. Security Sector Assistance Policy. In the first year of the Trump administration–with its national security strategy still forthcoming–it remains too early to assess how the current administration will put principles for fragile state engagement into practice. In 2016, the Carnegie Endowment for International Peace, the Center for New American Security, and the United States Institute of Peace convened a bipartisan study group composed of former U.S. government officials and private-sector and NGO leaders specifically to capture key lessons and make recommendations to the next administration. These recommendations offer a policy framework that takes the lessons of the last three administrations and builds on the collective wisdom of what to do based on a “four S approach”: strategic, selective, systemic, and sustained. Specific recommendations are organized into three compacts: one domestic, both within the administration and within Congress; one within the international community; and one within fragile states. Most important, that study acknowledges that the United States cannot tackle fragility everywhere, but can apply strategic and selective criteria to determine both priority areas for action, where it is most likely to have a positive impact, as well as specific efforts for enabling more systemic action that uses all the capabilities of the U.S. government over a sustained period. Colombia is an example of how this approach can result in success: Plan Colombia combined security, diplomatic, and development investments over a sustained period spanning three administrations. 
This approach helped transform a failed narcostate that threatened U.S. security into a partner with a rising economy and a new peace agreement ending fifty years of conflict. Fragility creates the conditions for violent intrastate conflict. The consequences contribute to global disorder and mounting threats to U.S. national security. This essay has described the significant impediments to effective action in fragile states, even with emerging consensus about fragility's centrality in causing armed conflict and many years of policy attention. Although we appreciate the scope of the challenges described here, we also think that recent advances in the development sector with regard to fragile states suggest a way forward for stronger results. A bold, aspirational vision for a future world order and a healthy dose of realism are not mutually exclusive. Rather, they are mutually reinforcing. We can be realistic about America's ability and will to help shape that world order without relinquishing our commitment to peace, stability, human rights, and effective governance based on the rule of law. We can also be realistic about the ability and will of fragile states to overcome profound obstacles to economic growth and inclusive governance without declaring such transformations impossible. The last seventy years have brought the world unparalleled peace and security. But there are critical challenges to address in the institutions that have developed over time, both within the United States and internationally. Our challenge is to reform these institutions to more effectively meet the challenge of fragility rather than yield to the temptation to jettison their fundamental structures in search of illusory simple solutions. The experiences of recent progress in tackling the challenges of fragile states coupled with our appreciation of the steep problems ahead tell us to be both ambitious and measured in our actions as we seek to lead a community of nations into the uncertain future. While existing institutional architecture may be poorly positioned to respond to today's complexity without significant reform, the international community has a history of delivering on ambition. Nearly seventy years ago, from the ashes of conflict, the world united to establish the United Nations and the Bretton Woods institutions: the International Monetary Fund and the World Bank. The United States and the international community have long proven their ability to do hard things. In that spirit, we close with a call to remain seized by the challenge to discover new ways to strengthen our understanding of and to compile evidence about fragile states. For example, more comprehensive evidence about the cost-effectiveness of long-term peace-building investments remains elusive. The policy case for expanded engagement in fragile states for the purpose of long-term conflict prevention would be strengthened considerably with compelling evidence about the relatively modest costs of prevention versus the immense costs of crisis response. The debate is not about whether peace-building investments cost less than humanitarian responses to crisis. Of course they do. The case that must be made is more complicated than that and depends on combining evidence about the results of foreign assistance with informed speculation about a counterfactual.
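One illustrative way to formalize that counterfactual comparison (our notation, not the authors') is as a simple expected-cost inequality: preventive investment pays off when its price is less than the crisis-response costs it is expected to avert,

$$C_{\text{prev}} + p_{1}\,C_{\text{crisis}} < p_{0}\,C_{\text{crisis}} \quad\Longleftrightarrow\quad C_{\text{prev}} < (p_{0} - p_{1})\,C_{\text{crisis}},$$

where $p_0$ is the estimated probability of a major outbreak of armed conflict absent preventive assistance (the counterfactual), $p_1$ the probability with it, and $C_{\text{crisis}}$ the expected cost of the international humanitarian or military response. The hard empirical work described below is, in effect, estimating $p_0 - p_1$ credibly.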
For any fragile state that has received significant foreign assistance to address the sources of fragility, what evidence exists that those investments actually reduced the likelihood of a future outbreak of major armed conflict? Second, what would have been the estimated costs of the international humanitarian or military response to such an outbreak? To advance more convincing arguments about the cost-effectiveness of more coherent policies and programs that address fragility, we urge researchers to innovate and build evidence around these claims. A recent survey of more than three hundred impact evaluations of programs designed to address state-society relations found significant gaps in the evidence base on the effectiveness of such programs.27 For example, rigorous evidence from program interventions tends to be concentrated in a small number of countries. Evaluations of programs designed to strengthen the transparency, accountability, or inclusiveness of political institutions are particularly rare. The study's authors found that in the countries with the largest populations facing the steepest challenges of governance, very little or no evidence exists about the effectiveness of development interventions. We argued earlier that we have two decades of evidence that fragility enables violent conflict and that the presence of citizen security, inclusive justice, and inclusive economies increases stability and peace. However, policy-makers across the U.S. interagency have yet to arrive at a shared consciousness about the challenge of fragility, a shared understanding of the nature of the problem, and the types of capacities that can be comprehensively deployed to address it effectively. That remains a steep ambition, but one that can be supported and accelerated with the development of better evidence about what works in fragile contexts. With an ever-improving understanding of how diplomatic, development, and defense actors can combine to tackle fragility, that ambition can be realized.

ENDNOTES

1. Stanley McChrystal quoted in Dan Schawbel, “General Stanley McChrystal: Leadership Lessons from Afghanistan,” Forbes, January 10, 2013, http://www.forbes.com/sites/danschawbel/2013/01/10/general-stanley-mcchrystal-leadership-lessons-from-afghanistan/#36d7d5d678ec.
2. Jean-Marie Guéhenno, “The United Nations & Civil War,” Dædalus 147 (1) (Winter 2018).
3. Charles T. Call and Susanna P. Campbell, “Is Prevention the Answer?” Dædalus 147 (1) (Winter 2018).
4. For examples of rankings that look at national fragility, see Susan E. Rice and Stewart Patrick, Index of State Weakness in the Developing World (Washington, D.C.: The Brookings Institution, 2008); Monty G. Marshall and Gabrielle Elzinga-Marshall, “Table 1: State Fragility Index and Matrix 2016” (Vienna, Va.: Center for Systemic Peace, 2016); David Carment, Simon Langlois-Bertrand, and Yiagadeesen Samy, Assessing State Fragility with a Focus on Climate Change and Refugees: A 2016 Country Indicators for Foreign Policy Report (Ottawa: Country Indicators for Foreign Policy, 2016); The Fund for Peace, “Fragile States Index,” http://fundforpeace.org/fsi/; Organisation for Economic Co-operation and Development, States of Fragility 2016: Understanding Violence (Paris: Organisation for Economic Co-operation and Development, 2016); and David A. Backer and Paul K. Huth, “Peace and Conflict Instability Ledger: Ranking States on Future Risks,” in Peace and Conflict 2016, ed. David A. Backer, Ravi Bhavnani, and Paul K. Huth (New York: Routledge, 2016). For examples of rankings that look at sub-national and regional fragility, see AT Kearney, https://www.atkearney.com/; Ibrahim Foundation, http://mo.ibrahim.foundation/; Institute for Economics and Peace, http://economicsandpeace.org/; and Institute for the Study of War, http://www.understandingwar.org/.
5. In order: Syrian Arab Republic, Afghanistan, Somalia, South Sudan, Sudan, the Democratic Republic of the Congo, and the Central African Republic. United Nations High Commissioner for Refugees, Global Trends: Forced Displacement in 2015 (Geneva: United Nations High Commissioner for Refugees, 2016), http://www.unhcr.org/576408cd7.
6. Institute for Economics and Peace, Global Terrorism Index 2015 (Sydney: Institute for Economics and Peace, 2015), http://economicsandpeace.org/wp-content/uploads/2015/11/Global-Terrorism-Index-2015.pdf.
7. As identified by the Organisation for Economic Co-operation and Development, Development Assistance Committee, States of Fragility 2015: Meeting Post-2015 Ambitions, rev. ed. (Paris: Organisation for Economic Co-operation and Development, 2015).
8. See United Nations High Commissioner for Refugees, Global Trends.
9. Institute for Economics and Peace, Global Peace Index 2016 (Sydney: Institute for Economics and Peace, 2016), http://reliefweb.int/sites/reliefweb.int/files/resources/GPI%202016%20Report_2.pdf.
10. Ibid.
11. Center for American Progress, National Security and International Policy Team, “State Legitimacy, Fragile States, and U.S. National Security,” Center for American Progress, September 12, 2016, https://www.americanprogress.org/issues/security/reports/2016/09/12/143789/state-legitimacy-fragile-states-and-u-s-national-security/.
12. See Paul H. Wise and Michele Barry, “Civil War & the Global Threat of Pandemics,” Dædalus 146 (4) (Fall 2017).
13. Stewart Patrick, “Civil Wars & Transnational Threats: Mapping the Terrain, Assessing the Links,” Dædalus 146 (4) (Fall 2017).
14. The Clinton administration's 2000 National Security Strategy (NSS) lists failed states as one of six “threats to U.S. interests.” The White House, A National Security Strategy for a New Century (Washington, D.C.: The White House, 1999). The Bush administration's 2002 NSS stated: “The events of September 11, 2001, taught us that weak states, like Afghanistan, can pose as great a danger to national interests as strong states.” The White House, The National Security Strategy of the United States of America (Washington, D.C.: The White House, 2002). And the 2006 Bush administration's NSS stated: “Weak and impoverished states and ungoverned areas are not only a threat to their people and a burden on regional economies, but are also susceptible to exploitation by terrorists, tyrants, and international criminals.” The White House, The National Security Strategy 2006 (Washington, D.C.: The White House, 2006). Lastly, the Obama administration's 2015 strategy stated: “we will prioritize efforts that address the top strategic risks to our interests … [including] significant security consequences associated with weak or failing states (including mass atrocities, regional spillover, and transnational organized crime).” The White House, National Security Strategy 2015 (Washington, D.C.: The White House, 2015).
15. The White House, A National Security Strategy for a New Century.
16. The White House, National Security Strategy 2015.
17. The White House Office of the Press Secretary, “Fact Sheet: The Global Health Security Agenda,” July 28, 2015, https://www.whitehouse.gov/the-press-office/2015/07/28/fact-sheet-global-health-security-agenda.
18. Curt Tarnoff, Iraq: Reconstruction Assistance (Washington, D.C.: Congressional Research Service, 2009), http://www.fas.org/sgp/crs/mideast/RL31833.pdf.
19. Scott Smith and Colin Cookman, eds., State Strengthening in Afghanistan: Lessons Learned, 2001–14 (Washington, D.C.: United States Institute of Peace, 2016), http://www.usip.org/sites/default/files/PW116-State-Strengthening-in-Afghanistan-Lessons-Learned-2001-14_0.pdf.
20. The Center for Strategic and International Studies and The Association of the United States Army, Play to Win: Final Report of the Bi-Partisan Commission on Post-Conflict Reconstruction (Washington, D.C., and Arlington, Va.: The Center for Strategic and International Studies and The Association of the United States Army, 2003), https://csis-prod.s3.amazonaws.com/s3fs-public/legacy_files/files/media/csis/pubs/playtowin.pdf.
21. Institute for Economics and Peace, 2017 Global Peace Index (Sydney: Institute for Economics and Peace, 2017), http://visionofhumanity.org/app/uploads/2017/06/GPI17-Report.pdf; and Organisation for Economic Co-operation and Development, States of Fragility 2016: Understanding Violence (Paris: Organisation for Economic Co-operation and Development, 2016), http://www.oecd.org/dac/states-of-fragility-2016-9789264267213-en.htm.
22. World Bank Group and United Nations, Pathways for Peace: Inclusive Approaches to Preventing Violent Conflict (Washington, D.C.: World Bank Group and The United Nations, 2017), https://openknowledge.worldbank.org/bitstream/handle/10986/28337/211162mm.pdf?sequence=2&isAllowed=y.
23. The International Dialogue is a unique multistakeholder partnership between the g7+ group of countries affected by conflict and fragility, donors from OECD countries, and civil society organizations. The New Deal outlined new modes of operation for donor nations, including committing to locally owned and led development priorities, and making planning processes more inclusive in the target countries. This new method of working was designed to promote five foundational Peace-Building and State-Building Goals. International Dialogue on Peacebuilding and Statebuilding, http://www.pbsbdialogue.org/en/.
24. For more information on the New Deal for Engagement in Fragile States, see New Deal: Building Peaceful States, http://www.newdeal4peace.org/.
25. For an expanded discussion of the New Deal, see William J. Burns, Michèle A. Flournoy, and Nancy E. Lindborg, U.S. Leadership and The Challenge of State Fragility (Washington, D.C.: Carnegie Endowment for International Peace, Center for a New American Security, and United States Institute of Peace, 2016); and Sarah Hearn, Independent Review of the New Deal for Engagement in Fragile States (New York: New York University, Center on International Cooperation, 2016).
26. Sarah Kenyon Lischer, “The Global Refugee Crisis: Regional Destabilization & Humanitarian Protection,” Dædalus 146 (4) (Fall 2017).
27. Daniel Phillips, Chris Coffey, Emma Gallagher, et al., State-Society Relations in Low- and Middle-Income Countries: An Evidence Gap Map (London: International Initiative for Impact Evaluation, 2017).
https://direct.mit.edu/daed/article/147/1/158/27189/In-Defense-of-Ambition-Building-Peaceful-amp
How much food should my baby eat and how often? Obviously you want your baby to eat nutritious food, but how much food should you prepare for your baby? And how frequently should they eat? When planning an eating routine for your baby, keep these six tips in mind:
- Try to create an eating routine: Babies thrive on routine. Try to plan what times of day your baby will eat. Encourage your baby to sit in their high chair and create a quiet, relaxed environment for your baby to focus on their meal instead of eating on the run. This will help to create good habits for the future and help your baby to eat nutritious foods instead of filling up on snacks.
- Your baby’s routine will continue to change as they grow: As babies grow, their stomach size and capacity increase. This means that they’ll be able to eat larger quantities at each meal and won’t need to eat as many meals throughout the day. So, while your baby may have started with a breastmilk or formula feed every three hours (8-10 feeds throughout the day), by adulthood most people need only three meals per day.
- Every baby is different: The other babies in your playgroup may be eating six times each day, but if eight times per day, or five times per day, works better for your family’s routine, that’s ok! Find a routine that works for your baby and your family.
- You choose the foods and meal times, your baby chooses the amount: It is the parent’s or carer’s responsibility to determine which foods are served, and your baby’s responsibility to determine how much of each food is consumed. We all know someone who was forced to eat everything on their plate; this can often set up lifelong overeating habits. Provide your baby with a range of nutritious choices and allow your baby to determine their portion size.
- Your baby’s appetite will fluctuate from meal to meal: Just as we don’t feel like eating much at the meal after a large one (think Christmas lunch), your baby’s appetite, and consequently their portion sizes, will fluctuate from meal to meal too. Your baby’s appetite is also affected by growth spurts and activity levels, so expect their portion sizes to vary. At times they may even refuse a few meals, and as long as their growth is tracking well, that’s ok.
- After the age of 9 months, offer food first: Prior to nine months of age, it is recommended that you give your baby their milk first, then offer them some food. In terms of meal planning, you simply provide your baby with some nutritious foods to taste after each feed. Once they reach 9 months, offer them their food first, then top up with a milk feed.

During the first year of life, when your baby’s portion sizes are small, breastmilk or formula will be your baby’s primary source of nutrition. As their stomach capacity grows and their milk intake subsides, their portion sizes will slowly increase.
As a general guide, aim for the following portion sizes:

| Core Food Group | Serves Per Day | Example Portion Sizes |
| --- | --- | --- |
| Grains | Approximately 4 | ½-1 slice bread; ½ cup cooked porridge; 1½ wheat biscuits; 2-5 tbsp cooked pasta |
| Vegetables | Approximately 5 | 2 tbsp cooked peas; ½ cooked carrot; 3 florets broccoli; ¼ cup mashed sweet potato |
| Fruit | Approximately 2 | 18 raspberries; 2 tbsp canned fruit; 2 strawberries; 8 cubes melon |
| Meat and alternatives | Approximately 2 | 1 egg; 1 lamb chop; 85g tofu; 5 cubes chicken |
| Dairy | 1-3 serves depending on your baby’s age | 20g hard cheese; ½ cup milk; 100g yoghurt |

Still confused? If you’re still confused about introducing solids to your baby, book a consultation with a paediatric dietitian.
https://littleetoile.com/2022/02/09/how-much-food-should-my-baby-eat-and-how-often/
I help mentor some really young, bright kids in mathematics. We were looking at geometric properties of various shapes, and one of the kids noted that the surface area of a sphere $S = 4\pi r^2$ contains the equation for the area of a circle $A = \pi r^2$. She was a bit confused why the factor of $4$ was mysteriously there. I told her I'd get back to her. I know how to prove the formula using calculus, but I spent a long time trying to find an elementary way of doing it. Does anyone know of a way of proving the first equation using almost no advanced mathematics$^1$? This seems unlikely, so as a separate question, does anyone know of a good visualization to show the relation between $S$ and $A$? The naive approach of taking four circles and showing you can "place them" on a sphere is clearly wrong (you can't just place four circles on a sphere), but I'm not sure what the alternative is. $^1$These kids have a working knowledge of variable manipulation, basic geometry, and I guess combinatorics?
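One classical low-machinery route (not part of the original question, added here as a sketch) is Archimedes' hat-box theorem: projecting the sphere horizontally outward onto its circumscribing cylinder preserves area, because the projection stretches each small patch horizontally by exactly the factor by which the patch's tilt compresses it vertically. Unrolling the cylinder's side then gives a $2r \times 2\pi r$ rectangle, so

$$S = \underbrace{2\pi r}_{\text{circumference}} \cdot \underbrace{2r}_{\text{height}} = 4\pi r^2 = 4A.$$

The visualization practically builds itself: wrap a paper band around a ball's "equator" of the same height as the ball, and argue that peeling any horizontal slice of the sphere outward onto the band neither gains nor loses area. Whether the cancellation argument counts as "almost no advanced mathematics" is a judgment call, but it needs only similar triangles, not calculus.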
https://math.stackexchange.com/questions/1833867/visualization-of-surface-area-of-a-sphere
Harrels in Arkansas My research so far has been able to put some of the pieces of the Harrell puzzle together, but I need your help to make it more complete. PUZZLE NUMBER ONE. My earliest ancestor so far is Eli Harrell. I found him in the 1850 Census for Poinsett Co., Mitchell Twp, Ark. as being born in NC and 72 years old. This would mark his birth as 1777 or 1778. His wife, Cherry, or perhaps Cherey, is also listed in the same census as being 65 years old and born in Va. This would make her birth year 1784 or 1785. I do not know her maiden name yet. As we are used to by now, there are four variant spellings: Harrell, Harrill, Harrel, Harell on the same page in the 1850 Census for Poinsett Co. After 1880 most of the family seem to have settled on the "Harrell" spelling. I wish I could find out more about Eli in NC. Some colleagues think he came from Edgecombe, Chowan or Bertie counties. A quick survey has not lined up my Eli with any I have noticed in NC resources. Who were Eli's parents? How about Cherry's? In the 1860 Census for Independence Co., Black River Twp, Eli is not listed but Cherry is still living with her son, John G. However, she is listed as being 70 years old instead of 75, if the 1850 Census number is correct. Have you run across them in your research? Was Eli a shortened version of Elisha? I have not found their grave sites yet. Have you seen them listed anywhere? PUZZLE NUMBER TWO. Eli (#1) and Cherry were living in the same household as his son, John G, in the 1850 Census for Poinsett Co., Mitchell Twp. John G is listed as being born in NC and was 23 years old, making his birth year 1826 or 1827. His wife is Martha Rebecca, 19 years old and born in Alabama. (John G is listed as J.G. in the 1850 Census and as John G. in the 1860 Census for Independence Co., Black River Twp, Ark.) (Martha is listed as Martha in the 1850, 1860 and 1900 Census, as M.R. in the 1870 Census and as Martha R. in the 1880 Census. Also, in the 1880 Census her birth state is listed as Tennessee instead of Alabama.) The question is how many children did Eli and Cherry have? Just John G? Because the same 1850 Census for Poinsett Co., Mitchell Twp, Ark. lists other Harrell families living as neighbors of John G, Martha, Eli and Cherry, it may be that David and Eli (#2) were also brothers of John G. There is some evidence that Milton P (or perhaps "I") is another brother, as well as Jethro Harrell, but more on him below. It seems evident that the Harrells chose wives as they headed west to Arkansas. Eli (#1) married Cherry from Virginia, David married Elizabeth from NC, Eli (#2) married Martha C from Georgia and John G married Martha in either Alabama or Tennessee. Do you have any evidence of these marriages? PUZZLE NUMBER THREE. There is a mystery concerning my great grandfather, Eli L Harrell, the son of John G and Martha R Harrell. In the 1880 Census for Cross Co., Ark, he is listed as being in Sharp Co. (a county near Poinsett Co.), being 22 years old, living with Robert Wooldridge and his wife, Mary P Crouch (or Couch). Eli is designated as a nephew. But I cannot figure out if Eli is the direct nephew of Robert or of Mary P Crouch. Mary P came from Alabama as did Martha R (according to the 1850 Census for Poinsett Co.). Does anyone have any information that will help me solve this mystery? PUZZLE NUMBER FOUR. Nan Harrell Snider wrote an article about Joseph Wyatt Harrell in the book entitled "History and Families of Poinsett County, Arkansas". In this article Mrs.
Snider wrote that Jethro (Jefro) Harrell was Joseph Wyatt's father and was born about 1815 in the Carolinas. Mrs. Snider continues to say that Joseph Wyatt Harrell lived with David Harrell (his uncle, we presume) in 1865 (according to the Poinsett Co. tax records). In the 1870 Census for Cross Co., Joseph Wyatt is living with another uncle, Eli (#2). If this is correct, then it means that if Eli (#2), the son of Eli and Cherry, was Joseph Wyatt's uncle, then Jethro (his father) must also be a son of Eli and Cherry and a brother to David, Eli (#2) and John G. Does that make any sense? Can anyone help me to verify this claim?

PUZZLE NUMBER FIVE. Mrs. Snider, who has done some excellent work on the Harrell family and who is the author of "A Proud Heritage: The Harrell Family", mentions in a message board e-mail that Jethro Harrell was a half brother to John Harrell and a cousin to a Tom Harrell. Does anyone know anything about this John and this Tom? Has anyone made the connection between these families?

Well, I have quite a bit more about the third, fourth and following generations of Eli and Cherry Harrell, but I will develop these lines a bit more before sharing them with you. I am hoping some of you will help me fill in some of the missing blanks for the five puzzles I have presented to you.
https://www.genealogy.com/forum/surnames/topics/harrel/151/
Mimicking the brain, in silicon. Reported by Anne Trafton, MIT News Office, 15 Nov. 2011.

CAMBRIDGE, Mass. — For decades, scientists have dreamed of building computer systems that could replicate the human brain’s talent for learning new tasks. MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain’s neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory.

With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. The researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and could also be used in neural prosthetic devices such as artificial retinas, says Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology. Poon is the senior author of a paper describing the chip in the Proceedings of the National Academy of Sciences the week of Nov. 14. Guy Rachmuth, a former postdoc in Poon’s lab, is lead author of the paper. Other authors are Mark Bear, the Picower Professor of Neuroscience at MIT, and Harel Shouval of the University of Texas Medical School.

Modeling synapses

There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels. Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential.

All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively.

The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. “We can tweak the parameters of the circuit to match specific ion channels,” Poon says. “We now have a way to capture each and every ionic process that’s going on in a neuron.”

Previously, researchers had built circuits that could simulate the firing of an action potential, but not all of the circumstances that produce the potentials. “If you really want to mimic brain function realistically, you have to do more than just spiking. You have to capture the intracellular processes that are ion channel-based,” Poon says.

The new chip represents a “significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto CMOS [complementary metal-oxide-semiconductor] chips,” says Dean Buonomano, a professor of neurobiology at the University of California at Los Angeles, adding that “the level of biological realism is impressive.”
The MIT researchers plan to use their chip to build systems to model specific neural functions, such as the visual processing system. Such systems could be much faster than digital computers. Even on high-capacity computer systems, it takes hours or days to simulate a simple brain circuit. With the analog chip system, the simulation is even faster than the biological system itself. Another potential application is building chips that can interface with biological systems. This could be useful in enabling communication between neural prosthetic devices such as artificial retinas and the brain. Further down the road, these chips could also become building blocks for artificial intelligence devices, Poon says.

Debate resolved

The MIT researchers have already used their chip to propose a resolution to a longstanding debate over how LTD occurs. One theory holds that LTD and LTP depend on the frequency of action potentials stimulated in the postsynaptic cell, while a more recent theory suggests that they depend on the timing of the action potentials’ arrival at the synapse. Both require the involvement of ion channels known as NMDA receptors, which detect postsynaptic activation. Recently, it has been theorized that both models could be unified if there were a second type of receptor involved in detecting that activity. One candidate for that second receptor is the endo-cannabinoid receptor.

Endo-cannabinoids, similar in structure to marijuana, are produced in the brain and are involved in many functions, including appetite, pain sensation and memory. Some neuroscientists had theorized that endo-cannabinoids produced in the postsynaptic cell are released into the synapse, where they activate presynaptic endo-cannabinoid receptors. If NMDA receptors are active at the same time, LTD occurs.

When the researchers included on their chip transistors that model endo-cannabinoid receptors, they were able to accurately simulate both LTD and LTP. Although previous experiments supported this theory, until now, “nobody had put all this together and demonstrated computationally that indeed this works, and this is how it works,” Poon says.
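As a thought experiment only, the coincidence rule described above can be written down in a few lines of code. This JavaScript sketch is not the chip's circuit or the paper's model; the scaling factors and the LTP branch are illustrative assumptions:

function updateSynapticWeight(weight, nmdaActive, ecbReceptorActive) {
  // LTD: presynaptic endo-cannabinoid receptors and NMDA receptors
  // are active at the same time (the coincidence the article describes)
  if (nmdaActive && ecbReceptorActive) {
    return weight * 0.9; // weaken the synapse
  }
  // LTP branch: NMDA activation without the endo-cannabinoid signal
  // (an illustrative assumption, not a detail given in the article)
  if (nmdaActive) {
    return weight * 1.1; // strengthen the synapse
  }
  return weight; // no lasting change
}

var w = 1.0;
w = updateSynapticWeight(w, true, true);  // LTD: w is now 0.9
w = updateSynapticWeight(w, true, false); // LTP: w is now 0.99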
http://gsirak.ee.duth.gr/index.php/archives/1137
At 22:34 16/11/98 +0200, you wrote:

Dear Jewishgenners,

In my opinion, a tree is prepared primarily for the person who is creating it; therefore, opinions of others are not salient in the decisions you come to. Religion is a matter of BELIEF, and no amount of talking can - or must - change that. In deference to belief, I have adopted a single letter in the "baptised" line of the non-Jewish database software to denote that a branch has "married out" - so that the fact is recorded and there, but definitely INCLUDE all people in the tree. One day, maybe, it will be necessary information for genetics study. If that happens, then those with narrower views may well be happy to have the knowledge of who is attached. Until then, do not try to persuade them.

David Lewin
London
-------
David in London and Margret in Munich run a small Search & Reunite office attempting to help the many who suspect that, despite the passage of so many years since World War II, someone may still exist "out there"
https://groups.jewishgen.org/g/main/message/94711
Q: Convert DECIMAL hours to Hours:Minutes SQL SELECT (without Functions and Declare)

I have a decimal field with, for example, the value 0.17 (it is 0 hours and 10 minutes, not 17, because 17*60/100 = 10.2 minutes). I would like to convert it solely to hours and minutes in this format: 0:10. I tried a select with SUBSTR but I am losing the 0 value in that case. I would like only a pure SELECT without declare and functions!!! Please, I cannot use those! I saw some examples also with TO_DATE functions but this is not what I need because it just converts without calculation. Thanks in advance for help.

A: Being hours, I prefer 00:10. You can do this using LPAD():

concat(concat(lpad(floor(value), 2, '0'), ':'),
       lpad(floor((value - floor(value)) * 60), 2, '0'))

Of course, you can just pad to one character for the hours, if you only want one "0".
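A quick usage check of the expression, assuming an Oracle-style dialect (the question mentions TO_DATE and SUBSTR); the inline view t and its column val are invented for the demo:

SELECT concat(concat(lpad(floor(t.val), 2, '0'), ':'),
              lpad(floor((t.val - floor(t.val)) * 60), 2, '0')) AS hhmm
  FROM (SELECT 0.17 AS val FROM dual
        UNION ALL
        SELECT 2.50 FROM dual) t;

-- returns '00:10' for 0.17 and '02:30' for 2.50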
Acromegaly is a rare but very dangerous condition that affects cats of all ages, breeds and genders. It is more common in certain breeds than others, but this disease is generally quite uncommon. The disease is typically caused by an abnormal growth or tumor that affects the pituitary gland. Because acromegaly is almost always fatal if left untreated, and because complete removal of the abnormal growth which causes the condition is often dangerous or difficult to do, this condition must be monitored closely and treated with medication as quickly as possible. Read on for a brief overview of this unusual condition in cats and how it is diagnosed and treated.

Acromegaly Overview

Acromegaly is a condition that comes about when something affects your cat's pituitary gland and causes it to secrete more growth hormones than are necessary. This is most often caused by an abnormal growth or tumor in the pituitary gland, and the hormones involved may vary somewhat. The condition is oftentimes difficult to diagnose, as the symptoms are similar to many other growth diseases and even other conditions like hyperadrenocorticism (also known as Cushing's Disease).

Symptoms of Acromegaly in Cats

The primary symptoms of this condition are physical. Because the overactive pituitary gland encourages more growth than is necessary in your cat, you'll notice that certain parts of his body continue to grow when they do not otherwise need to. It's not uncommon, therefore, to see the following symptoms associated with acromegaly:

- Oversized head
- Oversized paws
- Protruding jaw

Because the disease oftentimes strikes cats while they're still growing, and because most smaller cats and kittens have paws that are generally larger in proportion to their body size than cats that are fully developed, it can be difficult to determine whether your pet has this condition based on a physical examination alone.

Cats with acromegaly oftentimes also present symptoms that are similar to those of diabetes. You may notice that your cat seems to be insulin resistant and that his blood sugar level varies significantly depending upon the time of day and whether he's had food or not. This is a byproduct of the overactive pituitary gland, but can easily be mistaken for diabetes.

Diagnosing and Treating Cats with Acromegaly

Acromegaly will typically be diagnosed with both a physical examination of your pet's symptoms and a series of blood tests to determine the levels of certain pituitary hormones in his blood. Treating this condition will require that you provide your pet with drugs that help to slow the production of growth hormones. Your vet will determine the exact hormones that are being overproduced and will help to develop a regimen of drugs to combat these hormones. These drug regimens oftentimes include octreotide, also known as Sandostatin, which is one of the more effective anti-growth drugs available for this treatment.

Ask your vet for any additional information about acromegaly or if your cat is at particular risk for developing this condition.
https://www.vetinfo.com/acromegaly-in-cats.html
Q: Read text file on server into array javascript

Here's the situation. On the apache box there's a text file that contains numbers with new lines, eg:

25
34
76
etc....

What I'm wanting to do is grab the values from that file and use them to "set" some sliders I have which are partially yoinked from http://webfx.eae.net/dhtml/slider/slider.html Once done I'll have a "commit" button which writes out the altered values to that text file. But I'm getting stuck at the bit where you read from the text file on the apache box which this runs in. Everything I've read seems to refer to file uploading via an API, but this isn't what I want, as the file is server side. I guess I could use php, but I'm not up on that either (and especially not on how to move variables between the two). Any ideas? If you need clarification I can give it to you.

A: Just a simple ajax code!!!

var xmlhttp;
if (window.XMLHttpRequest) {
  // code for IE7+, Firefox, Chrome, Opera, Safari
  xmlhttp = new XMLHttpRequest();
} else {
  // code for IE6, IE5
  xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function() {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
    document.getElementById("mytextfiledic").innerHTML = xmlhttp.responseText;
  }
};
xmlhttp.open("GET", "mytextfile.txt", true);
xmlhttp.send();
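The snippet above only dumps the raw text into the page. To get the numbers into an array, as the question asks, the responseText can be split on newlines inside the same readyState handler. A minimal sketch building on the answer's xmlhttp object (the values array and the console.log call are just for illustration):

if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
  var lines = xmlhttp.responseText.split(/\r?\n/); // one number per line
  var values = [];
  for (var i = 0; i < lines.length; i++) {
    if (lines[i].replace(/\s/g, "") !== "") {  // skip blank lines
      values.push(parseInt(lines[i], 10));     // e.g. [25, 34, 76]
    }
  }
  console.log(values); // ready to hand to the slider setup
}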
Charlotte, NC recently passed a strategic mobility plan designed to improve public transit options, reduce driving, eliminate traffic deaths, and increase economic mobility for public transit users. The Plan intends to support the goals and objectives of the Charlotte Future 2040 Comprehensive Plan; integrate existing transportation plans and policies into a single Strategic Mobility Plan; establish new goals for prioritizing transportation investments and measuring progress; modernize transportation policies; and make the city more resilient.

A major goal of the plan is a 50/50 mode share by the year 2040, where 50% of all travel will happen through walking, cycling, or public transit. The city has come up with a number of ways to move towards this goal. To reduce waiting times and encourage the use of public transport, the city will serve 390 miles of bus routes at a “high-frequency” rate, with the buses running every 15 minutes or less. The Charlotte Department of Transportation, Charlotte Area Transit System, the Metropolitan Transit Commission, and the Charlotte Regional Transportation Coalition have coordinated to service a total of 643 miles of routes during peak hours in order to integrate the public transit service for residents in the suburbs and exurbs of the city.

The Charlotte Area Transit System (CATS) will transition from having a central hub to creating mobility hubs throughout the city. These hubs would serve as defined centers that support multi-modal transportation options, such as walking, biking, and microtransit. Microtransit uses on-demand smaller shuttle vehicles to service less-populated areas and help resolve the first-mile/last-mile problem that many commuters experience.

Another component of the Strategic Mobility Plan is the Charlotte Streets Manual, a citywide mobility policy map that categorizes Charlotte’s arterial street network into defined street types that will support multimodal transport.
https://smartcitiesconnect.org/charlotte-nc-passes-strategic-mobility-plan/
Being with another person helps us discover ourselves. As we learn to appreciate the similarities that bring us closer, we also see the differences that make us unique.

The Strength of the Couple

Together, we create a reciprocal and mutually dependable bond that brings harmony to the relationship. In such an environment, we know that we are free to keep a level of independence. At the same time, we can explore creativity, passion, dreams, and goals. We grow individually, and together. We can explore life, with all its surprises and gifts, as a journey to enjoy and learn from. We share our passion and wisdom with our partner, being encouraged to be ourselves in the presence of another. When we see life through similar and complementary lenses, share beliefs and values, and relate to someone on a spiritual and personal level, it is easier to support each other in setting mutual goals and realizing common dreams with a healthy measure of interdependency. Time spent together adds to the quality of the interpersonal connection and enhances our well-being.

A Decision Based in Wisdom

My friend Zelia—a talented and sensitive artist—says: “Love is a decision, a choice, a feeling, and a connection with someone special. Although relationships can be challenging at times, I know they can be extremely supportive. I am fascinated by them.” In the ideal supportive environment Zelia referred to, when disagreements arise, loving couples respectfully express diverging or opposite views:

- They learn to accept weaknesses and to compensate for what is missing as well as they can, whenever possible.
- Couples who remember to show compassion, patience, and consideration for their partner last longer and are happier.
- When those in a relationship forgive, forget, and move on, they cultivate an intimate connection. This is far better than seeking to be right and wanting to prove the other person wrong.
- Those who respond to each other’s desires and needs with kindness can forgive the imperfection that characterizes human beings.

A Pragmatic Difference

Although physical attraction, strong feelings, and excitement are ingredients of a romantic relationship, it is friendship, camaraderie, and goodwill that transform romance into true, profound, and enduring love. Pragma and agape enable us to enrich another person’s life. They also guide us in adding value to their existence while serving their needs. In the context of a long-term relationship, pragma and agape empower us to help another person on their path. Furthermore, we can support them when their weaknesses prevent them from moving forward. Pragma and agape enable people to share inner gifts and talents; this fosters growth and contributes to peace. When we cultivate pragma and agape, we learn valuable lessons from our experience. We also continue to enjoy life with an enthusiastic attitude. This type of LOVE encompasses acceptance, compassion, and mutual support. As my friend Marie Léontine’s husband Jean Blaise Bilombo puts it: “On one hand, love is and remains the opposite of selfishness. On the other hand, love is the great ally of conversation, mutual caring, and dreaming together.” Pragma and agape bring us the opportunity to embrace a life full of promises, hopes, faith, and joy.

Let me ask you: how badly do you want to be in a fulfilling relationship? Are you connecting with other singles? Are you willing to shift your perspective from falling in love to standing for LOVE?
Keep the faith According to the couples I interviewed, as single gen-Xers and baby boomers, we must keep the faith. It is also crucial that we believe that there is someone out there looking for us. As we want to be found, they too long for love and connectedness. We should therefore endeavor to connect with a person who shares our values and interests. Let’s look for someone who wants a long-term commitment as much as we do. If we desire to build a long-term relationship, we are encouraged to take a chance and put ourselves out there so that we can be found. We must also be vulnerable and open, while focusing on what we want as much as what we have to offer. Once you meet the person that you might be interested in, listen with intent. This will allow you to hear what they share with you and to acknowledge their perspective. You will also discover whether you can add value to their existence and vice versa. As we grow in confidence, overcome our fear, and attempt to form a new bond, the key to our success lies in our ability to listen. In doing so, we will hear the other person’s heart and hurt, with intent and good will so that we can serve, support, and cherish them unselfishly. Instead of falling in love, remember that you can choose to stand for love.
https://naomibambara.com/three-reasons-to-stand-for-love/
This application is the U.S. national phase of International Application No. PCT/JP2009/062782, filed 15 Jul. 2009, which designated the U.S. and claims priority to Japanese Patent Application No. 2008-184104, filed 15 Jul. 2008, the entire contents of each of which are hereby incorporated by reference.

The present invention relates to an automated biological sample separating apparatus, an apparatus constituting the automated biological sample separating apparatus, and use thereof. More specifically, the present invention relates to an automated electrophoresis apparatus and an automated electrophoresis method.

After the accomplishment of the human genome projects, ardent research has been carried out on the proteome. The term “proteome” encompasses all proteins which are produced via translation in certain cells, organs, etc. One example of the research on the proteome is profiling of a protein. One of the most common techniques for profiling a protein is 2-dimensional electrophoresis of the protein. Proteins have different electric charges and molecular weights unique to themselves. From the proteome, which is a mixture of many kinds of proteins, the proteins may be separated based on their electric charges or molecular weights. However, it is possible to perform protein separation from the proteome with higher resolution for more kinds of proteins by separating the proteins based on electric charges and molecular weights in combination.

The 2-dimensional electrophoresis includes two electrophoresis steps: isoelectric focusing electrophoresis for separating proteins based on their electric charge differences; and slab gel electrophoresis (especially SDS-PAGE) for separating the proteins based on their molecular weight differences. Moreover, the 2-dimensional electrophoresis may be carried out with a sample prepared with or without a denaturing agent. As such, the 2-dimensional electrophoresis is an excellent technique capable of separating several hundred kinds of proteins at one time.

In the 2-dimensional electrophoresis, a sample is subjected to the isoelectric focusing electrophoresis in a first dimension gel. Then, the first dimension gel is transferred to be applied to a second dimension gel in which the sample is subjected to the molecular-weight-based separation. Generally, the first dimension gel for the isoelectric focusing electrophoresis is very thin in comparison with its width and length. Therefore, it is difficult to recognize which side is the front side or back side of the gel, and in which way the gel has a pH gradient. Further, the gel is easily warped or twisted and thus is poor in shape stability. This can be a cause of poor reproducibility of results of the electrophoresis. Further, handling of the first dimension gel is not easy, which poses an impediment to efforts to improve the transfer of the first dimension gel to the second dimension gel in terms of positioning accuracy. Moreover, in case the second dimension separation is carried out with SDS-PAGE, it is required to perform equilibrating (SDS treatment and reduction) treatment (chemical treatment) on the first dimension gel after the first dimensional electrophoresis, so that the proteins in the first dimension gel will be able to migrate through the second dimension gel. Due to the need of such treatment of the first dimension gel, the 2-dimensional electrophoresis produces different results depending on the operator's proficiency.
As described above, the 2-dimensional electrophoresis is an excellent technique, yet it requires the operator to be highly skilled in operating it. This dependency on the operator's proficiency makes it difficult for the 2-dimensional electrophoresis to obtain quantitative data with good reproducibility. In order to overcome this problem, techniques to automate the 2-dimensional electrophoresis have been developed (see Patent Literature 1 and Non-Patent Literature 1).

Patent Literature 1: Japanese Patent Application Publication, Tokukai, No. 2007-64848 A (Publication Date: Mar. 15, 2007)
Non-Patent Literature 1: Hiratsuka et al., Fully Automated Two-Dimensional Electrophoresis System for High-Throughput Protein Analysis, Anal. Chem., 79 (15), 5730-5739, 2007

As described above, there has been a huge demand for higher spotting accuracy in electrophoresis for analyzing a biological sample. In response to this, there is a need for an electrophoresis technique that can achieve a higher accuracy than the techniques of Patent Literature 1 and Non-Patent Literature 1. The present invention is accomplished in view of the aforementioned problems, and a main object of the present invention is to provide an electrophoresis technique with higher accuracy than conventional arts.

In order to attain the object, an electrophoresis apparatus according to the present invention includes: a sample separating section for containing a sample separating medium for separating a sample in a horizontal direction, the sample separating section containing the sample separating medium in such a manner that the sample separating medium has an exposed portion at at least one end of a surface of the sample separating medium, the surface being in parallel with the horizontal direction; and medium connecting means for connecting a sample containing medium to the sample separating medium at a connecting region, the sample containing medium containing a sample, and the connecting region satisfying the following equation (1):

Y ≧ 0.4 × X  (1),

where X is a distance in the horizontal direction from an inside end of the exposed portion exposed on an upper surface of the sample separating medium, to a proximal end of the connecting region to the sample separating section, and Y is a distance in a vertical direction.

With this arrangement, the sample containing medium containing the sample can be connected, at an appropriate position, with the sample separating medium for separating the sample. Thus, it is possible to improve accuracy of spots obtained as a result of the electrophoresis, compared to the conventional arts.

The electrophoresis apparatus according to the present invention may be preferably arranged such that the sample separating medium has a supporting section whose top reaches a height equal to or higher than the inside end of the exposed portion, and the connecting region is on the supporting section.

With this arrangement, the medium connecting means connects the sample containing medium to the sample separating medium in such a way that the electrophoresis will move the sample away from where the supporting section is located. Thus, the presence of the supporting section, surprisingly, improves the electrophoresis.
Note that, for example in case where the exposed portion has a flat top, the exposed portion of the sample separating medium is deformed by the sample containing medium pushed against a non-terminal-side portion of the exposed portion, so that a terminal-side portion of the exposed portion serves as the supporting section (see FIG. 3(b)).

The electrophoresis apparatus according to the present invention may be preferably arranged such that the sample separating section has two plates being parallel with each other, for holding the sample separating medium therebetween. With this arrangement, the sample separating medium has a board-like shape defined by the flat surfaces parallel with each other. As a result, the electrophoresis can perform more appropriate sample separation.

The electrophoresis apparatus according to the present invention may be arranged such that the sample containing medium and the sample separating medium are a gel or gels that contain(s) a gelling agent selected from the group consisting of polyacrylic amides, agarose, agar, and starch. With this arrangement, the use of such a gel or gels leads to more appropriate sample separation.

The electrophoresis apparatus according to the present invention may be arranged such that the sample containing medium is higher in viscoelasticity than the sample separating medium. In other words, it is preferable that the sample separating medium and the sample containing medium be respectively configured to have such structural strengths that the sample containing medium will not be deformed but the sample separating medium will be deformed when the sample containing medium is connected to the sample separating medium. With this arrangement, the sample separating medium will be deformed without deforming the sample containing medium when the sample containing medium is connected to the sample separating medium. Thus, it becomes possible to push the sample containing medium into the sample separating medium appropriately.

The electrophoresis apparatus according to the present invention may be arranged such that it further includes: a first buffer tank containing at least part of the exposed portion; and a second buffer tank being located in such a manner that the sample separating section will be in between the second buffer tank and the exposed portion, the sample separating section having a communicating opening for communicating between the sample separating medium and the second buffer tank. With this arrangement, in which the buffer tanks are provided, which are respectively connected with both ends of the sample separating medium, it is possible to easily perform the electrophoresis.

The electrophoresis apparatus according to the present invention may be arranged such that it includes a structure in which the sample separating section, the first buffer tank, and the second buffer tank are integrated. With this arrangement, in which the sample separating section, the first buffer tank, and the second buffer tank are integrated in a single unit, it is easy to handle them.

The electrophoresis apparatus according to the present invention may be arranged such that it further includes: a cap for medium shaping, the cap being detachably provided so as to cover the exposed portion of the sample separating medium in the first buffer tank; and a seal for medium shaping, the seal sealing the communicating opening.
With this arrangement, in which the cap for medium shaping and the seal for medium shaping are provided, the sample separating medium can be easily cast into a shape in the sample separating section by using the cap and seal.

The electrophoresis apparatus according to the present invention may be arranged such that the first buffer tank and the cap are configured such that, when the cap is attached inside the first buffer tank, the first buffer tank still has a space for holding a liquid. With this arrangement, the cap for medium shaping does not fill up the first buffer tank. Thus, the cap can be easily removed from the first buffer tank. Moreover, for example in case where the sample separating medium is an acrylic amide gel, which solidifies under anaerobic conditions, it is not preferable that the cap fill up the first buffer tank, because the gel would be solidified in a gap between the cap and the first buffer tank. In the present embodiment, however, it is configured that the cap does not fill up the first buffer tank wholly, thereby preventing the solidification of the gel in the gap.

The electrophoresis apparatus according to the present invention may be preferably arranged such that the sample separating section has a first engaging section and the cap has a third engaging section, wherein the first engaging section and the third engaging section are configured to engage together, and the first buffer tank has a second engaging section and the cap has a fourth engaging section, wherein the second engaging section and the fourth engaging section are configured to engage together. With this arrangement, it is possible to easily secure the cap to the electrophoresis apparatus.

The electrophoresis apparatus according to the present invention may be preferably arranged such that: the first engaging section and the third engaging section are configured to engage with each other by being configured such that the first engaging section has a protrusion section and the third engaging section has a recess section corresponding to the protrusion section, or such that the first engaging section has a recess section and the third engaging section has a protrusion section corresponding to the recess section; and the second engaging section and the fourth engaging section are configured to engage with each other by being configured such that the second engaging section has a protrusion section and the fourth engaging section has a recess section corresponding to the protrusion section, or such that the second engaging section has a recess section and the fourth engaging section has a protrusion section corresponding to the recess section.

It is preferable that the engagement between the first engaging section and the third engaging section be different in depth from the engagement between the second engaging section and the fourth engaging section. With this arrangement, the second engaging section and the fourth engaging section can be engaged smoothly even after the engagement of the first engaging section and the third engaging section, in the case where the engagement of the second engaging section and the fourth engaging section is deeper than the engagement of the first engaging section and the third engaging section.
Alternatively, in the case where the engagement of the first engaging section and the third engaging section is deeper than the engagement of the second engaging section and the fourth engaging section, the first engaging section and the third engaging section can be engaged smoothly even after the engagement of the second engaging section and the fourth engaging section. In this way, the engagement of one pair of the engaging sections can be performed after the engagement of the other pair of the engaging sections, while letting the air out of the sample separating medium. Thus, the shaping of the sample separating medium can be performed appropriately.

The electrophoresis apparatus according to the present invention may be preferably arranged such that the cap has an overlapping section for being overlapped with a surface of the sample separating section or a side wall of the first buffer tank. With this arrangement, the overlapping section does not allow the air to enter inside the cap, thereby making it possible to perform the shaping of the sample separating medium appropriately.

The electrophoresis apparatus according to the present invention may be preferably arranged such that the overlapping section has an overlapping width of at least 1 mm. With this arrangement, in which the overlapping section has an overlapping width of at least 1 mm, it is made more difficult for the air to enter inside the cap. Thus, it is possible to perform the shaping of the sample separating medium more appropriately.

The electrophoresis apparatus according to the present invention may be preferably arranged such that the communicating opening is located on an upper surface of the sample separating section. With this arrangement, in which the communicating opening is located on the upper surface of the sample separating section, the seal for sealing the communicating opening is also attached onto the upper surface of the sample separating section. Thus, conveniently, the seal can be easily removed from the sample separating section.

The electrophoresis apparatus according to the present invention may be preferably arranged such that the sample separating medium has a bulged portion bulged downward, located on a distal side to the exposed section, in a region ranging from the communicating opening to an edge of the sample separating medium. With this arrangement, the bulged portion is located closer to the edge of the sample separating medium than the communicating opening is. Thus, the location of the bulged portion does not impede the sample separation in the sample separating medium. The bulged portion lets the sample separating medium have a heavier weight. In disassembling the sample separating section for removal of the sample separating medium, this helps the sample separating medium remain in a lower part of the sample separating section. Moreover, by letting the sample separating medium contain an electrolyte, it becomes possible to supply the electrolyte from the bulged portion.
A method according to the present invention for performing electrophoresis includes: connecting a sample separating medium with a sample containing medium at a connecting region, (i) the sample separating medium being configured to separate a sample in a horizontal direction and being held in an insulating member in such a manner that the separating medium has an exposed portion at at least one end of a surface of the sample separating medium, (ii) the sample containing medium containing the sample, and the connecting region satisfying the equation (1):

Y ≧ 0.4 × X  (1),

where X is a distance in the horizontal direction from an inside end of the exposed portion exposed on an upper surface of the sample separating medium, to a proximal end of the connecting region to the sample separating section, and Y is a distance in a vertical direction.

Further, the method according to the present invention may be preferably arranged such that the sample separating medium has a supporting section whose top reaches a height equal to or higher than the inside end of the exposed portion, and the connecting region is on the supporting section. These arrangements can bring about the same effects as the electrophoresis apparatus according to the present invention.

With the electrophoresis technique according to the present invention, a sample separating medium for separating a sample can be connected, at an appropriate position, with a sample containing medium for containing the sample. Therefore, the electrophoresis technique according to the present invention can achieve a higher accuracy than the conventional arts.

FIG. 1 is a cross sectional view schematically illustrating a configuration of an electrophoresis apparatus 100 according to one embodiment of the present invention. FIG. 2 is a perspective view schematically illustrating a configuration of the electrophoresis apparatus 100. As illustrated in FIG. 1, the electrophoresis apparatus 100 is configured such that a sample separating section 101 including a first plate 102 and a second plate 103 parallel with the first plate holds a sample separating medium 150 therebetween. The sample separating medium 150 is a medium in which a sample is to be separated in a horizontal direction along the medium. The sample separating section 101 holds the sample separating medium 150 in such a manner that a portion (an exposed portion 151) of the sample separating medium 150 is exposed at one edge of a surface parallel with the horizontal direction. The term “inside end of the exposed portion (exposed part)” used in this Description is intended to mean that portion of the sample separating medium which is exposed and is in contact with a sample separating section.

Moreover, the electrophoresis apparatus 100 has a first buffer tank 104 and a second buffer tank 105. At least part of the exposed portion 151 is inserted inside the first buffer tank 104. The second buffer tank 105 is located in such a manner that the sample separating section 101 is in between the second buffer tank 105 and the exposed section 151. The first plate 102 of the sample separating section 101 has a communicating opening 106 for communicating between the sample separating medium 150 and the second buffer tank 105.

The sample separating medium 150 has a bulged portion 152 in a region ranging from the communicating opening 106 to an edge of the sample separating medium 150, on a distal side to the exposed section 151. The bulged portion 152 is bulged downward.
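To make the condition of equation (1) concrete, the check can be stated in a few lines of code. This JavaScript sketch is ours, not the patent's; the function name is invented, and the sample points are chosen to mirror entries #10, #11, and #19 of Table 1 further below:

function satisfiesEquation1(xMm, yMm) {
  // equation (1): the connecting region is acceptable when Y >= 0.4 * X
  return yMm >= 0.4 * xMm;
}

console.log(satisfiesEquation1(0.75, 0.3)); // true  (cf. #10 "∘" in Table 1)
console.log(satisfiesEquation1(1.0, 0.3));  // false (cf. #11 "x")
console.log(satisfiesEquation1(1.5, 0.6));  // true  (cf. #19 "∘")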
Moreover, the electrophoresis apparatus 100 includes medium connecting means including a holding section 107 and a transporting arm 108. The holding section 107 supports a sample containing medium 160 in which a sample is contained. The transporting arm 108 is configured to move the holding section 107. In one embodiment, the transporting arm 108 is a transporting arm capable of moving the holding section 107 in 2-dimensional directions as illustrated in FIG. 2, and is configured to transfer the sample containing medium 160 from a sample containing medium storage site 161 to a position where the sample containing medium 160 is connected with the sample separating medium 150.

As illustrated in FIG. 1, the sample separating section 101, the first buffer tank 104, and the second buffer tank 105 may be integrated in a single structure, so that the single structure can be used as a chip for use in electrophoresis.

The first plate 102 and the second plate 103 may be made from an insulating material such as an acrylic material, glass, etc. The sample separating section 101 is configured such that the first plate 102 and the second plate 103 are adhered together so as to hold the sample separating medium 150 therebetween. After electrophoresis is performed with the sample separating section 101, the first plate 102 and the second plate 103 are taken off from the sample separating section 101 by using a tool such as a spatula. Thereby, the sample separating medium 150 is removed from the sample separating section 101 so as to be subjected to a subsequent analysis. The first plate 102 and the second plate 103 may be adhered together with a general adhesive, but it is preferable to adhere the first plate 102 and the second plate 103 by ultrasonic welding, because the ultrasonic welding is free from dispersion of the adhesive and the like problems.

The second plate 103 is extended as long as the sample separating medium 150. Therefore, when the first plate 102 and the second plate 103 are separated from each other, the sample separating medium 150 can easily remain on the second plate 103. That is, it is the second plate 103 that supports a lateral side of the sample separating medium (not illustrated). Moreover, as described above, the sample separating medium 150 has the bulged portion 152, thereby making it easier for the sample separating medium 150 to remain on the second plate 103. Because the subsequent analysis is supplied with the sample separating medium 150 in such a form that it remains on the second plate 103, the subsequent analysis can be performed in a fixed manner.

The sample containing medium 160 and the sample separating medium 150 may be any kind of media that are generally used for electrophoresis. For example, the sample containing medium 160 and the sample separating medium 150 may be a gel or gels that contain(s) a gelling agent selected from the group consisting of polyacrylic amides, agarose, agar, and starch.

The sample containing medium 160 contains a sample to be subjected to the electrophoresis. The sample may be uniformly distributed throughout the sample containing medium 160. However, a primary electrophoresis may be performed on the sample containing medium 160.

The sample containing medium 160 is connected with the exposed section 151 of the sample separating medium 150 as illustrated in FIG. 3.
Moreover, in one embodiment, the sample containing medium 160 is transferred on top of the exposed portion 151 as illustrated in FIG. 3(a); then the sample containing medium 160 is pushed downward into the exposed portion 151, so that the sample containing medium 160 and the exposed portion 151 are connected with each other (media connecting step). When connected with each other, the sample containing medium 160 and the exposed portion 151 are connected via a connecting region satisfying the following equation (1):

Y ≧ 0.4 × X  (1),

where X is a horizontal distance from an inside end of the exposed portion 151 to the sample containing medium 160, and Y is a distance in a vertical direction. By this, it becomes possible to appropriately transfer the sample from the sample containing medium to the sample separating medium.

Moreover, it is preferable that the sample separating medium 150 have a supporting section 153 whose top reaches a height equal to or higher than the inside end of the exposed portion 151 as illustrated in FIG. 3(b), and that the connecting region be located on the supporting section 153, as illustrated in FIG. 3(b). For example, a configuration in which the connecting region is on the supporting section 153 as illustrated in FIG. 3(b) is more preferable than a configuration in which the connecting region is out of the supporting section 153 as illustrated in FIG. 3(c).

Moreover, it is preferable that the sample containing medium 160 be higher in viscoelasticity than the sample separating medium 150. In other words, it is preferable that the sample separating medium 150 and the sample containing medium 160 be respectively differentiated in terms of their structural strength such that the sample containing medium 160 will not be deformed but the sample separating medium 150 will be deformed, when the sample separating medium 150 is attached to the sample containing medium 160. With this configuration, it becomes possible to appropriately deform the supporting section 153 as illustrated in FIG. 3. For differentiating the sample separating medium 150 and the sample containing medium 160 in terms of viscoelasticity and structural strength, the sample separating medium 150 and the sample containing medium 160 may be adjusted in viscoelasticity and structural strength by, for example, preparing the sample separating medium 150 and the sample containing medium 160 with different kinds of gelling agents, or, more preferably, with an identical gelling agent in such different quantities that the sample containing medium 160 has a greater gelling agent content than the sample separating medium 150.

FIG. 4 is a cross sectional view schematically illustrating the electrophoresis apparatus according to one embodiment of the present invention as to formation of a sample separating medium. As illustrated in FIG. 4, the sample separating section 101 is sealed in such a manner that the exposed portion 151 is covered with a cap 130 for medium shaping and the communicating opening 106 is covered with a seal 120 for medium shaping. In this way, the sample separating medium 150 can be formed inside the sample separating section 101.

Here, the cap 130 does not fill up the first buffer tank 104. Thus, the cap 130 can be easily removed from the first buffer tank 104.
Moreover, for example in case where the sample separating medium 150 is an acrylic amide gel, which solidifies under anaerobic conditions, it is not preferable that the cap 130 fill up the first buffer tank 104, because the gel would be solidified in a gap between the cap 130 and the first buffer tank 104. In the present embodiment, however, it is configured that the cap 130 does not fill up the first buffer tank 104 wholly, thereby preventing the solidification of the gel in the gap.

FIG. 5(a) is a perspective view illustrating shapes of the sample separating section 101 and the first buffer tank 104, with which the cap 130 for medium shaping is to be engaged. FIG. 5(b) is a perspective view schematically illustrating a configuration of the cap 130 for medium shaping. As illustrated in FIGS. 5(a) and 5(b), a recess section 109 (first engaging section) of the sample separating section 101 and a protrusion section 131 (third engaging section) of the cap 130 are to be engaged together. Meanwhile, a recess section 110 (second engaging section) of the first buffer tank 104 and a protrusion section 132 (fourth engaging section) of the cap 130 are to be engaged together. Because the engagement between the recess section 110 and the protrusion section 132 engages deeper than the engagement between the recess section 109 and the protrusion section 131, a user may firstly make the engagement between the recess section 110 and the protrusion section 132, and secondly make the engagement between the recess section 109 and the protrusion section 131. In this way, it is possible to engage the cap 130 with the sample separating section 101 and the first buffer tank 104 without allowing air to enter therein.

As illustrated in FIG. 5(b), the cap 130 has an overlapping section of 1 mm in width, which overlaps with a surface of the sample separating section 101 and a side wall of the first buffer tank 104. With this configuration, the cap 130 can be engaged with the sample separating section 101 and the first buffer tank 104, thereby preventing air from entering the gel after the engagement.

To determine an appropriate connection position for the connection between the sample separating medium and the sample containing medium, computer simulation of an electrophoresis of a sample was carried out by using, as a model, an electrophoresis apparatus as illustrated in FIG. 6.

As illustrated in FIG. 6, the electrophoresis apparatus 200 was configured such that, between a cathode 201 and an anode 202, a gel (sample separating medium) 205 of 1 mm in thickness, sandwiched between acrylic plates 203 and 204 of 2 mm in thickness, was provided. The acrylic plate 203 was absent on the cathode 201 side, thereby exposing the gel 205 by 10 mm on the cathode 201 side. A gel (sample containing medium) 207 of 0.4 mm in thickness and 1.2 mm in width, supported by a supporter 206, was pushed into the exposed portion of the gel 205. Here, X was a distance between the gel 207 and a cathode-side end of the acrylic plate 203, the cathode-side end having been proximal to the cathode 201 (that is, X was a distance by which the sample traveled through the exposed gel 205), and Y was a distance between an upper side of the gel 205 and an upper side of the gel 207 (that is, Y was a distance by which the gel 207 was pushed into the gel 205).

The gels 205 and 207 had a dielectric constant equal to that of water. The sample (charged particles) tested herein was modeled lysozyme.
Mobility of the model lysozyme was calculated from actual measurement values of SDS-PAGE of lysozyme. It was assumed that the model lysozyme was to enter the gel (sample separating medium) 205 from eight (8) positions of the gel (sample containing medium) 207, the eight positions being 0.02 mm inside the four corners of the gel 207 and the midpoints of its four sides, respectively.

FIG. 7 is a view illustrating results of simulation where Y was 0 mm and X was varied in a range of 0 to 3 mm. In FIG. 7, the left-hand side shows the movement of the modeled lysozyme in the vicinity of the position where the gel 207 was pushed in, while the right-hand side shows the movement of the modeled lysozyme in the vicinity of an anode-side end of the gel 205, the anode-side end having been proximal to the anode 202. FIGS. 8, 9, and 10 are illustrated in the same fashion.

When X=0 mm, the modeled lysozyme moved in the gel 205 but did not diffuse into the buffer, as shown in #1. The modeled lysozyme entered the gel 205 from the midpoint of the upper side of the gel 207, and the positions on the right-hand side of the gel 207 were blocked by a wall (acrylic plate 203) provided to the gel. When X=0.5 mm, 0.75 mm, 1 mm, 1.5 mm, or 2 mm, diffusion of the modeled lysozyme into the buffer was observed, as shown in #2, #3, #4, #5, or #6. Meanwhile, the modeled lysozyme was not blocked by the wall in these cases. When X=3 mm, as illustrated in #7, the diffusion of the modeled lysozyme into the buffer was observed. In this case, however, the modeled lysozyme from the middle of the upper side of the gel 207 was blocked by the wall (acrylic plate 203).

As described above, when Y=0 mm, the movement of the modeled lysozyme in the gel (sample separating medium) 205 was observed from all the positions of the gel (sample containing medium) 207 only in the case where X=0 mm.

FIG. 8 is a view illustrating results of simulation where Y=0.3 mm and X was varied in the range of 0 to 3 mm.

When X=0 mm, 0.5 mm, or 0.75 mm, the movement of the modeled lysozyme in the gel 205 was observed, but the modeled lysozyme did not diffuse into the buffer and also was not blocked by the acrylic plate. When X=1 mm, 1.5 mm, or 2 mm, the modeled lysozyme was diffused into the buffer, as shown in #11, #12, or #13. In these cases, the modeled lysozyme was not blocked by the wall. When X=3 mm, the diffusion of the modeled lysozyme into the buffer was observed, as illustrated in #14, but the modeled lysozyme from the midpoint of the upper side of the gel 207 was blocked by the wall (acrylic plate 203).

As described above, when Y=0.3 mm, the movement of the modeled lysozyme in the gel (sample separating medium) 205 was observed from all the positions of the gel (sample containing medium) 207 only in the case where X=0.75 mm or less.

FIG. 9 is a view illustrating results of simulation where Y=0.6 mm and X was varied in a range of 0 to 3 mm.

When X=0 mm, the movement of the modeled lysozyme in the gel 205 was observed without diffusion into the buffer, as illustrated in #15. The modeled lysozyme from the left portion of the lower side of the gel 207 was blocked by the wall (acrylic plate 204). When X=0.5 mm, 0.75 mm, 1 mm, or 1.5 mm, the movement of the modeled lysozyme in the gel 205 was observed without diffusion into the buffer, as illustrated in #16, #17, #18, or #19.
The modeled lysozyme from three positions on the lower side of the gel 207 was blocked by the wall (acrylic plate 204). When X=2 mm, the modeled lysozyme was diffused into the buffer, as illustrated in #20. Moreover, the modeled lysozyme from the three positions on the lower side of the gel 207 was blocked by the wall (acrylic plate 204). When X=3 mm, as illustrated in #21, the modeled lysozyme was diffused into the buffer. Meanwhile, in this case, the modeled lysozyme from the midpoint of the upper side of the gel 207 was blocked by the wall (acrylic plate 203) and the modeled lysozyme from the three points on the lower side of the gel 207 was blocked by the wall (acrylic plate 204).

As described above, when Y=0.6 mm, the movement of the modeled lysozyme in the gel (sample separating medium) 205 was observed from all the positions of the gel (sample containing medium) 207 only in the case where X=1.5 mm or less.

Table 1 shows the above results. “∘” indicates that the modeled lysozyme from all the points moved in the gel (sample separating medium) 205, while “x” indicates that the modeled lysozyme from any of the points was diffused into the buffer.

TABLE 1

           X (mm)
Y (mm)     0        0.5      0.75     1        1.5      2        3
0.0        #1 ∘     #2 x     #3 x     #4 x     #5 x     #6 x     #7 x
0.3        #8 ∘     #9 ∘     #10 ∘    #11 x    #12 x    #13 x    #14 x
0.6        #15 ∘    #16 ∘    #17 ∘    #18 ∘    #19 ∘    #20 x    #21 x

As illustrated in Table 1, favorable results were obtained when the connecting region via which the gel 207 and the gel 205 were connected satisfied the following Equation (1):

Y ≧ 0.4 × X  (1),

where X is a distance from the gel (sample containing medium) 207 to the acrylic plate (upper plate for gel) 203, and Y is a distance from the upper side of the gel 207 to the upper side of the gel (sample separating medium) 205.

To analyze how the thickness of the gel 205 affected the results of the simulation in Calculation Example 1, the simulation was repeated based on the model in Calculation Example 1 but with different thicknesses of the gel 205.

FIG. 10 is a view illustrating results of simulation, where X=1 mm, Y=0.6 mm, and the thickness of the gel 205 was varied in the range of 1 to 9 mm. Models 1 to 4 respectively show the results of the simulation with the gel 205 of 1 mm, 3 mm, 6 mm, and 9 mm in thickness.

As shown in Models 1 to 4, no diffusion of the modeled lysozyme into the buffer was observed with the gel 205 of any of these thicknesses. In Model 1, the modeled lysozyme from three positions on the lower side of the gel 207 was blocked by the wall (acrylic plate 204). In Models 2 to 4, the greater thickness of the gel 205 prevented the modeled lysozyme from abutting the wall.

In Models 2 to 4, with the greater thickness of the gel 205, it was observed that the course of the movement of the modeled lysozyme showed large downward curves immediately after the modeled lysozyme moved out of the gel 207, compared with Model 1. It is deduced that the greater thickness of the gel 205 contributed to this peculiar movement with the large downward curves. In the vicinity of the gel 207, electric flux lines were blocked by the supporter 206 and thereby curved downwardly.
It is deduced that in Model 1, the acrylic plate 204 is located right under the gel 207, thereby preventing the downward curving of the electric flux lines, whereas the greater thickness of the gel 205 allows the electric flux lines to curve downward in the gel 205 right under the gel 207 in Models 2 to 4, thereby resulting in the downward course of the movement of the modeled lysozyme immediately after the modeled lysozyme moves out of the gel 207.

Therefore, it is considered that the course of the electrophoresis movement of the modeled lysozyme (sample) is not influenced by the thickness of the gel 205 (sample separating medium).

Experiment Example 1

An electrophoresis apparatus according to the present invention was prepared as illustrated in FIG. 1. Electrophoresis was actually conducted by using the electrophoresis apparatus with various X and Y parameters. Separation of samples was detected by fluorescence. The sample containing medium was an IPG gel medium in which a first-dimensional electrophoresis of mouse liver soluble proteins had been conducted.

FIG. 11 is a view illustrating fluorescent spots obtained as a result of the electrophoresis in which Y=0.3 mm and X was varied in a range of 0 to 2 mm. As illustrated in FIG. 11, there was a tendency that an increase in X caused a decrease in overall protein intensity. Moreover, when X and Y satisfied the Equation (1), that is, when X=0 mm or 0.5 mm, the spots showed no tailing, as shown in #8 or #9. When X=1 mm, the spots showed slight tailing, as shown in #11. On the other hand, in the case where X and Y did not satisfy the Equation (1), that is, when X=1.5 mm or 2 mm, the spots showed tailing, as shown in #12 or #13.

Experiment Example 2

An electrophoresis apparatus according to the present invention was prepared as illustrated in FIG. 1. Electrophoresis was actually conducted by using the electrophoresis apparatus. Separation of samples was detected by fluorescence. The sample containing medium was an IPG gel medium in which a first-dimensional electrophoresis of mouse liver soluble proteins had been conducted. FIG. 12(a) shows results of electrophoresis using a connecting method (1 step) in which the supporting section 153 was formed as illustrated in FIG. 3(b). FIG. 12(b) shows results of electrophoresis using a connecting method (2 step) in which no supporting section was formed, as illustrated in FIG. 3(c). As illustrated in FIG. 12, the result was better in the configuration (1 step) in which the sample containing medium was connected vertically to the sample separating medium along a direction perpendicular to a direction in which the sample was separated than in the configuration (2 step) in which the sample containing medium was connected sideways to the sample separating medium along the direction in which the sample was separated.

Further, the respective images of the results were detected by using Typhoon (GE Healthcare) and subjected to image processing by PDQuest (Bio-Rad). Then, by using ProFinder (PerkinElmer), each spot was detected and analyzed in terms of its spot fluorescence intensity and spot gravity point. From the coordinates of the spot gravity point, a peak top position and a half-value width of the spot were determined. Table 2 shows results of the analysis.
TABLE 2
            1 step                          2 step
spot        Y half-value width   CV        Y half-value width   CV
#1          9.75                 9.819765  14.25                15.56039
#2          8.75                 5.714286  11.75                23.43647
#3          13.25                11.32075  14.75                10.16949
#4          12.5                 10.32796  13.25                7.225865
#5          13.75                12.42055  14.25                3.508772
#6          11                   16.59765  13.75                19.12695
#7          13.25                16.73476  16                   26.02082
#8          16                   29.3151   15                   19.62614
#9          12.25                13.94143  11.75                8.148316
#10         10.5                 19.82539  9.75                 21.14413
#11         15.25                18.83463  16                   5.103104

As shown in Table 2, the 1 step configuration had a smaller half-value width, thereby having a greater resolution. It is deduced that a smoother transfer of the sample from the sample containing gel to the sample separating gel accounts for the smaller half-value width and the greater resolution. Simulation (not shown) showed no difference between the 1 step configuration and the 2 step configuration.

The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims. Moreover, the entire contents of all the non-patent literatures and patent literatures cited in the Description of the present application are incorporated herein by reference.

INDUSTRIAL APPLICABILITY

The present invention, capable of providing a solution to the drawbacks of the 2-dimensional electrophoresis apparatus, can facilitate proteome research, which has recently been pursued with great ardor. By producing and selling various components for use in electrophoresis apparatuses according to the present invention, it is possible to stimulate markets.

Reference Signs List
100, 200: Electrophoresis apparatus
101: Sample separating section
102: First plate
103: Second plate
203, 204: Acrylic plate
104: First buffer tank
105: Second buffer tank
106: Communicating opening
107, 206: Holding section
108: Transporting arm
109: Recess section (first engaging section)
110: Recess section (second engaging section)
120: Seal for medium shaping
130: Cap for medium shaping
131: Protrusion section (third engaging section)
132: Protrusion section (fourth engaging section)
150, 205: Sample separating medium
151: Exposed portion
152: Bulged portion
153: Supporting section
160, 207: Sample containing medium
161: Sample containing medium storage site

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross sectional view schematically illustrating a configuration of an electrophoresis apparatus according to one embodiment of the present invention.
FIG. 2 is a perspective view schematically illustrating a configuration of the electrophoresis apparatus according to the embodiment of the present invention.
FIG. 3 is a schematic view illustrating a variation of a way of connecting a sample containing medium and a sample separating medium in the electrophoresis apparatus according to one embodiment of the present invention.
FIG. 4 is a cross sectional view schematically illustrating the electrophoresis apparatus according to one embodiment of the present invention as to formation of a sample separating medium.
FIG. 5 is a perspective view schematically illustrating a configuration of a cap for medium shaping for the electrophoresis apparatus according to one embodiment of the present invention.
FIG. 6 is a cross sectional view illustrating a model of an electrophoresis apparatus used in Calculation Example 1.
FIG. 7 is a view illustrating results of simulation in Calculation Example 1.
FIG. 8 is a view illustrating a simulation result in Calculation Example 1.
FIG. 9 is a view illustrating a simulation result in Calculation Example 1.
FIG. 10 is a view illustrating a simulation result in Calculation Example 2.
FIG. 11 is a view illustrating an electrophoresis result in Experiment Example 1.
FIG. 12 is a view illustrating an electrophoresis result in Experiment Example 2.
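As a quick numerical cross-check of Equation (1), the following minimal Python sketch (provided purely for illustration and not part of the disclosed apparatus; the outcomes are transcribed from Table 1 above) confirms that the criterion Y ≧ 0.4×X exactly separates the favorable "∘" cases from the "x" cases:

# Check each simulated (X, Y) pair of Table 1 against Equation (1).
# "o" marks favorable cases (no diffusion into the buffer), "x" the rest.
table1 = {
    0.0: {0: "o", 0.5: "x", 0.75: "x", 1: "x", 1.5: "x", 2: "x", 3: "x"},
    0.3: {0: "o", 0.5: "o", 0.75: "o", 1: "x", 1.5: "x", 2: "x", 3: "x"},
    0.6: {0: "o", 0.5: "o", 0.75: "o", 1: "o", 1.5: "o", 2: "x", 3: "x"},
}

for y, row in table1.items():
    for x, simulated in row.items():
        predicted = "o" if y >= 0.4 * x else "x"
        assert predicted == simulated  # Equation (1) reproduces every outcome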
Lions and tigers lived longer ago

Jack Tseng and several other researchers at the University of Southern California, the American Museum of Natural History (where Jack is the Frick Research Fellow), the Smithsonian Institution, the Chinese Academy of Sciences and Gansu Provincial Museum in Lanzhou, among others, have been looking at the genus Panthera, made up of the lions, tigers, jaguars, leopards and snow leopards of this world. They used a recent discovery of the oldest fossil to try to uncover the complex genealogy of all the big cats.

In another, antique world, from the Pleistocene, these pantherines were joined by even more big cats. Just one superb example is Panthera onca gombaszogensis, which was found in Italy but resembles the South American jaguar very closely. The well-known, unrelated species that died out most recently, about 10,000 years ago, are the American Smilodon spp., the fabled sabre-toothed cats. While current big cats are critically endangered, these ancient cats are extinct. We really can benefit from tracing their evolutionary history through the contradictory and slow-moving world of fossils and new palaeontological discoveries such as the new find, Panthera blytheae. Genetic studies are nowadays also naturally used to show relationships that could not be imagined using other methods.

The plague of fossil researchers is "ghost lineages," which seem to spring up whenever there is a new fossil find. In only 3.8 million years, many speciations seem to have given us the modern fauna. However, in the Zanda basin of the Himalaya, the newly discovered species may help by extending the pantherines back by a further 2 million years. At that time, in the Late Miocene, there were early felids such as the snow leopards and Neofelis, the clouded leopard, just before the appearance of the other members of the pantherine genus. Then came their migrations to become the African lions and leopards, the European lion, sadly long gone, and finally, via what is now the Bering Strait, to the Americas, where the jaguar fortunately thrived.

The authors' analyses show these cats extending back further than was thought into the Asian Miocene, probably on the Tibetan Plateau and its associated ranges, even as these peaks were completing their rise out of the continental mass. The link with the mountain-building period could prove crucial. Geological isolation may well have played a part in causing these unique animals to evolve among the steep cliffs and herbivore-filled plains.

Dr. Z. Jack Tseng and his colleagues present their paper in the Proceedings of the Royal Society B. There's more on the tiger species' unique genetic limitations in Tiger, tiger, burning less bright.
http://www.earthtimes.org/nature/lions-tigers-lived-longer/2492/
These deliciously easy oatmeal cookies will be your new favorite. Mix them up with your choice of add-ins and give them your own flair!

Prep Time 10 mins
Cook Time 10 mins
Total Time 20 mins

Course: Dessert
Cuisine: American
Keyword: dessert, oatmeal cookies, oatmeal cookies recipe
Servings: 4 dozen cookies

Ingredients
1 pound butter
2 tsp vanilla
2 cups sugar
2 cups brown sugar
4 eggs
4 cups flour
5 cups rolled oats, ground into oat flour
2 tsp baking soda
1 tsp salt
3-4 cups add-ins (dried fruit, mini M&Ms, etc.)

Instructions
Preheat oven to 375°F. Cream butter, sugar, and brown sugar together in a bowl. Add eggs and vanilla. Sift and add flour, then add ground oatmeal, baking soda, and salt. Toss in any add-ins and mix until combined. Dough will be very thick! (You may need to knead the dough on a floured countertop.) Place golf ball-size balls of dough on a baking sheet and press flat. Bake for 8-10 minutes or until golden brown. Let the cookies cool on the baking sheet before enjoying!

Notes
Store these easy oatmeal cookies in an airtight container or gobble them up with friends right away!
https://funcheaporfree.com/wprm_print/recipe/51230
Summary: Learn how to fix plasterboard to a stud wall.

Plasterboard is fixed to timber or metal stud walls. The sides of plasterboard will be different colours: usually pale grey or ivory on one side and a darker grey or brown on the reverse. When fixing plasterboard it is the lighter side that faces into the room; the darker side may carry the manufacturer's name or logo.

Preparation
- Marking the position of the timber studs on the ceiling will assist when you come to fix the sheets of plasterboard in position.
- Plasterboard comes in large sheets and can be difficult to lift and manoeuvre into position, particularly if you are working alone. To help lift the plasterboard up to the ceiling you can make a simple tool called a footlifter or rocking wedge. This works on the same principle as the seesaw.
- A footlifter is made from a piece of softwood 250mm (10in) L x 75mm (3in) W x 50mm (2in) H. On the side of the timber block draw two straight lines from the top corners to the centre of its base. The wedge-shaped pieces are then cut away.
- When fixing a sheet of plasterboard the bottom edge of the plasterboard is placed on one end of the footlifter. Then simply press down with your foot on the other end and the plasterboard will lift.
- Before starting, cut all the sheets of plasterboard 15mm (5/8 in) shorter than the height of the wall.

Fixing the plasterboard
Fix all the whole boards first. If boarding a complete wall with no door, start in the corner and work across to the other corner. If boarding a wall with a door, start at the door and work towards the furthest corner.
- Using the footlifter raise the first board firmly against the ceiling. Remember that it is the lighter side that faces into the room.
- Apart from the boards at the ends of the wall, the edges of the boards must be aligned midway over a stud.
- Nail the board around its edges and to the noggins between the studs using galvanised plasterboard nails. The size of the nail you use depends on the thickness of the board, but as a guide:
- Dropping a plumb line from the marks on the ceiling indicating the position of the studs will allow you to mark the position of each stud onto the face of the board. Nail the board to each stud at 150mm (6in) intervals.
- On reaching the end of the wall, measure the gap to be filled and cut a piece of board, remembering that it has to butt tightly up against the adjacent wall. You may need to scribe the boards to fit against uneven walls, floors or ceilings.
- Having fixed plasterboard to one side of the wall, repeat the process on the other side of the timber stud frame.

Filling the joints
When the plasterboard is fixed to the stud frame you will need to fill the joints between the boards. The method you employ will depend on whether you have used tapered-edge or square-edge boards.

Tapered-edge board
- Mix up some jointing compound as indicated on the packet. A stiff mix is best for covering nail or screw heads and gaps wider than 3mm (1/8 in).
- Then use a broad-blade filling knife to press the compound into the gap, spreading the compound in a thin layer covering about 25mm (1in) on either side of the gap.
- Cut and stick a length of jointing tape over the first joint using the jointing compound as the adhesive. Ensure that all air bubbles are pressed out and the tape is stuck firmly to the board.
- Repeat for all the other joints.
- Cover the taped joints with another thin layer of jointing compound using a plasterer's trowel or taping knife.
Smooth the compound so it is flush with the surface of the board. At this stage you can also fill any screw holes.
- Before the compound has dried fully, smooth over the joint with a moist jointing sponge, feathering the edges into the surface of the board and removing any excess compound. But take care not to move the tape.
- When the compound has dried, apply another thin layer, but this time feathering the compound 300mm (12in) beyond the joint.
- When this final layer of compound has dried, the walls can then be painted with a proprietary board sealant.

If using self-adhesive jointing tape, cut the tape to the length you require and stick it directly over the joint. Then follow the method as described above.

Square-edge board
- If using self-adhesive jointing tape, stick it directly over the joint. If using scrim tape, apply mixed plaster to the joint, pressing it firmly into the narrow gap between the boards and spreading it thinly on each side of the joint to create a band of plaster about 100mm (4in) wide.
- Measure and cut the scrim tape to the required length and, using a trowel, press it firmly against the plaster.
- With the scrim tape stuck in place over the joint, spread a thin layer of plaster over the whole length of the tape.
- Repeat this process for all the joints.

Cutting plasterboard
When fixing plasterboard, it is inevitable that you will need to cut some of the boards to fit.
- Measure the correct width you need on the board.
- Make a deep score along the marked line with a craft knife.
- Position the board so the scored line is over a straight edge and apply even pressure. The board should break along the line.
- Use the craft knife to cut the paper at the back of the board.
Alternatively you can cut the board using a fine-toothed panel saw.

Holes for switches and sockets are not cut into the board until the boards have been fixed to the stud frame.
- Mark the position of the switch or socket on the plasterboard.
- Drill a hole in each corner of the marked-out area large enough to accommodate the blade of a padsaw.
- Insert the blade of the padsaw in one of the holes and carefully cut along the line to the next hole.
- On reaching the next hole, turn the blade and cut along the next line. Continue in this fashion all around the marked-out area.
- Remove the piece of plasterboard.
https://www.lets-do-diy.com/projects-and-advice/fixing-plasterboard-to-a-stud-wall/
PROBLEM TO BE SOLVED: To provide a turntable which can drastically lessen the wobbling of the surface occurring in a recording medium under rotation and can drastically reduce the burden in terms of safety and environmental equipment.

SOLUTION: A base material of the turntable 1 is molded of a resin composition 2 consisting of a thermoplastic resin (PPS), inorganic fillers (whiskers), etc. A receiving surface 2a on which the disk-like recording medium 5 is to be placed, a center hub 2b for centering a central hole 5a of the recording medium 5, and a fitting hole 2c to be press-fitted onto a revolving shaft 4a of a spindle motor 4 are formed on this resin composition 2. Next, the resin composition is subjected to a blasting treatment with the center hub 2b masked, whereby the surface of the resin composition 2, exclusive of the center hub 2b, is roughened until a surface roughness Rmax of ≥15 μm is attained; thereafter the resin composition 2 is successively subjected to catalyst application, an activation treatment, and electroless plating, whereby a plating film 3 is formed on the blasted part of the resin composition 2. COPYRIGHT: (C)2001,JPO
We took the classic flavors and textures of Beef Wellington and transformed them into an elegant preparation for salmon. Use any mushroom you like — white button, cremini or mixed wild mushrooms are all good choices. In a large skillet over medium heat, melt 1 1/2 tablespoons butter. Add mushrooms, half the shallots, half the salt, half the pepper, and half the thyme. Cook, stirring frequently, until mushrooms are tender and almost completely dry, about 8 minutes. Stir in spinach and cook until the mixture is very dry. Set aside to let cool. Preheat the oven to 400°F. On a well-floured surface, roll out pastry to a rectangle about 21 x 16 inches. Use a sharp knife or pizza wheel to cut into 6 pieces, each about 7 x 8 inches. Season salmon on all sides with the remaining salt and pepper. Place a piece of salmon on each piece of puff pastry; if any ends of the salmon are particularly thin, tuck them under so the pieces are fairly uniform in thickness. Top salmon with mushroom mixture. Use your finger to moisten edges of pastry with a little water, then fold pastry sides in like a package and press to seal, completely covering the salmon and mushrooms. Arrange seam-side down on a baking sheet lined with parchment paper and place in the freezer for 15 minutes. Meanwhile, combine remaining shallot, thyme and wine in a small saucepan. Simmer until reduced to about 1/2 cup; set aside off the heat. Brush the tops of the pastry with egg and bake in the top third of the oven until pastry is golden brown, about 25 minutes. Reheat reduced wine; remove from heat and whisk in remaining butter a little at a time. Spoon sauce onto plates and top with salmon. Per Serving: 610 calories (300 from fat), 33g total fat, 11g saturated fat, 125mg cholesterol, 880mg sodium, 27g carbohydrates, (3 g dietary fiber, 3g sugar), 43g protein.
https://www.wholefoodsmarket.com/recipe/salmon-wellington
Parsons School of Design Street Seats 2019 Returns with Its Pop-Up Public Seating Space

Earlier this month, Parsons School of Design unveiled this year's Street Seats, a sustainably designed temporary public park for New Yorkers in the heart of The New School's urban campus. This is the fourth iteration of the project.

The Street Seats program reclaims portions of New York City streets to provide safe, attractive seating areas for members of the New School community and the public, giving them a place to sit, socialize, and observe the neighborhood and street life from a fresh perspective. This year, varying levels of seating and table areas provide the public with distinctive views of the surrounding environment, allowing them to have a comfortable and enjoyable experience.

Since 2016, Parsons' BFA Architectural Design program, part of the School of Constructed Environments, has collaborated with the New York City Department of Transportation (NYCDOT) on the project. This year's Street Seats was created by students in the course Design Build: Urban Public Space, taught by Eric Feuster, an adjunct faculty member. The students "knew the value of creating a beacon for the university, as there is no other outdoor space to meet up at our urban campus," said Feuster. The 19 student participants started with 12 proposals and then worked in groups to arrive at a single design, taking into account the users of the park, its context, and a variety of environmental factors.

Flora Ng was one of those students and the primary photo documenter for the project. "This class is a legacy that will continue on in the history of our university, and I am very proud to be part of the legacy, which is where different students from different departments bring their own vision and ability to creating something that is beyond just for themselves, but for everyone to enjoy," says Ng, a Strategic Design & Management student who graduated in 2019.

This year's project, which will be in place for seven months, occupies a 40-by-6-foot space along the curb of 13th Street near Fifth Avenue. The space holds a number of movable folding tables and chairs and planters containing grasses, flowers, and other plants. The seating, tables, and fences are made of naturally rot-resistant Western red cedar; the red canvas that covers the seats and tables previously wrapped The New School's water tower. "The canvas was heading for the landfill, but we ended up using it to upholster part of our park," said Feuster. "The pattern creates a hyper graphic for the school and may be the first design that is The New School through and through."

The Street Seats team used strategically placed LED lights powered by solar panels to highlight the plantings and canvas. Batteries capture energy from sunlight during the day and release it through energy-efficient LED strips at night. "We teamed up with Shayne McQuade, the founder of Voltaic Systems, early on in the semester to build a great solar-powered lighting system," says Feuster. "Our lighting is more than a feature of the design — it becomes the design at night. The backlit canvas seats appear as glowing volumes, and the angled planters are all lit by strip lights, highlighting the native plantings and the park's strong aesthetic."

A wide variety of grasses, plants and flowers, including Early Goldenrod, Smooth Blue Aster, and Echinacea, will be used to enhance the environment and provide changes in color throughout the growing season.
Street Seats will be open through mid-November, when it will be dismantled and reconstructed at a local community garden.
https://opencampus.newschool.edu/news/parsons-school-of-design-street-seats-2019-returns-with-its-pop-up-public-seating-space
Recently I had the honor of visiting the studio of Anne Brauer, the quilter of Shelburne Falls. Anne produces amazing work, and she's been making a living at selling her quilts for thirty-six years. I bought two of her potholders, and I was ashamed that I didn't buy more things from her. But it was clear from the arrangement of her studio that she sews every day she's at home in her studio, and she's at home quite a lot. There must have been thirty or forty quilts on display in the room, and numerous smaller pieces in a variety of sizes. Her work is beautiful and richly colored, and all worth both a look and a buy.

How do you get that good? Sewing every day, I imagine.

It's a tricky thing to tell anyone, I think. But if you want to get good enough to be a rock star at anything, you have to put in the hours and hours of musical practice that will make you good enough to perform. You need the time with your hands on the guitar. If you want to be able to produce beautiful things on a sewing machine, it's not enough to have a quality machine; you also have to use it. Want to be a tai chi master? Put in the hours necessary to get 80% competent — and then keep going.

Sew. That's what I've been doing. Every day since I visited her studio last week. Didn't matter if it was banners or half-triangle squares (HTS) or pinwheels or hats or kinkachu bags. You have to make stuff to get good at this art form — just like you have to paint to become a great painter, or write to become a great author, or sing to become a great singer. Sooner or later, you have to take out the scissors, cut your fabric stash into slivers and strips and stars and diamonds and half-rounds, and then you have to assemble them.

Sometimes your work is going to be terrible. Sometimes it will be great. Sometimes there will be critics; many days there won't be anyone talking to you about your work at all. From time to time, there will be a buyer or a commission or a custom order. It doesn't matter, though, what opportunities come your way, if you haven't put in the time to make things, if you haven't got a backlog of things imperfectly or poorly made or not-quite-ready-for-prime-time. It's the time you spend working on making stuff that makes you ready to receive those commissions and orders for custom work.

Richard Feynman, the American physicist, used to go around Los Alamos at the Manhattan Project, picking locks and cracking safes, to demonstrate that the security system was a joke — that if a half-trained amateur could pick the locks and crack the safes, it would be child's play to the dedicated professional. The thing is, it was the practice that he'd gotten before he went to Los Alamos that made Richard Feynman into a half-trained amateur… and the safes and locks at Los Alamos made him into a dedicated professional.

If you want to be a dedicated professional at anything yourself, it begins with accepting your role as a half-trained amateur… and then working as though you loved your work. Which you do.

That's how trained and dedicated professionals are made.
https://andrewbwatt.com/2018/03/13/sewing-daily-practice/
Situated in the heart of the well known village of Benllech, Llys Rhostrefor is a development of 18 luxurious apartments that were built in 2005 on the old Rhostrefor Hotel site. The village has many things to offer on the doorstep, including local shops, pubs, coffee shops, cafes, an arcade, tennis courts, a bowling green, and a large Blue Flag beach. A short drive away many other interesting attractions can be found, including Beaumaris Castle and jail house, RAF Valley, Bangor (most of the big name shops are here) and Holyhead (ferry trips over to Dublin for the day). There is also a selection of golf courses spread around the Island.
|Size||Sleeps up to 5, 2 bedrooms|
|Rooms||2 bedrooms, 2 bathrooms of which 1 family bathroom and 1 en suite|
|Nearest beach||Benllech 500 m|
|Access||Car advised|
|Nearest Amenities||100 m|
|Nearest travel links||Nearest railway: Bangor 16 km|
|Family friendly||Great for children of all ages|
|Notes||No pets allowed, No smoking at this property|
|Luxuries||Internet access, DVD player, Sea view|
|General||Central heating, TV, CD player, Wi-Fi available|
|Standard||Kettle, Toaster, Iron, Hair dryer|
|Utilities||Dishwasher, Cooker, Microwave, Fridge, Freezer, Washing machine|
|Furniture||Double Beds (1), Single Beds (2), Cots available (1), Dining seats for 4, Lounge seats for 6|
|Other||Linen provided, Towels provided, High chair available|
|Outdoors||Shared garden|
|Access||Parking, Wheelchair users|
|Further details indoors|
The apartment is situated on the ground floor and has breathtaking views from both the lounge and kitchen. It has gas fired central heating (combi type boiler). The lounge is a good size to comfortably accommodate a family and has two L-shaped brown leather sofas, a 32 inch TV, Freeview, a coffee table, side tables and an iPod dock. The kitchen/dining room is approx. 20' long x 8' wide. It has all the modern appliances, and a dining table with 4 chairs, downlighting and feature lighting also make this a great room to cook, sit and catch up on the day's events. The master bedroom has a king size bed and modern bedroom furniture, and attached to this room is a fully fitted en-suite shower room with white pottery and a mirror light/shaving socket. The twin bedroom includes 2 single beds with modern bedroom furniture. The family bathroom consists of a white suite including a standard size bath, wash hand basin, WC and a mirror light/shaving socket. The spacious hallway also has a cloak cupboard. There is also a travel cot and high chair for families with babies.
|Further details outdoors|
Due to the apartment's location on the end of a small modern block, it benefits from a grassed terrace area and a raised decking area where a table and chairs (provided) can be placed outside. From here you can sit and relax and enjoy the spectacular sea and mountain views.
Set off the coast of North Wales, and a short drive from Liverpool or Manchester, lies the beautiful and diverse Isle of Anglesey, which boasts over 100 miles of spectacular coastline within an Area of Outstanding Natural Beauty. Located in Llangefni, which is about a 15 minute drive away, you can find a Saturday market, local swimming baths, an Asda superstore, and many more interesting shops, cafes, and boutiques. Within the apartment there is a rack with a selection of information brochures advertising many of the local attractions.
https://www.holidaylettings.co.uk/rentals/description/164756
Russian experts have published an immigration roadmap that requires the government to spend more money, as the country's projected population decline looms large. In a policy paper published on Monday, the experts said a new financial infusion would allow the government not just to attract highly qualified foreign workers, but also to increase internal labor mobility for Russians. Part of the funding, the experts say, would go to reform Russia's broken quota system, while the rest will go to improving Russia's image as an attractive country for migration.

In addition to a gradually declining population, a report published by the State Statistics Service (Rosstat) on Monday showed that Russia's workforce is ageing. The average age of an industrial worker, which was 39.6 years in 2005, had increased to 39.9 years by 2010, the report said. The decreasing number of their own nationals is forcing Russian leaders to explore ways of mitigating disturbing demographic consequences, while ensuring a new model for Russia's economic growth. The current roadmap, the authors claimed, will provide some answers.

Much of the newly recommended funding should go toward combating illegal immigration, which has become a serious problem for a government already battling a demographic crisis. The experts said the government should boost its existing immigration funding to the Federal Migration Service (FMS) by 15 percent. The amount should be spent on programs like improving immigration control, fingerprinting, simplifying procedures for issuing migration cards and the creation of migrant detention centers, they said.

Almost 14 million foreigners and stateless people legally arrived in Russia last year, according to the FMS. Every year, up to five million foreigners work in the country without work permits, said the agency, telling Russia Profile that only 1.7 million work permits were issued last year.

However, the authors of the new immigration policy believe that "the country needs immigrants." Without the flood of immigrants, especially from the former Soviet states, the Russian headcount would have been seven million people lower, said the experts. This has prompted the experts to earmark an additional 14 billion rubles ($480.5 million) to spend on adaptation and integration programs for migrants, including the creation of support centers for immigrants, language learning courses and courses on Russian history and culture.

The new policy also spelled out a fast-track plan for attracting highly skilled foreign workers, as well as stricter procedures for hiring foreign employees by Russian companies. According to the experts, the FMS should receive an additional four billion rubles ($137.2 million) to create a special division responsible for attracting foreign specialists. The agency must also come up with measures that will motivate entrepreneurs and investors to move to Russia, including luring them with temporary and permanent residence permits as well as making it easier for them to obtain Russian citizenship.
The experts see the absence of a well-defined mechanism for determining the country's skilled-labor requirement as one of the drawbacks of the previous immigration policies. To address such problems, including eliminating the horse-and-buggy quota system, experts said the government should spend 270 million rubles ($9.2 million) to "dramatically revise the old policy" by introducing new mechanisms for assessing skilled workers. In addition, the policymakers suggested the government allocate 200 million rubles ($6.8 million) to registered private businesses to help them "provide services for attracting and recruiting qualified workers."

To attract enough qualified foreign teachers to Russia, the experts want the government to spend another 2.5 billion rubles ($85.8 million). Another one billion rubles ($34.2 million) is to be spent as grants for Russian citizens receiving a professional education abroad in order to lure them back to the country. And on top of that, the experts said about 200 million rubles ($6.8 million) would be needed to promote Russia's image as "a country attractive to highly skilled migrants."

Another set of money-gobbling proposals concerns encouraging internal labor migration in Russia, which the experts said should cost the government up to seven billion rubles ($239.8 million). The lion's share of that amount will be spent to relocate people who live in more severe climatic conditions to other areas of the country, the experts said. A good part of the money will also be spent on supporting entrepreneurs who create housing and social infrastructure for internal labor migrants. The experts also recommend that the government spend ten billion rubles ($342.6 million) to resettle compatriots living abroad.

Originally published in Russia Profile
https://www.rbth.com/articles/2012/03/21/immigration_business_15209
Recipe:

You Will Need:
4 Large Eggs
3/4 Cup Oil
2/3 Cup Water

1: Preheat oven to 350°F. Generously grease two 9" round cake pans or line cupcake pans with 24 paper wrappers.
2: Beat eggs and oil with an electric mixer at medium speed until fully combined. Add cake mix and water. Continue mixing, scraping down sides and bottom of bowl with spatula as you go, until mixture is smooth (approximately 2 minutes).
3: Pour mixture into prepared pans. You can use two 9" round pans or 24 cupcake molds filled halfway with batter.
4: Bake until a toothpick inserted in center comes out clean: approximately 35 minutes for 9" rounds and 22-25 minutes for cupcakes.
5: Remove from oven and allow to cool for 5 minutes before removing from pan. Cool completely before frosting. (Frosting not included)

- Quantity: 27 oz.
- Storage: Keep Frozen
- NET Weight: 29.0 oz
- Product Code: 1811

- Quantity: 27 oz.
- Storage: Keep Frozen
- NET Weight: 175.0 oz
- Product Code: 6811

Ingredients: Sugar, gluten free flour (tapioca starch, white rice flour), cocoa, natural flavor, baking powder (sodium acid pyrophosphate, potato starch, monocalcium phosphate), xanthan gum.

Customer Reviews

The cake was delicious. It was fresh when it arrived. Soft and flavorful. I enjoyed it to the last piece. I will be ordering it again.

This cake mix was simple like any other box cake, though it wasn't a light, fluffy cake. The consistency was heavier, like a brownie. The flavor was not that of chocolate. It was very sweet and you can taste the sugar heavily. It is by no means bad, and you can't tell it's GF, but it was nothing like the traditional chocolate cake I had hoped for.

So far this is the best gluten free cake mix I have tried, and I've tried most. Delicious, light, and fluffy. Great mix; definitely buying a case from now on.

It was delicious. I made a double layer cake with ganache and fresh raspberries for Christmas Eve dinner and it was amazing.

This is the best gluten free cake mix we've ever had. It's even better than King Arthur's, which had been our go-to cake mix. We made delicious chocolate cupcakes. The mix was easy to work with as well as tasty. From now on this will be our cake mix of choice!
https://katzglutenfree.com/products/chocolate-fudge-cake-mix-gluten-free
COUNTRY VIEW ELEMENTARY - SECOND GRADE Over the years, we have greatly appreciated the generous support we receive annually from our community to provide essential learning supplies for our students. We are truly grateful to be a part of such an amazing community! Although households should never feel obligated to provide school supplies, we realize some families may wish to donate supplies to be used at school. The items included on this list will be used during the regular school day and distributed among all students. These items may be brought from home on a voluntary basis; otherwise, they will be furnished by the school. - One regular size plastic pencil box with student name - One regular size backpack with student name on the inside Wish List Items: - Disinfectant wipes - Hand sanitizer - Tissues - Paper towels - Box of baggies (any size) - Black dry erase markers *Please DO NOT send markers, pens, pencil sharpeners, or a big box of crayons. **Please label personal items such as a backpack, lunch box, pencil box, and jacket with your student’s name. THANK YOU! Monetary Donations: We also welcome any monetary donations on behalf of our students! All monetary/cash donations must be processed through our school office and will be utilized for instructional activities during the regular school day. If you would like to donate any amount under $250, please feel free to send it in or drop it off to our Country View Elementary main office. Donations over $250 must be processed through the Weber School Foundation for tax-related purposes.
https://countryview.wsd.net/index.php/en/classroom/2nd-grade/2nd-grade-supply-list-2021
How Do Bats Use Echolocation?

Nocturnal bats use echolocation to create a mental map of their surroundings in complete darkness. By listening to the echoes of their ultrasonic chirps, they rapidly gather and process all the information they need in order to successfully navigate their environment as they hunt and eat their unsuspecting flying prey.

So, how exactly does bat echolocation work? Bats listen to the chirps produced through their mouth or nose as those chirps bounce off the objects nearby. When the echo of those chirps returns to the bat, the cartilage of the outer ear funnels the sound waves first into the ear canal, then to the eardrum. The eardrum then produces vibrations which are transferred to the basilar membrane, which runs the length of the cochlea. The vibrations are then converted by the cochlea into nerve impulses which the brain rapidly interprets.

With amazing throat muscles, the bat is capable of chirping approximately 180 times per second. The abundance of information that it gathers from these high-pitched chirps lets the bat create a perceptual map of its surrounding physical space. Nonetheless, the brain needs help to promptly interpret all this information. To help its brain create a perfect mental replica of its surrounding environment, the bat uses different frequency patterns with its chirps, patterns that make it easy for the brain to separate signal clutter from pertinent information. These varying frequency patterns will bounce off objects differently depending on the object's shape, size, and distance, each specific pattern producing a different echo intensity in the bat's ear. The brain is able to create a mental representation of the physical area that surrounds the bat by analyzing and interpreting these different echo intensities.

In order to measure how far away a specific object is, a bat will analyze the time delay between when it emits its chirp and when it hears the returning echo. For directional information when it hunts, the bat will simply take notice of which ear intercepts the echo first. If the right ear hears the echo first, the insect must be on the bat's right side. The brain then interprets the information in each ear in order to allow the bat to guide itself towards its prey.

If the bat is out hunting, it will first send out low frequency chirps over a wide range. The bat is now looking for low intensity echoes that bounce off small objects such as tiny insects. When it receives this low intensity echo associated with small insects, the bat will then produce a high frequency chirp directed towards the potential prey. At this point, on top of clearly separating the high intensity echoes from the low intensity ones, the bat can also distinguish between different types of low intensity echoes, being able to go after its prey with surgical precision – or bat precision, some would say.

While the way bats and whales echolocate has been our inspiration for sonar and radar technologies, they are not the only animals that practice this technique. Echolocation is also practiced by some birds, as well as by the shrew. Blind humans have also been known to be able to use echolocation by producing clicking sounds, and some scientists have witnessed the common lab rat echolocating in order to get out of mazes.
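To make the timing arithmetic concrete, here is a minimal Python sketch of the distance estimate (the speed of sound and the example delay are illustrative assumptions, not figures from bat research):

# Illustrative estimate of target distance from the chirp-to-echo delay.
# Sound travels roughly 343 m/s in air and covers the bat-to-target
# distance twice (out and back), hence the division by 2.

SPEED_OF_SOUND_M_PER_S = 343.0

def target_distance_m(echo_delay_s):
    return SPEED_OF_SOUND_M_PER_S * echo_delay_s / 2

# Example: a 5-millisecond delay corresponds to roughly 0.86 m.
print(round(target_distance_m(0.005), 2))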
http://aaanimalcontrol.com/Professional-Trapper/batecholocation.html
Join us at AccessU a day early for a full-day pre-conference workshop on Tuesday, May 14, 2019. Attendees select one of the following:

Knowbility's Accessibility Master Class

The big-picture overview for any stage of implementing an accessibility program across an organization. Starting with the institutional evaluation and providing tools and methodology for each stage, our team will take you through what you need to put a process in place that ensures you reach and maintain your accessibility goals. Depending on class size and composition, we may break into smaller groups to facilitate learning about the steps to accessibility, including the practical aspects of how to:
- Engage stakeholders (all of them)
- Recruit executive sponsor
- Set explicit goals/standards
- Define success
- Adopt/create explicit policy
- Assess current status
- Build and integrate teams
- Train across roles
- Integrate support structures
- Test, measure, report
- Evaluate, repeat
- Integrate inclusive design into general process (sustain)

Practical Skills
- Role-based knowledge of accessibility responsibilities
- Assessing the accessibility of digital products
- Accessibility in the project lifecycle

Usability Testing with People with Disabilities

If you have conducted a usability test, you understand why it is important to observe your target users using your interface, and then apply what you learn to make their experience easy and fulfilling. In this full-day pre-conference workshop, you will learn how to adapt your usability testing to work with people with different disabilities and assistive technologies. In this session, you will learn how to recruit and screen people with disabilities, the pros and cons of remote vs. in-person testing and moderated vs. unmoderated testing, and disability etiquette. Best of all, you will practice writing a test plan and tasks, setting up the physical space to accommodate people with disabilities, moderating in-person and remote sessions, and interpreting your results.

Practical Skills
- When you should conduct usability testing as part of website design/development
- Why usability is an important part of accessibility
- Identifying, recruiting, and screening people with disabilities
- Practice moderating a usability study with people of different abilities and assistive technologies

Inclusive Design Workshop

Our Inclusive Design workshop will give attendees an overview of key accessibility and aging user needs to address during product design. We will touch on designing for various users who are blind, have low vision, have cognitive disabilities, or have physical disabilities that keep them from using a mouse. Interactive exercises are mixed with brief lectures to get you started in inclusive design.

Practical Skills

Part 1:
- Intro to inclusive design
- Intro to IBM Design Thinking with warm-up exercise
- User research with PwD and aging personas
- Hands-on empathy exercise & empathy mapping

Part 2:
- Interaction design
- Design UI for a "real life" mobile app

Part 3:
- Analyze the app from Part 2
- Visual design, and an accompanying exercise using color contrast tools

Moving the Needle on K-12 Accessibility

We must view digital educational materials through an accessibility lens if students with disabilities are to have equal access to the general curriculum. What does that mean in today's classroom? What do teachers, administrators, procurement staff, and direct service providers need to know to comply with the law and, most importantly, meet student needs?
This overview session provides up-to-date information and a toolkit to ensure we are more able to meet the needs of all our students.

Key Takeaways
- Develop accessibility statements, procurement language, and awareness materials for district administrators to avoid legal actions.
- Strategies for district technology teams to integrate tools and techniques to support diverse learning needs and improve student outcomes.
- Basic techniques to help content providers and procurement officers ensure that digital materials are built or bought with accessibility in mind.

Section 508 and WCAG 2.1 – The Evolving Standards of Digital Accessibility

Section 508 Standards from the US Access Board were updated to include the W3C's WCAG 2.0 Level AA Success Criteria and conformance requirements. In 2018, the W3C published an update – WCAG 2.1. What are the implications of standards evolution for accessibility managers in institutions of all types and sizes? How do we stay current and ensure that the needs of all our stakeholders are met while we conform to legal requirements? This full-day class will take you through the important details of what you need to know about the Revised Section 508 Standards and the revised FAR requirements – and options for addressing the new standards of WCAG 2.1 for documents and software. Learn about resources to help your team learn to develop (and maybe test) to both WCAG 2.1 and Revised Section 508 requirements. Finally, we will look at plans for future iterations of the W3C guidelines for accessibility.

The target audience of this course is federal, state and local governments seeking to learn how to understand and implement the new requirements of Section 508, at the beginner or intermediate level. Content will also be helpful to any organization that seeks to understand and meet federal requirements and stay current with the changing regulatory environment.
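As a companion to the color contrast exercise in the Inclusive Design workshop above, the following minimal Python sketch shows the WCAG 2.x contrast-ratio computation that color contrast tools perform (the sample colors are illustrative):

# WCAG 2.x contrast ratio between two 8-bit sRGB colors.
# WCAG AA requires a ratio of at least 4.5:1 for normal-size text.

def relative_luminance(rgb):
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color1, color2):
    lighter, darker = sorted(
        (relative_luminance(color1), relative_luminance(color2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Example: #777777 text on a white background yields about 4.48:1,
# narrowly failing the 4.5:1 AA threshold for normal text.
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))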
https://knowbility.org/programs/accessu/2019/workshops/
PROBLEM TO BE SOLVED: To provide a cable unit for a local area network (LAN) that makes it easy to distinguish each of the four-pair cables when intertwining a plurality of four-pair cables or arranging them in parallel during cable manufacturing, or when connecting the four-pair cables in actual cable use.

SOLUTION: In the cable unit for a LAN, a first set of two or more four-pair cables, each formed by partitioning four twisted twin wires of different twisting pitches with a cross-shaped intercalation and covering them with a sheath, and another set of two or more four-pair cables of the same construction are arranged around an interposition after being alternately numbered, and are press-wound so as to be bundled and coated. The thicknesses of the cross-shaped intercalations are mutually changed for regulating crosstalk characteristics between the first four-pair cables and the other adjacent four-pair cables, and the colors of the cross-shaped intercalations are mutually changed for distinguishing the first four-pair cables from the other four-pair cables. COPYRIGHT: (C)2010,JPO&INPIT
It may be near impossible to find a Southern lady who doesn't have her own version of pimento cheese, oftentimes "the family recipe" passed down through generations. But what you don't usually find is one made with Swiss cheese. I hope you will enjoy my version, featuring North Carolina's Mt. Olive Roasted Red Peppers.

Ingredients
- 1 pound Swiss cheese, finely grated
- ½ pound white cheddar cheese, finely grated
- 1 12-ounce tub whipped cream cheese, softened
- 1 stick cold butter, grated into mixture
- 6 tablespoons mayonnaise (Duke's preferred)
- 1 12-ounce jar Mt. Olive Roasted Red Peppers, drained/coarsely chopped
- 6 tablespoons honey
- 1 tablespoon Worcestershire sauce
- 1 tablespoon crushed red pepper
- ½ cup chopped fresh chives

Directions
Combine all ingredients and fold together until well blended. Serve at room temperature with assorted crackers and in sandwiches, spooned into hot baked potatoes, dolloped on burgers or breakfast biscuits, whirled into warm macaroni, warmed as a dip … just let your imagination run wild!

Note: Best if you let it sit overnight in the refrigerator to let the flavors come together.
https://www.carolinacountry.com/carolina-kitchen/appetizers/creamy-swiss-pimento-cheese
INTRODUCTION {#SEC1}
============

Genomic rearrangements are composed of structural variations (SVs), such as deletions, insertions, inversions, duplications, translocations (transpositions) and others ([@B1]). Genomic rearrangements contribute towards the increased susceptibility and development of many human diseases ([@B2],[@B3]). Some rearrangements produce gene fusions with oncogenic activity ([@B4]), alter gene dosage, dysregulate cell function and change the context of regulatory elements ([@B1],[@B2]). Over 9000 gene fusions have been identified ([@B5]). A classic example is the Philadelphia chromosome that arises from a translocation between chromosome 9 and chromosome 22. This event leads to a fused BCR/ABL1 protein which is a principal driver of chronic myeloid leukemia ([@B4]).

For this study, we focused on the size category of genomic rearrangements that have breakpoints \>200 kb apart. Structural aberrations in this range account for \>85% of curated cancer gene fusions ([@B6]). Many methods have been used to characterize this class of large cancer rearrangements across the genome. Karyotyping, fluorescent *in-situ* hybridization (FISH) and microarrays that measure copy number have been used to characterize these events at low resolution and without breakpoint information. More recently, whole-genome sequencing (WGS) using next-generation sequencing (NGS) technologies (i.e. Illumina) has come into use, and sophisticated bioinformatics tools have been developed that identify structural variants in such data ([@B7]). However, current WGS approaches are geared towards identifying small- to mid-scale structural variants under 200 kb in size ([@B15]). In addition, most WGS data is generated from sequencing libraries with short DNA fragments under 0.5 kb. The use of short DNA inserts results in loss of genomic contiguity that adversely affects the calling of rearrangements generally ([@B16]).

As an added challenge, resolving large, complex somatic rearrangements is difficult when both germline alleles are involved. Moreover, complex somatic rearrangements sometimes involve multiple SV types, and this further prevents a detailed characterization. Accurate detection of these events is limited by overall base coverage of the genome, by the methodology and ultimately by the sequencing cost. The repetitive nature of rearranged regions also complicates detection of these events -- genomic regions with a high density of repeat elements reduce the mapping quality locally around breakpoints. Overall, short sequence reads generated from short DNA inserts represent a significant handicap for identifying SVs in these regions.

Some sequencing technologies generate long reads (i.e. Oxford Nanopore and Pacific Biosciences) using high molecular weight (HMW) DNA (e.g. 5 kb or more). These long reads are useful for identifying rearrangements ([@B17]). However, for SV analysis, current long read sequencers are more costly than conventional short insert sequencing (i.e. Illumina). As added challenges, long read sequencers have lower base quality, unevenness in genome coverage, or very high DNA input requirements, thus making them less suitable for high efficiency analysis and side-by-side SNV calling. For example, long-read sequencers can require greater than microgram amounts of starting material, and this can be a significant issue for clinical tumor samples where the amount of nucleic acid may be limited.
In addition, clinical samples pose challenges because the content of cancer cells relative to normal cells may be low, thus diluting out the number of molecules containing a genetic aberration.

Recent technology developments include synthetic long-read sequencing (SLRS) to determine variant haplotypes and structural variants ([@B20],[@B21]). These technologies maintain high molecular weight molecules rather than relying on physical fragmentation to small DNA inserts, use barcodes to delineate specific molecules and thus provide long-range information based on short-read sequencing. These methods offer high resolution (sequencing-based) and improved detection of structural variants and related distal breakpoint junctions. These methods leverage the high fidelity, low cost and high throughput of short read sequencers such as the Illumina system. As a result, this approach has great potential for characterizing large-scale rearrangements that are not recognized using conventional short read sequencing.

In this study, we used one existing SLRS technology termed linked-read sequencing that employs the 10X Genomics Chromium system ([@B21]). By requiring only nanogram-level input, this approach is particularly useful for analyzing tumor samples with low cellularity. WGS libraries were prepared on a 10X Genomics Chromium system (Pleasanton, CA, USA). This technology uses a microfluidic process to generate up to 10^6^ droplet partitions, if not more, per experiment sample. HMW DNA molecules (\>50 kb) are distributed across these droplet partitions. This preparative method uses one nanogram of genomic DNA, representing approximately 300 haploid genome equivalents, and no pre-amplification is required. After the library is completed, one uses an Illumina sequencer to generate reads with an integrated barcode---this information enables one to trace paired-end reads back to the originating HMW DNA molecule ([@B22],[@B23]).

There are a limited number of methods available for analyzing linked-read sequencing data ([@B22]). To date, Long Ranger is the standard tool to phase haplotypes and detect structural variations based on linked reads ([@B21]). The underlying statistical framework involves a binomial test of linked-read barcode counts. Long Ranger was used by Collins *et al.* to characterize germline SVs in several human genomes ([@B22]). Spies *et al.* developed a local assembly approach to reconstruct contigs with structural alterations from linked-read data, which used a binomial test similar to Long Ranger\'s for detecting SVs ([@B23],[@B26]). These individual-read based approaches are prone to errors such as incorrect read mapping due to repeats or erroneous barcode reads due to sequencing errors.

To identify large rearrangements (\>200 kb), we leverage a statistical property of linked-read data. Namely, the likelihood of two DNA molecules with the same sequence composition occurring in the same droplet is extremely low. We estimated this probability to be on the order of $\sim 10^{-8}$ per droplet, or \<1 per experiment with tens of millions of droplets (see [Supplementary Results](#sup1){ref-type="supplementary-material"}), which enables us to characterize the individual molecules present in each partition.
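As a back-of-the-envelope illustration of that estimate, the expected number of confounding droplets per experiment is simply the per-droplet probability times the droplet count (the droplet count below is an illustrative stand-in for "tens of millions"):

```python
# Expected number of droplets that contain two distinct molecules with the
# same sequence composition. The per-droplet probability (~1e-8) is the
# figure quoted in the text; the droplet count is an illustrative value.
p_collision_per_droplet = 1e-8
n_droplets = 3e7  # "tens of millions" of droplets per experiment

expected_collisions = p_collision_per_droplet * n_droplets
print(expected_collisions)  # 0.3, i.e. fewer than one collision per experiment
```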
The molecule-based approach incorporates information from multiple linked reads distributed along a molecule. This information is less prone to both mapping and barcode errors. Based on this property, we developed the ZoomX tool, embedding a Poisson-based statistic in a scalable grid-scan algorithm. ZoomX systematically identifies novel genomic junctions. We demonstrate that ZoomX performs better at calling large rearrangements than the currently available SV calling method for linked reads, Long Ranger ([@B21]). As a demonstration of our approach, we conducted a benchmark analysis of the NA12878 genome for germline SVs. Subsequently, we identified a series of somatic rearrangements among several gastrointestinal cancers, sequencing primary tissue samples.

MATERIALS AND METHODS {#SEC2}
=====================

Sequencing data for NA12878 {#SEC2-1}
---------------------------

Linked-read data for NA12878 are publicly available from 10X Genomics. The data are also available from the Genome-in-a-Bottle Project ([@B27]). The original DNA sample was obtained from the Coriell Institute, and 1.25 ng of DNA were extracted for sequencing. High molecular weight (HMW) genomic DNA on the order of 50 kb or higher was selected. A barcoded library was prepared using the Chromium assay (10X Genomics). Sequencing was performed on an Illumina X Ten. Sequence data processing relied on the Long Ranger software package.

Samples {#SEC2-2}
-------

The Institutional Review Board (IRB) of Stanford University School of Medicine approved the study protocol. We obtained informed consent from all patients prior to obtaining the samples. The tissue samples were collected at the time of surgical resection and fresh frozen, as available from the Stanford Tissue Bank. The samples included a primary colorectal adenocarcinoma (labeled MetB7175) and matched normal colorectal tissue (labeled Norm7176). This sample had a mixed cellularity with at least 50% tumor fraction. We also obtained matched normal gastric tissue (labeled Norm2386) and two gastric metastatic tumors (labeled MetR2721 and MetL2725). Based on histopathological examination, the tumor purity was estimated to be 20% for the MetR2721 sample and 50% for the MetL2725 sample. Genomic DNA extraction was performed with a Maxwell 16 Tissue DNA purification kit according to the manufacturer's recommended protocols (Promega, Madison, WI, USA). The genomic DNA did not require further size selection or processing. DNA was quantified with a Life Technologies Qubit.

Generating and sequencing linked-read libraries {#SEC2-3}
-----------------------------------------------

Using 1.0 ng of genomic DNA from each of the tissue samples, we prepared barcode libraries using the Chromium Gel Bead and Library Kit (10X Genomics, Pleasanton, CA, USA). No preamplification was used. We performed sequencing runs on an Illumina HiSeq 2500 or X Ten sequencer with 2 × 151 paired-end reads and achieved ∼30× coverage for all tumor and normal samples ([Supplementary Table S1B](#sup1){ref-type="supplementary-material"}). All resulting read pairs contain a 16-base barcode. We used *bclprocessor* (v2.0.0) to demultiplex and convert the resulting BCL files to FASTQ files. We used Long Ranger (v2.0.0) to align the barcoded reads in the FASTQ files to the human genome reference build GRCh37.1. Sequence data were deposited in dbGaP under the accession numbers phs001362.v1.p1 and phs001400.
Identifying rearrangements from barcode linked reads {#SEC2-4}
----------------------------------------------------

The workflow for data generation and application of our algorithm is shown in Figure [1](#F1){ref-type="fig"}. Our statistical algorithm is implemented in the ZoomX software package, which consists of Python and R scripts that call Samtools ([@B28]) and Bedtools ([@B29]) ([Supplementary Figure S1](#sup1){ref-type="supplementary-material"}). For visual display of the results from ZoomX, we leveraged the 10X Loupe visualizer to show our results with barcode-sharing heatmaps. In the following, we summarize the steps of the ZoomX algorithm. Complete statistical details are provided in the [Supplementary Methods](#sup1){ref-type="supplementary-material"}. ZoomX is open source software available in the Bitbucket repository (<https://bitbucket.org/charade/zoomx>).

![Identifying rearrangements from linked reads. The workflow is illustrated in steps (A) to (D): (**A**) 1 ng of high molecular weight (HMW) DNA is extracted from the sample; (**B**) the extraction is partitioned into >10^6^ droplets, where on average only a few DNA molecules enter each droplet and receive the same barcode; the barcode, uniquely colored, is linked to random primers which sparsely prime on the HMW DNA; (**C**) the primed DNA undergoes several rounds of displacement amplification to generate short fragments within the droplet, which are released into one sequencing library pool; (**D**) linked-read sequencing is performed and ZoomX infers single molecules based on aligned barcode linked reads; ZoomX scans genome coordinate pairs to detect whether there is any rearrangement junction in between based on single-molecule coverage. In the plot, each DNA molecule (fragment) is represented by a gray curved (linear) segment; each color represents a unique barcode. In (B), each short segment represents a random primer with barcode. In (D), each colored long stretch is an inferred linked-read molecule and each colored vertical short bar is a linked-read pair, which is interspersed along the inferred molecule. We depict the single-molecule coverage against the base pair coverage given the shown molecules. The single-molecule coverage is higher and more consistent across the genome.](gkx1193fig1){#F1}

We use aligned linked reads to identify individual HMW DNA molecules based on barcodes that distinguish different droplet partitions ([@B21]). Our statistic involves a two-dimensional Poisson scan for determining significant levels of barcode-sharing molecule counts between two genomic regions. Given two distal genomic regions, the event of distinct molecules originating separately from the two regions occurring in the same droplet is negligible. Therefore, distal genomic junctions can be detected by screening region grid pairs for excess barcode sharing. We also found that overall linked-read sequencing metrics were consistent across all analysed samples, where the distributions of individual molecule counts were well approximated by Poisson distributions (Figure [2A](#F2){ref-type="fig"}, [Supplementary Figure S2 and Supplementary Results](#sup1){ref-type="supplementary-material"}). The range of metric values consistently allows for well-separated null and alternative distributions in our model. By simulation, we estimated the detection power of the statistic to be >90% for junction allele fractions as low as 10% ([Supplementary Figure S3 and Supplementary Results](#sup1){ref-type="supplementary-material"}).
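To make the grid-pair test concrete, the following is a minimal sketch in Python, assuming the per-pair null sharing rate has already been estimated; the function name and the example values are our own illustration rather than ZoomX's exact implementation (the complete statistic is given in the Supplementary Methods).

```python
# Minimal sketch of the per-grid-pair Poisson test (illustrative only;
# see the Supplementary Methods for the full ZoomX statistic).
from scipy.stats import poisson

def scan_grid_pair(shared_molecules, mu_null):
    """P-value for observing `shared_molecules` barcode-sharing molecules
    between two distal grid regions under a Poisson null with mean
    `mu_null` (the expected sharing from barcode collisions alone)."""
    # poisson.sf(k - 1, mu) = P(X >= k)
    return poisson.sf(shared_molecules - 1, mu_null)

# With an estimated null sharing below 0.1 (as reported later for the
# tumor/normal pairs), 14 shared molecules is overwhelmingly significant.
p_value = scan_grid_pair(14, 0.1)  # vanishingly small
```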
![Sequencing statistics from the linked-read, whole-genome analysis of NA12878. (**A**) The observed genome-wide molecule coverage density follows a Poisson distribution, as fitted by maximum likelihood (NA12878_Fit). (**B**) Individual molecule coverage (NA12878_molecule) is significantly higher than paired-end fragment coverage (NA12878_PEfrag) over the entire genome. (**C**) Normalized cumulative molecule coverage (NA12878_molecule) is more even than paired-end fragment coverage (NA12878_PEfrag) over the entire genome.](gkx1193fig2){#F2}

The initial input to the algorithm is the BAM alignment of linked reads. Given the sparsity of molecules per droplet partition, typically three to five, ZoomX uses the associated barcode and aligned sequences to determine the identity and characteristics of each partitioned molecule. ZoomX then computes molecule statistics across the genome, such as the effective molecule coverage (as defined in [Supplementary Methods](#sup1){ref-type="supplementary-material"}) for a given genomic segment, and stores the coordinates of individual DNA molecules in a BED file with annotations. In this parser step, ZoomX also finds all mapped read pairs in unusual positions (i.e. not contiguous from the same genomic region or chromosome) and saves these read pairs into BEDPE files.

In the next step, ZoomX conducts a genome-wide grid scan ([Supplementary Figure S4 and Supplementary Methods](#sup1){ref-type="supplementary-material"}). ZoomX identifies high-frequency barcode sharing between two regions by applying the Poisson statistic to the molecule BED file defined in the previous step. If a genomic junction $J(X, Y)$ exists between regions $X$ and $Y$, then one expects a substantial number of molecules to span such a junction and be captured and sequenced with the same barcodes. As already noted, the probability of barcode collision, that is, the event of two molecules with sequence overlap in the library sharing the same barcode, is extremely small. Therefore, two linked-read molecules from $X$ and $Y$ sharing the same barcode are likely to have originated from the same individual HMW DNA molecule. One has the option to mask the linked-read molecules mapping to regions of aberrant coverage. This step reduces potential false discoveries resulting from abrupt coverage spikes. The required input of this optional step is a BED file containing the base coverage values in each grid region. The scanning step produces a list of candidate junctions in BEDPE format, represented as grid pairs, which become the input to the refinement step.

The next position refinement step clusters junctions into groups and identifies additional short-insert read pairs that support each junction group. We use Bedtools' pairtopair function to create a connection graph of all identified candidate grid pairs, in which each candidate grid pair is a node. A connecting edge is defined only if two grid pairs overlap at both ends, ignoring strand direction. Then, we use an efficient graph algorithm, as implemented in *Scipy*, to find all connected components in the resulting connection graph. The node set of each connected component is the group of grid pairs representing the same junction. We output the refined candidate list by taking the unions of the group of grid pairs within each component.
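The grouping within the refinement step can be sketched in a few lines of Python. This is our own minimal illustration, which assumes the pairwise overlaps (e.g. from Bedtools' pairtopair) have already been computed; the function and variable names are hypothetical, not ZoomX's.

```python
# Sketch of junction grouping via connected components (illustrative).
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def group_grid_pairs(n_candidates, overlap_edges):
    """Label each candidate grid pair with a junction-group index.

    `overlap_edges` lists (i, j) indices of candidates that overlap at
    both ends (strand ignored); each connected component of the
    resulting graph represents one junction."""
    if overlap_edges:
        rows, cols = zip(*overlap_edges)
        data = np.ones(len(overlap_edges), dtype=bool)
        graph = coo_matrix((data, (rows, cols)),
                           shape=(n_candidates, n_candidates))
    else:
        graph = coo_matrix((n_candidates, n_candidates), dtype=bool)
    n_groups, labels = connected_components(graph, directed=False)
    return n_groups, labels

# Candidates 0 and 1 overlap; candidate 2 stands alone -> two junctions.
n_groups, labels = group_grid_pairs(3, [(0, 1)])  # n_groups == 2
```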
Finally, we delineate the breakpoints by overlapping the short read pairs saved in the parser step with the refined candidate junctions using Bedtools' pairtopair function. We require overlap at both ends and ignore strand context. We report the indices of overlapping read pairs as additional annotations in our output, which can be used to derive exact breakpoints. The final output is a BEDPE file in which each junction is recorded with the confidence regions for its two breakpoints in the genome, along with annotations such as the number of supporting molecules and the indices of all supporting read pairs. As an optional step, we align and plot all supporting molecules spanning each junction. Subsequently, we derive the base pair breakpoint as the consensus of molecules with flush ending positions. The algorithm also generates sequence contigs encapsulating the breakpoints and computes the Fisher's exact test statistic based on matched-sample molecule data.

In this work, we analysed the barcode linked-read datasets using ZoomX with a scan grid length of 10 kb, aiming to detect large-scale transpositions/translocations, inversions and more complex types (breakpoints at least 200 kb apart) where previous analyses encountered difficulties. We excluded any genomic regions within 1 Mb of a centromere, a telomere or a large gap from our analysis to avoid alignment errors related to the genome reference. Call-set differentiation was done with the Bedtools pairtopair function: any event in the tumor call set that was also found in the matched normal call set was removed as a potential germline event. The resulting somatic junctions were recorded in BEDPE format and were visualized with the 10X Genomics Loupe program. We used ZoomX's molecule-plotting functions to illustrate rearrangement junctions and single-molecule support.

Fisher's exact test for identifying statistical significance of SV junctions {#SEC2-5}
-----------------------------------------------------------------------------

ZoomX uses a Fisher's exact test for identifying statistically significant somatic events from matched tumor-normal pairs. We denote the normal control genome as $C$ and the tumor genome as $T$. Any junction to be tested has the following data acquired from the genome-wide scan: the numbers of molecules covering the junction breakpoints, $n_C$ and $n_T$, for the control and tumor samples respectively, and the numbers of barcode-sharing molecules supporting the proposed junction, $z_C$ and $z_T$, respectively. The data can be summarized in a two-way contingency table:

$$\begin{array}{lcc} & \text{Control} & \text{Tumor} \\ \text{Junction} & z_C & z_T \\ \text{Non-junction} & n_C - z_C & n_T - z_T \end{array}$$

A one-sided Fisher's exact test is directly applicable to determine whether there is significant evidence for more junction-supporting molecules in the tumor sample. The test was done with R's fisher.test function.
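For readers working in Python rather than R, the same one-sided test can be sketched with SciPy; the helper below, including its name and the example counts, is our own illustration, and its last line anticipates the Bonferroni correction described next.

```python
# Illustrative Python equivalent of the one-sided fisher.test call.
from scipy.stats import fisher_exact

def somatic_junction_pvalue(z_t, n_t, z_c, n_c, n_junctions):
    """One-sided Fisher's exact test for an excess of junction-supporting
    molecules in the tumor, Bonferroni-corrected across all junctions."""
    table = [[z_t, z_c],                # junction-supporting molecules
             [n_t - z_t, n_c - z_c]]    # remaining covering molecules
    _, p_f = fisher_exact(table, alternative="greater")
    return min(1.0, n_junctions * p_f)

# Hypothetical counts: 30 of 90 tumor molecules versus 0 of 140 normal
# molecules support a junction, tested among 13 candidate junctions.
p = somatic_junction_pvalue(30, 90, 0, 140, 13)  # far below 0.05
```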
The reported *P*-values were Bonferroni-corrected Fisher's test *P*-values $P_F$, such that $P = \#\text{junctions} \times P_F$. A standard cut-off of $P < 0.05$ was used to determine statistical significance.

RESULTS {#SEC3}
=======

Defining molecule coverage based on barcode linked reads {#SEC3-1}
--------------------------------------------------------

First, we developed the concept of 'molecule coverage', which improves rearrangement calling compared to the Long Ranger SV caller. This concept is based on the identification of the molecules and their genomic characteristics from each droplet partition, as denoted by the barcodes. The barcode linked reads are used to extrapolate the genomic position of the partition contents. There is little overlap among the molecules' genomic positions given that there are only three to five molecules per droplet, as defined by a Poisson distribution from the 300 genome equivalents originally used. Several features of this representation proved very useful.

As noted in step (D) of Figure [1](#F1){ref-type="fig"}, the molecule coverage or depth is based on counting the number of separate DNA molecules that span a given genomic region. The partition barcode information is crucial for enumerating molecule coverage for any given genomic region. In a typical barcode library preparation, linked-read sequencing generates tens of millions of separate molecules with a mean molecule length of tens of kb ([Supplementary Table S1](#sup1){ref-type="supplementary-material"}). With this level of partitioning one achieves ∼100× effective coverage of the whole genome by individual molecules when the actual sequenced base pair coverage is only around 30×. The increased coverage was observed in all samples, including the tumor samples (Figure [2B](#F2){ref-type="fig"} and [Supplementary Figure S5](#sup1){ref-type="supplementary-material"}).

Second, a molecule's map position, based on linked reads, is less constrained by the mapping of individual reads in the repetitive sequences that are likely to lie directly adjacent to SV breakpoints. It is well known that such breakpoint mappings are error prone and confuse conventional SV callers. In contrast, a linked-read molecule's map position is based on multiple mapped read pairs per barcode. Namely, the HMW DNA source molecule provides extended genomic contiguity, a fundamental advantage for SV analysis compared to short-insert DNA sequencing libraries. With this scheme, SV junctions remain evident from reads mapping distal to the breakpoint. The probability of two extrapolated DNA molecules with the same sequence being present in the same partition droplet is very low (<0.01). This feature ensures that the mapping and identification of HMW species is accurate. Therefore, the detection of structural variant junctions no longer relies on short-read mapping close to the breakpoint---a process that is more error-prone owing to the enrichment of repetitive sequences next to structural variations ([@B30]). As a result, we consistently see better evenness of genome-wide molecule coverage compared to read pair coverage in all samples (Figure [2C](#F2){ref-type="fig"} and [Supplementary Figure S5](#sup1){ref-type="supplementary-material"}).
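For implementers, the molecule inference underlying molecule coverage can be approximated with a simple barcode-and-gap heuristic. The sketch below is our simplification: the 50 kb gap threshold and the input layout are assumptions, not ZoomX's actual criteria (which are specified in its Supplementary Methods). Per-window molecule coverage then follows by counting the inferred intervals that span each window.

```python
# Simplified molecule inference from barcoded read alignments
# (illustrative; the gap threshold and filtering are assumptions).
from collections import defaultdict

GAP = 50_000  # bp; reads farther apart than this start a new molecule

def infer_molecules(reads):
    """reads: iterable of (barcode, chrom, pos) tuples.
    Returns inferred molecules as (chrom, start, end, barcode)."""
    positions = defaultdict(list)
    for barcode, chrom, pos in reads:
        positions[(barcode, chrom)].append(pos)
    molecules = []
    for (barcode, chrom), sites in positions.items():
        sites.sort()
        start = prev = sites[0]
        for site in sites[1:]:
            if site - prev > GAP:  # gap too large: close the molecule
                molecules.append((chrom, start, prev, barcode))
                start = site
            prev = site
        molecules.append((chrom, start, prev, barcode))
    return molecules
```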
Whole-genome performance metrics from barcode linked reads {#SEC3-2}
----------------------------------------------------------

As an initial test of our method, we processed the linked-read data available from the whole-genome sequencing of NA12878 (Materials and Methods). This genome has been extensively sequenced across multiple platforms, including with linked reads. First, we demonstrated a significant increase of molecule coverage that enables the sensitive detection of SV junctions. Figure [2B](#F2){ref-type="fig"} shows the cumulative molecule coverage from barcode linked reads versus the fragment coverage based on short-insert, paired-end reads for the NA12878 genome. Molecules identified by linked reads provide higher extrapolated molecule coverage for any genomic interval compared to paired-end sequencing fragments. The molecule coverage for the NA12878 sample is 176× for >50% of the genome and 160× for >80% of the genome. In comparison, the average coverage of paired-end sequences (from the same linked-read data) was 33× for >50% of the genome and 24× for >90% of the genome, and the base pair coverage was 27× for >50% of the genome and 19× for >80% of the genome.

The molecule coverage has significantly less coverage variance than what one encounters with standard sequence coverage using short-insert paired-end fragments. We normalized the cumulative molecule coverage, a step that requires aligning the curves for the two different methods at the point where 50% of the genome is covered (Figure [2C](#F2){ref-type="fig"}). The normalized curve for extrapolated molecule coverage has a much steeper transition, which translates into improved evenness. Thus, 55% of the genome had extrapolated molecule coverage within one standard deviation of the mean, compared to just 40% for coverage computed using the paired-end short-insert fragments. The same conclusion was drawn for all other samples ([Supplementary Figure S5](#sup1){ref-type="supplementary-material"}).

Identifying large rearrangements from NA12878 linked reads {#SEC3-3}
----------------------------------------------------------

Our analysis focused on the discovery of large-scale events that were 200 kb in size or greater. We used the linked-read data to identify the individual DNA molecules that define the SV structure. Using our approach, we identified a series of rearrangements that included multiple SV elements not detected with the 10X Long Ranger SV caller. In total, we found seven intra- and two inter-chromosomal large-scale structural variations in the NA12878 genome ([Supplementary Table S2](#sup1){ref-type="supplementary-material"}). All the SVs were orthogonally corroborated by examining sequence data from Pacific Biosciences long sequence reads and/or Illumina Moleculo synthetic long reads, using the split-read analysis provided by Layer *et al.* ([@B10]). In comparison, the Long Ranger caller (10X Genomics) did not detect eight out of the nine validated ZoomX rearrangements ([@B21]). Long Ranger detected one intra-chromosomal event that was not validated by any orthogonal data set ([Supplementary Table S2](#sup1){ref-type="supplementary-material"})---ZoomX did not detect this SV. Long Ranger detected three inter-chromosomal events, of which only one was validated ([Supplementary Table S2](#sup1){ref-type="supplementary-material"}). ZoomX identified this single validated Long Ranger SV, a transposition located at Chr11: 108 585 666--Chr13: 21 727 735 (Table [1](#tbl1){ref-type="table"}).
We also validated many of the ZoomX calls using other reported results, including clone-by-clone sequencing calls from Kidd *et al.* ([@B33]), microarray calls from Conrad *et al.* ([@B34]) and SVs from the 1000 Genomes Project using conventional WGS ([@B16]) ([Supplementary Table S3](#sup1){ref-type="supplementary-material"}).

###### All ZoomX and Long Ranger identified events (>200 kb) in NA12878

----------------------

![](gkx1193tbl1.jpg)

----------------------

Citing an example, we identified a novel heterozygous double deletion on chromosome 22 (Figure [3A](#F3){ref-type="fig"}). The locus is composed of a larger ∼700 kb deletion allele (Chr22: 22 550 534--23 242 648) and a smaller ∼80 kb deletion allele (Chr22: 23 210 673--23 242 648). More than 80 molecules supported this variant call---these molecules spanned the breakpoints of the larger allele ([Supplementary Figure S6A](#sup1){ref-type="supplementary-material"}). Additional read-depth analysis confirmed this SV---average coverage decreased from 30× to 15× across the larger allele and dropped to zero across the smaller allele. The larger allele corresponds to the Database of Genomic Variants (DGV) ([@B35]) gold standard entry *gssvL77096*. This variant has a population frequency of 0.55% (in 117 of 14642 unique samples). The smaller allele corresponds to the DGV entry *gssvL77095*, with a population frequency of 0.93% (128 of 13818 unique samples).

![Two large-scale complex structural variants resolved in the germline sample NA12878. (**A**) A heterozygous locus has two deletion alleles, where the junction formed by the larger deletion allele is supported by 80 molecules; (**B**) a heterozygous locus has a larger inversion-deletion allele and a small deletion-only allele, where the two junctions formed by the larger allele's inversion are supported by 61 and 67 molecules, respectively. In each subfigure, the upper panel is a heatmap, where the dark colour represents shared barcodes between the two genomic segments marked on the X- and Y-axes (here the two segments are the same). The heatmaps display the rearrangement. The middle panel is the base coverage along the X-axis segment. The bottom panel is the resolved genotypes or haplotypes resulting from the junction events.](gkx1193fig3){#F3}

The entire locus resides in a highly repetitive genomic region, interspersed with multiple LINE-1, LINE-2, Alu and other tandem repeats ([@B36]). The larger allele contains two segmental duplications (chr22: 22 604 170--22 669 477 and chr22: 22 973 847--22 997 581) that have high sequence similarity (97%) with other genomic regions. As noted, the Long Ranger SV caller provided with the 10X Chromium assay did not identify this variant. This larger allele event had only been reported previously by the clone-by-clone approach and by microarray data, and at much lower resolution. By coupling linked-read single-molecule sequencing with ZoomX analysis, we resolved the larger allele at base pair resolution. The other WGS studies may have missed this variant given the repetitive sequence structure and its large size.

Figure [3B](#F3){ref-type="fig"} shows another complex rearrangement, a heterozygous inversion-deletion locus on chromosome 2. The locus is composed of a large ∼1.4 Mb inversion-deletion allele (Chr2: 130 892 516--132 296 052) and a smaller ∼75 kb deletion allele.
A total of 61 and 67 molecules support the two breakpoints of the larger allele, respectively ([Supplementary Figure S6B and S6C](#sup1){ref-type="supplementary-material"}). Additional read-depth analysis also confirms the locus. The variant was independently confirmed by one synthetic long read (Illumina Moleculo) ([@B10]). The event corresponds to the InvFEST ([@B37]) entry *HsInv0669*. All other studies using long-read sequencing approaches failed to identify this event. As with the previous variant, the fact that it resides within repetitive regions might have hindered its discovery by other studies.

One inter-chromosomal variant that we found represents a balanced transposition junction between chromosomes 12 and 15. A total of 57 molecules supported the junction breakpoint ([Supplementary Table S3](#sup1){ref-type="supplementary-material"}). The variant is heterozygous: in one haplotype, a small segment of Chr15 was inserted into Chr12 at 73 239 613. Short read pairs also confirmed the variant, with eight forward-forward and four reverse-reverse abnormal pairs. The transposed region is defined as Chr15: 94 886 289--94 888 455. A similar heterozygous variant was found between chromosomes 11 and 13, with 52 supporting molecules. Here, we confirmed that the rearrangement consisted of a small region of Chr13 (21 727 733--21 732 060) inserted into Chr11 at 108 585 666---this was validated with short-insert read pairs. We identified independent sequencing validation for both events. For the first transposition, we identified nine long reads generated from Pacific Biosciences WGS data and 21 synthetic long reads that confirmed our call. Likewise, for the second transposition, we identified eight long reads and 36 synthetic reads that confirmed our call.

Discovery of somatic rearrangements in cancer {#SEC3-4}
---------------------------------------------

We analysed three cancers: one colorectal and two gastric. Our analysis method identified a series of complex somatic rearrangements composed of multiple SVs that would be challenging to identify with either short-insert or long sequence reads. The first sample we analysed was a colorectal tumor, focusing on large genomic events exceeding 200 kb. We used the ZoomX program with a grid length of 10 kb. We inferred a range of 42--43 million molecules with an average molecule length of ∼6 kb. The estimated extrapolated molecule coverage $c^M$ was 88 and 89 for MetB7175 and Norm7176, respectively. The estimated null sharing was $\mu_0 < 0.1$ for both samples. The expected sharing for a 10% allele fraction was 14 for both MetB7175 and Norm7176, which we used as the minimum required single-molecule support for a junction allele. For this colorectal tumor, all of the reported somatic rearrangements passed the Bonferroni-adjusted Fisher's exact test with *P*-values <0.05 ([Supplementary Table S4](#sup1){ref-type="supplementary-material"}). In total, we identified 13 somatic rearrangements, as Circos-plotted in Figure [4](#F4){ref-type="fig"}. MetB7175 had seven intra-chromosomal and six inter-chromosomal somatic junctions, with an average of 31 molecules supporting the identification of each. Moreover, short read pairs supported ∼80% of these junctions, although with an average of only six pairs per junction---significantly lower than the molecule support (*P* = 2.122e-11, one-tailed paired *t*-test).
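As a back-of-envelope check on the minimum-support threshold quoted above, one can model the expected barcode sharing for a junction allele as proportional to molecule coverage times allele fraction. The spanning factor below is purely our assumption, chosen so that the arithmetic reproduces the quoted threshold of 14; ZoomX's calibrated formula is given in the Supplementary Methods.

```python
# Back-of-envelope threshold derivation (span_factor is our assumption,
# not a ZoomX parameter).
def expected_junction_sharing(molecule_coverage, allele_fraction,
                              span_factor=1.6):
    return round(molecule_coverage * allele_fraction * span_factor)

# c^M ~ 88 at a 10% junction allele fraction gives the threshold of 14
# used for MetB7175/Norm7176.
threshold = expected_junction_sharing(88, 0.10)  # -> 14
```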
![Somatic rearrangements identified in a colorectal cancer. In total, six intrachromosomal (intraChr) and seven interchromosomal (interChr) somatic rearrangement junctions were identified in the colorectal tumor (MetB7175). The junctions are illustrated by intra- and interchromosomal links. The junctions were supported by 25--50 molecules; each junction is marked by a dot next to its link, sized according to its molecule support.](gkx1193fig4){#F4}

One of the somatic events overlapped the *SET* gene, which encodes a nuclear protein and is listed as an annotated cancer driver among the curated variants in COSMIC ([@B38]). Specifically, we identified a translocation in which a segment of Chr9: 131 457 029--131 458 900 was duplicated and inserted into Chr2 at 116 376 786. The Long Ranger software did not detect this rearrangement. The junction was supported by 50 molecules, with additional sequence breakpoint support coming from 26 short read pairs. The ∼1.8 kb inserted segment incorporates nearly the entire first exon of the *SET* gene. The translocation creates a novel *DPP10* (2q14.1)/*SET* (9q34.11) gene fusion. *SET* gene fusions, such as *NUP214* (9q34.13)/*SET* (9q34.11), are known to be associated with various leukaemias ([@B39]). Dipeptidyl Peptidase Like 10 (*DPP10*) is highly expressed in brain tissue ([@B40]) and has been implicated in asthma ([@B41]). Several reports have shown that *DPP10* has a potential role in colorectal cancer ([@B42]) and neuroblastoma ([@B43]). The role of this rearrangement in colorectal cancer is yet to be determined.

We identified other somatic rearrangements as well. Three examples are shown in Figure [5](#F5){ref-type="fig"}. A total of 39 molecules supported the first junction. The rearrangement involved a ∼3.2 Mb partial duplication of Chr10: 144 672 679--147 914 434 (Figure [5A](#F5){ref-type="fig"} and [Supplementary Figure S7A](#sup1){ref-type="supplementary-material"}). The segment harbors the ST3 Beta-Galactoside Alpha-2,3-Sialyltransferase 3 gene (*ST3GAL3*). The breakpoint is within the last intron of the gene, which alters normal transcript forms. *ST3GAL3* is known to affect cell motility in metastasis ([@B44],[@B45]). The second junction, which has 29 supporting molecules, represents a ∼458 kb deletion of Chr8: 98 634 834--99 093 478 (Figure [5B](#F5){ref-type="fig"} and [Supplementary Figure S7B](#sup1){ref-type="supplementary-material"}). The deletion covers several genes, including three associated with metastasis: Metadherin (*MTDH*), Lysosomal Protein Transmembrane 4 Beta (*LAPTM4B*) and Matrilin 2 (*MATN2*) ([@B46],[@B47]). The third junction, which has 34 supporting molecules, represents a ∼941 kb deletion of Chr18: 2 993 550--3 935 415 (Figure [5C](#F5){ref-type="fig"} and [Supplementary Figure S7C](#sup1){ref-type="supplementary-material"}). The deletion removes the TGFB Induced Factor Homeobox 1 (*TGIF1*) gene, which is crucial to normal brain development and whose loss causes holoprosencephaly ([@B48]). The deletion breakpoint resides within the first intron of Lipin 2 (*LPIN2*), which can disrupt normal gene transcripts. Deactivation of this gene, along with the *NF2*, *NIPSNAP1* and *UGT2B17* genes, is reported to enable metastasis in prostate cancer cell lines ([@B49]).

![Three somatic rearrangements resolved in a colorectal cancer. (**A**) A somatic duplication that interrupts the *ST3GAL3* gene. A total of 39 molecules supported the junction breakpoint. (**B**) A somatic deletion that removes the *MTDH*, *LAPTM4B* and *MATN2* genes.
A total of 29 molecules supported the junction breakpoint. (**C**) A somatic deletion that removes the *MYOM1* and *TGIF1* genes and interrupts the *LPIN2* gene. A total of 34 molecules supported the junction breakpoint. Each subfigure also shows the same region from the matched Norm7176 sample as an inset, which shows no alteration. Higher-than-expected barcode sharing, representing the junction breakpoint, is circled in red.](gkx1193fig5){#F5}

As an additional demonstration of this approach's ability to identify complex somatic rearrangements, we sequenced two gastric tumors and a matched normal tissue, denoted MetR2721 (tumor), MetL2725 (tumor) and Norm2386 (normal tissue). As with the colorectal cancer, we identified a series of somatic rearrangements with multiple SV elements. Importantly, our molecule method identified these events despite the limited tumor cellularity of these samples. We ran ZoomX with a grid length of 10 kb. The sequencing statistics are presented in [Supplementary Table S1](#sup1){ref-type="supplementary-material"}. We inferred 12--42 million molecules with an average molecule length of around 10 kb. The estimated extrapolated molecule coverage $c^M$ was 142 (Norm2386), 132 (MetL2725) and 43 (MetR2721), respectively. The estimated null sharing was $\mu_0 < 0.1$ for all three. We required the minimum estimated junction allele support to be at least 14 molecules. The analysis was focused on large-scale events (>200 kb). We list all somatic rearrangements found with Bonferroni-adjusted Fisher's exact test *P*-values <0.05 in [Supplementary Table S5](#sup1){ref-type="supplementary-material"}. All junctions also had more than two paired-end short read pairs as additional validation support.

We found four somatic intra-chromosomal junctions in the MetR2721 sample and two in the MetL2725 sample. Two of the four and one of the two junctions overlapped cancer driver gene regions, as defined by the COSMIC census; both overlaps were significantly enriched (*P* = 0.003286 and *P* = 0.04081, binomial test). Of particular interest, the rearrangements clustered around the Chr10: 122--124 Mb region harboring the fibroblast growth factor receptor 2 (*FGFR2*) gene. *FGFR2* is a well-known oncogene implicated in gastric cancers. In MetR2721, the duplicated region was also inverted (Chr10: 122 763 941--123 240 993). In total, 141 and 146 molecules supported the inversion breakpoints, which is equivalent to ∼7× the expected extrapolated molecule coverage for a heterozygous haplotype (Figure [6A](#F6){ref-type="fig"} and [Supplementary Figure S8A and B](#sup1){ref-type="supplementary-material"}).

![A complex somatic rearrangement. These tumor samples show distinct rearrangements in the same genomic region harbouring the *FGFR2* gene (Chr10: 122--124 Mb). In MetR2721 (**A**), the rearrangement was resolved to a somatic inversion-amplification haplotype. A total of 146 and 141 molecules supported the two junction breakpoints formed by the inversion. In MetL2725 (**B**), the rearrangement was resolved to multiple parallel haplotypes. The two major haplotypes were two duplications, with their breakpoints circled in red in the plot. A total of 71 and 41 molecules supported the duplication junction breakpoints. The coverage changes in the region also confirmed these events.
The same region from the matched Norm2386 sample shows no alteration, as shown in the inset of subfigure (A).](gkx1193fig6){#F6}

The MetL2725 sample shows more complex rearrangements in the same region, with multiple coexisting somatic alleles. Our analysis detected two distinct large-scale duplications that affected the same region, one spanning Chr10: 122 946 850--123 782 660 and the other spanning Chr10: 122 465 823--123 486 938, as shown in Figure [6B](#F6){ref-type="fig"}. The first allele is a 2× duplication, while the second is duplicated multiple times. Both duplications affect the entirety of the *FGFR2* gene. A total of 71 and 41 molecules supported the junction breakpoints, respectively ([Supplementary Figure S8C and D](#sup1){ref-type="supplementary-material"}). The accompanying normal tissue, Norm2386, shows no aberration in the region.

DISCUSSION {#SEC4}
==========

In summary, we demonstrate a new method to detect large-scale complex structural variants and rearrangements using barcode linked-read data from the 10X Genomics platform. Our approach identifies germline rearrangements and, perhaps more challenging, somatic events that occur at lower allelic fractions (<50%). We demonstrated that the method delineates complex structural variants that are >200 kb in size and missed by other methods, including long-read sequencers. Our approach detects a full spectrum of structural variations, including deletions, inversions, duplications and remote translocations, even when they occur in a lower proportion of the sample DNA, as seen in primary cancers from clinical biopsies. The improved sensitivity is a combined result of the higher extrapolated molecule coverage (typically 100× or more) and the HMW genomic DNA (typically >10 kb).

Compared to the read-based binomial test algorithm employed by Long Ranger ([@B21]) and Spies *et al.* ([@B23]) for SV calling, our statistical algorithm demonstrates improved performance for the following reasons. First, the extrapolated molecule coverage of linked-read molecules (the genome coverage computed using the inferred spans of all molecules) is generally higher than the coverage of short-insert fragments. Higher extrapolated molecule coverage translates directly into more informative features for junction detection compared to the existing read-pair design. Second, compared to individual reads or read pairs, there is a higher likelihood that a molecule represented by multiple linked-read pairs spans a rearrangement junction.

Barcode-linked sequencing data have additional features that facilitate applications beyond structural variant analysis. Linked reads are compatible with existing short-read bioinformatics pipelines used to analyse whole-genome sequencing. The DNA input is as little as 1 ng, orders of magnitude less than conventional whole-genome sequencing requires. The N50 of the phased haplotype block size is up to 1 Mb, which provides haplotypes for both single-nucleotide and structural variant calls. The ZoomX module developed here can be used directly on top of the existing 10X Genomics bioinformatics pipeline. Taken together, these developments provide a new way to perform whole-genome analysis that can rapidly identify complex rearrangements, whether germline or somatic.

AVAILABILITY {#SEC5}
============

The sequencing data of NA12878 are available from the 10X Genomics website (<https://support.10xgenomics.com/genome-exome/datasets/NA12878_WGS_210>).
ZoomX is available in the following Bitbucket repository (<https://bitbucket.org/charade/zoomx>). The dbGaP accession numbers for the cancer samples are phs001362.v1.p1 and phs001400.

*Author contributions*: L.C.X., N.R.Z. and H.P.J. designed the study. L.C.X. developed the statistical framework, wrote the software and performed the NA12878 sample analyses. L.C.X., J.J.C., J.M.B. and C.W.B. performed the sequencing and analyses. L.C.X., N.R.Z. and H.P.J. drafted the manuscript. H.P.J. provided overall supervision of the study. All authors contributed to the writing.

SUPPLEMENTARY DATA {#SEC6}
==================

[Supplementary Data](#sup1){ref-type="supplementary-material"} are available at NAR Online.

FUNDING {#SEC7}
=======

National Institutes of Health [R01HG006137 to L.C.X., H.P.J., N.R.Z.; P01HG00205 to J.M.B., J.J.C., C.W.B., H.P.J.]; Intermountain Healthcare (to L.C.X. and H.P.J.); Translational Research Award from the Stanford Cancer Institute (to H.P.J. and J.M.B.); the American Cancer Society [RSG-13-297-01-TBG to H.P.J.]; the Doris Duke Charitable Foundation, the Clayville Foundation, the Seiler Foundation and the Howard Hughes Medical Institute (to H.P.J.). Funding for open access charge: National Institutes of Health [2R01HG006137].

*Conflict of interest statement*. None declared.
An introduction is an important part of your assignment as it is the first impression of your assignment on your audience. An engaging and interesting introduction will get your readers hooked from the beginning and compel them to read more. On the other hand, a lacklustre introduction will bore the reader, and you will not score good grades. Even if the content of your project is top-notch, the introduction is still the place where you have to pay the most attention. Since a lot of students struggle with writing effective introductions, we have listed five easy steps that are followed by renowned academic writers worldwide while writing this section:

Try to start with a creative statement: You need to explain the background or information related to the topic in the introduction. You can write the introduction in your own words or quote a particular author on your research topic. However, remember that you should not be too informal in the beginning as it will not give a good impression to your assessor. A starting statement needs to be creative and attention-grabbing. You can also think about including some interesting facts here, but do not stuff it full of statistics as this will not hold your reader's attention for too long. Do not include facts in the introduction that belong in the main body of the assignment. If you remember these points, you will not have to pray for someone to "write my assignment". You can easily do it on your own.

Define some critical terms: If you are writing an assignment on a technical subject, it is natural that there will be some complex terms in it that may be difficult for the readers to understand. Thus, it can be helpful to define these terms in the introduction itself. If you are serious about writing an engaging introduction, you need to start by understanding its purpose. It has to attract more readers, but it should also explain important points to people who may not belong to the same discipline as you; otherwise it will not be useful.

Provide a general idea on the topic: After explaining the context of the topic and its background in the introduction, it is time to include a general description of the subject. This basic idea will help your readers understand what the purpose of the assignment is, how you will go about it, what they should expect from the paper, and how it is relevant to contemporary times. Your goal is the core part of an introduction and reflects your motivation behind writing the entire assignment. However, do not hide your aim behind academic jargon; instead, present it clearly so your audience doesn't feel confused about anything. You can also look for a reliable assignment writing service on the internet and ask them to assist you in writing the introduction according to the guidelines.

The introduction should be brief: The size of your introduction is determined by the kind of assignment and the total word count given in the guidelines. Usually, guidelines demand that the introduction be about 10% of the total word count. This is why you should try to wrap up your thoughts in one or two paragraphs, in brief and without any sort of exaggeration. If the introduction is too long, it loses all meaning, and your marks may get deducted too. This is why it is best to keep it succinct. Remember not to inflate any facts to attract readers, as this can be a violation of your academic integrity. Additionally, do not repeat the title of your paper in the project again and again.
Keep the ending specific: Towards the end of the introduction, you should include an outline of the assignment. This outline should present the key points of the project and briefly touch upon the main arguments that you will be making. Make sure that the ending also illustrates the scope of the research or the assignment. If you are including a thesis statement or question at the beginning of the introduction, then your reader should feel satisfied by the end but also eager to read the following parts of the assignment. If there is some aspect you are confused about, get in touch with writers who provide online help with assignments to sort out your queries.

Now that we have cleared up all the do's and don'ts of writing a stellar introduction, let us quickly go through a checklist of the essential aspects that you must include. We already know that the purpose of the introduction is to attract the reader's attention; to do this effectively, you need a clear thesis statement at the beginning that describes your intentions. Next, there should be supporting sentences that link your introduction with the rest of the project. Your introduction helps you stand apart from the crowd, so you can try starting it with a question or a quotation that intrigues your audience. You can also include some positive and negative elements of your research topic in the introduction to give the audience a balanced view of the content. Good luck!
https://www.sampleassignment.com/blog/5-expert-tips-to-write-a-brilliant-introduction-for-your-assignment/
Q: What is the ideal Borough Layout? So for the purpose of this question, let's ignore terrain. Pretend every hex is the same. Obviously during an actual game we'd want to modify this based on the terrain (and impassable terrain might make it more difficult). What order should one build boroughs in to achieve the maximum number of Level 2 districts and exploit the maximum number of hexes? In the reference photo, if I've built 1 borough already at 1, building the second at 3, 4, 5, or 7 will add three more exploits, but 2 and 6 will only add two. I think Level 2 districts are more important than exploits.

A: If the only thing you care about is level 2 districts then your layout should tend towards a flat-edged shape with as few corners as possible, which will cause every district within the shape except those in the corners to be level 2. Ultimately, a complete triangle or quadrilateral is most efficient, as all but 3 (or 4) of the boroughs will have levelled up. Let's expand the example city and we'll see.

First, we build in position 2, creating the first triangle, which doesn't achieve anything (yet). If you now build 3 and 6 this will cause the city centre to level up (5 districts is the minimum to level up your first district, since one of them has to touch the other 4 to do so). Next build 8, which completes the triangle and causes positions 1 and 2 to level up. We now have 6 districts, with 3 of them levelled up. In these images I'm using green for level 2 districts, orange for lvl 1.

If we now extend one edge of the triangle with another triangle, building 10, 9 and 23, this gives us a parallelogram. This causes 10 and 9 to level up, giving 9 districts with 5 levelled up. Repeat with another edge, building 4, 5 and 15 to give a trapezium. This causes 3 to level up as soon as you build 4, with 4 and 5 then levelling up when you complete 15. You now have 12 districts with 8 levelled up. If you complete the last edge (adding 7, 18 and 36 - levelling up 6, 8, 7 and 18) you'll have another complete triangle of 15 districts with 12 levelled up; only the corners 14, 23 and 36 remain level 1.

While expanding you could, if you prefer, simply keep adding rows to the triangle instead of expanding in the manner above. This is actually slightly more efficient (by at most one borough) at some stages when half-finished, but the end results are the same.
I just saw this too, sorry.

23 hours ago, Cameron H. said: Without being too negative, does anyone have a Streisand movie that they do like? (looking at you @tomspanks) I know a couple of you watched her version of A Star is Born, but as I said when we covered the original, I just don't think I like that story, so I doubt I would like her version any better.

Besides all her concerts? Probably The Way We Were and Yentl. I remember watching The Prince of Tides, but I can't remember any details about it.

On 6/13/2020 at 12:52 PM, Cameron H. said: I'm trying to come up with a list of "summer" movies, and I wondered if anyone had any suggestions. I'm not really looking for dramas, mostly comedies and action movies. They don't necessarily have to happen during the summer, but I'd like a summer feel. Some examples: Summer Rental, Summer School, Club Paradise. Like, would Caddy Shack be one? Maybe Point Break? Anyway, any suggestions would be appreciated

Do they have to be movies you've never watched before? You know what's hotter than summer - The Core

I always associated Don't Rain on My Parade with Streisand, but only because my mother is a big Streisand fan and I had to watch the Streisand concert(s) on tv. Still didn't know it was from Funny Girl until recently though.

Huh, more people need to watch The Way We Were.

23 hours ago, Cakebug Tranch said: Hi everyone! I'm sorry that my inability to check email last night meant I delayed the pick today. I am never around here anymore but that doesn't mean I don't miss you all, and I am especially appreciative of watching you all generate great work over on Letterboxd. I had a bit of a struggle this week with what I would pick: should I pick the standard HDTGM-worthy thing to make fun of, or something that would pep us up in this difficult time? In the end, I chose the latter, a generally well-received musical based on a Broadway production that launched an international career. What's that you say? "They didn't make a Broadway musical of Across the Universe!" Yet. They haven't made it yet. Anyway, I noticed that this movie had been added to Netflix, I had never seen it, but my wife said "oh, I want to see that!", which is as much as to say, I'm picking that, instead of the terrible movie I was going to pick. So, let's all watch the movie version of the Broadway musical that would eventually launch the career of Rachel Berry! No one in this thread had reviewed it on Letterboxd yet, so hopefully this is a new experience for many of us! It's nice to be back! I'll try to remember to be around more! (I've said that 3000 times in the last 3 years. Sorry guys.)

Your pick is like buttah!

F, marry, kill:
- Meryl Streep, Christine Baranski, Julie Walters
- Pierce Brosnan, Colin Firth, Stellan Skateboard

39 minutes ago, Cam Bert said: I think frothy is a good way to describe it. Like when you really think about it you establish a lot of these characters with distinct personalities and yet none of that ever comes into play really. Is there any reason that Stellar Skateboard is a globetrotting author? Is that just to explain his boat? What about the one friend being a cookbook author? They give them these things to make them seem less cookie cutter but in the end none of that matters or is expanded upon. They just keep things light and frothy.

One of them should've been a super scientist. Spanakopita!

If I had to pick, Dancing Queen. On the other hand, Fernando exists...
10 hours ago, Cam Bert said: I will slightly defend the autotune one just because A) it was the only different sounding song and B ) was the only one to really match the time. That said, yes she's a great singer and it does ruin it.

It makes sense in the context of the era but I still hate it

Thanks a lot, I kind of want to watch The Hobbit cartoon now.

On 5/4/2020 at 1:07 PM, AlmostAGhost said: I'm still baffled why they didn't match music style to the era. I guess the '70s one was vaguely disco-ish, but that seemed like a real clear choice to be doing in a musical like this and they didn't.

Now that I've endured the whole thing, I think I'm most disappointed with the 80s song. It sounded like every other song in the musical. And earlier, what a weird choice to autotune Audra McDonald.

8 minutes ago, Cam Bert said: To those watching it I would say the last bit is a lot better. Once you actually spend time with them and not just focus on the sex.

I will power through to get to the Audra McDonald vignette.

2 hours ago, AlmostAGhost said: What did you all think of the music? I liked a couple of songs (one by Rumer Willis and one near the end) but mainly I found them completely unremarkable I'm still baffled why they didn't match music style to the era. I guess the '70s one was vaguely disco-ish, but that seemed like a real clear choice to be doing in a musical like this and they didn't.

Of what I've seen so far, the music seems, I dunno, anemic? "Unremarkable" is fitting too. The only song that managed to get stuck in my head is Neil Diamond's Hello Again.

40 minutes ago, GrahamS. said: Sadly, I completely forgot about this because I was filling out job applications all week. Everyone's reactions makes me weirdly thankful for ...annoying paperwork???!!

Good luck!

I feel like I've been watching Hello Again all weekend, so how is it that I still have an hour left?! Help me...

2 hours ago, Cam Bert said: Like from the trailer and the name I thought this would be something along the lines of "these two souls keep finding each other in different generations" but it was literally "Oh, you'll be used in 1902 and 2002!"

They should've made Cloud Atlas, the musical.

2 hours ago, Cameron H. said: I've honestly never really been into it, but it's hitting a sweet spot today. Pun 100% intended

Jacques Torres > Zumbo

2 minutes ago, Cameron H. said: The plan was to watch it today, but apparently I'm now binging Nailed it. So...I may need to watch this tomorrow.

Nailed It is delightful!!!!!!

I'm 20 minutes into Hello Again and I'm hating my past self.

2 hours ago, Cam Bert said: If you're interested he did a more entertaining version of Primer called Timecrimes which if you love time travel, thrillers, and slobby Spanish men is the movie for you.

I'm only into 2 of those things. Anyhoo, I've wanted to watch Timecrimes but it's never on sale

1 hour ago, Cinco DeNio said: Do we have to watch it at 7:35 in the morning?

Claro que si

46 minutes ago, AlmostAGhost said: That's your pick? I mean, we could all discuss that right now today it's so short. I say, feel free to pick something longer as well for next week (if you want to)

Ok, I picked a longer movie, but don't hate me if I can't finish it by next Monday

Hello Again - available on Amazon Prime

Can you spare a few minutes? We're watching: https://www.shortoftheweek.com/2010/12/29/735-in-the-morning/

2 hours ago, Cameron H. said: You're saying the movie needed more hairy, 70's dick.
Got it

I feel that way about most movies.

51 minutes ago, Cameron H. said: It was listed as a goof on IMDB, and I didn't notice at the time, but the tombstone at the end has Berger's name, not Claude's. This means that at some point the military discovered the switcheroo, but didn't do anything about it.

I thought that was done on purpose. It made me think of his poor mom, who just wanted to clean his pants...

23 hours ago, grudlian. said: The movie had some pretty notable changes apparently. Claude was the leader of the hippies. He died instead of Berger. So, I don't know if the more complex view of the hippies is due to hindsight or updating it for the times. Has anyone seen the musical?

I've seen the musical, but it's been so long I don't remember much except the musical showed more dongs than the movie.

On 4/12/2020 at 8:31 AM, Cam Bert said: So as I was watching this movie I was struck with one thought, "Didn't my mother love The Monkees?" So I messaged my mom to ask her about The Monkees. So turns out I was right. She was a fan of The Monkees. Her room as a child had posters of The Beach Boys, Paul Revere and the Raiders and Davy Jones of The Monkees. No Monkees posters just Davy Jones. Just never saw Head and wasn't a fan of the show (she was like 10 when it was on) but liked their "music videos".

I'd never heard of Paul Revere and the Raiders so I thought it was a poster of an artist's rendering of Paul Revere's ride in the American Revolutionary War or him and his silver. In either case, best kink ever.
https://forum.earwolf.com/profile/116626-tomspanks/content/?type=forums_topic_post&page=3
Prepare Strawberry Butter; set aside. Preheat electric skillet or griddle to 375 degrees F. Combine baking mix, milk, eggs and yogurt in medium bowl; mix well. Spoon scant 1/2 cup batter into skillet. With back of spoon, gently spread batter into 4-inch circle. Spoon about 2 tablespoons batter onto top edge of circle for head. Using back of spoon, spread batter from head to form bunny ears. Cook until bubbles on surface begin to pop and top of pancake appears dry; turn pancake over. Cook until done, 1 to 2 minutes. Decorate with candies to resemble bunny face. Repeat with remaining batter. Serve warm with Strawberry Butter. Strawberry Butter: Place cream cheese and butter in food processor or blender; process until smooth. Add sugar; process until blended. Add strawberries; process until finely chopped.
http://www.goodcooking.com/recipe/11340-recipe-Bunny-Pancakes-with-Strawberry-Butter.html
ITM Web Conf., Volume 22, 2018. The Third International Conference on Computational Mathematics and Engineering Sciences (CMES2018)
Article Number: 01048 | Number of pages: 5 | DOI: https://doi.org/10.1051/itmconf/20182201048 | Published online: 17 October 2018

Comparison of Different Algorithms in the Radiotherapy Plans of Breast Cancer

1 Department of Radiation Oncology, University of Kocatepe, 03200, Afyon, Turkey
2 Department of Materials Science and Engineering, University of Kocatepe, 03200, Afyon, Turkey
* Corresponding author: [email protected]

The aim is to evaluate the portal dosimetry results of breast cancer patients planned with intensity-modulated radiotherapy (IMRT) using the Anisotropic Analytical Algorithm (AAA) and Pencil Beam Convolution (PBC) dose calculation algorithms. Plans for 10 treated patients were inverse-planned in the Eclipse (ver. 13.6) treatment planning system for a Varian Trilogy linear accelerator, prescribing 6 MV photon energy and a total dose of 50 Gray in 25 fractions with the inverse IMRT technique. For each plan, the dose was calculated after optimization using the PBC and then the AAA algorithm. Quality control of the plans was performed with the Electronic Portal Imaging Device (EPID) by creating individual verification plans for each algorithm. In addition, the maximum and average dose values in the target volume were compared between the inverse IMRT plans calculated with PBC and AAA. When the treatment plans generated by the AAA and PBC dose calculation algorithms were analyzed using EPID, the mean VArea and VAvg values were 98.15 ± 1.07 and 0.40 ± 0.048 for the PBC algorithm, and 98.72 ± 1.13 and 0.37 ± 0.051 for the AAA algorithm. The PTV Dmax value for the PBC algorithm was 109.3 ± 1.09 and the DAvg value was 101.7 ± 0.51; for the AAA algorithm, the PTV Dmax value was 110.6 ± 1.12 and the DAvg value was 102.9 ± 0.62. When the mean portal dosimetry VArea and VAvg values obtained with the PBC and AAA algorithms were compared, the differences between the algorithms were not statistically significant (p > 0.05). The differences between the algorithms for the PTV Dmax and DAvg values were also not statistically significant (p > 0.05).

© The Authors, published by EDP Sciences, 2018. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
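The abstract reports only summary statistics and a significance threshold; it does not name the statistical test used. As a hedged illustration of this kind of paired comparison, the Python sketch below runs a paired t-test on invented per-patient PTV Dmax values (placeholders chosen to be roughly consistent with the reported means; they are not the study's data):

```python
# Hypothetical sketch of a paired comparison between two dose-calculation
# algorithms. The per-patient PTV Dmax values below are invented placeholders,
# NOT the study's data; a paired test suits the design because both algorithms
# were run on the same ten patients' plans.
from scipy import stats

pbc_dmax = [109.1, 110.2, 108.4, 109.8, 110.9, 108.0, 109.5, 110.6, 108.9, 107.6]
aaa_dmax = [110.0, 109.5, 110.1, 111.2, 109.8, 110.9, 112.3, 109.0, 111.5, 111.7]

t_stat, p_value = stats.ttest_rel(pbc_dmax, aaa_dmax)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant at 5%" if p_value < 0.05 else "not significant at 5% (p > 0.05)")
```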
https://www.itm-conferences.org/articles/itmconf/abs/2018/07/itmconf_cmes2018_01048/itmconf_cmes2018_01048.html
TECHNICAL AREA

The present invention relates to a drive control for a DC electric motor to initiate the rotation of the DC electric motor and to stop it accurately after a predetermined movement of its load.

BACKGROUND OF THE INVENTION

It is well known in the art that an electric motor and a motor drive for the motor can be utilized to move a guide means used for a sorter or other similar device. Although an electric motor, such as a stepper motor, whose rotation can be accurately controlled, is available, such a motor not only has relatively low torque, but is also costly. In contrast, a DC electric motor is less costly and superior in its starting characteristic. In view of this fact, there has always been a serious demand for use of the DC electric motor to achieve the abovementioned purpose.

SUMMARY OF THE INVENTION

Rapidity and stop position accuracy are required for driving a guide means for a sorter or other similar device. In general, a moment of inertia experienced by a motor shaft when attempting to move a load is expressed as a sum of a moment of inertia of the motor itself, a moment of inertia of a rotatable load and a moment of inertia of a linearly movable load (e.g., the guide means for the sorter or other similar device). When, by conventional control methods, the DC electric motor is rapidly started and rapidly braked at a predetermined stopping point, the moments of inertia cause the motor to be stopped only after the linearly movable load has traveled beyond the stopping point by a considerable amount. A principal object of the present invention is to provide a drive control for a DC electric motor that starts the DC electric motor in such a manner that the load is rapidly and accurately brought to its destination.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a basic arrangement of a DC electric motor drive control formed in accordance with the present invention; FIG. 2 is a circuit diagram illustrating a preferred embodiment of the DC motor drive control illustrated in FIG. 1; FIG. 3 is a diagram illustrating an indexer, including an indexing disc and a sensor for detecting movement of the guide means or other similar device; FIG. 4, lines A-R, is a series of waveforms of signals at various points in the circuit diagram illustrated in FIG. 2, illustrating the operation of the DC motor drive control formed in accordance with the present invention; and FIG. 5 is a diagram illustrating a counter electromotive current generated by the DC motor after the current has been amplified by a differential amplifier.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

FIG. 1 shows a simplified block diagram of a drive control for a DC electric motor formed in accordance with the present invention. The drive control drives the DC electric motor IV so that a load VI is stopped at a predetermined stop position. An indexer V senses the rotation of the motor IV and generates: a deceleration point signal when the load VI, that is moved by the motor IV, reaches a deceleration point; and a stop point signal at a stop point corresponding to a destination of the load VI. A rotational direction detector III senses a counter electromotive force generated by the motor IV when it reaches the stop point and detects the rotational direction of the motor IV as the motor passes the stop point.
The rotational direction detector III stores the value of the counter electromotive force and produces a rotational direction signal indicating the actual rotational direction of the motor IV. A movement signal, b, indicating the starting of the motor IV; a movement direction signal, a, indicating the rotational direction of the motor IV; and an indexer signal, c, from the indexer V are applied to a rotation control circuit I. The rotation control circuit I generates: a drive signal in response to the movement and movement direction signals, b and a, respectively; a deceleration direction signal in response to the deceleration point signal; an inhibit signal upon generation of the stop point signal; and a decelerated drive signal representing a reverse rotational direction of the motor IV with respect to the rotational direction of the motor IV when the rotational direction signal is generated upon completion of the inhibit signal. A motor drive circuit II supplies a drive current having a different amplitude and direction relative to the movement direction signal, a, and the drive signal from the rotation control circuit I. The motor drive circuit II stops the supply of the drive current in response to the inhibit signal.

The present invention will be described in more detail, by way of an example, with reference to the accompanying drawings. As discussed above, FIG. 1 is a simplified block diagram of a DC electric motor drive control formed in accordance with the present invention. FIG. 2 is a schematic diagram showing a preferred embodiment of portions of the drive control illustrated in FIG. 1. The motor drive circuit II drives the DC electric motor IV which, in turn, drives the load VI (i.e., the sorter or other similar device). An indexer V senses the rotation of the motor IV and generates the deceleration point signal at the deceleration point that lies along a course of travel of the load VI. Additionally, the indexer V generates the stop point signal at the destination stop point of the load VI. The indexer V, as shown in FIG. 1 and FIG. 3, is coupled to a shaft driven by an output shaft of the DC electric motor IV. The indexer V comprises an indexing disc 5 having signal generation positions (i), (ii), (iii), (iv), and a sensor 6 that detects the deceleration and stop points generated by the disc 5. In FIG. 3, (i) designates the stop point (or the movement start point), (ii) designates the deceleration point, (iii) designates the stop point and (iv) designates the deceleration point. In accordance with the preferred embodiment of the invention, the motor IV is controlled so that the motor IV is started at the stop point (i) and begins to move the load VI toward the deceleration point (ii). Upon reaching the deceleration point (ii), the motor has its rotational speed decelerated so that the load VI (i.e., the guide means) is stopped at the stop point (iii). The above operation is successively repeated. It should be noted that another course of movement [i.e., (i) → (ii) → (iii)] is also possible depending on the direction of movement. The load VI (i.e., the guide means for the sorter) is supported so that it may be moved from one destination (the movement start point) in a first or a second direction to another destination and stopped. A clockwise rotation (hereinafter referred to as CW) of the DC electric motor IV moves the load VI in the first direction while a counterclockwise rotation (hereinafter referred to as CCW) moves the load VI in the second direction.
As will be discussed later, it is possible that the rotational direction of the DC electric motor IV will be reversed during the above operation. The actual rotational direction of the DC electric motor IV is detected by the rotational direction detector III. The rotation control circuit I supplies the motor drive circuit II with the motor drive signal. The rotation control circuit I has a movement direction memory flip-flop (FF) 22 that stores the movement direction signal, a. Signal a is supplied from a sequencer or other similar device (not shown), and may be either a CW or CCW signal. The CW signal (which causes the load VI to move in the first direction) is applied to an S-terminal of FF 22 and the CCW signal (which causes the load VI to move in the second direction) is applied to an R-terminal of the FF 22. A deceleration point memory FF 14 stores the deceleration point signal supplied from the indexer V. The movement direction signal, a, is applied through an EXOR circuit to an S-terminal of FF 14, the indexer signal, c, is applied through an inverter to a C-terminal of FF 14, and the movement signal, b, is applied to an R-terminal of FF 14. The deceleration point memory FF 14 is reset at the start of rotation by the movement signal, b, that is applied to the R-terminal. Thus, when the indexer signal, c, corresponding to point (i), is initially generated, it will not be erroneously stored as corresponding to the deceleration point (ii). A stop point memory FF 16 stores the indexer signal, c, as the stop point signal (iii). The indexer signal, c, is applied through the inverter to a C-terminal of FF 16. An output from a Q-terminal of the deceleration point memory FF 14 is coupled, through an integrator 15, to an R-terminal of the FF 16. The indexer signal, c, and the movement signal, b, are applied to a movement signal preference circuit 7 which comprises an OR circuit and an EXOR circuit. When the movement signal, b, from the sequencer is applied to the movement signal preference circuit 7, the preference circuit 7 gives priority to the movement signal, b. The priority function of circuit 7 prevents the indexer signal, c, that appears at the stop point (i) from being input to a stop circuit 8 and a D/A converter 12, both of which are included in the motor drive circuit II. Consequently, the stop circuit 8 and the D/A converter 12 of the motor drive circuit II are released from their stop conditions by a leading edge of the movement signal, b. To assure that the motor IV can continue to be rotated even after the movement signal, b, has been turned off, the movement signal, b, must continue to be supplied until the stop point (i) is passed. Accordingly, the width of the movement signal, b, is determined as follows: t1 < t2 < t3, where t1 = the pulse width of the indexer signal, c; t2 = the pulse width of the movement signal, b; and t3 = the indexer pulse interval (the time required to move from the stop point (i) to the deceleration point (ii)). An AND circuit 9, an OR circuit 10 and a NOR circuit 11 supply the motor drive circuit II with signals indicating: the rotational direction (CW, CCW); and the drive mode or the decelerated drive mode. The values of the signals are dependent upon: the output signals of the rotational direction detector III, which will be described below; and the logic outputs of the movement direction memory FF 22, the deceleration point memory FF 14 and the stop point memory FF 16.
A limiter 21 is a counter that applies a stop signal to the stop circuit 8 of the motor drive circuit II when the number of pulses representing a stop point (i) exceeds a predetermined number. The deceleration point signal and the stop point signal are also applied through the movement signal preference circuit 7 to the stop circuit 8. The OR circuit 10 and the NOR circuit 11 of the rotation control circuit I supply signals indicating the rotational direction and the speed of the motor IV to the D/A converter 12 of the motor drive circuit II. Input terminals CW, CCW, CW deceleration, and CCW deceleration of the D/A converter 12 have their logical input conditions related to one another as follows: the CCW terminal is low when the CW terminal is high and the CCW terminal is high when the CW terminal is low; and the CCW deceleration terminal is high when the CW deceleration terminal is low and the CCW deceleration terminal is low when the CW deceleration terminal is high. The output of the D/A converter 12 provides: the CW high speed drive signal when the CW input is high and the CW deceleration input is high; the CW decelerated drive signal when the CW input is high and the CW deceleration input is low; the CCW high speed drive signal when the CCW input is high and the CCW deceleration input is high; and the CCW decelerated drive signal when the CCW input is high and the CCW deceleration input is low. The analog voltage output from the D/A converter 12 is applied to a differential amplifier 13 which, in turn, amplifies the analog voltage to a desired level. A reference voltage, Vref1, is applied to the differential amplifier 13. Output voltages from operational amplifiers A1 and A2 of the differential amplifier 13 are related as follows: the output of the operational amplifier A2 is negative when the output of the operational amplifier A1 is positive; and the output of the operational amplifier A2 is positive when the output of the operational amplifier A1 is negative. Output current from the operational amplifier A1 flows through: a resistor R1; the DC motor IV; a resistor R2; and the operational amplifier A2. Output current from the operational amplifier A2 flows through: the resistor R2; the DC motor IV; the resistor R1; and the operational amplifier A1. Rotation of the DC motor IV causes a movement of the guide means and a rotation of the indexer V. Voltages IN1 and IN2, at opposite ends of the DC motor IV, are related to each other as follows: IN1 ≤ Vref1 ≤ IN2; or IN1 ≥ Vref1 ≥ IN2, where Vref1 is a voltage which causes the DC motor IV to be turned off. The rotational direction detector III detects the rotational direction of the motor IV at a point in time when the motor IV is de-energized by the stop point signal. A Q output from the stop point memory FF 16, of the rotation control circuit I, is transmitted to the AND circuit 9, thereby inhibiting the movement direction signal, a, from the movement direction memory FF 22. As the load VI travels from the deceleration point (ii) to the stop point (iii), both the D/A converter 12 and the differential amplifier 13 are turned off, resulting in a counter electromotive current generated by the motor IV that flows from IN1 through: the resistor R1; a diode D1; a diode D2; and the resistor R2 to IN2.
In the case of the load traveling from the deceleration point (iv) to the stop point (iii), the counter electromotive current flows from IN2 through: the resistor R2; a diode D3; a diode D4; and the resistor R1 to IN1. Consequently, voltage drops across the resistor R1 and the resistor R2 are related to the current direction (and, therefore, the rotational direction of the motor IV). The voltage across the resistor R1 or the resistor R2 is input to, and amplified by, the differential amplifier 19 of the rotational direction detector III. The output voltage of the differential amplifier 19 is positive, with respect to the off point Vref1, when the output voltage is caused by the CW direction of the motor IV, and negative, with respect to the off point Vref1, when the output voltage is caused by the CCW direction of the motor IV (FIG. 5). The output voltage from amplifier 19 is transmitted to a comparator 20 that compares the output voltage with reference voltages Vref2 and Vref3. In this manner, the analog voltage output from the differential amplifier 19 is converted into a digital signal indicating the CW or CCW rotational direction. The reference voltages Vref2 and Vref3 are selected so that they are related to the off voltage Vref1 in the following manner: Vref2 < Vref1 < Vref3. The relationship between C1 and C2 of the comparator 20 is such that C2 is low when C1 is high, and C2 is high when C1 is low. The output signal from the comparator 20 is applied to data terminals (D) of the CW memory FF 17 and the CCW memory FF 18 so that the rotational direction of the motor IV is stored when the trailing edge of the stop signal of the indexer signal, c, is received. The rotational direction signal outputs from the CW memory FF 17 and the CCW memory FF 18 of the rotational direction detector III are transmitted to the OR circuit 10 of the rotation control circuit I. The direction signal output from the CW memory FF 17 is coupled to the CCW terminal of the D/A converter 12, while the CCW direction signal output from the CCW memory 18 is coupled to the CW terminal of the D/A converter 12. As an example, when the CW signal is output from the CW memory FF 17, this CW signal is input to the CCW terminal of the D/A converter 12 which, in turn, outputs the CCW voltage. Consequently, the CCW signal amplified by the amplifier 13 causes the motor IV to be rotated in the CCW direction. Accordingly, when the load has passed the stop point (iii), the rotation of the motor IV is controlled so as to drive the load VI back to the stop point (iii). Thus, the rotation of the motor IV is stopped at the stop point (iii). If the load VI has not traveled beyond the stop point (iii), a predetermined control condition has been satisfied and the motor IV is necessarily stopped and awaits the next movement signal. The preferred embodiment of the invention operates in a manner that will be described below with reference to FIG. 4. FIG. 4 is a time chart that illustrates the waveforms of various inputs and outputs when the motor IV is driven in the clockwise direction. It is assumed that the movement direction signal, a (not shown), indicating movement in the CW direction, and the movement signal, b, as shown on line C in FIG. 4, are applied to the rotation control circuit I.
The latter signal causes the output of the movement signal preference circuit 7 to be low, as shown on line R in FIG. 4, and thus the motor drive circuit II is released from its stop condition. At this point in time, the CW and CW deceleration inputs, as indicated on lines K and Q in FIG. 4, respectively, are applied to the D/A converter 12 of the motor drive circuit II so that the motor IV is driven in the clockwise direction at full speed during the t3 period, as indicated on line A in FIG. 4. The CW rotation causes the load VI to travel in the first direction and the indexer V to be rotated so as to generate the deceleration point signal, as indicated on line B (ii) in FIG. 4, which is input to the rotation control circuit I. The deceleration point signal is, as indicated on line R in FIG. 4, transmitted via the movement signal preference circuit 7 to the stop circuit 8, thereby turning the D/A converter 12 and the differential amplifier 13 off. The voltages on the terminals IN1 and IN2 are related such that IN1 = IN2 = Vref1. Thus, the voltage across the terminals IN1 and IN2 is zero and, as a result, a counter electromotive current is generated by the DC motor IV. This current flows from the terminal IN1 through: the resistor R1; the diode D1; the diode D2; and the resistor R2; to the terminal IN2, causing a braking effect that attempts to stop the DC motor IV. However, because the motor's inertial force exceeds the braking force, the braking effect produces a rapid deceleration of the motor IV from the point (i) to the point (ii) rather than stopping the DC motor IV at the deceleration point (ii). At the trailing edge of the deceleration point pulse, the output of the deceleration memory FF 14 goes high, as illustrated on line D in FIG. 4; the input terminals 7 and 8 of the NOR circuit 11 go high, as indicated on line O in FIG. 4; the output terminal 9 of the NOR circuit 11 goes low; and the CW deceleration input of the D/A converter 12 in the motor drive circuit II goes low, as indicated on line Q in FIG. 4. Thus, the motor IV is driven in the clockwise direction in the decelerated drive fashion, as indicated on line A in FIG. 4. Upon generation of the stop point pulse, i.e., when the indexer V has passed the deceleration point (ii) and reaches the stop point (iii), the counter electromotive current is generated by the motor IV, thereby providing a braking effect. If the motor IV is stopped during this process, the stop point signal pulse (iii) will not fall. Accordingly, the stop circuit 8 of the motor drive circuit II maintains the differential amplifier 13 at its stopped condition so as to complete the control of the load VI. If the load VI goes beyond the stop point (iii) due to the inertia of the motor IV, the stop point memory FF 16 will store the running-past condition in preparation for the subsequent operation by going high when the trailing edge of the stop point signal (of the indexer signal, c) is input to the stop point memory FF 16. The low output of the stop point memory FF 16 is transmitted to the AND circuit 9, thereby inhibiting the movement direction signal, a, from the movement direction memory FF 22.
At the stop point (iii), both the D/A converter 12 and the differential amplifier 13 are turned off and the counter electromotive current produced by the motor IV flows from the terminal IN1 through: the resistor R1; the diode D1; the diode D2; and the resistor R2; to the terminal IN2. Thus, voltage drops across the resistor R1 and the resistor R2 are related to the current direction (i.e., the rotational direction). The voltage across the resistor R1 or R2 is input to, and differentially amplified by, the differential amplifier 19 of the rotational direction detector III. The output voltage of the differential amplifier 19 is positive with respect to the off level Vref1 if the output voltage is caused by the CW direction of the motor IV and is negative with respect to the off level Vref1 if the output voltage is caused by the CCW direction of the motor IV (FIG. 5). The output voltage signal from amplifier 19 is transmitted to the comparator 20 and compared with the reference voltages Vref2 and Vref3. Thus, the analog voltage output from the differential amplifier 19 is converted into the digital signal indicating the CW rotational direction of the motor IV. The comparator 20 compares the off voltage, Vref1, of the DC motor IV with Vref2 and Vref3, and produces an output signal that is applied to the data terminals (D) of the CW memory FF 17 and the CCW memory FF 18. At the same point in time that the trailing edge of the stop signal (the indexer signal, c) occurs, the output of the CW memory FF 17 goes high, the output of the CCW memory FF 18 goes low, and the rotational direction of the motor IV is stored. The rotational direction signals from the CW memory FF 17 and the CCW memory FF 18 are transmitted to the OR circuit 10. The CW direction signal at the output terminal (6) of the OR circuit 10 is applied to the CCW terminal of the D/A converter 12. At this point in time, the CCW deceleration terminal is low, as indicated on line Q in FIG. 4, and the CCW decelerated drive current is supplied to the motor IV, as indicated on line A in FIG. 4. The rotation of the motor IV is controlled so that, when the load VI has passed beyond the stop point (iii), the load is brought back to the stop point (iii). In this manner, the rotation of the motor IV is finally stopped at the stop point (iii). In this embodiment of the invention, the limiter 21 comprises a counter. The motor drive circuit II is disabled by the stop circuit 8 after N attempts to stop the motor IV at the stop point (iii). Although the preferred embodiment of the present invention has been described in detail, various modifications are possible within the scope of the invention. For example, it is possible to progressively reduce the reverse drive current after going past the stop point in an analog fashion, and thereby achieve a smooth stop. As will be apparent from the foregoing description, the motor drive control constructed in accordance with the present invention enables a relatively low cost DC electric motor to be rapidly driven and accurately stopped at predetermined stop positions. The rotational direction of the motor in the vicinity of the respective stop points can be detected using the counter electromotive current generated by the motor, so that the synchronizing means may be simplified and no other sensor means are necessary.
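The stop-and-correct sequence described above is realized entirely in hardware, but its control flow can be summarized in software. The following Python sketch is a reconstruction for illustration only: every identifier (drive, cut_drive, back_emf_direction, wait_for, N_MAX) is invented, and the duck-typed motor and indexer objects stand in for the analog and logic circuitry of FIGS. 1 and 2.

```python
# Hypothetical software rendering of the hardware control sequence described
# above; the patent implements this logic with flip-flops, a D/A converter and
# a differential amplifier, not a processor. All identifiers are invented.
N_MAX = 3  # limiter 21: give up after a predetermined number of corrections

def reverse(direction: str) -> str:
    """Opposite rotational direction, as selected by the cross-coupled
    outputs of the CW/CCW memory flip-flops."""
    return "CCW" if direction == "CW" else "CW"

def move_to_next_stop(motor, indexer, direction: str) -> bool:
    motor.drive(direction, speed="high")        # full-speed drive from stop point (i)
    indexer.wait_for("deceleration_point")      # point (ii): switch to decelerated drive
    motor.drive(direction, speed="decelerated")
    indexer.wait_for("stop_point")              # point (iii): cut the drive current
    motor.cut_drive()                           # back-EMF now brakes the motor

    attempts = 0
    while attempts < N_MAX:
        # The sign of the voltage drop across R1/R2 reveals the rotational
        # direction if the load coasted past the stop point; None = stopped.
        overshoot = motor.back_emf_direction()
        if overshoot is None:
            return True                         # stopped at (iii); await next movement signal
        motor.drive(reverse(overshoot), speed="decelerated")  # pull the load back
        indexer.wait_for("stop_point")
        motor.cut_drive()
        attempts += 1
    return False                                # limiter disables the drive circuit
```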
Where Is My CIBC Aerogold Visa Infinite Card?

Today I will talk about my experience applying for a CIBC Aerogold Visa Infinite credit card. Everything that could have gone wrong with the process did go wrong, and that's what I will share. On Jan 13, 2018, I called in to apply for a CIBC Aerogold Visa Infinite card and an American Express AeroplanPlus Gold credit card. Within a week, by the following Friday, my Amex AeroplanPlus card arrived, but there was no news from CIBC. After around 10 days, I received a call from CIBC asking me to call them back for further information. When I called back, the guy asked me what credit limit I wanted, for example, $5,000 or $10,000, or something else. I mentioned that $5,000 would be just fine. He said I should be receiving my card shortly. Three weeks passed and nothing arrived, so I called them again. They had difficulty tracking my application, but finally they were able to find it. I was told that the last person I talked to did not do his job or forgot, and no credit card was mailed out. They said it would be mailed out shortly. I was still calm and accepted CIBC's poor service. The next day, on Feb 5, 2018, I received another email from CIBC asking me to call them again for information verification. It was like déjà vu, and by now I was getting irritated, so I called back. The rep was apologetic, mentioned that it was their back office that caused all these delays, and assured me that this time it would be OK. Finally, after nearly one month, I received my card in the mail. I never had this type of issue with anyone else before. My credit score is usually above 800 and I get whatever credit card I apply for right away. So this was a lesson for me on how disorganised and inefficient CIBC's back office is. Another thing I found out is that CIBC's regular credit card customer service (not the lost or emergency line) is not open 24 hours. It is very important to have a credit card that offers 24-hour customer service. You just never know when you will need to call them. For example, you are travelling and your card is declined, or you just have a question and you can't get hold of anyone due to a time difference. I have a few CIBC credit cards, but none of them are my primary credit cards. And due to poor customer service, I intend to keep it that way.
https://ahmeddawn.com/blog/where-is-my-cibc-aerogold-visa-infinite-card
DELTA, BC V4C 3C8

Welcome to the Kumon Math and Reading Center of North Delta! The North Delta Center has been in operation since December 2011 and operates at Kennedy Heights Shopping Centre, 11954 88 Ave, Delta. Our mission is to maximize the academic potential of each of our students by improving their math and/or reading skills following the Kumon Method. New student orientations and testing are conducted on an individual basis and are scheduled outside of class hours. To schedule an appointment or to find out more about the Kumon Program, please call the Center at 604-349-6488 or 604-376-0702.

Registration Fee: $50, non-refundable
Material Fee: $30, non-refundable
Other Fees that May be Charged:

PARENTS, PLEASE READ THE FOLLOWING CENTRE POLICIES CAREFULLY:

1. Our class times at the Kumon Center of North Delta are Mondays, Tuesdays, Thursdays and Fridays from 3:00 pm until 7:00 pm. Please make sure students arrive at the Centre with enough time to complete their work within these hours. If you realize that your child will be unable to come to class, please call me ahead of time so that we can try to make alternative arrangements for you to pick up your child's work. If a student misses a class, all assigned work will be pushed back and reassigned to the next class. Students who arrive too late to complete their classwork within the class hours will be sent home with it, along with their homework for the next several days.

2. Please do not ask for extra work without giving me at least one week's notice ahead of time. That way, I can make a note of it right in the student's file and avoid mix-ups. A written note from you may lead to less confusion later, as class time can be hectic.

3. All homework is to be completed by the student, marked by a parent or guardian (preferably with a red pen, but not pencil), and corrected on a daily basis by the student. This saves the student from having to spend an hour or more at the Centre (while you wait) doing corrections from the previous week. More importantly, if the student does not do his/her corrections on a daily basis at home, he/she may continue making the same mistake through several sets and may have to repeat that work again later to "unlearn" the mistake he/she has made consistently. This will slow the student's progress considerably. Any sets not completed at home should be returned with the completed homework at the next class. This homework will then be re-assigned to the student subsequently.

4. It is VERY IMPORTANT that students record their starting and finishing times. This, in addition to the accuracy of the work, helps me to monitor the student's progress. If times are not recorded correctly and accurately, the student will have to repeat that day's assignment.

5. Home-grading guidelines for both Math & Reading are available and are provided to all new families when you enroll (it's a good idea to review these guidelines periodically). Following the Home-grading guidelines will aid my assistants and me in reviewing and recording homework in class, thus resulting in a more efficient classroom routine (and less time students spend in the Centre). Interpreting parents' different grading styles can be a challenge at times, so following standard grading policies really helps us move things along.

6. Parents and siblings may wait in the designated waiting area or outside, but NOT IN THE CLASSROOM AREA.
It is imperative that students be allowed to complete their assignments independently in a quiet atmosphere that is conducive to learning. For the sake of student safety, do not leave students at the Centre while you run errands.

7. The North Delta Kumon Centre operates on a year-round basis, including the summer months. Understandably, students will go on vacation at various times. Please give at least one week's notice for vacations so that sufficient work may be prepared for students. It is standard Kumon policy that work is taken by the students on all vacations, as prolonged absences will adversely affect the student's progress. If this is not possible, please talk to me.

8. Students are encouraged to attend classes regularly twice a week, but once-a-week arrangements can be made.

9. Answer books are the property of the North Delta Kumon Centre. Please handle them with care. Answer books are to be handed in once the student graduates to the next level. An answer book for the new level will be given to replace the old book.

10. Remember that progress is not judged strictly by the page number that the student is working on. Compare the time taken and the score attained each time. Any decrease in time and increase in accuracy is progress. Remember to provide a lot of encouragement.

11. Progress reports with an updated progress goal graph will be sent out after the student finishes each level of work. If there is a problem in the Centre or with the materials, I will contact you immediately.

12. All information that I distribute to parents (newsletters, centre closure notices, etc.) is sent through the Kumon student, so please check with your child after every class to see if there is something for you with their homework.

13. Keep your perspective! Kumon is a "supplemental" education program. We are here to help students do better in their regular school classrooms. Nothing from Kumon goes on students' "permanent records" and there is no failure by students in Kumon. You can best support your child by:
a. making sure he/she completes his/her work on a daily basis.
b. grading each day's assignment and then checking that corrections have been completed.
c. making sure they complete the work themselves. In Kumon, children have the luxury of being able to spend as much time as is necessary to master a concept, as measured by accuracy and speed. Giving them the answers or doing the work for them ultimately compromises the student's progress, as they will not learn the concepts necessary to succeed at higher levels.
d. being on the lookout for students copying answers from the answer books or using calculators on math problems. Keep these items out of your child's possession during "Kumon time."
https://www.kumon.com/delta-north/aboutcenter
--- abstract: 'Autonomous intelligent agent research is a domain situated at the forefront of artificial intelligence. Interest-based negotiation (IBN) is a form of negotiation in which agents exchange information about their underlying goals, with a view to improving the likelihood and quality of an offer. In this paper we model and verify a multi-agent argumentation scenario of a resource-sharing mechanism that enables resource sharing in a distributed system. We use IBN in our model, wherein agents express their interests to the others in the society to gain certain resources.' author: - 'Supriya D'Souza, Abhishek Rao, Amit Sharma and Sanjay Singh[^1]' bibliography: - 'ref.bib' title: 'Modeling & Verification of a Multi-Agent Argumentation System using NuSMV' ---

Introduction ============ Negotiation is a form of interaction in which a group of agents, with conflicting interests, try to come to a mutually acceptable agreement on the distribution of scarce resources. Argumentation-Based Negotiation (ABN) approaches enable agents to exchange information (i.e. arguments) during negotiation [@ri03]. This paper is concerned with a particular style of argument-based negotiation, namely Interest-Based Negotiation (IBN) [@rsd03], a form of ABN in which agents explore and discuss their underlying interests. Information about other agents' goals may be used in a variety of ways, such as discovering and exploiting common goals. Most existing literature supports the claim that ABN is useful by presenting specific examples that show how ABN can lead to agreement where a more basic exchange of proposals cannot (e.g. the mirror/picture example in [@psj98]). The focus is usually on the underlying semantics of arguments and argument acceptability. However, no formal analysis exists of how agent preferences, and the range of possible negotiation outcomes, change as a result of exchanging arguments. In this paper, we model and verify a resource sharing mechanism through which agents in a digital ecosystem collaborate.

Preliminaries ============= Our negotiation framework consists of a set of two agents **A** and a finite set of resources **R**, which are indivisible. An allocation of resources is a partitioning of **R** among agents in **A** [@emst06]. An allocation of resources **R** to a set of agents **A** is a function $\Lambda : \textbf{A} \rightarrow 2^{\textbf{R}}$ such that $\Lambda (i)\cap \Lambda(j) = \Phi $ for $i \neq j$ and $\cup_{i \in \textbf{A}} \Lambda(i) = \textbf{R} $. Agents may have different preferences over sets of resources, defined in the form of utility functions. A payment is a function $ p : \textbf{A} \rightarrow \mathbb{R}$ such that $\sum_{i \in \textbf{A}}p(i) = 0$. Note that the definition ensures that the total amount of money is constant. If $p(i) > 0$, the agent pays the amount $p(i)$, while $p(i) < 0$ means the agent receives the amount $-p(i).$ We can now define the notion of 'deal' formally. Let $\Lambda$ be the current resource allocation. A deal with money is a tuple $ \Delta = (\Lambda,\Lambda^{'}, p)$ where $\Lambda^{'}$ is the suggested allocation, $\Lambda^{'} \neq \Lambda$, and $p$ is a payment.

Methodology =========== An offer (or proposal) is a deal presented by one agent which, if accepted by the other agents, would result in a new allocation of resources. In this paper, we will restrict our analysis to two agents. The bargaining protocol initiated by agent $A_i$ with agent $A_j$ is shown in Fig. 1.
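To make the preceding definitions concrete, here is a minimal Python sketch, with identifiers of our own choosing rather than the paper's, of an allocation, a payment and a deal, with validity checks that follow the partition and zero-sum conditions defined above:

```python
# Minimal sketch of the paper's definitions; identifiers are ours, not the paper's.
from itertools import chain

def is_allocation(alloc: dict, agents: set, resources: set) -> bool:
    """alloc maps each agent to a set of resources; the bundles must be
    pairwise disjoint and jointly cover the full resource set."""
    if set(alloc) != agents:
        return False
    bundles = list(alloc.values())
    union = set(chain.from_iterable(bundles))
    disjoint = sum(len(b) for b in bundles) == len(union)
    return disjoint and union == resources

def is_payment(p: dict) -> bool:
    """p[i] > 0 means agent i pays, p[i] < 0 means agent i receives;
    the sum is zero, so the total amount of money is constant."""
    return abs(sum(p.values())) < 1e-9

def is_deal(lam: dict, lam_new: dict, p: dict, agents: set, resources: set) -> bool:
    """A deal (lam, lam_new, p): both are allocations, they differ,
    and p is a valid payment."""
    return (is_allocation(lam, agents, resources)
            and is_allocation(lam_new, agents, resources)
            and lam != lam_new
            and is_payment(p))
```

For instance, with agents {1, 2} and resources {'r1', 'r2'}, the deal ({1: {'r1', 'r2'}, 2: set()}, {1: {'r1'}, 2: {'r2'}}, {1: -5, 2: 5}) passes all three checks.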
Bargaining can be seen as a search through possible allocations of resources. In the brute force method, agents would have to exchange every possible offer before a deal is reached or disagreement is acknowledged. The number of possible allocations of resources to agents is $|\textbf{A}|*2^{|\textbf{R}|}$, which is exponential in the number of resources. The number of possible offers is even larger, since agents would have to consider not only every possible allocation of resources, but also every possible payment. Various computational frameworks for bargaining have been proposed in order to enable agents to reach deals quickly [@ai07].

![State Chart Diagrams[]{data-label="Fig:SD"}](fig1)

Fig. 1 shows a state chart diagram, which is used to describe the behavior of the system. In Fig. 1, the upper portion shows the states of the offering agent and the lower portion those of the reacting agent. After both agents initialize, the offering agent offers some resource to the reacting agent, and the reacting agent either accepts, rejects or challenges. If the reacting agent challenges, the offering agent argues against the challenge (challenging is a continuing process that lasts until the offering agent meets the requirements of the reacting agent). The stop state marks the termination of the communication.

**Bargaining Protocol (BP):**

Agents start with resource allocation $\Lambda^{0}$ at time $t = 0$. At each time $t > 0$:

1. Propose($A_i$, $\delta^{t}$): Agent $A_i$ proposes to $A_j$ deal $\delta^{t}$ = $(\Lambda^{0},\Lambda^{'},p^{t})$ which has not been proposed before;

2. Agent $A_j$ either: (i) accept($A_j$, $\delta^{t}$): accepts, and negotiation terminates with allocation $\Lambda^{t}$ and payment $p^{t}$; or (ii) reject($A_j$, $\delta^{t}$): rejects, and negotiation terminates with allocation $\Lambda^{0}$ and no payment; or (iii) challenges the argument by going to step 1 at the time step $t + 1$.

Model Checking ============== Over the years, model checking has evolved greatly into the software domain rather than being confined to hardware such as electronic circuitry. Model checking is one of the most successful approaches to the verification of any model against formally expressed requirements. It is a technique used for verifying finite state transition systems. The specification of the system model can be formalized in temporal logic [@mm04], which can be used to verify whether a specification holds true in the model. Model checking has a number of advantages over traditional approaches which are based on simulation, testing and deductive reasoning. In particular, model checking is an automatic, fast tool to verify the specification against the model. If any specification is false, the model checker will produce a counter-example that can be used to trace the source of the error. In this paper, we have modeled a resource sharing based argumentation scheme between two agents. In this scenario we have considered a set of resources that are held by the agents. Agents negotiate over the possession of the resources needed by them to achieve their objectives. An agent wanting a resource makes an initial offer for the resource. The reacting agent, or the agent in possession of the resource, can either accept, reject or challenge the offer. Based on the move made by the reacting agent, the offering agent can either argue or close the dialogue. When an agent accepts a resource from another agent, a payment is made to the offering agent.
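Read as code, the bargaining protocol (BP) is a loop over rounds in which the agents' strategies stay abstract. The Python sketch below is our own illustration; the propose and respond callables are placeholders, since the protocol constrains only the moves, not how agents choose them.

```python
# Hedged sketch of the bargaining protocol (BP); strategy details are left
# to the caller because the protocol fixes the moves, not the choices.
def bargaining_protocol(initial_alloc, propose, respond, max_rounds=100):
    """propose(t, history) -> deal (lam0, lam_new, payment) or None;
    respond(deal) -> 'accept' | 'reject' | 'challenge'."""
    history = []
    for t in range(1, max_rounds + 1):
        deal = propose(t, history)           # step 1: a deal not proposed before
        if deal is None or deal in history:
            return ("disagreement", initial_alloc, 0)
        history.append(deal)
        move = respond(deal)                 # step 2: the reacting agent's move
        if move == "accept":                 # (i) terminate with the new allocation
            _, lam_new, payment = deal
            return ("agreement", lam_new, payment)
        if move == "reject":                 # (ii) terminate with the old allocation
            return ("disagreement", initial_alloc, 0)
        # (iii) 'challenge': proceed to step 1 at time t + 1
    return ("disagreement", initial_alloc, 0)
```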
Offering\_Agent(): ostate = init\_o; ostate = offer.
Reacting\_Agent(): rstate = init\_r; rstate = accept | reject | challenge; rstate = stop\_r.

We have developed two algorithms to demonstrate the behavior of the two agents. In algorithm \[alg1\], the offering agent makes an offer for a resource. After an offer is made, based on the move made by the reacting agent, the offering agent can either argue or stop the dialogue. In algorithm \[alg2\], when an offer is made for a resource, the reacting agent can either accept, refuse or challenge the offer.

Verification Results and Discussion =================================== Properties of the Multi-Agent Argumentation System are specified and evaluated in NuSMV [@racgemm]. The system is modeled and fed to the NuSMV tool [@ccg02]. We then construct CTL formulas which are, in effect, negations of the properties of the system. Each formula is verified by the NuSMV model checker and a counter trace is provided to illustrate that the negated formulas are false. We provide the trace after each specification.

| Sl. No. | Specification | Satisfiability |
|---------|---------------|----------------|
| 1 | AG(oagent=offer $\rightarrow$ AF !(ragent=accept \| ragent=refuse \| ragent=challenge)) | False (counter-example) |
| 2 | AG(ragent=accept $\rightarrow$ AG !(resource\[want\]=0)) | False (counter-example) |
| 3 | AG(complete $\rightarrow$ AF !(typeChal=0 & typeArg=0)) | False (counter-example) |

The first specification states that when an offering agent makes an offer, it will be neither accepted, refused nor challenged by the reacting agent. This is FALSE since the reacting agent has to take one of the three options it has, and hence NuSMV generates a counter-example. The trace shown indicates that the reacting agent challenges the offer made by the offering agent.
*AG(oagent=offer $\rightarrow$ AF !(ragent=accept|ragent=refuse|ragent=challenge))*.

![NuSMV Implementation of Specification 1[]{data-label="fig:spec1"}](fig2)

The second specification states that, if the reacting agent $j$ reaches a decision to accept the offer, then the resource does not move to the offering agent $i$. This is FALSE since the resource has to migrate, and hence NuSMV generates a counter-example. The trace indicates that when the offering agent makes an offer for a resource indicated by the variable 'want', and the reacting agent accepts the offer, the resource migrates to the offering agent and hence its value is not zero.
*AG(ragent=accept $\rightarrow$ AG !(resource\[want\]=0))*.

![NuSMV Implementation of Specification 2[]{data-label="fig:spec2"}](fig3)

The third specification states that more challenges and arguments are made once a decision has been reached. This is FALSE since no more challenges and arguments are made, and hence NuSMV generates a counter-example. The trace indicates that once the complete state is reached, both the offering and reacting agents reach their stop states and hence no more challenges are made.
*AG(complete $\rightarrow$ AF !(typeChal=0 & typeArg=0))*.

![NuSMV Implementation of Specification 3[]{data-label="fig:spec3"}](fig4)

Conclusion ========== In the future, distributed systems will be at the forefront. No distributed system can exist without collaboration. Each distributed system site can have an agent entity to voice its interests. It is not always the case that the interests of all sites will fall in line. This is when argumentation can be useful.
In this paper we have demonstrated a simple agent-based argumentation paradigm in which two agents argue over an offer made by one of them; this scenario can be extended to more than two agents. There can be several cycles of challenges and arguments over a proposal before the agents reach a feasible conclusion. We have modeled the situation and verified it using the NuSMV tool, and the results have been demonstrated. [^1]: Sanjay Singh is with the Department of Information and Communication Technology, Manipal Institute of Technology, Manipal University, Manipal-576104, INDIA, E-mail: [email protected]
Santa Claus brought a bag of mixed reactions to the Cape Breton Regional Municipality this year. On Saturday, thousands of people witnessed what was viewed as a much shorter version of the annual Sydney Christmas parade. Onlookers reported the seasonal convoy took less than 10 minutes to pass by in some cases. This was the first visit for Santa since CBRM councillors approved a motion to only allow parades to take place in daylight hours. Every year, parents and children line sidewalks for the annual event that begins in Whitney Pier. This year's parade was met with differing opinions. "My little one is obsessed with lights," said Kiersten Confiant, who brought her three-and-a-half-year-old daughter, Isla, all the way from Georges River to see the parade. "She's missing out on the lights in the dark. At the same time, in the daylight, she gets to see more of the floats." With their letters addressed to the North Pole in hand, Claire Finney, 10, and Kate Finney, 8, lamented the loss of the nighttime parade. "I liked it better at night so you could see the lights," said Claire. The sentiment was echoed by her younger sister, who noted that catching a glimpse of Santa was still a parade highlight. But some of the faces in Saturday's crowd said they preferred the switch-up. "My girls are very young and last year it was in the evening … it was dark, and it was much harder to keep an eye on them," said Savanna Paul. "I would prefer it being in the daytime." Paul's father, Thomas Warcop, agreed. "I think it's safer in the daytime," he said. "You can see better." Parade safety has become a growing concern in Nova Scotia following the death of a four-year-old girl who fell under a float during an evening Christmas parade in Yarmouth in 2018. After the tragedy, the CBRM decided to review its own parade regulations. Then, last August, a boy was taken to hospital after being struck by the rear wheel of a trailer during the annual Pride Cape Breton Parade in Sydney. According to CBRM's new rules, parade participants are no longer allowed to throw items such as candy from vehicles or floats. Parade routes are also limited to a maximum distance of four kilometres.
https://www.capebretonpost.com/news/local/mixed-reaction-to-santa-claus-parade-in-sydney-379942/
--- abstract: 'Treating the two-dimensional Minkowski space as a Wick rotated version of the complex plane, we characterize the causal automorphisms in two-dimensional Minkowski space as the Märzke-Wheeler maps of a certain class of observers. We also characterize the differentiable causal automorphisms of this space as the Minkowski conformal maps whose restriction to the time axis belongs to the class of observers mentioned above. We answer a recently raised question about whether causal automorphisms are characterized by their wave equation. As another application of the theory, we give a proper time formula for accelerated observers which solves the twin paradox in two-dimensional Minkowski spacetime.' address: | Instituto de Matemáticas, Universidad Nacional Autónoma de México, Unidad Cuernavaca.\ Av. Universidad s/n, Col. Lomas de Chamilpa. Cuernavaca, Morelos México, 62209. author: - Juan Manuel Burgos title: 'Two-dimensional Minkowski causal automorphisms and conformal maps' --- Published under minor corrections in *Classical and Quantum Gravity*. Introduction ============ In 1964, Zeeman [@Ze] proved the following rigidity theorem on causal automorphisms: In $n\geq3$ dimensional Minkowski spacetime, every causal automorphism is the composite of a translation, a dilation and an orthochronous Lorentz transformation. Recently, the solution to the long-standing problem of the characterization of causal automorphisms in two dimensional spacetime was given by Kim [@Ki] (see also [@Lo]). Treating the two-dimensional Minkowski space as a Wick rotated version of the complex plane, this paper gives another equivalent characterization of causal automorphisms in terms of Märzke-Wheeler maps and proves for the first time that differentiable causal automorphisms are in fact conformal isometries. Moreover, we prove the following characterization of differentiable causal automorphisms in terms of Minkowski conformal maps: In two dimensional Minkowski spacetime, $F$ is a $C^{1}$ causal automorphism if and only if $F$ is a $C^{1}$ Minkowski conformal map whose restriction to the time axis intersects every lightray. The above is a new result not included in the well-known theorem by Hawking [@HKM]: A causal isomorphism between strongly causal spacetimes of dimension strictly greater than two is a conformal isometry. In particular, the characterization of causal automorphisms in terms of Märzke-Wheeler maps gives a negative answer to a recently raised question posed by Low [@Lo] and commented on in [@Ki2], who wonders whether two dimensional Minkowski $C^{2}$ causal automorphisms are characterized by their wave equation. However, we prove the following characterization for two dimensional Minkowski $C^{2}$ causal automorphisms: $F$ is a $C^{2}$ causal automorphism if and only if $F$ is a $\mathcal{M}_{2}$-holomorphic or $\mathcal{M}_{2}$-antiholomorphic $C^{2}$ map (see Definition \[DefHolomorphic\]) whose restriction to the time axis intersects every lightray. Finally, a proper time formula for accelerated observers in two dimensional Minkowski spacetime is given in the appendix. This formula solves the twin paradox in this space and reproduces the well-known slowing down of clocks in the gravitational acceleration direction. Preliminaries ============= In what follows we will denote by $\mathcal{M}_{2}$ the two-dimensional Minkowski space and all causal morphisms $F$ are intended to be $F:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$.
The associated curve of a function $F:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$ is the function restricted to the time axis; i.e. the function $\gamma$ such that $\gamma(s)=F(s,0)$ for every real $s$ (in general, $F$ will be continuous so $\gamma$ will be a continuous curve). Unless explicitly stated, differentiable maps are $C^{1}$ maps and continuity is relative to the Euclidean topology. Consider a pair of points $x$ and $y$ in $\mathcal{M}_{2}$. We say that: 1. $x$ causally precedes $y$ ($x<y$) if $y-x$ is a future directed null vector. 2. $x$ chronologically precedes $y$ ($x<<y$) if $y-x$ is a future directed timelike vector. ![Timelike future and past directed regions[]{data-label="Conos"}](Conos.png "fig:"){height="40.00000%"}\ It will be convenient to define the following causal and chronological regions (see Figure \[Conos\]): - $\mathcal{C}_{N}^{+}(p)=\{x\in \mathcal{M}_{2}\ /\ p<x\}$ - $\mathcal{C}_{N}^{-}(p)=\{x\in \mathcal{M}_{2}\ /\ x<p\}$ - $\mathcal{C}_{T}^{+}(p)=\{x\in \mathcal{M}_{2}\ /\ p<<x\}$ - $\mathcal{C}_{T}^{-}(p)=\{x\in \mathcal{M}_{2}\ /\ x<<p\}$ - $\mathcal{C}_{N}(p)=\mathcal{C}_{N}^{+}(p)\cup \mathcal{C}_{N}^{-}(p)$ - $\mathcal{C}_{T}(p)=\mathcal{C}_{T}^{+}(p)\cup \mathcal{C}_{T}^{-}(p)$ \[DefCausal\] $F$ is a causal morphism if $F$ preserves $<$; i.e. $x<y$ implies $F(x)<F(y)$. $F$ is a causal automorphism if $F$ is bijective and $F$ and $F^{-1}$ preserve $<$. In particular, if $F$ is a causal morphism then $F(l)\subset l'$ where $l$ and $l'$ are lightrays. Moreover, if $F$ is a causal automorphism then $F(l)=l'$ where $l$ and $l'$ are lightrays. For the following lemma see [@Na]. \[equivalence\] $F$ is a causal automorphism if and only if $F$ is bijective and $F$ and $F^{-1}$ preserve $<<$. In two-dimensional Minkowski space $\mathcal{M}_{2}$ we can distinguish between right and left moving lightrays. Thus, for every point $p$ in $\mathcal{M}_{2}$ there exists a unique pair consisting of a left and a right moving lightray, $l_{L}(p)$ and $l_{R}(p)$ respectively, such that $\{p\}=l_{L}(p)\cap l_{R}(p)$. By definition, it is clear that a causal automorphism maps parallel lightrays into parallel lightrays. This motivates the following definition: A causal morphism $F$ is orientation preserving (reversing) if $F$ maps right moving lightrays into right (left) moving lightrays. We will see later that for $C^{1}$ causal automorphisms, the previous definition is equivalent to the differentiable one [@War]. It is important to remark that the concept of lightray in two dimensional spacetime is purely mathematical, for electromagnetic theory in this dimension has no propagation. This is because there is no magnetic field so there are no electromagnetic waves. Moreover, the photon of two dimensional $QED$ is a free massive boson [@Sc]. Algebra of the two-dimensional Spacetime ======================================== Consider the associative real algebra $A={{\mathbb R}}[\sigma]$ such that $\sigma^{2}=1$; i.e. $$A={{\mathbb R}}[x]/\langle\ x^{2}-1\ \rangle$$ with the conjugation $\overline{a+b\ \sigma}=a-b\ \sigma$. Defining $|a|_{L}^{2}= \bar{a}\ a$ we have that $$|a\cdot b|_{L}^{2}=|a|_{L}^{2}\ |b|_{L}^{2}$$ This quadratic form comes from the inner product $\langle a,\ b \rangle= \Pi^{0}(\bar{a}\cdot b)$ where $\Pi^{0}$ is the projection over the first coordinate.
In what follows, we will identify the two dimensional spacetime $\mathcal{M}_{2}$ with the algebra $A$ through the correspondence (see Figure \[SpaceTimeAlgebra\]) $$(x, c\ t)\leftrightarrow c\ t +x\ \sigma$$ where $c$ is the speed of light. It is interesting to see that this algebra encodes the usual special relativistic kinematic relations. The $2$-velocity $u$ associated to the $1$-velocity $v$ is $$u=\frac{c+v\ \sigma}{|c+v\ \sigma|_{L}}= \frac{1+\frac{v}{c}\sigma}{\sqrt{1-\frac{v^{2}}{c^{2}}}}$$ and the Lorentz transformation is just $p'=u\cdot p$; i.e. $$p'=u\cdot p= \frac{1+\frac{v}{c}\sigma}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\cdot(ct+x\ \sigma)=c\ \frac{t+\frac{v}{c^{2}}x}{\sqrt{1-\frac{v^{2}}{c^{2}}}}+ \frac{x+vt}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\ \sigma$$ The addition of velocities formula is just the product of the respective $2$-velocities: $$u\cdot u'=\frac{1+\frac{v}{c}\sigma}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\cdot\frac{1+\frac{w}{c}\sigma}{\sqrt{1-\frac{w^{2}}{c^{2}}}}= \frac{1+\frac{v\ast w}{c}\sigma}{\sqrt{1-\frac{(v\ast w)^{2}}{c^{2}}}}$$ where $$v\ast w= \frac{v+w}{1+\frac{v\ w}{c^{2}}}$$ This way, Special Relativity in two dimensional spacetime becomes a Wick rotated version of the complex numbers. ![Two dimensional Spacetime Algebra[]{data-label="SpaceTimeAlgebra"}](SpaceTimeAlgebra.png "fig:"){height="35.00000%"}\ Märzke-Wheeler Map ================== An observer is a continuous curve $\gamma: {{\mathbb R}}\rightarrow \mathcal{M}_{2}$ such that $\gamma(t)\in \mathcal{C}_{T}^{+}(\gamma(s))$ for every $t>s$ and $\gamma(t)\in \mathcal{C}_{T}^{-}(\gamma(s))$ for every $t<s$, for every real $s$. We define the Märzke-Wheeler map $\Omega_{\gamma}:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$ of an observer $\gamma$ as follows: $$\{\Omega_{\gamma}(p)\}= l_{L}(\gamma(s_{L}))\cap l_{R}(\gamma(s_{R}))$$ such that $\{(s_{L},0)\}=l_{L}(p)\cap \left({{\mathbb R}}\times\{0\}\right)$ and $\{(s_{R},0)\}=l_{R}(p)\cap \left({{\mathbb R}}\times\{0\}\right)$ (see Figure \[MW\_Coord\]). This map [@MW] is clearly an extension of the Einstein synchronization convention for non accelerated observers. ![Märzke-Wheeler map[]{data-label="MW_Coord"}](MWmap.png "fig:"){height="40.00000%"}\ \[MWformula\] Consider an observer $\gamma$. Then, $$\Omega_{\gamma}(s+x\sigma)= \frac{\gamma(s+x)+\gamma(s-x)}{2}+\frac{\gamma(s+x)-\gamma(s-x)}{2}\sigma$$ [[*Proof:* ]{}]{}$$\begin{aligned} |\Omega_{\gamma}(s+x\sigma)-\gamma(s\pm x)|_{L}^{2} &=& |\frac{\gamma(s+x)-\gamma(s-x)}{2}\cdot(\sigma\mp 1)|_{L}^{2} \\ &=& |\frac{\gamma(s+x)-\gamma(s-x)}{2}|_{L}^{2}\ |(\sigma\mp 1)|_{L}^{2}=0\end{aligned}$$ because $|(\sigma\mp 1)|_{L}^{2}=0$. [$\square$]{} \[MWCont\] The Märzke-Wheeler map of an observer is continuous. If $\gamma$ is a $C^{1}$ observer then the Märzke-Wheeler map verifies the relation $$\partial_{0}\Omega_{\gamma}= \sigma\ \partial_{1}\Omega_{\gamma}$$ The above property implies that the Märzke-Wheeler map has a wave like motion: $$\Box\ \Omega_{\gamma}=0$$ such that $\Omega_{\gamma}(s)= \gamma(s)$ for every real $s$ and $C^{2}$ observer $\gamma$. This motivates the following definition: \[DefHolomorphic\] We say that a function $F:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$ is $\mathcal{M}_{2}$-holomorphic if $\partial_{0}F= \sigma\ \partial_{1}F$ and $\mathcal{M}_{2}$-antiholomorphic if $\partial_{0}F= -\sigma\ \partial_{1}F$. If $F:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$ is a $\mathcal{M}_{2}$-holomorphic function, we define its $\mathcal{M}_{2}$-derivative as $DF=\partial_{0}F= \sigma\ \partial_{1}F$. 
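The algebra above, together with Lemma \[MWformula\], lends itself to a quick numerical check. The Python sketch below implements the split-complex product, verifies the multiplicativity of $|\cdot|_{L}^{2}$, and confirms that $\Omega_{\gamma}(s+x\sigma)$ is null-separated from $\gamma(s\pm x)$; the uniformly accelerated worldline $\gamma$ used here is our own choice of example, not one taken from the paper.

```python
# Numerical sketch (not from the paper) of the algebra A = R[sigma], sigma^2 = 1,
# with elements written as pairs a = t + x*sigma and units chosen so that c = 1.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class M2:
    t: float  # coefficient of 1 (time)
    x: float  # coefficient of sigma (space)

    def __add__(self, other):
        return M2(self.t + other.t, self.x + other.x)

    def __sub__(self, other):
        return M2(self.t - other.t, self.x - other.x)

    def __mul__(self, other):
        # (t + x s)(t' + x' s) = (t t' + x x') + (t x' + x t') s, using s^2 = 1
        return M2(self.t * other.t + self.x * other.x,
                  self.t * other.x + self.x * other.t)

    def conj(self):
        return M2(self.t, -self.x)

    def norm2(self):
        # |a|_L^2 = conj(a) * a = t^2 - x^2 (the Minkowski quadratic form)
        return (self.conj() * self).t

# Multiplicativity: |a b|^2 = |a|^2 |b|^2.
a, b = M2(2.0, 1.0), M2(0.5, -3.0)
assert abs((a * b).norm2() - a.norm2() * b.norm2()) < 1e-9

def gamma(s: float) -> M2:
    # Example observer: a uniformly accelerated (hyperbolic) worldline.
    return M2(math.sinh(s), math.cosh(s) - 1.0)

def omega(s: float, x: float) -> M2:
    # Lemma [MWformula]: Omega(s + x sigma)
    #   = (gamma(s+x) + gamma(s-x))/2 + ((gamma(s+x) - gamma(s-x))/2) sigma
    A, B = gamma(s + x), gamma(s - x)
    half, sigma = M2(0.5, 0.0), M2(0.0, 1.0)
    return half * (A + B) + (half * (A - B)) * sigma

# Omega(s + x sigma) must be lightlike-separated from gamma(s +/- x).
s, x = 0.7, 0.3
p = omega(s, x)
assert abs((p - gamma(s + x)).norm2()) < 1e-12
assert abs((p - gamma(s - x)).norm2()) < 1e-12
print("multiplicativity and null-separation checks passed")
```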
It is clear that $\mathcal{M}_{2}$-holomorphic and $\mathcal{M}_{2}$-antiholomorphic functions have wave-like behavior. \[MWConformal\] If $\gamma$ is a $C^{1}$ observer then $\Omega_{\gamma}$ is a conformal map and its conformal factor is $|D\Omega_{\gamma}|_{L}^{2}$. [[*Proof:* ]{}]{}$$\begin{aligned} \langle \partial_{i}\Omega_{\gamma},\ \partial_{j}\Omega_{\gamma} \rangle &=& \langle D\Omega_{\gamma}\ \varepsilon_{i},\ D\Omega_{\gamma}\ \varepsilon_{j} \rangle= \Pi^{0}( \overline{D\Omega_{\gamma}\ \varepsilon_{i}}\ D\Omega_{\gamma}\ \varepsilon_{j}) \\ &=& \Pi^{0}( \overline{\varepsilon_{i}}\ \overline{D\Omega_{\gamma}}\ D\Omega_{\gamma}\ \varepsilon_{j})=|D\Omega_{\gamma}|_{L}^{2}\ \Pi^{0}( \overline{\varepsilon_{i}}\ \varepsilon_{j}) \\ &=& |D\Omega_{\gamma}|_{L}^{2}\ \langle \varepsilon_{i},\ \varepsilon_{j} \rangle=|D\Omega_{\gamma}|_{L}^{2}\ \eta_{ij}\end{aligned}$$ where $\varepsilon_{0}=1$ and $\varepsilon_{1}=\sigma$. [$\square$]{} The above proposition shows that, in two dimensional flat spacetime, passing to the coordinates of an accelerated observer amounts to a conformal map. Because $R_{0101}=0$ where $R$ is the Riemann curvature tensor, we conclude the interesting fact that the logarithm of the conformal factor of the Märzke-Wheeler map also satisfies the wave equation: $$\Box\ \ln g=0$$ where $g=|D\Omega_{\gamma}|_{L}^{2}$ for $C^{2}$ observers. \[MWCausal\] If $\gamma$ is an observer then $\Omega_{\gamma}$ is an orientation preserving continuous causal morphism. [[*Proof:* ]{}]{}By Lemma \[MWformula\] and the fact that $\gamma$ is continuous, it is clear that $\Omega_{\gamma}$ is continuous. By definition, $\Omega_{\gamma}$ preserves $<$ and maps right (left) moving lightrays into right (left) moving lightrays. [$\square$]{} $C^{1}$ Causal Automorphisms as Conformal Isometries ==================================================== \[Def\] We say an observer $\gamma:{{\mathbb R}}\rightarrow \mathcal{M}_{2}$ verifies the Lightray Intersecting Property (LIP) if every lightray intersects $\gamma({{\mathbb R}})$. \[causalCont\] If $F$ is a causal automorphism then its associated curve is an observer; i.e. $\gamma$ is an observer where $\gamma(s)=F(s)$ for every real $s$. [[*Proof:* ]{}]{}To prove that $\gamma:{{\mathbb R}}\rightarrow \mathcal{M}_{2}$ is continuous it is enough to show that $\gamma$ maps a monotone convergent sequence into a convergent sequence. Consider a strictly ascending convergent sequence $(s_{n})$ of real numbers such that $s_{n}\rightarrow s_{0}$. Because of Lemma \[equivalence\], $$\gamma(s_{1})<<\gamma(s_{2})<<\gamma(s_{3})\ldots <<\gamma(s_{0}) \label{RelCausal}$$ and we have that $\gamma(s_{n})\in \overline{\mathcal{C}_{T}^{+}(\gamma(s_{1}))\cap \mathcal{C}_{T}^{-}(\gamma(s_{0}))}$ for every $n\geq 1$. In particular, the sequence $(\gamma(s_{n}))$ is contained in a compact set and by the Bolzano-Weierstrass Theorem, $(\gamma(s_{n}))$ has a limit point $p$ such that $$\gamma(s_{1})<<\gamma(s_{2})<<\gamma(s_{3})\ldots <<p \label{RelCausal2}$$ Indeed, if there were a natural $N$ such that $\gamma(s_{N})$ did not chronologically precede $p$, then $\overline{\mathcal{C}_{T}^{+}(\gamma(s_{N+1}))}\cap \overline{\mathcal{C}_{T}^{-}(p)}=\emptyset$, which would imply (because of (\[RelCausal\])) that $p$ is not a limit point of the subsequence $(\gamma(s_{n}))_{n>N}$, which is absurd. Consider an open disk $D_{\varepsilon}(p)$ centered at $p$ of radius $\varepsilon$.
There is a natural $m$ such that $\gamma(s_{m})\in D_{\varepsilon}(p)$ and because of (\[RelCausal2\]), for every $n>m$ we have $$\gamma(s_{n})\in\mathcal{C}_{T}^{+}(\gamma(s_{m}))\cap \mathcal{C}_{T}^{-}(p)\subset D_{\varepsilon}(p)$$ and we conclude that $(\gamma(s_{n}))\rightarrow p$. In particular, $$\overline{\mathcal{C}_{T}^{+}(p)}= \bigcap_{n\in {{\mathbb N}}} \mathcal{C}_{T}^{+}(\gamma(s_{n}))$$ If $p=\gamma(s_{0})$ we are done. If not, $\mathcal{C}_{T}^{+}(p)\cap \mathcal{C}_{T}^{-}(\gamma(s_{0}))$ is an open non empty set such that $$F^{-1}(\mathcal{C}_{T}^{+}(p)\cap \mathcal{C}_{T}^{-}(\gamma(s_{0})))\subset \left(\bigcap_{n\in {{\mathbb N}}} F^{-1}(\mathcal{C}_{T}^{+}(\gamma(s_{n})))\right)\cap F^{-1}(\mathcal{C}_{T}^{-}(\gamma(s_{0})))=$$ $$=\left(\bigcap_{n\in {{\mathbb N}}} (s_{n}, +\infty)\right)\cap (-\infty, s_{0})=[s_{0}, +\infty)\cap (-\infty, s_{0})=\emptyset$$ which is absurd. The argument for a descending sequence is analogous. We have shown that $\gamma$ is continuous. It is clear that the continuous curve $\gamma$ is an observer, since $$\gamma(s+h)=F(s+h)>>F(s)=\gamma(s)$$ because $(s+h,0)>>(s,0)$ for every real $s$ and every $h>0$. [$\square$]{} \[LIP\] Consider a causal morphism $F$. Then, $F$ is a causal automorphism if and only if its associated curve $\gamma$ is an observer verifying LIP. [[*Proof:* ]{}]{}Suppose that $F$ is a causal automorphism. Because of Lemma \[causalCont\], $\gamma$ is an observer. Consider a lightray $l$. There is a unique lightray $l'$ such that $F(l')=l$, and $l'$ intersects the time axis at the real number $s_{0}$. Then, $l$ intersects $\gamma$ at the point $\gamma(s_{0})$. Conversely, consider a point $p$ in $\mathcal{M}_{2}$. There is a unique pair consisting of a left and a right moving lightray, $l_{L}(p)$ and $l_{R}(p)$ respectively, such that $\{p\}=l_{L}(p)\cap l_{R}(p)$. Because $\gamma$ is an observer verifying LIP, $\{\gamma(s_{L})\}=l_{L}(p)\cap \gamma({{\mathbb R}})$ and $\{\gamma(s_{R})\}=l_{R}(p)\cap \gamma({{\mathbb R}})$. Because $F$ is a causal morphism, if $F$ is orientation preserving then $p=F(p')$ such that $\{p'\}=l_{L}(s_{L})\cap l_{R}(s_{R})$ for $$\{F(p')\}=F(l_{L}(s_{L})\cap l_{R}(s_{R}))= F(l_{L}(s_{L}))\cap F(l_{R}(s_{R}))\subset l_{L}(\gamma(s_{L}))\cap l_{R}(\gamma(s_{R}))=$$ $$= l_{L}(p)\cap l_{R}(p)= \{p\}$$ and $p'$ is the unique point verifying that property. If $F$ is orientation reversing then $p=F(p')$ such that $\{p'\}=l_{L}(s_{R})\cap l_{R}(s_{L})$. [$\square$]{} \[CausalimplicaMW\] $F:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$ is a causal automorphism if and only if its associated curve $\gamma$ is an observer which verifies LIP and 1. $F=\Omega_{\gamma}$ if $F$ is orientation preserving. 2. $F=\Omega_{\gamma}\circ \bar{z}$ if $F$ is orientation reversing, where $\bar{z}$ is the conjugate map. [[*Proof:* ]{}]{}Suppose that $F$ is orientation preserving. Because $F$ is a causal automorphism, by Lemmas \[causalCont\] and \[LIP\] $\gamma$ is an observer verifying LIP and then by Lemmas \[MWCausal\] and \[LIP\] $\Omega_{\gamma}$ is also a causal automorphism. By Definition \[DefCausal\] and the remark below it, $F(l)= l'$ where $l$ and $l'$ are lightrays, and this property is also verified by $\Omega_{\gamma}$. Because $\gamma(s)=F(s)$ for every real $s$ and $F$ is orientation preserving, then $$F(l)=\Omega_{\gamma}(l)$$ where $l$ is a lightray.
Every point $p$ in $\mathcal{M}_{2}$ is the intersection of a unique pair of left and right moving lightrays $l_{L}(p)$ and $l_{R}(p)$; i.e. $\{p\}=l_{L}(p)\cap l_{R}(p)$. Then we have $$\{F(p)\}= F(l_{L}(p)\cap l_{R}(p))= F(l_{L}(p))\cap F(l_{R}(p))= \Omega_{\gamma}(l_{L}(p))\cap \Omega_{\gamma}(l_{R}(p))=$$ $$= \Omega_{\gamma}(l_{L}(p)\cap l_{R}(p))= \{\Omega_{\gamma}(p)\}$$ and we get the result. For the orientation reversing case just replace $\Omega_{\gamma}$ by $\Omega_{\gamma}\circ \bar{z}$ in the previous proof. Because of Lemmas \[MWCausal\] and \[LIP\], the converse follows. [$\square$]{} The above characterization in terms of Märzke-Wheeler maps agrees with the one given in [@Ki] and [@Lo]. In particular, we have shown that, for $C^{1}$ causal automorphisms, the lightray definition of orientation preserving and reversing is equivalent to the usual differentiable notion. Because of Lemmas \[MWCausal\] and \[MWConformal\], the above theorem implies: In two dimensional Minkowski space, every causal automorphism is continuous and every $C^{1}$ causal automorphism is a conformal isometry. \[C\] Let $F:\mathcal{M}_{2}\rightarrow \mathcal{M}_{2}$ be a conformal map such that its associated curve $\gamma$ is an observer verifying LIP. Then, 1. $F=\Omega_{\gamma}$ if $F$ is orientation preserving. 2. $F=\Omega_{\gamma}\circ \bar{z}$ if $F$ is orientation reversing. [[*Proof:* ]{}]{}Because $F$ is a conformal map, its differential $DF$ maps null vectors into null vectors, and because $F$ is $C^{1}$, we conclude that $F(l)\subset l'$ where $l$ and $l'$ are lightrays. Suppose that $F$ is orientation preserving (in the usual sense). By hypothesis, $\gamma$ is an observer verifying LIP and then by Lemmas \[MWCausal\] and \[LIP\], $\Omega_{\gamma}$ is a causal automorphism. In particular, $\Omega_{\gamma}(l)= l'$ where $l$ and $l'$ are lightrays. Because $\gamma(s)=F(s)$ for every real $s$ and $F$ is orientation preserving, then $$F(l)\subset\Omega_{\gamma}(l)$$ where $l$ is a lightray. Every point $p$ in $\mathcal{M}_{2}$ is the intersection of a unique pair of left and right moving lightrays $l_{L}(p)$ and $l_{R}(p)$; i.e. $\{p\}=l_{L}(p)\cap l_{R}(p)$. Then we have $$\{F(p)\}= F(l_{L}(p)\cap l_{R}(p))= F(l_{L}(p))\cap F(l_{R}(p))\subset \Omega_{\gamma}(l_{L}(p))\cap \Omega_{\gamma}(l_{R}(p))=$$ $$= \Omega_{\gamma}(l_{L}(p)\cap l_{R}(p))= \{\Omega_{\gamma}(p)\}$$ and we get the result. For the orientation reversing case just replace $\Omega_{\gamma}$ by $\Omega_{\gamma}\circ \bar{z}$ in the previous proof. [$\square$]{} We have shown the following characterization theorem: In two dimensional Minkowski spacetime, $F$ is a $C^{1}$ causal automorphism if and only if $F$ is a Minkowski conformal map whose associated curve $\gamma$ is an observer verifying LIP. $C^{2}$ Causal Automorphisms as Minkowski (anti) Holomorphic Maps ================================================================= Recently, Low [@Lo] (page 4) has raised the question of whether causal automorphisms are characterized by wave equations: *“Comment: It is also worth observing that by considering the situation in terms of Cartesian coordinates, we can see that X and T are both given by solutions of the wave equation on $M^{2}$ (at least in the case where they are sufficiently differentiable).
It would be interesting to know whether there is a useful characterization of just which solutions of the wave equation give rise to causal automorphisms of $M^{2}$."* Paraphrased in our terms, Low asks whether a solution $F$ of $\Box F=0$, whose associated curve $\gamma$ (i.e. $\gamma(s)=F(s)$ for every real $s$) is an observer verifying LIP, is necessarily a causal automorphism. Proposition \[CausalimplicaMW\] gives a negative answer to that question, since a causal automorphism must be $\mathcal{M}_{2}$-holomorphic or $\mathcal{M}_{2}$-antiholomorphic, while the general solution of the wave equation is a linear combination of both. For example, consider a pair of observers $\gamma_{1}$ and $\gamma_{2}$ and the function $F=\Omega_{\gamma_{1}}+\Omega_{\gamma_{2}}\circ \bar{z}$. Then, $F$ is a solution of the wave equation whose associated curve is the observer $\gamma_{1}+\gamma_{2}$. However, by the previous theorem, if $F$ were a causal automorphism then $F=\Omega_{\gamma_{1}+\gamma_{2}}$ or $F=\Omega_{\gamma_{1}+\gamma_{2}}\circ \bar{z}$; since $F$ is in general neither of these, we conclude that $F$ is not a causal automorphism. However, we can give the following characterization for $C^{2}$ causal automorphisms: \[unicidad\] If $F$ is a $\mathcal{M}_{2}$-holomorphic or $\mathcal{M}_{2}$-antiholomorphic $C^{2}$ function whose associated curve is zero then $F=0$. [[*Proof:* ]{}]{}In this proof we forget the algebraic structure considered so far and treat $\mathcal{M}_{2}$ just as a real vector space. Suppose $F$ is $\mathcal{M}_{2}$-holomorphic and write $F(x,y)= (P(x,y),\ Q(x,y))$. Then, $$\partial_{x}P=\partial_{y}Q$$ $$\partial_{y}P=\partial_{x}Q$$ such that $P(0,y)=Q(0,y)=0$ for every real $y$. Because $P$ and $Q$ are $C^{2}$ real functions, the above equations and constraints are equivalent to the following: $$Q(x,y)=\int_{0}^{x}dx'\ \partial_{y}P(x',y)$$ such that $\Box P=0$, $P(0,y)=0$ and $\partial_{x}P(0,y)=0$ for every real $y$. These constraints imply that $P=0$, so $Q=0$ as well. We have proved that $F=0$. The $\mathcal{M}_{2}$-antiholomorphic case is similar. [$\square$]{} $F$ is a $C^{2}$ causal automorphism if and only if $F$ is a $\mathcal{M}_{2}$-holomorphic or $\mathcal{M}_{2}$-antiholomorphic $C^{2}$ function whose associated curve is an observer verifying LIP. [[*Proof:* ]{}]{}The direct implication follows from Proposition \[CausalimplicaMW\]. For the converse, suppose that $F$ is a $\mathcal{M}_{2}$-holomorphic $C^{2}$ function whose associated curve is an observer $\gamma$. This way, $\gamma$ is $C^{2}$ and $\Omega_{\gamma}$ is also a $\mathcal{M}_{2}$-holomorphic $C^{2}$ function whose associated curve is the observer $\gamma$. Then, $F-\Omega_{\gamma}$ is a $\mathcal{M}_{2}$-holomorphic $C^{2}$ function whose associated curve is zero, and by Lemma \[unicidad\], $F=\Omega_{\gamma}$. Lemmas \[MWCausal\] and \[LIP\] imply that $F$ is a causal automorphism. For the $\mathcal{M}_{2}$-antiholomorphic case just replace $\Omega_{\gamma}$ by $\Omega_{\gamma}\circ \bar{z}$ in the previous proof. [$\square$]{} Although a causal automorphism is not characterized by a wave equation alone, we have the following characterization in terms of it: Consider a $C^{2}$ observer $\gamma$ verifying LIP.
$F$ is a $C^{2}$ causal automorphism whose associated observer is $\gamma$ if and only if $$Q(x,y)= q(y)\pm\int_{0}^{x}dx'\ \partial_{y}P(x',y)$$ such that $P$ verifies the following wave equation: $$\begin{aligned} \Box\ P &=& 0 \\ P(0,y) &=& p(y) \\ \partial_{x}P(0,y) &=& \pm q'(y)\end{aligned}$$ for every real $y$, where $\gamma(y)= (p(y), q(y))$ and $F(x,y)= (P(x,y), Q(x,y))$ for every pair of reals $x$ and $y$. Proper time formula for Accelerated Observers in two dimensional Minkowski Spacetime ==================================================================================== In Special Relativity, the proper time of a given timelike continuous future directed curve $\alpha$ is $$\Delta\tau=\frac{1}{c}\int ds= \int\ \sqrt{1-\frac{v(t)^{2}}{c^{2}}}\ dt \label{ProperTimeFormula}$$ where $v$ is $\alpha$’s $1$-velocity measured by an inertial observer $\gamma$. Following the Märzke-Wheeler synchronization convention for accelerated observers, by Lemma \[MWConformal\] we have the following proper time formula relative to an accelerated observer $\gamma$: $$\Delta\tau=\frac{1}{c}\int ds= \int\ |D\Omega_{\gamma}|_{L}(x(t), ct)\ \sqrt{1-\frac{v(t)^{2}}{c^{2}}}\ dt \label{ProperTimeAccFormula}$$ where $v(t)$ is $\alpha$’s $1$-velocity and $x(t)$ is $\alpha$’s position measured by the accelerated observer $\gamma$ at the instant $t$. Formula \[ProperTimeAccFormula\] simplifies to \[ProperTimeFormula\] if the observer is an inertial one, since the conformal factor is one in this case. This way, formula \[ProperTimeAccFormula\] is a generalization of \[ProperTimeFormula\]. The twin paradox is the following: consider a pair of twins $A$ and $B$ who use formula \[ProperTimeFormula\] to calculate each other's proper time. Twin $B$ launches in an accelerating space shuttle and then comes back, while twin $A$ stays at the air base as an inertial observer. $B$ finds that $A$ is younger than him, while at the same time $A$ finds that $B$ is younger than him; so who is the younger one? The origin of the twin paradox is the abuse of assuming that certain formulas deduced for inertial observers remain valid for non-inertial ones. Formula \[ProperTimeAccFormula\] resolves the twin paradox: the accelerated twin $B$ calculates the proper time of the inertial twin $A$ with formula \[ProperTimeAccFormula\] instead of \[ProperTimeFormula\]. This way both twins agree that $B$ is the younger one. Without an explicit proper time formula, a similar approach to the twin paradox is also studied in [@PV]. Because of the fact that $$\Box\ \ln g=0$$ where $g=|D\Omega_{\gamma}|_{L}^{2}$ for $C^{2}$ observers, we have shown that the proper time formula for an observer in two dimensional flat spacetime is: $$\Delta\tau= \int\ e^{h(x(t), ct)}\ \sqrt{1-\frac{v(t)^{2}}{c^{2}}}\ dt \label{ProperTimeAccGenFormula}$$ where $h$ is a scalar field such that $$\Box\ h=0$$ As an application of formula \[ProperTimeAccFormula\] consider a uniformly accelerated observer: $$\gamma(s)= \frac{c^{2}}{a}\ \exp\left(\frac{a\ s}{c^{2}}\ \sigma\right)\ \sigma$$ where $a$ denotes its acceleration and $c$ the speed of light.
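Before its Märzke-Wheeler map is computed below, one can check numerically that this curve is indeed an observer, i.e. that its tangent is a future directed unit timelike vector (a sketch with $c=a=1$, reusing the Split class from the sketch in the algebra section; not part of the original text):

```python
# gamma(s) = exp(s*sigma)*sigma with c = a = 1: a future directed,
# unit-speed timelike curve, hence an observer in the sense defined earlier.
import math

def gamma(s):
    e = Split(math.cosh(s), math.sinh(s))   # exp(s*sigma) = cosh s + sinh s * sigma
    return e * Split(0.0, 1.0)              # multiply by sigma

for s in [-2.0, 0.0, 1.5]:
    h = 1e-6                                # central-difference tangent
    d = Split((gamma(s + h).a - gamma(s - h).a) / (2 * h),
              (gamma(s + h).b - gamma(s - h).b) / (2 * h))
    assert d.norm2() > 0 and d.a > 0        # timelike and future directed
    assert abs(d.norm2() - 1.0) < 1e-6      # unit speed: |gamma'(s)|_L^2 = 1
```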
By Lemma \[MWformula\], the Märzke-Wheeler map $\Omega_{\gamma}$ of the observer $\gamma$ is: $$\Omega_{\gamma}(z)= \frac{c^{2}}{a}\ \exp\left(\frac{a\ z}{c^{2}}\ \sigma\right)\ \sigma$$The conformal factor is: $$|D\Omega_{\gamma}|_{L}(s + x\sigma)=\exp\left(\frac{a\ x}{c^{2}}\right)$$ By the equivalence principle, we can think of the observer as being at rest in a constant gravitational field with gravitational acceleration $g=-a$. By formula \[ProperTimeAccFormula\] we have that: $$\Delta\tau(x)= \exp\left(-\frac{g\ x}{c^{2}}\right)\ \Delta t$$ where $\Delta\tau(x)$ is the time interval measured at $x$ by the observer $\gamma$. This way we have the formula: $$\Delta\tau(x_{2})= \exp\left(-\frac{g\ \Delta x}{c^{2}}\right)\ \Delta\tau(x_{1})$$ which expresses the well-known slowing down of clocks in the direction of the gravitational acceleration [@Wa]. References {#references .unnumbered} ========== Hawking S, King A, McCarthy P, [*A new topology for curved space–time which incorporates the causal, differential, and conformal structures*]{} J. Math. Phys. **17** (1976) 174-181. Kim D, [*Causal automorphisms of two dimensional Minkowski spacetime*]{} Class. Quantum Grav. **27** (2010) 075006. Kim D, [*A characterization of causal automorphisms by wave equations*]{} J. Math. Phys. **53** (2012) 032507. Low R, [*Characterizing the causal automorphisms of 2D Minkowski space*]{} Class. Quantum Grav. **28** (2011) 225009. Märzke M, Wheeler J, [*Gravitation as geometry - I: the geometry of Spacetime and the geometrodynamical standard meter*]{} Gravitation and relativity, H.-Y. Chiu and W. F. Hoffmann, eds., W. A. Benjamin, New York-Amsterdam (1964) 40. Naber G, [*The Geometry of Minkowski Spacetime*]{} Springer-Verlag, New York (1992). Pauri M, Vallisneri M, [*Märzke-Wheeler coordinates for accelerated observers in special relativity*]{} Found. Phys. Lett. **13** (2000) 401. Schwinger J, [*Gauge Invariance and Mass. II.*]{} Phys. Rev. **128** (1962) 2425. Wald R, [*General Relativity*]{} University of Chicago Press (1984). Warner F, [*Foundations of Differentiable Manifolds and Lie Groups*]{} Springer-Verlag (1983). Zeeman E, [*Causality implies the Lorentz group*]{} J. Math. Phys. **5** (1964) 490-493.
That book inspired him to learn more about George Washington. So he's finishing Ron Chernow's Pulitzer Prize-winning tome, Washington: A Life. An avid reader, Dad enjoyed learning more about the Marquis de Lafayette, and resolved to emulate our first President's gentility and letter-writing skills. Like most of the pen community, Dad was saddened to learn that handwritten letter champion, Penworld magazine columnist, and speaker Cindy Zimmermann had passed away. Her words and wisdom live on in A Woman of Interest, whose subtitle A Memoir in Letters perfectly describes the book. From her childhood to a trying murder trial, from sending letters in Cloth Envelope Company envelopes to her DVD Sincerely Yours, Cindy's book of letters will enchant you and inspire you to draw from her strength as she confronted so many challenges. We hope you enjoy these books and learn as much from them as Dad did.
http://www.aluckylifebook.com/blog-1/2017/3/22/what-were-reading-washington-a-life
Newspaper in Bergen County Attic Catches Glimpse of Fair Lawn in ’50s WALDWICK, NJ – A homeowner discovered a stack of half-century-old Paterson newspapers in the attic of her Waldwick home, offering a slice of Fair Lawn life in 1959. When Crystal Paras moved into her new home about a month ago, she found several copies of the Paterson Evening News in a trunk. A May 1959 issue of The Paterson Evening News. Paras found Fair Lawn stories ranging from a nationwide cleanup by a local Boy Scout troop to an advance notice of a junior high school spring concert and of candidates seeking leadership roles in a club. "I plan to keep the newspapers as the story of the house I live in," said Paras, who estimates the house was built around 1850. Paras said she was still going through things to see what other items might be hidden. Her first discovery, papers from the late 1950s, is a throwback to the golden age of journalism, a time when the public relied on newspapers as a source of breaking news at the local, state, national and global levels. Starting in 1890, the Paterson Evening News was one of several daily newspapers to call the city home. At one time it was one of the most influential newspapers in the region, led by Harry Haines, a powerful and well-connected publisher, said Giacomo DeStefano, director of the Paterson Museum. "Newspapers kept the region informed of what was happening in their community, country and world. It was an important part of people's lives," he said. In the 1950s, Paterson and the surrounding communities experienced the post-WWII boom. The country was generally in a good mood, and people felt good as victors in war and industry, DeStefano said. As more families bought cars, workers were able to leave the cities and set their sights on the American dream of home ownership. "Paterson was a microcosm of it," DeStefano said. Many city factory workers moved to nearby communities, bought houses, and commuted to work every day. "Almost everyone at Fair Lawn and Garfield was from Paterson; that was their hometown," DeStefano said. Although they no longer lived in Paterson, they still read the city's newspapers to keep up with current events, he said. As one of the state's largest cities, Paterson was a hub for industry, entertainment, shopping, and other social activities in the 1950s, DeStefano said. Paterson and Newark were viewed as the "shopping meccas" of North Jersey, so the newspapers were filled with advertisements to attract people from nearby towns, DeStefano said. The first malls opened in North Jersey in the late 1960s and became the shopping destination of choice for many families. Paterson still had several silk and textile mills, but DeStefano said they "were slowly and steadily moving out of town and New Jersey state." He noted that even back then, it was "expensive to do business" in the Garden State. It was not uncommon to see Fair Lawn, Garfield, and East Paterson (now Elmwood Park) in the headlines, and the newspaper often had sections designated for news from these towns.
https://junkremovaldaily.com/newspaper-in-bergen-county-attic-catches-glimpse-of-fair-lawn-in-50s/
The Sage distribution includes most programs on which Sage depends – see a partial list below. These programs are all released under a license compatible with the GNU General Public License (GPL), version 3. See the COPYING.txt file in the Sage root directory for more details. See Listing Sage Packages for information about installing packages and for an up-to-date list of the standard, optional and experimental packages. Here is a list of some of the software included with Sage: - atlas: The ATLAS (Automatically Tuned Linear Algebra Software) project - bzip2: bzip2 compression library - ecl: common lisp interpreter - cython: the Cython programming language: a language, based on Pyrex, for easily writing C extensions for Python - eclib: John Cremona’s programs for enumerating and computing with elliptic curves defined over the rational numbers - ecm: elliptic curve method for integer factorization - flint: fast library for number theory - GAP: A System for Computational Discrete Algebra - GCC: GNU compiler collection containing C, C++ and Fortran compilers - genus2reduction: Reduction information about genus 2 curves - gfan: Computation of Groebner fans and toric varieties - givaro: a C++ library for arithmetic and algebraic computations - mpir: MPIR is an open source multiprecision integer library derived from GMP (the GNU multiprecision library) - gsl: GNU Scientific Library is a numerical library for C and C++ programmers - ipython: An enhanced Python shell designed for efficient interactive work, a library to build customized interactive environments using Python as the basic language, and a system for interactive distributed and parallel computing - jmol: a Java molecular viewer for three-dimensional chemical structures - lapack: a library of Fortran 77 subroutines for solving the most commonly occurring problems in numerical linear algebra. - lcalc: Rubinstein’s L-functions calculator - fplll: contains different implementations of the floating-point LLL reduction algorithm, offering different speed/guarantees ratios - linbox: C++ template library for exact, high-performance linear algebra computation - m4ri: Library for matrix multiplication, reduction and inversion over GF(2) - matplotlib: a Python 2-D plotting library - maxima: symbolic algebra and calculus - mpfi: a C library for arithmetic by multi-precision intervals, based on MPFR and GMP - mpfr: a C library for multiple-precision floating-point computations with correct rounding - networkx: a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks - NTL: number theory C++ library - numpy: numerical linear algebra and other numerical computing capabilities for Python - palp: a package for analyzing lattice polytopes - pari: PARI number theory library - pexpect: Python expect (for remote control of other systems) - polybori: provide high-level data types for Boolean polynomials and monomials, exponent vectors, as well as for the underlying polynomial rings and subsets of the power set of the Boolean variables - PPL: The Parma Polyhedra Library - pynac: a modified version of GiNaC (a C++ library for symbolic mathematical calculations) that replaces the dependency on CLN by Python - Python: The Python programming language - R: a language and environment for statistical computing and graphics - readline: GNU Readline line editor library - scipy: scientific tools for Python - singular: Polynomial computations in algebraic geometry, etc. 
- symmetrica: routines for computing in the representation theory of classical and symmetric groups, and related areas - sympow: Symmetric power L-functions and modular degrees - sympy: a Python library for symbolic mathematics - tachyon: Tachyon(tm) parallel/multiprocessor ray tracing software - termcap: Display terminal library - Twisted: Networking framework - zlib: zlib compression library - zn_poly: C library for polynomial arithmetic in \(\ZZ/n\ZZ[x]\) Todo: Automatically generate this list!
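As a quick illustration, several of the bundled Python libraries listed above are importable directly from a Sage session (a minimal sketch; exact versions will depend on your installation):

```python
# Run inside a Sage (IPython-based) session; these imports are provided
# by the bundled packages from the list above.
import numpy       # numerical linear algebra
import networkx    # graphs and complex networks
import sympy       # symbolic mathematics
print(numpy.__version__, networkx.__version__, sympy.__version__)
```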
https://doc.sagemath.org/html/en/installation/standard_packages.html
Ranking images by their quality is one of the most common challenges in many areas of applied science and technology. For example, a set of images returned by a web search may have good search relevance, but if these relevant images also have the best quality, this would certainly improve users' impressions. Another area is medicine, where patient examinations produce terabytes of visual data that are hard to analyze at once. Preprocessing this data by extracting the best images for further diagnostics would therefore be a timesaving solution for physicians. Finally, if we know what defines image quality numerically, we can start developing quality-enhancing filters, making imaging data more appealing to the human visual system. Image quality is a complex concept which might have different interpretations. In our work we consider quality as an image-specific characteristic perceived by an average human observer. Thus, an image of good quality corresponds to our general idea of regular, informative, and well presented. Presently, it is common to measure image quality with a single metric like contrast, blurriness, etc. Our goal is to provide a more complex and formal definition of human quality perception by identifying the top factors responsible for visual quality. To eliminate any subjectivity, we consider quality as an objective, non-reference, multidimensional measure that we want to be able to compute independently, without comparing the image to others. Our practical goal is to find a restricted set of features that are most responsible for quality perception. Such a set would become a first step toward solving the practical issue of creating a useful tool for displaying medical images while improving their quality. Most research published on image quality uses quality measures estimated for an original image and its distorted copies. In this study we use so-called non-reference measures, where quality is estimated for a single image independently. We use a number of previously developed measures and a number of basic measures like contrast, as described below. Even a partly blurred image affects human perception of quality. That is why we consider blurriness an important factor in image quality perception. In this work we use two different blurriness measures. The first one compares pixel intensity variations of the original image with those of its low-pass filtered (blurred) copy, where DB_ver(x,y) is the vertical absolute difference of neighboring pixels for the blurred image B. Horizontal blurriness is computed in the same way. Finally, the maximum of the two is selected as the final blurriness measure: Fblur = max(Fblur_hor, Fblur_ver). Further on we will write it as Fblur_1. Another blurriness measure was presented by Min Goo Choi, based on edge extraction using the intensity gradient. The authors define the horizontal and vertical absolute difference values of a pixel, computed as the difference between its left and right or upper and lower neighboring pixels. They then obtain the mean horizontal and vertical absolute differences, such as Dhor_mean, for the entire image as in (Eq. 4); pixels whose absolute difference exceeds this mean are selected as candidate pixels (Eq. 5). If a candidate pixel Chor(x,y) has an absolute horizontal value larger than its horizontal neighbors, the pixel is classified as an edge pixel Ehor(x,y), as shown in (6). Each edge pixel is then examined to determine whether it corresponds to a blurred edge or not. First, the horizontal blurriness of a pixel is computed according to (7). The vertical value is obtained in the same way, and the maximum of the two is selected for the final decision. A pixel is considered blurred if its value is larger than a predefined threshold (0.1 is suggested in the paper).
Finally, the resulting measure of blurriness for the whole image is called inversed blurriness and is computed as the ratio of the blurred-edge pixel count to the edge pixel count (9). Further on we will term this measure Fblur_2 to distinguish it from Fblur_1. We assume that an increase in blurriness should negatively affect quality perception, because a very blurred image loses important information and is less attractive. The basic idea behind entropy is to measure the uncertainty of the image. The more information and less noise the image contains, the more useful it would be, and we might relate image usefulness to its objective quality. In our study, Shannon entropy (computed with the standard formula) was obtained for the entire image, its foreground, and its background according to (Eq. 10), where p(Ik) is the probability of the particular intensity value Ik. We assume that higher entropy should mean that more signal is contained in the image. For example, if there are fewer details and more plain surfaces, the entropy will be lower. However, a noisy image would have higher entropy, so we consider entropy at three levels of the image. The object separability measure will be high for images with high separability between segments and low separability within segments. In our case this measure makes sense only for the set of images depicting trees, because the medical images mostly present a dark background, which is clearly separated from the foreground. The next measure is defined in terms of the average intensity value of the image, and it is assumed to be higher for less informative, non-predictive and redundant images. We assume that a sharper image should be perceived as more attractive and informative. The blockness measure combines horizontal and vertical components, where r and 1-r are the weights for the horizontal and vertical measures; we use r equal to 0.5. This measure will be higher for images distorted with block artifacts. The idea of a possible relation between image quality and the amount of image detail brings us to measures of fractal dimension. We detect the main contours in the image using the Canny method and then estimate the fractal dimension of the obtained curve. We use box-counting to compute the dimension (Eq. 25), where N stands for the number of square blocks with side ε, with ε = 2, 3, 4, and 5. We assume that higher fractal dimension values correspond to more detailed, informative images. It is natural to assume that the presence of noise can be detrimental to the perceived image quality. Therefore we included a noise measure developed by Masayuki T. In that work, the noise level is described as the standard deviation σ of Gaussian noise. The authors propose a patch-based algorithm. First, the original image is decomposed into overlapping patches, and the model for the whole image is written as pi = zi + ni, where zi is the original image patch with the i-th pixel in its center transformed to a one-dimensional vector, pi is the observed patch (also transformed to a vector) distorted by Gaussian noise, and ni is the noise vector. To estimate the noise level we need to obtain the unknown standard deviation using only the observed noisy image, where Σ denotes the covariance matrix for the noise-free patches z, Σ' is the covariance matrix for weak textured patches, Gj = [Dhorj, Dverj], and Dhor and Dver are horizontal and vertical derivative operators. We assume that noisier images would have worse quality and be less informative.
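As a concrete illustration of one of these measures, here is a minimal numpy sketch (ours, not the authors' implementation) of the Shannon entropy of Eq. 10, computed over the 8-bit intensity histogram:

```python
# Shannon entropy H = -sum_k p(I_k) log2 p(I_k) of a grayscale image.
import numpy as np

def shannon_entropy(image, levels=256):
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()          # intensity probabilities p(I_k)
    p = p[p > 0]                   # empty bins contribute 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # noisy toy image
flat = np.full((64, 64), 128, dtype=np.uint8)               # plain surface
assert shannon_entropy(img) > shannon_entropy(flat)  # noise raises entropy,
                                                     # as discussed above
```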
In order to evaluate the performance of the various quality measures and validate the results, we used two datasets of grayscale images of different nature and quality. Each image's quality was assessed two times: first by human observers (thus capturing our visual perception of the image quality), and second by the set of metrics described above. The metrics were applied to the original images as well as their lower-resolution copies, derived with Laplacian pyramid decomposition, which produced a total of 57 quality metric measurements per image. Our main intention was to find the best sets of numerical metrics that would explain the observed human perception of image quality. Each image dataset used in this work consisted of similar images: the first set had 50 medical images (CT tomography of an abdomen), and the second had 50 scenery photographs of trees and forest landscapes. We intentionally chose images of rather abstract and "emotion-free" nature to exclude any subjective bias in the human perception. The human perception ranks for the images were obtained with pairwise comparisons between all images in each dataset. The images were presented in random pairs to 15 human spectators, who were asked to choose the better of the two. This task was implemented using Amazon Mechanical Turk; Figure 1 shows a screenshot of the Mechanical Turk assignment for image markup. To ensure comparison robustness, we used markup with triple overlap: each pair of images was compared three times by different observers, and the final choice was computed using the majority rule. As a result, more than 7000 pairs were presented and compared. To get image features, 19 basic quality measures were computed for three copies of each image: the original image and two lower-resolution copies derived as two levels of the Laplacian pyramid. The resulting 57 measurements were treated as 57-dimensional image feature vectors, used as independent variables in the models. In the first step of the research we try to solve our task using the known quality measures of every image; in this approach we fit models to predict a known outcome. Based on the pairwise image comparison results we computed a quality index for every image as the number of that image's wins divided by its number of comparisons. This allowed us to put the images in a linear quality order. Note that in general this linear order cannot correspond to all the recorded comparisons: in some instances an image with a higher quality index might have been perceived as inferior when compared with some lower-quality image. This non-linearity in image grades originated from the differences in quality perception between different human observers, and we called such image pairs inverted. Overall, 10% of pairs were inverted in the medical dataset and 14% in the trees dataset. The model error was measured as the deviation of the predicted quality from the observed one, where Wp stands for the model-predicted image quality and W for the real observed quality. One of the main goals of the study was to find a set of factors responsible for the human perception of image quality. We validated our feature-modeling results using the medical (MS) and trees (TS) image datasets separately, to make sure that models performing well on one dataset would also be good for the other. Figure 2 shows various models with 1, 2, 3, 4 and 5 features. We used R-squared to evaluate each model, as a measure of the fraction of the original data variation explained by the model.
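To make this modeling step concrete, here is a minimal sketch (synthetic data and hypothetical variable names, not the study's code) of computing the win-ratio quality index and fitting an OLS model with its R-squared:

```python
# Quality index as fraction of pairwise wins, then an OLS fit with R^2.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_images, n_features = 50, 5
X = rng.normal(size=(n_images, n_features))   # selected metric values
wins = rng.integers(0, 49, size=n_images)     # toy win counts per image
quality = wins / 49                           # each image vs. all 49 others

model = LinearRegression().fit(X, quality)
r2 = model.score(X, quality)                  # fraction of variance explained
error = np.abs(model.predict(X) - quality).mean()  # mean |Wp - W|
print(f"R^2 = {r2:.3f}, mean error = {error:.3f}")
```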
Treating the concept of image quality as a function of our visual perception rather than of image selection, we therefore assumed that a good model should perform well for both the MS and TS datasets. Figure 2 (regression models for both datasets) visualizes our results. As one can see, R-squared does not increase dramatically after using more than 6 features, so we show only the models with up to 5 predictors. Circle sizes correspond to the average error of each model. The largest circles are close to 0.27, while the best models have errors close to 0.08. You can also observe that the circles on the plot tend to cluster along the diagonal line, which means that most models perform similarly on both the MS and TS datasets. Moreover, the higher k (the number of model features/predictors), the closer the circles are to the diagonal line. As a result, higher k generally corresponds to more accurate and more image-independent models, which can provide optimal quality predictions for both MS and TS sets. Figures 3a and 3b illustrate the best models obtained for MS and TS independently. As the figure indicates, the models selected as the best for one dataset perform well on the other. This already can be viewed as a strong demonstration of the objectivity of human image quality perception: despite the obvious differences between the images of CT scans and forest landscapes, the models optimal for one set were among the best performers for the other. Finally, Figure 4 demonstrates the top ten models for each model size, sorted by the average mean error on the two datasets. It can be seen that most models lie on the diagonal line, with models of 4, 5 and 6 features becoming increasingly close to each other due to high R-squared on both datasets. The features appearing most often in the best models are the following: · Entropy power of the image on the first and second levels of the Laplacian pyramid (metrics flat0, flat1). It is the product of the spectral flatness and the variance of the image and shows image signal compressibility, reflecting how much useful signal is contained in the image. · Both blur measures, sharpness, contrast and edge intensity measures on all resolution levels are significant for all datasets, confirming that perception of contrast and blurriness is among the major image quality factors. · Fractal dimension on all levels of image resolution can be found in models for both sets. · Average gradient is especially important for the trees dataset. This measure shows how much pixel values change on average; according to it, images with more contrasting edges between objects get a higher mark. · Object separability on the first and second levels of the pyramid can be found in models for both sets. This measure is higher for images with distinguishable and more contrasting parts. · Amount of information contained in the image, which can be described by the spectral flatness and entropy measures. It is remarkable that random noise is not taken into account, while larger objects have some impact. · Contrast, average gradient and blurriness are the most important non-reference quality measures affecting visual perception of the whole image, while sharpness and noise level hardly appear in the best models. This might be explained by the sensitivity of the metrics used. · Artifact measures like blockness appear to be significant in most models. All things considered, we obtained models containing restricted sets of features that are able to explain quality perception. However, the basic matrix of comparisons remains our ground truth and main source of information.
To measure the quality of the described approach, we compared each pair of images by the predicted quality measures computed with the best five-feature models mentioned above. To get the vector of predicted values we performed leave-one-out cross-validation on each of the two sets. This procedure enabled us to get a more stable resulting vector of quality measures. At each step one image was held out, so the model weights were learned using the remaining images and then used to predict the quality measure of the held-out image. The final vector of model quality measures was constructed from the predicted values and normalized. The average share of inverted pairs computed for the predicted quality measures, compared against the initial matrix, is 31% for medical images and 29% for trees. However, this result is far from the original and could be improved. As we mentioned before, the reduction of pairwise comparison scores to one-dimensional linear quality indices resulted in 10-14% of inverted pairs: the instances where the linear image quality values mismatch the result of the pairwise comparison. Using OLS regression models with five features resulted in 29-31% of inverted pairs. To improve our results, and to account for more arbitrary ways of defining image quality indices, we decided to consider a scenario with no predefined quality order. That is, the basic idea was to treat the quality measures as unknown variables and then find their optimal values satisfying two major criteria: good predictability with linear regression, and the lowest number of inverted image pairs. Besides, we have another issue: in the previous part we used a linear model of quality, but the linear dependence is not obvious and should be checked. To do this, we used a simple method based on the best models obtained in the previous step. The idea was to keep linear models and increase R-squared and decrease the error without increasing the number of inverted pairs, using the known quality measures from the previous step as starting values. If a linear model is appropriate, then we should be able to improve the target vector to get a higher R-squared without violating the restrictions of the initial matrix of comparisons. To start, we looked for the set of measures with the lowest regression error that does not increase the number of inversions according to the initial pairwise comparison matrix. In addition, we tried to decrease the number of inverted pairs with the new set of measures. To check this we implemented the simple algorithm below, where qi is the fraction of wins of the i-th image in pairwise comparisons (see the sketch after this list): 1. Take the sorted array [q1, q2, ..., qm, qm+1, ..., qn]. 2. Choose qt = argmin(N_inverted_pairs); if qt introduces no additional inverted pairs, set the new qi to qt. 3. While qi_min < qi_max, go to step 2. 4. Find the optimal qi = argmin(MSE) for the linear regression model. 5. Repeat the steps until R-squared exceeds a threshold and the squared-error difference between steps s and s-1 falls below a threshold. To compare the error at step s with the previous step s-1, we fit the feature weights using vector Qs as a target, obtain the model vector Qs_mod, and compute the errors of Qs-1 and Qs against that vector. We assumed that in the case of a nonlinear dependence between quality and features this algorithm would not converge: the idea of the algorithm is to move the initial quality measures closer to the model line. If this is possible without violating the restrictions in the comparison matrix, then the mean square error (MSE) decreases because the model line fits the new quality measures better.
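The following sketch shows one plausible reading of this adjustment loop; the search grid, the number of iterations, and the acceptance rule are our assumptions where the text leaves details implicit:

```python
# Move each quality value toward the OLS prediction without adding inversions.
import numpy as np

def count_inverted(q, outcomes):
    """outcomes: pairs (i, j) meaning image i beat image j."""
    return sum(q[i] <= q[j] for i, j in outcomes)

def adjust_quality(q, X, outcomes, n_iter=20):
    q = np.asarray(q, dtype=float).copy()
    A = np.c_[X, np.ones(len(q))]                # design matrix with intercept
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(A, q, rcond=None)
        pred = A @ w                             # current model line
        base = count_inverted(q, outcomes)
        for i in range(len(q)):
            for qt in np.linspace(q[i], pred[i], 11):
                trial = q.copy(); trial[i] = qt
                if count_inverted(trial, outcomes) <= base:
                    q[i] = qt                    # accept non-violating move
    return q

# toy usage: 4 images, 2 features, comparisons won by the lower index
q_new = adjust_quality([0.9, 0.6, 0.4, 0.1], np.random.rand(4, 2),
                       [(0, 1), (1, 2), (2, 3), (0, 2)])
```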
We used the best ten five-feature models and the quality measures from the previous section as initial values. However, in all cases it was impossible to decrease the fraction of inverted pairs by more than 2 percentage points. We suggest that this is caused by peculiarities of human perception and the lack of transitivity in pairwise comparisons: it is natural that a person comparing images two at a time cannot keep all previously seen images in mind and produce an ideal linear ordering of them. We did achieve an increase of R-squared, reaching a +0.2 improvement without violating the constraints; however, we hardly achieved an R-squared above 0.8. This result still indicates that a linear model is adequate for explaining quality perception. Figures 5a and 5b show the average new and old values of the quality measures obtained for the best models for MS and TS respectively. The Pearson correlation between the old and new values is around 0.8, which means that the new values remain strongly linearly related to the initial vector. This result enabled us to use linear models in the next step, where the quality measures are treated as unknown. To improve the initial assignment of the quality indices, we tried one more approach that does not use any initial target vector of quality measures and is based only on the initial comparisons matrix, aiming to improve the results achieved at the previous step. To obtain image rankings that would give the most likely pairwise comparisons according to the initial matrix, we iteratively change the feature weights to maximize the logarithm of the likelihood, which is the sum of the logarithms of Pi(x) shown in (Eq. 39). Optimization was conducted using a gradient-based method from the SciPy library. This method was applied to various combinations of five features used in the previous method, independently on each of the image sets, in order to compare the features and estimate their importance in determining image quality perception. Besides, the best models for the mixed set of images were obtained. To compare models we simply used the rate of correctly predicted pairwise outcomes; the results are presented in Table 3. We applied the ranking approach to possible combinations of five features and looked for the models that provide the best results for each set separately and that perform well for both sets. When testing a model on both sets we use the sum of the log-likelihoods of the two sets taken separately, and take the average of the feature weights for the two sets. The performance of every model was estimated by the number of correct pairwise comparisons according to the ratings; they are presented in Table 3. According to the table, some of the best models that perform well on each of the sets separately give worse results on the mixed set of images. This can be clearly seen in the 3D plot (Figure 6) and the 2D plots (Figures 7a, 7b and 7c) of the models. Each axis corresponds to the quality on one of the sets: TS, MS, or the mixed set containing both. It can be seen that most models have better quality on each of the MS and TS sets, but lower quality on the mixed set. This means that the models are quite good even with five features; however, these features are sensitive to image content, so using average weights degrades model quality. Moreover, in many cases the feature weights for the different sets have opposite signs. Another interesting finding concerns putting all 57 features into one model, which seriously degrades the result and provides around 40-50% of correctly predicted pairs, almost the same as random choice.
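A minimal sketch of this ranking approach follows (our reconstruction of the setup around Eq. 39: image scores linear in the features, an Elo/logistic pairwise outcome model, and SciPy optimization; the variable names and toy data are ours):

```python
# Maximize the likelihood of observed pairwise outcomes under
# P(i beats j) = 1 / (1 + exp(-(s_i - s_j))), with scores s = X @ w.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(w, X, outcomes):
    s = X @ w                                    # per-image scores
    # -log sigmoid(s_i - s_j) summed over observed wins (numerically naive,
    # fine for a sketch)
    return sum(np.log1p(np.exp(-(s[i] - s[j]))) for i, j in outcomes)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))                     # five selected features
outcomes = [(i, j) for i in range(50) for j in range(50) if i < j]  # toy wins
res = minimize(neg_log_likelihood, np.zeros(5), args=(X, outcomes),
               method="L-BFGS-B")
ranking = np.argsort(-(X @ res.x))               # predicted quality order
```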
If we look at the features contained in the best five models, it can be seen that the features appearing in most models repeat the results obtained with OLS regression. One of the most important is the entropy of the whole image and of its background and foreground on all levels of the pyramid. Besides, blurriness, blockness, noise, average intensity and contrast occur in the top models, which does not contradict the results obtained with OLS regression in Section 4.1. Compared with the previous approach using known quality measures, the Elo rating approach provides 24-27% of inverted pairs on the separate sets, which is better than with linear regression. This is to be expected, since it uses the initial comparisons matrix as the ground truth. As for quality on the mixed set, we see that the models are unable to provide a good result because of the difference in weights. We take a closer look at this question in the next section. After obtaining the sets of most important features, our intention was to check for features invariant to the scene and to try to derive a single formula of quality based on the separate models for both image sets. In addition, we tested the best models for each image set separately. Using the initial comparisons matrix as the ground truth, we trained a linear classifier with a binary outcome to check the results obtained at the previous steps. The first part of this experiment aimed at training a model on one set and testing it on the other. If the feature weights derived from the first image set provided a good prediction for the second set as well, we would assume that the selected features provide a good representation of human image quality perception. The second part was to check model performance on each set, drawing the training and testing samples from a mixed set, to make sure that a restricted number of features is able to provide acceptable results. For both parts, the main requirement was the use of linear classifiers, in line with the previous assumption that the quality of an image depends on the image features linearly. We used a logistic regression classifier, which assumes a linear dependence between the features and the log-odds of the outcome. For every pair we use the differences of the features between the left and right images and a binary target variable that equals 1 if the left image wins; a minimal sketch of this setup is given below. The scikit-learn implementation of the logistic regression classifier was used. We studied model quality metrics such as the accuracy score and the area under the curve to evaluate model performance and see whether the selected features are able to provide a good result. In the final step we took the best ten five-feature models and performed a number of binary classification experiments using a logistic regression classifier with an intercept. The first part of the experiment considered training the classifier on one homogeneous set of images and testing it on the other. The results of these experiments demonstrate very low quality regardless of the number of features in the model: the accuracy score is below 45%, and the precision and recall measures are close to 50%, the same as random choice. This result was obtained for all experiments with the same design. The example feature weights in Table 4 for the same model trained on each set of images demonstrate that the coefficients differ between the sets. As for training and testing on the same set of images, better results were achieved even with a five-feature set. For example, the fifth model from Table 3 provides better results on both sets.
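Here is a minimal sketch of the pairwise classification setup described above (synthetic data and hypothetical names; the study used the scikit-learn classifier, as noted):

```python
# Pairwise setup: predictors are left-minus-right feature differences,
# target is 1 when the left image won the comparison.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(2)
F = rng.normal(size=(50, 5))                     # per-image feature vectors
pairs = [(i, j) for i in range(50) for j in range(50) if i != j][:1000]
X = np.array([F[i] - F[j] for i, j in pairs])    # feature differences
w_true = rng.normal(size=5)                      # toy "perception" weights
y = (X @ w_true + 0.1 * rng.normal(size=len(X)) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)          # linear in the log-odds
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy = {scores.mean():.2f}")
```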
That model reaches an average accuracy of 72% using random-shuffle cross-validation with a 20% test size on the trees dataset, and a 71% accuracy score on the medical dataset. On a mixed dataset, where examples of both sets were included in the training and testing sets, the average accuracy score is about 59%. Another set of experiments considered models including all 57 features. In this case the average accuracy score is 76% for the mixed dataset, 80% for the medical dataset and 77% for the trees dataset. This demonstrates that the best five-feature models contain most of the useful signal needed for classification. If we train on one set and test on the other using all 57 features, the model still gives only 50% accuracy. These results show that the selected models containing restricted feature sets are good enough for both sets of images. However, there is no universal formula of quality for both sets at once, due to the different feature weights. We also observed that some factors were conceptually similar, which enabled us to select a limited set of really important quality factors. In the case of medical images this is a very useful finding, which enables us to interpret quality perception and not only to rank images by a number of features but also to try to build a framework that improves particular image features. Such a tool could be one potential practical extension of this study. Still, we would like to extend and generalize the achieved results by validating them on more datasets. Another potential study limitation lies in the field of ranking and classifying images by quality. After enlarging the dataset of manually ranked images, we could then compare the ranking provided by a neural network, which can use a large number of all possible features, with that of a classifier using a restricted set of the most important features. However, such a comparison would only be fair on a dataset of neutral monochrome images, which makes it useful mainly for a specific field like medicine and medical images. All things considered, our results demonstrate that image quality perception can be modeled with a small set of non-reference factors that are easy to interpret. This can definitely lead to new useful tools for image quality control. Dolmiere T., Ladret P., Crete F., The Blur Effect: Perception and Estimation with a New No-Reference Perceptual Blur Metric. Grenoble: SPIE Electronic Imaging Symposium Conf. Human Vision and Electronic Imaging, États-Unis d'Amérique, 2007. Serir A., Kerouh F., A no-reference blur image quality measure based on wavelet transform. Digital Information Processing and Communications, 2012. K. De, A new no-reference image quality measure to determine the quality of a given image using object separability. Taipei: Machine Vision and Image Processing (MVIP), 2012 International Conference on, 2012. Monica P. Carley-Spencer, Jeffrey P. Woodard, No-Reference image quality metrics for structural MRI. Neuroinformatics, 2006, vol. 4. Chen F., Doermann D., Kumar J., "Sharpness estimation for Document and Scene Images," in Pattern Recognition (ICPR), 2012 21st International Conference on, Tsukuba, 2012, pp. 3292-3295. JA Bloom, C Chen, A blind reference-free blockiness measure. Shanghai: in Proceedings of the Pacific Rim Conference on Advances in Multimedia Information Processing: Part I, 2010. Xinhao Liu, Masayuki Tanaka, Masatoshi Okutomi, Noise Level Estimation Using Weak Textured Patches of a Single Noisy Image. IEEE International Conference on Image Processing (ICIP), 2012.
Xinqi Zheng, Xuan Hu, Wei Zhou, Wei Wang, Tao Yuan, A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm. PLoS ONE, 2014. Arpad E. Elo, "8.4 Logistic Probability as a Rating Basis". The Rating of Chessplayers, Past & Present. New York, United States: Ishi Press International, 2008.
https://knowledge.allbest.ru/programming/3c0b65635a2bd69a4d53a88521306d26_0.html
The way you perceive your surroundings is determined by various factors, including your convictions, background, and vision. After all, how you perceive a situation is frequently determined by what you pay attention to and how those stimuli are processed and integrated within your brain, ultimately impacting how you'd respond. The example highlights that you can place different people in the exact same situation, yet all of them might recount the situation differently and act in various ways as well.
Putting on a different pair of glasses
The idea of peeking into someone's brain and seeing the world through their eyes, so to speak, can be quite tempting. While it'll probably never be truly possible to fully comprehend how others view the world, the first step to gaining insight into someone else's perspective is through communication. If you're willing to listen to what people have to say, the chances are that you'll get a better understanding of how they'd perceive certain situations. As various factors contribute to perception, there is no single answer when it comes to changing your own perception. Whether you start by attempting to steer your attention to other environmental inputs or by consciously evaluating what you perceive, it's a process that isn't as easy as simply putting on someone else's glasses.
The importance of vision
Moreover, while perception is a multisensory process that requires integrating the perceived stimuli into your brain, vision forms a crucial part of it. Seeing and having functioning vision can allow you to be part of a shared reality, as you're able to connect an image to other contextual stimuli, such as smells or noises. Watching people see for the first time can emphasize just how fascinating it is to perceive your surroundings with your eyes as well. It can truly be an exceptional experience, which not only facilitates day-to-day actions but also enriches your perception of the world.
Take care of what you have
If you can count yourself among the lucky people who have normal or corrected-to-normal vision, then you should try to maintain it to the best of your ability. Undertaking preventative measures should definitely be on your radar. As you might find yourself increasingly working from home and seated in front of a screen, taking regular breaks to give your eyes some time to rest can help greatly. Moreover, adjusting your screen lighting, for example with an app such as f.lux, can make it easier on your eyes to withstand the increased screen time. If you do experience dry eyes and a sensation of tiredness, then you should seek advice both on which eye drops would be best to use and on whether further medical intervention is necessary. Beyond your remote office, your eyes are also particularly sensitive to sunlight and specifically UV light. While your sunglasses might look good, it's nonetheless important to ensure that they're also blocking UV rays sufficiently. Otherwise, you might not be able to see how much you're rocking the look in the years to come. If you're already wearing glasses, you can further support your eyes by adding a coating to your lenses. Experts, such as Shady Grove Eye and Vision Clinic, offer further tips on recommended coatings.
See for yourself
The exciting aspect about not seeing things through your friends' and even strangers' eyes is the opportunity to learn about their different points of view and take a different angle on things.
By sharing how you perceive and see things, you can invite others into your own world one conversation at a time. This can be enriching by itself as you’re able to add more nuance and perspective to your own perception of the world. If this makes you appreciate your vision just a little bit more, then don’t forget to treat your eyes nicely to have them around for as long as possible.
https://outragemag.com/taking-a-different-point-of-view/
28 comments: Lovely happies - those raspberries do look good and what a great plumber you have! I have to say, I feel for Angus as I'm in exactly the same situation as he is at the moment, lol! I'm desperately hoping it's just for now and I'll get used to long days and too much work!
Love your pyrex bowl Gillian! That would be a happy for me, and also good news about your boiler. We had ours serviced this week and I always fear what they will say - we have an ongoing argument about vents, and apparently our 9 year old CO2 monitor is out of date as it is 10 years old - we haven't been in the house 10 years grrrrrrr! Sorry, shouldn't be complaining here. Glad that Angus is enjoying school, hopefully he is just tired afterwards and things will improve as he gets used to it? No advice here I'm afraid! Hope you have a good weekend and manage to stay warm and dry. Amy xx
The french soap looks lovely and I bet it smells amazing! Your fish sandwich looks really delicious. I had fish fingers for lunch today. I need to try them on a sandwich next time. Oh, and I am happy to hear your boiler is working nicely again. Glad to hear Angus is enjoying school. I think Charlotte gives preschool her best too, when I pick her up she can be a bit stubborn for a while. She is already at home sick with a yucky cold and fever she caught from school and we are only two weeks in, so far.
Every entry here is so beautifully thought out, and your photography so crisp! I love visiting with you Gillian, and getting little snap shots into your day to day. Sending lots of patience for you & hubby whilst you adapt to your little man's new routine. Oh lordy, I'm gonna have to try the fish finger sarnie!! Mmmmmm.........! Happy weekend x
Great news about the boiler. Ours is 30+ years old and I'm just waiting for the terrible moment when it goes. The sandwich looks good, I love to make new combinations. I hope Angus gets used to the routine soon. I can see where full days at school would be hard for a four-year-old, though. Because of the way our birthdate cutoff works here, both of mine were/will be almost six when they start longer days at school. I'm sure he'll do fine once the adjustment period is over. Hang in there.
That Fish Finger sandwich just took me back to my own childhood! Oh how we loved them :) We used to make the exact same sandwiches.
Gillian I hear so much from parents questions about why their children are so pleasant to EVERYONE ELSE but them! All I can think and say is that they spend so many hours of so many days throughout the week being on their BEST behaviour at school, trying their BEST to please teachers and complete tasks, that when they do go home they are sooooooo exhausted! Unfortunately parents do cop the bad end of the stick so to speak :) Stay positive and patient. I am also a mum who will begin to experience this next year when my Sunny ventures off to school for the very first time! Sophie xo
Love your happies this week! Your raspberries look delicious especially layered in that glass with the yoghurt and honey. I feel for you about the boiler - but what a good result. I also feel for you coping with the new routine of both little ones being at school and coping with the aftermath at the end of the day. It does go with the territory I fear but it does pass although it can return for differing reasons even when people are rather older.
I think it's just the out-working of stress but it's not easy being the parent on the sharp end of it. I send you a hug and am sure in a few weeks' time Angus will be back to his usual sunny self at home again. E x
Poor angus. Dan was just the same! Thanks so much for reminding me about fish finger sandwiches. Haven't had one for 15 years (we went veggie then), but they were yummy. I remember the crunch in the melted butter. We used to heave a bit of ketchup on ours. Xx
I am still enjoying a few perfect red raspberries (though mine never get as far as the kitchen door, they are gobbled on the spot!) and I am suffering through the after-school hours, too! I dubbed 5pm 'The Witching Hour' because that is the point that the whiny behaviour just turns wicked, and I'm afraid I start counting the hours til bedtime...but I look at my eldest daughter and remember that we survived her adjustment period at school, so I know there is light at the end of the tunnel. Hang in there! Chrissie x
Fish finger sandwiches - now you're talking! Glad Angus is enjoying school but sorry to hear that he's very tired. I'm sure he'll settle into the routine very soon. x
Poor Angus! Long school hours for Reception are a pain but he will eventually get used to them. Also try telling him why he's cross and see if the two of you can come up with a better way of dealing with it, like a chill out activity before dealing with the rest of his life, or changing into jimjams and cookies and milk as soon as possible. My second son was barely 4 on entry and we had school prepared to let him out at 2.30 not 3.30 if he was still dragging tired in October just so he wouldn't be over tired all the time.
Oh those poor exhausted four year olds, i remember that terrible unreasonableness with Fergus especially, he was so young. It is frustrating and heartbreaking at the same time. This too will pass. My parenting mantra...then suddenly you are missing the chaos and their baby-ness. Lovely happies, thank goodness about the boiler and those raspberries made my mouth water!
I yes I remember those days so well. My Little Man has his birthday at the very end of August, so felt super-young heading into school only just turned 4. I'm now going through similar with Little B who LOVES his new nursery but comes out ravenously hungry and very tired and argumentative.
It does ease. Your life will soon feel light as a feather, I can see it coming :o) xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx By the way, I think if i ate a fish finger sandwich I would die. Eugh, how could you?!
It must be so strange to have John at home and I hope you can relax into it soon. I know I would love it but it would take me time to settle into it. As for Angus I feel for you as we have had the exact same here and in a bad severe scale, she lashes out at everything and anything, crying, kicking hitting and screaming. She is so aggressive and so rude and no matter how much I know it's all a perfect symptom of releasing where she feels comfortable after a day of conforming in school, it's still very hard to deal with. It will get better over time and in a way somehow it's comforting to know that both Angus and Es feels comfortable enough in their own environments to release. Thinking of you xoxo
Hello Gillian. I too love opening a new soap, simple but a luxurious pleasure I think.
I have also had a strange week, felt quite blue really, not sure if I am ready for the seasons changing and very tired too. Alice used to be super ravenous when I picked her up from junior school, I soon realised that having a snack for her at the school gates helped until we got home to have our tea. She can also be like that now at almost 13, I know the tell tale signs so well! Hang in there it will all become smooth again. Happy to hear that your boiler is ok, what a lovely plumber. Take care and happy weekending with your precious family xox Penny
You've got two of my all time favourite happies up there G- raspberries and fish finger sandwiches. Joy. (No lettuce on mine though thanks!). With you on the school thing too, J only been doing mornings this week (an incredibly long winded but necessary I think settling in month yawn) and he's been grumpy and sleepy all afternoon! He's loving school though! I definitely get the worst of the boys behaviour when they're home from school, and often have to remind them of how well behaved they are for their teacher!!
Lovely foodie photos Gillian. I love opening and using a brand new bar of soap too, such lovely simple pleasures. I'm sure wee Angus will settle into his routine soon, school must be exhausting when you are only four! Marianne x
How lovely to still have raspberries long may they last :)
I agree with Modern Day Mummying about the tiredness at the end of a school day. They have to sit still, listen, do as they are asked for a long time during a school day and at home they have more autonomy and are released. I am sure boys find all these things much harder. It is a bit like opening a bottle of fizzy pop after it has been shaken. Like my youngest he is very young and his period of adjustment may take longer than for an older child. As long as you are his constant at home, somewhere safe for him to retreat to I am sure it will help him with this rite of passage. It makes me realise how lucky I am not to be sending my children to school.
Glad to hear that Angus is enjoying school, I'm sure he'll settle soon when he gets in to a routine. Good news about the boiler, and what a lovely plumber, they usually charge a fortune just to look at something.
God bless plumbers and Angus - I'm sure he's enjoying school (but finding it exhausting at the same time). Why is the autumn term the longest? Interesting views in the news this week about too formal education too young. Makes you think we might be getting it wrong in this country? Hope next week is calmer for you. Claire xo
Hello lovely! Yum, I do love a fish finger sandwich. Poor Angus, and you...I'm having similar shenanigans with Rufus but he's only doing 2 days at pre-school so we get to take it easy the rest of the week. If you really feel like you're struggling with him, I can really recommend a lovely book called Playful Parenting by Lawrence Cohen...I was finding my boys really hard work earlier this year, but it has made everything much easier and more fun, and without the awful guilt that a lot of parenting books seem to produce! I'm sure it will all get easier soon enough...good luck in the meantime! Rachel xxx
Fish finger sandwiches are SO good! Have never tried it with sweet chilli sauce though!
Hope Angus settles down soon - the first few weeks are always a huge adjustment and so tiring for them. x
Oh poor little man, and poor you getting the brunt of it all.
I remember walking home from school with my oldest kicking me and his brother when he was in Reception. I hope he gets into the routine of it all soon. I do think that children go to school too early in this country. I'd like to keep mine at home until they are 7 at least! Hope you enjoy the rest of the weekend.
I thought I'd left a message here - but maybe I forgot to post it. Or maybe I'm just going mad?!
My son was the same - absolutely shattered when I picked him up, and horrid with it. I used to go armed with his "blankie" and a flapjack or oat bar of some sort. It used to really help. Not perfect but a vast improvement. Similarly at almost 16 he's reverted to be unable to string a sentence together at the end of the day and ravenously hungry too! It's all a phase has been a phrase I've used regularly. Keep smiling. :) x
Lots of lovely simple pleasures and one big happy about not needing a new boiler, phew! I'm sure things will improve with Angus when he gets used to the whole new routine of school. They keep so busy at school, everything is so new and then at home they get the tiredness dip. That's when life's little happies really are worth their weight in gold! Lisa x
Lots of lovely happy photos. Yummy raspberries and I spy a fish finger butty, I LOVE a fish finger butty excellent. I'm sure Angus will settle into the routine soon my daughter was the same x
Now that is my kind of sandwich, and my idea of how to serve yoghurt. Yum!
Poor Angus, they do get so tired at school, emotionally and mentally tired as much as anything, there's so much happening, so much to take in. Some take longer to adjust than others but I'm sure he will :)
Hello there! Thank you for leaving a comment. I read them all and I always try to answer questions, although sometimes it takes me a while.
http://www.talesfromahappyhouse.com/2013/09/52-weeks-of-happy4852.html
After examining multiple projects for common gestures, the following movements were agreed upon as the most commonly defined gestures for specific actions. While certain middleware libraries and utilities may have pre-defined gestures available, many developers choose to define their own custom gestures based on skeletal data received from the Kinect. Users should refer to individual middleware documentation for details on whether pre-defined gestures exist within the system, the extent of gestural/skeletal data available, and the possibility of creating additional/custom gestures from that data.
Forward
- Step forward (movement in the physical space)
- Step forward with distance (speeding up/slowing down)
- Leaning forward
- Walking in place
- Both arms forward
- Hand movement (up/down, positional)
Jump
- Jumping
- Hand up (raise the roof)
- Flicking (hand flick)
- Kick a leg out
- Lift a single leg
- Raise elbows
- Tip toes (possible false positives, slouching, etc.)
Left
- Leaning
- Left arm out
- Left foot out
- Rotating
- Both arms left
- Leftward motion
- Turn head slightly to the left
Right
- Leaning
- Right arm out
- Right foot out
- Rotating
- Both arms right
- Rightward motion
- Turn head slightly to the right
Rotate
- Rotation at the hips (hold, rotate, release)
- One hand up, one hand swipe
- Wax on, wax off (circular motion)
- Arm out left/right for positive/negative rotation
- Right arm to left – left rotation (hold, rotate, release)
- Left arm to right – right rotation (hold, rotate, release)
Looking Around
- Vertical/horizontal separation of hands (distance)
- Head tracking
- Mouse look (without the mouse)
- Head-mounted display
- Clasped hands pointing
Interaction
- Compression of hands
- Alternate hand action
- Sound/speech based
Additional Avatar Interactions
Note: The following gestures, while commonly used, serve more specific applications than general gestural interaction. These gestures are usually specific to the project or application in question and may include more customized gestures than those listed here. They simply serve as a guideline for some of the more common specialized actions.
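As a rough illustration of how such custom gestures are typically derived from skeletal data, the sketch below checks two of the "Left" variants from the list above against per-frame joint positions. The joint layout, the coordinate convention, and the thresholds are all invented for the example; a real implementation would read joints from the middleware's skeleton stream rather than constructing them by hand.

```python
# A minimal sketch of custom gesture checks over skeletal data.
# Joint structure, axes (x = lateral metres, y = height), and thresholds
# are assumptions, not any particular middleware's API.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # lateral position (assumed: negative = user's left)
    y: float  # vertical position

def is_leaning_left(head: Joint, hip_center: Joint, threshold: float = 0.15) -> bool:
    # Lean approximated as lateral offset of the head from the hip centre.
    return (head.x - hip_center.x) < -threshold

def is_left_arm_out(hand: Joint, shoulder: Joint, reach: float = 0.40) -> bool:
    # Arm out: hand extended well past the shoulder at roughly shoulder height.
    return (shoulder.x - hand.x) > reach and abs(hand.y - shoulder.y) < 0.20

# Example frame: head 20 cm left of the hips triggers the "Left" lean.
print(is_leaning_left(Joint(-0.20, 1.60), Joint(0.00, 1.00)))  # True
```

In practice such predicates are usually smoothed over several frames (or fed to a small state machine) to avoid the false positives the list above warns about.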
https://www.rit.edu/innovationcenter/kinectatrit/glossary-common-gestures
Effects of age at first-pairing on the reproductive performance of Mongolian gerbils (Meriones unguiculatus). Effects of age at first-pairing on the reproductive performance of the gerbil were studied throughout the reproductive life. Six groups of 7-30 female gerbils were paired monogamously with males at different ages. Out of 101 pairs in 6 groups, 79 (78.2%) produced 1 or more litters. The mean litter size at birth and mean weaning rate of 846 litters were 4.4 (a total of 3,733 pups) and 67.4% (2,517 pups), respectively. Reproduction was compared in the 6 age groups. The littering rate (No. of females with litters/No. of females paired) was significantly lower in the two groups in which mature females were paired with age-matched males (Group 4) or the oldest females with younger, sexually mature males (Group 6). The interval from pairing to the first litter was shortest in the two groups in which mature females were paired with one-month-older, sexually mature males (Groups 3 and 5). Although the oldest pairs (Group 6) produced about 7 litters, the pairs from the other 5 groups produced about 10 or more litters throughout their reproductive life. The weaning rate was significantly higher in Group 6 (the oldest pairs) than in the younger groups. The effects of parity on reproduction were estimated from the data for the 61 pairs which produced more than 8 litters in the 6 groups. The number of pups at birth and the weaning rate decreased in the last 20-30% of the total parity in all 6 groups, although the age at the last litter differed significantly among the groups. The data suggest that any decline in reproduction may be due not to age but to parity in the Mongolian gerbil.
This past week was another quiet week for me trading-wise, since all the currency pairs I follow were still chopping sideways or retracing against their respective major trends on the daily charts. As such I didn’t open any positions. The next few days look like they might present some opportunities, but nothing is sure yet. We’ll have to see. I’m considering switching these weekly logs to monthly logs instead, since I foresee many weeks to be slow like this. Being a swing trader, I’m more picky about my trades and am not looking to place more than a few each month, unlike a day trader or scalper who would place trades on a daily basis. Starting in March I will probably make the switch to monthly posts. Anyway, that’s pretty much all I have to say about Forex this week. Cheers everyone!
http://zero-to-zeros.com/weekly-forex-trading-log-10/
In recent times the issue of immigration has caused two significant changes to modern politics. It began with the rise of UKIP, which in turn led to the calling of an EU referendum and a win for the Leave vote. Both of these major events, arguably, could not have happened without the issue of immigration. The right have had a monopoly on the issue, whilst the left have generally struggled to form a counter-argument. The likes of UKIP and the right of the Conservatives have made a fairly simple argument, namely that there is too much immigration and the country can't handle it. This resonates with the public because it provides a simple answer to an agglomeration of problems. If the person on the street sees large queues somewhere, or longer waiting times at a hospital, it is easy to blame immigration. This argument has been supported by increasing net migration numbers. Put together, the right make a compelling narrative for voters to get behind. Of course, all of the above is largely emotive, and it has been boosted yet further by austerity. When hospital, school and local council budgets are reduced it is almost inevitable that waiting times will increase and services will decrease. What UKIP and the right have succeeded in doing is linking these issues to immigration. The fact that British politics has swung towards the right and chosen Brexit is an indication of the success of the right's immigration narrative. Conversely, the left has struggled to find an equally engaging argument for its more pro-immigration stance. This will not be an easy task because the argument in favour of immigration is more complex. Immigration is generally good for the economy, immigrants contribute more in tax than they take in benefits, and of course they fill vacant jobs, or create further jobs. The challenge is to turn that data into a compelling argument that resonates with the public and pushes back against the current narrative. Labour seems to be dichotomised when it comes to immigration. Corbyn is willing to advocate the complete opposite argument, embrace free movement, and praise the positive effects of immigration. Others in the party, such as Chuka Umunna, look to a more centrist, compromising approach, which stresses the positives of immigration whilst slightly limiting numbers. It remains to be seen which approach will turn out to be better with voters. The compromise seems to listen more to the concerns of those worried about immigration, whilst the emphatically pro-immigration strategy has definite moral and upbeat messages, which will still appeal to some. If recent electoral evidence, such as the 2015 general election, Brexit, and polling, is anything to go by, it suggests that increasingly the population, even some Remain voters, would like to see more control of immigration. The left may therefore have to swallow its pride and take a stricter line in order to get a majority in parliament, and then really begin shifting the argument. Indeed, the recent Conservative Conference suggests the argument may be shifting even further right, despite the rhetoric in Theresa May's speech, so time is of the essence for Labour and their left-leaning colleagues. The nudge theory is something the left could look toward as a beacon of hope. The theory goes that it is easier to get the population to sway towards a certain argument in small 'nudges' rather than one big jump – a strategy that naturally works well with parliamentarianism.
For example, labelling products with their calorie value is a nudge towards a healthier society. The political left may have to employ this to begin swinging the argument back towards a positive message: start by listening to concerns, then gradually steer the argument away from its current anti-immigration trend. Getting the counter-argument right will be vital for the future electoral hopes of the left. With Brexit on the horizon and the decision about accepting free movement or losing single market membership facing parliament, immigration is an issue that is not going to disappear any time soon. It was said that the vote to leave the European Union might make UKIP a fading force in UK politics; while this may yet be true, finding an effective argument in praise of immigration is a much more definite way to side-line UKIP for good.
https://www.bbench.co.uk/single-post/2016/10/12/The-Left-Needs-a-Clear-Immigration-Narrative
Creation is a topic that has been studied for many centuries. Not just one viewpoint on creation has been devised but so many that it would take pages to outline their individual approaches: from the times in history when gods were formed in the stars, to present-day theories of a god creating life through spiritual ideology, and extending to the more physical view of life being created materially through evolution, from cells to the formation of all life on earth from one core form. Each theory has its own viewpoint and approach that looks at life to suit its own goals and objectives. When we look at creation, one thing is undeniable: we do not have it wrapped around our little finger; we cannot pinpoint an exact time and place when creation was formed. We follow belief structures with the full understanding that we are but the creation, not the all-knowing of why it exists. Each theory has its own logical approach, looking at creation from its own creative and subjective methodology, with its essence focusing on the human body and how the earth was formed. Some theories also extend to the mind and seek truth in romantic and poetic stories about love, light, and generally being happy and content with the position we are all placed within. Apart from many scientific approaches, each organized creation theory stems mostly from story form, each tracing a line of creation shaped by an ideal of the human mind. I suppose the real question to ask when viewing all the individual approaches to creation is: what was the original intention of this viewpoint? Whether that be to acknowledge an omnipresent being that created us from a place more divine than anything we could comprehend, or to hold that we are merely formed of matter that changes as the years go by. Each way of viewing creation has its own beauty and individual creative approach, and none is without fault in trying to find the reason for existence. All theories stand on their own podium of strong opinion and purpose, each with an unlimited desire to understand and justify its approach. The only downfall to note is when the theories come into contact with one another. It is then that the podiums start to crumble. Instead of finding undeniable links between the approaches, we choose to debate each limited ideal until the end of time, even after we find that such passion can lead to fighting and to viewing creation from the opposite viewpoint to the one we seek. How can this be overcome? Will we ever find an agreeable approach that does not hinge on an individual's creative potential? One thing seen across many of these attempts to find reason in creation is that, as time goes by, our focus limits itself to ideals of human social acceptance. Many things are forced to the wayside if they do not fit a group's acceptance, instead of being seen subjectively, in their own beauty and form. Rules and guidelines are set up so stringently that their makers feel themselves forced into a cage of redemption, leading to their own individual undoing. They wish to hold to their beliefs so strongly, without questioning their original purpose, enforcing them without first seeking contemplation and solution.
Their ideal then becomes their purpose, and when under attack they seize up and protect themselves with shields of rigid viewpoint to push away new concepts and approaches. Sometimes these new concepts and approaches have their own crumbling podiums as well. They too are looking for ways to build themselves up to achieve understanding and a strong, undeniable truth that they hope will still be standing at the end of the day, after all the work that has been done to convince others of their purpose. What if creation itself was not about the individual's desire to create purpose for themselves? We all seek answers that we hope will be the only truth left standing after viewing each individual standpoint. The only other thing we can really know to be true is that we will inevitably die one day out of the body in which we are formed. These two truths are all we have to hold us up. They are our right and left feet, the only things we can be certain of. We all seek answers at some point in our lives, but what are we really asking for? One thing is for sure: individually we are not really seeking an answer to why creation was formed, but an answer to what our purpose as humans is. We want to know what we are to be doing and what we are to be achieving. Every person has the desire to know why they should exist in the first place, not only to acknowledge the beauty of the creation itself but to know why we endure all that we experience. If creation had no real foundational purpose, would we still want to know the truth? The foundational elements of creation would not, in essence, have the same goals as the creation itself. These elements have a place outside the human physical world. They are not the creation but the creator. When people base ideas on creation, they take these foundational elements as the 'creator' and draw on human ideals to form their model of its formation. It makes perfect sense to see creation this way, since we as humans so far know only the basis of this world and what we see with our eyes around us. The only thing that could be seen as lacking is that we cannot see beyond such limited concepts. We base everything on our own self-focus and think we are the center of all, but is this so? If we were to draw on all creation theory and look at creation for its foundation rather than its purpose, how would it then be viewed? It would surely have a basis far outside this singular, limited world that we now know and isolate ourselves within. Would creation not reach far beyond even human reasoning and incorporate all possibility? One thing is for sure: the elements that created us are not us, but the foundational embodiments from which we are made. It would be very logical to see creation in such a way. Such a viewpoint would never change in its approach, because the 'created' is always a product of the 'creator'. This would have to be a simple foundational fact. If this were so, how would we ascertain what this creator is and how creation is formed? If we take a known example from our world of an event that takes place, we know with our logical minds that the effect follows a causal event. When certain things happen in our world, we understand that they make a change in the environment around us. It is thus from the effect that the cause can be understood.
Would we not then know the creator from understanding the components of the created? If we can ascertain what takes place, then it is only logical that we can also ascertain why it takes place. If an event takes place, there is always a reason. This is another undeniable fact, and one that will never change. It would then stand to reason that creation can be known: we are the effect of a cause, and we can learn why it all happened, whether or not it explains why we exist. It is all a matter of understanding the practical nature of creation, of seeing that it is far more than just a story being told. It is far more than a rule that makes us conform within society. It is far more even than popularity. There is a core truth that, when it is found, will stand tall and be understood by all in its core essence, no matter how it is creatively relayed to others. In that core essence, all belief has a foundation. When this core essence is known, then nothing is right or wrong, and all belief can be utilized to its full creative potential. Visit my website at http://creationtheory.weebly.com Article source: http://articlebiz.com
https://articlebiz.com/article/185160-creation-theories
Walter P Moore established an office in India in September 2011. The office elevates the firm's ability to serve clients and projects throughout India in a variety of market sectors, including commercial, residential, healthcare, hospitality, sports, and entertainment. Walter P Moore Engineering India Pvt. Ltd. is a subsidiary of Walter P Moore, with full access to all of its capabilities and resources. Senior Principal and Managing Director Abhijit Shah leads the entity in-country and directs the firm's operations. Our India team is committed to extending the well-established Walter P Moore service capabilities to this emerging marketplace and giving clients access to 90+ years of global project experience. We provide one-stop engineering services to projects of all complexities. Our team has the ability to provide international-quality expertise and location-specific design solutions along with in-country expediency and cost structure. Walter P Moore India is immediately ready to help deliver projects of all sizes in India. We have established several in-country partner firms to enhance our ability to provide international-quality expertise with in-country expediency, local understanding, and cost structure. Go here to find our office contact information.
https://www.walterpmoore.com/india-office
This essay is a part of our series, Borders in the Classroom -- for more information, please see HERE. The Socially Polysemantic Border: Positionality and the Meaning of the Fence Short Title: The Socially Polysemantic Border Abstract: This paper documents the experience of teaching college students how to rethink the border by doing fieldwork in El Paso, Texas. Students were asked to encounter the border fence through, for example, personal visits to a part of the borderline, journaling, photography, writing poetry, or creating multimedia. Classroom discussions before the assignments revealed that many students had not previously taken the time and effort to study their communities from a larger social, theoretical, and historical perspective. This article discusses the initial challenges and the overall pedagogical success of this approach by showcasing some of the student work reflecting on the border fence. The paper includes some of the insights that border residents have about the U.S.-Mexico border between Ciudad Juárez and El Paso. These reflections and testimonies show how various individuals create different social meanings about the border region in general and the border fence in particular depending on their own positionality based on age, gender, ethnicity, language, and immigration experience. The border changes form along its distance and different actors interpret their encounters with it in diametrically different ways. The border is not a moving target but it manifests differently in the lives of border residents. Edited by Benita Heiskanen, Andrae M. Marak, and Jeanne E. Grant (c) 2014 The Middle Ground Journal, Number 8, Spring, 2014. See Submission Guidelines page for the journal's not-for-profit educational open-access policy. Published by the Midwest World History Association (MWWHA), housed at The College of St. Scholastica. See also, The Middle Ground's curated Facebook Page.
https://www2.css.edu/app/depts/HIS/historyjournal/index.cfm?name=The-Socially-Polysemantic-Border:-Positionality-and-the-Meaning-of-the-Fence&cat=5&art=243
In late 2018, Pulse North was approached by Sean Taylor to design the logo for InGAME: Innovation for Games & Media Enterprise, a research and development organisation created with the focus of driving forward innovation in the local Games Development Cluster. The logo was to integrate into the existing Creative Clusters identity, but with its own unique "game-styled" element included. Although the guidelines were very limiting, the Creative Clusters brand did allow for the inclusion of a stylised X symbol to represent the union between the parent and child brands, so it was decided to incorporate a D-pad (directional pad) symbol within the logo, with a slight angled twist to fit within this structure. In early 2019, Pulse North was commissioned to undertake the design and development of the InGAME website, which was to feature an events system that integrated with Eventbrite, as well as a news posting system and client-creatable forms. The site was launched in the spring of that year. The site can be accessed by using the "Visit Site" button or by clicking the accompanying image.
https://pulsenorth.co.uk/portfolio/ingame-logo-design-website/
Join us for #GivingTuesday on December 1! Each year, after the celebration of Thanksgiving, and the hustle of shopping days like Black Friday and Cyber Monday, nonprofit organizations around the world … Read More → Stories from the Road: "Meating" the need for protein Good Shepherd Food Bank's work is made possible through collaborative efforts of donors, staff, volunteers, our partner network, and more. Today, we're showcasing a … Read More → What Can I Expect as a Volunteer?
https://www.gsfb.org/latest-news/
Data Mining Meets Performance Evaluation: Fast Algorithms for Modeling Bursty Traffic Appears in 18th International Conference on Data Engineering, 2002. Supersedes Carnegie Mellon University SCS Technical Report CMU-CS-01-101. M. Wang, T. Madhyastha, N.H. Chan, S. Papadimitriou, C. Faloutsos School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Network, web, and disk I/O traffic are usually bursty and self-similar, and therefore cannot be modeled adequately with Poisson arrivals. However, we do want to model these types of traffic and to generate realistic traces, because of obvious applications in disk scheduling, network management, and web server design. Previous models (like fractional Brownian motion, FARIMA, etc.) tried to capture the burstiness. However, the proposed models either require too many parameters to fit and/or require prohibitively large (quadratic) time to generate large traces. We propose a simple, parsimonious method, the b-model, which solves both problems: it requires just one parameter, and it can easily generate large traces. In addition, it has many more attractive properties: (a) With our proposed estimation algorithm, it requires just a single pass over the actual trace to estimate b. For example, a one-day-long disk trace at millisecond resolution contains about 86 million data points and requires about 3 minutes for model fitting and 5 minutes for generation. (b) The resulting synthetic traces are very realistic: our experiments on real disk and web traces show that our synthetic traces match the real ones very well in terms of queuing behavior.
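The abstract does not spell out the generator, but the b-model is commonly described as a recursive "80/20-style" cascade: at each level, the volume in an interval is split between its two halves in proportions b and 1-b. The sketch below follows that description; the randomization of which half receives the larger share, and the parameter values, are assumptions rather than the authors' exact algorithm.

```python
# A minimal sketch of the recursive b-model cascade (one parameter: b).
# Smaller deviations of b from 0.5 give smoother traffic; b near 1 gives
# very bursty traffic. Runtime is linear in the output length, which is
# consistent with the abstract's claim of cheap generation of large traces.
import random

def b_model_trace(b: float, levels: int, total: float = 1.0, seed: int = 0) -> list[float]:
    """Generate 2**levels time bins of bursty volume via recursive b-splitting."""
    random.seed(seed)
    trace = [total]
    for _ in range(levels):
        nxt = []
        for v in trace:
            big, small = v * b, v * (1.0 - b)
            # Randomly choose which half of the interval gets the larger share.
            nxt.extend((big, small) if random.random() < 0.5 else (small, big))
        trace = nxt
    return trace

trace = b_model_trace(b=0.7, levels=10)  # 1024 bins from a single parameter
print(f"total={sum(trace):.3f} max={max(trace):.4f} min={min(trace):.6f}")
```

Note how the total volume is conserved at every level while individual bins vary over orders of magnitude, which is the self-similar burstiness the paper targets.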
http://www.pdl.cmu.edu/PDL-FTP/Workload/bmodel_abs.shtml
Position Description: Are you motivated by the opportunity to delight your clients by providing them with innovative solutions to complex technical problems? These are exciting times for CGI, and we are looking for talented individuals to trailblaze with us. If you have a pioneering spirit and thrive on innovation where you can influence the direction of technical strategy, now is the time to join this dynamic team. We need to double our capacity to satisfy growing client demand globally and are looking for top talent. Combine the challenges offered by complex technical initiatives with the potential for international travel to gain experience and demonstrate your creativity. The Designer / Implementer role requires an individual with leadership skills who can act as a "client lead", managing the fulfillment, delivery and implementation of proposed solutions within our account team. Further, the candidate will ensure high-quality service delivery of networking & cloud services to the client through the effective management of assigned projects. The ideal candidate will have in-depth knowledge of cloud and data networking design & implementation, specifically in TCP/IP networking.
Your future duties and responsibilities:
• Requirements definition, detailed cloud & network design development
• Act as network subject matter expert: define technical solutions and designs, estimate transition efforts
• Perform cloud & network engineering/integration support for large multi-site network communications systems
• Produce network & cloud architectures/designs based on business needs expressed by clients, in collaboration with service providers
• Systems analyses and performance assessments
• Support the development and delivery of large network communication infrastructure architectures
• Develop and maintain detailed design documents from network & cloud architectures, configuration templates, diagrams and support advisories
• Interface with other senior-level personnel and deliver technical information in a concise, understandable and accurate fashion
• Review network device build books provided by a third-party vendor, and ensure build books are followed during rollout
• Provide trend analysis of current and future demands and troubleshooting
• Develop and deliver concise and logical business-focused technical material
• Develop Cloud and Network Implementation Plans and device configurations
• Execute Cloud and Network Implementation Plans
Required qualifications to be successful in this role:
• Expertise and experience with communication system components (including wide and local area networks, MPLS, IP addressing, Gig-E and ATM technologies, VLANs, routing, OSPF, firewall configurations and data flows, and other infrastructure components)
• Managed security services experience, including site-to-site and remote access VPN
• Extensive knowledge of the telecom industry and network protocols
• Strong IP subnetting and IP management skills required
• Thorough knowledge of public/private IP addressing management and network address translation
• Routing protocols (BGP, OSPF, EIGRP, RIPv1 & v2)
• Switching architecture and protocols (VLANs, VTP, Trunking, Port-channeling, Spanning Tree)
• TCP/IP, summarization
• Network authentication (RADIUS, TACACS+)
• VPN technology (IPSec, key management, client & site-to-site and remote access solutions)
• Firewall technologies (Checkpoint FW1, Cisco, Juniper, Palo Alto)
• Global traffic management, load balancing and failover technologies (F5 BIG-IP GTM/LTM)
• Software-Defined Wide Area Network (SD-WAN)
• Traffic management and shaping experience with Quality of Service
• 4+ years of network experience with Cisco
• 2+ years of network experience with F5
• 5+ years of successful experience with a proven track record in a similar role
Required Level of Education/Expertise:
• University degree or equivalent work experience in communication systems
• CCDP and/or CCNA certification, CCIE preferred
• 8+ years in a data network design role
• 10+ years in a data network implementation role
Nice to have:
• 2+ years of network experience with Palo Alto Networks
• 1+ years of network DNS experience with Infoblox
• 1+ year of cloud experience with Azure/AWS/Google
Soft Skills Required
• Ability to interface with client and engagement teams at the appropriate level
• Strong interpersonal and communication skills (written and verbal)
• Strong prioritization and organizational skills
• Good research/assessment skills
• Effective working without direct supervision
Other Requirements
• This position requires limited travel within the GTA
• This position requires after-hours support during the project implementation phase.
What you can expect from us: Build your career with us. It is an extraordinary time to be in business. As digital transformation continues to accelerate, CGI is at the center of this change, supporting our clients' digital journeys and offering our professionals exciting career opportunities. At CGI, our success comes from the talent and commitment of our professionals. As one team, we share the challenges and rewards that come from growing our company, which reinforces our culture of ownership. All of our professionals benefit from the value we collectively create. Be part of building one of the largest independent technology and business services firms in the world. Learn more about CGI at www.cgi.com. No unsolicited agency referrals please. CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.
https://www.jobtoronto.net/it-tech-support/network-designer-implementer-c44350/
Qualcomm Board Says Broadcom Takeover Deal Is Too Low The Qualcomm board believes the per-share price offered by Broadcom in its $121 billion takeover deal of the San Diego company is too low, and it is concerned about whether the deal would survive antitrust scrutiny, Qualcomm Chairman Paul Jacobs wrote Friday. Jacobs wrote the letter to Broadcom Chief Executive Hock Tan after leaders from the two companies met Wednesday to discuss an offer that Broadcom officials maintain is their "best and final" offer, Jacobs said. The Qualcomm board rejected that offer last week, but Jacobs requested the meeting in an effort to press Broadcom about what it would do to ensure the transaction cleared regulatory hurdles and whether it would budge on the $82-per-share offer. "The board remains unanimously of the view that this proposal materially undervalues Qualcomm and has an unacceptably high level of risk, and therefore is not in the best interests of Qualcomm stockholders," Jacobs wrote. "That said, our board found the meeting to be constructive in that the Broadcom representatives expressed a willingness to agree to certain potential antitrust-related divestitures beyond those contained in your publicly filed merger agreement." However, Jacobs said that Broadcom did not agree to commitments that would likely be expected by regulatory authorities in the U.S., Europe and China, nor did it respond to questions about its plans for the future of Qualcomm's licensing business, which "makes it very difficult to predict the antitrust-related remedies that might be required." A Broadcom spokesman said that the company would send out a statement in response to Jacobs' letter Friday. Like the deal rejected by the Qualcomm board in November, this one would have paid Qualcomm shareholders $60 per share in cash. But the latest offer included an increase in Broadcom stock that would be paid to Qualcomm shareholders — $22 per share, up from $10. Broadcom is incorporated and currently based in Singapore, but Tan announced late last year while visiting President Donald Trump at the White House that the company would return its corporate headquarters to the U.S., using San Jose as a base. Buying Qualcomm would make Broadcom the third-largest chip maker, behind Intel Corp. and Samsung Electronics Co. The combined business would become the default provider of a set of components needed to build each of the more than one billion smartphones sold every year. The company's hostile takeover attempt has come at a vulnerable time for Qualcomm, which has been embroiled in a long-running legal dispute with Apple and is facing several large fines from governing bodies across the globe. The most recent such fine levied against the San Diego company came from the European Union, which accused Qualcomm of breaking the EU's antitrust laws to the tune of $1.23 billion. Qualcomm said it would challenge that fine. If Broadcom is ultimately successful in its takeover attempt, the impact on San Diego could be severe. Qualcomm is one of the few major corporations with a global reach to be headquartered in a city known mainly for tourism, and smaller defense and life-sciences firms. The company is one of the region's largest private employers, and the family of co-founder Irwin Jacobs is one of the area's most generous philanthropists.
https://www.kpbs.org/news/2018/feb/16/qualcomm-board-says-broadcom-takeover-deal-too-low/?utm_content=buffer73c35&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
This project is part of a more comprehensive project developed under the Digital Mures Strategy. The scope of the Digital Mures Strategy is to integrate Tirgu Mures into the global network of local public authorities that provide improved public services to their citizens via ICT. This network brings together local public administrations around the world and connects them into the network of Digital Cities. Discussions on a possible implementation of such a project were driven by the desire to develop integrated, innovative and efficient public services within the city of Tirgu Mures, to increase the quality of public services, to reduce administrative costs for the public sector while also reducing the administrative burden on citizens, to increase the quality of services for citizens, and to contribute to the creation of new jobs in economic activities that provide high value added. These objectives are the ones assumed by the Municipality in the context of the Digital Mures Strategy, as the first pillar of the strategy. The objectives are correlated with those identified by the World Bank in the Functional Review Analysis, conducted by the World Bank in the context of the Memorandum of Understanding signed with the Romanian Government by the IMF, the World Bank and the European Commission in 2009. The second pillar of the Digital Mures Strategy aims to implement the Science City for Research and Technological Development and ICT in the healthcare sector. This technological and research park will attract foreign direct investment (FDI) to the region by capitalizing on the comparative advantages that Tirgu Mures has: an excellent reputation in the healthcare and ICT sectors. As a first step in the development of the Science City concept, Tirgu Mures developed a Competitiveness Cluster for Research and Development in Healthcare, Medical Informatics and related disciplines (LifeTech City), which brings together partners from the business environment, academia, healthcare facilities, NGOs, etc. Fundamental advantages of the structure developed under the Competitiveness Cluster are the synergies developed between the three types of projects: an investment project, namely the construction of a business center; complex R&D projects developed by universities and companies; and soft projects aimed at the sustainable development of the cluster. It is too early to judge the success of the measure.
https://ec.europa.eu/growth/tools-databases/regional-innovation-monitor/support-measure/lifetech-city-pole-competitiveness-medicine-life-sciences-and-medical-informatics
What We Do | Citizens Planning and Housing Association, Inc. Activate Your Inner Citizen: Universities and Workshops to train the next generation of community leaders. Technical support, including GIS mapping and neighborhood planning to solve neighborhood challenges. Access to Tip Sheets, which explain best practices of community organizing and leadership. Printing access for newsletters and community flyers. Access to equipment to borrow, including an LCD projector, laptop, books, megaphones, and other supplies. Conduct research on public transit options to improve bus system efficiency, fair housing in the Baltimore Metro area, and other public policies related to voting and elections, citizen resources, and more. Provide communication support for the Baltimore Regional Housing Campaign and the Baltimore Inclusive Housing Collaboration. Advocacy and organizing around key policy issues for Baltimore City and the region. Weekly email newsletters informing thousands of residents and professionals across the region about important local news, resources for neighborhoods, and opportunities for engagement. Website with regular posts on local news and engagement opportunities related to improving Baltimore City and the region. Coordinate regular meetings of the Baltimore Neighborhood Organizer Network, which is a professional development group for neighborhood-based and advocacy-focused organizers in Baltimore. Meetings include presentations and networking opportunities. CPHA assists Baltimore's government, foundations, and nonprofits in conducting targeted training, outreach, and organizing in specific neighborhoods or for specific projects. Outreach is tailored to each project, and may include: door knocking, phone banking, research, canvassing, presentations at meetings, coordinating meetings, online and email communications, print communications, and more. Projects have included: reducing high infant mortality, foreclosure prevention, organizing tenants, and identifying stewards of gardens in Baltimore City. Contact CPHA to collaborate on your next outreach or organizing project.
http://www.cphabaltimore.org/what-we-do/
This is the second workshop of a 5-part series hosted by the Department of Energy and Environment. The complete series and RSVP links are listed below in the FAQ section.

Project Description & Partners

Looking for a way to fund a community-oriented project idea? Overwhelmed by everything involved in submitting a grant proposal? Want to collaborate with other organizations but don't know where to start? Start here! During DOEE's Project Description & Partners workshop, staff will break down what is involved in a grant proposal, provide tips on how to write a strong project description, and introduce community partners. After seeing a Community Stormwater Solutions grant application, participants will practice writing project descriptions with guidance and feedback from DOEE and partners.

Transportation Options:
- Metro Green Line: Southern Avenue Station
- Bus Lines: 94 (Station Road), 30, 32, 34, 36 (Pennsylvania Ave), & W2, 3 (Southeast Community Hospital-Anacostia Line)
- Free parking on site

Contact: Kara Pennino, Department of Energy and Environment, [email protected], 202-654-6131

Participants will gain the tools and confidence to write government grant proposals at this free hands-on workshop series! This series will focus on DOEE's Community Stormwater Solutions Grant program. These grants provide start-up funding for innovative and community-oriented projects that improve our rivers, streams, and parks, reduce trash, and raise citizen awareness to enhance local water bodies. Participants will work with community partners and gain transferable grant writing skills.

FAQs:

Q: What are the other workshops and how do I sign up?
A: There are four additional workshops that break down how to write a grant application. They are:
- Stormwater 101: RSVP HERE
- Work Plan & Required Documents: RSVP HERE
- Budget & Narrative: RSVP HERE
- You got the grant, now what?: RSVP HERE

Q: Do I need to attend all 5 workshops?
A: Nope! While it is encouraged to attend all the workshops, it is not required. We understand that you live a busy life and spending 5 nights away from home is difficult. All materials and resources will be made public online. However, each workshop will have staff on hand to answer specific questions.

Q: Will the workshops be boring?
A: No way! Each workshop is designed to be engaging and hands-on. Plus, there will be plenty of opportunities to meet new people and learn about projects happening in your community. You might even meet someone you want to collaborate with!

Q: Who is the intended audience?
A: Anyone and everyone who is looking to gain transferable grant writing skills, including individuals, community-based organizations, non-profits, businesses, Parent Teacher Associations, Neighborhood Associations, and many more!

Q: Do I need to have any grant writing experience?
A: Nope! This workshop series is meant for first-time grant writers or those who want to brush up on some skills.

Q: What skills will I gain?
A: You will gain different grant writing skills at each workshop, including how to write a project narrative, create a budget, develop a work plan, and much more! DOEE and partners will be leading all hands-on exercises to help develop your project idea into a reality.

Q: What happens once the workshop series is over?
A: You will be able to use your freshly acquired skills to apply for a Community Stormwater Solutions grant to fund your project! Each project can get up to $20,000!

Q: What is stormwater and why does it need to be managed?
A: Stormwater runoff occurs when rain and snowmelt do not get absorbed into the ground. This happens more in places with impervious surfaces (such as streets, parking lots, driveways, sidewalks, and rooftops). Can you think of a place like this? DC is more than 40% impervious! Stormwater runoff causes problems like pollution, flooding, and erosion. Want to learn more? Join this workshop series!

Q: I don't know anything about the environment. Can I still come?
A: Of course! The Community Stormwater Solutions program is a great way for everyone to get involved and learn about the environment. This grant program started in 2016 with the goal of expanding DOEE's work with community partners by supporting projects that are inspired and supported by the community.

Q: I have more questions. Who do I talk to?
https://www.eventbrite.com/e/free-grant-writing-workshop-project-description-partners-tickets-49140435365
The Czech Republic – the 8th most peaceful country in the world!

Since the beginning of the coronavirus pandemic, personal safety has been a widely discussed topic in world news. The global health crisis has developed rapidly since the start of the year, with governments and societies taking dramatic actions and making far-reaching changes. As the situation forces millions of students to reconsider their Study Abroad plans, it looks like the year 2020 has changed students' attitudes towards their education, and their choice of a country and university in which to study. Personal safety is becoming one of the top factors that young people consider when looking for a Study Abroad program. Your parents are more likely to get on board with the idea of university study abroad if you can reassure them that you plan to study in a safe destination with a stable political system and a low risk of infection. Feeling safe and secure will improve your academic performance and give you the confidence and energy to push your studies to the next level. When you plan to relocate thousands of kilometers away from home to pursue your university program, your family needs to be sure that you will be safe. Fortunately, the Czech Republic has again proved to be one of the most peaceful destinations for international students. This year's research on the Global Peace Index by the Institute for Economics and Peace showed a severe decline in global peace levels. However, despite the challenges that Covid-19 has brought to the European region, the Czech Republic has stayed strong and shows impressive results compared to many neighboring countries. The country jumped two places compared to last year, landing 8th out of 163 countries on the GPI list. The Global Peace Index report is a sophisticated analysis of numerous factors that are considered crucial to a country's stability and security. "The 2020 GPI reveals a world in which the conflicts and crises that emerged in the past decade have begun to abate, only to be replaced with a new wave of tension and uncertainty as a result of the Covid-19 pandemic." The situation that we find ourselves in is genuinely troubling, yet hope remains that the rest of the world will emulate the actions of the governments of the countries that top this list.

The Czech Republic: a proven safe destination for Study Abroad students

Relocating to one of the most peaceful places in the world doesn't mean that you can stop being self-aware and taking sensible precautions. The Czech Republic is a relatively small country with a fast-growing economy and many higher education establishments. While you may not speak Czech, the University of New York in Prague offers a wide range of undergraduate and graduate programs, all of which are taught entirely in English. UNYP's partnership with the State University of New York, Empire State College, means that students from the US can transfer without losing credit. Students from other countries can do the same, provided that their previous studies took place at an accredited university. With the capital city of Prague rated as one of the most beautiful and safest cities in the world, the Czech Republic is a perfect place to study abroad during these unpredictable times. The Czech Republic's safe communities and low crime rates have kept the country at the top of the Global Peace Index list for years, an achievement that was boosted this year by the urgent and effective response to the pandemic from the Czech government and society.
This response was undoubtedly helped by the efficient state-run Czech healthcare insurance system, which is relatively inexpensive and provides almost universal coverage. With growing international communities and a high demand for qualified professionals in areas such as Marketing, Business Administration, Communications & Media, and IT Management, it's easy to see why so many students choose the Czech Republic as their Study Abroad destination. Many UNYP students manage to land a great job offer in the Czech Republic or internationally before they have even finished their program of study. UNYP degrees are recognized worldwide, and will provide you with an excellent foundation for your future career anywhere in the world.
https://www.unyp.cz/news/czech-republic-8th-most-peaceful-country-world
The step pyramid of Zoser: the original pyramid

Who would have thought that a short, 30-kilometre trip to the south-east of Cairo would lead to another, much longer and almost impossible journey… a journey of nearly 4,500 years into the past? The necropolis of Saqqara, the most important burial complex of the ancient city of Memphis — the capital of the Old Kingdom — offers exactly that: the chance to journey back in time to discover, among the numerous treasures of Ancient Egypt, the great step pyramid of Zoser. It is considered to be the original pyramid, the oldest, the forerunner of all the others, and the oldest carved stone monument in the world. In a nutshell, visiting the step pyramid of Zoser and the other monuments of the legendary necropolis of Saqqara is, without a shadow of a doubt, essential for any traveller to Cairo.

- The history of the step pyramid of Zoser or Djoser
- The subterranean world: inside the pyramid
- Saqqara and Zoser's burial complex
- Where to eat near the step pyramid of Zoser
- Where to stay near the step pyramid of Zoser

The history of the step pyramid of Zoser or Djoser

The ancient city of Memphis is said to have been founded by Egypt's first king, the legendary Menes, around 3100 B.C. It was the capital of the kingdom during the Early Dynastic Period (c. 3100-2686 B.C.) and during the Old Kingdom (c. 2686-2181 B.C.), and it remained one of Egypt's most important cities for another three thousand years or more. The city's long existence is demonstrated by the number and scale of the necropolises and burial monuments that can be found in the area today, including major sites such as the Giza plateau, with its three world-famous pyramids, and, as in this case, the necropolis of Saqqara. This burial ground is located further south than the famous cenotaphs of Khufu (Cheops), Khafre (Chephren) and Menkaure (Mycerinus), and is the site of this unique step pyramid, one of Egypt's most iconic monuments. Saqqara's step pyramid was built in around 2650 B.C. by the polymath Imhotep (regarded as history's first architect and engineer, as well as being a doctor, astronomer, and mathematician) as a tomb for the pharaoh Necherjet Dyeser (Third Dynasty, also known as Zoser or Djoser). It marks a historical turning point: a change to royalty being buried in pyramids rather than in mastabas (flat-roofed, rectangular structures). This step pyramid is in fact a series of mastabas of decreasing size, placed one on top of the other. Now it may seem like a whim or a flight of fancy by the erudite Imhotep, but it was, at the time, a revolutionary change in many respects. Set within a walled burial ground containing other temples and symbolic monuments, this imposing structure, built of stone (unlike the traditional mastabas, which were made of adobe), measures 140 metres by 118 metres at its base. With six levels, and once reaching 60 metres in height, it set the architectural model for subsequent pyramids.

The subterranean world: inside the pyramid

The basic function of the traditional mastabas was to cover underground passages and burial chambers.
The ‘new’ pyramid maintained these customs, and beneath it lie a series of pits and galleries where archaeologists found various types of stores, tombs of members of the pharaoh's family and, of course, the burial chamber of Zoser himself, covered with great blocks of granite. Thus, a network of rooms, passages and galleries (some interconnected, and others not) leads to an extraordinary subterranean world, including areas richly adorned with blue and green ceramic tiles (faience), walls decorated with bas-relief, ancient construction features, and more. After a long restoration and adaptation process, the chambers and galleries of the step pyramid of Zoser have been reopened to the public, and now visitors can even access the main chamber, containing the pharaoh's sarcophagus.

Saqqara and Zoser's burial complex

The great step pyramid is the most important monument of both the Saqqara necropolis and the Zoser burial complex, but it is far from being the only one.

The walled complex

The step pyramid stands within a huge, rectangular, walled enclosure measuring around 550 by 280 metres. Access is by means of a door at the south-eastern corner (there are fourteen other false doors around the walls), which leads to a kind of passage, or colonnade, with 40 columns set in two rows, opening into the great South Courtyard.

The South Tomb

Not far from the entrance stands this structure, which seems to have been an additional, symbolic tomb for Zoser, possibly reflecting his status as king of both Upper and Lower Egypt. Its magnificent frieze is crowned by cobra heads, symbolising protection and power, and the burial chamber inside is very similar to the one below the step pyramid, as are the exquisite relief decorations portraying the pharaoh taking part in the feast of Heb Sed.

The Heb Sed courtyards and chapels

One of the unique features of the Zoser burial ground is the complex of courtyards and false ‘chapels’ or shrines that recreate the scenario of the Sed festival. This was a royal ceremony believed to rejuvenate the king and regenerate his power, thus ensuring that the pharaoh, even after death, would continue to be rejuvenated forever. Other structures to be found within the splendid Zoser burial complex include the remains of the Temple and the three columns on the eastern side of the wall, the buildings known as the Houses of the South and the North, the serdab (or Ka chamber) which once contained the statue of the pharaoh (now in Cairo's Egyptian Museum), and the ruins of the North Temple. Apart from the pharaoh's burial ground, the Saqqara necropolis is home to many other historic structures, such as the tombs and mastabas of various pharaohs of the First and Second Dynasties; the Serapeum, a remarkable burial place for the sacred Apis bulls; and the pyramids of Sekhemkhet, Teti, Userkaf, Unis and other kings of the Fifth and Sixth Dynasties. All in all, a fascinating open-air museum.

Where to eat near the step pyramid of Zoser

Between the necropolis of Saqqara and the neighbouring town of Mit-Rahineh, visitors can find various options in terms of eateries, from small local cafés to tourist-orientated restaurants. However, an ideal choice if you want to sample the finest Egyptian and international cuisine is to head for any of the three restaurants at the Barceló Cairo Pyramids hotel (https://www.barcelo.com/es-es/barcelo-cairo-pyramids/), barely 4 kilometres from the other great necropolis of ancient Memphis, the Giza plateau, presided over by its three extraordinary pyramids.
Where to stay near the step pyramid of Zoser

Likewise, the Barceló Cairo Pyramids offers everything required for a first-rate stay in Cairo, very close to the Giza pyramids and, of course, to the spectacular step pyramid of Zoser. With its 236 spacious, comfortable and fully-equipped guest rooms, an outdoor swimming pool, a panoramic terrace with stunning views of the pyramids, and a full range of facilities and services, this is the ideal place to enjoy an unforgettable holiday in the heart of Ancient Egypt.

Frequently Asked Questions

Who built the step pyramid of Zoser?
Imhotep, the renowned engineer, architect, doctor, mathematician and royal astronomer, was responsible for building this imposing burial monument on the orders of the pharaoh Necherjet Dyeser, also known as Zoser or Djoser.

How do I get to the step pyramid of Zoser?
Located about 40 minutes to the south of the Giza pyramids, the necropolis of Saqqara is easily reached by car. Of course, any number of tours and guided visits operate from Cairo and the surrounding area to take you to explore this extraordinary monument.

Can you visit the pyramid?
Yes: you can see the outside, and you can also go inside. The great Zoser burial complex, with all the underground chambers and galleries that lie beneath the step pyramid, is open to the public.
https://www.barcelo.com/guia-turismo/en/egypt/el-cairo/things-to-do/pyramid-of-zoser/
The daily cuts of racial microaggressions make it difficult for communities of African descent to stand confident in their identity. As a result, the unhealed wounds put this community at risk of mental health challenges. This workshop hopes to build awareness around the real impacts of racial microaggressions and ways to begin healing these wounds.

About Simone:
Simone Donaldson is a consultant, therapist, and founder of Agapé Lens Consulting and Therapy, with over 12 years devoted to mental health, racialized communities, and youth. She offers consultations to private, non-profit, and public sectors to help guide and implement equity, cultural humility, anti-Black racism, and mental health and wellness education through program development, workshops and training, leadership coaching, staff development and speaking engagements. She also provides psychotherapy and counselling, prioritizing Black youth 12-24 years old, through a friendly, honest and mindfully present approach. Her sessions are embedded with a cultural and strength-based practice to elicit joy and hope as individuals tap into internal and external resources to support their healing journey. Agapé Lens Consulting and Therapy is grounded in the "Love Lens": an Afrocentric practice informed by anti-Black racism, attachment, holistic and trauma-informed lenses. Her goal for consultations, training, and therapy is to patiently and safely guide individuals and groups through their journey to experience sustainable change. Simone believes true healing manifests when we become our most authentic selves, allowing us to thrive and live out our purpose. Please note this event will be recorded.

About The Africa Centre
The Council of Advancement of African Canadians in Alberta (CAAC), operating as "Africa Centre", is a charitable organization based in the most northern city of the global north. Using a Pan-African approach, Africa Centre works with diverse communities of African descent in Alberta. The organization strives to bring African diversity of heritage, culture, and contributions to building a stronger community in Alberta, Canada. Our mandate is to create a thriving community with full participation in all aspects of life while maintaining the cultural and heritage attributes of African identity. The Centre strives to deliver this mandate through community engagement, empowerment, and the preservation of traditions and cultural heritage.

About TAIBU
TAIBU Community Health Centre (CHC) is a multidisciplinary, not-for-profit, community-led organization established to serve the Black community across the Greater Toronto Area as its priority population. We are located in the Malvern neighbourhood of Scarborough in Ontario. TAIBU also serves all the residents of the Malvern neighbourhood, bounded by McCowan Rd to the west, the Pickering town line to the east, Highway 401 to the south and Steeles Avenue to the north. TAIBU is a Kiswahili word used by well-wishers as a greeting that means "Be in Good Health". The name encapsulates the vision of TAIBU, which is promoting "healthy, vibrant and sustainable communities creating our own solutions." TAIBU Community Health Centre provides comprehensive primary healthcare in combination with health promotion programs and activities. We also work in close partnership with other community-based health and social services.

Note: The information gathered from this event may be used in the future to provide various services to support Black youth.
Your name will not be released at any time, and if you do not wish to provide information, you may inform the moderators at any time during the event.
https://www.eventbrite.ca/e/what-are-microaggressions-tickets-142023007639
Mowing is a favorite hobby of many gardeners. It is also a job that needs doing before spring and summer arrive. However, some people have difficulty cutting the grass evenly, usually because they don't know the right lawn-mowing techniques. If you mow the grass correctly, the lawn will be thicker and healthier.

The garden is one of the favorable habitats for all kinds of bees. The bees see the garden as a safe zone where they can build a nest and store their food. On the other hand, bees bring some benefits to the garden as well. Is attracting the bees and creating a bee-friendly garden

Spring lawn care helps your lawn look fantastic. With the right routine, you can do your best with a little time. However, some people don't know what to do. Are cutting the grass, watering, and distributing fertilizer enough? Don't worry. In this article, you can learn a perfect process. Just need a few minutes, you

To help your lawn grow better, you may need to apply fertilizer to it. Fertilizing a lawn is not very difficult. There are four pieces of expert advice you need to pay attention to when preparing to apply fertilizer to your lawn. This article will show you how to apply fertilizer to a lawn. 4 Tips

Watering your garden is one of the easiest and most effective ways to take care of the grass. Water helps keep the moisture at the proper level. Thanks to water, the lawn stays strong even in a drought summer. Everyone knows the importance of watering. However, what about how to water your lawn?

Some people consider mowing a lawn an unpleasant chore, whereas others see it as a way to beautify their houses or gardens. When people follow the instructions well, mowing will support green and healthy grass and reduce weeds and bare spots. This article will show you the instructions on how to mow a lawn.

Soil, which provides nutrients and serves as a foundation for plants, is made up of minerals, organic matter, air, and water. Testing soil is one of the most significant steps in preparing for planting. To do this, gardeners should follow steps including a ribbon test, a worm count, and a drain test. This article will show you

Have you ever asked yourself what the benefits of lawn aeration are? People usually aerate in the fall. This preparation helps strengthen the lawn. As a result, the lawn will grow evenly no matter how bad the climate is. Moreover, aeration also helps remove thatch and improve the quality of the soil.

Finding the best lawn mower for large, uneven ground is a demanding mission. With the right machine, ground with ups and downs cannot hamper your work anymore, however good your operating technique is. Driving on uneven ground is challenging. To deal with uneven ground, some people decide to reshape it. However, it

A John Deere mower that dies when engaging the blades is a huge inconvenience. Although today's lawnmowers are much improved, they may have some problems when operating. Imagine how annoyed you would be if the lawnmower worked unevenly. Without regular maintenance, the engine can suddenly stall while you cut the grass.
https://mowerplaza.com/author/ballina
The campaign is being led by plastics recycling body Recoup, which aims to make the public more aware of household collection services, which will hopefully increase the amount of plastic recycled. There will be a display trailer at Epworth Market Place on Tuesday, September 9, which will promote the campaign. A range of information will be available, aiming to widen people's knowledge of recycling and the different systems available. The campaign will help the council promote its new recycling initiatives: from 29 September, residents can recycle items such as juice cartons and bottle tops, previously not allowed in burgundy bins. Councillor Nigel Sherwood, cabinet member for Highways and Neighbourhoods, said: "This campaign will be a great opportunity for the people of North Lincolnshire to learn about the different recycling options available. "Besides making people more aware, the campaign will also help the region increase the amount of plastic which is recycled, therefore helping North Lincolnshire to become a more efficient area and reduce the amount of waste that goes to landfill. "This is part of a package of improvements we are introducing that also includes refurbishing all of our household recycling centres and introducing house clearance permits. "By making it easier for people to recycle, we hope that more people will take part." For more information regarding the campaign, visit www.northlincs.gov.uk.
https://www.lincolnshireworld.com/news/pledge-4-plastics-campaign-visits-epworth-2269304
Pseudo-archaic jargon aside, the modern translation is, simply, "if it harms none, do what you will/wish/aim to do." Nowhere here does it say to "harm none", i.e. that harm is not allowed. Rather, it states quite clearly that if what you aim to do causes no harm, then you are allowed to do that which you aim to. The Wiccan Rede is a permissive counsel: it does not tell you what you cannot do, only what you can do. As for the times when what you aim to do will cause harm, you alone must decide if that harm is justified and if you can live with the consequences (because all actions, especially magickal ones, have consequences). Of course, in stating that actions that do not cause harm are allowed, there is an implication that these are the types of actions that are preferred, but this is left to the inference of the Witch: after all, you're the one who has to live with the results of your decisions. Of course, actions that obviously will or could cause harm, or some sort of negatively perceived consequence, are a given in the discussion of personal responsibility and morality, but what about our actions that are based on good intentions? It's generally accepted that working magick for or on someone without their permission is a big no-no. Reasons cited for this often include not wishing to interfere with anyone's free will, or with larger universal forces that may be at work in that individual's life, such as karmic situations, the will of the Gods, or lessons that the individual must learn and cannot learn any other way. And all of these are very good reasons for not doing so, but this caution and consideration seem to be forgotten when it comes to magick that we perceive as being in that person's best interest. After all, if someone is stating that they are in pain or are afraid or unsure of what to do, they must be looking for help, right? Right?? Actually… no. Most times, all anyone is really looking for is a sympathetic ear, someone to take the time to hear them out and show that they care, to remind them that they're not alone in their struggles. They're not looking for someone to fix all their problems or offer advice, just someone to let them vent their frustrations at life and thus help them relieve that stress. And this can be hard to do. We naturally want to help, to take the pain away, to make everything okay again, but unless asked to do so it's not our place to try. Before tossing out an offhand "sending positive energy" or "will light a candle for you" or "blessings and prayers to the Goddess that everything will work out", take a moment and consider whether such actions would truly be appreciated. Just as a Pagan may not appreciate a Christian praying for them in tough times (even if the prayers are well intentioned), so too may another Pagan or Witch not appreciate the purposeful direction of energy at them by another Pagan or Witch. Remember, there is no good or bad when it comes to energy: it's just neutral. So, regardless of what your intention may be, curse or bless, energy sent without permission is magick worked without permission. Remember, many Witches work very hard to guard themselves energetically, shielding themselves and their homes and strengthening those shields on a regular basis; what one may consider a blessing, another may see as unwanted energetic interference.
would be viewed as appalling, as care and teaching of children is seen as the responsibility only of those to whom the children belong, and the elderly are to be cared for also by those whom they spent their lives caring for. You take care of your own, not expecting someone else to step in when you don’t want to, and you sacrifice for those who have sacrificed for you.
https://www.ladyalthaea.com/every-day-is-magickal/magick-and-morality
CHICAGO -- Joakim Noah is at his happiest on a basketball court. The emotional big man loves talking to his teammates and opponents, and he relishes being the center of attention when he takes the floor. When he's at his best, he plays with the type of fire and passion that is much easier to find in a high school gymnasium, not an NBA palace. That's why last season was so difficult for the 30-year-old center. The exuberance that has defined his nine-year career went missing for long stretches. After coming in fourth in the MVP voting two years ago, Noah looked like a shell of his old self following offseason knee surgery. He wasn't moving the same way, he wasn't feeling the same way. For a player who thrives off energy and positive vibes, Noah looked miserable at times while playing on a knee that was still hurting and in an offensive system still adjusting to the reemergence of Derrick Rose and the introduction of All-Star Pau Gasol. For a player entering the final year of a $60 million contract and hoping to cash in on the monster amounts of money coming into the league this summer thanks to the new TV deal, this was not the way Noah wanted to enter his impending free agency year. But that's what makes the past week so intriguing for both Noah and the Bulls. Noah looks and sounds like his old self again in practice. He's bouncing around the floor and talking the way he did when he had the most success of his career. Finally, Noah looks as happy as he feels again. "Jo's been awesome," Bulls coach Fred Hoiberg said recently. "His energy, he's been a great leader out here. He's knocking down shots right now. Offensive rebounds, he's finishing with explosiveness. He's been, I'd say, one of the top guys in camp so far." So what happened? Noah spent much of the summer working out in Santa Barbara, California, at P3 The Peak Performance Project, an athletic training facility that, at least in part, is known for helping athletes get back on track after various injuries. Former Bull Kyle Korver swears by the facility. Alongside his trusted friend and trainer Alex Perris, Noah spent much of the offseason in Santa Barbara trying to get his body back in order. The difference in his game early in training camp has been noticeable to teammates who watched him hobble through last season. "He feels a lot better," Bulls power forward Taj Gibson said. "You can tell by how he's jumping, his pop. You can tell he put a lot of work in over the summer. Sometimes with injuries like he had, it takes a full year to really recuperate. You can tell he put in a lot of work, you can tell he's moving back the way he's normally moving, he's attacking the rim with force. He's been having a great couple days in camp. I look forward to him having a good early start." The fact Noah's body never appeared to heal completely from the knee surgery was apparent and backed up by the numbers. Noah averaged 35.3 minutes a game in the 2013-14 season compared to just 30.6 in 2014-15, thanks in part to a front office mandate that Noah was not to exceed 32 minutes a game during the regular season. Two seasons ago, Noah averaged a career-high 10.0 field-goal attempts a game, according to research from ESPN Stats and Information.
A year ago, he averaged just 6.4 shots a game and looked lost on offense throughout much of the season, especially in the playoffs. Noah averaged just 4.8 points a game in the paint a year ago, his lowest mark since his rookie season. Two years ago, he averaged 7.8 points in the paint. The "tornado" jumper that had been successful for him two years ago went missing as well. Two years ago, Noah shot 55.9 percent (19-for-34) from the field from 20-24 feet, according to ESPN Stats and Information research. Last season, he shot just 42 percent (8-for-19) from the same range. The larger issue for Noah, and one which may underscore the trouble he had moving around the floor, comes when looking at defensive win shares. Two seasons ago, Noah had 6.6 defensive win shares, which led the NBA, according to statistics from basketballreference.com. Last season, Noah had just 3.1, good for 35th in the league. As a whole, the Bulls were a much poorer defensive team than they had been in years past, but throughout his time in Chicago, it has been Noah who has set the tone for his team, especially on the defensive end. In order to fix the agility issue, Noah also started doing more yoga to help with his movement. "I just feel bouncier, just lighter on my feet," he said. "Just waking up in the morning and moving good, that's a good feeling. Doing a lot of yoga every morning before I come in. Just taking care of myself a little different. This isn't my first rodeo." Besides feeling better, Noah also has to find a way to play better with Gasol, whom he helped recruit last summer to come to the Bulls. Noah never seemed to click with Gasol a year ago, and the pair looked unsure of their positions on the floor. While Hoiberg hasn't yet committed to starting both in the regular season, he did sound confident that Noah and Gasol could function much better together. "The big thing with him is being a little bit more patient when he's on the baseline," Hoiberg said of Noah. "As opposed to just flashing as an action is going on where he could muddy up the spacing a little bit. But Jo is a very good playmaker, and you have to utilize him in that role. But I think he and Pau have been good together so far, and I'm excited to get those two on the floor together once we start playing preseason games and I know those two guys are as well." Gasol likes what he has seen from Noah up to this point as well. When asked about his pairing with Gasol, Noah was quick to defend the duo's potential effectiveness even if it's still a work in progress. "I think we should give it an honest evaluation while I'm healthy," Noah said. "Last year, I wasn't healthy. Let's see how it goes and then coaches can make a decision from there." Noah is perceptive. He knows many believe his best days might be behind him. He knows he's in a contract year and has to prove to the Bulls -- and the rest of the league -- he can still play at a high level. He knows what's on the line. But as he gets set for his ninth season, Noah is trying to do what he couldn't do throughout much of last year: have fun. His body is in a good place again, and he hopes his game will follow suit. "I'm just enjoying and embracing the moment," Noah said when asked about his upcoming free agency. "Enjoying it day by day. I'm not worried about the future or past. I'm just trying to stay focused on the moment."
http://www.espn.com/blog/chicago/bulls/post/_/id/22337/bouncier-noah-ready-to-bring-fun-back-to-game
The list: No. 55
This Kensal Green-based venue boasts a menu that is updated weekly. Previous menus have featured dishes including brill with hazelnut hollandaise, Yukon golds and rainbow chard, plus sea bass fillet with butternut squash, spinach and sage. For meat lovers, previous menus have included meals such as Tamworth pork belly with celeriac, quince and mustard, as well as Shorthorn sirloin steak served with bone marrow butter, chips and salad. The pub has formed its own ‘secret diners club’ with undercover guests who provide honest and constructive feedback to improve diners’ experiences.
https://www.top50gastropubs.com/pub-profile/parlour/
An easy operating pathogen microarray (EOPM) platform for rapid screening of vertebrate pathogens
BMC Infectious Diseases, volume 13, Article number: 437 (2013)

Abstract

Background: Infectious diseases emerge frequently in China, partly because of its large and highly mobile population. Therefore, a rapid and cost-effective pathogen screening method with broad coverage is required for prevention and control of infectious diseases. The availability of a large number of microbial genome sequences generated by conventional Sanger sequencing and next generation sequencing has enabled the development of a high-throughput high-density microarray platform for rapid large-scale screening of vertebrate pathogens.

Methods: An easy operating pathogen microarray (EOPM) was designed to detect almost all known pathogens and related species based on their genomic sequences. For effective identification of pathogens from EOPM data, a statistical enrichment algorithm has been proposed, and further implemented in a user-friendly web-based interface.

Results: Using multiple probes designed to specifically detect a microbial genus or species, EOPM can correctly identify known pathogens at the species or genus level in blinded testing. Despite a lower sensitivity than PCR, EOPM is sufficiently sensitive to detect the predominant pathogens causing clinical symptoms. During application in two recent clinical infectious disease outbreaks in China, EOPM successfully identified the responsible pathogens.

Conclusions: EOPM is an effective surveillance platform for infectious diseases, and can play an important role in infectious disease control.

Background

The frequent invasion of microorganisms, including viruses, bacteria, fungi, parasites, and other eukaryotic and prokaryotic organisms, has threatened and will continue to threaten the life and health of humans and other vertebrates. In recent years, mutant or new forms of some existing pathogens have been identified as the causative agents of a number of outbreaks that have endangered public health in China. Severe acute respiratory syndrome (SARS), caused by a coronavirus, spread throughout Guangdong Province in 2003, followed by a worldwide epidemic. During the epidemic, 66% of the SARS cases were reported in China, resulting in 349 human deaths. In 2007, an outbreak of hand, foot, and mouth disease (HFMD) infected 1149 persons and caused the death of three children in Linyi City, Shandong Province, China. The 2009 influenza A (H1N1) pandemic affected more than 154,000 human patients, leading to 842 deaths in China alone. Because of its large and highly mobile population, the emergence of infectious diseases in China is relatively more frequent. Therefore, a system implemented by the medical community and government for the monitoring of pathogens that could have a significantly negative impact on public health is urgently required in China. China has an established hospital-based surveillance system for infectious diseases. All clinical and hospital reports of both suspected and confirmed cases of notifiable infectious disease must be sent to local Centers for Disease Control (CDC). The information is then sent to the China CDC headquarters in Beijing through the National Infectious Diseases Monitoring Information System Database, which was established in 2004. The hierarchical administrative organization of the surveillance system ensures a rapid and efficient upward flow of epidemic information.
Based on this system, the development of effective diagnostic platforms can greatly enhance the prevention and control of infectious diseases in China. The predominant techniques for identification of microbial pathogens depend on conventional clinical microbiology monitoring approaches. Although well established, these approaches usually require culture of the pathogens, followed by susceptibility tests, which are time-consuming and laborious. In addition, many microbes are difficult to culture, and may be undetectable by culture-based approaches. Molecular approaches for microbial surveillance and discovery have emerged as a very promising alternative for early diagnosis of infectious diseases. Currently, molecular approaches include traditional Sanger DNA sequencing, polymerase chain reaction (PCR), oligonucleotide microarrays, and next generation sequencing (NGS). Among these four technologies, the former two can identify a few known pathogens that must then be confirmed individually, and thus cannot cover a wide range of pathogens. The latter two methods cover a broad range of pathogens, and are therefore suitable for identifying unknown or even novel pathogens in infectious outbreaks. Although NGS produces the most in-depth, unbiased information, and can reveal completely novel organisms, it is time-consuming and expensive, especially for the analysis of complex samples. DeRisi and colleagues developed the first generation of microarray platform, called ViroChip, to detect a wide range of viruses. In 2003, the ViroChip helped to characterize SARS as a novel coronavirus. Since then, ViroChip has also been used to detect a human metapneumovirus, a novel influenza virus, and a novel adenovirus. More recently, GreeneChip and MDA microarrays have been developed, which are broader spectrum approaches that can detect several thousand pathogenic viruses, bacteria, fungi, and protozoa [12, 13]. The aforementioned three platforms all used long oligonucleotide probes and random amplification of nucleic acids. In this study, we report the construction of a high throughput pathogen microarray platform, named Easy Operating Pathogen Microarray (EOPM), for large-scale pathogen surveillance and discovery in China. The platform uses technical features similar to previous methods, but will be more useful for clinical applications because of its user-friendly analysis software. The EOPM was designed based on the latest versions of nucleic acid sequence resources for microbes. Clinical application of the microarray system confirmed that it can correctly identify the pathogens responsible for infectious disease.

Methods

Collection of nucleic acid sequences of vertebrate pathogens

Release 111 of the European Molecular Biology Laboratory (EMBL, http://www.embl.org/) database (March 2012) was used to establish our vertebrate viral sequence database. The terms at the family level that describe the host as a vertebrate animal were extracted from the "Virus Taxonomy List 2012" (http://ictvonline.org/virusTaxonomy.asp?version=2012), compiled by the International Committee on Taxonomy of Viruses (ICTVdB). We only considered viruses under these taxonomy nodes. We also downloaded the sequences of fungi and parasites from EMBL. 18S rRNA sequences were extracted using the CDS tag. Finally, we obtained bacterial 16S rRNA sequences from the Ribosomal Database Project (RDP 10.28, http://rdp.cme.msu.edu).
The final integrated dataset included 1,358,528 viral sequences representing complete and partial viral genomes, 2,110,258 bacterial 16S rRNA sequences, 621,351 fungal 18S rRNA sequences, and 1,735,744 18S rRNA sequences from parasites. The EOPM chip distinguishes all 2,554 known vertebrate virus species (involving 151 genera, 36 families), 124 bacterial genera (involving 53 families), 38 fungal genera (involving 17 families), and 47 genera of parasites (involving 24 families). Considering that bacterial 16S rRNA genes show a relatively high level of homology, and that bacteria require the presence of active virulence genes for pathogenesis, 58 virulence genes were selected, including rfbE, slt-1, ipaA, and katG, and probes were designed against these gene sequences.

EOPM chip design and fabrication

The basic design of the viral probes included as many different genomic target regions as possible for each species of vertebrate virus in the EMBL database. First, probes were targeted to conserved regions in areas encoding the structural proteins. The protein families database (Pfam, http://pfam.sanger.ac.uk/) of multiple sequence alignments was used to cluster the functionally related sequences. The regions tagged as 5′ UTR, 3′ UTR, and LTR were also extracted and used as candidate sequences for the subsequent probe design. Second, candidate probes were screened according to the following criteria: a length of 60 nt, no repeats exceeding a length of 8 nt, no hairpins with stem lengths exceeding 10 nt, GC content between 30–70%, and Tm from 60–80°C (a minimal code sketch of this screening step follows this subsection). Third, we used BLAST analysis to select the conserved viral probes at the genus level from all of the candidate probes. The extent of conservation was evaluated for each probe, and all were found to detect the majority of species in each genus. A target species was considered to be represented if a probe matched it with at least 75% sequence identity. Probes conserved at the genus level were selected based on a flexible threshold, because the sequence conservation between species belonging to different genera is quite variable. Finally, we aligned the sequences of all the candidate probes against the nt database, downloaded from the NCBI FTP site in August 2012. Probes with high sequence similarity to non-target genomes were eliminated. Both species-specific and genus-conserved probes were included in the final probe set. The identification of bacterial, fungal, and parasite probes was similar, but focused only on the 16S and 18S rRNA sequences. In addition, probes were also designed to target 1160 host immune response genes as a potential index of pathogenesis. The 60-mer oligonucleotide probes were synthesized on a 75 mm × 25 mm glass slide using an inkjet deposition system (Agilent Technologies, Palo Alto, CA). A total of eight sub-arrays with 60,000 distinct 60-mer probes per slide were customized. All hybridizations involved a fluorescently-labeled synthetic oligonucleotide complementary to a positive control probe, which was replicated in more than 4,000 spots scattered across different zones of each sub-array. This ensured that signals appeared in every zone of each sub-array, facilitating data extraction from the hybridization images.
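To make the screening criteria concrete, here is a minimal sketch in Python. This is not the authors' code: the function names are hypothetical, "repeats" are interpreted here as homopolymer runs, the Tm value uses a crude GC-based approximation (the paper does not state which Tm model was used), and the hairpin and BLAST cross-reactivity screens are omitted.

```python
# Minimal sketch of the candidate-probe screening criteria described above.
# Hypothetical helper code, not from the paper.

def has_long_run(seq: str, max_len: int = 8) -> bool:
    """True if any single-base run exceeds max_len nucleotides."""
    run, prev = 0, ""
    for base in seq:
        run = run + 1 if base == prev else 1
        if run > max_len:
            return True
        prev = base
    return False

def passes_filters(probe: str) -> bool:
    probe = probe.upper()
    if len(probe) != 60:                        # fixed 60-nt probe length
        return False
    gc_count = probe.count("G") + probe.count("C")
    if not 0.30 <= gc_count / 60 <= 0.70:       # GC content between 30% and 70%
        return False
    if has_long_run(probe, 8):                  # no repeats longer than 8 nt
        return False
    tm = 64.9 + 41 * (gc_count - 16.4) / 60     # rough Tm estimate for long oligos
    return 60 <= tm <= 80                       # Tm between 60 and 80 degrees C

# Example: screen a list of candidate probe sequences.
candidates = ["ACGT" * 15, "A" * 60]
selected = [p for p in candidates if passes_filters(p)]
```

In the published pipeline, probes passing these filters were then subjected to hairpin screening and BLAST alignment against the NCBI nt database to remove cross-reactive candidates, as described above.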
Sample preparation and EOPM hybridization

Microbial nucleic acids were extracted from serum, plasma, throat swabs, nasal lavage, feces, cerebrospinal fluid, and other body fluids using a TIANamp Virus DNA/RNA Kit (TIANGEN Biotech, Beijing, China). The carrier RNA supplied with the kit was used to aid the extraction of low-molecular-weight viral nucleic acid. The kit can be used to extract nucleic acid from both RNA and DNA viruses (such as adenovirus), as well as from bacteria, fungi, and parasites. A previously described random PCR amplification strategy, with minor modification, was applied to amplify the extracted nucleic acids and label the amplified products with fluorescent dye. In brief, the first cDNA strand was reverse transcribed with a random decamer heeled with a PCR primer (5′-GTTTCCCAGTCACGATCNNNNNNNNN-3′). The first-strand cDNA was then synthesized into double-stranded DNA using the same primer and Klenow DNA polymerase (Takara, Dalian, China). Double-stranded cDNA from both patients and normal controls was PCR amplified using the heel primer. The resultant PCR amplicons were then purified and labeled with Cy3-dCTP or Cy5-dCTP for the normal controls and patient samples, respectively, using Klenow polymerase (Takara). Labeled DNA was mixed with 60 μl of hybridization buffer and added to the 8 × 60,000 EOPM arrays for hybridization overnight at 65°C in a hybridization oven (Agilent). The EOPM arrays were then washed with 2× SSC, 0.005% Triton X-100 at room temperature for 1 min, followed by a second wash with 0.2× SSC at 37°C for 1 min. The arrays were then scanned using a dual-laser scanner (Agilent), and the images were extracted and analyzed using Feature Extraction software (Agilent).

EOPM data analysis

The normal distribution of microbes in the human body should be considered when using EOPM to identify pathogens that are responsible for obvious clinical symptoms. We used two strategies to eliminate the background of normal microflora. Firstly, at the experimental level, we always compared the suspected clinical sample with a normal sample of the same type, i.e. serum vs. serum or feces vs. feces. Secondly, at the database level, we compared clinical samples with samples of the same type from a database that included more than 30 different samples from a normal population, such as serum, feces, cerebrospinal fluid, and throat swabs. The second strategy may avoid unexpected issues in the experimental normal control. Under this approach, each clinical sample was first compared with a normal control, and then with the normal sample database, so that potential pathogens could be identified based on their increased abundance relative to normal human samples. To facilitate the application of EOPM in multiple surveillance sites for infectious diseases, we designed software with a user-friendly interface, supported by a statistical analysis method based on a comprehensive microbial sequence identification database. In microbial diagnostic microarrays, only a few probes are designed for each targeted microbe, and each probe should be confirmed with specific positive and negative samples. In pan-microbial microarrays, many probes are designed for one pathogen, and there is no way to confirm each probe individually. However, the majority of the probes targeting an expected pathogen are likely to be positive, and unlikely to hybridize with other, non-target microbes. We applied a hypergeometric distribution to calculate a p-value for each species as an assessment of statistical significance. Whether a pathogen was significantly present was determined using a complex interpretation method. The formula of the hypergeometric distribution function is as follows:

P = \sum_{i=m}^{\min(n,\,M)} \frac{C_M^{i}\, C_{N-M}^{n-i}}{C_N^{n}}

where C stands for the combination formula; N is the total number of microbial probes on an array; M is the number of probes for a target microbe; n is the number of probes whose intensity is positive on an array; and m is the number of probes whose intensity is positive for a target microbe. The probes were ranked by the signal of the Cy5 fluorescent dye that was used to label the patient sample. In the user interface of the EOPM software, the proportion of probes called positive can be chosen by the user according to the sample type. A small p-value indicates that there is a very low likelihood that a mistake has occurred in the multi-probe analysis, and correspondingly, a high probability of the existence of the target microbe. Finally, the p-value is adjusted using Benjamini and Hochberg's FDR correction. Because the probes were designed at both the species and genus levels, results are given accordingly. In EOPM analysis, when there were at least three positive probes for a specific species of pathogen and an enrichment p-value < 0.01, the given species could be considered positive for further investigation, including clinical symptom coincidence analysis. A minimal code sketch of this enrichment calculation follows.
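The sketch below illustrates the per-species enrichment test in Python, using the paper's notation. It is not the authors' software: the function names are hypothetical, and probe-level positive calls are assumed to have already been made from the ranked Cy5 signals.

```python
# Minimal sketch of the per-species enrichment test described above:
# N total probes, M probes for the target microbe, n positive probes
# overall, m positive probes for the target microbe.
import numpy as np
from scipy.stats import hypergeom

def enrichment_pvalue(N: int, M: int, n: int, m: int) -> float:
    """Upper-tail hypergeometric probability P(X >= m)."""
    # scipy's hypergeom takes (population size, success states, draws)
    return hypergeom.sf(m - 1, N, M, n)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment of a collection of p-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    # enforce monotonicity from the largest rank downward, cap at 1
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1].clip(max=1.0)
    out = np.empty_like(adjusted)
    out[order] = adjusted
    return out

# Toy example: 60,000 probes on the array, 1,200 called positive;
# one species has 40 probes, 18 of which are positive.
p = enrichment_pvalue(N=60_000, M=40, n=1_200, m=18)
adjusted = benjamini_hochberg([p, 0.4, 0.03])
# Per the paper's criterion, a species with >= 3 positive probes and an
# enrichment p-value < 0.01 would be flagged for further investigation.
```

With these toy numbers, the expected count of positive probes for a 40-probe species is only 0.8 (40 × 1,200 / 60,000), so observing 18 positives yields a vanishingly small p-value, exactly the behaviour the multi-probe design relies on.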
Sensitivity test for EOPM

Molecular detection methods, including pan-microbial microarrays and unbiased high-throughput sequencing, traditionally rely on random amplification, and so have lower sensitivity than specific PCR. Clinical samples usually contain host nucleic acid, which may interfere with the sensitivity of microarray analysis. To determine the sensitivity of EOPM, we spiked viral RNA into human RNA, mimicking actual clinical samples. Enterovirus 71 (EV71), a single-stranded RNA virus, was cultured with Vero cells. The RNA from the culture supernatant was extracted and quantified using a qRT-PCR standard curve. Then, 10³–10⁸ EV71 molecules were spiked into RNA extracted from 10¹² human HeLa cells. The RNA was then randomly amplified and hybridized with the EOPM microarray as described above. In parallel, RT-PCR using a pair of EV71-specific primers was performed to compare the sensitivity of the two methods.

EOPM verification using known pathogens and clinical sample tests

Known pathogens, including cell-cultured viral reference strains, cultured bacteria, and fungi, were used to verify EOPM performance. Clinical samples were all from patients with obvious infectious disease symptoms who had obtained negative results with routine diagnostic methods. Following detection by EOPM, the screened pathogens that cause clinical symptoms similar to those of the patients from whom the samples were collected were PCR amplified with species- or genus-specific primers. PCR-positive samples were then sequenced. This study obtained ethical approval from the Ethical Committee of Guangdong Women and Children's Hospital. Informed consent was not required because clinical samples were screened for potential pathogens in vitro. The original microarray data have been submitted to the Gene Expression Omnibus under platform accession number GPL16935.

Results

Evaluation of EOPM

High-throughput microarrays with long oligonucleotide probes, such as the ViroChip and GreeneChip systems, have proved effective for pathogen screening [9, 11, 17, 18]. The EOPM technique described here also uses long oligonucleotide probes and random PCR amplification.
Several known viruses, bacteria, and fungi were used to evaluate the accuracy of EOPM. Dengue virus was used as a test subject to determine whether the EOPM method could detect the virus from an infected C6/36 cell culture (Tables 1, 2, and 3). As shown in Table 1, among the 15 top-ranked probes, eight targeted dengue virus specifically, while a further four probes targeted related flaviviruses such as Phnom Penh bat virus, Tembusu virus, and deer tick virus. We also carried out enrichment analysis of the positive probes at both the species and genus levels. Notably, only dengue virus or closely related species showed significant enrichment (Table 2), and only Flavivirus showed significant enrichment at the genus level (adjusted p-value < 0.0001) (Table 3). Both results were consistent with the known cultured dengue virus. Following a similar procedure, we successfully tested EOPM on a panel of other known pathogens, including an RNA virus, a DNA virus, bacteria, fungi, and parasites (listed in Table 4). In terms of detection sensitivity, EOPM could reliably detect EV71 when more than 10⁶ copies of EV71 RNA were mixed into 10¹² copies of HeLa cell RNA, while 10³ copies of spiked viral RNA could be detected in 10¹² copies of host RNA by specific RT-PCR followed by agarose gel electrophoresis. Therefore, we inferred that, in the presence of a high level of background nucleic acid, the detection sensitivity of random primer amplification was three orders of magnitude lower than that of specific primer amplification.

Clinical case 1: identification of adenovirus responsible for an outbreak of flu-like infections

Most adenovirus infections cause symptoms similar to those induced by some respiratory viruses and mycoplasmas, making it difficult to identify the pathogens by traditional clinical diagnostic procedures. In February of 2012, an outbreak of disease caused by an unknown pathogen occurred in Baoding City, Hebei Province. Patients presented with obvious infectious symptoms, such as high fever, coughing, throat congestion, lung tissue necrosis, and bronchopneumonia. Initially, influenza virus, SARS virus, and mycoplasma, known causes of these clinical symptoms, were suspected, but PCR tests were negative for all three pathogens. To rapidly identify the unknown pathogen, EOPM chips were used to screen for the possible pathogens responsible for these infections. Nucleic acid was extracted from patient serum samples for EOPM analysis. Nucleic acid from normal serum was used as a control. One scanned microarray image is shown in Figure 1, and the enrichment results for the top-ranked pathogens at the species and genus levels are listed in Tables 5 and 6, respectively. Adenoviruses were found to be significantly enriched, as were the top five species results (Tables 5 and 6). We further verified adenovirus as the causative agent by PCR targeted to a conserved region of the Mastadenovirus genomic sequence (see Additional file 1).

Clinical case 2: cardiovirus discovery in a hand-foot-and-mouth juvenile patient

Hand-foot-and-mouth disease (HFMD) is a common viral illness that predominantly affects infants and children younger than 5 years old. HFMD epidemics usually occur in China in late spring and early summer. The pathogens responsible for HFMD are mainly coxsackie A16 virus (CVA16) and enterovirus 71 (EV71), both of which belong to the Enterovirus genus. Routine HFMD clinical diagnosis includes three qRT-PCR kits targeting the Enterovirus genus, CVA16, and EV71, respectively.
In May 2010, many children presented with clinical symptoms of hand-foot-and-mouth disease at Guangdong Women and Children's Hospital in southern China. Although most patients were diagnosed with CVA16 or EV71 infections by qRT-PCR analysis, some were negative for Enterovirus. To identify the pathogens responsible in the Enterovirus-negative HFMD children, samples from each of these patients were subjected to EOPM analysis. About 1 mg of a feces sample was used to extract RNA with a TIANamp Virus DNA/RNA Kit, and the RNA was labeled with Cy5 following random amplification. In parallel, RNA extracted from normal feces was labeled with Cy3 and used as a control. Enrichment analysis at the species level identified Theiler's-like cardiovirus as the most probable pathogen responsible for the HFMD infection in these patients (Table 7). Enrichment analysis at the genus level revealed Cardiovirus as the top match, showing significant enrichment (Table 8). The genera Cardiovirus and Enterovirus belong to the family Picornaviridae, a family of positive-sense single-stranded RNA viruses. A few intestinal viruses of the Picornaviridae family, besides the enterovirus strains coxsackievirus A and enterovirus 71, are also known to potentially cause HFMD syndrome. We therefore hypothesized that the Enterovirus-negative HFMD children were actually infected with Cardiovirus, the sister genus of Enterovirus. To confirm the presence of cardiovirus in patient feces, two specific nested RT-PCR primer pairs proposed in a previous report were used to amplify the RNA extracted from the Enterovirus-negative patients. The samples were cardiovirus-positive (see Additional file 2). The PCR products were further verified by DNA sequencing: 708 bp of the PCR amplicon shared 99% nucleotide identity with the human TMEV-like cardiovirus isolate UC2 5' UTR. The raw microarray data for other symptom-causing pathogens identified by EOPM in the peripheral blood of infected patients, such as streptococci and mycoplasmas, were also submitted to the GEO database.

Development of software with a user-friendly interface to support the EOPM application

The primary purpose of developing EOPM was to facilitate the rapid identification of unknown pathogens in regional surveillance centers in China when incidents caused by emergent pathogens occur. In applying microarray technology, data analysis is a significant obstacle for users without specialized knowledge of bioinformatics analysis of microarray data and nucleic acid sequences. We therefore implemented the statistical enrichment analysis behind a user-friendly interface (Figure 2). The software supports a large-scale search of probe hits against a comprehensive microbial sequence database. We believe this software will greatly facilitate the installation of the EOPM platform in laboratories across China's infectious disease surveillance system. The software can be accessed at http://www.genestone.com.cn:8080/microbial/index.jsp.

Discussion

Since the first application of a high-throughput, rapid, and unbiased microarray for detecting viral pathogens in 2002, several pan-microbial microarray platforms with different degrees of coverage of various pathogens have been established. These microarray platforms use long oligonucleotide probes (60–70-mer) and random PCR amplification, and have successfully identified unexpected pathogens in infectious disease outbreaks, even discovering novel viruses with homology to known species [8, 11].
In this study, we constructed a high-density EOPM array for screening all known viruses, bacteria, fungi, and parasites that can act as vertebrate pathogens. Based on the sequence data available for vertebrate pathogens, we designed 60,000 60-mer oligonucleotide probes targeting 2,554 vertebrate virus species (spanning 151 genera and 36 families), 124 bacterial genera (53 families), 38 fungal genera (17 families), and 47 parasite genera (24 families). The 60-mer oligonucleotide probes can cross-hybridize with similar but non-identical sequences, allowing the detection of novel pathogens related to known species. The EOPM probes for bacteria, fungi, and parasites target 16S rRNA or 18S rRNA sequences. Because rRNA sequences are relatively conserved within a genus or family, EOPM distinguishes bacteria, fungi, and parasites at the genus or family level, which has already been applied successfully in a clinical setting for confirmation and treatment.

In the sensitivity study of EOPM, we designed experiments to compare the sensitivity of random amplification and specific amplification, without considering the effect of other factors, such as clinical sample collection and nucleic acid extraction, on the sensitivity of EOPM. EOPM showed 10³-fold lower sensitivity than specific target PCR amplification, consistent with a previous report. The lower sensitivity is due to the random PCR amplification adopted in EOPM sample preparation, which is not as efficient as specific PCR for amplifying a particular species. Despite its lower sensitivity than target-specific PCR, the EOPM platform is sufficiently sensitive to identify the pathogens causing clinical symptoms in infectious outbreaks, in which symptom-causing pathogens should be highly enriched. The sensitivity can be further improved in practice if acellular samples with minimal host nucleic acid contamination, such as serum and throat swabs, are used for pathogen screening. For example, Greninger and colleagues used the ViroChip microarray to identify influenza A/H1N1 in nasal swab samples with a sensitivity comparable to RT-PCR.

In sample preparation for the EOPM method, all RNA and DNA extracted from a sample are first subjected to reverse transcription. RNA viruses are converted into cDNA, while DNA viruses retain their DNA form during the reverse transcription reaction; the DNA, including the reverse-transcribed cDNA and the original viral DNA, is then converted to double-stranded DNA for the subsequent random amplification procedure. Therefore, EOPM can detect both RNA viruses and DNA viruses with the same standard protocol. For bacteria, fungi, and parasites, EOPM detects the 16S rRNA or 18S rRNA copies encoded by rRNA genes located in the genomic DNA. Because rRNA genes are highly transcribed, detecting rRNA molecules instead of rRNA genes should achieve higher sensitivity.

With the dual-color strategy used by the EOPM method, one normal sample without infectious symptoms is always analyzed in parallel. Even so, the "normal" sample may possess its own clinical characteristics. For example, we have found Torque teno virus and human endogenous retroviruses in some normal blood samples. These viruses do not cause obvious clinical symptoms and should not interfere with the aim of EOPM analysis, which is to determine the possible pathogens causing the symptoms in the test patients. EOPM data analysis consists of two steps.
First, we screen for microbes significantly enriched in the target sample relative to the normal sample on the dual-color chip. Second, the predicted microbes identified in the first step are compared against a database compiled from the normal population mentioned above, to eliminate background microbes that also exist in normal samples without infectious symptoms.

Pan-microbial screening microarrays differ from nucleic acid-based microbial diagnostic technologies such as qPCR and low-density microarrays. These diagnostic technologies aim to identify only one or a few types of microbes using target-specific probes, which must be validated with specific positive and non-specific samples. Moreover, diagnostic low-density microarrays usually use short oligonucleotides of about 20 nucleotides as specific probes, similar to TaqMan probes in qPCR technology [21, 22]. The very limited number of short probes/primers targeting a pathogen can fail to detect sequences carrying mutations in the regions targeted by the probes/primers. In contrast, over a dozen long oligonucleotide probes were designed for each pathogen in the EOPM method, allowing reliable identification of a pathogen based on a statistical enrichment analysis of the probe group rather than any individual probe. Moreover, EOPM can effectively narrow down the potential pathogens and even identify novel pathogens in complex clinical infection situations.

In addition to the pathogen sequences, 1,160 host immune response genes were also included in the EOPM database. During EOPM analysis of clinical samples, the immune response genes show dramatic up- or down-regulation in the target samples compared with the normal reference (data not shown). So far we have not found any reliable relationships between the immune response genes and the pathogen categories. The overall clinical information for patients and normal controls should also be analyzed comprehensively, because human immune-related genes in peripheral blood show dramatic differences in expression even in a normal population, correlated with sex, age, and sampling time, among other factors [23, 24].

Until now, the genome-wide technologies available to detect unknown pathogens in infectious outbreaks have consisted primarily of microarrays and NGS. Although NGS can provide the most in-depth, unbiased information and can reveal completely novel pathogens, it is time-consuming when a sample contains hundreds of microbial species requiring comprehensive data processing; NGS therefore cannot meet the short turnaround required for infectious disease control. By contrast, the most complicated step in EOPM technology is probe design, which can be undertaken by a core bioinformatics team during the development phase. Once probe design is complete and the whole microarray procedure is optimized as a standard procedure, pathogen screening results can be interpreted in less than 28 hours. EOPM is therefore better suited to applications requiring detection of unknown pathogens during infectious outbreaks. In addition, with the rapid increase in microbial metagenomic sequence data produced by NGS, the EOPM probes can easily be upgraded, and the EOPM version can be updated readily because in situ synthesis has replaced spotting technology in microarray fabrication.

Conclusions

In conclusion, EOPM is a powerful pan-microbial detection microarray platform that can detect almost all known pathogens and related species.
In several clinical test applications, we found that EOPM technology is sensitive enough to detect the pathogens causing evident clinical symptoms. EOPM is designed for easy operation, with detection software offering a user-friendly interface, facilitating its application in molecular laboratories. Infectious disease epidemics emerge frequently in China, and we believe that the use of EOPM at major pathogen surveillance sites across the country could play an important role in infectious disease control in China.

References

1. Cook IG, Dummer TJ: Changing health in China: re-evaluating the epidemiological transition model. Health Policy. 2004, 67 (3): 329-343. 10.1016/j.healthpol.2003.07.005.
2. Guan Y, Zheng BJ, He YQ, Liu XL, Zhuang ZX, Cheung CL, Luo SW, Li PH, Zhang LJ, Guan YJ, et al: Isolation and characterization of viruses related to the SARS coronavirus from animals in southern China. Science. 2003, 302 (5643): 276-278. 10.1126/science.1087139.
3. Zhang Y, Tan XJ, Wang HY, Yan DM, Zhu SL, Wang DY, Ji F, Wang XJ, Gao YJ, Chen L, et al: An outbreak of hand, foot, and mouth disease associated with subgenotype C4 of human enterovirus 71 in Shandong, China. J Clin Virol. 2009, 44 (4): 262-267. 10.1016/j.jcv.2009.02.002.
4. Zhou J, Sun W, Wang J, Guo J, Yin W, Wu N, Li L, Yan Y, Liao M, Huang Y, et al: Characterization of the H5N1 highly pathogenic avian influenza virus derived from wild pikas in China. J Virol. 2009, 83 (17): 8957-8964. 10.1128/JVI.00793-09.
5. Liu D, Wang X, Pan F, Xu Y, Yang P, Rao K: Web-based infectious disease reporting using XML forms. Int J Med Inform. 2008, 77 (9): 630-640. 10.1016/j.ijmedinf.2007.10.011.
6. Nakamura S, Yang CS, Sakon N, Ueda M, Tougan T, Yamashita A, Goto N, Takahashi K, Yasunaga T, Ikuta K, et al: Direct metagenomic detection of viral pathogens in nasal and fecal specimens using an unbiased high-throughput sequencing approach. PLoS One. 2009, 4 (1): e4219. 10.1371/journal.pone.0004219.
7. Wang D, Coscoy L, Zylberberg M, Avila PC, Boushey HA, Ganem D, DeRisi JL: Microarray-based detection and genotyping of viral pathogens. Proc Natl Acad Sci USA. 2002, 99 (24): 15687-15692. 10.1073/pnas.242579699.
8. Wang D, Urisman A, Liu YT, Springer M, Ksiazek TG, Erdman DD, Mardis ER, Hickenbotham M, Magrini V, Eldred J, et al: Viral discovery and sequence recovery using DNA microarrays. PLoS Biol. 2003, 1 (2): E2.
9. Chiu CY, Alizadeh AA, Rouskin S, Merker JD, Yeh E, Yagi S, Schnurr D, Patterson BK, Ganem D, DeRisi JL: Diagnosis of a critical respiratory illness caused by human metapneumovirus by use of a pan-virus microarray. J Clin Microbiol. 2007, 45 (7): 2340-2343. 10.1128/JCM.00364-07.
10. Greninger AL, Chen EC, Sittler T, Scheinerman A, Roubinian N, Yu G, Kim E, Pillai DR, Guyard C, Mazzulli T, et al: A metagenomic analysis of pandemic influenza A (2009 H1N1) infection in patients from North America. PLoS One. 2010, 5 (10): e13381. 10.1371/journal.pone.0013381.
11. Chen EC, Yagi S, Kelly KR, Mendoza SP, Tarara RP, Canfield DR, Maninger N, Rosenthal A, Spinner A, Bales KL, et al: Cross-species transmission of a novel adenovirus associated with a fulminant pneumonia outbreak in a new world monkey colony. PLoS Pathog. 2011, 7 (7): e1002155. 10.1371/journal.ppat.1002155.
12. Gardner SN, Jaing CJ, McLoughlin KS, Slezak TR: A microbial detection array (MDA) for viral and bacterial detection. BMC Genomics. 2010, 11: 668. 10.1186/1471-2164-11-668.
13. Palacios G, Quan PL, Jabado OJ, Conlan S, Hirschberg DL, Liu Y, Zhai J, Renwick N, Hui J, Hegyi H, et al: Panmicrobial oligonucleotide array for diagnosis of infectious diseases. Emerg Infect Dis. 2007, 13 (1): 73-81. 10.3201/eid1301.060837.
14. Finn RD, Mistry J, Schuster-Bockler B, Griffiths-Jones S, Hollich V, Lassmann T, Moxon S, Marshall M, Khanna A, Durbin R: Pfam: clans, web tools and services. Nucleic Acids Res. 2006, 34: 247-251. 10.1093/nar/gkj149.
15. Benjamini Y, Drai D, Elmer G, Kafkafi N, Golani I: Controlling the false discovery rate in behavior genetics research. Behav Brain Res. 2001, 125 (1-2): 279-284.
16. Lipkin WI, Palacios G, Briese T: Diagnostics and discovery in viral hemorrhagic fevers. Ann N Y Acad Sci. 2009, 1171 (Suppl 1): E6-E11.
17. Quan PL, Palacios G, Jabado OJ, Conlan S, Hirschberg DL, Pozo F, Jack PJ, Cisterna D, Renwick N, Hui J, et al: Detection of respiratory viruses and subtype identification of influenza A viruses by GreeneChipResp oligonucleotide microarray. J Clin Microbiol. 2007, 45 (8): 2359-2364. 10.1128/JCM.00737-07.
18. Chiu CY, Rouskin S, Koshy A, Urisman A, Fischer K, Yagi S, Schnurr D, Eckburg PB, Tompkins LS, Blackburn BG, et al: Microarray detection of human parainfluenzavirus 4 infection associated with respiratory failure in an immunocompetent adult. Clin Infect Dis. 2006, 43 (8): e71-e76. 10.1086/507896.
19. Drexler JF, Luna LK, Stocker A, Almeida PS, Ribeiro TC, Petersen N, Herzog P, Pedroso C, Huppertz HI, Ribeiro Hda C, et al: Circulation of 3 lineages of a novel Saffold cardiovirus in humans. Emerg Infect Dis. 2008, 14 (9): 1398-1405. 10.3201/eid1409.080570.
20. Vora GJ, Meador CE, Stenger DA, Andreadis JD: Nucleic acid amplification strategies for DNA microarray-based pathogen detection. Appl Environ Microbiol. 2004, 70 (5): 3047-3054. 10.1128/AEM.70.5.3047-3054.2004.
21. Huang TS, Liu YC, Bair CH, Sy CL, Chen YS, Tu HZ, Chen BC: Detection of M. tuberculosis using DNA chips combined with an image analysis system. Int J Tuberc Lung Dis. 2008, 12: 33-38.
22. Zhu L, Jiang G, Wang S, Wang C, Li Q, Yu H, Zhou Y, Zhao B, Huang H, Xing W, et al: Biochip system for rapid and accurate identification of mycobacterial species from isolates and sputum. J Clin Microbiol. 2010, 48 (10): 3654-3660. 10.1128/JCM.00158-10.
23. Radich JP, Mao M, Stepaniants S, Biery M, Castle J, Ward T, Schimmack G, Kobayashi S, Carleton M, Lampe J, et al: Individual-specific variation of gene expression in peripheral blood leukocytes. Genomics. 2004, 83 (6): 980-988. 10.1016/j.ygeno.2003.12.013.
24. Whitney AR, Diehn M, Popper SJ, Alizadeh AA, Boldrick JC, Relman DA, Brown PO: Individuality and variation in gene expression patterns in human blood. Proc Natl Acad Sci USA. 2003, 100 (4): 1896-1901. 10.1073/pnas.252784499.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/13/437/prepub

Acknowledgments

We gratefully acknowledge Professor Taijiao Jiang at the Institute of Biophysics, Chinese Academy of Sciences, for manuscript review. This study was supported by the Chinese State Key Project Specialized for Infectious Disease (2013ZX10004101).

Additional information

Competing interests

The authors have patents pending related to the probe design methods and the array data statistical enrichment methods. In addition, a software copyright is pending for the pathogen interpretation software.

Authors' contributions

LZ and YJ conceived the study and analyzed the data. LZ drafted the manuscript.
WH and YY conducted the microarray experiments, PCR, and sequencing confirmation. XZ and HL designed the probes and software. XZ, AY, CZ, and ZH participated in sample collection and array data analysis. All authors read and approved the final manuscript. Weiwei Huang and Yinhui Yang contributed equally to this work.

Electronic supplementary material

Additional file 1: Two pairs of specific primers for amplifying adenovirus, and the sequence of PCR products from clinical case 1. (DOCX 13 KB)

Additional file 2: Sequence of nested RT-PCR primers for cardiovirus, and the PCR product sequence from clinical case 2. (DOCX 14 KB)

Rights and permissions

Open Access. This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Huang, W., Yang, Y., Zhang, X. et al. An easy operating pathogen microarray (EOPM) platform for rapid screening of vertebrate pathogens. BMC Infect Dis 13, 437 (2013). https://doi.org/10.1186/1471-2334-13-437
https://bmcinfectdis.biomedcentral.com/articles/10.1186/1471-2334-13-437
Predictions of total bed material load for the investigation of river sedimentation using selected empirical equations were made based on laboratory data. Data were obtained through observations made during experiments in the hydraulic laboratory using a mobile flume and a visualization tank. The experiments are categorized under two alignments, namely the straight channel and the fully curved channel, with different discharges. The experimental data cover flow discharges of 1.2 l/s, 1.6 l/s, 2.15 l/s and 4.1 l/s, an average flow depth of 0.08 m, and a median sediment size of 0.25 mm. The equations used in the evaluation are those of Ackers and White, DuBoys, Shields, Schoklitsch, and Meyer-Peter. The selection was based on the performance of these equations as reported by past investigators, who showed good agreement between observed and calculated transport rates. The accuracy and reliability of these formulas are verified.
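
For a sense of the kind of relation being evaluated, DuBoys' excess-shear formula is one of the simplest of the listed equations: q_b = χ τ0 (τ0 − τc). The sketch below is illustrative only and is not the evaluation procedure used in the study; the coefficient χ and critical shear stress τc are placeholder values, and the depth-slope product is only one common way to estimate bed shear stress.

# Illustrative sketch of DuBoys' excess-shear bed-load relation,
# q_b = chi * tau0 * (tau0 - tau_c). The coefficient chi, the critical
# shear stress tau_c, and the channel slope are assumed placeholder values.
RHO_W = 1000.0   # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def bed_shear_stress(depth_m, slope):
    """Depth-slope estimate of bed shear stress: tau0 = rho * g * h * S (N/m^2)."""
    return RHO_W * G * depth_m * slope

def duboys_bedload(tau0, tau_c, chi):
    """Bed-load transport rate per unit width; zero below the threshold tau_c."""
    return chi * tau0 * (tau0 - tau_c) if tau0 > tau_c else 0.0

# Hypothetical flume conditions: 0.08 m flow depth, assumed slope of 0.002
tau0 = bed_shear_stress(0.08, 0.002)
print(duboys_bedload(tau0, tau_c=0.19, chi=3.2e-3))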
http://utpedia.utp.edu.my/754/
Mathematical modelling has become an industry of great proportions. As in the wake of every big industry, there is some need for ecological concern. One reason why mathematical modelling is so popular, and is spreading to every corner of science, is the great prestige which is attached to mathematics in almost every academic community. This prestige has its roots in the overwhelmingly successful application of mathematics in physics. Scientists know of this success and many of them look at it with deep admiration and respect, considering it a worthy ideal to strive towards in practically all scientific endeavour.

Modern physics possesses a rare combination of a very high degree of sophistication on both the theoretical and the experimental side. On the one hand the theory builds on a highly advanced experimental basis. On the other hand, the experiments may in their turn stand on the shoulders of an impressive mathematical theory - a theory that not only makes use of plain calculus and differential equations, but also of a host of other and more modern mathematical tools.

It is not surprising that some social scientists might develop an inferiority complex. In comparison, most of the papers that are written within the social sciences look like child's play. For instance, as compared to physics, papers in sociology often take on a conversational and literary form, using predominantly everyday language and common sense. Somehow, we tend to be less impressed by the kind of science which appears to be within the reach of the well-educated layman, in contrast to a science which requires years of study merely to understand its language. It is understandable that social scientists might be tempted to imitate physics by using more sophisticated mathematical models. The hope is that this will bring about scientific success. But it should also be pointed out that in many academic quarters there is no better way to impress or silence a colleague than to refer to some deep mathematical theorem.

Enthusiasm (if not craze) for mathematical modelling is perhaps farthest developed within modern economics. In a letter to Science (1983), the distinguished economics scholar Wassily Leontief launched a fierce attack on mathematical modelling in academic economics: "Not having been subjected from the outset to the harsh discipline of systematic fact-finding, traditionally imposed on and accepted by their colleagues in the natural and historical sciences, economists developed a nearly irresistible predilection for deductive reasoning. As a matter of fact, many entered the field after specializing in pure or applied mathematics. Page after page of professional economic journals are filled with mathematical formulas leading the reader from sets of more or less plausible but entirely arbitrary assumptions to precisely stated but irrelevant theoretical conclusions. ... Year after year economic theorists continue to produce scores of mathematical models and to explore in great detail their formal properties; and the econometricians fit algebraic functions of all possible shapes to essentially the same sets of data without being able to advance, in any perceptible way, a systematic understanding of the structure and the operations of a real economic system."
http://www-history.mcs.st-andrews.ac.uk/Extras/Aubert_modelling.html
World Heritage Sites in India list: Everything you need to know

After UNESCO added Dholavira, Gujarat to the World Heritage list, the total number of World Heritage Sites in India has risen to 40. Here's the list of World Heritage Sites in India by UNESCO. So in today's blog, we are going to see the list.

The Story

On Tuesday, July 27, UNESCO (United Nations Educational, Scientific and Cultural Organization) declared the ancient city of Dholavira a World Heritage Site. Dholavira is the southern center of the Harappan Civilization in India. Following the declaration, UNESCO noted that the multi-layered defensive water management system in Dholavira, its extensive use of stone in construction, and its special burial structures set the ancient Harappan city apart from other cultural sites in India as well as in the world. "The ancient city of Dholavira is one of the most remarkable and well-preserved urban settlements in South Asia dating from the 3rd to mid-2nd millennium BCE (Before Common Era)," the UN cultural agency said in a statement. "Absolutely delighted by this news. Dholavira was an important urban center and is one of our most important linkages with our past. It is a must-visit, especially for those interested in history, culture, and archaeology," Prime Minister Narendra Modi tweeted while sharing the news. "Today is a proud day for India, especially for the people of #Gujarat. Since 2014, India has added 10 new World Heritage sites – one-fourth of our total sites. This shows PM @narendramodi's steadfast commitment to promoting Indian culture, heritage and the Indian way of life," Union Minister for Culture G Kishan Reddy tweeted. With Dholavira, UNESCO also inscribed Telangana's iconic Kakatiya Rudreshwara (Ramappa) Temple on the World Heritage List.
The UN agency said that the two newly inscribed World Heritage Sites from India offer "great insight into the knowledge and ways of life of earlier societies, customs, and communities."

The 40 World Heritage Sites in India

• Dholavira, Gujarat
• Ramappa Temple, Telangana
• Taj Mahal, Agra
• Khajuraho, Madhya Pradesh
• Hampi, Karnataka
• Ajanta Caves, Maharashtra
• Ellora Caves, Maharashtra
• Bodh Gaya, Bihar
• Sun Temple, Konark, Odisha
• Red Fort Complex, Delhi
• Buddhist monuments at Sanchi, Madhya Pradesh
• Chola Temples, Tamil Nadu
• Kaziranga Wild Life Sanctuary, Assam
• Group of Monuments at Mahabalipuram, Tamil Nadu
• Sundarbans National Park, West Bengal
• Humayun's Tomb, New Delhi
• Jantar Mantar, Jaipur, Rajasthan
• Agra Fort, Uttar Pradesh
• Group of Monuments at Pattadakal, Karnataka
• Elephanta Caves, Maharashtra
• Mountain Railways of India
• Nalanda Mahavihara (Nalanda University), Bihar
• Chhatrapati Shivaji Maharaj Terminus (formerly Victoria Terminus), Maharashtra
• Qutub Minar and its Monuments, New Delhi
• Champaner-Pavagadh Archaeological Park, Gujarat
• Great Himalayan National Park, Himachal Pradesh
• Hill Forts of Rajasthan
• Churches and Convents of Goa
• Rock Shelters of Bhimbetka, Madhya Pradesh
• Manas Wild Life Sanctuary, Assam
• Fatehpur Sikri, Uttar Pradesh
• Rani Ki Vav, Patan, Gujarat
• Keoladeo National Park, Bharatpur, Rajasthan
• Nanda Devi and Valley of Flowers National Parks, Uttarakhand
• The Western Ghats
• Kanchenjunga National Park, Sikkim
• Capitol Complex, Chandigarh
• The Historic City of Ahmedabad
• The Victorian and Art Deco Ensemble of Mumbai
• The Pink City – Jaipur

So that's all for today about the list of World Heritage Sites in India by UNESCO. If you like the blog, share it with your friends and be proud. We will catch you in the next one; till then, take care of yourself and your family.
https://1peecentquotes.com/world-heritage-site-in-india-list/
The Space Telescope Science Institute (STScI) is a multi-mission science and flight operations center for NASA's flagship observatories. Our world-class astronomical research center is based on the Johns Hopkins University Homewood campus in Baltimore, Maryland. Visit our website to learn more about our missions.

This position can support hybrid work. Candidates must reside in or be willing to relocate to our local market (MD, DE, VA, PA, DC & WV).

STScI is seeking a Collection Management Librarian to create, maintain, and develop balanced electronic and print collections that meet the needs of the STScI community and support the missions of the institute and the wider astronomical community.

Responsibilities include, in coordination with STScI Library staff:
- Makes digital and print purchases for relevant subjects; tracks library expenditures related to those specific budget lines.
- Performs original and copy cataloging of print and online materials using the Library of Congress (LC) classification scheme and subject headings.
- Referring to collection development guidelines, assesses low-use materials for retention or removal and identifies rare materials for transfer to other libraries.
- Uses judgment and takes initiative in cataloging, rule interpretation, and selection and removal decisions.
- Maintains accuracy of the STScI Library catalog by identifying and rectifying problematic records or collections; deletes records/holdings for material being removed; verifies the accurate loading of e-resource records; enhances records for discoverability.
- Sustains accurate and timely journal access in print and electronic formats; checks in print journals and applies journal-specific retention policies.
- Performs Interlibrary Loan functions.
- Collaborates with other library staff to provide content for library promotions and to understand the impact of marketing and usage statistics on collections.
- Carries out day-to-day functions of the library, including routine circulation and reshelving duties, working with STScI staff and visitors to find requested resources, and providing reference services in a supportive role as necessary; coordinates schedules with other library staff.
- Assists with and may lead additional library projects as needed, such as submitting and updating records in the Astrophysics Data System (ADS), bulk catalog enhancements, or inventory.
- Participates in STScI and professional trainings as appropriate.
- Follows developments and future trends for bibliographic standards and frameworks, e.g., linked data models and BIBFRAME; informs library staff of major changes in the greater metadata world.
- Maintains membership in one or more professional organizations such as the American Library Association (ALA) or the American Astronomical Society (AAS).

Works under the general direction of the Principal Librarian with regularly scheduled check-ins. This position is non-supervisory but may direct or lead the work of short-term interns.

Qualifications:

Technical Skills
- Knowledge of RDA (formerly AACR2) standards is required. Must understand the MARC21 format and its implementation by OCLC and the Library of Congress and apply them in a local setting.
- Intermediate knowledge of Integrated Library System (ILS) or Library Services Platform (LSP) functions is required.
- Experience performing original and copy cataloging of multiple formats is required.
- Exposure to the Library of Congress classification scheme, experience working with scientific materials, or knowledge of astronomical materials is desirable but not required.
- Demonstrated experience in monograph acquisitions, collection assessment, de-acquisition, or serials management.

Abilities
- Routinely demonstrates skill in verbal and written communication. Communicates effectively through multiple channels (chat, phone, email) in a hybrid work environment.
- Routinely demonstrates technical proficiency and initiative. Uses judgment and asks for assistance as appropriate.
- Demonstrated ability to adapt to changing tools, methodologies, and requirements in information delivery.
- Ability to collaborate with a diverse group of technical, scientific, and non-technical personnel within the institute as well as with external colleagues and vendors.

Education/Experience:
An ALA-accredited Master's degree in Library/Information Science. A minimum of 2 years' experience in a library or information center. Prior experience in an academic, federal, research, or special library is desirable but not required.

This position can support hybrid work. Candidates must reside in or be willing to relocate to our local market (MD, DE, VA, PA, DC & WV).

The starting position and salary are commensurate with education and experience. We offer an excellent and generous benefits package.

TO APPLY: Share your experience by uploading a resume and completing an online application. Applications received by January 15, 2023 will receive full consideration. Applications received after this date will be considered until the position is filled.

Direct link: https://recruiting2.ultipro.com/SPA1004AURA/JobBoard/2451ecb6-af3b-4d72-805a-eeeca596042b/Opportunity/OpportunityDetail?opportunityId=3c39497c-e0a3-40ec-95e2-933332fcd562

Explore all career opportunities through our website at www.stsci.edu/opportunities

COVID-19 Working Protocols: https://outerspace.stsci.edu/display/CWP

STScI offers a flexible and welcoming workspace for all. STScI embraces the diversity of our staff as a strategic priority in creating a first-rate community. We reflect this deep dedication by strongly encouraging women, ethnic minorities, veterans, and disabled individuals to apply for these opportunities. Veterans, disabled individuals, or wounded warriors needing assistance with the employment process can contact us at [email protected]

EOE/AA/M/F/D/V.
https://jobs.code4lib.org/jobs/55117-assistant-librarian
Mind the gap. Because, according to Rotozaza, there is one. It’s the gap between the representational world and the real world, and this is what our lives are defined by. Rotozaza began in 1998 with Ant Hampton and Sam Britton’s ‘Bloke’, which marked the start of the company’s obsession “with the image of the ‘doubleself’”, the internal conflict of ideas and emotions which is applicable to all of us. In 1999 Silvia Mercuriali joined forces with Ant and it is Rotozaza’s continual attempt to break down traditional notions of ‘performer’ and ‘audience’ that has carved them out as something truly relevant. In many Rotozaza works the performer is an unrehearsed guest and, crucially, the audience is aware of this. The performer receives voice-over instructions which he/she must follow while the audience hears and sees their attempt to fulfil them. The performer’s vulnerability heightens the audience’s empathy for them and thus tampers with the convention of the representational world and real world being separate in theatre. The alteration of this fundamental characteristic of the theatre experience has created a basis of Rotozaza’s work, essentially putting performer and audience in the same position, where “everyone is discovering at the same time”. Not surprisingly, for Ant, “empathy is at the heart of theatre”. The latest work, ‘Five In The Morning’, takes this concept a step further than previous shows, ‘Doublethink’ and ‘Etiquette’. In this piece we wonder: are the performers’ actions being dictated in a hyper-real world? Or are they dictating their own actions in a representational world? Here the audience believes in a fictional world without knowing it, but the world created, ‘Aquaworld’, is essentially a very fragile one (sound familiar?). This constant questioning of realities, both internal and external, is at the heart of Rotozaza. Accordingly, we are forced to consider our own truth versus fiction complex and on what it is built; our role as an audience member; the concept of the “spectacle” itself; and the limitations of language. It is “participatory theatre” but what sets it apart is that you don’t have to suspend disbelief; you’ve just got to be there. You’ve just got to experience this live, knowing that it is, indeed, live. Does that make it real? For Ant and Silvia it doesn’t necessarily matter what the audience reaction is, as long as there is a reaction, and hopefully a strong one. Ant talks of a triangle with the internal self, the projected self/world and the real world at each corner. Rotozaza attempts to make links, or trains of thought, between these three effectively separate entities. But, just in case you’re not sure, remember: mind the gap. Rotozaza: 'Five in the morning' will be on at Shunt on Weds 13 & Thurs 14 Feb.
http://www.run-riot.com/articles/blogs/eleanor-ivory-weber-meets-rotozaza
RELATED APPLICATIONS

0001 This patent application claims the priority of provisional patent applications serial No. 60/232,644, filed Sep. 14, 2000, and serial No. 60/253,280, filed Nov. 27, 2000.

BACKGROUND OF THE INVENTION

BRIEF SUMMARY OF THE INVENTION

0002 In the TV-Anytime document authored by Peter van Beek of Sharp Laboratories of America and dated Aug. 23, 2000, a draft specification of descriptors and description schemes for Electronic Program Guides or Electronic Content Guides is proposed. The TV Anytime Forum is an association of organizations which seeks to develop specifications to enable audio-visual and other services based on mass-market, high-volume digital storage.

0003 The basic assumptions and design principles of the proposed specification of the Electronic Program Guide contained in the EPG specification document are:

0004 It is a layered design containing descriptions ranging from those that are core (e.g., identifying and locating content) to those that are basic (title, abstract, actors, etc.) and advanced (audiovisual titles, extensive textual summaries, etc.).

0005 Its capability to hold extensive information allows content guides to be arranged and presented to the user in multiple different ways, perhaps according to user preferences (e.g., a Robert Redford channel). Current ATSC-PSIP and DVB-SI specifications [1, 2] do not have, for example, a well-defined mechanism to specify actors or directors.

0006 Its design is consistent with the TVA framework, in which selection of program content based on program metadata is separated from localization of the program content. To facilitate this separation, the design includes a content reference identifier, with which the metadata is associated. Localization implies a mapping from the content reference identifier to a location. The design of the EPG description schemes allows a wide range of scenarios in this respect, including those with unidirectional and bidirectional links between the content provider and the user.

0007 It has been designed such that the structure can co-exist with ATSC-PSIP [1] or DVB-SI [2], when they are available, and in fact utilize the tuning and service information tables of these two specifications.

0008 The XML-based description scheme framework enables the electronic guide descriptions to co-exist with other advanced description schemes (e.g., those included in MPEG-7, such as the Summarization Description Schemes) in the very same framework. These advanced description schemes provide functionalities so that the user can consume the content in ways that fit his/her preferences, e.g., by consuming highlights of a program that are created on the basis of a preferred theme in the program, such as the goals in a soccer game.

0009 Its design extends the ATSC and DVB specifications to scenarios beyond TV broadcast, e.g., Internet streaming, Video on Demand, and Electronic Content Guides in a home setting where local content (e.g., on DVDs) is also included.

0010 The ProgramInformation Description Scheme (DS) contains the information related to a single audiovisual program, e.g. a TV program, that is necessary to build an Electronic Program Guide.

0011 Furthermore, the ProgramInformation DS as defined in the EPG specification document consists of four parts:

0012 Mapping from content identifier to locator;

0013 Basic program information;

0014 Extended program information;

0015 Program event information.
0016 The first element serves to map a content reference identifier to the location information of a program, effectively allowing localization. The basic program information consists of the most basic information needed to schedule a program, such as title and genre. The extended program information contains further useful information for describing a program textually and technically; this is useful for enhanced applications. The program event information further contains the tools to describe a particular program instance or program event. Multiple program events or instances may exist or occur for a single source program. For instance, a program may be broadcast on a particular channel on multiple occasions, at different times. Particular events, such as broadcast events, may differ from each other in their program attributes. For instance, the first showing of a program may be live, while later instances can be regarded as repeats. Another example is a case where a particular program is broadcast on different channels, one through a free channel and another through a pay-per-view service.

0017 It should be understood that the ProgramInformation DS serves as a structure to link all the pieces of information together. Various scenarios in different application environments exist in which not all the various parts of the ProgramInformation DS are linked together into one description, but in other cases they may be. For example, in some cases the localization information may be part of a separate description and may be obtained from sources other than the other program content metadata. In other cases, these parts may in fact be linked together in a single description. Also, different descriptions may share description parts through the use of identifiers and identifier references. Different parts of the proposed scheme may exist in standalone descriptions.

0018 Thus, the basic program information, the extended program information and the program event information each contain the appropriate content identifier(s), effectively linking the descriptors in each of these parts to a particular program. The overall ProgramInformation DS can be used to tie all the description parts together and, in certain cases, link them to a locator.

0019 The EPG specification document also contains the specification of the syntax and semantics of the proposed description schemes, as well as examples, as listed below.

0020 ProgramInformation DS (Name: Definition)

ProgramInformationType: A data type used to specify all information related to a single audiovisual program, e.g. a TV program, for inclusion in an Electronic Program Guide (EPG).
LocationInformation: Location information related to this program. This part of the description specifies where the program material can be found (both in space and time).
LocationInformationRef: Reference to a description with location information related to this program. Shall refer to the id of a LocationInformation element.
BasicInformation: Basic information related to this program. This part of the description specifies basic EPG program attributes.
BasicInformationRef: Reference to a description with basic information related to this program. Shall refer to the id of a BasicInformation element.
ExtendedInformation: Extended information related to this program. This part of the description specifies more detailed EPG program attributes.
ExtendedInformationRef: Reference to a description with extended information related to this program. Shall refer to the id of an ExtendedInformation element.
EventInformation: Event information related to this program. This part of the description specifies attributes related to specific instances of a program (e.g. corresponding to a particular broadcast event).
EventInformationRef: Reference to a description with event information related to this program. Shall refer to the id of an EventInformation element.
id: Description instance identifier.
tag: Description instance tag.

ProgramLocationType: A data type used to specify the location of a program, i.e. where the program material can be found. It effectively associates a content identifier with a location.
ContentReferenceID: Content ID that is used to refer to this program.
ProgramLocator: Locator of the program material.
id: Description instance identifier.
tag: Description instance tag.

ProgramBasicInformationType: A data type used to specify the basic information needed to include the program in a Program Guide.
ContentReferenceID: Content ID that is used to refer to this program.
ProgramIdentifier: Unique identifier of the program (e.g. UPID).
GroupRef: A reference to the group of programs that the program is part of (e.g. a TV series).
Title: Textual title of the program. The language in which the title is expressed is indicated by the xml:lang attribute. Multiple title descriptors may be included. The type of title (main, original or alternative) is indicated by the type attribute.
Version: Version of the program material.
EpisodeNumber: Episode number of the program, in case it is part of a series.
EpisodeTitle: Episode title of the program, in case it is part of a series.
SeriesTitle: Series title, in case the program is part of a series.
ParentalGuidance: Parental guidance or viewer discretion descriptor, with associated semantics: Country - code that indicates the country for which the parental guidance descriptor is defined; ParentalRatingScheme - denotes the specific scheme used for rating the input program; ParentalRatingValue - the actual rating of the program according to the rating scheme specified above; MinimumAge - the minimum recommended age for consumers of the program, in years.
Genre: The genre of the program content. Multiple genre descriptors may be included. The type of genre (main, sub or other) is indicated by the type attribute. For basic program information, it is expected that the type attribute will be set to main. The type other enables 3rd-party broadcasters to specify extra genre information.
Keywords: Keywords associated with the program content. Multiple keyword descriptors may be included. The type of keyword (any, main or sub) is indicated by the type attribute. For basic program information, it is expected that the type attribute will be set to any. The type any can be used for non-category keywords.
Abstract: Textual description of the program content. Multiple abstract descriptors of different lengths may be included. The number of words in the textual abstract is indicated by the nr attribute.
Creator: A creator of the program material. Multiple creator descriptors may be included. A creator may be an individual (such as an actor, director, producer, host, anchor, composer, narrator or others), a group of people, or an organization. The type or function of a creator is indicated by the Role descriptor.
Character: A fictional character that is part of the content or that specifies a role played by an actor.
Multiple character descriptors may be included. This descriptor includes the name of the character, and either (i) the name of, or (ii) a reference to, the actor that performs the role of that character.
ProductionYear: Year of production of the program.
ProductionCountry: Country of production of the program.
CreationLocation: Spatial location of the content creation.
CreationDate: Time and date of the content creation.
Language: The language of the spoken content of the program. Multiple language descriptors may be included. The language specified by the descriptor (main, original, alternative) is indicated by the type attribute. The descriptor original is used to describe the original language of the program when the program is dubbed.
Dubbed: A flag indicating whether the program audio was dubbed.
Subtitled: A flag indicating whether the program includes subtitles.
SubtitleLanguage: If present, the language of the subtitles. Multiple subtitle-language descriptors may be included.
CCService: References the closed-caption services for this program.
AudioSigning: A flag indicating whether the program includes signing captions.
TitleImage: Locates image media representing the program content, e.g. a thumbnail image or logo.
RelatedMaterialURL: Reference to media that is related to the program content. Multiple related-material link descriptors may be included.
AspectRatio: Aspect ratio of the visual program material, represented by the two attributes width and height (e.g. 4:3, 16:9, 2.35:1).
Color: Flag indicating whether the visual program material is in color or not.
HD: Flag indicating whether the visual program material is in high-definition format or not.
Stereo: Flag indicating whether the audio program material is in stereo or not.
AudioChannels: The number of audio channels of the program.
ExtensionDescriptor: An abstract descriptor that provides a generic template for future definition of new descriptors as they are deemed necessary.
id: Description instance identifier.
tag: Description instance tag.

ProgramExtendedInformationType: A data type used to specify the extended information associated with a program included in a Program Guide.
ContentReferenceID: Content ID that is used to refer to this program.
ProgramIdentifier: Unique identifier of the program (e.g. UPID).
Genre: Specifies the genre of the program. Multiple genre descriptors may be included. The type of genre (main, sub or other) is indicated by the type attribute. For extended program information, it is expected that the type attribute will be set to sub or other, to complement the genre specification provided in basic program information.
Keywords: Keywords associated with the program content. Multiple keyword descriptors may be included. The type of keyword (any, main or sub) is indicated by the type attribute. For extended program information, it is expected that the type attribute will be set to main or sub, to complement the keywords provided in basic program information.
VideoSystem: Denotes the video system in which the program data is broadcast (e.g. PAL, NTSC, SECAM).
VisualCodingFormat: Denotes the coding format of the input visual content (e.g. MPEG-1, JPEG2000).
FrameWidth: The width of the input images/frames in pixels.
FrameHeight: The height of the input images/frames in pixels.
FrameRate: The frame rate of the input video stream, in Hz.
Progressive: A flag that specifies whether the input video is in progressive or interlaced format.
AudioCodingFormat: Specifies the coding format of the input audio stream.
AudioSamplingRate: Specifies the sampling rate of the input audio stream, in Hz.
FileFormat: The file format or MIME type of the input AV content.
FileSize: The size of the AV media file in bytes.
BitRate: The bit rate of the AV content required for synchronous transmission, in bits/sec.
TitleVideo: Specifies a video segment or clip that will be used as or with the title sequence for the program.
TitleAudio: Specifies an audio segment or clip that will be used as or with the title sequence for the program.
ExtensionDescriptor: An abstract descriptor that provides a generic template for future definition of new descriptors as they are deemed necessary.
id: Description instance identifier.
tag: Description instance tag.

ProgramEventInformationType: A data type used to specify the information associated with every instance of a program.
ContentReferenceID: Content ID that is used to refer to this program.
ProgramIdentifier: Unique identifier of the program (e.g. UPID).
Duration: Duration of the program.
Repeat: Flag that specifies whether the program is a repeat of previously broadcast material.
Live: Flag that specifies whether the program is broadcast live.
FirstShowing: Flag that specifies whether the given instance is the first showing of the program.
LastShowing: Flag that specifies whether the given instance is the final showing of the program.
Encrypted: Flag that specifies whether the program is encrypted for restricted viewing.
PayPerView: Flag that specifies whether the program is pay-per-view or free of charge.
RightsService: Reference to individual services that provide the rights management information associated with the program.
ReBroadcastDate: Specifies the date when the program will be broadcast again.
ServiceProvider: Reference to the resources (web etc.) of the program service provider.
ParentalGuidance: Parental guidance or viewer discretion descriptor, with associated semantics: Country - code that indicates the country for which the parental guidance descriptor is defined; ParentalRatingScheme - denotes the specific scheme used for rating the input program; ParentalRatingValue - the actual rating of the program according to the rating scheme specified above; MinimumAge - the minimum recommended age for consumers of the program, in years.
AspectRatio: Aspect ratio of the visual program material, represented by the two attributes width and height (e.g. 4:3, 16:9, 2.35:1).
Color: Flag indicating whether the visual program material is in color or not.
HD: Flag indicating whether the visual program material is high-definition or not.
Stereo: Flag indicating whether the audio program material is stereo or not.
AudioChannels: The number of audio channels of the program.
VideoSystem: Denotes the video system in which the program data is broadcast (e.g. PAL, NTSC, SECAM).
VisualCodingFormat: Denotes the coding format of the input visual content (e.g. MPEG-1, JPEG2000).
FrameWidth: The width of the input images/frames in pixels.
FrameHeight: The height of the input images/frames in pixels.
FrameRate: The frame rate of the input video stream, in Hz.
Progressive: A flag that specifies whether the input video is in progressive or interlaced format.
AudioCodingFormat: Specifies the coding format of the input audio stream.
AudioSamplingRate: Specifies the sampling rate of the input audio stream, in Hz.
FileFormat: The file format or MIME type of the input AV content.
FileSize: The size of the AV media file in bytes.
BitRate: The bit rate of the AV content required for synchronous transmission, in bits/sec.
ExtensionDescriptor: An abstract descriptor that provides a generic template for future definition of new descriptors as they are deemed necessary.
id: Description instance identifier.
tag: Description instance tag.

0021 The ProgramInformation DS contains all the information related to a single audiovisual program, e.g. a TV program, that is necessary to build an Electronic Program Guide.

0022 ProgramInformation Examples

<ProgramInformation>
  <BasicInformation>
    <ContentReferenceID>
      http://media.nbz.com/programs/contentids/NBZ-FR-1999
    </ContentReferenceID>
    <Title type="main">Friendz</Title>
    <Version>3</Version>
    <EpisodeNumber>10</EpisodeNumber>
    <ParentalGuidance>
      <Country>us</Country>
      <MinimumAge>10</MinimumAge>
    </ParentalGuidance>
    <Genre type="main">Situation comedy</Genre>
    <Language type="main">en</Language>
    <Subtitled>false</Subtitled>
    ............
  </BasicInformation>
</ProgramInformation>
<ProgramLocation id="proglocationa">
  <ContentReferenceID>
    http://media.nbz.com/programs/contentids/NBZ-FR-1999
  </ContentReferenceID>
  <ProgramLocator>
    http://media.nbz.com/programs/media/friendz.mp2
  </ProgramLocator>
</ProgramLocation>

0023 In the following example, basic program descriptive data is received separately from the location data of the program. This achieves separation of selection (using the program descriptors) from location resolution (using the mapping from content reference identifier to a location). The content reference identifier is the link between the two descriptions.

<ProgramInformation id="proginfoa">
  <LocationInformation ID="locationa" tag="1">
    <ContentReferenceID>
      http://media.nbz.com/programs/contentids/NBZ-FR-1999
    </ContentReferenceID>
    <ProgramLocator>
      http://media.nbz.com/programs/media/friendz.mp2
    </ProgramLocator>
  </LocationInformation>
  <BasicInformation id="basicinfoa">
    <Title xml:lang="en" type="main">Friendz</Title>
    <Version>3</Version>
    <EpisodeNumber>10</EpisodeNumber>
    <ParentalGuidance>
      <Country>us</Country>
      <MinimumAge>10</MinimumAge>
    </ParentalGuidance>
    <Genre type="main">Situation comedy</Genre>
    <Language type="main">en</Language>
    <Subtitled>false</Subtitled>
    ............
  </BasicInformation>
  <ExtendedInformation id="xtendinfoa">
    <Genre type="sub">Drama</Genre>
    <VideoCodingSystem>ATSC</VideoCodingSystem>
    <Progressive>false</Progressive>
    ............
  </ExtendedInformation>
  <EventInformation id="eventinfoa">
    <Repeat>true</Repeat>
    <Live>false</Live>
    <PayPerView>false</PayPerView>
    <RightsService>
      http://media.nbz.com/programs/rights/friendz/
    </RightsService>
    <AspectRatio width="4" height="3"/>
    ............
     </EventInformation>
    </ProgramInformation>

    <ProgramInformation id="proginfob">
     <LocationInformation ID="locationb">
      <ContentReferenceID>
       http://media.nbz.com/programs/contentids/NBZ-FR-1999
      </ContentReferenceID>
      <ProgramLocator>
       http://anothermedia.nbz.com/moreprograms/media/friendz.mp2
      </ProgramLocator>
     </LocationInformation>
     <BasicInformationRef>
      proginfoa.xml#basicinfoa
     </BasicInformationRef>
     <ExtendedInformationRef>
      proginfoa.xml#xtendinfoa
     </ExtendedInformationRef>
     <EventInformation id="eventinfob">
      <Repeat>true</Repeat>
      <Live>false</Live>
      <PayPerView>true</PayPerView>
      <RightsService>
       http://media.nbz.com/programs/rights/friendz/
      </RightsService>
      <AspectRatio width="16" height="9"/>
      ............
      ............
     </EventInformation>
    </ProgramInformation>

0025 As exemplified by the above, future TV systems will use computer-based end-user equipment, i.e. TVs with program storage. Intelligent agents will learn, or will be told, the program preferences of the viewer, and will select programs from the many broadcasts and store them for real-time or later viewing. New business models are thus required to support the rights of the broadcasters, program copyright owners and other agents and system operators.

0026 In one aspect, the present invention provides methods to enable such new business models, giving rights owners influence over the effective production made by the end-user equipment (TV, STB) and over the program audience. Both long programs, e.g. movies, and short programs, e.g. commercials, contain metadata information to enable the rights owners to target their material. Defined target types include the time at which the program is to be shown, the type or genre of programs to be shown, the household or individual demographics to which the programs are to be shown, and viewers who have demonstrated prior interest in certain products or programs. In this manner, both the traditional business model and new models are fully supported.

0027 The Targeting is in two parts. The first part, If-Audience, allows audience selection (e.g. demographic targeting) for the program; the second part, Then-Presentation, allows presentation or production selection (e.g. targeting a time, or insertion in another program). There is also a final term (Else) to define what to do if the targets are not successful.

0028 A Target is formed as a logical expression using logical operators like NOT, AND, OR, ANDNOT and ORNOT, and terms of the aforementioned types. The terms may be few or many in number and can be used to define very specific or very broad targets as required. An optional money attribute with each term allows programming decisions based on cost/revenue, used for example in the likely event of multiple suitable programs competing for the viewer's attention. The cost of some programming can be offset by credits from advertising impressions.
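To make the expression form concrete: a minimal, purely illustrative Python sketch of how an end-user agent might evaluate such a target. The function name, the tuple encoding of terms, and the profile keys are all hypothetical, and only implicit AND joining of terms is shown:

    # Illustrative sketch: evaluating a target made of terms carrying an
    # optional money attribute. Term encoding and all names are hypothetical.
    def evaluate_target(terms, lookup):
        """terms: (negate, name, expected, money_usd) tuples, implicitly
        ANDed in this simplified sketch; lookup resolves a term name to the
        locally held value. Returns (success, net_money)."""
        net_money = 0.0
        for negate, name, expected, money_usd in terms:
            hit = (lookup(name) == expected)
            if negate:
                hit = not hit
            if not hit:
                return False, 0.0        # one failed AND term fails the target
            net_money += money_usd       # accumulate cost/revenue of the terms
        return True, net_money

    profile = {"genre_sub_most_popular": "action", "age_over_30": True}
    target = [
        (False, "genre_sub_most_popular", "action", -0.005),  # Ad credit
        (False, "age_over_30", True, 0.0),
    ]
    print(evaluate_target(target, profile.get))   # (True, -0.005)

A fuller agent would honor the NOT, AND, OR evaluation order and could use the accumulated money value to rank competing programs for the same slot.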
0029 In another aspect, the present invention provides a method for displaying a TV program to a viewer, including receiving a plurality of TV programs; allowing the viewer to select one of the plurality of received TV programs for viewing; and responding to the viewer selection by displaying the viewer-selected program and displaying additional programs in accordance with previously specified display criteria, the additional programs being selected in accordance with the previously determined viewing preferences of the viewer. The additional programs may be stored in accordance with the display criteria. The display criteria may include display schedule criteria, selected program criteria, and previously determined viewing preferences criteria. The method may further include receiving a plurality of additional programs; receiving the display criteria for each additional program together with each respective additional program; and storing a plurality of additional programs selected in accordance with the previously determined viewing preferences.

0030 In a further aspect, the present invention provides a method for displaying a TV program to a viewer, including transmitting a plurality of TV programs for selection therebetween by the viewer, and transmitting a plurality of additional programs for selection therebetween in accordance with previously determined viewing preferences of the viewer, the selected additional programs being for display to the viewer in accordance with previously specified display criteria.

BRIEF DESCRIPTION OF THE DRAWINGS

0031 FIG. 1 is a diagram of an EPG including a virtual channel; and

0032 FIG. 2 is a schematic diagram of the architecture of a programming targeting system according to the invention.

DETAILED DESCRIPTION OF THE INVENTION

TARGETING EXAMPLE 1
TARGETING EXAMPLE 2
TARGETING EXAMPLE 3
TARGETING EXAMPLE 4
TARGETING EXAMPLE 5
Operation of the Presentation Agent
Virtual-Channel Creation Algorithm

0033 A new television system model, based on recent advances in digital television and computer technology, can advantageously replace the traditional TV industry system and business model of 50 years' standing. While initially digital TV seemed to be merely a digital replacement of the analog technology systems (NTSC and PAL), albeit with high-definition picture quality available, a radically different, new-generation TV system model has now come to light. This includes commercial technology and much industry-generated technology and standards, including MPEG, SMPTE, ATSC and TV Anytime.

0034 Digital conversion and compression allow the TV signal to be represented efficiently as digital computer data and stored on a computer Hard Disk Drive (HDD). This, together with recent and expected further advances in HDD technology, allows hours of video to be saved at the viewer's home in a Digital Television (DTV), Set-Top Box (STB) or other device accessible via a home network. Time-shifting video recorder systems (PDRs), examples of which are already on sale, convert all TV signals to compressed digital form (e.g. MPEG2) and pass them via Hard Disk Drive (HDD) storage prior to presentation. PDR concurrent record and replay, effectively a gigantic random-access buffer and a generic capability of HDD storage, enables the simultaneous replay of the display video stream and recording of new video information, i.e. programs and commercials (advertising programs, or Ads), for possible later replay.
0035 With PDR systems a sophisticated EPG is provided, using specially accessed program metadata (special access is sometimes required for the legacy analog case or an inadequately developed digital case), to allow the viewer to select a program for viewing or recording. Advanced automatic preference determination addresses the ease-of-use aspect, providing the viewer with a selection of preferred program titles, and also drives an automatic recording system to provide a selection of preferred programs. More importantly, it enables viewer profiling that leads to an improved targeted-advertising system for TV commercials compared to the traditional model.

0036 The combination of the following technology items allows, in end-user equipment, all broadcast programs, Ad and non-Ad, to be identified, selectively saved and later more selectively replayed as a channel stream for presentation to the viewer:

0037 1) Digital TV broadcast technology (MPEG2), or a combination of analog NTSC and digital data (e.g. VBI or Internet data) giving the same data capability;

0038 2) Intelligent digital-TV-type end-user equipment, i.e. including a computer and HDD storage (PDR);

0039 3) Program (Ad and non-Ad) content description (EPG metadata), plus an identifying mechanism for program video transitions (Ad and non-Ad), thus enabling video to be treated as information. Return-path metadata may also be required.

0040 The new TV system (information broadcast to an intelligent TV) is very different from the traditional TV system (prepared programming streams broadcast to a dumb TV). The full potential is a new TV system where the broadcast channels are alive 24 hours per day, transmitting a much richer and fuller set of programming, and each intelligent TV, running preference algorithms, picks off and records programming of interest to its viewer(s) for viewing at any time.

0041 Because television programming and system running costs are in many cases paid out of advertising revenue, it is a critical issue to demonstrate a workable and desirable new business model, or the new technology cannot be deployed. This metadata specification defines an EPG schema format and language to carry targeting control information from the program owners and/or distributors to influence the personal programming decisions made by the intelligent digital-TV end-user system (or PDR), thus leading the way to acceptable business models for all system contributors.

0042 Targeting

0043 Introduction

0044 Personal TV systems can function without program targeting, but all personal programming decisions are then made totally independently by the software agents in the end-user equipment, leaving out the potential for new business models for program makers, distributors and operators brought about by communication to influence the agents' decisions.

0045 The Targeting DS (T-DS) contains selection information which is in addition to the usual program content and schedule information (i.e. the EPG). The T-DS references a program location or a scheduled or broadcast program (event) and has information in two parts to select or influence the selection of:

0046 (1) the Audience for the program and,

0047 if successful, Then execute:

0048 (2) the Presentation or display of the program.

0049 The T-DS, for example, enables program copyright owners, distributors and broadcasters to influence the selection of offered or available programs at the end-user equipment so that the selections match their interests as well as the personal interests and preferences of the user. In addition, an obvious use is the audience targeting of advertisement programs (Ads or commercials), but the same mechanism is used for personalized programming in general, influencing the final production of personal programming and virtual channels.
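As a rough illustration of this two-part structure, the sketch below traces the flow from audience selection to a presentation note. The function and field names are hypothetical; the ELSE action strings are taken from the Targeting DS semantics given later:

    # Hypothetical sketch of the two-part T-DS decision: If-Audience first,
    # Then-Presentation noted only on success, Else action otherwise.
    def on_metadata_arrival(tds, audience_matches, record_queue):
        if not audience_matches(tds["audience"]):
            return tds.get("else_action", "IGNORE/DELETE-PROGRAM")
        # Audience target found: keep the metadata and note the program
        # for storage on its (possibly much later) arrival.
        record_queue.append({"content_ref": tds["content_ref"],
                             "presentation": tds["presentation"]})
        return "FETCH/KEEP-PROGRAM"

    queue = []
    tds = {"content_ref": "NBZ-FR-1999",
           "audience": {"genre_main": "movie"},
           "presentation": [{"slot": "weekday-evening"}],
           "else_action": "IGNORE/DELETE-PROGRAM"}
    print(on_metadata_arrival(tds, lambda a: True, queue), len(queue))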
The following is an example of the target information supported:

0050 Audience targeting (audience selection) is based on the following main types of data:

0051 User demographic information:
0052 name, age, sex, language, occupation, income, etc.
0053 Preference-rated program information or other preference-rated information (e.g. products):
0054 distributor, producer, title, subject, genre-main, genre-sub, actor-1, actor-2, etc.
0055 Transition behavior, using data monitored when changing TV programs:
0056 changing between Titles, Genres and Channels.
0057 General geographic, household, AVCE product or industry information:
0058 time zone, ZIP/post code, number of TVs, HomeNet, etc.

0059 In addition, each database row (or database item) is augmented with a confidence-level value. This is particularly useful for automatically inferred data items or rows, enabling information entries of useful value but with less than 100% confidence. For manually entered data the confidence is 100%.

0060 Presentation targeting (selection of when to show) is based on the following main types of data:

0061 Time information:
0062 actual or relative time of presentation.
0063 Another defined program event:
0064 Insert, Substitute Rights, Repeat count.
0065 A Money attribute with each term.

0066 In a sense, the broadcast T-DS information represents a simple computer program of targeting instructions, interpreted by common agents, each operating independently using special local user data in order to resolve the targeting (selection) decisions; see FIG. 2.

0067 Audience targeting instructions are analyzed by the storage STB agent on arrival; this entails comparing the given targeting information against specially accessed local target information, as specified in the targeting expression. If the audience targeting is successful (ultimately a yes-or-no decision), the metadata (program and targeting) is stored locally, and by so doing a note is made to store the program on its arrival later (by seconds or days). This may require, at a scheduled time, seeking out the program, e.g. via analog and/or digital TV tuner control, or even Web access, to acquire the program.

0068 Targeting is by construction of a logical selection expression of information terms, and the data content model used allows a flexible definition of the target. The target can be made as narrow or wide as required and can include a variety of types, traditional and new. A money attribute allows cost/revenue-based (presentation) decision making in the event of multiple suitable program materials competing for the viewer's attention.

0069 The subsections contain the specification of the syntax and semantics of the Targeting Description Schema, as well as some examples.

0070 Targeting, Description and Resolution

0071 Starting with a targeting example:

0072 Consider the audience target successfully found IF the targeting description "Most popular MainGenre of Movie is Action" is True.

0073 Targeting is selecting a target by selecting a certain user-oriented data item from a data set collected and retained by the end-user STB system (e.g. the most popular item of a certain category of items) and comparing it to a given data item. If the compare is successful, then the audience target is considered found.
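A concrete resolution sketch may help here. The following is illustrative only: it assumes an in-memory SQLite table standing in for the STB preferences database (the specification notes later that describing targeting in SQL does not require an actual SQL engine in the STB), creates a few invented rows, runs the example question, and compares the result to the given item:

    import sqlite3

    # Sketch only: an in-memory SQLite table stands in for the STB
    # preferences database; rows and ratings are invented for illustration.
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE preferences
                   (genre_main TEXT, genre_sub TEXT, preference_rating INTEGER)""")
    con.executemany("INSERT INTO preferences VALUES (?, ?, ?)",
                    [("movie", "action", 900), ("movie", "comedy", 450),
                     ("news", "politics", 700)])

    # The broadcast targeting instruction: most preferred sub-genre of movie.
    selected = con.execute(
        """SELECT genre_sub FROM preferences
           WHERE genre_main = 'movie'
             AND preference_rating = (SELECT MAX(preference_rating)
                                      FROM preferences
                                      WHERE genre_main = 'movie')"""
    ).fetchone()[0]

    # Compare the selected item to the given targeting item 'action'.
    print(selected == "action")   # True: the audience target is found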
There are a number of ways to construct the data-item selection part of the targeting.

0074 One way is to have a two-part selection statement. One part is a target information type definition (e.g. Genre: Movie.Action), succeeded by a second part which is one of a set of defined and fixed selection qualifiers. Together they create a targeting question precise enough to allow resolution as to whether the local user information offers the intended target for the program. If the answer is True, then the audience target is considered successful. Examples of selection qualifiers:

0075 TARGET-IS-THE-MOST-POPULAR,
0076 CORRELATION-WITH-TARGET,
0077 EXACTLY-DEFINED-BY-TARGET,
0078 PREFERENCE-FOR-TARGET,
0079 HAS-INSTANCE-HISTORY-OF-TARGET,
0080 HAS-INTEREST-HISTORY-OF-TARGET.

0081 This works well for a small number of question types that are general in nature, but where a large number of question types, detailed unambiguous questions, flexibility and extensibility are required, the method isn't suitable.

0082 An alternative way, type two, is, rather than explicitly building into the metadata definition a set of pre-determined selection qualifiers to make the targeting question, to create the question in a general way by considering that the STB target is in the form of a database (e.g. called preferences) of known columns (e.g. channel, program, genre_main, genre_sub, preference_rating) with known possible labels or values for the database contents. The audience targeting question is then constructed in a general format using a standard database selection format: a structured query language (SQL) query. For example:

0083 The audience targeting is successful IF (the most popular item of a defined type from the STB database equals the given item). This is a comparison of the database selection-item result against the given item. Taking a further developed version of the example:

0084 Consider the audience targeting successful IF the most popular genre of movie is action. The database is searched for the name of the most popular Genre-Sub (e.g. the one with the highest count of Genre-Sub) for the Genre-Main of movie, and the test is made by comparing to see if the result equals the given Genre-Sub name, action.

0085 The type-one targeting description is constructed as follows:

    IF( TARGET(genre_sub 'action', genre_main 'movie') TARGET-IS-THE-MOST-POPULAR )?

The type-two (first version) targeting description is as follows:

    IF( (SELECT genre_sub FROM preferences
         WHERE genre_main = 'movie'
         AND preference_rating = (SELECT MAX(preference_rating) FROM preferences
                                  WHERE genre_main = 'movie';) ;)
        = 'action' )?

0086 Type-two targeting, though more complex, offers very precise targeting and avoids the ambiguity present in type one, where it is not clearly stated in the words that the intention is to use ratings to compare the most popular sub-genre of movie programs and to ignore all other programs. Also, there are a number of ways to determine "most popular". One way is to search for the highest preference rating for main-genre movie using two SELECT queries, as shown above.
Another way is for the database to be searched for the sub-genre label with the highest count of sub-genre for the main-genre movie, as below.

0087 The type-two (second version) example targeting description is as follows:

    IF( (SELECT genre_sub FROM preferences
         WHERE genre_main = 'movie'
         GROUP BY genre_sub
         HAVING MAX (COUNT(genre_sub)) ;)
        = 'action' )?

0088 Regarding type one, it would be difficult to think up in advance and make a fixed metadata selection-qualifier statement for every possible way to pick user target-profile data for the targeting test question, and it would also result in a less compact and more complex specification. Therefore type two, targeting using standardized database selection statements (e.g. SQL), is favored over type one.

0089 Targeting using Database Selection

0090 There are two types of database in the end-user equipment (STB).

0091 The most obvious type is the program-history data type. The program preferences database, with data mainly from monitoring programs viewed, is the main one of this type. Targeting access to this database enables, for example, the targeting of a user with a preference for a particular program, genre, title or actor.

0092 The second type of database contains data from monitoring user behavior, for example regarding the transitions and switching between contexts, e.g. programs and program content types like title, channel and genre. This type therefore brings additional target material for reaching user types through their monitored and processed behaviors.

0093 One can, for example, write targeting instructions to reach a user who switches to Fox News after watching Larry King on Monday nights. The history-type preferences database does not have this transition-type data.

0094 Database queries can be extended by joining, e.g. on Titles, accessing both the program preferences and transition behavior databases (see the sketch at the end of this subsection).

0095 Program Preferences Database

0096 The user information in each STB is held in relational databases. One of the databases is for user Demographic data, one is for General information relating to the household as a whole, one is for program Transition behavior, and another is the program Preferences database.

0097 The User Demographic database has row entries for each user or predicted user (predicted in the case that users declined to enter their personal information and the data has been automatically generated). Each row contains details like age, gender, race and occupation, plus a confidence-rating number giving a measure of confidence in the automatically generated data. The common case of targeting an advertisement video at an age or age-range target would require accessing the age data from the age column.

0098 The General information database is typically a single-row database with the following example column types: geographic location (ZIP code, time zone), PCs in house, serial number. The Preferences database consists of many rows of program-history data for recently viewed video programs, with important program content information (e.g. Title, Genre), user information and a preference rating. Non-program data is included here if there is a preference rating attached, e.g. products (UPC). The most popular or most preferred can be determined by examining the automatically pre-computed preference-rating number or by counting instances, as specified in the targeting instructions. Program preferences are based on the background monitoring of programs viewed and of user control, but entries can also be made directly to the database by the user via a GUI, e.g. a preference for an actor or a program genre or subject.
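The join mentioned in paragraph 0094 might look as follows. This is a minimal sketch only, assuming SQLite tables with the example column names used in this section and invented rows:

    import sqlite3

    # Sketch of a Title join across the preferences and transition databases.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE preferences (title TEXT, preference_rating INTEGER)")
    con.execute("""CREATE TABLE transition
                   (title_current TEXT, title_next TEXT,
                    title_preference_rating INTEGER)""")
    con.execute("INSERT INTO preferences VALUES ('Larry King', 800)")
    con.execute("INSERT INTO transition VALUES ('Larry King', 'FOX News', 700)")

    # Users with a strong preference for a title AND a monitored transition
    # away from it toward some next title.
    row = con.execute(
        """SELECT t.title_next
           FROM preferences p JOIN transition t ON p.title = t.title_current
           WHERE p.preference_rating > 500"""
    ).fetchone()
    print(row[0])   # FOX News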
0099 Columns of this Preferences database are given here as an example (for the full set, see the semantics table later):

SERVICE
CHANNEL-DISTRIBUTION
VIEW-START-TIME
VIEW-DAY-OF-WEEK
TITLE
KEYWORD
GENRE-MAIN
GENRE-SUB
MPAA-RATING
CAST1
CONFIDENCE-LEVEL (especially useful for inferred entries)

0100 PREFERENCE-RATING-FOR-ROW

0101 A column for a Preference Rating number is available for each row. This is a number, e.g. between 100 and 999, indicating relative preference for the row item, and it may have been produced automatically, for example by the preference agent, or entered manually. A Preferences database row example follows:

    500, HBO, DSS399, 2100, FRIDAY, INDEPENDENCE DAY, SIFT, MOVIE, ACTION, G, JOE BLOGGS, 90.

0102 Sometimes complex targeting is required, e.g. target the audience where the most popular genre of movie is ACTION. This is done in a general way by including in the targeting metadata a subset of the SQL (Structured Query Language) standard method to access a data item from the databases. The subset is use of only the SELECT command, and a version of it which returns only one result.

0103 The result returned by a SELECT command, e.g. looking for the highest preference rating for MOVIE, is compared to the targeting item, e.g. ACTION, to produce a logical TRUE or FALSE. The use of the SQL SELECT command is merely a standard (ANSI) way to describe a targeting item, as an alternative to inventing new words to do the same thing, and doesn't imply that an SQL database or SQL interface need be employed in an STB implementation.

0104 Consider the audience targeting successful IF the most popular GENRE of MOVIE is ACTION:

    IF( SELECT genre_sub FROM preferences
        WHERE genre_main = 'movie'
        AND rating = (SELECT MAX(rating) FROM preferences
                      WHERE genre_main = 'movie';) ;
      ) = 'action'
0105 Consider the audience targeting successful IF MOVIE.ACTION is 90% more popular than the next most popular:

    IF( (SELECT MAX(rating) FROM preferences
         WHERE genre_main = 'movie' AND genre_sub = 'action';)
        /
        (SELECT MAX(rating) FROM preferences
         WHERE genre_main = 'movie' AND genre_sub != 'action';)
      ) = 1.9

0106 Consider the audience targeting successful IF the most popular DAY OF WEEK for watching MOVIE.ACTION is FRIDAY:

    IF( SELECT view_day_of_week FROM preferences
        WHERE genre_main = 'movie' AND genre_sub = 'action'
        GROUP BY view_day_of_week
        HAVING MAX ( COUNT (view_day_of_week));
      ) = 'friday'

0107 Consider the audience targeting successful IF the most popular TIME for watching MOVIE.ACTION is 9:00 PM:

    IF( SELECT view_start_time FROM preferences
        WHERE genre_main = 'movie' AND genre_sub = 'action'
        AND view_day_of_week = ( SELECT view_day_of_week FROM preferences
                                 WHERE genre_main = 'movie' AND genre_sub = 'action'
                                 GROUP BY view_day_of_week
                                 HAVING MAX (COUNT(view_day_of_week)) ; )
        GROUP BY view_start_time
        HAVING MAX(COUNT(view_start_time));
      ) = 2100

0108 Transition Behavior Type Database

0109 This database contains data from user transition-behavior history. Transition behavior in this sense is the user viewing a TV program and making a transition from a present state to a next state, where the state transition is a decision point defined in time using absolute and relative time parameters, i.e. time-of-day, time-of-week and transition time relative to the program start. The state is a program or program-content-defining parameter, e.g. Title, Channel and Genre. The technique is not, however, limited to these state parameters; it works equally well for other behaviors, for example the state types Subject and Actor.

0110 A pre-computed preference rating is also added as a row data item. This is different for different state-type transitions, because not all state parameters need change at a transition point; for example, a transition may be a Title change that stays with the same Genre, or a Title change that stays with the same Channel.

0111 Example columns for this database are given here:

0112 USER-NAME
0113 CONFIDENCE-LEVEL
0114 TITLE-CURRENT
0115 TITLE-NEXT
0116 TITLE-PREFERENCE-RATING
0117 CHANNEL-CURRENT
0118 CHANNEL-NEXT
0119 CHANNEL-PREFERENCE-RATING
0120 GENRE-CURRENT
0121 GENRE-NEXT
0122 GENRE-PREFERENCE-RATING
0123 TRANS-DAY-OF-WEEK
0124 TRANS-TIME-OF-DAY
0125 TRANS-REL-TIME-IN-SESSION
0126 TRANS-REL-TIME-IN-PROGRAM

0127 Consider the audience targeting successful IF the most likely Title following Larry King on a Monday is FOX News:

    IF( SELECT title-next FROM transition
        WHERE trans-day-of-week = 'monday'
        AND title-current = 'Larry King'
        AND title-preference-rating = (SELECT MAX(title-preference-rating)
                                       FROM transition
                                       WHERE trans-day-of-week = 'monday'
                                       AND title-current = 'Larry King';)
      ) = 'FOX News'

0128 The audience targeting question concerns a Title transition, so the audience targeting instruction is directed at the Transition behavior database rather than at the program Preferences database.
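The schema only defines these columns; how rows are produced is up to the monitoring agents. A sketch of how one transition row might be appended (SQLite again; the fixed rating value is a placeholder for the preference agent's computation, and all names are illustrative):

    import sqlite3
    from datetime import datetime

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE transition
                   (user_name TEXT, confidence_level INTEGER,
                    title_current TEXT, title_next TEXT,
                    title_preference_rating INTEGER,
                    trans_day_of_week TEXT, trans_time_of_day TEXT)""")

    def log_transition(user, current, nxt, when, confidence=100):
        # A pre-computed preference rating would normally come from the
        # preference agent; a fixed placeholder value is used here.
        con.execute("INSERT INTO transition VALUES (?, ?, ?, ?, ?, ?, ?)",
                    (user, confidence, current, nxt, 500,
                     when.strftime("%A").lower(), when.strftime("%H%M")))

    log_transition("alice", "Larry King", "FOX News", datetime(2001, 2, 12, 21, 0))
    print(con.execute(
        "SELECT title_next, trans_day_of_week FROM transition").fetchone())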
0129 Targeting Architecture

0130 Architecture Overall Description

0131 Special targeting information is added to, or supplements, the program information metadata to enable the video program it references to be aimed at a user target. The target is described by data in the end-user equipment (STB or PDR) and consists of, for example, user demographics or user program preferences; see FIG. 2.

0132 FIG. 2 is a block diagram of the basic targeting architecture. It shows video programs and associated metadata being broadcast from the TV distribution plant, and an exploded view of the relevant agent and database modules in the end-user equipment, e.g. a set-top box.

0133 The two bubbles in the STB represent software controller agents. The upper one, called the storage agent, is responsible for deciding whether arriving metadata, and the later-arriving video program, should be stored or not. The lower one, the presentation agent, is responsible for deciding what programs to show or present to the user, and at what time, its decision output being a virtual channel in the electronic program guide (EPG). Arrow lines pointing at each agent indicate data from stored information used to make the decision, represented in FIG. 2 as four databases: demographics, preferences, general, and the stored-metadata database.

0134 Upper right is the User Program Preference database. This contains a table of data, each row for example derived from user TV viewing history, about programs watched and some of their content description information, e.g. Title, Genre, Actor, together with a preference-rating number indicating relative preference. The preference rating (pre-computed and derived from local user data) is a positive integer, where higher indicates more relative preference and the highest indicates the favorite item. Row data of a non-program type can also be input by the user directly, for example to indicate a strong preference for a particular actor or director. In any case, all elements of each row need not be filled. Generalized content and individual information can be obtained by querying this database.

0135 Upper left is the User Demographic database. This contains personal data about the user or users and may have been obtained by direct user input or inferred from programs viewed and cross-correlated to demographics (the production of which is not part of this specification). Household aggregate and individual information may be obtained by querying this database.

0136 Center left is a small database of General Information, for useful target data that does not fit with User demographics or Programs, e.g. STB geographic location, serial number, presence of TVs, PCs, etc.

0137 Lower left is the storage area for program metadata that is either pending actual program material or corresponding to actual stored programs, shown in the area lower right.

0138 Virtual Channel

0139 As can be seen from FIG. 1, the virtual channel appears in the EPG schedule and looks just like a regular, live TV channel, with certain programs scheduled to be shown at certain times of the day. The obvious difference, which may be transparent to the user, is that it is made using previously stored programs (channel 8 in FIG. 1, programs Z, P, X and Y) and plays out from the STB (PDR) video storage (hard drive).

0140 The user will find that, unlike with regular scheduled programming, he can go back in time (e.g. 6-7 PM) and watch programs scheduled in the virtual channel for earlier in the day (Program Z).
When doing this, of course, regular programming in the program guide is blanked out or marked as unavailable. Also, the system agents know when the user never watches TV, e.g. (see FIG. 1) 8-9 PM out of the house, or 11 PM onwards in bed, both always with the STB/TV switched off, so there is normally no virtual-channel program scheduled for these times. A user request via a GUI button command can instruct the system to complete the virtual-channel schedule fully, e.g. for the remainder of the day.

0141 All virtual-channel programs are audience-targeted and user-preferred programs. A virtual-channel schedule is considered more natural to use than a completely separate mechanism (e.g. a top-ten list presentation), because a user has to interact with an EPG schedule for all live programs, and it makes sense to see the selected user-preferred programs alongside the live programming in the guide schedule.

0142 Storage Agent

0143 Arriving metadata, which arrives before the associated video program, is examined by this controlling agent for the presence of audience targeting information. If present, it is processed using local target database items, and if successful the metadata is stored and the associated video program is also stored on its later arrival. The target databases are User demographics, User program preferences and General information, plus the metadata indicating programs already stored. The storage agent's tasks are listed:

0144 Examine incoming metadata and save successful metadata;

0145 Manage stored metadata, for example read saved metadata and access and save the associated programs. At any one time there might be a number of solo metadata blocks of information pending arrival or access of the associated program material. The storage agent manages control data in addition to the metadata and program to enable effective system operation. This control data is for a directory of metadata and programs, and also includes control data elements (bits, bytes) to account for the presence and usage of the programs, e.g. presentation counts.

0146 Housekeep the metadata and program storage areas; that is, observe and delete: (1) expired programs; (2) presented programs; (3) completed campaigns for each program, i.e. the number of presentation repeats satisfied; and (4) if short of storage capacity, re-process targeting and delete programs that produce a relatively weak targeting success factor, in favor of keeping or saving the stronger (see the sketch following this list). The targeting success factor (instead of a straight yes or no) is used for housekeeping metadata where there is uncertainty about inferred local target data (see appendix). Here, for example, users have not input their demographics directly, so they are inferred using additional agents and input data (not described here). The inference process is dynamic and can change the probability of set user demographic profiles, or add or remove profiles. Therefore, depending on the audience targeting expression and the certainty of local data, the targeting result could be a value (between yes, 1, and no, 0) and could be different from a few days prior. The housekeeping software re-assesses targeting success as needed for the purpose of deleting or replacing stored programs.

0147 Arriving material for live presentation can short-circuit the described process (storing metadata, storing the program), as the presentation agent can be notified directly.
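A minimal sketch of the capacity-driven housekeeping rule in item (4) above, assuming a retargeting function that returns a success factor between 0 (no) and 1 (yes); all names are hypothetical:

    # Sketch: evict the programs with the weakest targeting success factor
    # until enough storage has been freed.
    def housekeep(stored, retarget, bytes_needed):
        kept = sorted(stored, key=lambda p: retarget(p))   # weakest first
        freed = 0
        while kept and freed < bytes_needed:
            victim = kept.pop(0)          # delete the weakest-targeted program
            freed += victim["size"]
        return kept, freed

    programs = [{"name": "ad-1", "size": 2}, {"name": "movie-7", "size": 9}]
    factors = {"ad-1": 0.3, "movie-7": 0.9}
    kept, freed = housekeep(programs, lambda p: factors[p["name"]], 2)
    print([p["name"] for p in kept], freed)   # ['movie-7'] 2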
0148 Audience targeting depends on three things:

0149 (1) the metadata targeting instructions;

0150 (2) the processing agent algorithm, including some built-in rules;

0151 (3) the local target data.

0152 Certain targeting rules are built into the processing agent, e.g. whether to store a program in the event of a space limitation, or whether to store a program whose audience targeting was successful but which doesn't seem to match user preferences. Modules of this processing agent (the storage agent), e.g. the targeting module, can normally be updated or replaced to enable a different interpretation of targeting metadata and local data.

0153 Presentation Agent

0154 The presentation agent has the basic task of making a program schedule for the audience-selected and stored regular preferred programs (i.e. audience-targeted or otherwise captured programs), for their notification to the user (in the multi-user case, to the current user); see FIG. 2. In addition to regular programs, the presentation agent has to identify and present advertising programs (Ads). Audience-targeted Ads are placed between programs, and inserted or substituted within programs, as the defined rights and other metadata allow.

0155 For regular programs the preferred notification format is to make up one or more (if need be, for different users or extra content) personal virtual channels for the displayed program guide, so the stored programs can be displayed alongside live scheduled programs. On the face of it, as these programs are from storage, they could simply be listed in order of preference rating with the highest number first. However, this does not permit proper notification of them to the user, who must use an EPG (electronic program guide) for all live scheduled programs, nor does it permit ordering them suitably for the viewing time.

0156 The user has the choice whether to select and stay on the virtual (personal) channel or to switch to live or other programming. If the user stays on the virtual channel, then programs are automatically replayed sequentially from storage as per the created schedule.

0157 The presentation agent determines how to make the personal channel programming (the personal final production) using the following information:

0158 (1) targeting metadata, including business IDs and money values;

0159 (2) the user program preferences and transition behavior databases;

0160 (3) the presentation agent algorithm, with presentation and conflict-resolution rules;

0161 (4) global business rules (applying to all commercials, and downloaded to user boxes).

0162 The T-DS presentation content model options allow either time information or another program (location information) to be used to set placement targets, e.g. setting a specific time for presentation or, in the case of a commercial, setting another specific program to present before, after or within, as an insertion or a substitute for another commercial. A strength attribute is included in the metadata to be used by the agent in the decision process. Taking an example: if the strength is EXACTLY-DEFINED-BY-TARGET for a given target program location and the program isn't found within the operating period, then the program is discarded even if the audience target was satisfied. On the other hand, if the strength is BEST-EFFORT, then a similar program is chosen for presentation.

0163 The presentation agent determines how to make the personal channel programming using the local data and the presentation metadata. It is possible for the local data and metadata to suggest different programs for each time slot of the virtual channel, and these conflicts are resolved by the agent. The broad plan of agent operation is as follows (a sketch appears after paragraph 0167):

0164 (1) Time slot by time slot, the algorithm makes a hidden, internal-working virtual channel using the presentation metadata, resolving conflicts using a downloaded rule set (e.g. giving preference to a particular business ID);

0165 (2) Time slot by time slot, the algorithm accesses program preferences from the preference database and makes another hidden, internal-working virtual channel;

0166 (3) The agent then makes up the actual virtual channel, taking input from both hidden internal-working virtual channels.

0167 Sometimes there are multiple programs vying for the same presentation time. In this case the money attribute can be used to decide which program to present. At other times there are multiple programs vying for the same presentation time and the rights and ID metadata are used in conjunction with downloaded special rules (not shown on the diagrams) to enable the decision about what to present or recommend in the personal channel program guide. These rules may indicate (for business reasons) that presentation should be biased to favor programs belonging to a certain ID over those from another ID.
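A toy sketch of the three-step merge, with hypothetical names; the conflict rule shown (metadata wins a slot only when it carries a money credit) merely stands in for the downloaded business rules:

    # Sketch: merge the two hidden internal-working virtual channels into
    # the actual virtual channel, slot by slot.
    def build_virtual_channel(slots, metadata_vc, preference_vc):
        actual = {}
        for slot in slots:
            m, p = metadata_vc.get(slot), preference_vc.get(slot)
            if m and p:
                # Conflict: resolve using the money attribute (credit wins).
                actual[slot] = m if m.get("money_usd", 0.0) < 0 else p
            else:
                actual[slot] = m or p    # whichever agent filled the slot
        return actual

    meta = {"1800": {"name": "ad-break-pkg", "money_usd": -0.005}}
    prefs = {"1800": {"name": "Friendz-ep10"}, "1900": {"name": "action-movie"}}
    vc = build_virtual_channel(["1800", "1900"], meta, prefs)
    print({s: p["name"] for s, p in vc.items()})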
0168 Targeting DS

0169 The semantics of the Targeting DS are as follows:

TargetingInformationType: Metadata content model to accompany or precede a program. Enables program copyright owners and distributors to influence the personalized programming and program-stream production decisions at the end-user equipment.
OperatingPeriod: The program with this metadata should be used only during the period, defined by Open (date) and Close (date).
ProgramLocation, ProgramLocationType: Defines the program that the targeting pertains to. References the TVA ProgramLocationType, including broadcast services and the Web.
BusinessIDs, BusinessIDsType: Set of business IDs intended to allow proper accounting for programs selected: Copyright Owner ID, Agency Service ID, Distribution Service ID, Targeting Service ID, Unnamed ID.
CopyrightOwnerID: Copyright-owner identity of the video program material.
AgencyServiceID: Agency service identity, if any is involved, e.g. an advertising agency. This may be needed to automatically apportion payments at the end of a certain accounting period, e.g. an audience monitoring period.
DistributionServiceID: Distribution service identity, e.g. TV company, cable company, Internet company, etc.
TargetingServiceID: Targeting-services company, if a different identity, managing the system operation, e.g. target program scheduling, metadata and audience measurement.
UnnamedID: Any other company identity fiscally relevant to the system operation.
ProductionRights: Set of rights governing the permitted usage of, and usage by others of, this particular video program regarding insertion, substitution, and repeat use.
RepeatControl, RepeatControlType: Data governing repeats: maximum number of repeats, minimum and maximum time interval between repeats.
NumberMaximum: Maximum number of permitted presentations of this video program.
IntervalMinimum: Minimum time interval between repeat showings of the video program. The absolute minimum permitted interval, even if the targeting expression allows a smaller interval.
IntervalMaximum: Maximum time interval between repeat showings of the video program.
The absolute maximum permitted interval, even if the targeting expression allows a larger interval.
IFAudienceTargetTrue: The first, IF or "arming", part of the IF-THEN-ELSE statement. Governs the AUDIENCE selection for the pertaining video material or program and determines whether it is a candidate for presentation. The decision comes from the boolean results of compare(s) of an item selected from the STB target database against the given targeting item string or integer value.
THENSeekPresentationTarget: This THEN-PRESENTATION expression is the second part of the targeting IF-THEN-ELSE statement and selects the presentation and production for the pertaining video material or program. It determines when, how, or with what other program this program material should be shown. The element may be repeated for multiple presentation targets.
ELSETargetingUnsuccessful, ELSETargetingUnsuccessfulType: The ACTION attributes govern what to do with the video program should any of the targeting be unsuccessful. ACTION (TargetsUnSuccessful attribute): NO-OP, IGNORE/DELETE-PROGRAM, FETCH/KEEP-PROGRAM, FETCH/KEEP-PROGRAM-RETRY.
ProductionRightsType: Sub-level content model defining the permissions (Unrestricted or Prohibited) for usage of, and usage by others of, this particular video program (segment or material) regarding insertion, substitution, and repeat use.
InsertionWithinSelf: Regarding another video program inserted within this program.
ToBeAnInsert: Regarding this program being inserted in another program.
SubstitutionWithinSelf: Regarding another video program substituted for part of this program.
ToBeASubstitute: Regarding this video program being a substitute for part of another program.
OneTimeUse: Regarding this program being used once.
RepeatUse: Regarding this program being used multiple times.
IFAudienceTargetTrueType: Logical expression (with result True or False) providing for the definition of an audience target for the video program, segment or material. The target is made narrow or wide using one or multiple terms and logic operators*. Each term is itself a conditional IF-expression (result True or False) after comparing an item from the target STB information databases of program preferences, program content information or general items to a given item. Items are pulled from the STB information databases using an SQL (relational database) query, a general way to look for the most popular program, the most frequently viewed genre, the most popular time, etc., of any item. *Expression evaluation is in the order NOT, AND, OR.
FirstTermIFStatement: Definition of the audience targeting question. First-term IF statement consisting of: IF(SelectedTargetItem, CompareOperator, GivenTargetingItem) = TRUE, in which case targeting is deemed successful. SelectedTargetItem is a choice as follows: DatabaseItem, or DatabaseExpression (an expression of two regular database items). The target is made narrow or wide using one or multiple terms and logic operators*. A Logical Operator (NOT type only) is optionally used for the first term.
DatabaseItem, SQLDatabaseType: The selected target information item is described by an industry-standardized SQL database query for the item, from a choice of STB target databases as follows: Preferences, Transition, Demographic, GeneralInfo, ProprietaryInfo.
DatabaseExpressionResultItem: Choice of target item which is derived from an expression of two or more selected database items joined by the ExpressionOperator.
DatabaseExpressionResultItemType: DatabaseItem(1), ExpressionOperator, DatabaseItem(2).
ExpressionOperator: Fixed choice of operator from:
 EQ - Equal
 NE - Not Equal
 LT - Less Than
 LE - Less than or Equal to
 GT - Greater Than
 GE - Greater than or Equal to
 PLUS - arithmetic
 SUBTRACT - arithmetic
 MULTIPLY - arithmetic
 DIVIDEDBY - arithmetic
 AND - Logical AND of neighboring terms; AND is performed after all NOTs.
 ANDNOT - Negate the next term, then logical AND of neighboring terms.
 OR - Logical OR of neighboring terms (or groups of ANDed terms); OR is performed after all ANDs and NOTs.
 ORNOT - Negate the next term, then logical OR of neighboring terms (or groups of ANDed terms); OR is performed after all ANDs and NOTs.
 XOR - Exclusive OR
 XNOR - Exclusive NOR
CompareOperator: Compare logic operator to implement the compare of the first "Choice" item (the target item) from an STB database against the given targeting item.
CompareOperatorType: Conditional compare types as follows:
 EQ - Equal
 NE - Not Equal
 LIKE - Like (using % for missing letters)
 LT - Less Than
 LE - Less than or Equal to
 GT - Greater Than
 GE - Greater than or Equal to
 EQWIN02 - Equal, approximation within 2% accepted
 EQWIN05 - Equal, approximation within 5% accepted
 EQWIN10 - Equal, approximation within 10% accepted
 MATCHGT10FWORDS
 MATCHGT20FWORDS
 MATCHGT30FWORDS
 MATCHGT50PCOFWORDS
 MATCHGT75PCOFWORDS
 MATCHGT90PCOFWORDS
GivenItems: One given targeting item (Integer or String), or a logical expression of given targeting items joined by logic operators, for example AND, OR. Multiple items are considered bracketed with respect to the compare operator.
Integer: Given integer item to compare against.
String: Given text item to compare against; can include "TRUE" and "FALSE".
LogicOperatorType: See LogicOperatorType.
ExtraTermIFStatement: Additional-term IF statement consisting of: LogicOperator( IF(SelectedTargetItem, CompareOperator, GivenTargetingItem) = TRUE ), in which case targeting is deemed successful. SelectedTargetItem is selected information from a choice of STB target databases. The target is made narrow or wide using one or multiple terms and logic operators*.
LogicOperator: Fixed choice of term-join operator from: AND, ANDNOT, OR, ORNOT, XOR, XNOR. *Expression evaluation is in the order NOT, AND, OR, XOR.
LogicOperatorType: Fixed choice of expression logical operator from:
 AND - Logical AND of neighboring terms; AND is performed after all NOTs.
 ANDNOT - Negate the next term, then logical AND of neighboring terms.
 OR - Logical OR of neighboring terms (or groups of ANDed terms); OR is performed after all ANDs and NOTs.
 ORNOT - Negate the next term, then logical OR of neighboring terms (or groups of ANDed terms); OR is performed after all ANDs and NOTs.
 XOR - Exclusive OR
 XNOR - Exclusive NOR
Preferences: Choice of target items from the "preferences" database of user program-viewing history, including manually entered items and other items, e.g. products; all items in this database have a preference-rating value.
Column examples are: USER-NAME, PREFERENCE-RATING-FOR-ROW (integer), SERVICE, CHANNEL-DISTRIBUTION, VIEW-START-TIME, VIEW-END-TIME, VIEW-DAY-OF-WEEK, TITLE, KEYWORD, LANGUAGE, GENRE-MAIN, GENRE-SUB, REVIEW-RATING (integer), SUBJECT-1, SUBJECT-2, MPAA-RATING, CAST-1, CAST-2, CAST-3, OTHER-PRODUCT-NAICS, OTHER-PRODUCT-UPC, CONFIDENCE-LEVEL (especially useful for inferred entries), OTHER.
SQLQuery (Preferences string attribute): SQL query text string. Example text: 'SELECT genre_sub FROM preferences WHERE genre_main = 'movie' AND rating = (SELECT MAX(rating) FROM preferences WHERE genre_main = 'movie';);'
Transition: Choice of target items from the "transition" database of user behavior regarding changing from one Title to another, or from one Genre to another. Column examples are: USER-NAME, CONFIDENCE-LEVEL (useful for an inferred User Name entry), TITLE-CURRENT, TITLE-NEXT, TITLE-PREFERENCE-RATING, CHANNEL-CURRENT, CHANNEL-NEXT, CHANNEL-PREFERENCE-RATING, GENRE-CURRENT, GENRE-NEXT, GENRE-PREFERENCE-RATING, TRANS-DAY-OF-WEEK, TRANS-TIME-OF-DAY, TRANS-REL-TIME-IN-SESSION, TRANS-REL-TIME-IN-PROGRAM.
SQLQuery (Transition attribute): SQL query text string.
Demographic: Choice of target item from the "demographic" database of the user(s). May be manually entered or inferred, or both. Column examples are: USER-NAME, AGE, RACE, INCOME, LANGUAGE, EDUCATION, OCCUPATION, OCCUPATION-NAICS, TVHOURS-AVE-PER-WEEK, CONFIDENCE-LEVEL (re: information in e.g. a ROW entry; e.g. allows two or more row entries for one user), OTHER.
SQLQuery (Demographic string attribute): SQL query text string. Example text: 'SELECT max(age) FROM demographic WHERE sex = 'male' AND occupation != student;'
GeneralInfo: Choice of target item from the "generalinfo" database of general information. May be manually entered or inferred, or both. Includes location information and serial numbers. Column examples are: GEO-COUNTRY, GEO-TIME-ZONE-TERRITORY, GEO-ZIP-CODE-(USA), BOX-SERIAL-NO, BOX-RANDOM-FIXED-NO, TECH-TVSETS-NO, TECH-VCRS-NO, TECH-PCS-NO, TECH-SERVICES-IN-USE, PETS-NO, CONFIDENCE-LEVEL (re: information in e.g. a ROW entry), OTHER.
SQLQuery (GeneralInfo string attribute): SQL query text string. Example text: 'SELECT geo-zip-code-(usa) FROM generalinfo'
ProprietaryInfo: Allows operator-specific and non-standard extensions of the target expression. Care should be used, as some systems will not be able to respond. Allows introduction of a different proprietary complex type, i.e. data content model.
THENSeekPresentationTargetType: Logical expression (result True or False) providing for the definition of a presentation target for the video program, segment or material. The target is made narrow or wide using one or multiple terms and logic operators*. Presentation is either at a defined time using a temporal term, or at a time based on program information or program location (e.g. schedule information), or a combination of the above. *Expression evaluation is in the order NOT, AND, OR.
FirstTerm: Content model for the first term consisting of: Logical, then a choice of Temporal Control Information or a Program Information type presentation targeting. Includes a MONEYCOSTUSD attribute for valuing presentation terms. Includes a STRENGTH attribute qualifying how to present if the term is successful.
MoneyCostUSD (attribute): Video programs all have different costs; e.g. some are zero cost, a regular program or movie has a certain positive cost, and an advertising program (commercial) has a small negative cost (a credit). Money allows the end-user equipment to make a presentation selection decision that includes money value.
STRENGTH (attribute): The allowed values below define how the associated term should be used:
 EXACTLY-DEFINED-BY-TERM
 BEST-EFFORT-DEFINED-BY-TERM
 ALTERNATIVE-TO-TERM-PERMITTED
 CONTINUE
 Example: (Exactly defined by) PgmGenre AND (Best-Effort defined by) Time. A Logical Operator (NOT) is optionally used for the first term.
ExtraTerm: Content model for an additional term consisting of: Logical, then a choice of Temporal Control Information or a Program Information type presentation targeting. Includes a MONEYCOSTUSD attribute for valuing presentation terms. Includes a STRENGTH attribute qualifying how to present if the term is successful.
LogicOperator, LogicOperatorType: Fixed choice of term-join operator from: AND, ANDNOT, OR, ORNOT, XOR, XNOR. *Expression evaluation is in the order NOT, AND, OR, XOR.
TemporalControlInformation, TemporalControlInformationType: Sub-level content model for setting a particular usage time or times for the program (presentation). Includes recurring day of week, recurring time of day, exact time span, and also relative position for inserts and substitutions.
RecurringDay: Use the program on a particular day of the week, e.g. any Friday.
RecurringTime: Start the program at a particular time of day, e.g. 1900 hours on any day.
DateTimeSpan: Exact start and end times and dates for use of the program.
InsertBeforeProgramStart: Insertion of this EPG's program or video material before the start of the program referred to here. (Presentation target only.)
InsertTimeFromProgramStart: Insertion of this EPG's program or video material at this time after the start of the program referred to here. (Presentation target only.)
InsertAfterProgramEnd: Insertion of this EPG's program or video material after the end of the program referred to here.
SubstituteTimeFromProgramStart: Substitution of this EPG's program or video material at this time after the start of the program referred to here.
ProgramLocation, ProgramLocationType: In the Presentation choice model, this allows selection of a particular program for presentation (e.g. insert, before or after), or a particular time, or a combination. References the TVA ProgramLocationType, including broadcast services and the Web, and the program content model definitions. Although defined for the program information entering the STB or PDR, this is assumed to be still applicable as targeting information (i.e. retained in the STB in a suitable form for this targeting).
GeneralInfo database columns: GeneralInfo database example items:
 Geo-Country: Location of the STB (country): USA, UK, etc.
 Geo-Time-Zone-Territory: Location of the STB (time-zone territory): Eastern, Central, Mountain, Pacific, SouthEastern, SouthCentral, SouthMountain, SouthPacific, NorthEastern, NorthCentral, NorthMountain, NorthPacific.
 Geo-ZIP-Code-(USA): US postal ZIP code integer for small geographic area location (integer).
 Box-Serial-No: End-user equipment (STB, PDR) serial number. Arithmetic manipulation enables targeting, for example, a percentage of the total population of STBs.
 Box-Random-Fixed-No: A fixed number, originally generated by a random technique. Arithmetic manipulation enables targeting, for example, a percentage of the total population of STBs.
 Tech-TV-Sets-No: Integer number of TV sets at the location.
 Tech-VCRs-No: Integer number of VCRs at the location.
 Tech-PCs-No: Integer number of PCs at the location.
 Tech-Services-In-Use: Services in use at the location: TVSatellite, TVCable, InternetDialUp, InternetBroadband, HomeNetwork1394, HomeNetworkEIA7751, HomeNetworkEthernet.
 Pets-No: Integer number of pets at the location.
 Confidence Level: Confidence level (percentage) for the row entry, particularly useful for marking inferred data entries, which have a lower number than manually entered information (which has the maximum number). Allows there to be a number of entries for this general profile, each with a different confidence level.
 Other
Demographic database columns: Demographic Info database example items:
 User Name: String for the user name.
 Age: Integer defining the user's age.
 Race: A selected few race categories (others should be added): White, Black, Indian Continent, Asian Pacific Islander, Hispanic.
 Income: Individual viewer income as salary, integer.
 Language: Selected language categories (others should be added): English, Mandarin, Cantonese, Vietnamese, Spanish, French.
 Education: Selected education categories including: None, Grade-school, High-school, College, Graduate, Postgraduate.
 Occupation: Selected occupation categories including: Not-working, Blue-collar and Professional-managerial.
 Occupation-NAICS: Integer NAICS code for the occupation. NAICS: North American Industry Classification System code number.
 TVHours-Ave-Per-Week: Integer computed from TV viewing history.
 Confidence Level: Confidence level (percentage) for the row entry, particularly useful for marking inferred data entries, which have a lower number than manually entered information (which has the maximum number). Allows there to be a number of user entries (for perhaps only one user), each with a different confidence level.
 Other
Preferences database columns: Preferences database example items:
 User Name: String for the user name.
 Preference-Rating-For-Row (integer): Integer (e.g. between 100 and 999) expressing a relative preference for the row item (e.g. program genre).
 Service: TV distribution service, e.g. CNN, BECAmerica.
 Channel-Distribution: DSS-202, DSS-264.
 View-Start-Time: 2100.
 View-End-Time: 2130.
 View-Day-Of-Week: Friday.
 Title: Independence Day.
 Keyword: Independence.
 Language: English.
 Genre-Main: Movie.
 Genre-Sub: Action.
 Review-Rating (integer): 900 (e.g. between 100 and 999).
 Subject-1: Fiction.
 Subject-2: Science Fiction Movie.
 MPAA-Rating: PG-13.
 Cast-1: Will Smith.
 Cast-2: Mary McDonnell.
 Cast-3: Jeff Goldblum.
 Other-Product-NAICS: Integer NAICS code for the row: North American Industry Classification System code number.
 Other-Product-UPC: Universal Product Code number.
 Confidence-Level: 50. Confidence-level percentage integer, especially useful for inferred entries. Example: 50% would indicate that the movie wasn't viewed fully or that the system was unsure of the user watching.
 Other
Transition database columns: Transition database example items:
 USER-NAME: String for the user name.
 Confidence Level: Confidence level (percentage) for the row entry, particularly useful for marking inferred data entries, which have a lower number than manually entered information (which has the maximum number). Allows there to be a number of entries for this general profile, each with a different confidence level.
Title-Current: Title before transition (title change).
Title-Next: Title after transition (title change).
Title-Preference-Rating: Computed preference rating for title transition.
Channel-Current: Channel before transition (channel change).
Channel-Next: Channel after transition (channel change).
Channel-Preference-Rating: Computed preference rating for channel transition.
Genre-Current: Genre before transition (genre change).
Genre-Next: Genre after transition (genre change).
Genre-Preference-Rating: Computed preference rating for genre transition.
Trans-Day-Of-Week: Transition day of the week (Sunday-Saturday).
Trans-Time-Of-Day: Transition time of day (24-hour clock).
Trans-Rel-Time-In-Session: Transition time relative to when the user started watching TV that period.
Trans-Rel-Time-In-Program: Transition time after the start of the program.

[0170] A Target Expression allows definition of an audience target. Terms, the number of terms, and logic operators are chosen to make the desired target narrow or wide, simple or complex. One or more Money attributes are optionally added to further assist the selection decision. The cost amount is either positive (e.g., for a movie) or negative (e.g., for advertising). Computational precedence: NOT, AND, OR.

[0171] Targeting and Program Information Examples

[0172] Example with targeting information for audience and presentation targeting.

[0173] The following targeting metadata example is attached (by ProgramLocation reference) to an advertising (Ad) video program and defines the intended audience and presentation. The Ad program information is not described.

[0174] The targeted audience is a weekday viewer, male, age over 30, income over 50,000, also qualified by kids in the household. For end-user systems where the audience criteria are satisfied, the presentation parameters are employed. For presentation this example targets:

[0175] Either weekdays, 6-8 PM, for an insertion into a program defined by Program-Location-Information, 5 minutes 30 seconds from the beginning; or, at other times, a situation-comedy main genre from the same video distribution service company as the Ad, i.e., TV company (TVCo-Mnop). The first target is preferred and comes with an impression credit amount of $0.005; the second, inferior presentation carries $0.0001.

<TargetingInformation>
  <OperatingPeriod Open="2001-01-01" Close="2001-02-14"/>
  <ProgramLocation>
    ...reference to Ad video program...
  </ProgramLocation>
  <BusinessIDs>
    <AgencyServiceID>id.teveadagency.com/id01234</AgencyServiceID>
    <TargetingServiceID>id.tvatargeting.com/id56789</TargetingServiceID>
  </BusinessIDs>
  <ProductionRights>
    <InsertionWithinSelf Right="Prohibited"/>
    <ToBeAnInsert Right="Unrestricted"/>
    <SubstitutionWithinSelf Right="Prohibited"/>
    <ToBeASubstitute Right="Unrestricted"/>
    <OneTimeUse Right="Unrestricted"/>
    <RepeatUse Right="Unrestricted"/>
  </ProductionRights>
  <RepeatControl>
    <NumberMaximum>3</NumberMaximum>
    <IntervalMinimum>PT2H30M</IntervalMinimum>
  </RepeatControl>
  <IFAudienceTargetTrue>
    <FirstTermIFStatement>
      <PreferencesItem SQLQueryPreferences=
        "SELECT view_day_of_week FROM preferences
         GROUP BY view_day_of_week
         HAVING MAX(COUNT(view_day_of_week));"/>
      <CompareOperator>NE</CompareOperator>
      <GivenItems>
        <String>"Saturday"</String>
        <LogicalOperator>OR</LogicalOperator>
        <String>"Sunday"</String>
      </GivenItems>
    </FirstTermIFStatement>
    <ExtraTermIFStatement>
      <LogicOperator>AND</LogicOperator>
      <DemographicItem SQLQueryDemographic=
        "SELECT income FROM demographic WHERE sex = 'male' AND age >= 30"/>
      <CompareOperator>GT</CompareOperator>
      <GivenItems>
        <Integer>50000</Integer>
      </GivenItems>
    </ExtraTermIFStatement>
    <ExtraTermIFStatement>
      <LogicOperator>AND</LogicOperator>
      <DemographicItem SQLQueryDemographic=
        "SELECT COUNT(name) FROM demographic GROUP BY name HAVING age < '21';"/>
      <CompareOperator>GT</CompareOperator>
      <GivenItems>
        <Integer>0</Integer>
      </GivenItems>
    </ExtraTermIFStatement>
  </IFAudienceTargetTrue>
  <THENSeekPresentationTarget>
    <FirstTerm STRENGTH="EXACTLY-DEFINED-BY-TARGET2" MoneyCostUSD="-5.0E-3">
      <TemporalControlInformation>
        <RecurringDay Day="WeekDays"/>
        <RecurringTime Begin="18:00:00" End="20:00:00"/>
        <InsertTimeFromProgramStart Time="PT5M30S"/>
      </TemporalControlInformation>
    </FirstTerm>
    <ExtraTerm STRENGTH="CONTINUE">
      <LogicOperator>AND</LogicOperator>
      <ProgramLocation>
        ...reference to target video program...
      </ProgramLocation>
    </ExtraTerm>
    <ExtraTerm STRENGTH="EXACTLY-DEFINED-BY-TARGET2" MoneyCostUSD="-1.0E-4">
      <LogicOperator>ORNOT</LogicOperator>
      <TemporalControlInformation>
        <RecurringDay Day="WeekDays"/>
        <RecurringTime Begin="18:00:00" End="20:00:00"/>
      </TemporalControlInformation>
    </ExtraTerm>
    <ExtraTerm STRENGTH="CONTINUE">
      <LogicOperator>AND</LogicOperator>
      <ProgramLocation>
        <ProgramInformation ProgramId="CRID">
          <Genre type="main">Situation comedy</Genre>
        </ProgramInformation>
      </ProgramLocation>
    </ExtraTerm>
  </THENSeekPresentationTarget>
  <ELSETargetingUnSuccessful ACTION="IGNORE-PROGRAM"/>
</TargetingInformation>

[0176] If the targets are not satisfied then this Ad program is ignored.

[0177] A second example delivers this advertisement to all viewers whose most popular genre of movie is 'action'; AND

[0178] for whom this genre is at least 90% more popular than the next most popular genre of movie; AND

[0179] for whom the most popular time for watching action movies is 9:00 PM on Friday nights.

<TargetingInformation>
  <OperatingPeriod Open="2000-11-25" Close="2000-12-25"/>
  <ProgramLocation>
    ...reference to Ad video program...
  </ProgramLocation>
  <ProductionRights>
    <InsertionWithinSelf Right="Prohibited"/>
    <ToBeAnInsert Right="Prohibited"/>
    <SubstitutionWithinSelf Right="Prohibited"/>
    <ToBeASubstitute Right="Prohibited"/>
    <OneTimeUse Right="Unrestricted"/>
    <RepeatUse Right="Unrestricted"/>
  </ProductionRights>
  <IFAudienceTargetTrue>
    <FirstTermIFStatement>
      <PreferencesItem SQLQueryPreferences=
        "SELECT genre_sub FROM preferences
         WHERE genre_main = 'movie'
         AND rating = (SELECT MAX(rating) FROM preferences
                       WHERE genre_main = 'movie');"/>
      <CompareOperator>EQ</CompareOperator>
      <GivenItems>
        <String>"action"</String>
      </GivenItems>
    </FirstTermIFStatement>
    <ExtraTermIFStatement>
      <LogicOperator>AND</LogicOperator>
      <PreferencesExpressionResultItem>
        <PreferencesItem1 SQLQueryPreferences=
          "SELECT MAX(rating) FROM preferences
           WHERE genre_main = 'movie' AND genre_sub = 'action';"/>
        <ExpressionOperator>DIVIDEDBY</ExpressionOperator>
        <PreferencesItem2 SQLQueryPreferences=
          "SELECT MAX(rating) FROM preferences
           WHERE genre_main = 'movie' AND genre_sub != 'action';"/>
      </PreferencesExpressionResultItem>
      <CompareOperator>GE</CompareOperator>
      <GivenItems>
        <Integer>1.9</Integer>
      </GivenItems>
    </ExtraTermIFStatement>
    <ExtraTermIFStatement>
      <LogicOperator>AND</LogicOperator>
      <PreferencesItem SQLQueryPreferences=
        "SELECT view_start_time FROM preferences
         WHERE genre_main = 'movie' AND genre_sub = 'action'
         AND view_day_of_week = (SELECT view_day_of_week FROM preferences
                                 WHERE genre_main = 'movie' AND genre_sub = 'action'
                                 GROUP BY view_day_of_week
                                 HAVING MAX(COUNT(view_day_of_week)))
         GROUP BY view_start_time
         HAVING MAX(COUNT(view_start_time));"/>
      <CompareOperator>EQ</CompareOperator>
      <GivenItems>
        <Integer>1900</Integer>
      </GivenItems>
    </ExtraTermIFStatement>
  </IFAudienceTargetTrue>
  <ELSETargetingUnSuccessful ACTION="IGNORE-PROGRAM"/>
</TargetingInformation>

[0180] Targeting with Fuzzy Terms

[0181] In the client, or STB, there is a profiling agent that continually builds a database of preferences and behaviors that profile IATV users in the household.

[0182] Preferences include affinities for any data field or entries in an electronic programming guide (EPG); examples are titles, genres, channels, and actors. In one instance of the present invention, the agent models patterns of IATV usage behaviors with a behavioral model similar to the clustering engine used at the TV head-end, and extracts key usage information from the behavioral model into a behavioral database. Each entry of the behavioral database has a confidence value generated by a multiplicity of novel techniques presented in detail herein. The database entry confidence registered by the profiling agent reflects an estimate of the structural and sampling quality of the data used to calculate the database entry.

[0183] The AD mixer receives AD targeting metadata with restricting query terms so that the associated AD is displayed only to selected users with database entries matching the query constraints. Each AD metadata query term has a minimum confidence threshold that specifies the lowest confidence level in satisfying the query term, or terms, acceptable to display the targeted AD. For example, an AD targeting constraint such as gender:Male(80%) AND age:25-35(50%) would have the effect of only showing the AD to users the targeting agent has at least 80% confidence in being male, and at least 50% confidence in being between 25 and 35 years of age. In another aspect of confidence-level specification, there is an expression-level confidence threshold, as follows: (gender:Male AND age:25-35)(80%). This targeting mode selects for AD display only users that the system has at least 80% confidence in being male and between 25 and 35 years of age. These methods provide flexibility by enabling Ads to specify the most important targeting selection terms, or to specify a range of people who are close enough to the desired targeting profile to show the AD to. The targeting agent only selects profiles from the database whose aggregate per-dimension confidence rating satisfies the query limits set by the AD targeting metadata. In yet another aspect of the confidence-thresholding system, the query selection filter is stated as a Fuzzy Logic, not Boolean, expression. The targeting query expression is similar to the probabilistic percentage confidence terms with two notable exceptions: fuzzy membership literals replace the percentage terms, and a fuzzy literal table synchronizes client and server. An exemplar of this query expression mode appears as follows: gender:Male(VERY_SURE) AND age:25-35(FAIRLY_SURE).
This query would select users whom the targeting agent was very sure are male, and fairly sure lie between 25 and 35 years of age. A fuzzy literal table (FLT) lists the allowable range of fuzzy memberships each AD category may exhibit. An example of an FLT is:

[0184] Male: UNSURE, FAIRLY_SURE, VERY_SURE

[0185] Age: UNSURE, FAIRLY_SURE, VERY_SURE, CERTAIN

[0186] The advantage of this method is that the novice AD agency specifies the degree of confidence required only in intuitive, non-mathematical terms, and leaves the exact range of confidence percentages up to the targeting agent to decide and continually optimize. Additionally, the fuzzy method handles the non-deterministic meaning of the percentage confidence terms in the database. The targeting agent learns the percentage confidence rating ranges historically associated with each fuzzy performance level.

[0187] Having now described the invention in accordance with the requirements of the patent statutes, those skilled in the art will understand how to make changes and modifications to the disclosed embodiments to meet their specific requirements or conditions. Such changes and modifications may be made without departing from the scope and spirit of the invention, as defined and limited solely by the following claims.
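The confidence-threshold mechanics of paragraphs [0183]-[0186] can be made concrete with a short sketch. The Python below is a minimal illustration, not the patent's implementation: the Estimate structure, the numeric values in FUZZY_RANGES, and the use of the weakest per-term confidence as the expression-level aggregate are all assumptions introduced here for clarity (the patent leaves the aggregate rule and the learned fuzzy ranges to the targeting agent).

# Minimal sketch (not the patent's implementation) of per-term and
# expression-level confidence thresholding for AD targeting.
# All names here (Estimate, FUZZY_RANGES, term_ok, ...) are hypothetical.

from dataclasses import dataclass

# Hypothetical fuzzy literal table (FLT): maps each literal to the minimum
# percentage confidence it currently stands for; per [0186] the targeting
# agent would learn and re-tune these ranges over time.
FUZZY_RANGES = {"UNSURE": 0.0, "FAIRLY_SURE": 50.0, "VERY_SURE": 80.0, "CERTAIN": 95.0}

@dataclass
class Estimate:
    value: str          # e.g. "male" or "25-35"
    confidence: float   # agent's percentage confidence in this entry

# One user's profile: dimension -> the agent's best estimate for it.
profile = {
    "gender": Estimate("male", 85.0),
    "age": Estimate("25-35", 60.0),
}

def term_ok(profile, dimension, wanted, min_conf):
    """One query term: the value must match and the entry's confidence
    must clear the term's minimum confidence threshold."""
    est = profile.get(dimension)
    return est is not None and est.value == wanted and est.confidence >= min_conf

def fuzzy_term_ok(profile, dimension, wanted, literal):
    """Fuzzy variant: the FLT translates the literal into a percentage."""
    return term_ok(profile, dimension, wanted, FUZZY_RANGES[literal])

# Per-term thresholds: gender:Male(80%) AND age:25-35(50%)
show_per_term = (term_ok(profile, "gender", "male", 80)
                 and term_ok(profile, "age", "25-35", 50))

# Expression-level threshold: (gender:Male AND age:25-35)(80%).
# The joint confidence is approximated here as the weakest term; this is
# an assumption, as the patent only requires an aggregate rating to clear 80%.
values_match = (profile["gender"].value == "male"
                and profile["age"].value == "25-35")
joint_conf = min(profile["gender"].confidence, profile["age"].confidence)
show_expression = values_match and joint_conf >= 80

# Fuzzy form: gender:Male(VERY_SURE) AND age:25-35(FAIRLY_SURE)
show_fuzzy = (fuzzy_term_ok(profile, "gender", "male", "VERY_SURE")
              and fuzzy_term_ok(profile, "age", "25-35", "FAIRLY_SURE"))

print(show_per_term, show_expression, show_fuzzy)  # True False True

Under this reading, the fuzzy form differs from the percentage form only in that the fuzzy literal table supplies, and can continually re-tune, the numeric thresholds on the client side.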
A View From the Bridge

Arthur Miller, one of America's most renowned playwrights, shaped mid-century American theater through his representation of everyday life. Untouched by the glitz and glamour of the Golden Age, he wrote about corruption and sorrow, and his work was steeped in social criticism. His influences, drawn from Greek tragedy, and his controversial themes enticed audiences who were unprepared for how far his plays would go. His most notable works include The Crucible, A View from the Bridge, All My Sons, and the Pulitzer Prize-winning Death of a Salesman.

Miller came from a Polish-Jewish immigrant family in the New York of the early 1900s. His childhood with parents who moved from Harlem to Brooklyn after the family's losses in the Depression influenced much of his work and how he portrayed life in the city. His writing is a true representation of his own life: searching for something more amid the standards of the society surrounding him and defining who he was, especially after his marriage to Marilyn Monroe, a renowned actress of her time who remains crucial to American culture today.

A View From the Bridge draws on his experience of Brooklyn, specifically the Red Hook piers, and on how immigrants, both fresh off the boat and long established, assimilated into American culture. Even though it is set in the 1950s, its central message about immigration and American life remains relevant in our society today. Its many themes of immigration, love, honor, law, and more affect every character differently, supplying each with a distinct motive that plays out within the scenes. There is also the Greek tragic-hero complex: Eddie Carbone is an example of how ambition and the preservation of honor lead to one's own destruction. Another borrowing from Greek theater is the chorus, here replaced by Alfieri, a lawyer who represents the justice aspect of the play. It is moral reasoning, if it can be put that way: the struggle Eddie has to accept his moral responsibility instead of falling into his own tragic death, caused by an ambition fueled by love and honor.

Before choosing scenes, the group decided to first choose characters we connected with and, from that view, the scenes that really embody those characters. Each of us decided on our characters, and I felt that scenes such as the start of Act 2 and the last part of Act 2 allowed each of us to be prominent. Miller's work allows us, as audience and performers, to join in at any scene and be caught up with what is occurring. That is why the start of Act 2 works: it presents tension among all the characters and displays how each comes into play with their internal conflicts and with the main external conflict. By presenting the rising conflict, the climax, and the ending within these scenes, it allows the audience to view how it all works out without struggling to understand any minor plot lines. Including the end of the play allows three themes of the entire play to come together, especially how each character views love, honor, and more differently after tragedy has struck.

Our set design will follow a pattern similar to the two images below, to allow us as the characters to stand out more and be present in the scenes. In my experience, multiple props onstage lead to a bit of disorganization, so props will be limited to moments when they are being used or become the center of attention.
The only exception in this play is the chair that is lifted up as a symbol of honor and dominance between Eddie and Marco. Other than that, we need only some chairs, and a rag of some sort for Beatrice. The costumes will represent the era of the play, the '50s: blue-collar attire for the men, and printed dresses with aprons for the women; simple choices that still allow each character to stand out in their own way.

Truly, this work embodies more than a series of twisted plots and deaths mixed with familial conflict, and more than forbidden love. Each person in the play struggles with who they are as immigrants in a country they hardly know. Similar to life, we struggle living in a world we are unsure of, and we have internal conflicts that, if let out, can lead to our own tragedy. Miller truly emphasizes this in A View from the Bridge, including the journey that each character comes to terms with, and how bigger concepts can involve the minor aspects of life.
https://commons.gc.cuny.edu/docs/a-view-from-the-bridge-josue-ramos-carpio/
Thursday, March 22, 2018 - Watch for Details! Friday, March 23, 2018 - Watch for Details!

National Parks Conservation Association is the leading voice for America's national parks. Join NPCA's regional and lobbying staff to discuss the current state of national park policy, from Capitol Hill to the parks in our own backyard.

Solar United Neighbors of Florida expands access to solar electricity by educating Floridians about the benefits of distributed solar energy, helping them organize group solar installations, and strengthening Florida's solar policies and its community of solar supporters.

The Global Sustainability and Earth Literacy Studies (GSELS) Learning Network provides inclusive educational opportunities for the Miami Dade College community to explore global citizenship, ecological sustainability, and civic engagement, through understanding planetary challenges and limits and by developing values, skills, and behaviors that promote prosperity and communities of well-being.

This workshop focuses on food production and sustainability. The global food web has become increasingly complicated with the industrialization and globalization of our world. Participants will build upon an exploration of their own relationship with food to delve deeper into the interconnections between food, politics, health, environment, ethics and justice.

This workshop focuses on food production and sustainability. The global food web has become increasingly complicated with the industrialization and globalization of our world. Participants will identify the impacts of our eating choices on farm animals and on social justice and human rights issues. How climate change affects food supply, and how food production impacts climate, will be explored. Participants must have completed Part 1 of this workshop.

This workshop explores the appropriate relationship of human beings to Earth. As participants engage in readings and discussion questions, they will share observations and reflections on our relationship to Earth, the threats of ecological imbalance, what we can do to create a paradigm shift and make lasting, impactful change, and what the role of higher education is in creating a future generation of Earth-literate citizens.

This workshop offers an overview of Earth Ethics Institute's Global Sustainability and Earth Literacy Studies (GSELS) Learning Network. Participants learn about the development and operations of GSELS. GSELS faculty learn how to broaden their GSELS-designated courses to include additional GSELS course criteria, as well as best practices of faculty learning communities.

Participants explore Sustainable Education, a concept created by Stephen Sterling and described as "a change of educational culture, one which develops and embodies the theory and practice of sustainability in a way which is critically aware." Mechanistic versus ecological worldviews will be contrasted, and a multidisciplinary, collaborative approach to education will be endorsed.

Oleta River State Park, the largest urban park in the Florida State Park System, is a 1,043-acre natural and recreational area surrounded largely by high-density residential and commercial development. Park administrators have developed alliances with a range of compatible user groups.

The need for sustainable design and resilient infrastructure is becoming more and more apparent in this era of increasing threats due to climate change. Miami is widely recognized as one of the areas of the United States most vulnerable to sea level rise.
The Patricia and Phillip Frost Science Museum was designed with this in mind, and serves as a model of practical applications of sustainable design.

Faculty and staff will retreat at Narrow Ridge Earth Literacy Center to reflect on Culture and Cosmology, Foundations of Resiliency, and the necessary Paradigm Shift that has been EEI's focus from its inception. Participants will be encouraged to nurture views that are compatible with the Earth's systems and to apply these views across the disciplines and into the curriculum at MDC. Faculty and staff will be selected from participating GSELS Faculty.

The Next Generation of Global Leaders are Stepping Forward - JOIN US!

Narrow Ridge Earth Literacy Field Experience - Apply In December!
http://earthethicsinstitute.org/PastProgramsSpring2018.asp
Last Day Prophecy: Sealed to the 'Time of the End'

The LORD has laid something on my heart to share with anyone who will hear. Right now we are seeing Biblical Prophecy unfolding before our very eyes — however, most Christians do not even see it! Most feel in their spirit that 'something' prophetic is going on, but they are not putting the pieces together correctly. Why? Because they do not study the Bible or seek the LORD on HOLY SPIRIT interpretations. What you've been taught isn't completely accurate — the reason being that the LORD is now starting to unravel the REAL interpretations as we draw close to their fulfillment. It even says in Daniel that the interpretation is 'CLOSED UP' until the actual time is upon the earth: "And he said, 'Go your way, Daniel, for the words are closed up and sealed till the time of the end,'" Daniel 12:9. So NOW is the 'time of the end' for HIM to reveal to HIS CHOSEN ONES the REAL interpretation — and not man's carnal, fleshly interpretation.

Just as it was with JESUS' FIRST coming, there were major misinterpretations of how HE would come. They thought HE was coming as KING to rule with HIS first coming. Instead, HE came as a SACRIFICIAL LAMB and SERVANT. It will be with HIS SECOND coming that HE will come and rule as KING. So it is today — there are many misinterpretations of how it will be before HIS SECOND COMING.

SO, how do we know the TRUTH? It is IMPERATIVE, as a CHILD OF THE LIVING GOD, that you seek the LORD on who the 'last day players' are in the Books of Revelation and Daniel. I really believe that if you earnestly desire to know the TRUTH, seeking it out through prayer and studying, you will be shown! Come to the LORD with a new sense of urgency for HIM to show the 'unsealing' of the prophetic WORD! I urge you to go back to Scripture and STUDY — because what you have been taught isn't completely correct! The LORD has told me several times: "It is not as you think…" — therefore you will be deceived in the upcoming days if you do not have the right understanding!

- Do you know who the BEAST is with his 7 heads and 10 crowns? (Revelation 13:1)
- Who is the harlot of Babylon who has slain the prophets and saints? (Revelation 18:24)
- Do you completely understand the MARK and the NAME of the beast… is it ONLY a chip?
- Are the saints still here for the revealing of the AC or the beginning of the tribulation?
- When will the tribulation occur?
- Who are HIS witnesses… who are the 144,000?

I will tell you this — the MARK is MUCH more than the chip, and Mystery Babylon is definitely a 'mystery' and not what you think. Also, the Beast has risen and is now spreading its evil tentacles to all nations — but again, it is NOT who you think! It will be shocking to many when they discover these truths through the power of the HOLY SPIRIT. Only HE will put the pieces CORRECTLY together for you! If the LORD JESUS has shown me through much time in prayer and study, then HE will also show you through the illumination of SCRIPTURE, if you seek HIM on it with ALL of your heart!

"Then you will know the truth, and the truth will set you free," John 8:32.

"Ask and it will be given to you; seek and you will find; knock and the door will be opened to you," Matthew 7:7.
https://www.hiskingdomprophecy.com/sealed-to-the-time-of-the-end/
Ever since the outbreak of Covid-19, the health and safety of our team members have been our primary concern. We are fortunate to be in a line of work that translates well to a remote setting and to have highly motivated teams capable of staying productive in a new environment. Our transition to remote work has been seamless, with a brief adjustment period quickly paving the way towards an established and effective workflow. And while our primary responsibilities have been handled with efficiency, one aspect of our work has been sorely missing — the office life dynamics and the chance to engage with our teammates on a basic human level. That is why we've made it a point to carve out opportunities for quality time with our team members. One such opportunity was set up this past Friday, with an international cast of HTEC colleagues getting together in the virtual space for a bit of good old-fashioned clue-hunting and crime-solving.

HTEC puzzle solvers

For our virtual team building, the participants were tasked with solving a delightful whodunit mystery. More than 80 of our team members, grouped into 19 teams, took part in a virtual murder investigation. Everyone got together to be presented with the set-up of the case and the course of the investigation, and then the teams went their separate ways into individual virtual investigation rooms, where they could work together on finding clues and comparing their conclusions. It was a hectic hour of investigating as teams rushed to gather and analyze all the information and tried to identify the culprit. The case turned out to be rather tricky, as only 8 teams were able to provide all the correct answers to the questions of who, when, where, how, and why. After an additional round of questioning, one team stood slightly above the others with a combination of speed and awareness — the formidable RoBINZon CLUEseau. Kudos to our top amateur detectives Marija, Strahinja, Stefan, and Milos for cracking the case and paying attention to the tiniest details. Judging by the lively and friendly discussion in the aftermath of the case, the other investigative teams didn't take the loss too hard, and the process itself was its own reward. After all, there will be plenty more chances to show their deductive skills, as there is no rest for the wicked.

Remote, yet connected

In the era of physical distancing, it is easy to lose the thread that turns groups of people into teams. It is important that we find ways to overcome the lack of shared physical space and nurture the building blocks of our team identity. At HTEC, we are intent on overcoming this obstacle and finding ways to continue to get to know our fellow professionals, to develop and deepen mutual relationships, to share our experiences and stick together through the hard times and the good, as teams do. Whether close or remote, we stay connected through our love for our work and our respect for our fellow professionals. And if we can have a bit of crime-solving fun in the process — all the better! For now, we are setting our detective caps and magnifying glasses aside until another mystery calls for our services. In the meantime, we are looking forward to new chances for fun times with our HTEC professional family. And if you'd like to join us on our professional journey, we can't wait to meet you!
https://htecgroup.com/insights/team-activities-in-the-remote-era/
Femtosecond laser pulses, by concentrating optical power into a short interval, combine exacting control with a minimum use of power. By implication, there is also a minimum of damage to surrounding tissue due to errant or otherwise prolonged irradiation. One difficulty with femtosecond lasers has been that an exotic system of free-space beam delivery optics is often called for. This is because the short pulses are significantly transformed by passage through standard fiber optics. As the authors now show, off-the-shelf instruments, like two-photon scanning or uncaging microscopes, can be readily modified to perform fast, automated laser persuasion of cell membranes to allow DNA to slip inside.

In order to deliver various molecular constructs to single cells, protocols including manual injection, modified patch-clamping, lipofection, and electroporation have been developed. Unfortunately, these methods do not scale well if you want to hotwire a bunch of cells in a short time. Transfecting neighboring cells with different reporters or channels, or alternatively the same cell sequentially with different elements, would be off the table with these methods. Trying to transfect neurons in the brain rather than large egg cells, and using naked DNA rather than vector-based DNA, or RNA, involves additional considerations.

Using their custom-developed touchscreen- and image-guided femtobeam, the researchers were able to target up to 100 cells per minute. At a maximum recommended beam power of 77 milliwatts, they could also target a 4x4 array of points (on a 4 µm grid) to deliver 12-200 femtosecond pulses over 60 ms metapulse intervals. Depending on the specifics of the protocol, transfection yields of 50-100 percent could be obtained. These numbers were for dividing cells, in which the nuclear membrane is transiently dispersed and therefore doesn't present an additional barrier to the DNA. For neurons, the researchers added a nuclear membrane-targeted peptide (Nupherin) that binds with the plasmid DNA and enhances transport. In further experiments with these neurons, they successfully activated the transfected channelrhodopsin protein using blue light, and recorded subsequently evoked spikes via patch clamp.

To really squeeze the technique into greater productivity, the researchers hope to implement spatial light modulators for precise and independent control of multiple beams. For an in vivo or behaving scenario, the researchers point to fairly recent work where fiber-based femtosecond transfection has been made to work in CHO-K1 cells at efficiencies of 74 percent. Using a compact, endoscope-like system with 6,000 individual cores, this "nanosurgical instrument" was also used for simultaneous microfluidic delivery of drugs to localized areas under direct imaging.

I asked lead author Maciej Antkowiak whether he thought there would be significant distortion in migrating to fiber-based delivery. He said that at 200 fs, pulse stretching is much less of a concern than for the shorter 12-20 fs pulses. He also mentioned that in the high-repetition regime (76 MHz), femtosecond transfection appears to involve cumulative biochemical changes in the cell membrane.

Astounding reports of so-called glowing memories have also been trickling in this week along with the larger wake from the recent Society for Neuroscience meeting. This kind of selective optical interrogation of complete circuits in the brain will take mere connectomics into full-blown activity maps, and then, to control.
As has become apparent through omni-labelling techniques like Brainbow I and II, total labelling of the synaptic jungle is hardly better than no label. The ability to pick and choose multiple combinatorial activators or other modifiers, by finger or algorithm, as a prelude to thought itself, will be the quickest path to workable BCIs and our subsequent understanding of the brain.

Abstract: A prevailing problem in neuroscience is the fast and targeted delivery of DNA into selected neurons. The development of an appropriate methodology would enable the transfection of multiple genes into the same cell or different genes into different neighboring cells, as well as rapid cell-selective functionalization of neurons. Here, we show that optimized femtosecond optical transfection fulfills these requirements. We also demonstrate successful optical transfection of channelrhodopsin-2 in single selected neurons. We extend the functionality of this technique for wider uptake by neuroscientists by using fast three-dimensional laser beam steering enabling an image-guided "point-and-transfect" user-friendly transfection of selected cells. A sub-second transfection timescale per cell makes this method more rapid by at least two orders of magnitude when compared to alternative single-cell transfection techniques. This novel technology provides the ability to carry out large-scale cell-selective genetic studies on neuronal ensembles and perform rapid genetic programming of neural circuits.
https://medicalxpress.com/news/2013-11-multibeam-femtosecond-optical-transfection-ultimate.html
The Senior IT Service Support Specialist role is responsible for providing technical assistance in answering questions and resolving computer hardware/software problems in person, via telephone, or using a remote support tool. This includes receiving, prioritizing, documenting, and actively resolving end-user requests. The Senior Support Specialist will be an escalation contact for the Tier I support team. This role will also work closely with internal and external support teams to quickly identify and bring resolution to critical business incidents. The IT Service Desk team troubleshoots desktop PCs, laptops, scanners, printers, phones and a variety of commercial and proprietary software in Windows, Linux, and Mac OS environments; duties include, but are not limited to, creating, tracking, and closing trouble tickets using the service desk ticketing system. The selected candidate must ensure that all support calls, helpdesk tickets, and related procedures adhere to organizational values and guidelines.

Primary Responsibilities:
- Respond to service requests, incidents and reported issues within the set SLAs
- Act as the escalation point for all advanced IT support related issues and the ultimate owner of issues escalated outside of the support team
- Provide advanced support and ownership of corporate telephony systems
- Participate in and/or manage small to medium scale team projects as assigned
- Liaise with senior leadership and executive team members, providing general IT system troubleshooting and support
- Quickly and accurately determine event scope and impact upon notice
- Field incoming requests made to the service desk via phone queue, e-mail, and ticketing system to ensure courteous, timely, and efficient resolution for the customer
- Support technology and audio-visual services at management and executive events at off-site locations, such as the Quarterly Management Meetings, Board Meetings, and Trustees Meetings
- Identify and learn appropriate software and hardware used by the organization
- Perform post-resolution follow-ups on help requests
- Create, change, and delete user accounts per request
- Dedicate time to project work and improvement initiatives as allotted
- Treat requests courteously and professionally, in line with business standards
- Work independently on resolving incidents and escalate complaints to upper management
- Collaborate with team members to resolve issues where appropriate and contribute to a friendly, helpful environment

You'll be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as provide development for other roles you may be interested in.
Required Qualifications:
- Associate's degree or 3+ years of equivalent IT experience
- 3+ years of experience working in a service desk or deskside support environment
- Experience working in an Active Directory environment, or equivalent training
- Demonstrated proficiency with software and systems
- Superior organizational, analytical, and problem-solving skills
- Ability to communicate with senior leadership and executive team members
- Problem-solving ability is critical
- Proficient written and verbal skills
- Ability to manage multiple projects and tasks
- Ability to read, analyze and interpret state and federal laws, rules and regulations
- Ability to work well both independently and with others
- Strong interpersonal skills
- Excellent written and oral communication
- Ability to maintain confidentiality
- Ability to operate standard office equipment including, but not limited to, computer, fax machine, and copier
- Full COVID-19 vaccination is an essential requirement of this role. Candidates located in states that mandate COVID-19 booster doses must also comply with those state requirements. UnitedHealth Group will adhere to all federal, state and local regulations as well as all client requirements and will obtain necessary proof of vaccination, and boosters when applicable, prior to employment to ensure compliance.

Preferred Qualifications:
- A+, Network+, or CompTIA Linux+ certifications
- Understanding of Linux operating systems

To protect the health and safety of our workforce, patients and communities we serve, UnitedHealth Group and its affiliate companies require all employees to disclose COVID-19 vaccination status prior to beginning employment. In addition, some roles and locations require full COVID-19 vaccination, including boosters, as an essential job function. UnitedHealth Group adheres to all federal, state and local COVID-19 vaccination regulations as well as all client COVID-19 vaccination requirements and will obtain the necessary information from candidates prior to employment to ensure compliance. Candidates must be able to perform all essential job functions with or without reasonable accommodation. Failure to meet the vaccination requirement may result in rescission of an employment offer or termination of employment.

Careers at UnitedHealthcare Employer & Individual. We all want to make a difference with the work we do. Sometimes we're presented with an opportunity to make a difference on a scale we couldn't imagine. Here, you get that opportunity every day. As a member of one of our elite teams, you'll provide the ideas and solutions that help nearly 25 million customers live healthier lives. You'll help write the next chapter in the history of health care. And you'll find a wealth of open doors and career paths that will take you as far as you want to go. Go further. This is your life's best work.SM

Diversity creates a healthier atmosphere: UnitedHealth Group is an Equal Employment Opportunity/Affirmative Action employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin, protected veteran status, disability status, sexual orientation, gender identity or expression, marital status, genetic information, or any other characteristic protected by law. UnitedHealth Group is a drug-free workplace. Candidates are required to pass a drug test before beginning employment.
https://careers.gijobs.com/job/equipment-technology-specialist/equipment-and-facilities-specialist/13621432/sr-technical-support-specialist
Churches are one of the underrated travel delights that many tourists frequently miss out on. While there are some well-known cathedrals that get a steady stream of people in and out of their doors, many others are neglected despite their beauty. A church doesn't need to have a famous name or be in a prime location in order to be breathtaking, and there are many that go unnoticed... until now. Many of these churches are lesser known than their neighbors but equally stunning. For all those who love history (and stained glass windows), whether religion is part of daily life or not, there are some that we would definitely consider adding to our tour list.

10 San Luigi dei Francesi
The main reason to visit this church is its awe-inspiring artwork: the interior is lined with paintings, including celebrated canvases by Caravaggio. These enormous yet flawless paintings can be seen in the Contarelli Chapel and are titled The Calling of Saint Matthew, The Inspiration of Saint Matthew, and The Martyrdom of Saint Matthew. The story behind this artwork is the battle between good and evil, something that's depicted in a unique yet striking way. In addition, visitors will be floored by the architecture that this church boasts, from marble columns to grand facades.

9 Santa Maria della Pace
The exterior of this church simply doesn't do it justice. The Roman-style columns in the front give way to a humble entrance, which is not what one would expect upon walking into this chapel. The interior is bright, welcoming, and open, with a high ceiling and various artistic depictions lining the walls. The altar sits front and center, set apart from the rest of the room by its bold and vibrant colors, drawing the eye forward. On either side are the pews, works of art themselves, each carefully carved and placed at an angle facing the front of the church. Entering this sanctuary is more of an experience than a tourist attraction.

8 Salzburg Cathedral
Italy isn't the only country which holds intrigue and wonder when it comes to some of the world's most beautiful churches. Salzburg Cathedral in Austria is a must-visit. Its Baroque architecture is the first thing that many notice about this cathedral, along with the fact that it rises high above the old city. The architect responsible for the design is Santino Solari, and over time the integrity of the structure has been kept up so that many may continue to visit and pay their respects. When we speak of historic churches, though lesser known, this one's origins date back to the year 774.

7 Bedkhem Church
Iran is also home to a church that's worth a visit for those so inclined, and that would be none other than the Bedkhem Church. While this church might not look any different from any other on the outside, the inside holds quite the surprise. In the form of 72 paintings, visitors will be able to walk through a visual account of the life of Christ. These paintings were created by various Armenian artists, each just as beautiful as the next. Along with its artwork, the church is notable for its architectural details, including various domes and archways that give it such a unique appearance.

6 Santa Maria della Vittoria
When walking into this church, it's impossible to avoid the instinct to look up.
There's no end to the artistic and architecturally stunning surprises that await visitors around every corner in this church. The church's name translates to 'Our Lady of Victory', and it is a basilica built in the 17th century. Bernini, the sculptor known for creating the Fontana dei Quattro Fiumi, is also known for his work in this basilica. The Ecstasy of Saint Teresa is the sculpture that many come to see, and it's surrounded by solid marble columns and additional artwork found throughout the basilica, including on the ceiling.

5 Kizhi Pogost
Kizhi Pogost is an interesting church that has quite the history and story behind it. For starters, its location is quite unique: it can be found on a strip of island on Lake Onega in Russia and is called the 'Church of the Transfiguration'. Upon further inspection, visitors will notice that the entire church has been built without the use of any nails whatsoever. It relies on interlocking wood pieces, making it unique in structure as well as design. The purpose behind this is to prevent lightning strikes, something the previous structure had an issue with on the lake. It's also rumored that the entire church was built using only one ax, adding to its intrigue.

4 St. Stephen's Basilica
Although this church is the largest in Budapest, we added it to this list simply because it doesn't get the attention that it should. From the inside of this basilica, visitors are able to look out over the entire city, making it a great stop for those who are visiting. However, the view is not the only thing worth visiting this basilica for. The church has a long history and took nearly 50 years to complete, due to various issues and turmoil within the city. The fact that it's standing is amazing in itself, and it's also home to one of Hungary's most treasured possessions: the mummified right hand of its namesake, King Stephen.

3 Chiesa di Sant'Ignazio
What makes this church so intriguing (besides the fact that it's only a short walk from the Pantheon) is its artwork. Created by the artist Andrea Pozzo, the work is known for its ability to make onlookers feel as though its depth is true to life. You can imagine how intricate and lifelike his paintings would be, especially those featured high above on the ceiling. The illusion lies in the belief that those walking into the church are entering a structure without a ceiling, and are instead staring straight up at a scene depicted straight from the Bible. The realism involved in such artwork is what makes this church such a beautiful experience.

2 Sagrada Familia
While this church, more like a grand cathedral, is hard to miss, it's one that visitors must add to their list. While not as popular as some other sights, the Sagrada Familia has much to offer in the way of both worship and design. This church was designed in a Neo-gothic style, lending itself to steep towers and grandiose heights. The regality of its design is out of respect for Jesus, the Gospels, the Virgin Mary, and his 12 Apostles; there are 18 towers in all, one standing for each. The overall height of the church is intended to represent closeness to God, equating elevation with faith.

1 Las Lajas Sanctuary
The reason Las Lajas is on our list is that many people don't think to visit a basilica when they're visiting Colombia.
This sanctuary is built in quite a striking way: it sits just off a steep cliffside, making it appear to be supported by virtually nothing. The basilica towers high over the Guaitara River Canyon, and visitors will immediately note its Gothic-style towers, which only add to the grand appearance of this lesser-known church. The history of this basilica includes the story of the deaf-mute daughter of Maria Muences, who claimed to have witnessed a vision of the Virgin Mary over the ravine; the basilica was built on that same spot.
https://www.thetravel.com/world-least-visited-churches/
Description: This task involves all of the administration and management activities required to ensure that the work programme as organised in Tasks 2 – 12 runs in a timely fashion and that the reporting requirements are fulfilled within the specified time. The relationship between tasks is depicted in Figure 4. The research tasks, 2 – 11, are organised into two high-level goals for the project: Understanding Farmers (Tasks 2 – 6) and Initiating Change (Tasks 7 – 11). The blue lines indicate task integration, i.e. where the outputs from a task or set of tasks inform another.

2. Risk perceptions and behavioural intention for health and safety among farmers
Lead researchers: Dr David Meredith, Dr Mohammad Mohammadrezaei
Collaborators: Dr Denis O'Hora, Dr John McNamara
Lead institution: Teagasc
Other institutions involved: NUI Galway; Psychological Sciences Research Institute, UCLouvain, Belgium
Description: Farms are complex social–ecological systems and differ from most workplaces or work environments. Farming, as an occupation, is fundamentally different from most occupations by virtue of the range of tasks that farmers may have to undertake on any given day. These tasks are likely to vary in intensity, duration, location, physicality or use of machinery. This context presents substantial challenges to understanding and influencing farmer behaviour. Feola and Binder (2010, p. 2323) concluded that an effective approach to research on farmers' behaviour is based on an explicit and well-motivated behavioural theory, an integrative approach and an understanding of the feedback processes and dynamics which shape behaviours and outcomes. Whilst this perspective underpins the entirety of the BeSafe project, this task aims to develop an overarching conceptual framework to describe farmer OHS behaviours. In addition, the framework will guide the research undertaken in subsequent tasks, particularly Tasks 3 – 10. The framework will be applied to the development of a survey which seeks to improve our understanding of farmers' OHS behaviours by describing the knowledge, attitudes, risk perceptions and behaviours of a nationally representative sample of farmers and evaluating the personal, environmental and social factors influencing these behaviours. The data will be analysed with the objective of identifying groups of farmers that may be particularly exposed to accidents. These findings will be of direct relevance to Tasks 8 – 10 and will be useful for a wide range of stakeholders.

3. Risk perceptions and behavioural intention for health and safety among agriculture students in the development of professional knowledge
Lead researcher: Dr Aoife Osborne, FBD Lecturer in Farm Health and Safety, UCD
Collaborators: Post-Doctoral Researcher (2), Dr Denis O'Hora, Dr John McNamara
Lead institution: UCD School of Agriculture and Food Science
Other institutions involved: Teagasc, NUIG School of Psychology, FBD Insurance
Description: Health and safety is a major issue and a significant challenge in agriculture. It is important to consider all at-risk populations when undertaking research in this area. Recent research undertaken by Watson et al. (2017) indicated that younger farmers were more likely to take risks. The objective of this task is to monitor health and safety risk perceptions and behavioural intention among young farmers, i.e. agriculture students undergoing third-level training in the development of professional knowledge.
Changing the perception of safety risks represents the predominant challenge to improving the farm safety and health record of the agriculture sector (Hale & Glendon, 1987). Risk perception plays a key role in behaving safely at work. To date there has been no measurement of such perceptions amongst farmers in Ireland, though Task 2 will undertake this assessment. While changing risk perceptions applies throughout the farming population, achieving it among new-entrant farmers and future workers in the agriculture sector is particularly important if we are to achieve generational change, i.e. improvements over time.

4. Comparative assessment of knowledge transfer (farm extension services) and regulatory systems governing OHS practices among selected European countries
Lead researcher: Dr John McNamara
Collaborators: Dr Mohammad Mohammadrezaei
Lead institution: Teagasc
Other institutions involved: LUKE – Natural Resources Institute Finland; Leibniz Institute for Agricultural Engineering and Bioeconomy (ATB), Germany
Description: Though the EU Framework Directive on OHS (89/391/EEC) applies to employed workers, it does not include self-employed persons, e.g. farmers. As a consequence there is considerable variation in regulatory systems pertaining to farm OHS throughout the EU. In part this reflects different approaches to governance and also the extent to which farm OSH is viewed as a societal issue. This task seeks to identify best practice with respect to the support and implementation of behaviour change initiatives through responsible agencies, with a view to informing the approaches developed in Tasks 8 – 11.

5. Quantifying the exposure to risk: an evaluation of the relationship between time spent working, farm accidents and fatalities
Lead researcher: Dr David Meredith
Collaborators: Dr Mohammad Mohammadrezaei, Mr Pat Griffin, Dr John McNamara
Lead institution: Teagasc
Other institutions involved: HSA
Description: Understanding accidents and fatalities. Analysis of Irish farm fatality data for the period 2008 – 2013 found that time of day and day of the week were significant in understanding when fatalities were likely to occur (Clinton, 2014). This research was limited by the lack of accurate information regarding the amount of time spent working, i.e. it was not possible to accurately assess the exposure to risk. This task seeks, firstly, to update and extend the analysis of farm accidents and fatalities for the period 1993 – 2021, as this provides a window into the key risks faced by farmers and gives clear pointers to areas of concern. This work will inform subsequent tasks, particularly Tasks 8 and 10. Secondly, the research will undertake an analysis of National Farm Survey data to estimate the exposure to selected risks, e.g. trips and falls, or machinery- or animal-related incidents.

6. Learning from the DAFM Knowledge Transfer OHS initiative: an assessment of facilitators' and farmers' experiences and outcomes
Lead researcher: Dr John McNamara
Collaborators: Dr Mohammad Mohammadrezaei
Lead institution: Teagasc
Description: Though there are limited studies published internationally concerning farmer OHS adoption, the extant research indicates that providing an incentive to support OHS adoption has achieved some success (e.g. Hallman, 2005). The proposed research seeks to learn from the DAFM incentive-based KT programme in raising awareness and, more specifically, motivating farmer adoption of OHS initiatives.
The DAFM KT OHS initiative is of considerable importance given that, internationally, it is one of the few large-scale measures targeting farmer OSH that have been applied using KT approaches. In light of the need for the project to develop and pilot behaviour change initiatives that can be delivered using KT methods, it is vitally important that we understand both facilitators' and farmers' experiences of, and responses to, participation in the DAFM Knowledge Transfer programme related to farm health and safety. Roughly 20,000 farmers in 1,200 groups participated in this scheme. Each farmer engaged in a Knowledge Transfer group meeting focused on farm OHS. Subsequently they completed a farm health and safety/work organisation template with their KT facilitator as part of a Farm Plan, which is required to be reviewed annually for a 3-year period. This task is divided between evaluating farmers' adoption or non-adoption of OHS measures following participation in the DAFM Knowledge Transfer programme and assessing Knowledge Transfer programme facilitators' perspectives on improving farm OHS and engendering behavioural change.

7A. Identification and Analysis of Behavioural Machinery-related Farm Safety Interventions
Lead researcher: Dr Denis O'Hora (NUIG)
Collaborators: PhD Student (NUIG), Dr Jennifer McSharry (NUIG)
Lead institution: NUI Galway
Description: The first step in developing two novel behavioural machinery-related (BMach) interventions will be to identify and assess evidence for BMach interventions available nationally and internationally. BMachs will be sourced through the scientific literature, national networks, and international experts. Gathering data from national networks and international experts is crucial to identifying 'grey literature' that may include useful interventions (Hopewell, McDonald, Clarke, & Egger, 2007). A systematic review and meta-analysis will evaluate the evidence of behaviour or intention change due to these interventions, where such evidence exists. The second step of the process will be to analyse the components of the BMachs that are required for successful implementation. Behavioural interventions can be developed based on a variety of theories, or sometimes in the absence of recognised theory. Consequently, similar activities can be described differently and different activities labelled as the same across interventions. All too often, interventions are considered 'black boxes' that simply output a change in behaviour. Such an approach limits what can be learned from successful interventions and may occlude negative effects of intervention components within successful packages.

7B. Development and Refinement of Two Behavioural Farm Machinery Interventions
Lead researcher: Dr Denis O'Hora (NUIG)
Collaborators: PhD Student (NUIG), Dr Jennifer McSharry (NUIG)
Lead institution: NUI Galway
Description: Once candidate interventions have been identified in Task 7A, input from expert panels of farmers, other stakeholders and international experts in farm safety will be collated to select targets (e.g., behaviours and demographic groups of farmers) for intervention and recommended strategies. This process will be heavily informed by the research being undertaken in Task 2. These strategies will then be modified further with focus groups of farmers to maximise potential adoption and behaviour change amongst Irish farmers.
Once the range of candidate interventions has been documented, a panel of professionals will be established to participate in a nominal group or expert group technique process. Since BMachs constitute a wide range of physical behaviours, the foremost goal of this group will be to prioritise behaviours for intervention. Also, as some demographic groups of farmers, e.g. younger and older farmers, are at increased risk of injury and fatality, these panels, informed by the results from Task 2 and Task 5, will provide input into the decision defining the target population.

8. Adoption of safer work systems for handling livestock and managing facilities on farms, and effecting change in farmer behaviour
Lead researcher: Dr Bernadette Earley
Collaborators: PhD Candidate; Dr Mohammad Mohammadrezaei; Dr David Meredith; Dr John G. McNamara; Dr Noirin McHugh; Dr Mark McGee; Mr JJ Lenehan
Lead institution: Teagasc
Other institutions involved: UCD. Collaborators: Dr Marijke Beltman, Dr Aoife Osborne
Description: A recent study by Berney et al. (2017) identified four significant risk factors on Irish farms, of which handling livestock accounted for the largest number of incidents involving spinal fracture, with or without spinal cord injury. Teagasc research indicates that livestock are involved in 65% of all injuries on farms (McNamara et al., 2007). These results correspond with an analysis of farm fatalities undertaken by Meredith (2015), who found that animal-related incidents accounted for most fatal accidents amongst older farmers. In general, livestock-related accidents and deaths are attributable to inadequate handling facilities on farms, poor set-up or risk-taking when dealing with livestock, less contact between farmer and livestock, and inadequate attention to the temperament of animals (e.g. through breeding animals for docility) (Pat Griffin, pers. comm.). There is a behavioural aspect to all of these factors, whether it is the decision to invest, or not, in adequate handling facilities or to approach an animal in a way that exposes the farmer to increased risk. This task seeks to improve understanding of the human–animal–work environment interaction with the objective of identifying risky behaviours, developing materials and resources that challenge perceptions of animal safety risks, and disseminating these resources via knowledge transfer approaches.

9. Participatory co-design of interventions to enhance farm safety
Lead researcher: Dr Aine Macken-Walsh
Lead institution: Teagasc
Other institutions involved: NUI, Galway
Description: The most effective policy interventions successfully relate to, engage and influence the habitual behaviour of target communities (Thorogood, 2002). The public health literature identifies the first step in designing interventions as a thorough understanding of the 'culture' in which behavioural change is sought, where culture is described simply as "the way we do things around here" (Deal and Kennedy, 1982, cited in Griffith et al., 2010, 427). The challenge of successful interventions is not only to encourage the adoption of new knowledge, practices and routines but to alter existing practices. Therefore, research conducted in the preceding Tasks 2 – 10 is of critical importance in providing evidence-based data on farmers' knowledges, attitudes, behaviours and priorities regarding safety and risk management.
In its first stage, this task translates evidence generated by preceding tasks into key messages that are accessible to diverse actors in the agriculture sector. In its second stage, a multi-actor process involving farmers, policy-makers, safety inspectors, knowledge transfer specialists and others is used to co-design practice-ready interventions. The co-design process will involve a series of workshops that generate co-designed interventions responding to the key challenges identified by preceding tasks. The input of end-users in co-designing the interventions is an effort to maximise the interventions' acceptability and effectiveness, following the hypothesis that co-design produces "more suitable and diverse innovations that are more appropriate, easier to adopt, and developed more rapidly than innovations generated through conventional approaches" (Triomphe, 2012, p. 314). The methodology used for the co-design workshops is Participatory Learning and Action (PLA), and a research-controlled environment for the co-design process will be facilitated by a trained sociologist.
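Task 5's notion of "exposure to risk" can be made concrete as an incident rate normalised by time spent working, rather than a raw incident count. The sketch below illustrates the idea only; the record structure and figures are invented, not drawn from National Farm Survey or HSA data.

```python
# Illustrative sketch: exposure-adjusted incident rate (Task 5 idea).
# All data and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class FarmRecord:
    annual_work_hours: float   # estimated time spent working on the farm
    incidents: int             # recorded incidents of a given type, e.g. falls

def incident_rate_per_100k_hours(records: list[FarmRecord]) -> float:
    """Incidents per 100,000 hours worked across all farms in the sample."""
    total_hours = sum(r.annual_work_hours for r in records)
    total_incidents = sum(r.incidents for r in records)
    if total_hours == 0:
        raise ValueError("no exposure time recorded")
    return 100_000 * total_incidents / total_hours

# Example: three hypothetical farms.
sample = [
    FarmRecord(annual_work_hours=2200, incidents=1),
    FarmRecord(annual_work_hours=1800, incidents=0),
    FarmRecord(annual_work_hours=2600, incidents=2),
]
print(f"{incident_rate_per_100k_hours(sample):.1f} incidents per 100,000 hours")
```

Normalising by hours worked matters because, as the task description notes, two farm systems with similar accident counts can imply very different risks if one involves far more working time.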
https://www.teagasc.ie/rural-economy/rural-economy/besafe-project/project-tasks/
Network Rail has fallen behind other industries such as water, aviation, energy and roads in the way it uses its people, a new study has found. The study, published on July 13, was carried out by the specialist infrastructure consultancy Nichols, and focused on how the maintenance of infrastructure and key assets is undertaken and how this compares to other industries in the UK and across Europe. Sectors such as water, aviation, energy and roads are ahead of Network Rail in the way they use team members, with the report highlighting improvements Network Rail could make to unlock efficiency, such as: - Introducing individual rosters to use staff more efficiently - Upskilling specialists and creating cross-functional teams with broader knowledge, enabling first responders to fix most breakdowns and get trains moving faster - Increasing and accelerating the use of technology to keep employees safe. Andrew Haines, Chief Executive of Network Rail, said: "Britain deserves a modern, 21st century rail maintenance regime. It is in no one's interest to impede vital changes that make the railway and its workers safer and improve the reliability of the services we provide. With common sense and compromise, our proposals can generate millions of pounds in savings which we can then translate into a better pay offer for all of our employees. It's a win-win." The study suggested that productivity and efficiency gains can be achieved by ensuring that maintenance is performed at the right time, by the right number of employees with the right skills - and that would mean individual rostering. Individual rostering: Current contract terms mean team leaders must agree rosters up to 52 weeks in advance and roster teams together as fixed units. As the workload is variable and unpredictable in the railway environment, it can be difficult to adapt the rosters, especially if more than one crew is required for a job. Network Rail's current rostering practice has proven less flexible and more restrictive than that of other comparable organizations, which generally roster staff on an individual basis with shorter rostering cycles that are centrally managed, and which have no problem deploying staff as needed. Network Rail wants to create more flexibility to assign individual staff independently, with a focus on the size, nature, location and timing of work. These changes could be accelerated by a centralized resourcing function responsible for overseeing overall business needs. Network Rail is confident that the necessary changes can be implemented without compulsory redundancies. Around 1,800 jobs will have to be cut, but with hundreds of employees having expressed interest in voluntary severance, alongside natural wastage, redeployment and retraining, Network Rail believes there will be a job for all who want one. Introducing versatile and multi-functional teams: Responsibility for Network Rail's maintenance is currently split between Network Rail's 14 routes and again into maintenance delivery units. The units are organized into three distinct disciplinary teams: Track; Signaling and Telecommunications; and Electrification and Plant (E&P). A standard team consists of three to four people, including a team leader, technician(s) and operator(s), who are trained in skills that are only needed in that specific discipline.
When the teams receive a job, the whole team will go to the site regardless of the size of the task. When a job requires more than one discipline, such as signaling and track, more than one crew will be present but they will generally work sequentially. These practices result in a great deal of wasted time, with team members waiting for work to be completed by other disciplines before they can begin their own. A more efficient and productive way of working would be to create joint multidisciplinary teams instead of single-discipline ones, which would reduce the number of employees needed to maintain the network and the associated costs. The introduction of such teams would also allow work to be carried out across geographical boundaries. Current working practices within the rail industry mean that crews on one route will not assist another in a neighboring area, even if they have the capacity to do so. Increasing technology adoption: The railway in Britain is the safest major railway in Europe, and huge efforts have been made to improve safety over the past 20 years. The safety measures implemented have often come up against the initial reluctance of the trade unions which, even today, tend to thwart efforts to adopt technology on the railways. The study acknowledges that although Network Rail has made substantial progress in the use of technology, further improvements could be made, as the roll-out of this technology has been slow. The information in the table below reveals how a dozen key technological improvements have been blocked by the RMT for over two years. Analysis carried out by Network Rail reveals that current scheduled maintenance tasks could be reduced by around 50% through the use of technology and data, reducing the number of manual inspections carried out by maintenance teams and improving safety. A recent McKinsey report focused on rolling stock maintenance suggested that remote condition monitoring could reduce manual inspections by at least 60%, cutting costs by more than 10%. The change is comparable to replacing a quarterly manual meter reading with a smart meter. To get the most out of the deployment of technology, Network Rail needs flexible and responsive working practices, as other sectors have.
https://tregouet.org/network-rail-guilty-of-restrictive-and-inflexible-working-practices-says-new-report/
More Working Americans Struggling to Afford Housing
With growth in incomes lagging growth in housing and utility costs, the share of Americans spending large sums of their income on housing has climbed nearly uninterrupted for decades. But the Great Recession has taken an especially heavy toll, as millions of families have slipped down the income scale due to job loss or curtailment of hours. Indeed, while households with incomes under $15,000 made up only 12 percent of all households in 2001, they made up 40 percent of the net growth in the number of households over the past ten years. Faced with reduced incomes, some of these households have moved so that they can save on housing costs, but many others are instead stretching to make their rent or mortgage payments. As shown in Figure 1, even households with incomes above $15,000 (slightly above the equivalent of full-time work at minimum wage) are finding it harder to keep up with housing costs. Fully 64 percent of all households with incomes in the $15,000-$30,000 range are housing cost burdened, spending more than 30 percent of their income on housing and utilities. Among those with incomes of $30,000-$45,000, a smaller but still substantial 42 percent are cost burdened, while more than a quarter of those with incomes in the $45,000-$60,000 range are cost burdened. These shares are each up over seven percentage points across all three of these income bands in just the past ten years. Renters and owners are both experiencing rising housing cost burdens. On the rental side, the share of renters with cost burdens has doubled, from a quarter in 1960 to a half in 2011, while the share with severe cost burdens (spending more than half their income on housing and utilities) shot up from 11 percent to 28 percent over that period, spiking in the last decade. The number of renters with incomes of $15,000-$30,000 who have severe cost burdens climbed from 2.0 million in 2001 to 3.2 million in 2011, and the number with incomes of $30,000-$45,000 doubled from 300,000 to 600,000. Cost burdens have also reached record highs for homeowners. Among homeowners under age 65, 39 percent of those earning one to two times the minimum wage and 18 percent of those earning two to three times the minimum wage are now severely housing cost burdened. There is an irony to the situation of homeowners: millions of them can't take advantage of today's low rates to lower their housing costs because their homes are worth less than they owe on their mortgages. Despite many federal efforts to ease the path to refinancing for such owners, it remains blocked for large shares of them. Those whose loans are not backed by FHA, Fannie Mae, or Freddie Mac are out of luck. And even those with loans backed by these agencies may not meet credit score, debt-to-income ratio, and documentation requirements for refinancing. Even if existing owners can refinance, loss of an earner or curtailment in hours may result in payments that still stretch them thin. These affordability problems are not likely to abate any time soon. Rents are back on the rise, and in many areas sharply. Incomes remain under pressure from high unemployment rates and an ongoing shift in the composition of jobs to lower-paying work, leaving entry-level workers in many key occupations priced out of affordably covering their housing costs.
For example, two-thirds of households that include a retail worker in the lowest wage quartile for that occupation are severely cost burdened, along with seven in ten of those including a childcare worker in the lowest wage quartile for that occupation. Meanwhile, a golden moment is being missed to place people into homeownership at record low interest rates. Additionally, home prices have fallen by about a third nationally, and by much more in many places. As a result, relative to renting, the cost of owning a home for first-time buyers is, on average nationally, more favorable than at any time in at least 40 years. But lenders are reluctant to lend, fearful of the impact of new regulations and of having to buy back poorly performing loans. As a result, many would-be homebuyers are missing a chance to lower their payments relative to today's rents and to lock in their mortgage costs with extraordinarily low fixed-rate loans. Having so many Americans spend so much on housing is a concern not just for those affected. Housing cost burdens affect the national economy, leaving less to spend on other items and making it harder for Americans to save for the future. As an example, among families with children in the bottom quarter of spenders, those devoting more than half of total outlays to housing and utilities spent a third as much on healthcare, half as much on clothes, and two-thirds as much on both food and pensions and insurance as those with housing outlays of less than 30 percent. In retirement, more will be entitled to programs like Medicaid, placing strains on social service systems. Hemmed in by budget pressures and the enormity of the problem, our political leaders have done little to forestall or address growing housing affordability problems. Federal programs are costly and have limited reach. Indeed, only about a quarter of all renters eligible for housing assistance (those earning half or less of local area median income) receive it, and there is essentially no comparable program to help struggling homeowners apart from a very small, temporary emergency program put in place in 2010. Still, some places at least have found ways to reduce housing costs in their areas through regulations and land use policies that do not involve taxpayer subsidies or tax incentives. These include some cities that are relaxing minimum unit-size requirements to encourage production of small micro-units of only a few hundred square feet. Others with strong enough markets have been offering density bonuses to encourage set-asides of affordable housing units in new construction projects. Yet most local governments continue to restrict residential densities. Lenders, meanwhile, are so cautious after having so badly missed the mark with their lending standards that many who could lock in today's low home prices and record low rates are unable to do so. Americans will face daunting housing cost burdens that thwart savings and sap spending on non-housing items until: 1) lenders ease standards back to reasonable levels, 2) homebuilders are freed of barriers preventing them from building at greater densities, and 3) governments provide greater tax incentives or subsidies to close the gap for more low- and moderate-income households between what they can afford and the costs of market-rate housing.
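The cost-burden categories used throughout this piece follow a simple rule: a household is cost burdened if housing and utilities take more than 30 percent of income, and severely cost burdened above 50 percent. A minimal sketch, with illustrative figures rather than data from the studies cited above:

```python
# Minimal sketch of the cost-burden definitions used above.
# Thresholds (30% and 50%) come from the text; example figures are hypothetical.

def cost_burden_category(annual_income: float, annual_housing_cost: float) -> str:
    """Classify a household by the share of income spent on housing and utilities."""
    if annual_income <= 0:
        return "undefined"
    share = annual_housing_cost / annual_income
    if share > 0.50:
        return "severely cost burdened"
    if share > 0.30:
        return "cost burdened"
    return "not cost burdened"

# A renter earning $24,000 and paying $1,100/month ($13,200/year) spends 55%
# of income on housing, so falls in the severe category.
print(cost_burden_category(24_000, 13_200))  # severely cost burdened
```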
https://www.jchs.harvard.edu/blog/more-working-americans-struggling-to-afford-housing
John Zorn can be described as one of the most prolific avant-garde composers, arrangers, producers and multi-instrumentalists in America. He can be credited with hundreds of albums, each incorporating a different style. He has moved skillfully across several genres, including classical, jazz, pop/rock and film music. Zorn is the typical example of an eccentric 21st century musician. Born on September 2, 1953 in New York City, Zorn grew up surrounded by a diverse set of musical experiences, as each of his family members indulged in a different genre of music. Not only was he avidly exposed to jazz, country music and doo-wop, but also to television music of the 50s, which greatly inspired him. He became interested in avant-garde music during his teens and later went on to study orchestration and composition at Webster College. These influences can be clearly seen in some of his earlier works, such as 'The First Recording 1973'. He began his compositions and recordings in the form of game pieces, often inspired by sports. They include 'Track and Field', 'Baseball', 'Golf', 'Hockey' and, the most influential of all, 'Cobra'. Zorn also engaged in improvised performances, which often incorporated duck calls; such performances include 'The Classic Guide to Strategy' and 'Locus Solus'. These smaller works led Zorn to his major breakthrough when he was signed by Warner Bros. and released the hit 'The Big Gundown: John Zorn Plays The Music Of Ennio Morricone'. This piece became famous for its radically arranged themes juxtaposed with diversely traditional musical genres. Soon he released two more such pieces, 'Spillane' and 'Spy vs. Spy: The Music of Ornette Coleman', both of which proved equally successful. Zorn is a talented saxophonist, and several of his compositions, such as 'Voodoo', emphasized his unique style and talent. His passion for jazz also led him to form his own punk jazz band, named 'Naked City'; the band released several albums, such as 'Grand Guignol', 'Heretic' and 'Absinthe', all of which also highlighted Zorn's interest in hardcore improvisation. He formed another band, named 'Painkiller', in 1991; both of Zorn's bands received international acclaim for their work. Painkiller's most famous releases include 'Guts of a Virgin', 'Rituals: Live in Japan' and 'Talisman: Live in Nagoya'. Zorn is praised not only as a jazz musician but also as a composer for documentaries, cartoons and films. He was approached by several independent film makers, such as Rob Schwebber and Raul Ruiz, to compose soundtracks for their films 'White and Lazy' and 'The Golden Boat', respectively. Zorn worked mainly on composing film scores in the 90s, since he found it personally more appealing and fulfilling; he usually composed for underground films conveying strong messages. He compiled a large volume of his film scores under the series named 'Film Works 1986-1990'. Later on, he turned to classical music and began composing chamber music, an example of which is 'Elegy', a suite that he wrote in 1992. He also revisited his Jewish heritage through albums such as 'Kristallnacht' and the later 'Radical Jewish Culture' series. In 1995, Zorn took charge of his career and formed his own label, 'Tzadik Records'. He released several of his works through his label, such as 'Bar Kokhba', 'Cartoon S&M' and 'Madness, Love and Mysticism'.
Well into the 21st century he has continued to produce awe-inspiring pieces that are undoubtedly hallmarks of his legacy, such as 'The Gift' and its sequel 'The Dreamers'. His unparalleled work has been honored with awards such as a MacArthur Foundation 'Genius Grant' and a 'Jewish Cultural Award in Performing Arts'.
https://www.famouscomposers.net/john-zorn
1. To subsidise meaningful and impactful activities organised by student societies which benefit the students, the university and the community. 2. To encourage students to organise activities which develop their leadership and interpersonal skills and cultivate their personal interests. 3. To enhance students' out-of-class experiences. The committee will consider allocating funding to support activities that improve the overall study environment and align with the following: 1. Sustainable Development Goals (SDG): activities which promote the SDGs to achieve a better and more sustainable future for all. 2. Community Engagement: collaborating with external institutions, industries, communities, NGOs, etc., to contribute to the public and serve people in need. 3. Cultural Exchange / Talent Development: activities which encourage cultural exchange among local and international communities to enrich students' learning experience outside the lecture room. In addition, these activities shall foster students' talent and development in soft skills areas including, but not limited to, sports, arts, music and performing arts, leadership, innovation, and science and technology.
https://dsa.sl.utar.edu.my/Student-Eminent-Project-Fund.php
The fixation on the college "rankings" that come out every year has never made much sense to me. Why do people pay so much attention to arbitrary lists, when the true value of a college education depends on factors specific to each individual student? Why is there such a strong desire, especially amongst the nation's best and brightest high school students, to attend the most prestigious, most selective or otherwise highest 'ranked' school? Our competitive academic culture emphasizes prestige above other factors in selecting a college. I believe that this fixation is not beneficial to students, their experiences in college, the institutions themselves, or the overall state of higher education in 2016. Yet, sadly, the trend appears to be increasing. According to a survey from UCLA, 70 percent of 2015 freshmen believed that reputation was "very important" when it came to choosing a college to attend. This is the highest level recorded since the survey started in 1967. The fact that so many students rank reputation as one of the most important factors in choosing a school is largely believed to be a byproduct of the rise of these college rankings. I find this notion – that it's extremely important to attend a school with high reputation and prestige – not necessarily true, and possibly detrimental to higher education as a whole as well as to many students' individual experiences in college. This is because such a fixation on prestige encourages students to bypass a school that might be a better fit for a school that is higher ranked. The writer and well-known intellectual Malcolm Gladwell makes a very compelling case along these lines in his most recent book, David and Goliath: Underdogs, Misfits, and the Art of Battling Giants. Gladwell tells the tale of Caroline Sacks, a student from the Washington D.C. area who sailed through her high school curriculum, never receiving a grade less than an A and finishing near the top of her class. She applied and was accepted to her dream school, Brown University, which she chose to attend rather than the University of Maryland, her backup school. Caroline was passionate about the sciences and had ambitions of pursuing a career in science. However, she ran into academic difficulties starting her freshman year in chemistry and organic chemistry classes. Despite being an intelligent, hard-working student who thrived in high school, she found the material tough to understand and began to lose confidence because she was no longer one of the smartest students in the class. To use Gladwell's terms, she was no longer a "big fish in a small pond," but rather a small fish in a very big pond, as Brown is one of the most selective schools in the country. Gladwell argues that had Caroline attended her backup school, the University of Maryland, she would not have lost confidence in her abilities to the extent that she did at Brown. He posits that the reason she struggled so much was a phenomenon sociologists call "relative deprivation" – the idea that we develop our impressions of how we are doing in comparison to our peers rather than to the population as a whole. At Brown, Caroline's peers were some of the brightest minds in the nation, and even though she struggled, she was surely still in an extremely high percentile of scientific ability for the general population.
Yet the effect of relative deprivation, due to Caroline attending Brown, was the feeling that she was not good enough to pursue a technical subject such as science. Gladwell's main point is that while some individuals may thrive in an environment such as Brown, others who still have a high level of aptitude and work ethic may not, because of relative deprivation. In other words, some students might do better as a bigger fish in a smaller pond. So what's the takeaway from this story? College rankings do matter to the extent that a student wants to know which schools have the "biggest ponds" – which schools are most selective and prestigious. But I happen to agree with Malcolm Gladwell – our society fixates on the big pond far too much, and many of us are better off in the smaller pond. Owen Sandercox '19 ([email protected]) is from Sandy Hook, Conn. He majors in economics and statistics.
https://www.theolafmessenger.com/2016/college-prestige-privileged-over-best-fit/
Using a technique dubbed “brainbow,” the Virginia Tech Carilion Research Institute scientists tagged synaptic terminals with proteins that fluoresce different colors. The researchers thought one color, representing the single source of the many terminals, would dominate in the clusters. Instead, several different colors appeared together, intertwined but distinct. Credit: Virginia Tech Neuroscientists know that some connections in the brain are pruned through neural development. Function gives rise to structure, according to the textbooks. But scientists at the Virginia Tech Carilion Research Institute have discovered that the textbooks might be wrong. Their results were published today in Cell Reports. “Retinal neurons associated with vision generate connections in the brain, and as the brain develops it strengthens and maintains some of those connections more than others. The disused connections are eliminated,” said Michael Fox, an associate professor at the Virginia Tech Carilion Research Institute who led the study. “We found that this activity-dependent pruning might not be as simple as we’d like to believe.” Fox and his team of researchers used two different techniques to examine how retinal ganglion cells – neurons that live in the retina and transmit visual information to the visual centers in the brain – develop in a mouse model.
https://gravernews.com/brainbow-reveals-surprising-data-about-visual-connections-in-brain/
Why get a Master of Science in Biology? The Master of Science in Biology is a 32-credit-hour program designed to strengthen students' content knowledge, problem-solving skills and research capabilities. Students will gain increased specialization in a biological discipline and an enhanced ability to do research. By tailoring coursework to meet students' interests, faculty ensure students can demonstrate their ability to interpret and report data in written and oral formats. Graduates are prepared to work in their focus area of biology and to succeed in additional professional or doctoral studies. Choose a thesis or non-thesis track. Thesis track: Students are advised to pursue this option if they wish to maximize their exposure to hands-on research methodology. Students design and carry out lab experiments or field research to address specific hypotheses related to unknown aspects of biology. Upon completion of the laboratory or field research, the results are communicated in both a formal thesis document and a seminar presentation. Non-thesis track: Students are advised to pursue this option if they are seeking a master's degree that puts a greater emphasis on traditional classroom coursework and does not involve extensive novel experimentation or field research. Students select an advisor to oversee the development of a scholarly paper. The paper will review and synthesize various aspects of the established scientific literature related to a specific, narrow biological question. Admission requirements: Candidates for admission to the graduate program should meet the following requirements and submit the following materials: - Meet the requirements of the Graduate School as set forth in the graduate catalog and be accepted to graduate study by the dean of the Graduate School. - Prior to submitting an application to the graduate program, students must contact and secure an appropriately qualified graduate faculty advisor. Application should not be made until a faculty member has agreed to accept the applicant as an advisee. A listing of biology graduate faculty is available at www.nwmissouri.edu/naturalsciences/directory/. For assistance, contact Graduate Coordinator Katie Spears. - Completion of a four-year undergraduate degree from an accredited college or university with an undergraduate grade-point average (GPA) of 2.75 (4.00 scale). Submit a complete set of undergraduate transcripts from all institutions attended. - An applicant with a GPA of 2.50 to 2.74 may apply to be accepted conditionally. If accepted, the student must complete the first eight graduate hours with a 3.0 GPA or be subject to suspension for one calendar year. A student who does not meet the GPA criteria could apply to the university as a non-degree-seeking student and seek full admission at a later date. If the student achieves a 3.0 in his/her first eight hours of graduate study, the student may reapply but must undergo the full application and admission review process. Admission to the department is not guaranteed. Applicants must have completed minimum coursework in the following areas: - Minimum of 24 semester hours in acceptable undergraduate courses in biology.
Coursework should include: - Zoology - Botany - Genetics - Microbiology or cell biology - Ecology - Minimum of 13 semester hours in chemistry - Pre-calculus mathematics - Minimum of four hours of physics. Additional courses (such as calculus, computer science and statistics) may be required depending upon the anticipated graduate program of the applicant. Acceptability of courses and additional requirements will be determined by the Biology Graduate Committee. Students may be accepted into the program with coursework deficiencies in these areas, but these deficiencies must be corrected in addition to the regular coursework associated with the program. Deficiencies should be corrected before completion of the first 15 graduate hours. GRE requirements: - Composite score of a minimum of 286 (800 on the old version); - the analytical writing section must be submitted; and - a student who does not meet the required GRE score may be accepted conditionally to the program. However, the minimum score listed above must be attained during the first trimester of enrollment. In extenuating circumstances, the student may appeal to the Biology Graduate Committee. Two letters of recommendation describing the suitability of the applicant for graduate study. A statement of purpose including: - a description of why the applicant wishes to pursue a Master of Science in biology; - the specific type of research project the applicant intends to conduct. Applicants are encouraged to review the research interests of faculty within the department and contact them about potential projects within their laboratories. Priority is given to applicants who have identified a research advisor willing to guide them through completion of their thesis or scholarly paper. A writing sample will be evaluated by the graduate advisor and two other members of the graduate committee, as required by the graduate catalog, during the initial trimester of enrollment. The student will be required to compose a handwritten, impromptu composition on a subject provided by the Biology Graduate Committee. Unacceptable writing ability will necessitate some remedial work and a subsequent writing sample. This is to be completed within the first trimester in the program. Submit official documents by mail to:
https://www.nwmissouri.edu/academics/graduate/masters/biology.htm
The thing that strikes me most about this month's lovely painting from Cezanne is the tonal quality of all the colors. We've had inspiration paintings in the past with a full rainbow of colors represented. But this month, and in this painting, all of the colors are on the cool side. See how even red and yellow, the warmest colors on the color wheel, seem quieter, colder? So as you design this month, aim for those cooler shades. For warmer colors, like red, orange, and yellow, look for darker, less saturated shades: dark ruby instead of fire engine red, browner oranges instead of tangerines, mustard-y yellows instead of sunshine. You'll have more range of colors to work with in the yellow and orange family, but the reds don't change nearly as much. Similarly, aim for beads in that very specific shade of dark blue (third swatch from the left). That color is seen in touches all over the painting and, like the red, doesn't vary very much. On the lighter side of blue (and blue-green), you have more to play with – light aquas and blues, medium teals, and all of those subtle shades in the background area. For the greens, stick to dark green, yellow-green, and brownish greens. And because I'm working on a project where I'm playing with color proportions, I thought I'd share the breakdown of this month's colors. You tell me, is seeing the color proportions helpful? Which colors, or combos, are you drawn to this month?
https://www.artbeadscenestudio.com/november-monthly-challenge-color-palette/
Have there been any studies done on animals' use of their bodies to signal, communicate or express their emotions, particularly to members of other species (e.g. humans)? I've been observing a very intelligent indoor-outdoor cat who has done the same sideways stretching posture every day he met me. There are a number of other postures the cat has been using that I've repeatedly observed - tearing at a carpet with his claws, or dragging himself along the carpet using his claws. Another cat was thumping her head against a door to "knock" and indicate that she was outside. To me, these gestures appeared to be clear expressions of intent or emotional state. Another observation involved "trained" geese at a local pond who would indirectly approach humans to ask for food. Their body, neck and head position appear to indicate intent (is the animal grazing towards the human or away from the human?). Yet another observation involves nesting birds who clearly express their displeasure when I am near their nesting area: they repeatedly produce a high-pitched screech until I move away. Similarly, I repeatedly see geese assume this kind of posture to intimidate other geese. As a human, I'm very conditioned to vocal and eye-related coordination, and this "non-verbal" language is fascinating to me. Have there been any studies of how indoor/outdoor, partially domesticated animals communicate with humans? Is it true that this communication uses their entire body and not just the vocal cords? Is there some "foundation" language that would be similar among members of the same species, or is it entirely a learned skill that has nothing to do with evolutionary adaptation? Finally, is there some sort of brain-complexity cutoff below which animals can no longer understand whether they are being communicated with?
https://biology.stackexchange.com/questions/2463/is-there-such-thing-as-animal-non-verbal-body-language
Marquette Law Mentorship Program
All first-year law students and transfer students are invited to participate in the Marquette Law Mentorship (MLM) program, a peer-mentoring program hosted by the Office of Student Affairs. Participating students gain an immediate connection to a second- or third-year law student who assists them in making a successful transition to Marquette University Law School. New students are matched with an upper-class mentor based on academic interests, shared experiences, hometowns, hobbies, and/or other common interests. Mentors help welcome their mentees and guide them to the academic, social, and professional resources available to them, helping them quickly integrate into the Law School community. As this year presents unanticipated challenges, we believe creating a strong MLM program will be more important than ever before. MLM's mission this year is one of unity: coming together as one student body to support one another and to create a deeper sense of community, especially for the incoming class. The goal of MLM is to create a casual, natural relationship between mentors and mentees. The overall expectation is that a relationship develops genuinely between mentors and mentees and that positive outcomes result for both parties.
https://law.marquette.edu/current-students/MLM
Extinction is an important mechanism for inhibiting initially acquired fear responses. There is growing evidence that the ventromedial prefrontal cortex (vmPFC) inhibits the amygdala and therefore plays an important role in the extinction of delay fear conditioning. To our knowledge, there is to date no evidence on the role of the prefrontal cortex in the extinction of trace conditioning. Thus, we compared brain structures involved in the extinction of human delay and trace fear conditioning in a between-subjects design in an fMRI study. Participants were passively guided through a virtual environment during learning and extinction of conditioned fear. Two different lights served as conditioned stimuli (CS); as unconditioned stimulus (US), a mildly painful electric stimulus was delivered. In the delay conditioning group (DCG) the US was administered at the offset of one light (CS+), whereas in the trace conditioning group (TCG) the US was presented 4 s after CS+ offset. Both groups showed insular and striatal activation during early extinction, but differed in their prefrontal activation. The vmPFC was mainly activated in the DCG, whereas the TCG showed activation of the dorsolateral prefrontal cortex (dlPFC) during extinction. These results point to different extinction processes in delay and trace conditioning. VmPFC activation during extinction of delay conditioning might reflect the inhibition of the fear response. In contrast, dlPFC activation during extinction of trace conditioning may reflect modulation of working memory processes which are involved in bridging the trace interval and holding information in short-term memory. Keywords: delay conditioning; extinction; fMRI; prefrontal cortex; trace conditioning; virtual reality
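The one design parameter that separates the two groups is the timing of the US relative to CS+ offset. A small sketch makes the contrast explicit; the CS duration and everything beyond the stated 4 s trace interval are illustrative assumptions, since the abstract does not give them.

```python
# Sketch of the timing difference between the two groups described above.
# Only the 4 s trace interval comes from the abstract; other values are assumed.

def trial_events(group: str, cs_duration_s: float = 8.0) -> list[tuple[float, str]]:
    """Return (time in seconds, event) pairs for one conditioning trial."""
    events = [(0.0, "CS+ light on"), (cs_duration_s, "CS+ light off")]
    if group == "DCG":      # delay conditioning: US at CS+ offset
        events.append((cs_duration_s, "US electric stimulus"))
    elif group == "TCG":    # trace conditioning: US 4 s after CS+ offset
        events.append((cs_duration_s + 4.0, "US electric stimulus"))
    else:
        raise ValueError("group must be 'DCG' or 'TCG'")
    return events

for g in ("DCG", "TCG"):
    print(g, trial_events(g))
```

The trace group must hold the CS+ in memory across the 4 s gap before the US arrives, which is exactly why the authors interpret the dlPFC activation as working memory involvement.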
https://www.ncbi.nlm.nih.gov/pubmed/24904363
A new book by Buddhist practitioner and writer B. Alan Wallace aims to bridge the gap between the worlds of science and spirituality by positing an adventurous new "Special Theory of Ontological Relativity." Reviewer William Harryman expresses ambivalence about Wallace's bold endeavor. I like Alan Wallace. He is one of my favorite Buddhist scholars. In fact, I recently reviewed his newest book — Mind in the Balance — very favorably. When he is talking about Buddhism, he is in his element. There are few people writing today with a better understanding of Buddhist history and tradition, especially Tibetan Buddhism, than Wallace. When he gets into the field of science, however, he is less knowledgeable, and it shows. At the beginning of Hidden Dimensions, his 2007 book attempting to unify physics and consciousness (from a Buddhist perspective, of course), he falls immediately into one of the common errors in trying to make sense of physics, namely the idea that consciousness — human consciousness — is an essential part of the measurement problem. Quoting his "Preface" to the book: "So quantum mechanics implies that consciousness may play a crucial role in the formation and evolution of the universe as we know it" (pg. viii). He goes on to say, in the final chapter of the book: The notion of an observer necessarily implies the presence of consciousness, without which no observation ever takes place, and … consciousness, far from being an insignificant by-product of brain activity, plays a crucial role in the formation and evolution of the universe. (109) Aside from the fact that the universe existed quite well without human consciousness, or any consciousness, for about 13.8 billion years, the essential flaw here is that it does not require consciousness, human or otherwise, to impact the outcome of a measurement. It simply requires the act of measurement, which only requires another electron, and contrary to popular understanding, that measurement effect is fully reversible. Despite this fundamental flaw in his thinking, this is still a useful volume, although many readers may have a hard time staying with the abstract nature of the arguments. One of his central premises is that the mind sciences need to get beyond the notion that all subjective experience is a by-product of neuro-chemical activity in the brain. This is the same argument that people such as Francisco Varela, Evan Thompson, and others have been making for years. The idea is still foreign to many neuroscientists, or simply rejected. And this is where Buddhism has something important to contribute as he gets into some of the finer points of neuroscience. For example, the notion that [E]verything we observe extrospectively and introspectively consists of qualia, or appearances, and they are illusory in the sense that they seem to exist either in the external world or inside our heads, whereas in reality there is no compelling evidence that they are located anywhere in physical space. (pg. 51) It's a basic tenet of Buddhism that if we try to take apart our perceptions of self (the five aggregates), looking for the substance behind each aspect, we will eventually discover there is no self there to look at — it's all illusory. This is what perceptual neuroscience has also been coming to terms with in recent years. Wallace does a good job of dismantling our consensus reality with his Special Theory of Ontological Relativity (chapter 5).
I'm not sure I buy his conclusions here — that all mental and physical processes arise from "another dimension" that exists prior to the separation of mind and matter. His conclusion seems to rest on the work of Carl Jung and Wolfgang Pauli and their synchronicity hypothesis. I like the theory, but I also want to see some way to test and verify it. Wallace then cites Roger Penrose and his archetypal mathematics ("independent of the existence and culture of human beings" [pg. 56]), but George Lakoff would counter that mathematics is metaphoric language and, as such, is grounded in our physical being, not in some abstract archetypal space. Chapter six offers some intriguing experiments to test the hypothesis of an "archetypal realm of pure ideas," many or most of which are based on early Buddhist practices that have fallen away over the last 1,500 years. Wallace is honest enough to admit that without prior training, and I'm guessing he means here monastic training in the Tibetan tradition, it could take 5,000 to 15,000 hours to complete his proposed experiments testing the archetypal qualities of the five basic elements (earth, water, fire, air, space). You might see why the scientific community wouldn't support such a project. And again, I take issue with the notion of a realm of pure archetypal ideas or forms — it's too anthropocentric to be valid on a cosmological scale. Wallace's next theory, A General Theory of Ontological Relativity, borrows from Einstein both in name and in spirit. He is proposing that [T]here is no theory or mode of observation — no infallible method of inquiry, scientific or otherwise — that provides an absolute frame of reference within which to test all other perceptions or ideas. (70) This is useful in that what he is really referring to here is the ability to take multiple perspectives ("one person's background theory may be someone else's foreground theory"). His conclusion, in part, is that there is no way to "separate the universe we know from the information we have about it" (72). From here he brings up the idea of seeing the universe as a giant computer (a favorite — and flawed — metaphor for some physicists). He relies on information theory — all things are information — to support this metaphor. But Wallace rejects this idea and then proposes something even more anthropocentric: that the universe is a giant brain. And here he brings back my initial complaint about his book: But whether that information exists in a computer, a brain, or a cosmos, we inevitably come back to the same point: meaningful information exists only relative to the act of informing and a conscious being that is informed. (74) There is no convincing evidence that the universe is information, first of all, and secondly, if this is untrue then there is no need for a conscious being to be informed by it. The universe existed pretty well for 13.8 billion years without any conscious beings that we know about (unless you accept the idea of a "God" of some sort). It's on this objection that theories such as these collapse. Wallace then proposes another solution to the measurement problem: the many worlds theory of physics.
Beginning with the notion that when a measurement or an observation causes the collapse of the quantum wave function — one possible reality is split off from all the possible realities (this is known as the Copenhagen interpretation) — the many worlds hypothesis claims that the wave function collapse is a subjective experience, and that objectively, all the possible worlds continue to exist. According to Wallace, “This hypothesis raises the possibility that individuals may alter the course of events by their choices, aspirations, faith, and prayers” (83). This line of thought is very close to magical thinking. It might be more realistic to say that individuals may alter their perception of events, but not the events themselves. The remainder of the book is equally challenging, including a chapter on the semi-annual meeting of the Dalai Lama with distinguished scientists, especially physicists, and a final chapter on the concept of symmetry in physics (the idea that there is a perfect or absolute reality — the “Great Perfection” — that exists independently of the material universe). While Wallace is arguing for a first-person science throughout the book, he never offers the studies supporting such an approach (for example, Tibetan monks changing their brain patterns depending on the form of meditation they use) — granted, he has made those arguments in other books. But if you want to convince scientists to take up that approach, more detailed arguments in support of it might be useful. In the end, this is a short but challenging book. It requires an ability to think in abstractions, mostly because that is where Wallace is working with this book. If you can suspend your disbelief about some of these anthropocentric ideas — which I could not — then you might enjoy the ride he takes us on as readers. Or, on the other hand, you might just enjoy watching a great mind tackle some of the toughest questions about life, the mind, consciousness, and the universe.
https://www.wildmind.org/blogs/book-reviews/hidden-dimensions-by-b-alan-wallace
Organizations can get in the way of their own success. The reasons are plentiful—ineffective design, challenged teams, unclear roles or structures, cumbersome or insufficient processes, challenging authority dynamics, to name just a few. Every leader has faced that moment when they simply know that something about how the organization is working is getting in the way of achieving the full potential of its goals. And yet harnessing the collective power of an organization—with multiple talents, ideas, and a more powerful overall impact than any one individual can have—is somehow still around the corner, just out of reach. How can you build or rebuild an organization to be greater than the sum of its parts? What are the key levers to focus on, and what needs tweaking or changing to ignite an organizational engine capable of outsized impact? Where to start? It is often challenging to re-visit how your organization is set up because certain ways of working are simply "the way things are done around here" or have become entrenched. The turbulence leaders face in today's uncertain times creates both a mandate and an opportunity to do what is hard to do even in more stable times: re-evaluate and make needed changes in your organization to build a sustainable future. Whether you are trying to create or reduce scale, redefine roles, ramp up innovation, or clarify or change decision-making or leadership norms, it all begins by defining your goal and identifying what levers will make the most difference to get you to the future you desire. Consider the levers to get you where you need to go: Once you have identified your goal—e.g., what you are trying to solve for in the organization that you are building—consider which levers to focus on to get there. Here are a few: — Functions and areas of responsibility: What are all of the kinds of work that must be accomplished, and who does it? Are there things you want people to stop doing? Spend less time on? Spend more time on? Do you have all areas of responsibility accounted for somewhere? — Organizational structure: How do you organize those who do the work (and what is their relationship to each other)? Are the right parts set up to interact effectively? Is the structure aligned with your current market and strategy—geographically, functionally, or otherwise? Does your structure enable innovation? — Leadership and governance: How do you make decisions, provide integration/oversight, and communicate (both internally and externally)? Is there a clear leadership team (does there need to be?), and are the right people on it, given what you need them to do? Are decision-making processes clear, and is the leadership structure set up to enable the consultation you need to make great decisions? — Process: How and where can you make your work more defined and repeatable? Do you iterate and differentiate in the places where unique thinking is needed, and follow repeatable processes where scale and efficiency are possible and/or critical? — Informal structure: How do you stay connected as an organization? What creates the glue that makes everyone feel part of the same overarching purpose (cohesion)? What informal channels are in place, and what is important to understand about how they enable work and discussion? There is no doubt a lot to consider, but chances are that only some of the levers need attention or re-visiting to align with your current strategy. Start by assessing what is working, and then address what is getting in the way.
A few things to keep in mind when assessing how to re-align your organization with your goals: — Organizational design needs to be linked to your purpose—structure works when it follows strategy. In assessing this, harness the power of real market forces. How do changes in the current environment translate into shifts in your goals, and how must the organization adapt to respond? Be as creative as possible in designing your organization for the market it actually exists in—and benchmarking it against real competitors. — Structure is important, but often is too singular a focus—structure can only take an organization so far. Organizations have two dimensions—the formal and informal: both matter. In the same mode of thinking, invest in both the incremental and transformative. Pay attention both to the genuine windows for major system innovation and to the endless stream of opportunities for incremental improvements. — Think about the real human side effects, unintended consequences. We tend to think about organizational design in too analytic a framework, ignoring the more human-based aspects of how change efforts might be experienced. Often, organizational changes end up producing exactly the opposite of what we allege we wanted. We may talk about empowerment, yet people feel less potent than before. We may talk about distributed leadership and accountability, yet people experience more centralization and less ownership. How can you think ahead to unintended consequences, and plan accordingly to align your intention with the outcomes? — Act quickly to learn. A key feature of the current moment is strategic speed. A fast-moving company will often use multiple actions as pilots to get the organization up a learning curve, so that they can more easily adapt as the new challenges unfold. — Think through multiple perspectives. Your people are your biggest asset. Consider their experience genuinely. Looking at your organization through the lenses of different roles is crucial to thinking through implementation. Organizational design is never easy work. It requires taking a critical, unflinching look at the full scope of your organization. Use the current moment to take on that hard work. Your efforts today will provide the foundation for the future.
https://www.cfar.com/harnessing-the-power-in-your-organization/
National Weather Service Advanced Hydrologic Prediction Service hydrograph for the Mississippi River at Inner Harbor Navigational Canal Lock, issued by the New Orleans/Baton Rouge, LA Weather Forecast Office with the Lower Mississippi River Forecast Center. A Dense Fog Advisory was in effect at the time of capture. Note: graphical forecasts are not available for this gauge; during times of high water, forecast crest information can be found in the text products. Flood categories (in feet): not available. Historic crests: (1) 17.52 ft on 04/27/1945; (2) 13.87 ft on 05/28/2017. Low water records: currently none available. Gauge location (approximate, based on the latitude/longitude coordinates provided to the NWS by the gauge owner): latitude 29.964444° N, longitude 90.027500° W, horizontal datum NAD83/WGS84. River stage reference frame: gauge height uses NWS stage 0 ft; flood stage not available. Vertical datum elevations for gauge height 0 and flood stage (NAVD88, NGVD29, MSL): not available. The National Weather Service prepares its forecasts and other services in collaboration with agencies like the US Geological Survey, US Bureau of Reclamation, US Army Corps of Engineers, Natural Resource Conservation Service, National Park Service, ALERT Users Group, Bureau of Indian Affairs, and many state and local emergency managers across the country. National Weather Service New Orleans/Baton Rouge Weather Forecast Office, 62300 Airport Rd.
https://water.weather.gov/ahps2/hydrograph.php?wfo=lix&gage=ihml1&hydro_type=0
Relationships between age and scholarly impact were assessed by determining the number of times single-author articles (N=227) published in Psychological Review between 1965 and 1980 were cited in the fifth year following publication. There were substantial individual differences in citation rates, but this measure of scholarly impact did not correlate with either the chronological age of authors or their professional age (years since PhD award). Although the majority of articles in Psychological Review were published by authors under the age of 40, such a bias is to be expected given the age distribution of American psychologists. When allowance was made for the number of authors in different age ranges, older authors were no less likely than younger authors to have generated a high-impact article (an article cited 10 or more times in the fifth year after publication). The data offer no support to claims that publications by young scientists have greater impact.

Abstract: The purpose of this paper was to analyze the intellectual structure of biomedical informatics as reflected in scholarly events such as conferences, workshops, symposia, and seminars. As analysis variables, "call for papers" topics, session titles, and author keywords from biomedical informatics-related scholarly events were combined with MeSH descriptors. As analysis cases, the titles and abstracts of 12,536 papers presented at five medical informatics (MI) and six bioinformatics (BI) global-scale scholarly event series during the years 1999-2008 were collected. N-gram terms (MI = 6,958; BI = 5,436) were then extracted from the paper corpus, and the term co-occurrence network was analyzed. One hundred important topics each for medical informatics and for bioinformatics were identified through the hub-authority metric, and their usage contexts were compared with the k-nearest neighbor measure. To identify research trends, newly popular topics were observed in two-year periods. Over the past ten years the most important topic in MI has been "decision support", while in BI it has been "gene expression". Though the two communities share several methodologies, according to our analysis they do not use them in the same context. This evidence suggests that MI uses technologies to improve productivity in clinical settings, while BI uses algorithms as tools for scientific biological discovery. Though MI and BI are arguably separate research fields, their topics are increasingly intertwined and the gap between the fields has blurred, forming a broad informatics: biomedical informatics. Using scholarly events as data sources for domain analysis is the closest way to approximate the forefront of biomedical informatics.
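The co-occurrence/hub-authority pipeline described in the abstract above is straightforward to prototype. Below is a minimal sketch in Python using networkx; the toy papers, the terms, and the choice of networkx's HITS implementation are illustrative assumptions, not the authors' actual pipeline.

```python
import itertools
import networkx as nx

# Toy corpus: each "paper" is the set of n-gram terms extracted from its
# title and abstract. These terms are invented for illustration.
papers = [
    {"decision support", "clinical guideline", "electronic health record"},
    {"decision support", "machine learning", "electronic health record"},
    {"gene expression", "microarray", "machine learning"},
    {"gene expression", "sequence alignment", "microarray"},
]

# Build the term co-occurrence network: terms are nodes; an edge's weight
# counts how many papers mention both terms.
G = nx.Graph()
for terms in papers:
    for a, b in itertools.combinations(sorted(terms), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Rank terms with the HITS hub-authority metric. On an undirected
# co-occurrence graph the hub and authority scores coincide.
hubs, authorities = nx.hits(G, max_iter=1000)
for term, score in sorted(authorities.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{term}: {score:.3f}")
```

In the study itself the ranking was computed over thousands of extracted terms rather than a handful, but the principle is the same: highly ranked terms are those that co-occur with other well-connected terms.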
Abstract: A bibliometric analysis was performed on a set of 1,718 documents relating to Web 2.0 to explore the dimensions and characteristics of this emerging field. It has been found that Web 2.0 has its roots deep in social networks, with medicine and sociology as the major contributing disciplines to its scholarly publications beyond its technology backbone of information and computer science. Terms germane to Web 2.0, extracted from the data collected in this study, were also visualized to reflect the nature of this rising star of the Internet. Web 2.0, according to the current research, is of the user, by the user, and, more importantly, for the user.

Abstract: Using a dataset of refereed conference papers, this work explores the determinants of academic production in the field of management. The estimation of a count data model shows that countries' level of economic development and the size of their economies have positive and highly significant effects on scholarly management knowledge production. The linguistic variable (English as an official language), which the literature has cited as an important factor facilitating participation in the international scientific arena, also has a positive and statistically significant effect.
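The "count data model" named in the last abstract is not specified there; a Poisson regression is the standard workhorse for count outcomes and serves as one plausible illustration. The sketch below uses statsmodels with invented data; the predictors, figures, and model family are placeholders standing in for the paper's actual dataset and specification.

```python
import numpy as np
import statsmodels.api as sm

# Invented data: publication counts per country plus candidate predictors.
papers  = np.array([120, 35, 260, 15, 80, 40])                       # refereed papers
gdp_pc  = np.log([45000, 9000, 52000, 3000, 30000, 12000])           # development level
gdp_tot = np.log([2.1e12, 3.0e11, 1.8e13, 5.0e10, 1.5e12, 4.0e11])   # economy size
english = np.array([1, 0, 1, 0, 0, 1])                               # English official?

# Poisson regression: E[papers] = exp(b0 + b1*gdp_pc + b2*gdp_tot + b3*english)
X = sm.add_constant(np.column_stack([gdp_pc, gdp_tot, english]))
result = sm.GLM(papers, X, family=sm.families.Poisson()).fit()
print(result.summary())
```

Positive, significant coefficients on the development and economy-size terms would correspond to the abstract's finding; the English-language dummy plays the role of the linguistic variable.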
https://akjournals.com/search?q=%22Scholarly+%22