As we trudge onward in the longest bull market in US history, I thought it wise to share with you a section from an old article, written by Richard Bookstaber, titled “Risk Management in Complex Organizations”. The article talks about risk. Particularly, that most pernicious kind of risk… The kind we can’t foresee. Volatility selling, FAANG hodling, growth-at-any-price investing… any strategy that’s become over-optimized for the current market regime faces a high risk of ruin in the next. For those of you heavily involved in such activities, it may be time to learn some lessons from the lowly cockroach. Here’s Bookstaber: “We can manage risks only when we can identify them and ponder their possible outcomes. We can manage market risk because we know security prices are uncertain; credit risk because we know companies can default; operational risk because we know missteps are possible in settlement and clearing. But despite all the risks we can control, the greatest risks remain beyond our control. These are the risks we do not see, things beyond the veil. The challenge in risk management is how to deal with these unidentified risks. It is more than a challenge; it is a paradox: How can we manage a risk we do not know exists? The answer is that we cannot manage these risks directly, but we can identify characteristics of risk management that will increase our ability to react to the risks. These characteristics are easiest to grasp in the biological setting, where we are willing to concede that nature has surprises that are wholly unanticipated by life forms that lack total foresight. A disease that destroys a once-abundant food source, the introduction of chemicals in a pristine environment, the eruption of a volcano in a formerly stable geological setting are examples of events that could not be anticipated, even in probabilistic terms, and therefore, could not be explicitly considered in rules of behavior. They are nature’s equivalent to the unforeseeable risks for the corporation. The best measure of adaptation to unanticipated risks in the biological setting is the length of time a species has survived. A species that has survived for hundreds of millions of years can be considered, de facto, to have a better strategy for dealing with unanticipated risks than one that has survived for a short time. In contrast, a species that is prolific and successful during a short time period but then dies out after an unanticipated event may be thought of as having a good mechanism for coping with the known risks of one environment but not for dealing with unforeseeable changes. By this measure, the lowly cockroach is a prime case through which to study risk management. Because the cockroach has survived through many unforeseeable changes — jungles turning to deserts, flatland giving way to urban habitat, predators of all types coming and going over the course of the millennia — the cockroach can provide us with a clue for how to approach unanticipated risks in our world of financial markets. What is remarkable about the cockroach is not simply that it has survived so long but that it has done so with a singularly simple and seemingly suboptimal mechanism: It moves in the opposite direction of gusts of wind that might signal an approaching predator.
This “risk management structure” is extremely coarse; it ignores a wide set of information about the environment — visual and olfactory cues, for example — which one would think an optimal risk-management system would take into account. This same pattern of behavior — using coarse decision rules that ignore valuable information — appears in other species with good track records of survivability. For example, the great tit does not forage solely on the small set of plants that maximize its nutritional intake. The salamander does not fully differentiate between small and large flies in its diet. This behavior, although not totally responsive to the current environment, allows survivability if the nature of the food source unexpectedly changes. We also see animals increase the coarseness of their response when the environment does change in unforeseeable ways. For example, animals placed for the first time in a laboratory setting often show a less fine-tuned response to stimuli and follow a less discriminating diet than they do in the wild. The coarse response, although suboptimal for any one environment, is more than satisfactory for a wide range of unforeseeable environments. In contrast, an animal that has found a well-defined and unvarying niche may follow a specialized rule that depends critically on that animal’s perception of its world. If the world continues on as the animal perceives it — with the same predators, food sources, and landscape — the animal will survive. If the world changes in ways beyond the animal’s experience, however, the animal will die off. Precision and focus in addressing the known comes at the cost of reduced ability to address the unknown.” Investors who pile into trendy strategies simply because they’ve worked so well in the recent past fall into the trap of mistaking the unknown for the nonexistent, as Nassim Taleb puts it. Strategies predicated on simple “coarse decision rules,” on the other hand, may not be the best-performing approach in any single environment, but they are more robust and adaptable, which in the long run means greater odds of survivability. In investing, that means staying in the game. Choose adaptability over optimization. Be more like the cockroach.
https://macro-ops.com/adaptability-versus-optimization-the-cockroach-approach/
DrawF expression: draws an expression on the graph screen in terms of X. Press 2nd PRGM to access the draw menu, then 6 to select DrawF (or use the arrows and ENTER). Works on the TI-83/84/+/SE; the token is 1 byte. The DrawF command draws a single expression on the graph screen in terms of X using Func graphing mode, regardless of what graphing mode the calculator is actually in. For example, DrawF X² will draw a parabola in the shape of a U on the screen. Of course, how it is displayed depends on the window dimensions of the graph screen; you should use a friendly window to ensure it shows up as you intend. Advanced Uses: DrawF will update X and Y for each coordinate drawn (like Tangent( and DrawInv), and exit with the last coordinate still stored. When evaluating the expression, the calculator will ignore the following errors: ERR:DATA TYPE, ERR:DIVIDE BY 0, ERR:DOMAIN, ERR:INCREMENT, ERR:NONREAL ANS, ERR:OVERFLOW, and ERR:SINGULAR MAT. If one of these errors occurs, the data point will be omitted. For this reason, DrawF can sometimes behave in an unexpected fashion: for example, it doesn't throw an error for list or matrix expressions (it won't graph anything, either). You can use DrawF to draw an expression instead of having to store the expression to a Y# variable and then displaying it. At the same time, if you plan on manipulating the expression (either changing the value or changing the expression itself), it is better to simply use the Y# variable.
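Here is a rough Python analogue of that skip-on-error behavior (an illustration of the idea only, not the TI firmware; the default window and the 95-column screen width mirror a TI-83/84):

```python
# Rough Python analogue of DrawF (illustration only, not the TI firmware):
# evaluate the expression at each of the 95 pixel columns of a TI-83/84
# graph screen and silently omit any point whose evaluation raises an
# error, just as DrawF omits points hitting ERR:DOMAIN, ERR:DIVIDE BY 0, etc.
import math

def drawf(f, xmin=-10.0, xmax=10.0, columns=95):
    points = []
    for i in range(columns):
        x = xmin + i * (xmax - xmin) / (columns - 1)
        try:
            y = f(x)
        except (ValueError, ZeroDivisionError, OverflowError):
            continue  # omit the data point instead of throwing an error
        points.append((x, y))
    return points

# 1/X fails at X=0 and sqrt(X) fails for X<0; both simply draw with gaps.
print(len(drawf(lambda x: 1 / x)))         # 94 points: X=0 omitted
print(len(drawf(lambda x: math.sqrt(x))))  # 48 points: negative X omitted
```

Run on 1/X or √(X), the curve is simply drawn with gaps where evaluation fails, which is exactly the behavior described above.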
http://tibasicdev.wikidot.com/drawf
Inspiraling compact-object binary systems are promising gravitational wave sources for ground- and space-based detectors. The time-dependent signature of these sources is a well-characterized function of a relatively small number of parameters; thus, the favored analysis technique makes use of matched filtering and maximum likelihood methods. As the parameters that characterize the source model vary, so do the templates against which the detector data are compared in the matched filter. For small variations in the parameters, the filter responses are closely correlated. Current analysis methodology samples a bank of filters whose parameter values are chosen so that the correlation between successive samples from successive filters in the bank is 97%. Correspondingly, the additional information available with each successive template evaluation is, in a real sense, only 3% of that already provided by the nearby templates. The reason for such a dense coverage of parameter space is to minimize the chance that a real signal, near the detection threshold, will be missed by the parameter space sampling. Here we investigate the use of Chebyshev interpolation for reducing the number of templates that must be evaluated to obtain the same analysis sensitivity. Additionally, rather than focus on the "loss" of signal-to-noise associated with the finite number of filters in the template bank, we evaluate the receiver operating characteristic (ROC) as a measure of the effectiveness of an analysis technique. The ROC relates the false alarm probability to the false dismissal probability of an analysis, which are the quantities that bear most directly on the effectiveness of an analysis scheme. As a demonstration, we compare the present "dense sampling" analysis methodology with the "interpolation" methodology using Chebyshev polynomials, restricted to one dimension of the multidimensional analysis problem by plotting the ROC curves. We find that the interpolated search can be arranged to have the same false alarm and false dismissal probabilities as the dense sampling strategy using 25% fewer templates. Generalized to the two-dimensional space used in the computationally limited current analyses, this suggests a factor of 2 increase in computational efficiency; generalized to the full seven-dimensional parameter space that characterizes the signal associated with an eccentric binary system of spinning neutron stars or black holes, it suggests an order of magnitude increase in computational efficiency. Since the computational cost of the analysis is driven almost exclusively by the matched filter evaluations, a reduction in the number of template evaluations translates directly into an increase in computational efficiency; additionally, since the computational cost of the analysis is large, the increased efficiency translates also into an increase in the size of the parameter space that can be analyzed and, thus, the science that can be accomplished with the data.
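The one-dimensional core of the proposal can be sketched numerically (a toy stand-in: the smooth "match" function below is invented for illustration and is not the paper's waveform model):

```python
# Toy 1-D sketch of the interpolation idea: evaluate a smooth detection
# statistic at a small number of Chebyshev nodes and reconstruct it
# everywhere by polynomial interpolation, instead of sampling a dense
# bank of templates. The "match" function is a made-up stand-in.
import numpy as np
from numpy.polynomial import chebyshev as C

def match(tau):
    # Hypothetical smooth filter output over one source parameter.
    return np.exp(-0.5 * tau**2 / 0.4**2)

n_templates = 12                       # sparse template evaluations
k = np.arange(n_templates)
nodes = np.cos((2 * k + 1) * np.pi / (2 * n_templates))  # Chebyshev nodes on [-1, 1]
coeffs = C.chebfit(nodes, match(nodes), deg=n_templates - 1)

tau = np.linspace(-1, 1, 1000)         # where a dense bank would sample
err = np.max(np.abs(C.chebval(tau, coeffs) - match(tau)))
print(f"max interpolation error from {n_templates} templates: {err:.1e}")
```

The point of the sketch is only that a smooth statistic sampled at a handful of Chebyshev nodes can be reconstructed accurately everywhere in between, which is what lets an interpolated search get away with fewer template evaluations.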
http://repository.ias.ac.in/9755/
MOSCOW: Mikhail Gorbachev, who set out to revitalise the Soviet Union but ended up unleashing forces that led to the collapse of communism, the breakup of the state and the end of the Cold War, died Tuesday. The last Soviet leader was 91. Gorbachev died after a long illness, according to a statement from the Central Clinical Hospital in Moscow. Though in power less than seven years, Gorbachev unleashed a breathtaking series of changes. But they quickly overtook him and resulted in the collapse of the authoritarian Soviet state, the freeing of Eastern European nations from Russian domination and the end of decades of East-West nuclear confrontation. US President Joe Biden called Gorbachev a man of remarkable vision and a rare leader who had the imagination to see that a different future was possible and the courage to risk his entire career to achieve it. “The result was a safer world and greater freedom for millions of people,” Biden said in a statement. “Hard to think of a single person who altered the course of history more in a positive direction than Gorbachev,” said Michael McFaul, a political analyst and former US ambassador in Moscow, on Twitter. His decline was humiliating. His power hopelessly sapped by an attempted coup against him in August 1991, he spent his last months in office watching republic after republic declare independence until he resigned on December 25, 1991. The Soviet Union wrote itself into oblivion a day later. A quarter-century after the collapse, Gorbachev told The Associated Press that he had not considered using widespread force to try to keep the USSR together because he feared chaos in the nuclear country. “The country was loaded to the brim with weapons. And it would have immediately pushed the country into a civil war,” he said. By the end of his rule, he was powerless to halt the whirlwind he had started. Yet Gorbachev may have had a greater impact on the second half of the 20th century than any other political figure. “I see myself as a man who started the reforms that were necessary for the country and for Europe and the world,” Gorbachev told the AP in a 1992 interview shortly after he left office. “I am often asked, would I have started it all again if I had to repeat it? Yes, indeed. And with more persistence and determination,” he said. Gorbachev won the 1990 Nobel Peace Prize for his role in ending the Cold War and spent his later years collecting accolades and awards from all corners of the world. Yet he was widely despised at home. Russians blamed him for the 1991 implosion of the Soviet Union, a once-fearsome superpower whose territory fractured into 15 separate nations. His former allies deserted him and made him a scapegoat for the country’s troubles. His run for president in 1996 was a national joke, and he polled less than 1% of the vote. In 1997, he resorted to making a TV ad for Pizza Hut to earn money for his charitable foundation. Gorbachev never set out to dismantle the Soviet system. He wanted to improve it. Soon after taking power, Gorbachev began a campaign to end his country’s economic and political stagnation, using glasnost, or openness, to help achieve his goal of perestroika, or restructuring. In his memoirs, he said he had long been frustrated that in a country with immense natural resources, tens of millions were living in poverty.
Once he began, one move led to another: He freed political prisoners, allowed open debate and multi-candidate elections, gave his countrymen freedom to travel, halted religious oppression, reduced nuclear arsenals, established closer ties with the West and did not resist the fall of Communist regimes in Eastern European satellite states. But the forces he unleashed quickly escaped his control. Long-suppressed ethnic tensions flared, sparking wars and unrest in trouble spots such as the southern Caucasus region. Strikes and labour unrest followed price increases and shortages of consumer goods. Competitive elections also produced a new crop of populist politicians who challenged Gorbachev’s policies and authority. Chief among them was his former protege and eventual nemesis, Boris Yeltsin, who became Russia’s first president. “The process of renovating this country and bringing about fundamental changes in the international community proved to be much more complex than originally anticipated,” Gorbachev told the nation as he stepped down. “However, let us acknowledge what has been achieved so far. Society has acquired freedom; it has been freed politically and spiritually. And this is the most important achievement.”
https://starindia.news/mikhail-gorbachev-who-steered-soviet-breakup-dead-at-91/
3 Things You Need To Know About Midwives Finding out you are expecting a new bundle of joy can be one of the happiest times in your life. You probably have a million questions running through your mind right now about everything. Don't worry; that's completely normal. While many women opt for the typical obstetrician, many others are considering going with a midwife. If you are on the fence and not sure which one is the right one for you, here are a few things you need to know about midwives to help you make an informed decision. Midwives have extensive training and education in the field. Oftentimes, people form misconceptions about midwives. They assume that these individuals don't have a lot of training or experience and that they are just someone who comes into the home to assist in the birth while someone else does all of the hard work. In reality, that isn't even close to being the case. Midwives have a lot more training than you might think. Depending on what state you live in, many are required to have their master's degree in the field and pass the national certification before being able to practice. During their training, they will have garnered countless hours in the field on the path to earning their degree. Midwives take a more personalized approach. One of the best things about going with a midwife over a traditional OBGYN is that you are going to get a more personal approach to your care. You have to think about the fact that OBGYNs are seeing hundreds of patients during the week. They can hardly be expected to remember everything about everyone. Midwives take on a much smaller caseload, which means they are able to get to know you on a more personal level. They will be able to note all of your concerns and specific needs to better provide you with the care you require during the pregnancy and into labor and delivery. Midwives work in home and hospital settings. Regardless of whether you are planning on having your baby at home or in the comfort of a hospital, you can opt to have your midwife there. They aren't restricted to just home births. Spend some time discussing the various options with the midwife and see what their preference is to make sure you are on the same page beforehand. Now that you know a little bit about midwives, you can make an informed decision that is in the best interest of you and your new bundle of joy. Reach out to a center like Women's Healthcare Associates LLC for more help.
http://findingnz.com/2015/09/28/3-things-you-need-to-know-about-midwives/
The saga continues: Fergus Falls cuts ties with developer of historic Kirkbride building It's back to square one for the City of Fergus Falls and the future of the historic Kirkbride building. The city council voted unanimously Monday night to cut ties with developer Ray Willey of Historic Properties Inc., which had been in discussions with the city to fix up and repurpose the old Regional Treatment Center known locally as the Kirkbride building, Forum News reports. It's a huge structure that looks sort of like a castle, and it housed patients with mental illness from the 1890s until it was closed in 2005 (read a history of the building here). After months of discussion, the city decided the differences between the two parties were too great and terminated all future discussions with the developer. The funding problem The main issue: The sides couldn't agree on funding for the project by the July 10 deadline. Willey had requested the city give the project $350,000 upfront – no restrictions or guarantees attached – for the first phase of the multimillion dollar project. (That's half the amount Willey had requested last fall.) “If we gave the money without condition and they weren’t successful, that is obviously money that is taken away from another developer or from the city being able to do something tangible with the money,” Ward 4 Council Member Anthony Hicks told the Fergus Falls Journal. So the city offered the developer $350,000 in the form of a grant upon completion of the project, the paper says, and was willing to do other improvements around the property that totaled more than $1 million. In the final hours before Monday's meeting, Willey agreed to the city's offer, but the city council opted to terminate the partnership and move to find a different developer to repurpose the historic building – but it must do so soon, the Fergus Falls Journal says. The city, which purchased the Kirkbride from the state of Minnesota for $1 in 2007, was granted $4 million in state money to help pay for the cost of renovation or demolition. That state grant expires at the end of 2016, so time is a factor. City officials have scheduled a work session for Aug. 10 for public input on the future of the Kirkbride, the Fergus Falls Journal notes. Friends of the Kirkbride – a vocal group of community residents that has been fighting for years to keep the historic building safe from the wrecking ball – called Monday night's news disappointing, but hope that the city will find another developer soon, according to posts in the Facebook group. You can read more about the building's development saga here.
https://bringmethenews.com/minnesota-news/the-saga-continues-fergus-falls-cuts-ties-with-developer-of-historic-kirkbride-building
Bartlett SR, et al. Clin Infect Dis. 2018;doi:10.1093/cid/ciy210. March 19, 2018 A program granting prison inmates with hepatitis C virus infection unrestricted access to direct-acting antiviral therapy nearly eliminated the virus at a correctional facility in Australia less than 2 years after its implementation, according to study findings published in Clinical Infectious Diseases. Open access to direct-acting antivirals (DAAs) was made available through the Pharmaceutical Benefits Scheme (PBS) — a component of the Australian Government’s National Medicines Policy that subsidizes the cost of certain medications. PBS expanded access to DAA therapy for people with HCV, including those in the correctional system, in March 2016, according to Sofia R. Bartlett, PhD, researcher in the Viral Hepatitis Clinical Research Program at The Kirby Institute, University of New South Wales in Sydney, Australia, and colleagues. HCV, they noted, is common in correctional facilities, with more than 10% of incarcerated people having the infection worldwide. HCV prevalence is even higher among people who inject drugs (PWID) who are incarcerated, they added. “The close relationship between injection drug use, incarceration and prevalence of blood-borne viruses makes correctional centers a crucial setting for enhanced DAA therapy access and broad prevention strategies,” Bartlett and colleagues wrote. “Population-level HCV elimination success will require effective HCV treatment and prevention programs among both PWID and people who are incarcerated.” After unrestricted access to DAA therapy was made available, the Lotus Glen Correctional Center (LGCC) in Queensland initiated a program implementing rapid treatment scale-up for inmates. Within the first 22 months of the program, the proportion of new inmates tested for HCV increased from 83% to 91%. Overall, 125 inmates who tested positive for HCV were offered treatment. Among them, 119 were prescribed interferon-free DAA for 8, 12 or 24 weeks. Bartlett and colleagues reported that 97% of patients with evaluable treatment outcomes had sustained virologic suppression. They estimated that HCV viremic point prevalence declined from 12.6% before the program, to 4.3% 1 year after implementation and 1.1% 22 months after implementation. More than 30 patients were lost to follow-up, and reinfections occurred in two LGCC inmates, one patient released from the facility and three transferred to another center, highlighting the risk for ongoing exposure and the need to improve communication liaisons between HCV treatment services in the correctional system and the community, according to the researchers. “Nonetheless, this study demonstrates that correctional center-based DAA therapy services ... can provide favorable individual and facility population-level outcomes,” they concluded. – by Stephanie Viguers Disclosures: Bartlett reports no relevant financial disclosures. Please see the study for all other authors’ relevant financial disclosures.
https://www.healio.com/infectious-disease/hepatitis-c/news/in-the-journals/%7B955dc680-64fc-4a4a-93ab-e91aa338c68d%7D/unrestricted-access-to-daas-nearly-eliminates-hcv-in-australian-prison
Welcome Seniors! Please click HERE to view important dates, events, and fundraisers for the Class of 2018. The Class of 2018 is sponsoring a PAINT YOUR OWN HOLIDAY ORNAMENT FUNDRAISER on Friday, December 2, 2017 from 11am-2pm in the H.S. Large Cafeteria (snow date: December 9). Come join our Class of 2018 Holiday Elves for a fun afternoon of painting, cookies, milk, coffee, tea, hot cocoa, and holiday storytime for the little ones. Register here by November 21, 2017 to secure your spot: Paint Your Own Holiday Ornament Fundraiser. In order to raise as much money as possible, we are reaching out to the Class of 2018 families and HS staff/faculty to request donations of food, beverages and other items. To make a donation, please click on this link: PAINT YOUR OWN DONATIONS. The Class of 2018 appreciates your support. **ALL DATES ARE SUBJECT TO CHANGE. PLEASE CHECK BACK FREQUENTLY! Questions or concerns, please contact the Class of 2018 Advisors: Lara Held: [email protected] Melissa Lynch: [email protected]
http://www.ws.k12.ny.us/Classof2018.aspx
Incident Management Month was started in 2016 and takes place every year in November, to highlight the importance of improving workplace safety by promoting a culture of thorough incident management and reporting. It's a celebration of all that has been achieved, and a centre of ideas for improvement in the future. Incident Management is a core function of the Environmental, Health & Safety (EHS) profession. It's a process used widely across industries with a considerable level of EHS risk, such as Aerospace, Construction, Manufacturing or Oil & Gas. The aim of deploying an Incident Management system or procedure is to reduce risk through appropriate levels of investigation, remediation tasks and actions taken after a worker reports that an EHS incident has occurred. These actions are what prevent a similar incident happening again in the future. Why give it an annual month? For the month of November each year, Pro-Sapien is all about Incident Management. Despite the benefits being widely recognized, many EHS departments still struggle to garner investment, whether to update existing software that is no longer fit for purpose or to deploy a system for the first time. However, having a system in place is not a silver bullet: workers have to be happy to use it, too. The Occupational Safety and Health Administration (OSHA) estimates that around half of all severe workplace injuries go unreported. This can be down to a number of reasons, such as fear, effort, or ignorance. Our annual IMM campaign aims to help EHS professionals with their - you guessed it - Incident Management. Across all of our online communications channels, we'll be bringing you a combination of helpful and insightful content to inspire ideas for improvement. Find out what's on. Use this page to keep track of #IMM2018, where we'll be sharing articles from industry influencers, our annual survey of EHS professionals and details on the campaign's Incident Management webinar. Take part in the conversation! There will be blog posts, infographics, social media conversations, a survey and a webinar included in IMM 2018. EHS influencers will be contributing content to the Pro-Sapien blog, with articles from the likes of Shawn Galloway (ProAct Safety) and John Dony (NSC). Check out what's been shared and leave your comments! Our communications team will be "out and about" on LinkedIn and Twitter sparking discussions around Incident Management. Take part with the hashtag #IMM2018 or visit the Pro-Sapien Twitter channel for more! Are you an EHS manager? We want to hear from you! Fill out our short survey to tell us about your experience with Incident Management. Remain anonymous or enter your email to get a $5 Amazon voucher as thanks. Join us for a 30-minute webinar on Tuesday, November 20, 2018 as the grand finale to Incident Management Month. We'll be demonstrating an Incident Management system which you can use through Microsoft SharePoint. Register online for free today.
https://www.pro-sapien.com/resources/events/incident-management-month/
Seum, Stefan and Ehrenberger, Simone and Pregger, Thomas (2020) Extended emission factors for future automotive propulsion in Germany considering fleet composition, new technologies and emissions from energy supplies. Atmospheric Environment, Article 117568. Elsevier. doi: 10.1016/j.atmosenv.2020.117568. ISSN 1352-2310. Official URL: https://www.sciencedirect.com/science/article/abs/pii/S1352231020303034?via%3Dihub Abstract: To date, road transport remains largely fossil-fuel driven and contributes significantly to greenhouse gas emissions and air pollutants. In order to assess the impact of development pathways of future transport, new emission factors for emerging technologies are needed, together with a shift in the assessment framework that extends the emissions covered to include electricity generation. The focus of this study is to provide emission factors for future passenger cars and fleets and to offer an approach for comprehensively assessing emission effects in future studies. Our scenario storyline approach embeds different levels of change in consideration of plausibility and consistency. We developed three pathways for Germany up to 2040 in order to capture the interdependencies of measures and developments. We hereby consistently modified the progress in transport technologies and in power generation together with changes in fleet compositions. Furthermore, we developed emission factors and energy consumption factors for plug-in hybrid and electric vehicles and expanded the conventional tank-to-wheel emission factors by including emissions derived from consistent energy scenarios. The development of emission factors depends on multiple factors, including vehicle and engine size. Furthermore, electrification shifts emissions from the tailpipe to power generation. Particularly for nitrogen oxides and particulate matter, electric power generation for transport purposes could contribute significantly to ambient air emissions in the future, while tailpipe emissions can be expected to decline substantially.
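The accounting extension the abstract describes, from tank-to-wheel only to tank-to-wheel plus electricity supply, reduces to a simple per-vehicle calculation; the sketch below uses made-up placeholder numbers, not values from the paper:

```python
# Sketch of extending a tank-to-wheel emission factor to include emissions
# from power generation, as the study does for plug-in hybrid and electric
# vehicles. All numbers are made-up placeholders, not values from the paper.

def extended_emission_factor(tailpipe_g_per_km, kwh_per_km,
                             grid_g_per_kwh, electric_share):
    """Combine tailpipe and electricity-supply emissions per vehicle-km.

    electric_share: fraction of distance driven electrically (1.0 for a BEV).
    """
    upstream = electric_share * kwh_per_km * grid_g_per_kwh
    tailpipe = (1.0 - electric_share) * tailpipe_g_per_km
    return tailpipe + upstream

# Hypothetical NOx example: a plug-in hybrid driving 60% of its km electrically.
ef = extended_emission_factor(tailpipe_g_per_km=0.06, kwh_per_km=0.18,
                              grid_g_per_kwh=0.25, electric_share=0.6)
print(f"extended NOx emission factor: {ef:.3f} g/km")
```

The grid term is what moves emissions "from tailpipe to power generation" as fleets electrify, which is why the paper's extended factors depend on the power-generation scenario as much as on the vehicle.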
https://elib.dlr.de/134971/
OPEN MIC/KARAOKE NIGHTS are available for students and outside guests to practice their performance, and will be on the calendar to sign up on a first come, first served basis. Check the calendar above, as it will be updated on the first Saturday of each month. When you arrive at open mic, put in your name and you will be put on the list to perform. This is open to all musicians, not just singers. These are casual nights that do not require advance sign-up; just show up and enjoy the show. STUDENT RECITALS REQUIRE REGISTRATION. Before you register for a student recital, please be sure you have communicated with your teacher that there is an appropriate goal set for your student with a proper date. If a student isn't prepared to perform 2 weeks before the reserved time, their teacher may pull them from that recital and advise a later date. Preparation for a recital means FULLY MEMORIZED and COMPLETED pieces. Students dress up, and present their pieces in a formal setting. For HOW TO REGISTER FOR A RECITAL, click here. **Important: if you cannot make it to the recital you register for, please be sure to cancel your reservation so other students can sign up and attend. UPCOMING RECITAL DATES:
https://www.temeculamusicteacher.com/calender
The Secretary of the Environment and Urbanization of the 20th Executive Council of the Institute of Architects Bangladesh, a contributor to the Daily Star on urban issues, and an architect and faculty member at Ahsanullah University of Science and Technology, Sujaul Islam Khan, shares his impressions of the New Urban Agenda and contemplates the development predictions of Dhaka. What are the major issues in Bangladesh's urbanization trajectory? With 160 million people, Bangladesh is the world’s eighth most populous nation. It is a low-lying, deltaic country which is prone to regular water disasters such as erosion and flooding and will bear the brunt of climate change induced water calamities. Unplanned urbanization, slums, lack of proper housing, infrastructure deficits, disruption of natural hydrology and pollution, poor waste management, and urban poverty are threatening sustainable growth of the nation. Wanton corruption, land grabbing, poor accountability of civil servants, and top-down, elite-biased decision making practices exacerbate the issues facing urbanization. Has there been a significant commitment to the New Urban Agenda process from Bangladesh? Yes, Bangladesh has been a significant contributor to the New Urban Agenda process. There is a consensus in the government regarding prioritizing urban management on the development agenda. How can the New Urban Agenda have an impact on the ground in Bangladesh? The draft document of the New Urban Agenda is an extremely pertinent document for managing urban growth to ensure sustainable development in Bangladesh. Even though it has been designed to address urbanization issues globally, it has special relevance for Bangladesh. The document's emphases on capacity-building, on reconfiguring the public agencies involved in urban planning on a nationwide scale, and on integrating land use planning with development planning with attention to environmental issues are all very relevant to Bangladesh. What are the major challenges of local implementation of a global agenda? Political exigencies shape the process of prioritization of collective efforts in any given society. Such processes initiated by a top-down mindset have not often been conducive to success in the developing world. Scarcity of resources and of institutional capacity handicaps many efforts toward goals which require positive political commitment, multi-sectoral collaboration, and long-term strategic planning, monitoring and accountability. Such goals need to become social movements with effective public-private collaboration. How do you see Bangladesh's urbanization taking shape over the next decade or two? Given the economic trends and projections, urbanization will continue for the next two decades. Quality of life in the cities will not improve due to urban mismanagement. Pressure on scarce land will increase and will hamper agriculture. If the river-linking projects in the upper riparian countries materialize, water will become scarce, exacerbating the downfall in agriculture. Climate change induced sea-level rise and water disasters will also trigger mass migration from the south. The people of the Bengal delta are resourceful, resilient, and adaptable. Bangladesh, a nation plagued by poverty for almost three hundred years, is now at the crossroads of prosperity.
Given the track record of the tremendous amount of positive change over the last two decades in health, education, sanitation, communications, and many other sectors, I am optimistic that it will respond to the upcoming situation arising from rapid urbanization with ingenuity and effectiveness.
http://www.urb.im/ca1610dhe
The sample of the 2005 National Health Survey in Algeria was designed to collect data from 4,818 households in 16 of Algeria's 48 provinces. The distribution of the sample corresponded to the national distribution of the population. Each head of the household was asked questions about socioeconomic factors and diseases affecting members of the household. The responses were then coded in ICD-10 by physicians who supervised each household visit. Additional modules on typical food consumption, physical activity, and well-being (the SF-36 questionnaire) were answered by household members aged 35-70. Measurements of individuals' height, weight, and blood pressure were taken, as well as blood samples to measure glucose, cholesterol, and triglycerides.
http://ghdx.healthdata.org/print/126190
Australia's gross domestic product fell by 0.5 percent quarter-on-quarter in the three months to September, down from an increase of 0.6 percent in the three months to June (revised from a previous estimate of 0.5 percent). This is the weakest quarterly growth since the last three months of 2008 and the first quarter-on-quarter fall in GDP since the first three months of 2011. The consensus forecast earlier in the week had been for a quarter-on-quarter increase of 0.2 percent, but this had since been revised lower to a fall of 0.1 percent after the release of other data in the last few days. Year-on-year growth slowed to 1.8 percent in the three months to September, down from 3.3 percent in the previous quarter. This is the weakest year-on-year growth since 2013, and is also the first fall in year-on-year growth since the three months to June 2015. Weaker GDP growth in the three months to September was driven by private investment, government spending and net exports. In seasonally adjusted volume terms, dwelling investment fell 1.4 percent on the quarter, while new building investment fell 11.5 percent. This weakness in investment was again driven by the mining sector. Mining investment fell for the twelfth consecutive quarter, dropping 10.6 percent, offset by an increase of 4.8 percent in non-mining investment. Government spending data released earlier in the week showed that government consumption fell 0.2 percent quarter-on-quarter in the three months to September, after an increase of 1.9 percent in the three months to June. There was also a sharp turnaround in government investment spending, which fell 10.4 percent quarter-on-quarter after increasing by 19.8 percent the previous quarter. Net exports made a negative contribution to headline growth of 0.2 percentage points in the three months to September, with a 0.3 percent increase in export volumes outweighed by a 1.3 percent increase in import volumes. Household consumption was the only major expenditure component to make a positive contribution to GDP growth. This component increased 0.4 percent on the quarter and by 2.5 percent year-on-year. On an industry basis, the construction sector made the biggest contribution to the fall in GDP growth, down 3.6 percent on the quarter. Other weak sectors included financial and insurance services, professional scientific and technical services, rental hiring and real estate services and administrative support services. Mining output was flat on the quarter, while agricultural production rose 7.5 percent. The GDP price deflator, which shows the overall price movement in the Australian economy, rose by 1.2 percent in the three months to September and by 1.4 percent year-on-year. Australia's terms of trade, a measure of the relative strength of export and import prices, rose 4.5 percent in the three months to September. Although the fall was sharper than expected, the slowdown in GDP growth confirmed in today's report had already been anticipated, including by the Reserve Bank of Australia. In the statement accompanying its decision to leave policy rates unchanged yesterday, the RBA noted that growth was likely to slow down in the near-term.
This is consistent with the RBA's view that the Australian economy is continuing a transition from the mining investment boom a few years ago, with officials expressing confidence that growth is likely to pick up again soon. In particular, officials expect that the drag on growth caused by weak mining investment will increasingly be offset by positive contributions to growth from mining production and exports, taking advantage of the extra capacity created by past investment in the sector. This week's RBA statement also noted that inflation is expected to return to "more normal" levels. These factors suggest that today's GDP report will have only a limited impact on upcoming policy decisions, despite the attention that will likely be given to the weak headline number. The RBA's next policy meeting is scheduled for early February, by which time officials will have seen inflation data for the three months to December. By then officials will also have seen several other monthly indicators that should provide information on whether the weakness in activity seen in the three months to September has extended into the last quarter of the year. Some more up-to-date data have already shown some positive signs since the start of the current quarter, with full-time employment up strongly and retail sales recording solid growth in October. Officials at the February meeting will also have had time to assess the impact of an expected increase in U.S. policy rates later this month. Definition Gross domestic product (GDP) is the broadest measure of aggregate economic activity and encompasses every sector of the economy and is usually released early in the third month after the reference period. Description GDP is the all-inclusive measure of economic activity. Investors need to closely track the economy because it usually dictates how investments will perform. Stock market investors like to see healthy economic growth because robust business activity translates to higher corporate profits. The GDP report contains a treasure-trove of information which not only paints an image of the overall economy, but tells investors about important trends within the big picture. These data, which follow the international classification system (SNA93), are readily comparable to other industrialized countries. GDP components such as consumer spending, business and residential investment, and price (inflation) indexes illuminate the economy's undercurrents, which can translate to investment opportunities and guidance in managing a portfolio. Each financial market reacts differently to GDP data because of their focus. For example, equity market participants cheer healthy economic growth because it improves the corporate profit outlook while weak growth generally means anemic earnings. Equities generally drop on disappointing growth and climb on good growth prospects. Bond or fixed income markets are contrarians. They prefer weak growth so that there is less of a chance of higher central bank interest rates and inflation. When GDP growth is poor or negative it indicates anemic or negative economic activity. Bond prices will rise and interest rates will fall. When growth is positive and good, interest rates will be higher and bond prices lower.
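As a worked illustration of how such component contributions are computed (contribution ≈ expenditure share × component growth), the sketch below reproduces the report's net-exports figure; the GDP shares used are illustrative assumptions, not official ABS weights:

```python
# How a component's contribution to headline GDP growth is computed:
# contribution ≈ component share of GDP × component growth rate.
# The expenditure shares below are illustrative assumptions, not ABS figures;
# the growth rates are the ones quoted in the report above.

export_share, import_share = 0.20, 0.21   # assumed shares of nominal GDP
export_growth, import_growth = 0.3, 1.3   # % q/q volume growth, from the report

net_exports_contribution = (export_share * export_growth
                            - import_share * import_growth)
print(f"net exports contribution: {net_exports_contribution:+.1f} percentage points")
# ~ -0.2 pp, consistent with the drag on headline growth stated in the report.
```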
https://www.cmegroup.com/education/events/econoday/2016/12/feed476483.html
To me, “Thou shalt not steal” is a principle not because some sage of antiquity said so but because, in my own experience, it has been revealed as a principle which must be adhered to if we are not to perish from the face of the earth. To the ones who have not been graced with this revelation; to the ones who hold that they should gratify their personal charitable feelings, not with their own goods, but by using the police force to take goods from others; to those who would indulge in legal thievery and honestly think the practice right and honorable; to those, I say, “Thou shalt not steal” is no principle at all. It is only the principle of someone else. A principle, then, is what one holds to be a fact of life, of nature, or, as some of us would put it, of God. If this is correct, it follows that a principle is a matter of personal individual judgment. Judgment is fallible. Therefore, there are wrong principles as well as right principles. Aristotle said there were a million ways to be wrong, only one way to be right. That suggests the measure of fallibility among us. Now then, if principle is a matter of personal judgment, and judgment is conceded to be fallible, on what is right principle dependent? The discovery and adoption of right principle are dependent on the evolution of judgment through logic, reason, observation and honesty. When judgments deteriorate we have what history refers to as the “Dark Ages.” When judgments evolve or improve, reference is made to “The Renaissance.” The question that grows out of this reasoning is, how does judgment evolve? My answer is, by revelation. For instance, I am convinced that no person is capable of rising above his best judgment. To live in strict accordance with one’s best judgment is to live as perfectly as one can, as humble or as mediocre as that may be. The one hope for personal betterment lies in raising the level of one’s judgment; judgment is a limiting factor. If the evolution of judgment rests on revelation, how is revelation to be achieved? I can think of no answer superior to that suggested by Goethe: “Nature understands no jesting; she is always true, always serious, always severe; she is always right, and the errors and faults are always those of man. The man incapable of appreciating her she despises; and only to the apt, the pure, and the true, does she resign herself, and reveal her secrets.” The sole way to revelation, to ultimate truth, to nature, as Goethe puts it, or to God, as I put it, lies through one’s own person. It is my faith that the individual is God’s manifestation so far as any given individual is concerned. My way to God is through my own person. He will reveal Himself to me, I will be His manifestation, only to the extent that I am “apt, pure and true.” But the revelation of truth and of principles does not come automatically, without effort, like “manna from Heaven.” Revelation is the product of a diligent application of an individual’s mental resources. Truth must be sought, and its revelation is most likely in an active mind. It is rather easy to observe that to some, very little, if anything, is ever revealed. To others there come revelations far beyond anything I now possess or have any right seriously to expect. Anyway, with this as a faith, based, as it is, on such revelation as is mine, God is as intimate to me as my own person. He exists for each of us only insofar as we achieve our own conception of His likeness.
This is why I believe, so fervently, in the sanctity and dignity of the individual. This is why I subscribe to the philosophy that each person has inalienable rights to life, liberty and the pursuit of happiness. For me to deny this philosophy by violating the life, liberty or property of another, by inflicting my ways on other persons, is for me to assert myself as a god over God, to interfere with another person’s relationship with God. For me to use compulsion in any manner whatsoever to cast others in my image is for me to rebuke God in his several manifestations. If one accepts the individual in this light, a rule of conduct emerges with crystal clarity: reflect in word and in deed, always and accurately, that which one’s best judgment dictates. This is you in such godliness as you possess. To do less, to deviate one iota, is to sin against yourself, that is, against your Maker as He has manifested Himself in you. To do less is not to compromise. To do less is to surrender! Certainly, there is nothing new about the efficacy of accurately reflecting one’s best judgment. This principle of conduct has been known throughout the ages. Now and then it has been expressed beautifully and simply. Shakespeare enunciated this principle when he had Polonius say these words: “This above all: To thine own self be true, And it must follow, as the night the day, Thou canst not then be false to any man.” Edmond Rostand meant nothing different when he wrote this line for Cyrano: “Never to make a line I have not heard in my own heart.” American folklore counseled intellectual integrity with: “Honesty is the best policy.” The Bible announces the penalty of surrender; what it means to abandon principle. It says: “The wages of sin is death.” Whether the wages of sin be mere physical death or the death of man’s spirit (his character, his integrity, his self-respect), one needs to make no further inquiry to verify this Biblical pronouncement. Abundant testimony has been provided all of us in our lifetime. Nor is the end in sight. All the world is filled with examples of warped judgments and principles abandoned: men ruling over man; the glamour of popularity rather than the strictness of judgment directing policy; expediency substituting for such truth as is known; businessmen employing experts to help them seem right, often at the expense of rightness itself; labor leaders justifying any action that gratifies their lust for power; political leaders asserting that the end justifies the means; clergymen preaching expropriation of property without consent in the name of “common good”; teachers advocating collectivism and denying the sanctity and the dignity of the individual; politicians building platforms from public-opinion polls; farmers and miners joining other plunderbundists in demands for other people’s property; arrogance replacing humility; in short, we are sinking into a new dark age, an age darkened by persons who have abandoned intellectual integrity; who through ignorance or design, have adopted bad ideas and principles. If we were suddenly to become aware of foreign vandals invading our shores, vandals that would kill our children, rape our women and pilfer our industry, every last man of us would rise in arms that we might sweep them from our land. Yes, these bad ideas, these ideas based on the abandonment of absolute integrity, are the most depraved and dangerous vandals known to man. Is the Bible right that “the wages of sin is death”? I give you the last two wars, wars born of unreason and lies.
And the present so-called peace! I give you the Russia of 1929-1932 where millions died of starvation and, in other years, where other millions died in this and other ways. I give you almost any place in the world today. Perhaps the reason that so many fear stating accurately what they believe is that they are not aware that it is safe to do so. Does it take courage to be honest, that is, does one have to be brave to state accurately one’s highest opinion? Indeed, not. A part of revealed truth is: It is not dangerous to be honest. One who possesses this revelation is to that extent intelligent. Being honest, not surrendering principle, rests only upon intelligence, not at all upon courage. Relying, erroneously, on courage, many persons become blusterous with their opinions; they get cantankerous when they are honest. But, in this case, the villain is their cantankerousness, not their honesty. Finally, some may contend that, due to the great variety of judgments, differences and antagonisms would still remain even if everyone were a model of intellectual integrity. This is true. But differences lend themselves to a change toward the truth in an atmosphere of honesty. Under these circumstances they can be endured. For after all, life, in a physical sense, is and for ages to come, will be a compromise. But if principle is abandoned, even compromise will not be possible. Nothing but chaos! Honesty (each person true to himself at his best) is the condition from which revelation springs; from which knowledge expands; from which intelligence grows; and from which judgments improve. Honesty and intelligence are godlike and are, therefore, primary virtues. Anyone is capable of being true to himself. That is the one equality we were all born with. Its abandonment is the greatest sin of all. If there be no falseness there will then be as much intelligence as we are capable of. How much nearer God can we get?
Brown Advisory is pleased to host “Environmental Risk in the Diversified Chemicals Industry,” a presentation of key findings and engagement opportunities from CDP’s latest sectoral research report, “Back to the Laboratory: Are global chemical companies innovating for a low-carbon future?” On Thursday, September 17th, James Magness, Head of Investor Research at CDP, will present analysis that looks at climate change and water risks and opportunities via metrics and assessment of financial impact on earnings. The report features a Sustainability League Table where we rank the leaders and the laggards based on a number of different emissions-related and water metrics. Key findings of the research include:
http://events.r20.constantcontact.com/register/event?oeidk=a07ebficbcqc8626f42&llr=sadl6mlab&showPage=true
Dedicated to lord Shiva and built towards the end of the 19th century, this is the only kovil in the country known to be constructed entirely of gray granite. It is a regal structure resembling Dravidian temple architecture. The pillars are intricately carved with motifs of birds, flowers and Hindu deities. The kovil located in Kotahena is of great importance to devotees especially during the Maha Shivarathri festival. The temple’s magnificent architecture is bound to leave one awe-struck.
https://www.timeout.com/sri-lanka/attractions/sri-ponnambala-vaneswarar-kovil
Green tea is high in antioxidants, which help reduce free radical toxins in the body to decrease the frequency and intensity of cold symptoms. Herbal teas do not contain caffeine, which can be dehydrating, and often have high anti-inflammatory and antibacterial properties as well. Mullein tea is an excellent choice because it is an expectorant, which means it helps subdue chronic coughing. Make this type of tea by steeping mullein leaves in boiling water for ten minutes. Strain out the leaves and sip the liquid. For the other teas, simply boil water and steep according to the directions. Besides the medical benefits discussed, drinking clear fluids is an ideal way to combat the common cold. Furthermore, hot beverages are often more comforting when an individual is ill, which makes herbal and green teas the perfect options for fighting the common cold.
https://healthprep.com/cold-flu-cough/10-common-cold-treatments-found-in-your-kitchen/2/
I was recently discussing the value of metrics in an early stage organization like Factorial, where I found myself defending the value of metrics, but it seemed that I was instead claiming that strategic business decisions were to be taken based on past numbers exclusively. Nothing could be further from my opinion. My point regarding metrics is very simple: we’re emotionally self-centered animals and have a tendency to convince ourselves that our view is the right one. An approach to fixing this is being open-minded, which I would define as the ability to consider alternative views as possible, and trying to remove all personal bias from inclining towards a particular view. A way to remove personal bias and ensure enough open-mindedness is by challenging one’s own view the same way the others are challenged and setting a rational and measurable objective to validate the truth in it. In the world of business, that means that either a strategic decision or a tactical initiative will be right or wrong if a certain goal is reached. A measurable and objective goal. The universal metric a business uses to measure its progress is profit, but there’s an infinite stream of factors to it and lagging indicators: revenue growth, efficiency at serving customers, loyalty and satisfaction of customers, all the way down to something trivial such as visits to the website (as long as there’s a correlation between that and profits). So, what does it mean, in my opinion, to be “metrics-informed” (a more explicit name than the infamous “metrics-driven”)? Simply acknowledging the context where a decision is being made and objectively stating what the success of that decision looks like. For a tactical initiative such as changing the color of a button this is trivial, as it can be measured within a very short time frame: increase conversion rate by at least 10%. Some decisions are more long-term and have a strategic nature, often based on a vision, a set of principles and a world that doesn’t necessarily exist yet, since it’s being built by the same decision. Such decisions will most definitely not be proven right or wrong within a few days or even weeks. It can take months – even years – until a measurable impact exists. But it will. The somewhat counter-intuitive part is that the past doesn’t determine the future. Nobody looks at the rearview mirror when driving a curvy road; the eyes must be pointing ahead. This is especially true in a startup, where the future weeks often outweigh the past months or years. In a scenario of high growth, what’s been done is irrelevant compared to what needs to happen in the immediate future. That’s why the company vision, together with personal experience and intuition, are the key ingredients for brave innovation and strategic thinking. Metrics exist to keep us honest and allow for objective retrospective and analysis. As the lean startup mantra has it: Build, Measure and Learn. If you have thoughts on this topic I’d love to hear them! Ping me on twitter @jordiromero. The original conversation was spurred by Bernat, Pau, and César.
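As a minimal sketch of what stating success up front looks like for the button-color example (all names and numbers below are hypothetical):

```python
# Minimal sketch of being "metrics-informed": state the measurable goal
# before the change ships, then check it afterwards. All numbers are
# hypothetical. Goal: the new button color must lift conversion rate by
# at least 10% relative to the old one.

def relative_lift(conversions_a, visitors_a, conversions_b, visitors_b):
    rate_a = conversions_a / visitors_a      # control (old color)
    rate_b = conversions_b / visitors_b      # variant (new color)
    return (rate_b - rate_a) / rate_a

lift = relative_lift(conversions_a=120, visitors_a=4000,
                     conversions_b=150, visitors_b=4100)
goal = 0.10  # the objective stated before the experiment ran
print(f"lift: {lift:+.1%} -> {'goal met' if lift >= goal else 'goal not met'}")
```

The code is trivial on purpose: the discipline is in writing down the goal before looking at the numbers, not in the arithmetic (a real experiment would also check statistical significance before declaring victory).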
https://jrom.net/the-value-of-metrics-vs-the-relevance-of-what-has-been-done/
Australia’s Uluru closes to climbers for good YULARA, Australia: Australia’s Uluru officially closed to climbers for good on Friday, although the last visitors to scale the sacred rock were allowed to stay until sunset, as a permanent ban took effect after a decades-long fight by indigenous people. To commemorate the climbing ban, public celebrations will take place over the weekend, when the dismantling of the trail and its railing is also expected to begin. Earlier in the day, hundreds of tourists clambered up the Unesco World Heritage-listed 348-metre monolith, formerly known as Ayers Rock. Authorities had opened the climb mid-morning amid clear skies, after blustery conditions delayed early trekkers. Uluru is a top tourist draw in Australia despite its remote desert location near Alice Springs in the Northern Territory. While most visitors don’t climb its steep, red-ochre flanks, the impending October 26 ban triggered a surge in people taking a final opportunity to make the trek. Nearly 400,000 visitors flocked to the Australian landmark in the year to end-June, government data shows. Australians still make up the bulk of the visitors who climb the rock, followed by Japanese, Parks Australia says. The Anangu people, the traditional owners of Uluru, have called for the climb to be closed since 1985, when the park was returned to indigenous control. The Anangu say Uluru has deep spiritual significance as a route their ancestors took. “This is our home,” read a sign at the base of the rock. “Please don’t climb.” “It shows that Anangu can actually make decisions about the land they own and, more importantly, Anangu aren’t going to have to get sad anymore,” said Steven Baldwin, national park operations and visitor services manager. — Reuters
Your story seems to be falling apart, right before your eyes. The characters are shallow, the plot is going nowhere and has tons of holes, and your setting doesn’t work. “Why is this happening?” you may think. “What could I have done to prevent this?” Well, did you outline your novel? Inevitably, for some writers, the answer to that question will be “no.” But today, I’m going to show you how an outline can help your story stay alive, even if you write by the seat of your pants. What Exactly Is An Outline Though? What is an outline? Well, it can quite honestly be different things for different people. But I would define an outline as a plan for the drafting (or writing) stage of your novel. There are several different methods of outlining, a few of which I will discuss more in-depth in a companion article to come. So what do I mean by an outline? Am I talking about a scene list? A scene list is where you list off every single scene in your book. This can be done with scene cards, as YouTuber Abbie Emmons does, or with a simple bullet-point list. But a scene list alone is not what I would call an outline. What about story structure, or the three-act method? This is where you write out each point in the three-act story structure. It can give you an amazing idea of the structure of your story and help you plan out the plot. But even this alone is not an outline used to its fullest potential. Is an outline just pure brainstorming? You just brainstorm what happens in your story and what your characters will be like? Again, this is not what an outline could be at its fullest potential. I believe an outline can be a combination of all three of these things in many different ways. Redefining the Outline An outline is your story structure, your scene list, and brainstorming all at once (with a few nuances and exceptions). This can be manifested in several different ways. There’s the Snowflake Method by Randy Ingermanson, the Agenda Method as told by John Fox, and many more ways of outlining. In an outline, you must brainstorm, and you must work out your plot through your story structure. You also have to have an idea of what your characters will look and act like, what they will struggle with, and what they desire. Depending on how nitty-gritty you want to get (again, this varies from writer to writer), you can do in-depth scene lists. Planning out the structure of your story is a scene list of sorts, just not a super in-depth one. But all of these things combine into a beautiful outline that helps you get an idea of where your story is going. In the kind of outline I’m describing, maybe you’d first brainstorm your plot, conflict, and characters. Maybe you would do this with a friend so you would have a sounding board to bounce ideas off of. Then, maybe you’d flesh out your plot until it covers all the points in the three-act structure, listing what happens in your plot at each point. After that, you may say, “I need to develop my characters more,” so you work on fleshing out your characters and their arcs, and plan how they will grow and change throughout the story. Along with this, you can start to develop the themes that will be woven throughout the story, outlining what themes you would like to show and how they will be shown through your characters. Finally, if you decide you would like a more in-depth scene list to know where you are going, that can be done. Whether it’s more structured (e.g.
with scene cards) or in bullet points, a scene list can be very beneficial during the drafting stage. How is this different from other outlining methods? Most methods focus on plot, and occasionally on characters. But by outlining your plot, your characters, your themes, and your scenes, you will have a well-rounded outline that prepares you for the drafting stage of your novel. You will have a better idea of what exactly will happen in your novel before it even happens. “Oh, I Don’t Usually Outline.” But why outline? What’s the point of going through all that work? Depending on the writer, outlines can take months and months. But when you talk to a lot of new writers or pantsers and ask about an outline, they will tell you, “Oh, I don’t usually outline.” I remember talking to one writer friend of mine, and she asked how far along in my story I was. I said, “Right now I’m working on outlining my story.” She smiled and said, “Oh, I never really do an outline.” Right before she said that, though, she mentioned how she was having a hard time with writer’s block and staying motivated. And that brings us to reason number one why you should try to outline your novel before you draft it. 1. Outlining Your Novel Gives You Motivation and Stops Writer’s Block When you outline your novel and have a good idea of where you are going before you draft, you will struggle far less with writer’s block and motivation while drafting. Why? Because you already know where you are going before you even start writing your first draft. Having an outline gives you the plans you need so that when you draft, you know what to write. It also gives you motivation to write because you have no excuse not to. You cannot say, “I have writer’s block,” or “I don’t know what to do next,” because an outline solves those problems. 2. In an Outline You Get Rid of Plot Holes and Gain a Better Story Premise One reason stories often fall apart is that the premise, the core idea of the story, was flawed in the first place. When we form the main idea of the premise, we often do not create an outline to flesh it out, and therefore we miss all kinds of plot holes that originated in the idea of the story. Plot holes, or problems with the story and plot as a whole, are easier to spot ahead of time. They are easier to catch before you take the time to write thousands of words in a draft. An outline makes you stop, think, and look at your story so you can address the flaws within it. I remember writing a novella that was about 35,000 words long in total. But about 22,000 words in, I realized the plot had a ton of problems! All of those problems could have been avoided if I had taken the time to write an outline first instead of rushing in and pantsing the story. In my story, not only was the plot disconnected, my characters fell flat. 3. An Outline Helps You Grow Your Characters Ahead of Time This brings us to our third point. An outline can help your characters be better developed before they even enter your story. An outline is more than just brainstorming, scene lists, and a story structure template, as I mentioned above. It also includes the careful development of your characters. It involves taking the time to figure out what your characters’ arcs are, what they desire, and what lie they’re believing. In an outline, you can also start to brainstorm (there’s that aspect of the outline being used!) what your characters’ personalities will be.
By outlining your characters ahead of time, they will better engage your readers in your draft. Not only that, but you will also have a better idea of what the themes will be throughout your story because of your characters being outlined. 4. An Outline Helps You to Have a Better Idea of Your Themes Ahead of Time By outlining your plot and your characters ahead of time, you have a better idea of what your themes will be throughout your story. If we take the time to outline our characters and their struggles, we begin to see what themes will flow from our story. Not only that, but you can outline your themes as well. This can’t be done too rigidly, however, because themes tend to emerge naturally as you write your first draft. But you can get a general idea of what your themes will look like throughout your story. How exactly can you do this? Unfortunately I cannot go through exactly how to outline theme in this article—theme is very complex—but the Christian website StoryEmbers.org has excellent articles on theme! This article by Story Embers goes through using theme to outline your novel. I definitely recommend it if you’d like to outline your themes better. But again, by outlining your novel before drafting, you won’t have to worry about your themes, because when you create a rich, deep outline, your theme will be included in it, even if you don’t rigidly outline your theme. The Case for Outlining As shown by these four points, outlining has many benefits, even though many writers skip it. For some writers who don’t outline, the novel may fall apart during the drafting stage, resulting in the death of the story or in extensive editing. Outlining is a good step to take for the good of your brain and your story. But what about the pantsers? What about the free spirits? Believe it or not, I am very much pantser-minded. When I started my first story with an outline, my brain started to fry. I started a small snippet series just to pants something off, so my sanity wouldn’t be lost. After outlining, however, my story became so much clearer. And I was so much more excited to write it! Pantsers, I understand how your mind works. It’s almost constricting to create an outline. But I say to you: try it. After outlining my first full-length novel, the drafting stage felt freeing. I knew that plot holes were mostly taken care of, and that editing afterwards (not my favorite stage) would be easier. I didn’t have to worry about my story falling apart because I had a plan. This plan was actionable; I could follow it and feel like my story was secure. I was more motivated and didn’t get writer’s block as easily. And now, I think I’m a combination of a pantser and a plotter. Sort of a plantser, if you will. But, I realize, it’s still challenging for pantsers to get started and keep outlining. Thankfully, there are many resources to help with outlining and to make the outlining process easier. Three Resources To Help Pantsers Outline 1. Other Writers Well first, let me just say, other writers are lifesavers for so many reasons. They encourage you through the hard parts of writing, but in this case, a fellow writer helped me through the process of outlining. She walked me through it, step-by-step, and looked over my outline each time I tweaked it. Now, while I draft my novel, I know exactly where I’m going. I know how my characters should respond so their arcs even out, and I know what themes are flowing through the story.
Find plotters to help you; it will save your life, and you may just make many new friends in the process! 2. Worksheets For you, dear pantsers, there are also many worksheets. K.M. Weiland has an amazing workbook for outlining that goes along with the book she has written about outlining your novel. She also gives a review of a writing planner called the WriteMind Planner on her blog. There are many worksheets on the internet as well. These can help you organize your outline on paper and get the big idea of your story. They can guide you step by step through this process. Think of worksheets like a writing prompt that helps you get past writer’s block... except during the outlining stage. But when you print off worksheets, make sure that you stay organized, because having a lot of different worksheets (and extra pieces of paper so you have more space to write!) can easily become disorganized and overwhelming. 3. Software Yet another resource available for pantsers is writing and outlining computer software. Again, outlining expert K.M. Weiland has an Outlining Your Novel software. There’s Scrivener—which is not just for outlining, but for writing too—as well as Plottr and Dabble. Software combines the awesomeness of worksheets and word editors in one. Like worksheets, software can guide you step by step through each part of the outlining process. Software can also help you organize your work much better. Instead of having lots of worksheets sporadically placed around your room or desk, software enables you to organize your outline and draft easily. Not to mention, with a search bar you can find what you need at the click of a button. Of course, these methods aren’t for everyone, but all of them are there to help us pantsers get through the outlining stage of our story. Your Next Steps I pray that this article will motivate you to create an outline (even you pantsers!) so that your story will be well-crafted. I hope that this article not only proves a point but helps you craft a better story. Pantsers, if you’re still having trouble, fear not! Below I have a trouble-shooting guide for you that will help your outline along (and there may be a few helpful tips for plotters as well!). With an outline, your next novel will be better than ever before, so go and write, writer!
https://theyoungwriter.com/4-reasons-to-outline-your-novel/
But they said to her, “There is no one among your relatives who is called by this name.” So they made signs to his father—what he would have him called. And he asked for a writing tablet, and wrote, saying, “His name is John.” So they all marveled. Immediately his mouth was opened and his tongue loosed, and he spoke, praising God. [Luke 1:59-64 NKJV] In some ways in this meditation, I am shamelessly leaning on my own family and my experience of it; but it is what I know, and all of us can only speak from what we know. In the Bible names are very important indeed. I learned that when I did a study on rebuilding the gates of Jerusalem as told in the book of Nehemiah. The names of the many saints who contributed to that work spoke enormously of the spiritual nature of that rebuilding and gave me an insight into something of which I’d been told but had not realised until I’d seen it for myself. This is important — seeing it for yourself — because our relationship with the Lord is an intensely personal thing. He speaks to you and me according to our relationship with Him, but never against His connection with His creation. Do we know this? In this passage from Luke, many were astonished that Zacharias, John’s father, went against tradition and did not name his son after himself, but as Elizabeth had heard. This was a man who had now heard from the Holy Spirit. We are encouraged to do the same. John, as we know, means “Jehovah is a gracious giver”. Isn’t He just. John (the Baptist) had a crucial part to play in the eternal story. Jesus’ name was important, and each one of us is named by God in His eternal purpose. We are, each one, given a new name in Him. That name is Jeshua (which is also Joshua, which is also Jesus) and means “Jehovah is salvation”. We are called by the name of our Father, which is salvation. Our natural fathers were integral to our earthly existence; but our Heavenly Father is essential to our eternal existence. Always remember that we have been called by the name of His Father. Hallelujah! Never forget that our capacity in spiritual matters is measured by the promises of God. Is God able to fulfil His promises? Our answer depends on whether we have received the Holy Spirit.
https://www.cornerstonekilmartin.com/lively-stones-meditations/called-by-the-name-of-his-father
Young men in more than 60 countries around the world face the prospect of mandatory military conscription.[1] This occurs at a critical juncture in their lives – when they are making decisions about higher education, entering the labour market, and are at the peak of the age-crime profile. It is therefore not surprising that conscription remains a hotly debated topic; a number of European countries have recently abolished it (France in 1996, Italy in 2005, Sweden in 2010, and Germany in 2011), while others have had failed referendums (Austria and Switzerland in 2013).[2] This column aims to raise awareness of this important issue and shed more light on how mandatory military conscription may affect the crime and labour market outcomes of young men.[3] The contemporaneous effect of conscription on crime is ambiguous. Keeping young men engaged and isolated from mainstream society during their most crime-prone years can suppress crime through incapacitation, while increased social interactions among young men who serve could increase crimes that are highly ‘social’ in nature. Conscription could also affect post-service crime through a number of channels. An ‘incapacitation’ effect, combined with the persistent nature of crime, could lead to a reduction in post-service crime. The promotion of democratic values, obedience, and discipline may also decrease post-service crime by giving men focus at this high-risk age. Exposure to weapons and desensitisation to violence, however, could exacerbate criminal tendencies (Grossman 1995). Conscription may also affect crime through its impact on education and labour market outcomes. Conscription would decrease crime if it is viewed as a positive signal of quality by employers, or if it improves a young man’s marketable skills, health, or physical fitness. However, post-service crime may increase if conscription interrupts a continuous educational path, delays entry into the labour market, and reduces future labour market opportunities. Intense exposure to new peers during service may have either positive or negative effects, depending on the relative characteristics of the new and old peer groups. Reconciling a mixed and outdated literature There is little consensus in the academic literature about the impact of this potentially life-transforming event. Angrist’s (1990) seminal study found that Vietnam draftees in the US had lower earnings than non-draftees. Subsequent papers (Angrist and Chen 2011, Angrist et al. 2011) find that this gap closes over time, so that by age 50 draftees are on par with non-draftees. There is some evidence that conscription causes an increase in violent crimes among Vietnam veterans in the US (Rohlfs 2010, Lindo and Stoecker 2014), though this is not seen amongst Australian veterans (Siminski et al. 2016). The effects of peacetime conscription are similarly mixed: no effect on wages in Britain and Germany (Grenet et al. 2011, Bauer et al. 2012), a negative effect in the Netherlands and for high-ability men in Denmark (Imbens and van der Klaauw 1995, Bingley et al. 2014), and a positive effect for low-educated men in Portugal (Card and Cardoso 2012). Galiani et al. (2011) find that conscription increases crime in Argentina, while Albaek et al. (forthcoming) find that service reduces property crime among Danish men with previous convictions. What can explain these diverse findings? - First, the effect of conscription may change over the lifecycle.
For an outcome like crime, which peaks in young adulthood, focusing on crime after age 40, as done in some of the previous studies, may skew the results. - Second, the conscription ‘experience’ varies greatly across studies. While peacetime versus wartime conscription is the most obvious example, other differences may emerge as countries approach the end of their mandatory conscription regimes. - Third, measured differences may be related to differences in how the causal effect is identified. Because of the selection process involved in military service, one cannot simply compare outcomes for those who do and do not serve. The above-mentioned studies use various quasi-experimental designs to solve this potential omitted-variables problem. The most convincing studies rely on random variation in service generated by draft lotteries. But we should also be interested in the effect of service in countries that do not rely upon a lottery to assign service. Several studies do this by comparing cohorts before and after the abolition of mandatory conscription. This research design can yield different results than the lottery design for a number of reasons: the conscription experience likely differs when it is about to be abolished; it may include general equilibrium effects; and the average and marginal individuals ‘treated’ may not be comparable across studies. If conscription has heterogeneous effects, then it is not surprising if studies with different identification strategies find different effects. New research Our new paper (Hjalmarsson and Lindquist 2016) contributes to this debate by utilising individual administrative records and a quasi-experimental research design to identify the causal impact of mandatory military conscription in Sweden on crime (both during and after conscription), legitimate labour market outcomes, and work-related health outcomes. Our paper stands out from the previous literature by: - Studying modern-day cohorts; - Using a comprehensive set of crime and labour market outcomes; - Applying a new identification strategy; and - Providing the first clean evidence of an incapacitation effect using information on the exact dates of service. Mandatory military conscription in Sweden dates back to 1901 and was abolished in 2010, after a gradual decline that began at the end of the Cold War. For most of this period, Swedish male citizens underwent an intensive drafting procedure upon turning 18, including tests of physical and mental ability. Generally speaking, the tested were positively selected for conscription; those with the highest cognitive and non-cognitive test scores were most likely to serve. Given that such ability measures are also likely correlated with criminality, a naïve comparison of post-service crime rates of those who do and do not serve would almost certainly yield biased estimates of the effects of conscription. Though potential Swedish conscripts are not assigned to service on the basis of a lottery, there is some ‘chance’ involved in service decisions. Namely, each individual’s test results were reviewed by a randomly assigned officiator with a relatively high or low tendency to assign conscripts to service; we can observe these officiators from 1990 to 1996. It is this exogenous variation in the likelihood of serving that we use to identify the causal effect of conscription on crime and labour market outcomes. Those who serve are 20 percentage points more likely to have been assigned to a high-service-rate officiator than those who do not serve.
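To illustrate the logic of this identification strategy, here is a minimal, self-contained sketch (synthetic data and made-up coefficients, not the authors' code or the Swedish registers) of how a randomly assigned officiator can serve as an instrument: selection on unobserved ability biases a naive comparison, while two-stage least squares recovers the causal effect.

```python
# Sketch of the instrumental-variables (2SLS) logic behind the design:
# random officiator assignment shifts the probability of serving, and that
# exogenous shift identifies the effect of service on a later outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved ability confounds service and crime (positive selection into service).
ability = rng.normal(size=n)

# Instrument: assignment to a high-service-rate officiator (random, as in the paper).
z = rng.binomial(1, 0.5, size=n)

# Service depends on the instrument and on ability (selection).
serve = (0.3 * z + 0.5 * ability + rng.normal(size=n) > 0.5).astype(float)

# Outcome: true causal effect of service set to +0.4; ability independently reduces crime.
crime = 0.4 * serve - 0.6 * ability + rng.normal(size=n)

# Naive OLS is biased by selection on ability.
X = np.column_stack([np.ones(n), serve])
ols = np.linalg.lstsq(X, crime, rcond=None)[0][1]

# 2SLS: first stage predicts service from the instrument; second stage uses the fit.
Z = np.column_stack([np.ones(n), z])
first_stage_fit = Z @ np.linalg.lstsq(Z, serve, rcond=None)[0]
X2 = np.column_stack([np.ones(n), first_stage_fit])
iv = np.linalg.lstsq(X2, crime, rcond=None)[0][1]

print(f"naive OLS estimate: {ols:+.3f} (biased by selection)")
print(f"2SLS/IV estimate:   {iv:+.3f} (close to the true +0.4)")
```

With positive selection on ability and ability reducing crime, the naive estimate understates the effect; the instrument strips out the selection, which is the essence of the officiator design.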
New findings Our baseline results are striking: - Military service significantly increases both the likelihood of crime and the number of crimes between ages 23 and 30. These effects are seen across all crime categories, are quite large in magnitude, and are driven by those with a criminal history prior to service or who come from low socioeconomic status households. - Given these findings, it is perhaps surprising that we also find large and significant incapacitation effects of conscription, especially for drug and alcohol offences and for traffic crimes. Unfortunately, our analysis suggests that these effects are not large enough to break a cycle of crime that has already begun prior to service. This heterogeneous impact of service is also seen with respect to labour market outcomes. Individuals from disadvantaged backgrounds have significantly lower income as a result of service, and are more likely to receive unemployment and welfare benefits. In contrast, military service significantly increases income and does not affect welfare and unemployment for those at the other end of the distribution. There is no effect of service on the likelihood of higher education. The only positive effect of service we see is a decrease in disability benefits and the number of sick days; these effects are in fact seen for all subsamples, not only those from disadvantaged backgrounds. Conclusion Our analysis indicates that mandatory military conscription significantly impacts the life course of young men; the heterogeneous nature of the effects reinforces already existing inequalities in the likelihood of future success. Our results contradict the idea that military service may be a way to straighten out troubled youths and build skills that are marketable in the post-service labour market. These non-monetary costs should be taken into account when deciding whether to reinstate or abolish mandatory conscription, or when devising the system through which conscription occurs (e.g. lottery, testing, etc.): who are the average and marginal conscripts, and how will conscription affect these individuals? References Albaek, K, S Leth-Petersen, D le Maire and T Tranaes (forthcoming), “Does Peacetime Military Service Affect Crime?”, Scandinavian Journal of Economics. Angrist, J D (1990), “Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from Social Security Administrative Records”, American Economic Review 80(3), 313-336. Angrist, J D and S H Chen (2011), “Schooling and the Vietnam Era GI Bill: Evidence from the Draft Lottery”, American Economic Journal: Applied Economics 3(2), 96-118. Angrist, J D, S H Chen and J Song (2011), “Long-term Consequences of Vietnam-Era Conscription: New Estimates Using Social Security Data”, American Economic Review: Papers and Proceedings 101(3), 334-338. Bauer, T K, S Bender, A R Paloyo and C M Schmidt (2012), “Evaluating the Labour-Market Effects of Compulsory Military Service”, European Economic Review 56(4), 814-829. Bingley, P, P Lundborg and S Vincent Lyk-Jensen (2014), “Opportunity Cost and the Incidence of a Draft Lottery”, IZA DP No. 8057. Card, D and A R Cardoso (2012), “Can Compulsory Military Service Raise Civilian Wages? Evidence from the Peacetime Draft in Portugal”, American Economic Journal: Applied Economics 4(4), 57-93. Galiani, S, M A Rossi, and E Schargrodsky (2011), “The Effects of Peacetime and Wartime Conscription on Criminal Activity”, American Economic Journal: Applied Economics 3(2), 119-136.
Grenet, J, R Hart, and E Roberts (2011), “Above and Beyond the Call: Long-term Real Earnings Effects of British Male Military Conscription in the Post-War Years”, Labour Economics 18(2), 194-204. Grossman, D (1995), On Killing: The Psychological Cost of Learning to Kill in War and Society, Boston: Little, Brown. Hjalmarsson, R and M J Lindquist (2016), “The Causal Effect of Military Conscription on Crime and the Labour Market”, CEPR Discussion Paper No. 11110. Imbens, G and W van der Klaauw (1995), “Evaluating the Cost of Conscription in The Netherlands”, Journal of Business & Economic Statistics 13(2), 207-215. Lindo, J M and C Stoecker (2014), “Drawn into Violence: Evidence on ‘What Makes a Criminal’ from the Vietnam Draft Lotteries”, Economic Inquiry 52(1), 239-258. Poutvaara, P and A Wagener (2007), “Conscription: Economic Costs and Political Allure”, The Economics of Peace and Security Journal 2(1), 6-15. Poutvaara, P and A Wagener (2011), “The Political Economy of Conscription”, in Christopher J. Coyne and Rachel L. Mathers (eds), The Handbook on the Political Economy of War, Cheltenham: Edward Elgar, 154-174. Rohlfs, C (2010), “Does Combat Exposure Make You a More Violent or Criminal Person? Evidence from the Vietnam Draft”, Journal of Human Resources 45(2), 271-300. Siminski, P, S Ville, and A Paull (2016), “Does the Military Train Men to Be Violent Criminals? New Evidence from Australia’s Conscription Lotteries”, Journal of Population Economics 29(1), 197-218. Endnotes [1] See the CIA’s World Factbook (https://www.cia.gov/library/publications/the-world-factbook/fields/2024....) and http://chartsbin.com/view/1887 for a summary of this data. [2] Though the US moved to an all-volunteer military in 1973, young men ages 18 to 26 are still required to register for the draft. Today, the US is debating extending this requirement to young women (http://www.nbcnews.com/news/us-news/military-officials-women-should-register-draft-n509851). Sweden is also considering reinstating some form of mandatory public service for both men and women. [3] For a broader discussion of the pros and cons of a conscription army vs. an all-volunteer army from a more general economic and political perspective, see Poutvaara and Wagener (2007, 2011).
https://voxeu.org/article/impact-mandatory-military-conscription-crime-and-labour-market
SUMMARY: The Program Manager will provide regional direction and leadership to staff implementing College Success Foundation's pipeline of integrated student support services, including but not limited to College Preparatory Advisors (CPA), Higher Education Readiness Opportunity (HERO), Assistant Program Officers (APO), and AmeriCorps members. The Program Manager will manage and strategize a pipeline of integrated student services which may include, but is not limited to, early college awareness, personal and leadership development, college planning, and promotion of CSF-administered and other scholarship programs. The Program Manager recommends, participates in, and implements policies, procedures, and materials to ensure multi-program goals are met for the region. PRIMARY DUTIES AND RESPONSIBILITIES: - Manage day-to-day personnel and program management, including but not limited to program initiatives, training, staffing/recruiting, time and attendance, the performance management process, and event planning. Recommends salary adjustments, transfers, promotions, and corrective action measures, as necessary or required. - In conjunction with the Director of Programs, manage and administer the adopted site budget in a cost-effective manner. May be required to participate in the preparation of the annual budget. - Reviews ongoing performance results against targets. Takes corrective measures with authorization and escalates as needed. - Keeps Director(s) promptly and fully informed of all problems or unusual matters of significance, takes prompt corrective action where necessary, or suggests alternative courses of action which may be taken. - Manage collaborative efforts with teachers, principals, college access providers, and community-based organizations to promote college opportunities for students. May be required to facilitate programming at new sites to meet the demands of the organization. - Collaborate and work closely with, but not limited to, office staff, program staff, the national office, the Director of Programs, CSF National Program Directors, and Regional Directors on initiatives or matters that have a direct impact on the regional site or the Foundation. - Manage, sustain, and deliver on program development initiatives, scholarship programming partnerships, and partner-sponsored events. - Ensures the fidelity of student and program data by overseeing data collection practices that are accurate, timely, and in accordance with CSF policy. REQUIRED KNOWLEDGE, SKILLS AND ABILITIES: - Excellent verbal and written communication skills. - Demonstrated understanding of complex program management. - Budget and fiscal discipline. - Ability to facilitate meetings, build consensus, and speak in public. - An established commitment to work collaboratively and harmoniously with CSF staff, colleagues, and stakeholders. A commitment to diversity and equal opportunity. - Skills in Office 365. - Organizing, performing, and prioritizing multiple tasks with excellent attention to detail. - Ability to handle a variety of tasks and projects on an ongoing basis, including meetings with staff, students, school personnel, parents/guardians, Foundation staff, and other community resource persons. - Proactive approaches to problem-solving with decision-making capability. - Ability to build relationships with diverse stakeholders, including staff and external partners. - A "self-starter", able to work independently while observing and complying with all supervisory standards and the related programs and services of the employer.
- Attend and participate in community functions and all CSF events and programs that will enhance the visibility of our programs. - Attend occasional evening and weekend events. - Ability to travel up to 30% of the time. Must have a valid driver's license and proof of insurance. QUALIFICATIONS FOR THE POSITION: - Bachelor's degree; Master's degree preferred. - A minimum of six years of experience in fields such as education, youth development, or policy work. - Five years of prior supervisory or management experience highly desired. PHYSICAL DEMANDS: While performing the duties of this job, the employee is regularly required to sit, reach with hands and arms, and talk or hear. The employee is frequently required to use hands to finger, handle, or feel; frequently lift and/or move up to 10 pounds; and occasionally lift and/or move up to 50 pounds. The employee is regularly required to stand and walk. Specific vision abilities required by this job include close vision, distance vision, color vision, peripheral vision, depth perception, and the ability to adjust focus. WORK ENVIRONMENT: The work environment is moderately quiet. The employee must be able to handle the stress involved in meeting strenuous customer deadlines and working in high-volume areas, and must be flexible and able to interact with employees at all levels. CONDITION OF EMPLOYMENT: The position may change based upon the needs of the program and/or organization and available funding. College Success Foundation maintains a drug-free environment. Employees of College Success Foundation and its subsidiaries must be able to successfully work in and promote a multicultural and diverse work environment. The statements contained herein reflect general details as necessary to describe the principal functions of this job, the level of knowledge and skill typically required, and the scope of responsibility, but should not be considered an all-inclusive listing of work requirements. Individuals may perform other duties as assigned, including work in other functional areas to cover absences or relief, to equalize peak work periods, or otherwise to balance the workload.
https://collegesuccessfoundation.applicantpro.com/jobs/1164808.html
Certain applicants for immigration benefits may be determined "inadmissible" to the United States based on prior legal violations or administrative decisions entered against them. Based on this inadmissibility, U.S. immigration authorities may deny their applications for visas or to adjust status. However, a waiver of inadmissibility may be available, depending on the charges raised against the applicant and the type of visa for which they are applying. Most waivers are adjudicated by DHS and DOS based on loose discretionary standards, with wide latitude of judgment left to the examining officer. Therefore, preparing a compelling waiver brief with supporting evidence, and knowing how to most effectively present it to U.S. immigration authorities, can be absolutely critical to the waiver's success. Other factors that might affect the chances of a waiver's success include the nature and seriousness of the violation, the amount of time that has elapsed since the violation, the applicant's family and business ties to the United States, and any U.S. interests that would be positively affected by the applicant's admission. Grounds of inadmissibility that may be raised against an applicant by U.S. immigration authorities include: health-related grounds; criminal grounds; fraud or misrepresentation; unlawful presence; prior removal or deportation; and other grounds (nonimmigrant waivers for Canadians and port of entry parole requests are also discussed below). Health-Related (HIV / Substance Use): Applicants may be declared inadmissible for several health-related reasons: because they are a carrier of a disease of public health significance, lack required vaccinations, have a physical or mental disorder that poses a threat to themselves or others, or are deemed to be a drug abuser or addict. These issues often arise as part of the required medical examination that occurs prior to a visa interview, and are reported in the doctor's findings to U.S. immigration authorities. Applicants who have been arrested or convicted for an alcohol- or drug-related offense may be referred back to a doctor for further questioning after the visa interview. These grounds of inadmissibility also exclude persons who are HIV-positive from obtaining U.S. immigration benefits without special permission. All health-related grounds of inadmissibility may be waived in the context of non-immigrant visas, and most may be waived for immigrant visas. However, persons deemed to be drug abusers or addicts are ineligible to receive immigrant visas or adjust status in the United States. Criminal: Any applicant who has been arrested or convicted for any offense (other than minor traffic violations) must be prepared to disclose these facts and produce original court documents as part of their U.S. immigration application. Typical offenses that can complicate visa processing are crimes involving fraud or deceit, crimes against persons or property, and drug-related offenses, although many other offenses may also be problematic. Applicants with multiple criminal offenses may face additional problems. Applicants who have been arrested or convicted for an alcohol- or drug-related offense may even be referred to a doctor for further questioning after the visa interview to screen for health-related inadmissibility. If you are a non-citizen and are currently facing criminal charges in state or federal court, you should strongly consider consulting with an immigration attorney before taking any action in your criminal case.
Any plea of guilty or no contest, or even a suspended sentence or a deferred entry of judgment, could result in negative U.S. immigration consequences. The collaboration of an immigration attorney with your criminal defense attorney can help you make an informed decision about your criminal case, and may be instrumental in finding an alternative or lesser charge that minimizes U.S. immigration consequences. If you have already been convicted of a criminal offense, a waiver of inadmissibility may or may not be available depending on the nature of your criminal offense and the resulting punishment or sentence ordered by the court. Even if you are now facing deportation charges in U.S. Immigration Court based on your conviction, you may still qualify for a waiver. Because of the wide-ranging negative immigration consequences that can follow a conviction, it is highly advisable to retain an immigration attorney to review your specific case. Fraud / Misrepresentation: Any applicant who has obtained (or sought to obtain) an immigration benefit or admission to the United States through fraud or misrepresentation may be declared inadmissible. Lying to immigration or border officials, presenting false documents, or even failing to disclose certain information on an immigration application form can trigger inadmissibility. Applicants who have made false claims to U.S. citizenship or who have been prosecuted for document fraud under the INA will face similar problems. Waivers are available to overcome most fraud-based inadmissibility charges in non-immigrant and immigrant visa applications. In some cases where government allegations of fraud are unsubstantiated or clearly erroneous, it may even be possible to challenge these allegations as part of, or in addition to, a waiver application. Applicants for adjustment of status or immigrant visas must have a U.S. citizen or lawful permanent resident spouse or parent to qualify for a waiver. Unlawful Presence: Most applicants who have been present in the United States for more than 180 days after the expiration of their valid immigration status (or after entering unlawfully) may be declared inadmissible. Waivers are generally available for unlawful presence inadmissibility, but not to individuals who enter or attempt to re-enter the United States illegally after being unlawfully present for one year or more. In the context of adjustment of status or immigrant visas, unlawful presence waivers are available only to applicants who have a U.S. citizen or lawful permanent resident spouse or parent. It is also important to note that departing the United States after being unlawfully present will trigger a bar to re-entry if a waiver application is not granted. If the period of unlawful presence prior to departure was between 180 days and one year, a three-year bar to re-entry will be imposed. If the period of unlawful presence was greater than one year, a 10-year bar will be triggered. These consequences are especially critical to consider in cases where the applicant must depart the United States to apply for a visa and a waiver. Prior Removal / Deportation: Any applicant who has been ordered removed from the United States, either by an expedited removal order issued by U.S. Customs and Border Protection or a deportation order issued by a U.S. Immigration Judge, will be declared inadmissible for a fixed period of time.
After an expedited removal order, inadmissibility will follow for five years after departure; after a deportation order, 10 years; and after any second or subsequent order of any kind, 20 years. Waivers are available to overcome inadmissibility based on a prior removal. However, other grounds of inadmissibility also typically apply to these cases, because without prior violations there may not have been a prior order of removal. It is important to keep this in mind, as the cumulative effect of other violations may render the applicant ineligible for a waiver, or may reduce the chances of a waiver's success. Nonimmigrant Waivers for Canadians: Many Canadians will thoroughly prepare for an application for admission to the U.S. in a business-related nonimmigrant status, only to learn upon arrival at the U.S. port of entry that they are inadmissible, often for a minor theft offense or possession of a small amount of marijuana. Fortunately, a nonimmigrant waiver is available for nearly every ground of inadmissibility, and ABIL lawyers are skilled in nearly every possible scenario. When applying for a nonimmigrant waiver, Canadian citizens must follow an application procedure that differs from that for citizens of all other nations. Canadians are the only foreign nationals who must complete and submit Form I-192, which must be filed in person with U.S. Customs and Border Protection (CBP) officials at a port of entry. The applications are then forwarded to the CBP Admissibility Review Office (ARO) in the Washington, DC area for adjudication. Processing times range from four to six months in most cases. There are, however, always cases in which the adjudication is delayed beyond normal processing times. In these cases, ABIL lawyers have successfully worked with the ARO to obtain the fastest possible adjudication and decision for Canadian business immigrants, who are often stranded without employment during the adjudication process. Port of Entry Parole Requests: In rare and urgent circumstances, ABIL lawyers can help inadmissible individuals obtain a port of entry parole, which allows entry to the U.S. without a waiver. A parole entry is a one-time entry for a very specific and limited purpose. Conveniently located in several port cities, ABIL lawyers have the necessary relationships to facilitate port of entry paroles in urgent cases. Other: An applicant may also be declared inadmissible on other diverse grounds, such as lack of a proper U.S. visa, participation in alien smuggling, or a U.S. immigration officer's belief that the applicant will not be able to support themselves financially in the United States, has supported or is a member of a terrorist organization, has participated in drug trafficking, or other reasons. If you have been declared inadmissible to the United States and need an assessment of your opportunities to obtain a waiver, please contact us for a fact-specific review of your case.
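Because the time bars described above are simple threshold rules, they can be summarized compactly. The sketch below (hypothetical helper functions written for this summary, not the firm's materials and not legal advice) encodes only the periods stated on this page:

```python
# Illustrative encoding of the re-entry bars described above, using only the
# thresholds stated in the text. Real cases turn on many more factors.
def unlawful_presence_bar(days_unlawfully_present: int) -> int:
    """Years barred from re-entry after departure, per the stated rules."""
    if days_unlawfully_present > 365:
        return 10   # more than one year of unlawful presence: 10-year bar
    if days_unlawfully_present > 180:
        return 3    # between 180 days and one year: 3-year bar
    return 0        # 180 days or less: no bar described in the text

def removal_order_bar(order_type: str, prior_orders: int = 0) -> int:
    """Years of inadmissibility after a removal order, per the stated rules."""
    if prior_orders >= 1:
        return 20   # any second or subsequent order of any kind
    return {"expedited": 5, "deportation": 10}[order_type]

print(unlawful_presence_bar(200))        # -> 3
print(removal_order_bar("deportation"))  # -> 10
```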
http://www.abil.com/svc_complex_waivers.cfm
During the final production testing process, a mountain bike manufacturer wants a system that measures its bike frames' load capacity and the vibrations on the frame, to verify the bikes' quality and frame durability in this last step of product testing. Interface suggests installing the Model SSMF Fatigue Rated S-Type Load Cell, connected to the WTS-AM-1E Wireless Strain Bridge, between the mountain bike's seat and the bike frame. This will measure the vibrations and load forces applied to the bike frame. The results are captured by the WTS-AM-1E and transmitted to the customer's PC using the WTS-BS-6 Wireless Telemetry Dongle Base Station. Through this final testing, the mountain bike manufacturer was able to gather highly accurate data confirming that its bikes met performance standards.
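To illustrate the kind of post-processing such a setup enables, here is a minimal sketch (hypothetical sample rate and synthetic data, not Interface's software or the customer's results) that separates the static rider load from the vibration component of a logged force signal:

```python
# Separate a force log into static load and vibration, then summarize both.
import numpy as np

fs = 1000.0  # assumed sample rate in Hz
t = np.arange(0, 10, 1 / fs)

# Stand-in for logged data: a static rider load plus trail-induced vibration and noise.
force_n = 750 + 120 * np.sin(2 * np.pi * 12 * t) + np.random.default_rng(1).normal(0, 15, t.size)

static_load = force_n.mean()                    # mean (static) load, N
vibration = force_n - static_load               # dynamic component
peak_load = force_n.max()                       # worst-case instantaneous load, N
vibration_rms = np.sqrt(np.mean(vibration**2))  # vibration severity, N RMS

print(f"static load:   {static_load:7.1f} N")
print(f"peak load:     {peak_load:7.1f} N")
print(f"vibration RMS: {vibration_rms:7.1f} N")
```

Peak load and vibration RMS are the natural quantities to check against a frame's rated capacity and fatigue specification in a pass/fail production test.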
https://www.interfaceforce.com/solutions/test-and-measurement/bike-load-testing/
Climate change is causing massive cuts in food production, affecting the global food supply. According to a new United Nations report, climate change is cutting food supplies heavily and may eliminate some crops entirely. Unpredictable storms, wildfires, droughts, and floods make food production erratic and will reduce it overall, meaning less food available to humans, higher food prices, and the loss of many agricultural products and by-products. By 2100, yields of the main grain crops may be significantly reduced: by 20-45% for maize, 5-50% for wheat, 20-30% for rice, and 30-60% for soybeans. In addition, climate change will lead to an increase in pests and diseases. Experts say the world's population could reach 10 billion by 2050, and that 60 percent more food will be needed to feed everyone. The most vulnerable in the future are those in low-income countries: agricultural prices will skyrocket, rich countries will claim most of the world's agricultural products, and low-income countries may not be able to feed many of their people. During the world food crisis that erupted in 2007-2008, world food reserves stood at only 405 million tons, enough for 53 days; food prices soared, and protests and unrest broke out in many countries. Malnutrition is a further concern: increased carbon dioxide levels in the atmosphere boost photosynthesis, so plants grow faster, but less protein enters the seeds because the plants use more of it to grow. The protein content of crops declines, trace elements such as zinc and iron decline as well, and the nutrients people can obtain from crops are reduced accordingly. Research shows that by 2050, 138 million people worldwide will suffer from zinc deficiency and 148 million from protein deficiency.
https://www.smalltechnews.com/archives/35867
--- abstract: 'We review current theoretical cosmology, including fundamental and mathematical cosmology and physical cosmology (as well as cosmology in the quantum realm), with an emphasis on open questions.' author: - | A. A. Coley:\ Department of Mathematics and Statistics, Dalhousie University,\ Halifax, Nova Scotia, B3H 4R2, Canada\ email: [email protected]\ \ G. F. R. Ellis:\ Mathematics Department, University of Cape Town,\ Rondebosch, Cape Town 7701, South Africa\ email: [email protected] title: Theoretical Cosmology --- Introduction ============ Cosmology concerns the study of the large scale behavior of the Universe within a theory of gravity, which is usually assumed to be General Relativity (GR). [^1] It has a unique nature that makes it a distinctive science in terms of its relation to both scientific explanation and testing. ### The uniqueness of the Universe There is only one Universe, which we effectively see from one spacetime point (because it is so large) [@Ellis1971]. This is the foundational constraint in terms of both scientific theory (how do we distinguish laws from initial conditions?) and observational testing of our models: - We can only observe our Universe on one past light cone. - We have to deduce four-dimensional (4D) spacetime structure from a 2D image; distance estimations are therefore key. - We can’t see many copies of the universe to deduce laws governing how universes operate. Therefore, we have to compare the one universe with simulations of what might have been. This consequently leads to the important question of which variations from our model need explaining and which are statistical anomalies that do not need any explanation (i.e., [*[cosmic variance]{}*]{}). This question arises, for example, regarding some cosmic microwave background (CMB) anomalies. ### The background model Cosmology is the study of the behaviour of the Universe when small-scale structures (such as, for example, stars and galaxies) can be neglected. The “Cosmological Principle”, which can be regarded as a generalization of the Copernican Principle, is often assumed to be valid. This principle asserts that: [*[On large scales the Universe can be well-modeled by a solution to Einstein’s equations which is spatially homogeneous and isotropic.]{}*]{} This implies that a preferred notion of cosmological time exists [^2] such that at each instant of time, space appears the same in all directions (isotropy) and at all places (spatial homogeneity) on the largest scales. This is, of course, certainly not true on smaller scales such as the astrophysical scales of galaxies, and it would thus be better if the cosmological principle could be deduced rather than assumed a priori (i.e., could late-time spatial homogeneity and isotropy be derived as a dynamical consequence of the Einstein Field Equations (EFE) under suitable physical conditions and for appropriate initial data?). This has been addressed, in part, within the inflationary paradigm, when scalar fields are dynamically important in the early Universe. The Cosmological Principle leads to a background Friedmann-Lemaître-Robertson-Walker (FLRW) model, and the EFE determine its dynamics.
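For reference, the background equations implied by this principle are standard; a brief sketch (textbook results, included here for orientation rather than taken from the original text) comprises the FLRW line element, the Friedmann equation to which the EFE reduce, and the normalized curvature parameter $\Omega_k$ used below:

$$ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right],$$

$$H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho + \frac{\Lambda}{3} - \frac{k}{a^2}, \qquad \Omega_k \equiv -\frac{k}{a^2 H^2},$$

so that spatially flat sections correspond to $k=0$ and hence $\Omega_k = 0$.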
The concordance spatially homogeneous and isotropic FLRW model (with a three-dimensional comoving spatial section of constant curvature, assumed simply connected), with a cosmological constant $\Lambda$ interpreted as dark energy and cold dark matter (CDM), the so-called $\Lambda$CDM cosmology or [*[standard cosmology]{}*]{} for short, has been very successful in describing current observations. Early universe inflation is often regarded as a part of the concordance model. The background spatial curvature of the universe, often characterized by the normalized curvature parameter $\Omega_{k}$, is predicted to be negligible by most simple inflationary models. Regardless of whether inflation is regarded as part of the standard model, spatial curvature is assumed to be zero. ### Inhomogeneous models One of the greatest challenges in cosmology is understanding the origin of the structure of the Universe. An essential feature in structure formation is the study of inhomogeneities and anisotropies in cosmology. There are three approaches: - Using exact solutions and their properties where possible [@krameretal]. In particular, the Lemaître-Tolman-Bondi (“LTB”) spherically symmetric dust model has been widely used, while the “Strenge Lösungen” approach of Ehlers, Kundt, Sachs, and Trümper at Hamburg provides a powerful method of examining generic properties of fluid solutions. - Perturbed models in which structure formation can be investigated, as pioneered by Lifschitz, Peebles, Sachs and Wolfe, and Bardeen (see below). - Numerical simulations, mainly Newtonian, but now being extended to GR by various groups. In particular, this enables an investigation of the tensor-to-scalar ratio and CMB polarization, and of redshift space distortions and non-Gaussianities, which are key to testing inflationary universe models. It is also important to consider the averaging, backreaction, and fitting problems relating the perturbed and background models. The main point here is that *the same* spacetime domain can be modeled at different averaging scales to obtain, for example, models representing galactic scales $L_1$, galaxy cluster scales $L_2$, large scale structure scales $L_3$, and cosmological scales $L_4$, with corresponding metrics, Ricci tensors, and matter tensors; the issue then is, firstly, how the field equations at different scales are related [@fit] and, secondly, how observations at these different scales are related [@Bertotti]. ### Perturbed models In particular, the structure of the Universe can be investigated in cosmology via perturbed FLRW models. A technical issue that arises is the *gauge issue*: how do we map the (smooth) background model to a more realistic (lumpy) model? One must either handle the gauge freedom by very carefully delineating what freedom remains at each stage of coordinate specialisation, or use gauge covariant variables (see later). Cosmic inflation provides a causal mechanism for the generation of primordial cosmological perturbations in our Universe, through the generation of quantum fluctuations in the inflaton field which, by coupling to the spatial curvature of the Universe, act as seeds for the observed anisotropies in the CMB and the large scale structure (LSS) of our Universe. Although inflationary cosmology is not the only game in town, it is the simplest and perhaps the only scenario which is currently self-consistent, at least within low energy effective field theory.
The recent Planck observations confirm that the primordial curvature perturbations are almost scale-invariant and Gaussian. In the standard cosmology, the primordial perturbations, corresponding to the seeds for the LSS, are chosen from a Gaussian distribution with random phases. This assumption is justified on experimental evidence, regardless of whether or not inflation is assumed. Predictions arising for matter power spectra and CMB anisotropy power spectra can then be compared with observations; this is a central feature of cosmology today. Together with comparisons of element abundance observations with primordial nucleosynthesis predictions, this has turned cosmology from philosophy into a solid physical theory. Finally, quantum fluctuations of the metric during inflation, imprinted in primordial B-mode perturbations of the CMB, are perhaps the most vivid evidence conceivable for the reality of quantum gravity (QG) in the early history of our Universe. Indeed, any direct detection of primordial gravitational waves (PGW) and primordial non-Gaussianities (PNG) with the specific features predicted by inflation would provide strong independent support to this framework.\ In this article we review the philosophical, mathematical, theoretical, physical (and quantum) challenges to the standard cosmology. For the most part, important non-theoretical issues (such as, for example, experiments and data analysis) are not discussed. Fundamental issues ------------------ Cosmology is a strange beast. On the one hand, it has evolved into a mature science, complete with observations, data analysis and numerical methods. On the other hand, it contains philosophical assumptions that are not always scientific; this includes, e.g., the assumption of spatial homogeneity and isotropy at large scales outside our particle horizon, and issues regarding inflation and the multiverse. As well as philosophical questions, there are fundamental physical problems (e.g., what is the appropriate model for matter, and what is the applicability of coarse graining) as well as mathematical issues (e.g., the gauge invariance problem in cosmology). Indeed, many of the open problems in theoretical cosmology involve the nature of the origin and details of cosmic inflation. ### Open problems and GR Noted problems have always been of importance and part of the culture in mathematics [@OpenProb]. The twenty-three problems of Hilbert [@Hilbert] are perhaps the most well known problems in mathematics. In addition, the set of fifteen problems presented in [@simon] nicely illustrates a number of open problems in mathematical physics. The most important and interesting unsolved problems in fundamental theoretical physics include the foundational problems of quantum mechanics, the unification of particles and forces, the fine-tuning problem in the quantum regime, the problem of quantum gravity and, of course, the problem of “cosmological mysteries” [@gonitsora]. However, it should be noted that some of these are in fact philosophical problems, in that they do not deal with any conflict with observations. We are primarily interested here in problems which we shall refer to as problems in theoretical cosmology, and particularly those that are susceptible to a rigorous treatment within mathematical cosmology. Problems in GR have been discussed elsewhere [@MathGR]. There are some problems in GR that are relevant in cosmology, and theorems can be extended into the cosmological regime by including models with matter.
For example, generic spacelike singularities, traditionally regarded as being cosmological singularities, have been studied in detail [@SenovillaGarfinkle]. It is also of interest to extend mathematical stability results to the case of a non-zero cosmological constant [@Dotti].

### Philosophical issues

Philosophical problems have always played an important role in cosmology [@Ellis2014]; e.g., whether we are situated at the center of the Universe or not and, even in the earliest days of Einstein, whether the Universe is static or evolving. In addition, the dynamical laws governing the evolution of the universe, the classical EFE, require boundary conditions to yield solutions; but in cosmology, by definition, there is no “rest of the Universe” to which their specification can be passed off. The cosmological boundary conditions must therefore be one of the fundamental laws of physics. There are a number of important philosophical issues, which include the following. There is only one Universe: consistency of one model does not rule out alternative models, and what can a statistical analysis with only one data point tell us? What is observable? Due to the existence of horizons, the Universe is only observed on or within our past light cone. A typical question in cosmology is: why is the Universe so smooth? Must a suitable explanation be in terms of ‘genericity’ (of possible initial conditions), or can specialness lead to a possible explanation? There is no physical law that is violated by fine tuning. Indeed, perhaps the Universe is fine-tuned for anthropic reasons. However, there are many caveats in describing physical processes (e.g., inflation) in terms of naturalness. Indeed, in cosmology the whole concept of ‘naturalness’ is suspect. Let us discuss some of these issues in a little more detail.

In [*[observational cosmology]{}*]{}, the amount of information that can be expected to be collected via astronomical observations is limited, since we occupy a particular vantage point in the Universe; we are limited in what we can observe by visual and causal horizons (see the discussion below). It can be argued that the observational limit may be approached in the foreseeable future, at least regarding some specific scientific hypotheses [@Ellis2014]. There is no certainty that the amount and types of information that can be collected will be sufficient to test all reasonable postulated hypotheses statistically. There is under-determination both in principle and in practice [@Butterfield; @Ellis2014]. This consequently leads to a natural view of model inference as inference to the best explanation/model, since some degree of explanatory ambiguity appears unavoidable in principle; inference in cosmology is based on a [*[Bayesian interpretation]{}*]{} of probability, which includes a priori assumptions explicitly.

In [*[physical cosmology]{}*]{}, we are gravely compromised because we can only test physics directly up to the highest energies attainable by collisions at facilities such as the LHC, or from what we can deduce indirectly from cosmic ray observations. Hence we have to guess what extrapolation from known physics into the unknown is most likely to be correct; different extrapolations (e.g., string theory or loop quantum gravity) give different outcomes. As we cannot test directly the physics of inflation or of dark energy, theorists in fact rely mainly on Synge’s g-method, discussed below: we posit that matter has the properties we would like it to have, in order to fit with astronomical observations.
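Since cosmological inference is explicitly Bayesian, the role of a priori assumptions can be made concrete. The following is a minimal sketch (in Python, with purely illustrative numbers and a hypothetical single data point, not an example from the text) of how the prior enters a model comparison that the data alone cannot settle:

```python
# A toy Bayesian model comparison with a single data point, illustrating
# how a priori assumptions enter explicitly. All numbers are illustrative.
import numpy as np
from scipy import integrate, stats

x_obs = 1.0                       # a single hypothetical "observation"

# Model A: parameter-free prediction x = 0, with unit Gaussian noise.
evidence_A = stats.norm.pdf(x_obs, loc=0.0, scale=1.0)

# Model B: one free parameter mu with a broad uniform prior on [-10, 10];
# the evidence marginalizes the likelihood over this prior assumption.
prior_width = 20.0
evidence_B, _ = integrate.quad(
    lambda mu: stats.norm.pdf(x_obs, loc=mu, scale=1.0) / prior_width,
    -10.0, 10.0)

print(f"Bayes factor A/B: {evidence_A / evidence_B:.1f}")
```

Widening Model B's prior lowers its evidence and can reverse the verdict; with only one data point the conclusion is driven largely by the prior, which is exactly the under-determination at issue.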
### Underlying theory

It has been argued [@Sahlena] that the measure problem, and hence model inference, is ill defined due to ambiguity in the concept of probability, in the situation where additional empirical observations cannot add any significant new information. However, inference in cosmological models can be made conceptually well-defined by extending the concept of probability to general valuations (using principles of uniformity and consistency) [@Sahlena]. For example, an important area is empirical tests of the inflationary paradigm, which necessitate, in principle, the specification or derivation of an a priori probability of inflation occurring (“the measure problem”). The weakness of all models of inflation is consequently in the initial conditions [@measure]. To assert that the flatness of the Universe or the expected value of $\Lambda$ is predicted by inflation is absolutely meaningless without such an appropriate measure (this is particularly true in the case of the multiverse [@SilkLimits]). The fundamental problem is that the theory of inflation cannot be proven to be correct. Falsifying a “bad theory” (such as the multiverse “solution” to the cosmological constant problem [@Weinberg1987]) may be impossible [@SilkLimits] [[^3]]{}, since parameters can be added without limit. But it should be possible to falsify a “good theory”, like inflation [@EllisSilk]. Perhaps the best way to make progress may be to probe the falsification of inflation, for which there is a robust predicted CMB polarization signal (induced by GW at the onset of inflation) [@SilkLimits].

### Assumptions

It is necessary to make assumptions to derive models to be used for cosmological predictions and to test against observational data. But what precisely are these assumptions, and how do they affect the results that come out; e.g., is the reason that small backreaction effects are obtained in computations because of the assumptions that are put in by hand at the beginning? We can only confirm the consistency of assumptions; we cannot rule out alternative explanations. The assumption of a FLRW background (the cosmological principle) on cosmological scales presents a number of problems. There is no solid way to test spatial homogeneity, even in principle, by direct tests such as (redshift, distance) observations, because we cannot control the possible time evolution of sources and so cannot be confident they are good standard candles (we do not, for example, have a solid understanding of supernova explosions and how they might depend on metallicity, or of radio source evolution). However, observations of structure growth on the one hand and matter-light interactions via the kinematic Sunyaev-Zeldovich effect on the other do indeed give rise to solid constraints on inhomogeneity [@Maartens_Homog; @EllMaaMac], and indicate that approximate spatial homogeneity does indeed hold within our past light cone. Due to the existence of horizons, we can only observe the Universe on or within our past light cone (on cosmological scales). Assumptions beyond the horizon (Hubble scales) are impossible to test and so are, in effect, unscientific.

### Homogeneity scale

The homogeneity scale is not actually theoretically determined, even in principle, in the standard cosmological model. It is just “pasted in” to the standard model a posteriori to help fit observations. Even then, what is the homogeneity scale implied by the observed statistics? This question is important for the backreaction problem.
There are a number of different approaches to the definition of a scale of statistical homogeneity. Even if we consider the standard model setting, the homogeneity scale depends on the statistical measure used. But there are arguments that such a definition is not met, and never will be met, observationally [@sl09]. Perhaps there is a different notion of statistical homogeneity (e.g., using ergodicity) in terms of an average positive density. In practice, however, any measure of statistical homogeneity is not directly based on a fundamental relation, but rather on the scale dependence of galaxy-galaxy correlation functions in observations [@dust]. Observationally, and based on the 2-point correlation function, the smallest scale at which any measure of statistical homogeneity can emerge by the current epoch is in the range 70-120 $h^{-1}$ Mpc. Indeed, if all N-point correlations of the galaxy distribution are considered, then the homogeneity scale can only be reached, if at all, on scales above 700 $h^{-1}$ Mpc [@sl09] (also see [@DHB]).

### Local and global coordinates

Perhaps most importantly, what are the assumptions that underpin the use of an inertial coordinate system over a Hubble scale ‘background’ patch in which to do perturbation computations or in specifying initial conditions for numerical GR evolution? In particular, what are the assumptions necessary for the existence of Gaussian normal coordinates, and hence a ‘global’ time and a ‘global’ inertial (Cartesian and orthogonal) spatial coordinate system (and thus a $1+3$ split), on the ‘background’ patch? This necessitates an irrotational congruence of fluid-comoving observers and, of course, is related to a choice of lapse and shift and hence a well defined gauge. And it essentially amounts to assuming that fluctuations propagate on a fixed absolute Newtonian background (with post-Newtonian corrections). Global inertial coordinates on a dynamical FLRW background in a GR framework are not conceptually possible [@BC]. A collection of spatially contiguous but causally disconnected regions which evolve according to GR on small scales does not generally evolve as a single collective background solution of GR on large cosmological scales.

### Periodic boundary conditions in structure formation studies

In addition, what are the assumptions necessary for the periodic boundary conditions (appropriate on scales comparable to the homogeneity scale) used in structure formation studies and numerical simulations? In particular, periodic boundary conditions impose a constraint on the global spatial curvature and force it to vanish [@Adamek; @Macpherson]. Strictly speaking, the (average) spatial curvature is only zero in the standard cosmology, in which the FLRW universe possesses the space-time structure $R \times M^3$, where $M^3$ is a comoving, simply connected, infinite Euclidean 3-space of constant curvature. The EFE govern the local properties of space-time but not the global geometry or the topology of the Universe at large. Nonstandard models with a compact spatial topology (or [*[small universes]{}*]{}), which are periodic due to topological identifications (and are hence not necessarily spatially flat), are also of interest and have observational consequences [@Cornish]. In particular, it has been shown that CMB data are compatible with the possibility that we live in a small Universe having the shape of a flat 3-torus with a sufficiently large volume [@small].
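The effect of periodic identification can be made concrete. The following minimal sketch (in Python, with an illustrative box size) enumerates the discrete Fourier modes allowed on a flat 3-torus of side $L$: only wavevectors $k = (2\pi/L)\,n$ with integer vector $n$ exist, so no perturbation with wavelength longer than $L$ can be represented, whether in a simulation box or in a small-universe model:

```python
# Discrete mode spectrum implied by periodic boundary conditions on a flat
# 3-torus of side L (illustrative value): k = (2*pi/L) * n, integer n.
import numpy as np

L = 1000.0                        # box side in h^-1 Mpc (illustrative)
n = np.arange(-3, 4)              # a few integer mode numbers per axis
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
kmag = (2 * np.pi / L) * np.sqrt(nx**2 + ny**2 + nz**2)

k_allowed = np.unique(kmag[kmag > 0])
print(f"infrared cutoff k_min = {k_allowed[0]:.5f} h Mpc^-1")
print(f"longest representable wavelength = {2*np.pi/k_allowed[0]:.0f} h^-1 Mpc")
```

The infrared cutoff at $k_{\rm min} = 2\pi/L$ is the simulation-box analogue of the topological identification scale in the small-universe models cited above.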
### Weak field approach

Finally, what are the assumptions behind the weak field approach, the applicability of perturbation theory (and the use of Fourier analysis), Gaussian initial conditions, averaging, and the neglect of backreaction? To different degrees they all assume a small (or zero) spatial curvature. In particular, all global averages of spatial curvature are expected to coincide with that in the corresponding exact FLRW model to a high degree of accuracy when averaging linear Gaussian perturbations. In addition, in cosmology we can observe directions, redshifts and fluxes, but not distances. To infer a distance from observations in cosmology, we always use a model. Hence the real space correlation function and its Fourier transform, the power spectrum, are model dependent. Essentially, we conclude that within standard cosmology the spatial curvature is assumed to be zero (or at least very small and below the order of other approximations) in order for the analysis to be valid. In any case, the standard model cannot be used to [*[predict]{}*]{} a small spatial curvature. We will revisit the issue of spatial curvature later.

### Quantum realm and multiverse

Are there possible differences from GR at very small scales that result from a theory of QG? In particular, do they have any relevance in the cosmological realm and, conversely, what is the impact of cosmology on quantum mechanics [@Hartle]? For example, are there any fundamental particles that have yet to be observed and, if so, what are their properties? Do they (or the recently observed Higgs boson) have any relevance for cosmology? There is also the issue of whether singularities can be resolved in GR by quantum effects, and whether singularity theorems are possible in higher dimensions, both of which are relevant in cosmology. Both QG and inflation motivate the idea of a multiverse, in which there exists a wide range of fundamental theories (or, at least, different versions of the same fundamental theory with different physical parameters) and our own Universe is but one possibility [@Carrmultiverse]. In this scenario the question then arises as to why our own particular Universe has such finely tuned properties that allow for the existence of life. This has led to an explanation in terms of the so-called anthropic principle, which asserts that our Universe must have the properties it does because otherwise we would not be here to ask such a question. The cosmology of a multiverse leads to a number of philosophical questions. For example, is the multiverse even a scientific theory?

Definition of a cosmological model
----------------------------------

A cosmological model has the following components [@Ellis1971].

[*[Spacetime geometry:]{}*]{} The spacetime geometry $({\bf M,g})$ is defined by a smooth Lorentzian metric ${\bf g}$ (characterizing the macroscopic gravitational field) defined on a smooth differentiable manifold ${\bf M}$ [@HawEll73]. The scale over which the cosmological model is valid should be specified.

[*[Field equations and equations of motion:]{}*]{} To complete the definition of a cosmological model, we must specify the physical relationship (interaction) between the macroscopic geometry and the matter fields, including how the matter responds to the macroscopic geometry. We also need to know the trajectories along which the cosmological matter and light move.
In standard theory, the space-time metric, ${\bf g}$, is determined by the matter present via the EFE: $$\label{eq:EFE} G_{ab} := R_{ab} - \frac{1}{2}R g_{ab} = \kappa T_{ab} -\Lambda g_{ab}$$ where the total energy momentum tensor, $T_{ab}$, is the sum of the stress tensors of all matter components present: $T_{ab} = \sum_{(i)}T_{ab}^{(i)}$, $\kappa$ is essentially the gravitational constant, and $\Lambda$ is the cosmological constant. In colloquial terms: *Matter curves spacetime*. Because of the Bianchi identities, $R_{ab[cd;e]}=0$, the definition on the left of (\[eq:EFE\]) implies the identity $G^{ab}_{\,\,\,\,;b} = 0$ and hence, provided $\Lambda$ is indeed constant, that: $$\label{eq:cons} G^{ab}_{\,\,\,\,;b} = 0 \,\,\, \Rightarrow T^{ab}_{\,\,\,\,;b} = 0;$$ that is, energy-momentum conservation follows identically from the FE (\[eq:EFE\]). The covariant derivatives in (\[eq:cons\]) depend on the space-time geometry, so in colloquial terms: *Space-time tells matter how to move.* The key non-linearity of GR follows from the combination of these two statements, and from the fact that $R_{ab}$ is a highly non-linear function of $g_{ab}(x^i)$.

In GR a test particle follows a timelike or null geodesic. But a system that behaves as point particles on small scales may [*not necessarily*]{} do so on larger scales. That is, if the particles traverse timelike geodesics in the microgeometry then, in principle, the macroscopic (averaged) matter need not follow timelike geodesics of the macrogeometry. However, the fundamental congruence is, in essence, the average of the timelike congruences along which particles move in the microgeometry, and defining the effective conserved energy-momentum tensor ${T}^{a}_{~b}$ through the EFE ensures timelike geodesic motion. In addition, the (average) motion of a photon is not necessarily on a null geodesic in the averaged macrogeometry, which will affect observations.

[*[Matter:]{}*]{} We require a consistent model for the matter on the characteristic cosmological (e.g., averaging) scale, and its appropriate (averaged) physical properties. The differentiation between the gravitational field and the matter fields is known not to be scale invariant and, in particular, a perfect fluid is not a scale invariant phenomenon [@LL]; averaging in the “mean field theory” in the presence of gravity changes the equation of state of the matter [@Deb]. In this framework all of the qualitative effects of averaging are absorbed into the redefined effective energy-momentum tensor ${T}^{a}_{~b}$ and the redefined effective equation of state of the macro-matter, where $T^{a}_{~b}$ is conserved relative to the macrogeometry. The definition of the Landau frame for any combination of matter fields and radiation is invariant when matter and matter-radiation interactions take place, due to local momentum conservation.

[*[Timelike congruence:]{}*]{} There is a preferred unit timelike congruence ${\bf u}$ ($u^a u_a = -1$), defined locally at each event, associated with a family of fundamental observers (at late times) or the average motion of energy (at earlier times). In the case that there is more than one matter component, implying the existence of more than one fundamental macroscopic timelike congruence, we can always identify a fundamental macroscopic timelike congruence represented by the 4-velocity of the averaged matter in the model; i.e., the matter fields admit a formulation in terms of an averaged matter content which defines an average (macroscopic) timelike congruence.
This then leads to a covariant $1+3$ split of spacetime [@Ellis1971]. Mathematically this implies that the spacetime is topologically restricted and is *$\mathcal{I}$-non-degenerate*, and consequently the spacetime is uniquely characterized by its scalar curvature invariants [@CHP]. For example, for a perfect fluid ${\bf u}$ is the timelike eigenvector of the Ricci tensor. Observationally, this cosmological rest frame is determined as the frame in which the CMB dipole vanishes (the Solar System is moving at about 370 km/s relative to this rest frame). Note that the existence of this preferred rest frame is an important case of a *broken symmetry*: while the underlying theory is Lorentz invariant, its cosmologically relevant solutions are not (in particular, at no point in the history of the universe is it actually de Sitter, with its 10-dimensional symmetry group, much less anti-de Sitter).

[*[A note on modified theories of gravity:]{}*]{} Let us make a brief comment here. A key issue is whether GR is, in fact, the correct theory of gravity, especially on galactic and cosmological scales. Recent developments in testing GR on cosmological scales within modified theories of gravity were reviewed in [@Ishak; @Clifton_Ferreira]. In particular, modified gravity theories have played an important role in the dark energy problem. Many questions can be posed in the context of modified gravity theories, including, for example, the general applicability of the BKL behaviour in the neighborhood of a cosmological singularity. We will not discuss such questions here, except for the particular question of whether isotropic singularities are typical in modified gravity theories.

Problems in mathematical cosmology
----------------------------------

In GR, a sufficiently differentiable 4-dimensional Lorentzian manifold is assumed [@HawEll73]. The Lorentzian metric, ${\bf g}$, which characterizes the causal structure of ${\bf M}$, is assumed to obey the EFE, which constitute a hyperbolic system of quasi-linear partial differential equations coupled to additional partial differential equations describing the matter content [@Rendall2002]. The Cauchy problem is of particular interest, in which the unknown variables in the constraint equations of the governing EFE, consisting of a three-dimensional Riemannian metric and a symmetric tensor (in addition to initial data for any matter fields present), constitute the initial data for the remaining EFE. Primarily the vacuum case is considered in attempting to prove theorems in GR, but this is not the case of relevance in cosmology. Viable cosmological models contain both matter and radiation, which in physically realistic cases then define a geometrically preferred timelike 4-velocity field [@Ellis1971] which, because of (\[eq:EFE\]), is related to an eigenvector of the matter stress tensor (which is unique if we assume realistic energy conditions [@HawEll73]). The EFE are invariant under an arbitrary change of coordinates (general covariance), which complicates the way they should be formulated in order for global properties to be investigated [@LARS99]. The vacuum EFE are not hyperbolic in the usual sense. But utilizing general covariance, in harmonic coordinates the vacuum EFE do constitute a quasi-linear hyperbolic system, and thus the Cauchy problem is indeed well posed and local existence is guaranteed by standard results [@CB69].
It can also be shown that if the constraints (and any gauge conditions) are satisfied initially, they are preserved by the evolution. Many analogues of the results in the vacuum case are known for the EFE coupled to different kinds of matter, including perfect fluids, gases governed by kinetic theory, scalar fields, Maxwell fields, Yang-Mills fields, and various combinations of these. Any results obtained for (perfect) fluids are generally only applicable in restricted circumstances such as, for example, when the energy density is uniformly bounded away from zero (in the region of interest) [@Rendall2002]. The existence of global solutions for models with more exotic matter, such as stringy matter, has also been studied [@Narita].

### Singularity theorems

The concepts of geodesic incompleteness (to characterize singularities) and closed trapped surfaces [@Penrose1979] were first introduced in the singularity theorem due to Penrose [@Penrose65]. Hawking and Ellis [@HawEll68] then proved that closed trapped surfaces will indeed exist in the time-reversed direction in cosmology, due to the gravitational effect of the CMB. Hawking subsequently realized that closed trapped surfaces will also be present in the past of any expanding Universe, which would then inevitably lead to an initial singularity under reasonable conditions within GR [@Hawking1966]. This led to the famous Hawking and Penrose singularity theorem [@PenroseHawking]. The singularity theorems prove the inevitability of spacetime singularities in GR under rather general conditions [@Penrose65; @PenroseHawking], but they say very little about the actual nature of generic singularities. We should note that there are generic spacetimes which do not have singularities [@Senovilla2012]. In particular, the proof of the Penrose singularity theorem does not guarantee that a trapped surface will occur in the evolution. It was proven [@Christodoulou2009] that for vacuum spacetimes a trapped surface can, indeed, form dynamically from regular initial data free of any trapped surfaces. This result was subsequently generalized in [@Klainerman2014; @Klainerman2012]. A number of questions still remain, which include proving more general singularity theorems with weaker energy conditions and with weaker differentiability, and determining any relationship between geodesic incompleteness and the divergence of curvature [@Senovilla2012]. But perhaps the most important open problem within GR is cosmic censorship [@MathGR].

### Bouncing models

Using exotic matter, or alternative modified theories of gravity, can classically lead to the initial cosmological (or “big bang”) singularity being replaced by a “big” [*[bounce]{}*]{}, a smooth transition from contraction to an expanding universe [@Brandenberger], which may help to resolve some fundamental problems in cosmology. Bounce models have utilized ideas such as branes and extra dimensions [@Khoury], Penrose’s conformal cyclic cosmology [@Penrosebooks] (which leads to an interest in an isotropic singularity), string gas [@BrandenbergerVafa], and others [@Brandenberger; @Bruni_bounce]. The matter bounce scenario faces significant problems. In particular, the contracting phase is unstable against anisotropies [@Cai] and inhomogeneities [@Penrosebooks1]. Also, GW are not suppressed compared to cosmological perturbations, and consequently the amplitude of GW (as well as possible induced non-Gaussianities) may be in excess of the observational bounds.
In a computational study of the evolution of adiabatic perturbations in a nonsingular bounce within the ekpyrotic cosmological scenario [@EKp], it was shown that the bounce is disrupted in regions with significant spatial inhomogeneity and anisotropy compared with the background energy density, but is achieved in regions that are relatively spatially homogeneous and isotropic. The specially fine-tuned and simple examples studied to date, particularly those based on three spatial dimensions, scalar fields and, most importantly, a non-singular bounce that occurs at densities well below the Planck scale where QG effects are small [@Ijjas2], are arguably instructive in pointing to more physical bouncing cosmological models, and may present realistic alternatives to inflation to obtain successful structure formation (which we will discuss below). The precise properties of a cosmic bounce depend upon the way in which it is generated, and many mechanisms have been proposed for this, both classically and non-classically. Bounces can occur due to QG effects associated with string theory [@Turok] and loop quantum gravity [@Bojowald; @Ashtekar]. In particular, in loop quantum cosmology there is a bounce when the energy density reaches a maximum value of approximately one half of the Planck density (although it is also possible that bounces occur without a QG regime ever occurring [@bounce], because if inflation occurs, the inflaton field violates the energy conditions needed for the classical singularity theorems to be applicable). We will discuss this in more detail later.

### Mathematical results

Some applications in GR can be studied via Einstein-Yang-Mills (EYM) theory (which is relevant to cosmological models containing Maxwell fields and form fields, and is perhaps a prototype for studying fields in, for example, string theory). Mathematical results generalized to Maxwell and YM matter in 4D [@Olinyk] are known (and have been studied in two fewer dimensions via [*[wave maps]{}*]{} with values on spheres [@Bizonwavemaps; @AnderssonG]). Global existence in Minkowski spacetime, assuming initial data of sufficiently high differentiability, was first investigated in [@differentiability]. The uniqueness theorem for the 4D Schwarzschild spacetime was presented in [@Bunting]. The uniqueness theorem for the Kerr spacetime was proven in [@Carter]. In the non-vacuum case the uniqueness of the rotating electrically charged black hole solution of Kerr-Newman has not yet been generally proven [@Newman65]. Once uniqueness has been established, the next step is to prove stability under perturbations. Minkowski spacetime has been shown to be globally stable [@ChristodoulouKlainerman90; @Christodoulou93].

### Extension to cosmology

Many of these problems in GR can be extended to the cosmological realm [@MathGR]. The uniqueness and stability of solutions to the EFE in GR are important, [^4] and can be generalized to cosmological spacetimes (with a cosmological constant). Generic spacelike singularities are traditionally referred to as cosmological singularities [@SenovillaGarfinkle]. In particular, the stability of de Sitter spacetime will be discussed later. There are also a number of questions in the quantum realm [@OpenProb], such as singularity resolution in GR by quantum effects and higher dimensional models, which are of interest in cosmology. In essence, the perturbation studies leading to theories of structure formation are stability studies of FLRW models.
With ordinary equations of state, initial instabilities will grow, but at a rate that depends on the expansion of the background model. Thus if there is no expansion, inhomogeneities will grow exponentially with time; with power law expansion, they will grow as a fractional power of time; and with exponential expansion, they will tend to freeze out. However, the way this happens depends on the comoving wavelength of the perturbation relative to the scale set by the Hubble expansion rate at that time.[^5] These studies hold while the perturbation is linear, and have been heroically extended to the non-linear case (see later). However, numerical simulations are required for the strongly non-linear case [@Adamek].

### Computational cosmology

Numerical calculations have always played a central role in GR. Indeed, numerical computations support many of the conjectures in GR and their counterparts in cosmology, and have led to a number of very important theoretical advances [@MathGR]. For example, the investigation of the mathematical stability of AdS spacetime includes fundamental numerical work, and cosmic censorship is supported by numerical computations. In addition, the role of numerics in the understanding of the BKL dynamics, and in various other problems in cosmology and higher dimensional gravity, has been important. In fact, numerical computations are now commonly used to address fundamental issues within full GR cosmology [@Computational; @Bentivegna; @Giblin; @Adamek; @Macpherson; @Adamek18].

Cosmological observations
-------------------------

What turns cosmology from a mathematical endeavour into a scientific theory is its ability to produce observational predictions that can be tested. Since the initiation of cosmology as a science by Lemaître in 1927 [@Lemait27], telescopes of ever increasing power, covering all wavelengths and both Earth-based and in satellites, have led to a plethora of detailed tests of the models, leading to the era of “precision cosmology”. The tests are essentially of two kinds: direct tests of the background models based on some kind of “standard candle” or “standard ruler”, and indirect tests based on studying the statistics of structures (inhomogeneities) on the one hand, and their effects on the CMB on the other. Both kinds of tests produce broadly concordant results, but the latter give tighter restrictions on the background model than the former, because what kinds of structures can form depends on the dynamics of the background model.

[*[The basic restriction:]{}*]{} The basic observational restriction in cosmology is that, given the scales involved, we can only observe the Universe from one space-time event (“here and now”) [@Ellis1971]. This would not be the case if the Universe were, say, the size of the Solar System; but a key discovery has been the immense size of the Universe, dwarfing the scales of galaxies, which themselves dwarf the scale of the Solar System. This leads to major limits on what is observable, because of *visual horizons* for each kind of radiation or particle: for example, the CMB is observed on a single surface (two-sphere) of last scattering. The furthest matter we can observe can be influenced by matter even further out, but such indirect effects are limited by the *particle horizon*: the furthest matter that can have had causal influence on us via influences travelling at speeds limited by the speed of light since the start of the Universe.
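The particle horizon can be made quantitative with a one-line integral. The following minimal sketch (in Python, with illustrative Planck-like flat $\Lambda$CDM parameters, not values taken from the text) computes the comoving particle horizon $\chi_p = c\int_0^\infty dz/H(z)$:

```python
# Comoving particle horizon chi_p = c * int_0^inf dz / H(z) for flat LCDM.
# Parameter values are illustrative (Planck-like), not from the text.
import numpy as np
from scipy.integrate import quad

c = 299792.458                     # speed of light in km/s
H0, Om, Or = 67.4, 0.315, 9.2e-5   # H0 in km/s/Mpc; matter, radiation
OL = 1.0 - Om - Or                 # flatness fixes the Lambda fraction

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

chi_p, _ = quad(lambda z: c / H(z), 0.0, np.inf)
print(f"comoving particle horizon ~ {chi_p / 1e3:.1f} Gpc")   # ~ 14 Gpc
```

With these numbers the result is roughly 14 Gpc of comoving distance (about 46 billion light years): matter beyond this cannot have had any causal influence on us.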
### Anomalies

Within theoretical cosmology there needs to be an adequate explanation of observational anomalies, which are bound to occur as we make ever more detailed models of the structures and their effects on the CMB. Geometric optics must be utilized, and model independent observations are sought. In general, data analysis and statistical methods are not discussed here. However, observations do, of course, lead to theoretical questions. Are there important neglected selection/detection effects [@Disney]; i.e., what else can exist that we have not yet seen or detected? Observations sometimes lead to ridiculous predictions (e.g., $w < -1$; phantom matter); care must be taken not to be led into unphysical parameter space. Appropriate explanations of observational anomalies may well lead to new fundamental physics and questions. The standard cosmology has been extremely successful in describing current observations, up to various possible anomalies and tensions [@tension], particularly some statistical features in the CMB [@planck2018] and the existence of structures on gigaparsec scales such as the cold spot and some super-voids [@Finelli]. Although primordial nucleosynthesis has been very successful in accounting for the abundances of helium and deuterium, lithium has been found to be overpredicted by a factor of about three [@lithum]. Lithium, along with deuterium, is destroyed in stars, and consequently its observation constitutes evidence (and a measure) of the primordial abundance after any appropriate corrections. To date there have been some claims of relief in this tension, but there is no satisfactory resolution of the lithium problem.

A seldom asked question is whether the CMB and matter dipoles are in agreement [@EllisBaldwin]. Tests of differential cosmic expansion on such scales rely on very large distance and redshift catalogues, which are noisy and subject to numerous observational biases that must be accounted for. In addition, ideally any test should be performed in a model independent manner, which requires removing the FLRW assumptions that are often taken for granted in many investigations. To date, such a model independent test has been performed for full sky spherical averages of local expansion [@WiltshireHubble], using the COMPOSITE and Cosmicflows-II catalogues; it was found, with very strong Bayesian evidence, that the spherically averaged expansion is significantly more uniform in the rest frame of the Local Group (LG) of galaxies than in the standard CMB rest frame. It was subsequently shown that this result is consistent with Newtonian N-body simulations in the standard cosmology framework [@Kraljic]. The future of such tests is discussed in [@Maartens_dipole], which concludes that the amplitude of the matter dipole can be significantly larger than that of the CMB dipole. Its redshift dependence encodes information on the evolution of the Universe and on the tracers. Perhaps more controversially, it has also been suggested that a “dark flow” may be responsible for part of the motion of large objects that has been observed. An analysis of the local bulk flow of galaxies indicates a lack of convergence to the CMB frame beyond 100 Mpc [@Kashlinsky], which contradicts standard cosmological expectations. Indeed, there is an anomalously high and approximately constant bulk flow of roughly 250 km/s extending all the way out to the Shapley supercluster at approximately 260 Mpc, as indicated by low redshift supernova data.
Furthermore, this discrepancy has been confirmed by 6dF galaxy redshift data [@Sarkar].

### Tension in the Hubble constant

The recent determination of the local value of the Hubble constant, based on direct measurements of supernovae made with the Hubble Space Telescope [@R16], is now $3.3$ standard deviations higher than the value derived from the most recent data on the temperature power spectrum of the CMB provided by the Planck satellite in a $\Lambda$CDM model. Although it is unlikely that there are no systematic errors (the value of the Hubble constant has historically been a source of controversy), the difference might be a pointer towards new physics [@tensionRiess], so this is perhaps the most important anomaly that needs to be addressed. Although a large number of authors have proposed several different mechanisms to explain this tension, after three years of improved analyses and data sets the tension in the Hubble constant between the various cosmological datasets not only persists but has become even more statistically significant. The recent analysis of [@R16] found no compelling arguments to question the validity of the dataset used. Indeed, the determination of the local value of the Hubble constant by Riess [*[et al.]{}*]{} in 2016 [@R16] of $H_0=73.24 \pm 1.74 \,{\rm km\,s^{-1}\,Mpc^{-1}}$ at the $68 \%$ confidence level is about $3$ standard deviations higher than the (global) value derived from the earlier 2015 CMB anisotropy data provided by the Planck satellite assuming a $\Lambda$CDM model [@planck2015]. This tension only gets worse when we compare the Riess [*[et al.]{}*]{} 2018 value of $H_0=73.52 \pm 1.62 \,{\rm km\,s^{-1}\,Mpc^{-1}}$ [@R18] to the Planck 2018 value of $H_0=67.27 \pm 0.60 \,{\rm km\,s^{-1}\,Mpc^{-1}}$ [@planck2018]. In order to investigate possible solutions to the Hubble constant tension, a number of proposals have been made [@VMS1]. For example, in [@Carneiro] it was shown that the best fit to current experimental results includes an additional fourth, sterile, neutrino family with a mass of order an eV, as suggested by flavour oscillations. This would imply an additional relativistic degree of freedom ($N_{eff} = 4$) in the standard model, which may alleviate the $H_0$ tension. Recently it was argued that GW could represent a new kind of standard “sirens” that will allow $H_0$ to be constrained in a model independent way [@siren]. It is unlikely that inhomogeneities and cosmic variance can resolve the tension [@Macpherson]. However, there are suggestions that the emergence of spatial curvature may alleviate the tension [@Bolejko18; @CCCS; @Brand; @Macpherson; @Ryan2019]. Any definitive measurement of a non-zero spatial curvature would be crucial in cosmology. We will revisit this later.

Problems in theoretical cosmology
=================================

Acceleration: dark energy
-------------------------

The most fundamental questions in cosmology, perhaps, concern dark matter and dark energy, both of which are ‘detected’ by their gravitational interactions but cannot be directly observed [@Martin19]. Indeed, the dark energy problem is believed to be one of the major obstacles to progress in theoretical physics [@Witten2001; @Steinhardt]. Weinberg discussed the [*[cosmological constant problem]{}*]{} in detail [@Weinberg1989]. Conventional quantum field theory (QFT) predicts an enormous energy density for the vacuum.
However, the GR equivalence principle asserts that all forms of mass and energy gravitate in an identical manner, which implies that the vacuum energy is gravitationally equivalent to a cosmological constant and would consequently have a huge effect on the spacetime curvature. But the observed value of the effective cosmological constant is so very tiny (in comparison with the predictions of QFT) that a “bare” cosmological constant, whose origin is currently not known, is necessary to cancel this enormous vacuum energy to a precision of at least one part in $10^{120}$. This impossibly difficult fine-tuning problem becomes even worse if we include higher-order loop corrections [@Padilla]. A number of authors, including Weinberg, have offered the opinion that of all of the possible solutions to the dark energy problem, perhaps the most reasonable is the anthropic bound, which is itself very controversial [@Weinberg1987]. However, another possibility is that the quantum vacuum does not gravitate. This will be true if the real gravitational theory is unimodular gravity, leading to the trace-free EFE [@tracefree].

Furthermore, the expansion of the Universe has been accelerating for the last few billion years [@Riess; @Perlmutter]. Within the paradigm of standard cosmology, it is usually proposed that this acceleration is caused by a so-called [*[dark energy]{}*]{}, which effectively has the same properties as a very small cosmological constant (which acts as a repulsive gravitational force in GR). An additional problem is the [*[cosmological coincidence problem]{}*]{}: why is the particular small observed value of the cosmological constant currently of a similar magnitude to that of the matter density in the Universe? In particular, it is often postulated that dark energy is not due to a pure cosmological constant, but that dynamical models such as, for example, quintessence and phantom energy scalar field models are more reasonable. Alternative explanations for these gravitational effects have been proposed within theories with modified gravity on large scales, which consequently do not necessitate new forms of matter. The possibility of an effective acceleration of the Universe due to backreaction has also been discussed.

Acceleration: inflation
-----------------------

Inflation is a central part of modern theoretical physics. The assumption of zero spatial curvature ($k = 0$) is certainly well motivated in the standard model by inflation. Before the development of inflation, it was already known that a scale invariant (Harrison-Zeldovich) power spectrum is a good fit to the data. But its origin was mysterious, and there was no convincing physical mechanism to explain it. However, inflation naturally implies this property as a result of cosmological perturbations of quantum mechanical origin. Moreover, it allows a bridge to be built between theoretical considerations and actual astrophysical measurements. One fundamental assumption of inflation is that, initially, the quantum perturbations are placed in the vacuum state [@Martin]. As noted earlier, models with a positive cosmological constant are asymptotic at late times to the inflationary de Sitter spacetime [@Friedrich1986; @Wald83]. Scalar field models with an increasing rate of (volume) expansion are also future inflationary. For models with an exponential potential, global asymptotic results can be obtained [@Coleybook; @exppot].
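For the exponential potential just mentioned, the late-time attractor can be exhibited numerically. The following is a minimal sketch (in Python, units $8\pi G = c = 1$, with illustrative initial data) integrating the Klein-Gordon and Friedmann equations for $V(\phi) = V_0 e^{-\lambda\phi}$; the solution is drawn onto the well-known power-law attractor with equation of state $w = \lambda^2/3 - 1$, inflationary for $\lambda^2 < 2$:

```python
# Scalar field with exponential potential in flat FLRW (units 8*pi*G = 1):
#   phi'' + 3 H phi' + V'(phi) = 0,   H^2 = (phi'^2/2 + V)/3.
# lambda, V0 and the initial data are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

lam, V0 = 1.0, 1.0

def V(phi):
    return V0 * np.exp(-lam * phi)

def rhs(t, y):
    phi, dphi = y
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)
    return [dphi, -3.0 * H * dphi + lam * V(phi)]   # -V'(phi) = +lam*V(phi)

sol = solve_ivp(rhs, [0.0, 200.0], [0.0, 1.0], rtol=1e-10, atol=1e-12)
phi, dphi = sol.y[:, -1]
w = (0.5 * dphi**2 - V(phi)) / (0.5 * dphi**2 + V(phi))
print(f"late-time w = {w:+.3f}   attractor w = {lam**2/3 - 1:+.3f}")
```

Changing $\lambda$ moves the attractor: for $\lambda^2 > 2$ the late-time expansion is power-law but non-inflationary, consistent with the global asymptotic results cited above.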
Inflationary behavior is also possible in scalar field models with a power law potential, but it typically occurs during an intermediate epoch rather than asymptotically to the future. Local results in this case are possible, but they are difficult to obtain and this problem is usually studied numerically. There are a number of fundamental questions, which include the following. What exactly is the conjectured inflaton? What are the precise physical details of [*[cosmic inflation]{}*]{}? If inflation is self-sustaining due to the amplification of fluctuations in the quantum regime, is it still taking place in some (distant) regions of the Universe? And, if so, does inflation consequently give rise to an infinite number of “bubble universes”? In this case, under what (initial) conditions can such a [*[multiverse]{}*]{} exist? An investigation of “bubble universes”, in which our own Universe is but one of many that nucleate and grow within an ever-expanding false vacuum, has been undertaken (primarily computationally). For example, the interactions between such bubbles were investigated in [@bubbles].

Cosmological inflation is usually taken as a reasonable explanation for the fact that the Universe is apparently more uniform on larger scales than is anticipated within the standard cosmology (the [*[horizon problem]{}*]{}). However, there are other possible explanations. But how does inflation start? And, perhaps most importantly, how generic is the onset of inflation for generic spatially inhomogeneous initial data? We note that a rigorous formulation of this question is problematic, because there are so many different inflationary theories and because there are no “natural” conditions for the initial data. However, any such natural initial conditions are expected to contain some degree of inhomogeneity [[^6]]{}. Unfortunately, such initial data do not necessarily lead to inflation. Although it is known that large field inflation can occur for simple inhomogeneous initial data (at least for initial data with substantial gradient energies and when the inflaton field is on the inflation-supporting portion of the potential to begin with), it has also been shown that small field inflation is significantly less robust in the presence of inhomogeneities [@infl] (also see [@bubbles] and [@infl2]).

### Alternatives to inflation

Although inflation is the most widely accepted mechanism for the generation of almost scale invariant (and nearly Gaussian adiabatic density) fluctuations to explain the origin of structure on large scales, possible alternatives include GR spikes [@art:ColeyLim2012], conformal cyclic cosmology [@Penrosebooks] and QG fluctuations [@Hamber]. In particular, Penrose has argued that, since inflation fails to take fully into account the huge gravitational entropy that would be associated with black holes in a generic spacetime, inflation is incredibly unlikely to start, and smooth out the universe, if its initial state is generic [@Penrosebooks]. In addition, in the approach of [@Hamber], results from non-perturbative studies of QG regarding the large distance behavior of gravitational and matter two-point functions are utilized; non-trivial scaling dimensions exist due to a nontrivial ultraviolet renormalization group fixed point in 4D, motivating an explanation for the galaxy power spectrum based on a non-perturbative quantum field-theoretical treatment of GR.
Perhaps the most widely accepted alternative to inflation for obtaining successful structure formation consistent with current observations [@beyond] is the matter bounce scenario, in which new physics resolves the cosmological singularity.

### Bouncing models revisited

Bouncing models include the ekpyrotic and emergent string gas scenarios [@beyond]. The [*[ekpyrotic]{}*]{} scenarios [@Khoury] are bouncing cosmologies which avoid the problems of the anisotropy and overproduction of GW in the matter bounce scenario, since the dynamics of the contracting phase is governed by a matter field (e.g., a scalar field with a negative exponential potential) whose energy density increases faster than the contribution of anisotropies. In ekpyrotic scenarios, in which the bounce is not necessarily symmetric, fluctuations on all currently observable scales start inside the Hubble radius at earlier times, leading to structure that is formed causally; this solves the horizon problem in the same way as in standard big bang cosmology and in the usual matter bounce. But, contrary to the matter bounce scenario, during contraction the growth of fluctuations on super-Hubble scales is too weak to produce a scale-invariant spectrum from an initial vacuum state, leading instead to a blue spectrum of curvature fluctuations and GW [@beyond]. Therefore, initial vacuum perturbations cannot describe the observed structures in the Universe. In addition, a negligible amplitude of GW is predicted on cosmological scales. However, a scale invariant spectrum of curvature fluctuations can be obtained by using primordial vacuum fluctuations in a second scalar field in the ekpyrotic scenario [@Notari].

Another alternative to cosmological inflation is the [*[emergent string gas]{}*]{} scenario [@BrandenbergerVafa], based on a possible extended quasi-static period in the very early Universe dynamically governed by a thermal gas of fundamental strings, after which there is a transition to the expanding radiation phase of standard cosmology. The thermal fluctuations of a gas of closed strings on a toroidal compact space, which do not originate quantum mechanically (unlike in most models of inflation), then produce a scale-invariant spectrum of curvature fluctuations and of GW; the tilts of these are predicted to be red and slightly blue, respectively, the former being the same as in inflationary cosmology and the latter in contrast to it. We should note that although some of the alternatives to inflation are suggested by ideas motivated by QG, it is also of interest to know whether inflation occurs naturally within QG. We will discuss this later.

The physics horizon and Synge’s g-method
----------------------------------------

[*[The physics horizon:]{}*]{} The basic problem as regards inflation, and any attempt to model what happened at earlier times in the history of the Universe, is that we run into the *physics horizon*: we simply do not know what the relevant physics was at those early times. The reason is that we cannot construct particle colliders that extend to such high energies. Thus we are forced either to extrapolate tested physics at lower energies to these higher energies, with the outcome depending on which aspect of lower energy physics we decide to extrapolate (because we believe it is more fundamental than other aspects), or to make a phenomenological model of the relevant physics.
[*[Synge’s g-method:]{}*]{} A very common phenomenological method is *Synge’s g-method*: running the EFE backwards [@EllMaaMac]. That is, in eqn. (\[eq:EFE\]), instead of trying to solve it from right to left (given a matter source, find a metric $\textbf{g}$ that corresponds to that matter source), rather choose the metric and then find the matter source that fits. That is, select a metric $\textbf{g}$ with some desirable properties, calculate the corresponding Ricci tensor $R_{ab}$ and Einstein tensor $G_{ab}$, and then use (\[eq:EFE\]) to find the matter tensor $T_{ab}$ so that (\[eq:EFE\]) is identically satisfied, and *voilà!* we have an exact solution of the EFE that has the desired geometric properties. No differential equations have to be solved. The logic is: via (\[eq:EFE\]), $$\label{eq:synge} \{g_{ab}\} \Rightarrow \{R_{ab}\} \Rightarrow \{G_{ab}\} \Rightarrow \{T_{ab}\}.$$ One classic example is choosing an inflationary scale factor $a(t)$ that leads to structure formation in the early Universe that agrees with observations. We can then run the EFE backward as in (\[eq:synge\]) to find a potential $V(\phi)$ for an effective scalar field $\phi$ that will give the desired evolution $a(t)$. It is a theorem that one can almost always find such a potential [@Ell_Mads], essentially because the energy-momentum conservation equations are in that case equivalent to the Klein-Gordon equation for the field $\phi$; but there is no real physics behind claims of the existence of such a scalar field. It has not been related to any matter or field that has been demonstrated to exist in any other context.

Dynamical behaviour of cosmological solutions
---------------------------------------------

The dynamical laws governing the evolution of the universe are the classical EFE. It is of interest to study exact cosmological solutions, and especially spatially inhomogeneous cosmologies [@krameretal], and their qualitative and numerical behaviour. Dynamical systems representations of the evolution of cosmological solutions are very useful [@Collinsandellis; @WE]. In particular, it is of interest to extend stability results to the study of cosmological models with matter and to the case of a non-zero cosmological constant [@Dotti].

### Stability of cosmological solutions

This concerns the question of whether solutions of the EFE under small perturbations evolve in a way qualitatively similar to the underlying exact cosmological solution (e.g., when small-scale fluctuations are included). This problem involves the investigation of the (late time) behavior of a complex set of partial differential equations about a specific cosmological solution [@AnderssonMoncrief]. The asymptotic behaviour of solutions in cosmology was reviewed in [@WE]. We note that, for a vanishing cosmological constant and matter that satisfies the usual energy conditions, spatially homogeneous spacetimes of (general) Bianchi type IX recollapse and consequently do not expand forever. This result is formalized in the so-called “closed universe recollapse conjecture” [@BarrowTipler], which was proven in [@LinWald]. However, Bianchi type IX spacetimes need not recollapse when a positive cosmological constant is present. The study of the stability of de Sitter spacetime for generic initial data is very important, particularly within the context of inflation (although, as noted earlier, precise statements concerning the generality of inflation are problematic).
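The role of the cosmological constant in the recollapse behaviour just described can be illustrated in the simplest closed ($k = +1$) dust FLRW model. The following is a minimal sketch (in Python, units $8\pi G = c = 1$, with illustrative initial data): with $\Lambda = 0$ the model reaches a maximum size and recollapses, while a sufficiently large positive $\Lambda$ allows it to expand forever:

```python
# Closed (k = +1) dust FLRW model (units 8*pi*G = c = 1, illustrative data):
#   adot^2 = rho0/(3a) - 1 + Lam*a^2/3   (Friedmann constraint)
#   addot  = -rho0/(6a^2) + Lam*a/3      (Raychaudhuri equation)
import numpy as np
from scipy.integrate import solve_ivp

rho0 = 6.0                          # dust density at a = 1 (illustrative)

def run(Lam, t_max=50.0):
    def rhs(t, y):
        a, adot = y
        return [adot, -rho0 / (6.0 * a**2) + Lam * a / 3.0]
    def crunch(t, y):               # stop once a falls back below its start
        return y[0] - 0.99
    crunch.terminal, crunch.direction = True, -1
    adot0 = np.sqrt(rho0 / 3.0 - 1.0 + Lam / 3.0)   # from the constraint
    return solve_ivp(rhs, [0.0, t_max], [1.0, adot0],
                     events=crunch, rtol=1e-10, atol=1e-12)

for Lam in (0.0, 1.0):
    sol = run(Lam)
    verdict = "recollapses" if sol.t_events[0].size else "expands forever"
    print(f"Lambda = {Lam}: a_max ~ {sol.y[0].max():.3g}, {verdict}")
```

This is the FLRW analogue of the Bianchi type IX statement above: without $\Lambda$ the closed dust model traces out a cycloid and recollapses, while a positive $\Lambda$ above a critical value leads to eternal, eventually de Sitter-like, expansion.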
### Stability of de Sitter spacetime

A stability result for de Sitter spacetime (vacuum with a positive cosmological constant) for small generic initial data was proven in [@Friedrich1986]. Therefore, de Sitter spacetime is a local attractor for expanding cosmologies containing a positive cosmological constant. In addition, it was proven that any expanding spatially homogeneous model (in which the matter obeys the strong and dominant energy conditions) that does not recollapse is future asymptotic to an isotropic de Sitter spacetime [@Wald83]. This so-called “cosmic no hair” theorem is independent of the particular matter fields present. The remaining question is whether general, initially expanding, cosmological solutions corresponding to initial data for the EFE with a positive cosmological constant and physical matter exist globally in time. It is known that this is indeed the case for a variety of matter models (utilizing the methods of [@Rendall95]). Global stability results have also been proven for inflationary models with a scalar field with an exponential potential [@Coleybook; @exppot]. It is, of course, of considerable interest to investigate the cosmic no-hair theorem in the inhomogeneous case. A number of partial results are known in the case of a positive cosmological constant [@Jensen].

The possible quantum instability of de Sitter spacetime has also been investigated. In a semi-classical analysis of backreaction in an expanding universe with a conformally coupled scalar field and a cosmological constant, it was advocated that de Sitter spacetime is unstable to quantum corrections and might, in fact, decay. In principle, this could consequently provide a mechanism that might alleviate the cosmological constant problem and also, perhaps, the fine-tuning problems that occur for the very flat inflationary potentials that are necessitated by observations. In particular, it has been suggested that de Sitter spacetime is unstable due to infrared effects, in that the backreaction of super-Hubble scale GW could contribute negatively to the effective cosmological constant and thereby cause the latter to diminish. Indeed, from an investigation of the backreaction effect of long wavelength cosmological perturbations, it was found that at one-loop order super-Hubble cosmological perturbations do give rise to a negative contribution to the cosmological constant [@infra]. It has consequently been proposed that this backreaction could then lead to a late time scaling solution for which the contribution of the cosmological constant tracks the contribution of the matter to the total energy density; that is, the cosmological constant obtains a negative contribution from infrared fluctuations whose magnitude increases with time [@Brand].

### The nature of cosmological singularities

Although the singularity theorems imply that singularities occur generally in GR, they say very little about their nature [@Senovilla2012]. For example, singularities can occur in tilted Bianchi cosmologies in which all of the scalar quantities remain finite [@EllisandKing]. However, such cosmological models are likely not generic. Belinskii, Khalatnikov and Lifshitz (BKL) [@art:LK63] have conjectured that within GR, for a generic inhomogeneous cosmology, the approach to the spacelike singularity into the past is vacuum dominated, local and oscillatory, obeying the so-called BKL or mixmaster dynamics.
In particular, due to the non-linearity of the EFE, if the matter is not an effective massless scalar field then, sufficiently close to the singularity, [*[all matter terms can be neglected]{}*]{} in the FE relative to the dynamical anisotropy. BKL have confirmed that the assumptions they utilized are consistent with the EFE. However, this does not imply that their assumptions are always valid in general situations of physical interest. Numerical simulations have recently been used to verify the BKL dynamics in special classes of spacetimes [@Berger; @DavidG]. Rigorous mathematical results on the dynamical behaviour of Bianchi type VIII and IX cosmological models have also been presented [@bianchi]. Up to now there have essentially been three main approaches to investigating the structure of generic singularities: the original heuristic BKL metric approach, the so-called Hamiltonian approach, and the dynamical systems approach. The dynamical systems approach [@WE], in which the EFE are reformulated as a scale invariant asymptotically regularized dynamical system (i.e., a first order system of autonomous ordinary or partial differential equations) in the approach towards a generic spacelike singularity, allows for a more mathematically rigorous study of cosmological singularities. A dynamical systems formulation of the EFE (in which no symmetries were assumed a priori) was presented in [@Uggla03], which resulted in a detailed description of the generic attractor, precisely formulated conjectures concerning the asymptotic dynamical behavior toward a generic spacelike singularity, and a well-defined framework for the numerical study of cosmological singularities [@Andersson]. It should be noted that these studies assume that the singularity is spacelike, but there is no reason that this has to be so (a spacelike singularity is not, in fact, generic). The effect of GR spikes on the BKL dynamics and on the initial cosmological singularity was reviewed in [@MathGR].

### Isotropic singularity

Penrose [@Penrosebooks] has utilized entropy considerations to motivate the “Weyl curvature hypothesis”, which asserts that on approach to an initial cosmological singularity the Weyl curvature tensor should tend to zero or at least remain bounded (this conjecture subsequently led to the conformal cyclic cosmology proposal). It is difficult to represent this proposal mathematically, but the clearly formulated geometric condition presented in [@Goode], that the conformal structure should remain regular at the singularity, is closely related to the original Penrose proposal. Such singularities are called isotropic or conformal singularities. It is known [@Claudel] that solutions of the EFE for a radiation perfect fluid that admit an isotropic singularity are uniquely characterized by particular free data specified at the singularity. The required data is essentially half as much as the data necessary in the case of a regular Cauchy hypersurface. This result was generalized to the case of a perfect fluid with a linear equation of state [@Anguige], and can be further extended to more general matter models (e.g., more general fluids and a collisionless gas of massless particles) [@Rendall2002]. As noted earlier, we do not aim to discuss alternative theories of gravity in this review. However, it is of cosmological interest to determine whether isotropic singularities are typical in modified theories of gravity.
For example, the past stability of the isotropic FLRW vacuum solution, on approach to an initial cosmological singularity, has been investigated in the class of theories of gravity containing higher-order curvature terms in the GR Lagrangian [@Middleton]. In particular, a special isotropic vacuum solution was found to exist which behaves like a radiative FLRW model and which, unlike in GR, is stable into the past against small anisotropies and inhomogeneities. Exact solutions with an isotropic singularity for specific values of the perfect fluid equation of state parameter have also been obtained in a higher dimensional flat anisotropic Universe in Gauss-Bonnet gravity [@Kirnos]. A number of simple cosmological solutions of theories of gravity containing a quadratic Ricci curvature term in the Einstein-Hilbert Lagrangian have also been investigated [@BarrowHervik].

Problems in physical cosmology
==============================

The predicted distribution of *dark matter* in the Universe is based on observations of galaxy rotation curves, nucleosynthesis estimates and computations of structure formation [@Freese]. The nature of the missing dark matter is not yet known (e.g., whether it is due to a particle, or whether the dark matter phenomenon is characterized not by any type of matter but rather by a modification of GR). But it is, in general, anticipated that this particular problem will be explained within conventional physics. More recently, primordial black holes have been invoked to explain the missing dark matter and to alleviate some of the problems associated with the CDM scenario (see later) [@BernardCarr].

Origin of structure
-------------------

The CMB anisotropies and structure observed on large angular scales are computed using linear perturbations about the standard background cosmological model. However, such large scale structure could never have been in causal contact within conventional cosmology, and hence its origin cannot be explained by it. In general, the testable predictions of inflationary models are nearly scale-invariant and nearly Gaussian adiabatic density fluctuations, together with an almost, but not exactly, scale-invariant stochastic background of relic GW. However, and as noted earlier, possible alternatives to inflation to obtain successful structure formation consistent with current observations [@beyond] exist, including the popular matter bounce cosmologies.

### Large scale structure of the Universe

In the standard cosmology it is assumed that cosmic structure at sufficiently large scales grew out of small initial fluctuations at early times, and that we can study their evolution within (cosmological) linear perturbation theory (LPT) [@Durrer1996]. We assume that on large scales there is a well-defined mean density, and that on intermediate scales the density differs little from it. This is a highly non-trivial assumption, which is perhaps justified by the isotropy of the CMB. It is usual to use a fluid model for matter and a kinetic theory model for radiation. At late times and sufficiently small scales, fluctuations of the cosmic density are not small. The density inside a galaxy cluster is about two orders of magnitude greater than the mean density of the Universe, and LPT is then not adequate to study structure formation on galaxy-cluster scales of a few Mpc and less. It is necessary to treat clustering non-linearly using N-body simulations.
Since this is mainly relevant on scales much smaller than the Hubble scale, it has usually been studied in the past with non-relativistic N-body simulations. On intermediate to small scales, density perturbations can become large. But the gravitational potentials remain small: even inside a galaxy cluster the motion of galaxies is essentially decoupled from the Hubble flow (i.e., clusters do not expand), the gravitational potential of a galaxy remains small, and in the Newtonian (longitudinal) gauge metric perturbations remain small. In the past, this, together with the smallness of peculiar velocities, has been used to argue that Newtonian N-body simulations are sufficient.

In the adiabatic case, the last scattering surface is a surface of constant baryon density, so the observed CMB fluctuations do not represent density fluctuations, as is often stated [@Durrer]. Thus, in standard perturbation theory language, in the uniform density gauge (which for adiabatic perturbations is the same as the uniform temperature gauge) the density fluctuations are given exactly by the redshift fluctuations. In the non-adiabatic case this will no longer be true. The main shortcoming of the conventional analysis is, of course, the instantaneous recombination approximation (accurate to a few percent only for multipoles with $\ell<100$); to go beyond this one has to use a Boltzmann approach [@Durrer] (although nothing changes conceptually). Also, in principle we cannot neglect radiation or the velocities of neutrinos (even massive ones). In addition, Newtonian simulations only consider one (of, in general, six) degrees of freedom, and observations are made on the relativistic, perturbed light cone. Hence relativistic calculations are needed.

### Perturbation theory

The complexity of the distribution of the actual matter and energy in our observed Universe, consisting of stars and galaxies that form clusters and superclusters of galaxies across a broad range of scales, cannot be described within the standard model. To do this we must be able to describe spatial inhomogeneity and anisotropy using a perturbative approach, starting from the uniform FLRW model as a background solution [@MalikWands]. The perturbations “live” on the four-dimensional background spacetime, which is split into three-dimensional spatial hypersurfaces utilizing a (1+3) decomposition. Within the standard cosmology a flat background spatial metric ($k=0$) in LPT is assumed, which is consistent with current observations. For generalisations to spatially hyperbolic or spherical FLRW models see, e.g., [@KodamaSasaki].

The introduction of a spatially homogeneous background spacetime to describe the inhomogeneous Universe leads to an ambiguity in the choice of coordinates. Selecting a set of coordinates in the (real) inhomogeneous Universe, which will then be described by an FLRW model plus perturbations, essentially amounts to the assignment of a mapping between spacetime points in the inhomogeneous Universe and the spatially homogeneous background model. The freedom in this selection is the gauge freedom, or gauge problem, in GR perturbation theory. The gauge freedom must either be handled very carefully, by delineating what freedom remains at each stage of coordinate specialisation [@SachsWolfe], or removed by using gauge-invariant variables [@Bardeen] or 1+3 gauge-invariant and covariant variables [@gaugeinv].
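As a concrete illustration of these gauge-invariant constructions (standard linear theory, in one common convention, following e.g. [@Bardeen; @MalikWands]): for scalar perturbations the perturbed FLRW metric in the longitudinal (Newtonian) gauge can be written as

$$ds^2 = a^2(\eta)\left[-(1+2\Phi)\,d\eta^2 + (1-2\Psi)\,\delta_{ij}\,dx^i dx^j\right],$$

where $\eta$ is conformal time; in this gauge $\Phi$ and $\Psi$ coincide with the gauge-invariant Bardeen potentials, and for matter with negligible anisotropic stress the EFE imply $\Phi = \Psi$.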
Indeed, gauge-invariant variables are widely utilized since they constitute a theoretically effective way to extract predictions from a gravitational field theory applied to the Universe for large-scale linear evolution [@KodamaSasaki]. In addition, by using gauge-invariant variables the analysis is reduced to the study of only three decoupled second-order ordinary differential equations, and the variables represent physical quantities that can be immediately connected to observations. In the review [@MalikWands] the focus was on how to construct a variety of gauge-invariant variables to deal with perturbations in different cosmological models at first order and beyond. Most work to date has been done only to linear order, where the perturbations obey linear FE.

As a theoretical application, the origin of primordial curvature and isocurvature perturbations from field perturbations during inflation in the very early Universe can be considered. LPT allows the primordial spectra to be related to quantum fluctuations in the metric and matter fields at considerably higher energies. In the simplest single-field inflationary models it is, in fact, possible to equate the primordial density perturbation with the curvature perturbation during inflation, which essentially remains constant on very large scales for adiabatic density perturbations. The observed power spectrum of primordial perturbations revealed by the CMB and LSS is thus a powerful probe of inflationary models of the very early Universe. The outstanding problems within LPT are mostly technical issues and, in particular, include the important questions of the physical cut-offs for the short and long wavelength modes and the convergence of the perturbations (and hence the validity of the perturbative approach itself).

### Non-linear perturbations

The new frontier in cosmological perturbation theory is the investigation of non-linear primordial perturbations, at second order and beyond. Although the simple evolution equations obtained at linear order can be extended to non-linear order [@MalikWands], the non-linearity of the EFE becomes evident, and consequently the resulting definitions of gauge-invariant quantities at second order become more complicated than those at first order. Recently, perturbations at second order [@MalikWandssecond] and, more generally, non-perturbative effects have been studied, where there are certainly more foundational problems. Perturbative methods allow quantitative statements but have limited domains of validity.

Recently, several groups have started to develop relativistic simulations [@Computational]. Numerical relativistic N-body simulations are a unique tool to study more realistic scenarios, and appear to compare well to numerical relativity fluid simulations [@EastWojtak]. However, assumptions are still made that need to be verified. In particular, care must be taken in applying Newtonian intuition to GR. For example, [@Adamek] do not solve the full EFE; they use the fact that the gravitational potential is very small, although its spatial derivatives, and in particular its second derivatives, are not. Therefore, when computing the Einstein tensor they go only to first order in the gravitational potentials and their time derivatives (but also include terms quadratic in first spatial derivatives, and all orders in second spatial derivatives). Qualitatively new effects occur beyond linear order.
The non-linearity of the FE inevitably leads to mixing between scalar, vector and tensor modes, and the existence of primordial density perturbations consequently generates vector and tensor modes. Non-linearities then permit additional information to be determined from the primordial perturbations. A lot of effort is currently being devoted to the investigation of higher order correlations (and issues of gauge dependence). Non-Gaussianity in the primordial density perturbation distribution would uncover interactions beyond the linear theory. Such interactions are minimal (suppressed by slow-roll parameters) in the simplest single field inflation models, so any detection of primordial non-Gaussianity would cause a major reassessment of our knowledge of the very early Universe. In principle, this approach can be easily extended to higher orders, although large primordial non-Gaussianity is expected to dominate over non-linearity in the transfer functions.

However, cosmological perturbation theory based on a cosmological $1+3$ split is ill-suited to address important questions concerning non-linear dynamics or to evaluate the viability of scenarios based on classical modifications of GR. A new formulation of a fully non-perturbative approach has been advocated [@Ijjas], along with a gauge fixing protocol that enables the study of these issues (and especially the linear mode stability in spatially homogeneous and nearly homogeneous backgrounds) in a wide range of cosmological scenarios; it rests on a method, based on the generalized harmonic formulation of the EFE, that has been successful in analyzing dynamical systems in mathematical and numerical GR.

### Non-linear regime

At non-linear order a variety of different effects come into play, including gravitational lensing of the source by the intervening matter and the fact that redshift is affected by peculiar motion, both of which have relatively simple Newtonian counterparts. But there are a host of complicated relativistic corrections once light propagation is worked out in more detail. There are selection effects too: we are much more likely to observe sources in halos, some objects are obscured from view by bright clusters, and so on. Within the context of perturbation theory it is relatively easy to predict the expectation value of the bias in the Hubble diagram for a random direction [@Fleury17]. The full second-order correction to the distance-redshift relation has been calculated within cosmological perturbation theory, yielding the observed redshift and the lensing magnification to second order, appropriate for most investigations of dark energy models [@Umeh14]. These results were used in [@ClarksonUmehMaartens] to calculate the impact of second-order perturbations on the measurement of the distance to the last-scattering surface, where relativistic effects can lead to significantly biased measurements of the cosmological parameters at the sub-percent to percent level if they are neglected. The somewhat unexpected percent-level amplitude of this correction was discussed in [@Bonvin15], but the focus therein was on the effect of gravitational lensing only, and thus did not consider the perturbations of the observed redshift, notably due to peculiar velocities, which can lead to a further bias in parameter estimation.
In addition, [@Ben-Dayan13] noted that the notion of average adapted to observations of the Hubble diagram may differ from the most common angular or ensemble averages, and suggested a possible non-perturbative way of computing the effects of inhomogeneities on observations based on light-like signals, using the geodesic light-cone gauge to explicitly solve the geodesic-deviation equation. In order to comprehensively address the issue of the bias of the distance-redshift relation, previous work was improved upon by fully evaluating the effect of second-order perturbations on the Hubble diagram [@Fleury17]. In particular, the notion of average relevant to the bias in observations of the Hubble diagram in an inhomogeneous Universe was carefully derived, and its bias at second order in cosmological perturbations was calculated. It was found that this bias considerably affects direct estimations of the evolution of the cosmological parameters, and particularly the equation of state of dark energy. Despite the fact that the bias effects can reach the percent level on some parameters, errors in the standard inference of cosmological parameters remain less than the uncertainties in observations [@Fleury17].

In further work [@Adamek18], a non-perturbative and fully relativistic numerical calculation of the observed luminosity distance and redshift for a realistic cosmological source catalog in a standard cosmology was undertaken to investigate the bias and scatter, mainly due to gravitational lensing and peculiar velocities, in the presence of cosmic structures. The numerical experiments provide conclusive evidence that the non-linear relativistic evolution of inhomogeneities, once consistently combined with the kinematics of light propagation on the inhomogeneous spacetime geometry, does not lead to an unexpectedly large bias in the distance-redshift relation in an ensemble of cosmological sources. However, inhomogeneities introduce a significant non-Gaussian scatter that can give a large standard error on the mean when only a small sample of sources is available. But even for large, high-quality supernova samples this scatter can bias the inferred cosmological parameters at the percent level [@Adamek18]. It was argued in [@BenDayan14], using a fully relativistic treatment, that cosmic variance (i.e., the effects of local structures such as galaxy clusters and voids) is of a similar order of magnitude to current observational errors, and consequently needs to be taken into consideration in local measurements of the Hubble expansion rate within the standard cosmology. In addition, the constraint equation relating metric and density perturbations in GR is inherently non-linear, and leads to an effective and intrinsic non-Gaussianity in the dark matter density field on large scales (even when the primordial metric perturbation is itself Gaussian) [@Bartolo].

### Non-Gaussianities

In standard cosmology, the primordial perturbations corresponding to the seeds for the LSS are selected from a Gaussian distribution with random phases, justified observationally by the fact that primordial non-Gaussianity (PNG) has not yet been detected, and theoretically (e.g., by the central limit theorem); thus a Gaussian random field constitutes a satisfactory representation of the properties of density fluctuations.
However, any deviation from perfect Gaussianity will, in principle, reveal important information about the early Universe, and an investigation of PNG is especially relevant if these initial conditions were generated by some dynamical process such as, for example, inflation. In particular, a direct measurement of non-Gaussianity would permit us to move beyond the free-field limit, yielding important information about the degrees of freedom, the possible symmetries and the interactions characterizing the inflationary action. The current status of the modelling of, and the search for, PNG of cosmological perturbations was reviewed in [@Celoria].

In order to evaluate PNG from the early Universe to the present time, it is necessary to self-consistently calculate non-Gaussianity during inflation. We must then evolve scalar and tensor perturbations to second order outside the horizon, matching conserved second-order gauge-invariant variables to their values at the end of inflation (appropriately taking into account reheating). Finally, we need to investigate the evolution of the perturbations after they re-enter the Hubble radius, by computing the second-order radiation and matter transfer functions for the CMB and LSS, respectively. Although these calculations are very complicated, PNG represents an important tool to probe fundamental physics during inflation at energies of the order of the grand unified scale, since different inflationary models predict different amplitudes and shapes of the bispectrum; this complements the search for primordial gravitational waves (PGW) (via a stochastic GW background).

The Planck satellite has produced good measurements of higher-order CMB correlations, resulting in stringent constraints on PNG. The latest data regarding non-Gaussianity tested the local, equilateral, orthogonal (and various other) shapes for the bispectrum and led to new constraints on the primordial trispectrum parameter [@planck2018]. The most extreme possibilities have been excluded by CMB and LSS observations, and now primarily the detection of (or constraints on) mild or weak deviations from primordial Gaussian initial conditions is sought, characterized by a small parameter, $f_{NL}$, compatible with observations. Even though the sensitivity is not comparable to that of CMB data [@planck2018], the bispectra for redshift catalogues can be determined (e.g., the three-point correlation functions for the WiggleZ and BOSS spectroscopic surveys) [@GilMarin], and interesting observational bounds on the local $f_{NL}$ from current constraints on the power spectrum can be obtained (see [@Celoria] and references within). Neglecting complications arising from the breaking of statistical isotropy (such as sky-cuts, noise, etc.), the procedure is, in general, to fit the theoretical bispectrum template to the data; the predicted value of $f_{NL}$ is of order 0.01 in generic slow-roll inflation [@Gauss]. PNG is certainly the best way of practically investigating the only guaranteed prediction of inflation [@SilkLimits]. Indeed, even though standard models of slow-roll inflation only predict tiny deviations from Gaussianity (consistent with the Planck results), specific oscillatory PNG features can be indicative of particular string-theory models. Therefore, the search for PNG is of interest for theoretically well-motivated models of inflation, and the Planck results can potentially severely constrain a variety of classes of inflationary models beyond the simplest paradigm.
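For reference, the local shape mentioned above is conventionally defined by expanding the primordial (Bardeen) potential about a Gaussian field $\phi_G$ (the standard quadratic ansatz, e.g. as reviewed in [@Celoria]):

$$\Phi(\mathbf{x}) = \phi_G(\mathbf{x}) + f_{NL}\left[\phi_G^{2}(\mathbf{x}) - \langle \phi_G^{2}\rangle\right],$$

so that a non-zero $f_{NL}$ induces a bispectrum that peaks in the squeezed limit; the equilateral and orthogonal shapes are defined by analogous bispectrum templates peaking on different triangle configurations.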
However, only the failure to find any such evidence for PNG can falsify inflation. There are some outstanding issues regarding non-Gaussianity [@Celoria]. First, it has been argued that the consistency relation is not observable for single-field inflation since, in the strictly squeezed limit, this term can be gauged away by an appropriate coordinate transformation (the only residual term being of the same order as the amplitude of the tensor modes). Second, in the non-linear evolution of the matter perturbations in GR, the second-order dark matter dynamics leads to post-Newtonian-like contributions which mimic local PNG. A recent estimate of the effective non-Gaussianity due to GR light-cone effects, comparable to a PNG signal, was discussed in [@Celoria]; in the comoving gauge this would correspond to an $f_{NL}$ in the pure squeezed limit. Therefore, such a GR PNG signature may not be detectable via any cosmological observables.

### Simulations and post-Newtonian cosmological perturbations

There has been a lot of recent interest in testing the validity of GR using cosmological observables related to structure formation. Since the physics involved in horizon-sized cosmological perturbations is quite different from that which occurs on smaller scales, where galaxies and clusters of galaxies are present, this is challenging. LPT [@MalikWands] is not suitable for investigating gravitational fields associated with structures that have highly non-linear density contrasts (which necessarily have to be small in order for the perturbative expansion to be well defined). GR numerical simulations using, for example, the *gevolution* code developed by Adamek, Durrer and co-workers [@Adamek], have proven to be an important new tool for studying structure formation. Targeted fully relativistic non-linear simulations with an evolving non-zero spatial curvature have also been developed [@Bolejko18].

Alternatively, two-parameter post-Newtonian cosmological perturbation schemes have been proposed [@Goldberg]. Indeed, recent progress [@Sanghai] has been made in applying the techniques from post-Newtonian expansions of the gravitational FE to cosmology in the presence of highly non-linear structures, in order to relate the functions that parameterize gravity on non-linear scales to those that parameterize it on very large scales. This so-called parameterized post-Newtonian cosmology (PPNC) has been used to analyse alternative theories of gravity [@Sanghai]. This was achieved by simultaneously expanding all of the relevant equations in terms of two parameters: the first associated with the expansion parameter of LPT, and the second characterizing the order of smallness from post-Newtonian theory [@Goldberg]. An alternative Lagrangian-coordinates based approximation scheme that provides a unified treatment of the two leading-order regimes was presented in [@Rampf].

Black holes and gravitational waves
-----------------------------------

### Gravitational waves

Recent progress in numerical GR has allowed for a detailed investigation of the collision of two compact objects (such as, e.g., black holes and neutron stars). In such a violent inspiral an enormous amount of gravitational radiation is emitted. The detection and subsequent analysis of the gravitational wave (GW) signals produced by black hole mergers necessitate extremely accurate theoretical predictions that can be utilized as template waveforms to cross-correlate with the output of GW detectors.
This is, of course, of fundamental import in view of the recent LIGO observations [@LIGO]. Indeed, such an analysis led to the direct detection of GW by the LIGO-Virgo collaboration [@LIGO2]. To a large extent the numerical problem has been solved in the case of a black-hole merger, although the relatively simple properties of the two-body non-linear gravity waveforms [@YangPaschalidis] have not been fully understood mathematically. There is also the recent binary neutron star merger event, which is much more difficult to model within GR. There are a number of open problems, particularly concerning the physical nature of the recently observed merger events [@Barack]. GW astronomy will potentially play an increasingly important role within cosmology [@generalGWREFS]. For example, GW observations promise very good direct estimates of the distance to colliding black holes, avoiding the need for the usual cosmic distance ladder.

### Primordial gravitational waves

Primordial GW (PGW) add to the relativistic degrees of freedom of the cosmological fluid. Any change in the particle physics content, perhaps due to a phase transition or the freeze-out of a species, will leave a characteristic imprint on an otherwise featureless spectrum of PGW. The existence of a stochastic PGW background at a detectable level would then probe new physics beyond the standard cosmological model, and this may be possible with the Laser Interferometer Space Antenna (LISA) [@LISA]. Recently, a class of early-Universe scenarios has been theoretically identified which produce a strongly amplified, blue-tilted spectrum of GW [@Caldwell]. Detection of GW over a broad range of frequencies can provide important information concerning the underlying source [@Caldwell], and may also be of relevance for the spectrum of GW emitted by other scaling sources.

In addition, a population of massive primordial black holes (PBHs) would be anticipated to generate a stochastic background of GW [@Carr80], regardless of whether they form binaries or not. The focus is usually on the GW generated by either stellar black holes (observable by LIGO) or supermassive black holes (observable by LISA). However, with an extended PBH mass function, the GW background ought to encompass both of these limits and also every intermediate frequency. Many supermassive black holes are in binary pairs that orbit together and eventually merge, emitting GW in the process. The LISA detection window includes mergers of black holes in the mass range of $10^4$–$10^7$ solar masses [@Amaro]. Due to the possibility that the coalescing black holes observed by LIGO [@LIGO2] could be of primordial origin, black holes in the intermediate mass range of $10$–$10^3$ solar masses are of particular interest, since such PBHs might contribute to the dark matter (see below).

A primary goal of CMB observations is to detect the polarization signal induced by GW generated during inflation. There is a considerable effort underway to obtain stricter limits on the tensor-to-scalar ratio, $r$, the quantitative measure of the ratio of the primordial amplitude of the B-mode (or shearing) polarization component due to GW to the scalar (or compressive) mode of CMB temperature fluctuations associated with the density fluctuations that seeded structure formation.
While PGW have not yet been detected, the upper limit on $r$ from the BICEP2/Keck CMB polarization experiments [@planck2018] (in conjunction with Planck temperature measurements and other data) is approximately $0.07$ at the 95% confidence level. However, the predicted tensor amplitude depends on the (fourth power of the) energy scale of inflation, and so the primordial polarization signal could, in principle, be unmeasurably small [@SilkLimits].

### Primordial black holes

The possibility of $10$–$10^3$ solar mass objects is of particular interest in view of the recent detection of black-hole mergers by LIGO [@LIGO2], which has revitalized the interest in stellar mass black holes of around thirty solar masses (larger than initially expected), and especially in non-evaporating primordial black holes (PBHs). In particular, it has been suggested that massive PBHs could provide the dark matter [@Carr16] or the supermassive black holes which reside in galactic nuclei and power quasars [@CarrSilk]. The most natural mechanism for PBH formation involves the collapse of primordial inhomogeneities, such as might arise from inflation (or spontaneously at some kind of phase transition). Interest in PBHs increased due to the discovery that black holes radiate [@HawkingNature], since only PBHs could be small enough for this to be relevant cosmologically. Indeed, evaporating PBHs have been invoked to explain several cosmological features [@BernardCarr].

Since it was initially believed that PBHs would grow as fast as the Universe during the radiation-dominated era, and consequently attain a huge mass by the present time, it was thought that PBHs never formed and could thus be excluded. However, such an argument is essentially Newtonian and neglects the cosmological expansion, and in [@CarrHawking] it was shown that there is no self-similar solution in which a black hole can grow as fast as the Universe. Therefore, the mass of a non-evaporating PBH is essentially unchanged after formation (it can only grow if the PBH accretes matter), and the PBH contribution to the total energy density grows with time during the radiation era [@DolgovSilk]. PBHs would have roughly the particle horizon mass at formation, and could form as early as the Planck epoch, when QG effects are comparable to gravitational ones (at later epochs gravity is far too weak on particle scales). However, as the Universe expands and cools, tiny black holes of around the Planck mass all quickly disappear. More massive black holes live longer, and should survive until today as early Universe relics [@CarrHawking].

Attention has consequently shifted to larger PBHs, which are unaffected by Hawking radiation. Such PBHs might have important cosmological consequences. Perhaps the most exciting possibility is that PBHs larger than $10^{15}$ g (i.e., massive enough to have survived evaporation until the present) could provide the dark matter which comprises 25% of the critical density [@BernardCarr].[^7] Since PBHs formed in the radiation-dominated era, they are not subject to the well-known cosmological nucleosynthesis constraint that baryons can contribute at most 5% to the critical density. PBHs should thus be classified as non-baryonic, and behave like any other form of cold dark matter (CDM). The subject has consequently become very popular, and non-evaporating PBHs may turn out to play a more important cosmological role than evaporating ones.
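To indicate the relevant mass scales (standard order-of-magnitude estimates of the kind reviewed in [@BernardCarr]): a PBH forming at cosmic time $t$ in the radiation era has roughly the horizon mass,

$$M(t) \approx \frac{c^3 t}{G} \approx 10^{15}\left(\frac{t}{10^{-23}\,\mathrm{s}}\right)\,\mathrm{g},$$

so PBHs formed at the Planck time ($\sim 10^{-43}\,$s) would have the Planck mass ($\sim 10^{-5}\,$g), those formed at $\sim 10^{-23}\,$s would have the mass $\sim 10^{15}\,$g for which the Hawking evaporation time is comparable to the present age of the Universe, and those formed at $\sim 1\,$s would already have masses of order $10^5$ solar masses.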
A number of constraints restrict the possible PBH mass ranges [@Carr16], including those arising from gravitational microlensing, but PBHs providing up to $10\%$ of the dark matter are still possible over a wide range of masses. The PBH density might thus be much less than the dark matter density; PBHs are not necessarily required to provide all of the dark matter [@BernardCarr]. For intermediate mass black holes of $10^3$ solar masses, a dark matter mass fraction of only $0.1\%$ still allows for important consequences for structure formation. Cosmological structures could be generated either individually through a ‘seed’ effect or collectively through the ‘Poisson’ effect (fluctuations in the black hole number density generate an initial density perturbation for PBH dark matter), consequently alleviating some of the possible problems associated with the standard CDM scenario (even when PBHs contribute only a small portion of the dark matter density). The fluctuations generated by both mechanisms are then amplified through gravitational instability to bind massive regions [@CarrSilk], and have been considered either as alternatives to, or in conjunction with, other CDM scenarios.

Effects of structure on observations: Gravitational lensing
------------------------------------------------------------

A particularly important cosmological question is whether gravitational lensing significantly alters the distance-redshift relation $D(z)$ to the CMB last scattering surface or the mean flux density of sources. Any such $D(z)$ bias could change CMB cosmology, and the corresponding bias in the mean flux density could alter supernova cosmology. In spatially homogeneous and isotropic cosmologies the ratio between the proper size of a source and the angular diameter distance is a function of redshift only. In an inhomogeneous Universe, lensing by intervening metric fluctuations can cause magnification of the angular size, with a corresponding change of flux density, since surface brightness is not affected by gravitational lensing. Therefore, the apparent distance to objects at a given redshift can effectively become a randomly fluctuating quantity.

Using conservation of photons (i.e., flux conservation), Weinberg [@Wei76] argued that in the case of transparent lenses there is no mean flux density amplification, so that the uniform universe formula for $D(z)$ remains unchanged (where the averaging is over sources, and the result relies on the implicit assumption that the area of a constant-$z$ surface is unaffected by gravitational lensing). This issue has recently been revisited [@KP], and it was argued that in an ensemble averaged (and more cosmologically appropriate) sense, the perturbation to the area of a surface of constant redshift is in reality a very small (approximately one part in one million) effect, supporting Weinberg’s argument and validating the usual treatment of gravitational lensing in the analysis of CMB anisotropies. However, Weinberg’s argument regarding the mean flux density appears to contradict well-known theorems of gravitational lensing, such as the *focusing theorem*. Non-linear relativistic perturbation theory to second order indicates that there is a bias in the area of a surface of constant redshift and in the mean distance to the CMB last scattering surface. Indeed, many investigations of gravitational lensing continue to advocate significant effects in the mean.
Bolejko [@Bolejko2011a] (see also references in [@KP]) has provided a comprehensive review of such studies, some of which claim large effects, some of which obtain effects at the level of a few percent (which would still be important), while others argue that the effects are exceedingly small. A non-vanishing perturbation to the mean flux densities of distant sources caused by intervening structures, at least for sources viewed along lines of sight that avoid mass concentrations, would effectively contradict Weinberg’s result. Recent non-linear analyses do suggest that non-linear effects have not been proven to be negligible [@Adamek18; @Fleury17; @Durrer].

Backreaction and averaging
--------------------------

Averaging in GR is a fundamental problem within mathematical cosmology [@fit]. The cosmological FE on the largest scales are derived by averaging or coarse graining the EFE of GR. A solution of this problem is critical for the correct interpretation of cosmological data [@BC] (on the largest scales the dynamical behavior can be significantly different from the dynamics in the standard cosmology; e.g., the expansion rate can be greatly affected [@bu00]). First, it is of great importance to provide a rigorous mathematical definition for averaging (tensors on a differentiable manifold) in GR. A spacetime or space volume averaging approach must be well defined and generally covariant [@Av; @Averaging], and produce the structure equations for the averaged macroscopic geometry (and give a prescription for the correlation functions in the macroscopic FE which emerge in the averaging of the non-linear FE), which do not necessarily take on precisely the same mathematical form as the original FE [@Averaging]. It is straightforward to average scalar quantities and, since a spacetime is in general determined entirely by its scalar curvature invariants, a specific spacetime averaging scheme based on scalar invariants only has been proposed [@Coley10]. In addition, only scalar quantities are (space volume) averaged within the $(1+3)$ cosmological spacetime splitting approach of Buchert [@bu00].

Although the standard FLRW $\Lambda$CDM cosmology has, to date, been very successful in explaining all of the observational data (up to a number of potential anomalies and tensions [@tension]), it does require as-yet-undetected sources of dark energy density that currently dominate the dynamics of the Universe. More importantly, the actual Universe is neither isotropic nor spatially homogeneous on small scales. Indeed, observations of the current late epoch Universe uncover a very complicated picture in which the largest gravitationally bound structures, consisting of clusters of galaxies of different sizes, form, in turn, “knots, filaments and sheets that thread and surround very underdense voids” [@web]. An enormous fraction of the volume of the current Universe is, in fact, contained within voids of a single characteristic size of about $30$ megaparsecs [@HV1] with an almost “empty” density contrast [@Pan11]. In principle, a number of coarse grainings over different scales are required to reasonably model the observed complicated gravitationally bound large scale structures [@dust]. In standard cosmology it is implicitly assumed that the matter distribution on the largest scale can be modeled by an “effective averaged out” stress-energy tensor, regardless of the physical details of the actual coarse graining at each scale.
However, based on the two-point galaxy correlation function, the very smallest scale on which there can be a reasonable definition of statistical homogeneity is $70$–$120$ megaparsecs [@sdb12], and even then variations in the number density of galaxies of the order of several percent still arise in the largest possible survey volumes [@h05; @sl09]. It is fair to say that it is not at all clear what is the largest scale on which matter and geometry can be coarse-grained such that the average evolution is still an exact solution of the EFE.

A smooth macroscopic geometry (with macroscopic matter fields), applicable on cosmological scales, is obtained after an appropriate averaging. The coarse graining of the EFE for local inhomogeneities on small scales can generally lead to important *backreaction* effects (consisting of more than just the mean cosmic variance) [@NotGW] on the average dynamics of the Universe [@bu00]. In addition, all cosmological observations are deduced from null geodesics (the paths of photons) which travel enormous distances, preferentially traversing the underdense voids of the actual Universe. But inhomogeneities perturb curved null geodesics, so that observed luminosity distances can be significantly affected. A consistent approach to cosmology is consequently to treat GR as a *mesoscopic* theory, applicable only on the mesoscopic scales for which it has actually been verified, containing a mesoscopic metric field and a mesoscopic geometry. The effective macroscopic dynamical equations on cosmological scales are then obtained by averaging. It had originally been hoped that such a backreaction approach might help resolve the dark energy and dark matter problems. However, it now seems unlikely that backreaction can replace dark energy (although large effects are theoretically possible from inhomogeneities and averaging [@BC]). But it can certainly affect precision cosmology at the level of $1\%$ [@Macpherson], and may offer a better understanding of some issues in cosmology (such as the emergence of a homogeneity scale and non-zero spatial curvature due to the non-linear evolution of cosmic structure).

### Backreaction magnitude

This last point is very important and, since it has been a source of some controversy, let us summarize briefly here. The Universe is very inhomogeneous on small scales at the present time but smooth on large scales. It must be remembered that density perturbations $(\delta \rho/\rho) \simeq 10^{28}$ on Earth, but the metric is very close to Minkowski. To establish the backreaction effects we need approximation methods to deal with metric perturbations $(\delta h/h) \simeq 10^{-5}$ but second derivatives $\simeq 10^{28}$. Various approaches have been tried:

- Zalaletdinov [@Averaging] developed a very complex bimetric averaging formalism that can, in principle, be applied in general; the effect of such averaging on cosmological observations was estimated to be of the order of $1\%$ [@CPZ]. A global Ricci deformation flow for the metric, which is generically applicable in cosmology, was introduced by Carfora.

- Buchert [@bu00] developed an explicit ($1+3$) spatial averaging scheme, although the scheme is not fully deterministic and depends on some *ad hoc* phenomenological assumptions.
Models based on this scheme, and particularly the “timescape cosmology” of Wiltshire, which utilizes time-dilation effects in voids, can predict very large effects, and there have been claims that the results are sufficient to explain dark energy [@BC].

- There have been various approximation schemes that have claimed that the backreaction effects are negligible ($\simeq 10^{-5}$), including a scheme by Green and Wald [@GreenWald; @NotGW] which uses distributional methods and does not involve explicit averaging.

- Durrer and collaborators have developed detailed second-order calculations that predict percent-level changes (i.e., changes sufficient to be of significance in GR precision cosmology studies), and they (using the *gevolution* numerical code) and others have subsequently confirmed this with N-body simulations.

The most reasonable outcome of this debate, at least in our view and particularly in light of the latter results, is that observable differences caused by backreaction effects will be of the order of $1\%$.

Spatial curvature
-----------------

Current constraints on the background spatial curvature, characterized by $\Omega_{k} \equiv -k/(a_0 H_0)^2$, within the standard cosmology are often used to “demonstrate” that it is dynamically negligible: $\Omega_k \sim 5\times 10^{-3}$ (95% confidence level) [@planck2015]. However, in standard cosmology the spatial curvature is assumed to be zero (or at least very small and below the order of other approximations) for the analysis to be valid. Therefore, strictly speaking, the standard model cannot be used to *predict* a small spatial curvature. In general, $\Omega_{k}$ is *assumed* to be constrained to be very small, primarily based on CMB data. However, the recently measured temperature and polarization power spectra of the CMB provide a 99% confidence level detection of a negative $\Omega_{k} = -0.044^{+0.018}_{-0.015}$, which corresponds to a positive spatial curvature [@planck2018].

Direct measurements of the spatial curvature $\Omega_{k}$ using low-redshift data such as supernovae, baryon acoustic oscillations (BAO) and Hubble constant observations (as opposed to fitting the FLRW model to the data) do not place tight constraints on the spatial curvature and allow for a large range of possible values (which do include spatial flatness). Low-redshift observations often rely on some CMB priors [@Ratra2017] and, in addition, are sensitive to the assumptions about the nature of dark energy.[^8] Attempts at a consistent analysis of CMB anisotropy data in the non-flat case suggest a closed model with $|\Omega_{k}| \sim 1\%$ [@PR2018; @Ryan2019]. Including low redshift data, $\Omega_k = -0.086 \pm 0.078$ was obtained [@PR2018], which provides weak evidence in favor of a closed spatial geometry (at the level of $1.1\sigma$), with stronger evidence for closed spatial hypersurfaces (at a significantly higher $\sigma$ level) coming from dynamical dark energy models [@Ryan2019] (see also [@YuRyan]). The inclusion of CMB lensing reconstruction and low redshift observations, and especially BAO data, gives a model-dependent constraint of $\Omega_{k} = -0.0007 \pm 0.0019$ [@planck2018].
As an illustration, constraints on the phenomenological *two-curvature model* (which has a simple parametrized backreaction contribution [@CPZ] leading to decoupled spatial curvature parameters $\Omega_{k_g}, \Omega_{k_d}$ in the metric and the Friedmann equation, respectively, and which reduces to the standard cosmology when $\Omega_{k_g} = \Omega_{k_d}$) were investigated in [@CCCS]. It was found that the constraints on the two spatial curvature parameters are significantly weaker than in the standard model, with constraints on $\Omega_{k_g}$ an order of magnitude tighter than those on $\Omega_{k_d}$, and there are tantalizing hints from Bayesian model selection statistics that the data favor $\Omega_{k_d} \neq \Omega_{k_g}$ at a high level of confidence.

Observational constraints on a recently emerged, *present-day* (large-scale mean) average negative curvature are weak, and such curvature is not easy to measure [@Larena09template]. Local inhomogeneities and perturbations to the distance-redshift relation at second order contribute a monopole at the sub-percent level, leading to a shift in the apparent value of the spatial curvature (as do other GR curvature effects in inhomogeneous spacetimes). Indeed, in an investigation of how future measurements of $\Omega_k$ are affected by GR effects, it was shown that constraints on the curvature parameter may be strongly biased if cosmic magnification is not included in the analysis [@DiDio]. Given that current curvature upper limits are at least one order of magnitude away from the level required to probe most of these effects, there is an imperative to continue pushing the constraints on the curvature parameter, $\Omega_k$, to greater precision (i.e., to about the 0.01% level). Such precision will become increasingly attainable in future surveys such as the Euclid satellite mission. In addition, the current curvature parameter estimations are not yet at the cosmic variance limit (beyond which constraints cannot be meaningfully improved due to the cosmic variance of horizon scale perturbations); indeed, the current measurements are more than one order of magnitude away from the limiting threshold [@DiDio]. The prospects for further improving measurements of spatial curvature are discussed in [@Jimenez].

Most importantly, we are interested in model-independent [@Clarksonprl] and explicitly CMB-independent [@XuHuang] checks of cosmic flatness. However, there is currently no fully independent constraint on cosmic flatness from cosmological probes with an accuracy appropriate for values of $\Omega_{k}$ below approximately $0.01$. In principle, a measurement of a small non-zero $\Omega_{k}$ would perhaps indicate that the assumptions in the standard model are not met, thereby motivating models with curvature at the level of a few percent. Such models are certainly not consistent with simple inflationary models, in which $\Omega_{k}$ is expected to be negligible [@Martin19]. We remark that an observation of non-zero spatial curvature, even at the level of a percent or so, could be the result of backreaction and be a signal of non-trivial averaging effects [@Averaging]. Note that calculations imply a small positive spatial curvature [@CPZ] (although backreaction estimates have tended to give a negative mean curvature [@Larena09template]).
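To make the coupling between averaged spatial curvature and backreaction explicit, consider the irrotational dust case of Buchert's scheme [@bu00] (stated here for orientation, under its assumptions). Averaging the scalar parts of the EFE over a comoving spatial domain $\mathcal{D}$ with volume scale factor $a_{\mathcal{D}} \propto V_{\mathcal{D}}^{1/3}$ gives

$$3\frac{\ddot{a}_{\mathcal{D}}}{a_{\mathcal{D}}} = -4\pi G\,\langle\rho\rangle_{\mathcal{D}} + \mathcal{Q}_{\mathcal{D}} + \Lambda, \qquad \mathcal{Q}_{\mathcal{D}} \equiv \frac{2}{3}\left(\langle\theta^2\rangle_{\mathcal{D}} - \langle\theta\rangle_{\mathcal{D}}^2\right) - 2\langle\sigma^2\rangle_{\mathcal{D}},$$

where $\theta$ is the expansion scalar and $\sigma^2$ the shear scalar, together with the integrability condition

$$\partial_t\!\left(a_{\mathcal{D}}^{6}\,\mathcal{Q}_{\mathcal{D}}\right) + a_{\mathcal{D}}^{4}\,\partial_t\!\left(a_{\mathcal{D}}^{2}\,\langle\mathcal{R}\rangle_{\mathcal{D}}\right) = 0,$$

which shows that whenever the kinematical backreaction $\mathcal{Q}_{\mathcal{D}}$ is non-zero, the averaged spatial curvature $\langle\mathcal{R}\rangle_{\mathcal{D}}$ need not scale as the $a_{\mathcal{D}}^{-2}$ of an FLRW model; that is, average curvature can emerge through structure formation.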
If the geometry of the universe does indeed deviate slightly from the standard FLRW geometry (for example, due to the evolution of cosmic structures), then the spatial curvature will no longer necessarily be constrained to be constant, and any effective spatial flatness may not be preserved. An investigation of a *small* emerging spatial curvature can be undertaken by relativistic cosmological simulations [@Computational; @Adamek]. However, such simulations need to include all relativistic corrections and can suffer from gauge issues [@TB; @Adamek]. In particular, using a fully inhomogeneous, anisotropic cosmological numerical simulation, it was shown that [@Macpherson]: (i) on small scales, below the measured homogeneity scale of the standard cosmology, deviations in cosmological parameters of 6–31% were found (in general agreement with LPT, and with deviations depending on an observer’s physical location); (ii) on the approximate homogeneity scale of the Universe, mean cosmological parameters consistent to about 1% with the corresponding standard cosmology were found (although the parameters can deviate from these mean values by 4–9%, again depending on the physical location in the simulation domain); and (iii) above the homogeneity scale of the Universe, 2–3% variations in mean spatial curvature and backreaction were found.

As noted above, attempts to study relativistic models of inhomogeneities rely upon metric forms that are designed to be “close to” the spatially homogeneous and isotropic metric form. However, these *cannot* also be used to address the cosmological backreaction problem; backreaction can only be present if the structure-emerging average spatial curvature, and hence the large-scale average of cosmological variables, are allowed to evolve [@Bolejko2017]. A dynamical coupling of matter and geometry on small scales which allows spatial curvature to vary is a natural feature of GR. Indeed, the requirement that spatial curvature remains constant, as in an FLRW model, on arbitrarily large scales of cosmological averaging is not a natural consequence of any principles of GR. Schemes that suppress average curvature evolution (e.g., by employing periodic boundary conditions as in Newtonian models and neglecting global curvature evolution) cannot describe global backreaction but only ‘cosmic variance’ [@BC]. Moreover, within standard cosmology, spatial fluctuations are conceived to evolve on an assumed background FLRW geometry, but this description only makes sense with respect to their spatial average distribution and its evolution. We note that even small fluctuations within averaging schemes are also subject to gauge issues [@VPG]. In principle, large effects are possible from inhomogeneities and averaging [@dust; @BC].

Recently, a relativistic (Simsilun) simulation based on the approximation of a ‘silent universe’ was presented [@Bolejko18]. The simulation begins with perturbations around a (flat) standard model (with initial conditions set up using the Planck data). The perturbations are allowed to have non-zero spatial curvature. Initially, the negative curvature of underdense regions is compensated by the positive curvature of overdense regions [@Roukema19; @CPZ]. But once the evolution enters the non-linear regime, this symmetry is broken and the mean spatial curvature of the universe slowly drifts from zero towards the negative curvature induced by cosmic voids (which occupy more volume than other regions).
The results of the Simsilun simulation indicate that the present-day curvature of our Universe is $\Omega_k \sim 0.1$, as compared to the spatial flatness of the early universe. It should be emphasised that the result that structure formation implies a present-day Universe that is volume-dominated by voids, and hence characterized by on-average negative curvature, is a subtle one; it follows from the fact that intrinsic curvature does not obey a conservation law [@BC; @DHB]. Indeed, it dispels the naive expectation that on large scales the distribution of positive spatial curvature in high-density regions and negative spatial curvature in the voids averages out to the almost or exactly zero spatial curvature usually assumed.

Problems from the quantum realm
===============================

There are a number of very fundamental problems in the quantum regime, culminating in the question of whether there is a single unified theory of quantum gravity (QG). And, in particular, is this “theory of everything” string theory? Some problems in the quantum realm are relevant for cosmology. For example, do there exist any fundamental particles that are predicted by QG that have not yet been observed and, if so, what are their properties and are they of importance in cosmology? In particular, the detection of the Higgs boson seems to complete the standard model, but additional new physics is needed to protect the particle's mass from quantum corrections (which could otherwise increase it by 14 orders of magnitude). It is believed that supersymmetry is the most reasonable solution to this naturalness problem, but the simplest supersymmetric models have not proved successful and, to date, there is no convincing mechanism to break supersymmetry or to determine the multiple parameters of the supersymmetric theory. In addition, does a theory of QG lead to a multiverse in cosmology? And, perhaps most importantly, do theories of QG naturally lead to inflation?

The problem of quantum gravity
------------------------------

The standard model of particle physics concerns only three forces: namely, electromagnetism and the strong and weak nuclear forces. A primary goal of theoretical physics is to derive a theory of QG in which all four forces, including that of gravitation, are unified within a single field theory. Up to now, no attempt at such a unification has been successful. In particular, it is of interest to know whether QG can be formulated for cosmology, and whether any extension of quantum mechanics is required for QG and especially for quantum cosmology. Quantum cosmology gives rise to a number of questions concerning a possible theory for the initial cosmological state [@Hartle], which include: what laws or principles might characterize the initial conditions of the Universe, and what are the subsequent predictions of those initial conditions for the Universe on macro-, meso- and microscopic scales? Let us first briefly discuss two cosmological problems that originate in QG, have a very distinct mathematical formulation, and are of particular interest here.

### Higher dimensions

Ordinary spacetime is 4D, but additional dimensions are possible in, for example, string theory [@string]. At the classical level, gravity has a much richer structure in higher dimensions than in 4D. In particular, there has been a lot of work done on the uniqueness and stability of black holes in arbitrary dimensions [@EmparanReall].
For example, closed trapped surfaces and singularity theorems in higher dimensions have been discussed [@Galloway], and the positive mass theorem has been proven in arbitrary dimension [@SchoenYau2017]. However, the problem of stability in higher dimensions is much more difficult. Indeed, there is evidence from numerical simulations to indicate that there are higher dimensional black holes that are not stable [@EmparanReall]. In addition, the question of cosmic censorship in higher dimensions is extremely difficult, and is perhaps not even well posed. In fact, there is numerical evidence that suggests that cosmic censorship does not hold [@LehnerPretorius2010] and that black holes are not necessarily stable to gravitational perturbations in higher dimensions [@GregoryLaflamme]. Indeed, black holes become highly deformed at very large angular momenta and resemble black branes, and in spacetime dimensions greater than six they exhibit an “ultraspinning instability” [@EmparanMyers].

Higher dimensional spacetime manifolds are also considered in a number of cosmological scenarios. For example, in the cosmological context all known mathematical results can be investigated in models with a non-zero cosmological constant. In addition, theoretical results, such as the dynamical stability of higher dimensional cosmological models, are of interest. In particular, spatially homogeneous cosmologies in higher dimensions, and especially extensions of the BKL analysis, have been investigated [@Henneaux].

### AdS/CFT correspondence

Anti-de Sitter (AdS) spacetimes are of interest in QG theories formulated in terms of string theory due to the AdS/CFT (or Maldacena gauge/gravity duality) correspondence, in which string theory on an asymptotically AdS spacetime is conjectured to be equivalent to a conformally invariant quantum field theory (CFT) on its boundary [@mald; @Klebanov]. This holographic paradigm leads to a number of cosmological questions. In particular, the AdS/CFT conjecture strongly motivates the dynamical investigation of asymptotically AdS spacetimes. But, of course, such a spacetime is clearly not our Universe. In addition, it has recently been conjectured that AdS spacetimes are unstable to arbitrarily small perturbations [@PiotrBizon]. The global non-linear stability of AdS has been investigated in spherically symmetric massless scalar field models within GR [@Bizon]. Numerical evidence seems to indicate that AdS spacetimes are non-linearly unstable to a “weakly turbulent mechanism” in which an arbitrarily small black hole is formed whose mass is determined by the initial energy. Such a non-linear instability appears to occur for various typical perturbations. However, there are also many perturbations that do not lead to an instability, which consequently implies the existence of “islands of stability” [@DiasSantos; @Martinon]. It is of great interest to study the non-linear stability of AdS with no assumptions on symmetry; however, such a study is currently intractable both analytically and numerically. But the general gravitational case is clearly richer than the case of spherical symmetry analysed to date [@DiasSantos]. Therefore, it is of great significance to determine whether the conjectured non-linear instability of AdS spacetime in more general models behaves in a similar or a different way to that in spherically symmetric scalar field collapse [@PiotrBizon].
Singularity resolution
----------------------

### Singularity resolution and a quantum singularity theorem

The existence of singularities indicates a failure of GR when the classical spacetime curvature is sufficiently large, which is exactly when QG effects are anticipated to be important. Therefore, the problem of whether, and when, QG can extend solutions of classical GR beyond the singularities is crucial [@DeWitt]. It is, of course, pertinent to determine whether all singularities can be removed in QG. However, it is certainly not true that all singularities can be resolved within string theory; for example, it is known that the string in an exact plane wave background does not propagate through the curvature singularity in a well-behaved manner [@Horowitz]. Gauge/gravity duality, which can be regarded as providing an indirect formulation of string theory [@maldecena], has been utilized to study singularities in the quantum realm and to investigate cosmic censorship with asymptotically AdS initial data. The existence of a quantum version of cosmic censorship was suggested from holographic QG [@EngelhardtHorowitz], and it has been deduced that a large class of bounces through cosmological singularities are forbidden. Consequently, although some singularities can indeed be resolved, a novel quantum singularity theorem remains possible. It is therefore important to determine whether a quantum mechanical generalization of any of the singularity theorems exists, which would imply that singularities are inevitable even in quantum settings. In particular, it has been shown that a fine-grained generalized second law of horizon thermodynamics can be used to prove the inevitability of singularities [@Wall2013], thereby extending the classical singularity theorem of Penrose [@Penrose1979] to the semi-classical regime. It is plausible that this result, which was constructed in the context of semiclassical gravity, will still hold in a complete theory of QG [@Wall2013]. Therefore, not all singularities can be resolved within QG.

### Cosmological singularity resolution

Cosmological singularity resolution can be investigated within loop quantum gravity (LQG) and string theory. (Black hole singularity resolution was reviewed in [@OpenProb].) LQG is a non-perturbative canonical quantization of gravity. It has been suggested that singularities may be generically resolved within LQG as a result of QG effects [@Singh14]. In particular, the classical big bang singularity is replaced by a symmetric quantum big bounce when the energy density is of the order of the Planck density, which occurs without any violation of the energy conditions or fine tuning. The resulting quantum big bounce connects the currently expanding universe to a pre-bounce contracting classical universe. The application of LQG in the context of cosmology is referred to as loop quantum cosmology (LQC). In LQC the infinite number of degrees of freedom reduces to a finite number due to spatial homogeneity. A variety of spatially homogeneous cosmologies have been investigated [@AshtekarandSingh]. In particular, solutions to the effective equations for the general class of Bianchi type IX cosmological spacetimes have been investigated computationally within LQC, wherein the big bang singularity was shown to be resolved [@Ewing2010]. The reduction of symmetries within LQC involves a very considerable simplification, and consequently crucial aspects of the dynamics may be neglected.
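To make the symmetry-reduced dynamics concrete, recall that in effective LQC for a flat FLRW universe the Friedmann equation acquires a quadratic correction, $H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)$, with critical density $\rho_c \approx 0.41\,\rho_{\rm Pl}$, so that $H$ vanishes (the bounce) precisely when $\rho = \rho_c$ [@AshtekarandSingh]. The following minimal numerical sketch integrates these effective equations through the bounce for a single scalar field; it works in Planck units, and the quadratic potential, the field mass and the bounce value of the field are illustrative assumptions rather than values taken from the cited literature.

```python
# Minimal sketch: effective LQC dynamics for a scalar field in flat FLRW.
# Planck units (G = c = hbar = 1); potential and initial data are
# illustrative assumptions, not values taken from the cited references.
import numpy as np
from scipy.integrate import solve_ivp

RHO_C = 0.41          # critical density, ~0.41 in Planck units
M = 1.2e-6            # inflaton mass for V = m^2 phi^2 / 2 (illustrative)

def V(phi):
    return 0.5 * M**2 * phi**2

def Vp(phi):
    return M**2 * phi

def rhs(t, y):
    """y = (phi, dphi/dt, H): the Klein-Gordon equation plus the effective
    Raychaudhuri equation dH/dt = -4 pi (dphi/dt)^2 (1 - 2 rho / rho_c),
    which follows from differentiating the modified Friedmann equation."""
    phi, p, H = y
    rho = 0.5 * p**2 + V(phi)
    return [p,
            -3.0 * H * p - Vp(phi),
            -4.0 * np.pi * p**2 * (1.0 - 2.0 * rho / RHO_C)]

# Start at the bounce: rho = rho_c exactly, H = 0, kinetic-dominated.
phi_b = 1.0                              # illustrative bounce value of phi
p_b = np.sqrt(2.0 * (RHO_C - V(phi_b)))  # fixes rho(t=0) = rho_c
sol = solve_ivp(rhs, (0.0, 5.0e6), [phi_b, p_b, 0.0],
                rtol=1e-10, atol=1e-12)

phi, p, H = sol.y
rho = 0.5 * p**2 + V(phi)
# Consistency check: the modified Friedmann constraint should be preserved.
err = np.max(np.abs(H**2 - (8.0 * np.pi / 3.0) * rho * (1.0 - rho / RHO_C)))
print(f"max violation of the modified Friedmann constraint: {err:.2e}")
print(f"H peaks at {H.max():.3e} (superinflation) before slow roll begins")
```

At the bounce itself $\dot H = +4\pi\dot\phi^2 > 0$, so the model briefly superinflates before settling towards slow roll; the printed constraint check verifies that the integration preserves the modified Friedmann equation.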
However, partly due to evidence supporting the BKL conjecture, it is believed that singularity resolution in spatially homogeneous cosmologies does capture important features of singularity resolution in more general spatially inhomogeneous cosmological models [@AshtekarandSingh; @Brizuela]. There are ongoing attempts to include spatial inhomogeneities in the analysis [@Tarrio]. Various singularities have been investigated within standard LQC. It has been conjectured that all curvature singularities which result in geodesic incompleteness are so-called strong singularities (such as the big bang in GR). In recent years a number of other types of cosmological singularities have been obtained, which include the big rip and the big freeze, and sudden and generalized sudden singularities. Of these, the big rip and big freeze are strong singularities within GR, whereas sudden and generalized sudden singularities are weak singularities. Using a phenomenological matter model in GR, it has been established that strong singularities are, in general, resolved in LQC, whereas quantum geometry does not usually affect weak singularities [@SainiSingh]. A comprehensive investigation of the resolution of a variety of singularities within modified LQC models, in which the bounce can be asymmetric and the bounce density can be affected, was performed using an effective spacetime description and compared with the analysis in standard LQC [@SainiSingh].

Quantum gravity and inflation
-----------------------------

Although some of the alternatives to inflation alluded to earlier are suggested by ideas motivated by QG, it is also of interest to determine whether inflation occurs naturally within QG. For example, it appears to be difficult to obtain inflation within string theory [@deS]. In particular, the so-called swampland criteria constrain inflationary models, and there are no-go theorems for the existence of de Sitter vacua in critical string theory. The fact that exact de Sitter solutions with a positive cosmological constant cannot describe the late-time behaviour of the Universe [@deS] is often interpreted as “bad news” for string theory. The observations of Planck 2018 (of the almost scale-invariant and Gaussian primordial curvature perturbations) [@planck2018] are compatible with the predictions of simple single scalar field inflation models with a canonical kinetic term and an appropriately flat self-interaction potential minimally coupled to gravity. However, despite the success of the single-field slow-roll inflation model, it is not straightforward to embed such a model within a fundamental theory [@baumann]. Nevertheless, the so-called $\alpha$-attractor models and, in particular, the KKLMMT model [@KKLMMT], have been actively studied. The most attractive theoretical properties of these models are their conformal symmetry and their successful embedding into supergravity via hyperbolic geometry. The KKLMMT model is often acknowledged as the first to discuss the origin of D-brane inflation within string theory [@KKLMMT], and provides the motivation for more general string-inspired cosmological models. These models predict values for the spectral index and the tensor-to-scalar ratio which match observational data well. Thus, phenomenological D-brane inflation has attained renewed importance, independent of its string theory origin, since Planck 2018 [@planck2018].
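For concreteness, the simplest T-model $\alpha$-attractor (quoted here as a standard illustration of this class, not as the KKLMMT construction itself) has potential

$$V(\phi) = V_0 \tanh^{2}\!\left(\frac{\phi}{\sqrt{6\alpha}\, M_{\rm Pl}}\right),$$

and, after $N$ e-folds of inflation, the universal large-$N$ predictions of the $\alpha$-attractor class are

$$n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12\alpha}{N^{2}},$$

so that for $N = 55$ and $\alpha \lesssim \mathcal{O}(1)$ one finds $n_s \approx 0.96$ and $r \lesssim 4\times 10^{-3}$, comfortably consistent with the Planck 2018 constraints [@planck2018].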
Indeed, it has been shown [@Kall] that further phenomenological models of D-brane inflation can be derived within the string theory approach (see also [@baumann]). Because scalar fields (such as, for example, moduli fields) occur ubiquitously in fundamental theories such as supergravity and string/M-theory, multi-field generalizations of the $\alpha$-attractor models have also been considered [@double]. A number of inflationary cosmologies have been suggested within the context of string/M-theory [@KKLMMT; @deS]. However, very few models exist that can be embedded within LQC [@Kiefer]. In particular, there are a number of approaches to QG which include bouncing regimes, and, once the initial singularity is resolved, it is of interest to determine whether slow-roll inflation is subsequently allowed (or is even natural). Inflation within the context of LQC, and how the bounce affects the evolution of the inflaton (as compared to the normal scenario with no bounce), was investigated in [@Louko]. The evolution of the inflaton from the initial bounce was studied analytically for a number of important potentials in the case that the inflaton is taken to be the same scalar field that gives rise to the LQC bounce. It was found [@Louko] that LQC, or any bouncing model in which the total energy density of the inflaton field is bounded at the transition, does provide a viable description of the pre-inflationary epoch and of the subsequent smooth evolution to the standard inflationary era. The results were particularly encouraging in that the theoretical bounds on the critical bounce value of the inflaton field, required for an appropriate slow-roll inflationary regime to follow, match (where appropriate) the known results from the numerical dynamics of the fully non-linear LQC.

### String inflation

Cosmological inflation and its realization within QG and, in particular, in string theory, was reviewed in [@baumann]. Examples of string inflation include brane and axion inflation. There are also string-inspired effective field theories. Since string theory is considerably more constrained, some effective field theories that are apparently consistent at low energies do not, in fact, admit ultraviolet QG completions, which leads to improved predictivity. However, it has been suggested that it may not be possible to formulate inflation naturally within string theory [@deS; @baumann]. One problem is that, in order to obtain a period of slow-roll inflation from simple scalar field potentials, field values in excess of the Planck mass are required. But for sufficiently large values of the fields, string effects on the shape of the potential must be included, and these tend to destroy its required flatness except perhaps in the case of special field symmetries [@beyond] (and even then, string theoretical arguments such as the so-called “Weak Gravity Conjecture” [@Arkani-Hamed] can invalidate the effective field theory analysis). In addition, generating effective theories from string theory can also lead to different ideas as to what a natural (or a minimal) inflationary model might be. Indeed, a comprehensive understanding of naturalness within string theory is elusive. However, a general feature of all stringy constructions is the existence of a number of light scalar fields, so while multiple ‘unnecessary’ fields might be considered non-minimal in many field theory models, they are ubiquitous within string theory.
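The field-range problem just described can be made quantitative (the following is a standard estimate, not specific to any particular string construction). In single-field slow roll, the inflaton excursion per e-fold is fixed by the tensor-to-scalar ratio $r$, which leads to the Lyth bound:

$$\frac{1}{M_{\rm Pl}}\frac{d\phi}{dN} = \sqrt{\frac{r}{8}} \quad \Longrightarrow \quad \frac{\Delta\phi}{M_{\rm Pl}} \gtrsim \mathcal{O}(1)\times\left(\frac{r}{0.01}\right)^{1/2},$$

so any observably large $r$ requires a super-Planckian field excursion, which is precisely the regime in which Planck-suppressed corrections to the potential become dangerous. The tension with the swampland criteria can be expressed in the same language: the de Sitter swampland conjecture requires $M_{\rm Pl}\,|V'|/V \geq c$ with $c \sim \mathcal{O}(1)$, whereas slow roll requires $\epsilon_V = \frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^2 \ll 1$; the conjecture forces $\epsilon_V \geq c^2/2$, so the two conditions cannot hold simultaneously for $c$ of order unity.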
Time-dependent solutions with string scale curvatures are crucial for any further comprehension, especially if we hope to progress beyond the paradigm of an effective theory for the massless modes. To date, it is fair to say that there has been no realization of inflation within superstring theory that has been widely accepted. Making predictions in string theory is rendered exceedingly difficult by the [*[landscape]{}*]{} problem: string theory has an enormous number of vacua. Although the dynamics within the landscape is not well understood, it appears that false vacuum eternal inflation is an unavoidable consequence. In addition, all 4D de Sitter vacua in supersymmetric string theories are metastable, since 10D supersymmetric Minkowski spacetime has zero energy, whereas de Sitter spacetime has positive vacuum energy. In particular, there are well-known no-go theorems for the existence of stable de Sitter vacua in critical string theory [@deS]. This is a real problem for inflation should string theory be the final theory of QG. The so-called string [*[swampland]{}*]{} criteria constrain inflationary models [@Denef]. In addition, the second of the swampland conjectures implies, as noted above, that exact de Sitter solutions with a positive cosmological constant cannot describe the fate of the Universe at late times within string theory [@deS]. Dynamical dark energy scalar field models must also satisfy particular criteria so as to avoid the swampland. The observational implications of such string-theory criteria for quintessence models, and the accompanying constraints on the dark energy, were studied in [@Heisenberg]. However, since string theory does not naturally lead to scalar fields with an appropriate energy scale to be a reasonable candidate for quintessence, novel physics from string theory must be introduced to explain dark energy. In some very special models it is possible to characterize the Planck-suppressed corrections to the string theory inflaton action, leading to the first indications of how inflation might arise within string theory [@baumann]. But many critical challenges still remain. Indeed, the ‘simple’ cosmological observations to date (of the almost scale-invariant and Gaussian primordial curvature perturbations measured by Planck) are often interpreted as an argument against complex models of inflation in string theory (however, see [@baumann]).

Concluding remarks
==================

We have reviewed recent developments and described a number of open questions in the field of theoretical cosmology. We described the concordance cosmological model and the standard paradigms of modern cosmology, and then discussed a number of fundamental issues and open theoretical questions, emphasizing the various assumptions made and identifying which results are independent of these assumptions. Indeed, standard cosmology contains a number of philosophical assumptions that are not always scientific, including the assumption of spatial homogeneity and isotropy at large scales outside our particle horizon. Perhaps a more tangible fundamental issue concerns the measure problem and the issue of initial conditions in inflation. Many of the fundamental problems arise due to the inhomogeneities in the Universe.
However, this is also one of the great strengths of present-day cosmology: our models predict what structure will occur, and the astounding development of observational projects determining the characteristics of such structure in great detail serves to place strong limits on cosmological parameters. Cosmology is not only a mathematical endeavour; it is a testable scientific theory due to its ability to produce observational predictions. In recent times there has been a plethora of such detailed tests, leading to the so-called era of precision cosmology. Perhaps fundamental questions are less relevant for current working cosmologists, who are more concerned with physical cosmology, data, and statistical analysis. But even as the modern emphasis shifts to more physical and observational issues, theoretical cosmology remains important and fundamental questions persist. In some sense, we hope to have recorded here the state of the art as it now exists.

A qualitative analysis of the properties of cosmological models and the problems of the stability of cosmological solutions and of singularities is important in mathematical cosmology. A number of open problems in theoretical cosmology involve the nature, origin and details of cosmic inflation, and its relation to fundamental physics. Perhaps the most urgent open problems of theoretical cosmology include the early- and late-time accelerated expansion of the universe and the role of the cosmological constant $\Lambda$. As we have emphasized, computational cosmology is becoming an increasingly important tool in the investigation of theoretical and physical cosmology. We then reviewed a number of open problems in physical cosmology, with particular focus on perturbation theory (and gauge issues) and the formation and distribution of large scale structure in the Universe at present times (and especially in the non-linear regime). Backreaction is still an important issue, although perhaps the more formal mathematical averaging problem is currently more relevant. Finally, gravitational wave astronomy will potentially play an increasingly important role within cosmology. Indeed, there is a robust prediction within inflation of a gravitational-wave-induced CMB polarization signal. We also discussed cosmological problems in quantum gravity, including the possible resolution of cosmological singularities and the crucial issue of the role of inflation within quantum gravity.

Finally, we have emphasized that, given the uniqueness of the Universe and the limitations on the domain we can explore by any conceivable observations, it is key to carry out all possible consistency tests of our models. The first and foremost is the age of the universe: is the Universe older than its stellar and galactic content? If not, cosmology is in deep trouble. Fortunately, this consistency test seems to be satisfied at present (thanks to the cosmological constant). Another consistency test is that all number counts must display a dipole aligned with the CMB dipole; this is presently being contested.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to acknowledge Bernard Carr for his helpful comments, and to thank Timothy Clifton and Julian Adamek for fruitful discussions. Financial support was provided by NSERC of Canada (AAC) and NRF of South Africa (GFRE).

G. F. R. Ellis, [*Relativistic Cosmology*]{} in [*General Relativity and Cosmology*]{}, Proc. Int.
School of Physics ‘Enrico Fermi’ (Varenna) Course XLVII pp104-179, ed. R. K. Sachs (Academic Press, 1971) H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers and E. Herlt, “Exact Solutions of Einstein’s Field Equations” (Cambridge University Press, Cambridge, second ed., 2003). G. F. R. Ellis, “Relativistic cosmology: Its nature, aims and problems”, in General Relativity and Gravitation, pp. 215–288 eds. B. Bertotti, F. de Felice and A. Pascolini (Reidel, Dordrecht, 1984); G.F.R. Ellis and W. Stoeger, Class. Quant. Grav. [**4**]{} 1697 (1987); S. Bildhauer and T. Futamase, Gen. Rel. Grav. [**[23]{}**]{} 1251 (1991); T. Futamase, Phys. Rev. D [**[53]{}**]{} 681 (1993); J. P. Boersma, Phys. Rev. D [**[57]{}**]{} 798 (1998). B. Bertotti, Proc. Roy. Soc. London A [**[294]{}**]{} 195 (1966). A. Coley, Phys. Scr. [**[92]{}**]{} 093003 (2017). D. Hilbert, Bull. Amer. Math. Soc. [**[8]{}**]{} 437 (1902) (see also, in the original German, Göttinger Nachrichten [**[1]{}**]{} 253 (1900) & Archiv. Math. Phys. [**[1]{}**]{} 44 & 213 (1901)). B. Simon, “Fifteen Problems in Mathematical Physics, Perspectives in Mathematics”, Anniversary of Oberwolfach at Birkhäuser Verlag, Basel (1984). A. Coley, “Mathematical General Relativity” \[arXiv:1807.08628\]. G. Dotti, Phys. Rev. Lett. [**[112]{}**]{} 191101 (2014) & Class. Quant. Grav. [**[33]{}**]{} 205005 (2016). G. F. R. Ellis, Studies in History and Philosophy of Modern Physics, [**[46]{}**]{} 5 (2014). J. Butterfield, Studies in History and Philosophy of Modern Physics, [**[46]{}**]{} 57 (2014). http://gonitsora.com/fivegreatunsolvedproblemsintheoreticalphysics. R. Penrose, “The Emperor’s New Mind: Concerning Computers, Minds, and The Laws of Physics” (Oxford University Press, 1989). R. Penrose, “Fashion, faith, and fantasy in the new physics of the universe” (Princeton University Press, 2016). R. Penrose, “Singularities and time asymmetry”, in “General Relativity: an Einstein Centenary Survey”, eds. S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979). A. Rendall, Living Rev. Rel. [**[5]{}**]{} 6 (2002) \[arxiv:gr-qc/0203012\]. L. Andersson, “The global existence problem in general relativity, The Einstein equations and the large scale behavior of gravitational fields”, pp. 71–120 (Birkhäuser, Basel, 2004) \[arxiv/gr-qc/9911032\]. Y. Choquet-Bruhat and R. Geroch, Comm. Math. Phys. [**[14]{}**]{} 329 (1969). M. Narita, Class. Quant. Grav. [**[19]{}**]{} 6279 (2002) \[arXiv:gr-qc/0210088\]. D. Christodoulou and S. Klainerman, Commun. Pure Appl. Math. [**[43]{}**]{} 137 (1990). L. Andersson, “Cosmological Models and Stability”, in “General Relativity, Cosmology and Astrophysics, Fundamental Theories of Physics”, [**[177]{}**]{} p. 277 (Springer International Publishing Switzerland, 2014; ISBN 978-3-319-06348-5); L. Andersson and V. Moncrief, “Future complete vacuum spacetimes”, in “The Einstein equations and the large scale behavior of gravitational fields”, pp. 71–120 (Birkhäuser, Basel, 2004) \[gr-qc/0303045\]. J. M. M. Senovilla and D. Garfinkle, Class. Quant. Grav. [**[32]{}**]{} 124008 (2015) \[arXiv:1410.5226\]. R. Penrose, Phys. Rev. Lett. [**[14]{}**]{} 57 (1965). S. W. Hawking, Proc. Roy. Soc. London [**[A294]{}**]{} 511 (1966); [*[ibid.]{}*]{}, [**[A295]{}**]{} 490 (1966); [*[ibid.]{}*]{}, [**[A300]{}**]{} 187 (1967). R. Penrose and S. W. Hawking, Proc. Roy. Soc. Lond. A [**[314]{}**]{} 529 (1970). J. M. M.
Senovilla, “Singularity theorems in general relativity: achievements and open questions”, Chapter 15 of Einstein and the Changing Worldviews of Physics, eds. C. Lehner, J. Renn and M. Schemmel, Einstein Studies 12 (Birkhauser, 2012) G. F. R. Ellis and A. R. King, Commun. Math. Phys. [**[38]{}**]{} 119 (1974). D. Christodoulou, “The formation of black holes in general relativity” (Monographs in Mathematics, European Mathematical Soc. Publishing House, Helsinki, 2009). S. Klainerman, J. Luk and I. Rodnianski, Invent. Math. [**[198]{}**]{} 1 (2014). S. Klainerman and I. Rodnianski, Acta Math. [**[208]{}**]{} 211 (2012); J. Luk and I. Rodnianski, “Nonlinear interactions of impulsive gravitational waves for the vacuum Einstein equations”, Cambridge J. Math. \[arXiv:1301.1072\]; M. Dafermos, Astrisque [**[123]{}**]{} 352 (2013). P. Bizon and A. Rostworowski, Phys. Rev. Lett. [**[107]{}**]{} 031102 (2011). D. Eardley and V. Moncrief, Comm. Math. Phys. [**[83]{}**]{} 171 (1982) & Comm. Math. Phys. [**[83]{}**]{} 193 (1982); S. Klainerman and M. Machedon, Ann. Math. [**[142]{}**]{} 39 (1995); P. T. Chrusciel and J. Shatah, Asian J. Math. [**[1]{}**]{} 530 (1997); S. Kichenassamy and A. D. Rendall, Class. Quant. Grav. [**[15]{}**]{} 1339 (1998). H. Friedrich, J. Diff. Geom. [**[34]{}**]{} 275 (1991); R. A. Bartnik, M. Fisher and T. A. Olinyk, J. Math. Phys. [**[51]{}**]{} 032504 (2010) \[arXiv:0907.3975\]. P. Bizon, Comm. Math. Phys. [**[215]{}**]{} 45 (2000); P. Bizon, T. Chmaj and Z. Tabor, Nonlinearity [**[14]{}**]{} 1041 (2001). L. Andersson, N. Gudapati and J. Szeftel, “Global Regularity for the 2+1 Dimensional Equivariant Einstein-Wave Map System” \[arXiv:1501.00616\]; J. Sterbenz and D. Tataru, Comm. Math. Phys. [**[298]{}**]{} 231 (2009) \[arXiv:0907.3148\]; P. Bizon and P. Biernat, Comm. Math. Phys. s00220-015-2404-y (2015); P. Bizon, Acta Physica Polonica B [**[33]{}**]{} 1893 (2002); H. Andreasson, Living Rev. Rel. [**[14 ]{}**]{} 4 (2011) \[arXiv:1106.1367\]. D. Christodoulou and S. Klainerman, “The global nonlinear stability of the Minkowski space” (Princeton mathematical series, 41, Princeton University Press, 1993). W. Israel, Comm. Math. Phys. [**[8]{}**]{} 245 (1968); G. Bunting and A. K. M. Masood-ul-Alam, Gen. Rel. Grav. [**[19]{}**]{} 147 (1987). B. Carter, in Black Holes, 1972 Les Houches Lectures, eds. B. DeWitt and C. DeWitt (Gordon and Breach, NY, 1973); B. Carter, Comm. Math. Phys. [**[99]{}**]{} 563 (1985); D. C. Robinson, Phys. Rev. Lett. [**[34]{}**]{} 905 (1975). E. Newman, J. Math. Phys. [**[6]{}**]{} 918 (1965); P. Mazur, J. Phys. A [**[15]{}**]{} 3173 (1982). M. Dafermos, G. Holzegel and I. Rodnianski, “The linear stability of the Schwarzschild solution to gravitational perturbations”, 146 pages (2016) \[arXiv:1601.06467\]. S. Chandrasekhar, “Mathematical Theory of Black Holes” (Oxford University Press, 1983); M. Heusler, Living Rev. Rel. [**[1]{}**]{} 6 (1998); http://www.livingreviews.org/Articles/Volume1/1998-6heusler. G. Holzegel, Class. Quant. Grav. [**[33]{}**]{} 205001 (2016). M. Zilhao, V. Cardoso, C. Herdeiro, L. Lehner and U. Sperhake, Phys. Rev. D [**[90]{}**]{} 124088 (2014) \[arXiv:1410.0694\]. C. B. Collins and G. F. R. Ellis, “Singularities in Bianchi cosmologies", Physics Reports [**[56]{}**]{} 65-105 (1979). A. Coley, S. Hervik and N. Pelavas, Class. Quant. Grav. [**26**]{} 025013 (2009) \[arXiv:0904.4877\]; A. Coley, S. Hervik and N. Pelavas, Class. Quant. Grav. [**[27]{}**]{} 102001 (2010) \[arXiv1003.2373\]; see also A. Coley and S. Hervik, Gen. 
Rel. Grav. [**[43]{}**]{} 2199 (2011). J. M. Maldacena, Int. J. Theor. Phys. [**[38]{}**]{} 1113 (1999); J. M. Maldacena, Adv. Theor. Math. Phys. [**[2]{}**]{} 231 (1998). I. Klebanov and J. Maldacena, Physics Today [**[62]{}**]{} 28 (2009). P. Bizon, Gen. Rel. Grav. [**[46]{}**]{} 1724 (2014) \[arXiv:1312.5544\]. O. J. C. Dias, G. T. Horowitz and J. E. Santos, Class. Quant. Grav. [**[29]{}**]{} 194002 (2012) \[arXiv:1109.1825\]; O. J. C. Dias, and J. E. Santos, Class. Quant. Grav. [**[33]{}**]{} 23LT01 (2016) & “AdS nonlinear instability: breaking spherical and axial symmetries” \[arXiv:1705.03065\]; A. Rostworowski, Class. Quant. Grav. [**[33]{}**]{} 23LT01 (2016)\] \[arXiv:1612.00042\]; O. J. C. Dias, G. T. Horowitz, D. Marolf and J. E. Santos, Class. Quant. Grav. [**[29]{}**]{} 235019 (2012); S. R. Green, A. Maillard, L. Lehner and S. L. Liebling, Phys. Rev. D [**[92]{}**]{} 084001 (2015) \[arXiv:1507.08261\]. M Green, J Schwarz and E Witten, “Superstring Theory” (Cambridge: Cambridge University Press, 1988); J. Polchinski, “String Theory” (Cambridge: Cambridge University Press 2005) R. Emparan and H. S. Reall, Living Rev. Rel. [**[11]{}**]{} 6 (2008) \[arXiv:0801.3471\]. G. J. Galloway and J. M. M. Senovilla, Class. Quant. Grav. [**[27]{}**]{} 152002 (2010). R. Schoen and S.-T. Yau. “Positive Scalar Curvature and Minimal Hypersurface Singularities” \[arXiv:1704.05490\]. L. Lehner and F. Pretorius, Phys. Rev. Lett. [**[105]{}**]{} 101102 (2010). R. Gregory and R. Laflamme, Phys. Rev. Lett. [**[70]{}**]{} 2837 (1993); J. E. Santos and B. Way, Phys. Rev. Lett. [**[114]{}**]{}, 221101 (2015); K. Tanabe, J. High En. Phys. [**[02]{}**]{} 151 (2016); P. Figueras, M. Kunesch, and S. Tunyasuvunakool, Phys. Rev. Lett. [**[116]{}**]{} 071102 (2016). R. Emparan and R. C. Myers, J. High En. Phys. [**[09]{}**]{} 025 (2003); O. J. C. Dias, P. Figueras, R. Monteiro, J. E. Santos, and R. Emparan, Phys. Rev. D [**[80]{}**]{} 111701 (2009); P. Figueras, M. Kunesch, L. Lehner, and S. Tunyasuvunakool, Phys. Rev. Letts. [**[118]{}**]{} 151103 (2017). B. S. DeWitt, Phys. Rev. [**[160]{}**]{} 1113 (1967). A. C. Wall, Class. Quant. Grav. [**[30]{}**]{} 165003 (2013). P. Singh, Bull. Astr. Soc. India [**[42]{}**]{} 121 (2014) \[1509.09182\]; I. Agullo and P. Singh, “Loop Quantum Cosmology: A brief review” contribution for a volume edited by A. Ashtekar and J. Pullin, to be published in the World Scientific series 100 Years of General Relativity (World Scientific, Singapore) \[arXiv:1612.01236\]; A. Corichi and P. Singh, Phys. Rev. Lett. [**[100]{}**]{} 161302 (2008). A. Ashtekar and P. Singh, Class. Quant. Grav. [**[ 28]{}**]{} 213001 (2011); S. Saini and P. Singh, Class. Quant. Grav. [**[34]{}**]{} 235006 (2017) & [**[35]{}**]{} 065014 (2018). M. Bojowald, Phys. Rev. Lett. [**[95]{}**]{} 091302 (2005). D. Brizuela, G. A. Mena Marugán and T. Pawlowski , Class. Quant. Grav. [**[27]{}**]{} 052001 (2010); E. Wilson-Ewing, ”The loop quantum cosmology bounce as a Kasner transition” \[arXiv:1711.10943\]; M. Bojowald and G. M. Paily, Phys. Rev. D [**[87]{}**]{} 044044 (2013). K. Freese, “Status of Dark Matter in the Universe” \[arXiv:1701.01840\]. E. Witten, “The cosmological constant from the viewpoint of string theory”, in Sources and Detection of Dark Matter and Dark Energy in the Universe, ed. D. B. Cline pages 27–36 (Springer, Berlin, Heidelberg, 2001). P. Steinhardt and N. Turok, Science [**[312]{}**]{} 1180 (2006) \[arXiv:astro-ph/0605173\]. S. Weinberg, Rev. Mod. Phys. [**[61 ]{}**]{} 1 (1989). A. 
Padilla, “Lectures on the Cosmological Constant Problem” \[arXiv:1502.05296\]. S. Weinberg, Phys. Rev. Lett. [**[59]{}**]{} 2607 (1987). A. G. Riess [*[et al.]{}*]{}, Astron. J. [**[116]{}**]{} 1009 (1998). S. Perlmutter [*[et al.]{}*]{}, Astrophys. J. [**[517]{}**]{} 565 (1999). A. Kashlinsky, F. Atrio-Barandela and H. Ebeling, Astrophys. J. [**[732]{}**]{} 1 (2011); A. Kashlinsky, F. Atrio-Barandela, H. Ebeling, A. Edge, and D. Kocevski, Ap. J. Lett. [**[ 712]{}**]{} L81(2010) \[arXiv:0910.4958\]; H. A. Feldman, R. Watkin, and M. J. Hudson, Mon. Not. R. Astron. Soc. [**[407]{}**]{} 2328 (2010) \[arXiv:0911.5516\]. J. Colin, R. Mohayaee, S. Sarkar and A. Shafeloo, Mon. Not. R. Astron. Soc. [**[ 414]{}**]{} 264 (2011) \[arXiv:1011.6292\]; A. Green, AAO Observer Number 122 (August 2012) \[arXiv:1210.0625\]. A. Coley, L. Lehner, F. Pretorius and D.  Wiltshire, “Computational Issues in Mathematical Cosmology” (2017); http://cms.iopscience.iop.org/alfresco/d/d/workspace/SpacesStore/ 83f10d6e-0b33-11e7-9a47-19ee90157113/Overview-CC.pdf E. Bentivegna and M. Bruni, Phys. Rev. Lett. [**[ 116]{}**]{} 251302 (2016) \[arXiv:1511.05124\]; E. Bentivegna, Phys. Rev. D [**[95]{}**]{} 044046 (2017) \[arXiv:1610.05198\]. J. T. Giblin, J. B. Mertens and G. D. Starkman, Phys. Rev. Lett. [**[ 116]{}**]{} 251301 (2016), [*[ibid.]{}*]{} Phys. Rev. D [**[93]{}**]{} 124059 (2016) \[arXiv:1511.01105\], [*[ibid.]{}*]{} “A cosmologically motivated reference formulation of numerical relativity” \[arXiv:1704.04307\]. J. Adamek, D. Daverio, R. Durrer and M. Kunz, J. Cosmol. Astropart. Phys. [**[2016(07)]{}**]{} 053 (2016). J. Adamek, D. Daverio, R. Durrer and M. Kunz, Nature Physics [**[12]{}**]{} 346 (2016) \[arXiv:1509.01699\]; J. Adamek, C. Clarkson, D. Daverio, R. Durrer and M. Kunz, “Safely smoothing spacetime: backreaction in relativistic cosmological simulations” \[arXiv:1706.09309\]; J. Adamek, D. Daverio, R. Durrer and M. Kunz, J. Cosmol. Astropart. Phys. [**[2016]{}**]{} 053. (2016). H. Macpherson, D. J. Price and P. D. Lasky, “Einstein’s Universe: cosmological structure formation in numerical relativity” \[arXiv:1807.01711\]; H. Macpherson, P. D. Lasky and D. J. Price, “The trouble with Hubble” \[arXiv:1807.01714\]. C. L. Wainwright, M. C. Johnson, A. Aguirre and H. V. Peiris, J. Cosmol. Astropart. Phys. [**[1410]{}**]{} 024, (2014) \[arXiv:1407.2950\]; C. L. Wainwright, M. C. Johnson, H. V. Peiris, A. Aguirre, and L. Lehner, J. Cosmol. Astropart. Phys. [**[2014(03)]{}**]{} (2014) \[arXiv:1312.1357\]. W. E. East, M. Kleban, A. Linde and L. Senatore, J. Cosmol. Astropart. Phys. [**[1609]{}**]{} 010 (2016) \[arXiv:1511.05143\]; J. Braden, M. C. Johnson, H. V. Peiris and A. Aguirre, Phys. Rev. D [**[96]{}**]{} 023541 (2017) \[arXiv:1604.04001\]; K. Clough, E. A. Lim, B. S. DiNunno, W. Fischler, R. Flauger and S. Paban, J. Cosmol. Astropart. Phys. [**[1709]{}**]{} 025 (2017). R. Brandenberger and P. Peter, Found. Phys. [**[47]{}**]{} 797 (2017) \[arXiv:1603.05834\]. D. Garfinkle, W. C. Lim, F. Pretorius and P. J. Steinhardt, Phys. Rev. D [**[78]{}**]{} 083537 (2008); B. Xue, D. Garfinkle, F. Pretorius and P. J. Steinhardt, Phys. Rev. D [**[88]{}**]{} 083509 (2013). N. Turok, M. Perry and P. J. Steinhardt, Phys. Rev. D [**[70]{}**]{} 106004 (2004). J. D. Barrow, G. J. Galloway and F. J. Tipler, Mon. Not. R. Astron. Soc. [**[223]{}**]{} 835 (1986). X. Lin and R. M. Wald, Phys. Rev. D [**[40]{}**]{} 3280 (1989) & [**[41]{}**]{} 2444 (1990). H. Friedrich, J. Geom. Phys. [**[3]{}**]{} 101 (1986). R. Wald, Phys. Rev. 
D [**[28]{}**]{} 2118 (1983). A. D. Rendall, Math. Proc. Camb. Phil. Soc. [**[118]{}**]{} 511 (1995). A. A. Coley, “Dynamical systems and cosmology” (Kluwer Academic, Dordrecht: ISBN 1-4020-1403-1, 2003). J. M. Heinzle and A. D. Rendall, Comm. Math. Phys. [**[269]{}**]{} 1 (2007); H. Ringstrom, Comm. Math. Phys. [**[290]{}**]{} 155 (2009). L. G. Jensen and J. A. Stein-Schabes, Phys. Rev. D [**[35]{}**]{} 1146 (1987). E. M. Lifshitz and I. M. Khalatnikov, Adv. Phys. [**12**]{} 185 (1963); V. A. Belinskii, I. M. Khalatnikov, and E. M. Lifschitz, Adv. Phys. [**19**]{}, 525 (1970); [*[ibid.]{}*]{} [**31**]{} 639 (1982); V. A. Belinskii and I. M. Khalatnikov, Soviet Scientific Review Section A: Physics Reviews [**3**]{} 555 (1981). B. K. Berger and V. Moncrief, Phys. Rev. D [**[48]{}**]{} 4676 (1993); B. K. Berger, Living Rev. Rel. [**[5]{}**]{} 1 (2002). D. Garfinkle, Phys. Rev. Lett. [**[93]{}**]{} 161101 (2004); D. Garfinkle, Class. Quant. Grav. [**[24]{}**]{} S295 (2007). J. Wainwright and G. F. R. Ellis, “Dynamical systems in cosmology” (Cambridge University Press, Cambridge, 1997). C. Uggla, H. van Elst, J. Wainwright and G. F. R. Ellis, Phys. Rev. D [**[68]{}**]{} 103502 (2003). A. A. Coley and W. C. Lim, Phys. Rev. Lett. [**108**]{} 191101 (2012) \[arXiv:1205.2142\]; W. C. Lim and A. A. Coley, Class. Quant. Grav. [**[31]{}**]{} 015020 (2014) \[arXiv:1311.1857\]. S. W. Goode and J. Wainwright, Class. Quant. Grav. [**[2]{}**]{} 99 (1985); S. W. Goode, A. A. Coley and J. Wainwright, Class. Quant. Grav. [**[9]{}**]{} 445 (1992) \[arXiv:0810.3744\] C. M. Claudel and K. P. Newman, Proc. R. Soc. London, Ser. A [**[454]{}**]{} 3 (1998); R. P. A. C. Newman, Proc. R. Soc. London, [**[443]{}**]{} A473 & A493 (1993). K. Anguige, and K. P. Tod, Ann. Phys. (N. Y.) [**[276]{}**]{} 257 (1999). J. Middleton and J. D. Barrow, Phys. Rev. D [**[77]{}**]{} 10352 (2008) \[arXiv:0801.4090\]. I. V. Kirnos, A. N. Makarenko, S. A. Pavluchenko and A. V. Toporensky, Gen. Rel. Grav. [**[42]{}**]{} 2633 (2010) \[arXiv:gr-qc/0906.0140\]. J. D. Barrow and S. Hervik, Phys. Rev. D [**[81]{}**]{} 023513 (2010) \[arXiv:0911.3805\] R. van den Hoogen, J. Math. Phys. [**[50]{}**]{} 082503 (2009). A. A. Coley, Class. Quant. Grav. [**27**]{} 245017 (2010) \[arXiv:0908.4281\]. R. M.  Zalaletdinov, Gen. Rel. Grav. [**24**]{} 1015 (1992) & Gen. Rel. Grav. [**25**]{} 673 (1993) \[arXiv:gr-qc/9703016\]; M. Mars and R. M. Zalaletdinov, J. Math. Phys. [**38**]{} 4741 (1997). A. A. Coley, N.  Pelavas and R. M. Zalaletdinov, Phys. Rev. Letts. [**95**]{} 151102 (2005) \[arXiv:gr-qc/0504115\]. T. Buchert, Gen. Rel. Grav. [**[32]{}**]{} 105 (2000) \[arXiv:gr-qc/9906015\] & Gen. Rel. Grav. [**[33]{}**]{} 1381 (2001) \[arXiv:gr-qc/0102049\]. T. Buchert, A. A. Coley, H. Kleinert, B. F. Roukema and D. L. Wiltshire, Int. J. Mod. Phys. D [**[25]{}**]{} 1630007 (2016) \[arXiv:1512.03313\]. F. Finelli, J. Garca-Bellido, A. Kovcs, F. Paci and I. Szapudi, Mon. Not. Roy. Astron. Soc. [**[455]{}**]{} 1246 (2016) \[arXiv:1405.1555\]; A. Kovacs and J. Garca-Bellido, Mon. Not. Roy. Astron. Soc. [**[462]{}**]{} 1882 (2016) \[arXiv:1511.09008\]. J. Einasto, “Yakov Zeldovich and the Cosmic Web Paradigm”, in Proc. IAU Symp. [**308**]{}, eds. R. van de Weygaert, S. Shandarin, E. Saar, J. Einasto (Cambridge Univ. Press, 2017) \[arXiv:1410.6932\]. F. Hoyle and M. S. Vogeley, Astrophys. J. [**566**]{} 641 (2002) \[arXiv:astro-ph/0109357\]; Astrophys. J. [**607**]{} 751 (2004) \[arXiv:astro-ph/0312533\]. D. C. Pan, M. S. Vogeley, F. Hoyle, Y. Y. Choi, and C. 
Park, Mon. Not. R. Astron. Soc. [**421**]{} 926 (2012) \[arXiv:1103.4156\]. D. L. Wiltshire, Class. Quant. Grav. [**28**]{} 164006 (2011) \[arXiv:1106.1693\]. M. Scrimgeour [*[et al.]{}*]{}, Mon. Not. R. Astron. Soc. [**425**]{} 116 (2012) \[arXiv:1205.6812\]. D. W. Hogg, D. J. Eisenstein, M. R. Blanton, N. A. Bahcall, J. Brinkmann, J. E. Gunn and D. P. Schneider, Astrophys. J. [**624**]{} 54 (2005) \[arXiv:astro-ph/0411197\]. F. Sylos Labini, N.L. Vasilyev, L. Pietronero and Y. V. Baryshev, Europhys. Lett. [**86**]{} 49001 (2009) \[arXiv:0805.1132\]; see also \[arXiv:1512.03313\]. T. Buchert [*[et al.]{}*]{}, Class. Quant. Grav. [**[32]{}**]{} 215021 (2015) \[arXiv:1505.07800\]. T. Buchert and M. Carfora, Phys. Rev. Letts. [**90**]{} 031101 (2003) \[gr-qc/0210045\]; D. L. Wiltshire, New J. Phys. [**[9]{}**]{} 377 (2007) \[arXiv:gr-qc/0702082\]; T. Buchert, “Is Dark Energy Simulated by Structure Formation in the Universe?” \[arXiv:1810.09188\]. \[The LIGO Scientific Collaboration, the Virgo Collaboration\] B. P. Abbott [*[et al.]{}*]{}, Phys. Rev. Lett. [**[116]{}**]{} 061102 (2016). \[The LIGO Scientific Collaboration, the Virgo Collaboration\] B. P. Abbott [*[et al.]{}*]{} Phys. Rev. Lett. [**[116]{}**]{} 241102 & 241103 (2016), Phys. Rev. Lett. [**[118]{}**]{} 221101 (2017) & [**[119]{}**]{} 141101 (2017); Astrophys. J. [**[851]{}**]{} L35 (2017); Phys. Rev. Lett. [**[123]{}**]{}, 011102 (2019); “A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and virgo during the First and Second Observing Runs” \[arXiv:1811.12907\]. S. Klainerman and J. Szeftel, “Global Nonlinear Stability of Schwarzschild Spacetime under Polarized Perturbations”, 425 pages \[arXiv:1711.07597\]. M. Dafermos and I. Rodnianski, “Lectures on black holes and linear waves”, Clay Mathematics Proceedings [**[17]{}**]{} 97 (2008) \[arXiv:0811.0354\]; M. Dafermos, G. Holzegel and I. Rodnianski, “Boundedness and decay for the Teukolsky equation on Kerr spacetimes I” \[arxiv/1711.07944\]. H. Yang, V. Paschalidis, K. Yagi, L. Lehner, F. Pretorius and N. Yunes, Phys. Rev. D [**[97]{}**]{} 024049 (2018) \[arXiv:1707.00207\]. G. Martinon, “The instability of anti-de Sitter space-time” \[arXiv:1708.05600\]. M. Henneaux, Khalatnikov-Lifshitz analysis in [*[Quantum Mechanics of Fundamental Systems: the Quest for Beauty and Simplicity - Claudio Bunster Festsschrift]{}*]{} \[arXiv:0806.4670\]. A. Ashtekar, T. Pawlowski and P. Singh, Phys. Rev. D [**[74]{}**]{} 084003 (2006); P. Diener, B. Gupt and P. Singh, Class. Quant. Grav. [**[31]{}**]{} 105015 (2014). P. Singh and E. Wilson-Ewing, Class. Quant. Grav. [**31**]{}, 035010 (2014); A. Corichi and E. Montoy, Class. Quant. Grav. [**[34]{}**]{} 054001 (2017). E. Wilson-Ewing, Phys. Rev. D [**[82]{}**]{} 043508 (2010). L. Andersson [*[et al.]{}*]{}, Phys. Rev. Lett. [**[94 ]{}**]{}051101 (2005). J. Maldacena, Adv. Theor. Math. Phys. [**[2]{}**]{} 231 (1998). N. Engelhardt and G. T. Horowitz, Int. J. Mod. Phys. [**[D25]{}**]{} 1643002 (2016) & Phys. Rev. D [**[93]{}**]{} 026005 (2016). G. T. Horowitz, New J. Phys. [**[7]{}**]{} 201 (2005). J. B. Hartle, “The Impact of Cosmology on Quantum Mechanics” \[arXiv:1901.03933\]; J. B. Hartle, “Quantum Cosmology: Problems for the 21st Century” in Proceedings of the 11th Nishinomiya-Yukawa Symposium, ed by K. Kikkawa [*[et al.]{}*]{}, World Scientific Singapore, 1998. \[arXiv:gr-qc/9701022\]. M. Ishak, “Testing general relativity in cosmology” \[arXiv:1806.10122\]. T. Clifton, P. G. Ferreira, A. Padilla and C. 
Skordis “Modified gravity and cosmology” Physics reports **513** 1-189 (2012). L. Barack [*[et al.]{}*]{}, “Black holes, gravitational waves and fundamental physics: a roadmap” \[arXiv:1806.05195\]. F. Debbasch, Eur. Phys. J. [**B37**]{} 257 (2004) & [**B43**]{} 143 (2005). G. Lemaître, Ann. Société Sci. de Bruxelles **47** 49 (1927). L. D. Landau and E. M. Lifshitz, [*Fluid Mechanics*]{} (Pergammon Press, Oxford, 1987). J. M. Heinzle and C. Uggla, Class. Quant. Grav. [**[26]{}**]{} 075016 (2009); H. Ringstrom, Class. Quant. Grav. [**[17]{}**]{} 713 (2000) & Annales Henri Poincare [**[2]{}**]{} 405 (2001); B. Brehm, “Bianchi VIII and IX vacuum cosmologies: Almost every solution forms particle horizons and converges to the Mixmaster attractor” \[arXiv:1606.08058, 2016\]. K. Bolejko, Phys. Rev. D [**[97]{}**]{} 103529 (2018) \[arXiv:1712.02967\]. K. Bolejko, J. Cosmol. Astropart. Phys. [**[06]{}**]{} 025 (2017). K. Bolejko, J. Cosmol. Astropart. Phys. [**[02]{}**]{} 025 (2011). H. W. Hamber, Quantum Gravitation, Springer Tracts in Modern Physics (Springer Publishing, Berlin and New York, 2009) & \[arXiv:1707.08188\]\]; H. W. Hamber and L. H. Sunny Yu, “Gravitational Fluctuations as an Alternative to Inflation” \[arXiv:1807.10704\]. J. Martin, ”The Theory of Inflation” \[arXiv:1807.11075\]. A. M. Polyakov, “Infrared instability of the de Sitter space,” \[arXiv:1209.4135\]; A. M. Polyakov, Nucl. Phys. B [**[797]{}**]{}, 199 (2008) \[arXiv:0709.2899\]; E. Mottola, Phys. Rev. D [**[33]{}**]{}, 1616 (1986); N. C. Tsamis and R. P. Woodard, Nucl. Phys. B [**[474]{}**]{}, 235 (1996); L. R. W. Abramo, R. H. Brandenberger and V. F. Mukhanov, Phys. Rev. D [**[56]{}**]{}, 3248 (1997) \[gr-qc/9704037\]; R. Brandenberger, L L. Graef, G. Marozzi and G. P. Vacca, “Back-Reaction of Super-Hubble Cosmological Perturbations Beyond Perturbation Theory” \[arXiv:1807.07494\]; R. H. Brandenberger, “Back reaction of cosmological per- turbations” \[hep-th/0004016\]. S. Weinberg, Ap. J. [**[208]{}**]{} L1 (1976). N. Kaiser and J. A. Peacock, Mon. Not. R. Astron. Soc. [**[455]{}**]{} 4518 (2015) \[arxiv:1503.08506\]. R. Durrer, [*[The cosmic microwave background]{}*]{} (Cambridge: Cambridge University Press, 2008). T. Buchert, P. Mourier and X. Roy, Class. Quant. Grav. [**[35]{}**]{} 24LT02 (2018) \[arXiv1805.10455\]; A. Heinesen, P. Mourier and T. Buchert, “On the covariance of scalar averaging” \[arXiv:1811.01374\]. A. G. Riess et al., Ap. J. [**[826]{}**]{} 56 (2016) \[arxiv:1604.01424\]. P. A. R. Ade [*[et al.]{}*]{}, [*[Planck 2015 results. XIII. cosmological parameters]{}*]{}, Astron. Astrophys. [**[594]{}**]{} A13 (2016) \[arxiv:1502.01589\]. C. Clarkson, T. Clifton, A. Coley and R. Sung, Phys. Rev. D [**[85]{}**]{} 043506 (2012) \[arxiv:1111.2214\]; B. Santos, A. A. Coley, N. C. Devi and J. S. Alcaniz, J. Cosmol. Astropart. Phys. [**[1702]{}**]{} 047 (2017) \[arxiv:1611.01885\]; A. A. Coley, B. Santos and V. A. A. Sanghai, “Data Analysis and Phenomenological Cosmology” \[arXiv:1808.07145\]. E. Di Dio, F. Montanari, A. Raccanelli, R. Durrer, M. Kamionkowski and J. Lesgourgues, J. Cosmol. Astropart. Phys. [**[1606]{}**]{} 013 (2016) \[arxiv:1603.09073\]; C. D. Leonard, P. Bull and R. Allison, Phys. Rev D [**[94]{}**]{} 023502 (2016) \[arxiv:1604.01410\]. A. G. Riess [*[et al.]{}*]{}, Astrophys. J. [**[861]{}**]{}, 126 (2018) \[arxiv:1804.10655\]. E. Di Valentino, A. Melchiorri and J. Silk, Phys. Lett. B [**[761]{}**]{}, 242 (2016) \[arxiv:1606.00634\]; E. Di Valentino, A. Melchiorri and O. Mena, Phys. Rev. 
D [**[96]{}**]{} 043503 (2017) \[arxiv:1704.08342\]; E. Di Valentino, E. V. Linder and A. Melchiorri, Phys. Rev. D [**[97]{}**]{} 043528 (2018) \[arxiv:1710.02153\]; J. Solà, A. Gómez-Valent and J. de Cruz Pérez, Phys. Lett. B [**[774]{}**]{} 317 (2017) \[arxiv:1705.06723\]. J. Larena, J.-M. Alimi, T. Buchert, M. Kunz and P. S. Corasaniti, Phys. Rev. D [**[79]{}**]{}, 083011 (2009) \[arxiv:0808.1161\]. W. East, R. Wojtak and T. Abel, Phys. Rev. D [**[97]{}**]{} 043509 (2018); J. Adamek, M. Gosenca and S. Hotchkiss, Phys. Rev. D [**[93]{}**]{} 023526 (2016). S. W. Hawking and G. F. R. Ellis, Ap. J. [**[152]{}**]{} 25 (1968); S. W. Hawking and G. F. R. Ellis, *The large scale structure of spacetime* (Cambridge Univ. Press, Cambridge, 1973). G. F. R. Ellis and J. E. Baldwin, Mon. Not. R. Astron. Soc. [**[206]{}**]{} 377 (1984). G. F. R. Ellis, H. Van Elst, J. Murugan and J. P. Uzan, Class. Quant. Grav. [**[28]{}**]{} 225007 (2011); G. F. R. Ellis, Gen. Rel. Grav. [**[46]{}**]{} 1619 (2014). M. J. Disney, Nature [**[263]{}**]{} 573 (1976). G. F. R. Ellis, E. Platts, D. Sloan and A. Weltman, J. Cosmol. Astropart. Phys. [**[2016(04)]{}**]{} 026 (2016). Y. F. Cai, R. Brandenberger and P. Peter, Class. Quant. Grav. [**[30]{}**]{} 075019 (2013) \[arXiv:1301.4703\]. J. Khoury, B. A. Ovrut, P. J. Steinhardt and N. Turok, Phys. Rev. D [**[64]{}**]{} 123522 (2001) \[hep-th/0103239\]; J. Khoury, B. A. Ovrut, N. Seiberg, P. J. Steinhardt, and N. Turok, Phys. Rev. D [**[65]{}**]{} 086007 (2002) \[hep-th/0108187\]; P. L. McFadden, N. Turok, and P. J. Steinhardt, Phys. Rev. D [**[76]{}**]{} 104038 (2007) \[hep-th/0512123\]; J.-L. Lehners and N. Turok, Phys. Rev. D [**[77]{}**]{} 023516 (2008) \[hep-th/0708.0743\]. A. Notari and A. Riotto, Nucl. Phys. B [**[644]{}**]{} 371 (2002) \[hep-th/0205019\]; F. Finelli, Phys. Lett. B [**[545]{}**]{} 1 (2002) \[hep-th/0206112\]; F. Di Marco, F. Finelli and R. Brandenberger, Phys. Rev. D [**[67]{}**]{} 063512 (2003) \[astro-ph/0211276\]; J. L. Lehners, P. McFadden, N. Turok and P. J. Steinhardt, Phys. Rev. D [**[76]{}**]{} 103501 (2007) \[hep-th/0702153\]; E. I. Buchbinder, J. Khoury and B. A. Ovrut, Phys. Rev. D [**[76]{}**]{} 123503 (2007) \[hep-th/0702154\]; P. Creminelli and L. Senatore, J. Cosmol. Astropart. Phys. [**[0711]{}**]{} 010 (2007) \[hep-th/0702165\]. R. H. Brandenberger and C. Vafa, Nucl. Phys. B [**[316]{}**]{} 391 (1989); A. Nayeri, R. H. Brandenberger, and C. Vafa, Phys. Rev. Lett. [**[97]{}**]{} 021302 (2006); R. H. Brandenberger, Class. Quant. Grav. [**[28]{}**]{} 204005 (2011) \[arXiv:1105.3247\]. G. F. R. Ellis and M. S. Madsen, Class. Quantum Grav. [**[8]{}**]{} 667 (1991). C. Ganguly and M. Bruni, “Quasi-isotropic cycles and non-singular bounces in a Mixmaster cosmology” \[arXiv:1902.06356\]. M. Sahlén, “On Probability and Cosmology: Inference Beyond Data?” \[arXiv:1812.04149\]; slightly expanded version of a contribution to the book ‘The Philosophy of Cosmology’, eds. K. Chamcham, J. Silk, J. D. Barrow and S. Saunders (Cambridge University Press, 2017). R. H. Brandenberger, “Beyond Standard Inflationary Cosmology” \[arXiv:1809.04926\] (modified version of a contribution to “Beyond Spacetime”, eds. N. Huggett, K. Matsubara and C. Wuethrich (Cambridge Univ. Press, Cambridge, 2018)). S. Saini and P. Singh, “Generic absence of strong singularities and geodesic completeness in modified LQG” \[arXiv:1812.08937\]; see also B. F. Li, P. Singh and A. Wang, Phys. Rev. D [**[97]{}**]{} 084029 (2018) & [**[98]{}**]{} 066016 (2018); I. Agullo, Gen. Rel. Grav.
[**[50]{}**]{} 91 (2018). A. Ijjas, F. Pretorius and P. J. Steinhardt, “Stability and the Gauge Problem in Non-Perturbative Cosmology” \[arXiv:1809.07010\]; see also F. Pretorius. Class. Quant. Grav. [**[22]{}**]{} 425 (2005) & D. Garfinkle, Phys. Rev., D [**[65]{}**]{} 044029 (2002). H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. [**[78]{}**]{} 1–166 (1984). N. J. Cornish, D. N. Spergel, and G. D. Starkman, Class. Quant. Grav. [**[15]{}**]{} 2657 (1998). R. Aurich, H. S. Janzer, S. Lustig and F. Steiner, Class. Quant. Grav. [**[25]{}**]{} 125006 (2008); see also G. F. R. Ellis and G. Schreiber, Phys. Lett. A [**[115]{}**]{} 97 (1986). Particle Data Group, Rev. Part. Phys. Chin. Phys. C [**[40]{}**]{} 100001 (2016). S. Carneiro, P. C. de Holanda, C. Pigozzo, F. Sobreira, “Is the $H_0$ tension suggesting a 4th neutrino’s generation?” \[arXiv:1812.06064\]. J. Silk, Found. Phys. [**[48]{}**]{} 1305 (2018). R. Maartens, Phil. Trans. R. Soc. A [**[369]{}**]{} 5115 (2011). G. F. Ellis, R. Maartens and M. A. MacCallum, [*[Relativistic cosmology]{}*]{} (Cambridge University Press, 2012). B. J. Carr and A. A. Coley, Int. J. Mod. Phys. D [**[20]{}**]{} 2733 (2011); T. Clifton, B. Carr and A. Coley, Class. Quantum Grav. [**[34]{}**]{} 135005 (2017) \[arXiv:1701.05750\]. A. Linde and M. Noorbala, J. Cosmol. Astropart. Phys. [**[9]{}**]{} 008 (2010); K. D. Olum, Phys. Rev. D [**[86]{}**]{} 063509 (2012). J. Maldacena, J. High En. Phys. [**[5]{}**]{} 013 (2003); G. Cabass, E. Pajer and F. Schmidt, J. Cosmol. Astropart. Phys. [**[1]{}**]{} 003 (2017). P. Amaro-Seoane, H. Audley, S. Babak, [*[et al.]{}*]{}, “Laser Interferometer Space Antenna” \[arXiv:1702.00786\]. B.J. Carr and S.W. Hawking, Mon. Not. R. Astron. Soc. [**[168]{}**]{} 399 (1974). A. Dolgov and J. Silk, Phys. Rev. D [**[47]{}**]{} 4244 (1993). G. Ellis and J. Silk, Nature [**[516]{}**]{} 321 (2014). D. L. Wiltshire, P. R. Smale, T. Mattsson, T. and R. Watkins, Phys. Rev. D [**[88]{}**]{} 083529 (2013); J. H. McKay and D. L. Wiltshire, Mon. Not. R. Astron. Soc., [**[457]{}**]{} 3285 (2016) (Err. ibid. [**[463]{}**]{} 3113); see also D. L. Wiltshire, “Comment on "Hubble flow variations as a test for inhomogeneous cosmology” \[arXiv:1812.01586\]. D. Kraljic and S. Sarkar, J. Cosmol. Astropart. Phys. [**[1610]{}**]{} 016 (2016). R. Maartens, C. Clarkson and S. Chen, J. Cosmol. Astropart. Phys. **1801** 013 (2018). H. Audley [*[et al.]{}*]{} (LISA) \[arxiv:1702.00786\]. R. R. Caldwell and C. Devulder, Phys. Rev. D [**[97]{}**]{} 023532 (2018) \[1706.03765\]; R. R. Caldwell, T. L. Smith and D. G. E. Walker, “Using a Primordial Gravitational Wave Background to Illuminate New Physics” \[arXiv:1812.07577\] R. Jimenez, A. Raccanelli, L. Verde and S. Matarrese, J. Cosmol. Astropart. Phys. [**[1804]{}**]{} 002 (2018). H. Xu, Z. Huang, Z. Liu and H. Miao, “Flatness without CMB - the Entanglement of Spatial Curvature and Dark Energy Equation of State” \[arXiv:1812.09100\]. M. Celoria and S. Matarrese, “Primordial Non-Gaussianity” \[arXiv:1812.08197\]. B. Carr, F. Kuhnel and M. Sandstad, Phys. Rev. D [**[94]{}**]{} 083504 (2016). B. Carr and J. Silk, Mon. Not. R. Astron. Soc. [**[478]{}**]{} 3756 (2018). H. Gil-Marin [*et al.*]{}, Mon. Not. R. Astron. Soc. [**[465]{}**]{} 1757 (2017). K. Malik and D. Wands, Phys. Rept. [**[475]{}**]{} 1 (2009). V. A. A. Sanghai and T. Clifton, Class. Quant. Grav. [**[34]{}**]{} 065003 (2017); see also V. A. A. Sanghai and T. Clifton, Phys. Rev. D [**[91]{}**]{} 103532 (2015), Phys. Rev. 
D [**[93]{}**]{} 089903 (2016) & Phys. Rev. D [**[94]{}**]{} 023505 (2016). A. Ijjas, P. J. Steinhardt, and A. Loeb, Phys. Rev. D [**[89]{}**]{} 023525 (2014); A. Ijjas, J.-L. Lehners, and P. J. Steinhardt, Phys. Rev. D [**[89]{}**]{} 123520 (2014); A. Ijjas and P. J. Steinhardt,“Bouncing Cosmology made simple” \[arXiv:1803.01961\]. S. R. Goldberg, T. Clifton and K. Malik, Phys. Rev. D [**[95]{}**]{} 043503 (2017); S. Goldberg, C. Gallagher and T. Clifton, Phys. Rev. D [**[96]{}**]{} 103508 (2017). K. Malik and D. Wands, Class. Quant. Grav. [**[21]{}**]{} L65 (2004); K. Nakamura, Prog. Theor. Phys. [**[110]{}**]{} 723 (2003) [*[ibid.]{}*]{} [**[113]{}**]{} 481 (2005) [*[ibid.]{}*]{} [**[117]{}**]{} 17 (2007). C. Clarkson, O. Umeh, R. Maartens and R. Durrer, J. Cosmol. Astropart. Phys. [**[11]{}**]{} 036 (2014) \[arxiv:1405.7860\]. C. Bonvin, C. Clarkson, R. Durrer, R. Maartens and O. Umeh, J. Cosmol. Astropart. Phys. [**[2015]{}**]{} 050 (2015) \[arxiv:1503.07831\]; Bonvin, C. Clarkson, R. Durrer, R. Maartens and O. Umeh, J. Cosmol. Astropart. Phys. [**[1507]{}**]{} 40 (2015) \[arxiv:1504.01676\]. O. Umeh, C. Clarkson, and R. Maartens, Class. Quant. Grav. [**[31]{}**]{} 205001 (2014) \[arxiv:1402.1933\]. I. Ben-Dayan, M. Gasperini, G. Marozzi, F. Nugier and G. Veneziano, J. Cosmol. Astropart. Phys. [**[1306]{}**]{} 002 (2013) \[arxiv:1308.4935\]. P. Fleury, C. Clarkson and R. Maartens, J. Cosmol. Astropart. Phys. [**[1703]{}**]{} 062 (2017) \[arxiv:1612.03726\]. I. Ben-Dayan, R. Durrer, G. Marozzi and D. J. Schwarz, Phys. Rev. Lett. [**[112]{}**]{} 221301 (2014) \[arxiv:1401.7973\] [*[ibid.]{}*]{} Phys. Rev. Lett. [**[110]{}**]{} 021301 (2013) \[arXiv:1207.1286\]. C. Rampf, E. Villa, D. Bertacca and M. Bruni, Phys. Rev. D [**[94]{}**]{} 083515 (2016) \[arXiv:1607.05226\]; I. Milillo [*[et al.]{}*]{}, Phys. Rev. D [**[92]{}**]{} 023519 (2015). N. Bartolo, D. Bertacca, M. Bruni, K. Koyama, R. Maartens and S. Matarrese, Physics of the dark universe [**[13]{}**]{} 30 (2015) \[arXiv:1506.00915\]. J. Adamek, C. Clarkson, L. Coates, R. Durrer and M. Kunz, “Bias and scatter in the Hubble diagram from cosmological large-scale structure” \[arXiv:1812.04336\]. S.W. Hawking, Nature [**[248]{}**]{} 30 (1974). B. J. Carr, “Primordial black holes as dark matter and generators of cosmic structure” “Contribution to Proceedings of Simons Conference ”Illuminating Dark Matter", held in Kruen, Germany, in May 2018, eds. R. Essig, K. Zurek, J. Feng (to be published by Springer). B. J. Carr, Astron. Astrophys. [**[89]{}**]{} 6 (1980). P. Tarrio, M. F. Mendez and G. A. M. Marugan, Phys. Rev. D [**[88]{}**]{} 084050 (2013). J. M. Bardeen, Phys. Rev. D [**[22]{}**]{} 1882 (1980); V.F. Mukhanov, H. A. Feldman and R. H. Brandenberger, Phys. Reports [**[215]{}**]{} 203 (1992). S. W. Hawking, Astrophys. J. [**[145]{}**]{} 544 (1966); G. F. R. Ellis and M. Bruni, Phys. Rev. D [**[40]{}**]{} 1804 (1989); M. Bruni, P. K. S. Dunsby and G. F. R. Ellis, Astrophys. J. [**[395]{}**]{} 34 (1992); M. Bruni, S. Matarrese, S. Mollerach and S. Sonego, Class. Quant. Grav. [**[14]{}**]{} 2585 (1997). R. K. Sachs and A. M. Wolfe, Astrophys. J. [**[147]{}**]{} 73 (1967). B. J. Carr (editor), [*[Universe or Multiverse]{}*]{} (Cambridge University Press, 2007); also see B. J. Carr and G. F. R. Ellis, Astron. Geophys. [**[49]{}**]{} 2 (2008). B. Ratra, Phys. Rev. D [**[96]{}**]{} 103534 (2017); C.-G. Park and B. 
Ratra, “Using the tilted flat and non-flat inflation models to measure cosmological parameters from a compilation of observational data” \[arXiv:1801.00213\]; C.-Z. Ruan, M. Fulvio, C. Yu and Z. Tong-Jie, “Using spatial curvature with HII galaxies and cosmic chronometers to explore the tension in $H_0$” \[arXiv:1901.06626\]. C.-G. Park and B. Ratra, “Measuring the Hubble constant and spatial curvature from supernova apparent magnitude, baryon acoustic oscillation, and Hubble parameter data” \[arXiv:1809.03598\]. H. Yu, B. Ratra and F.-Y. Wang, Ap. J. [**[856]{}**]{} 3 (2018); J. Ryan, S. Doshi and B. Ratra, Mon. Not. R. Astron. Soc. [**[480]{}**]{} 759 (2018). J. Ryan, Y. Chen and B. Ratra, “Baryon acoustic oscillation, Hubble parameter, and angular size measurement constraints on the Hubble constant, dark energy dynamics, and spatial curvature” \[arXiv:1902.03196\]. C. [Clarkson]{}, B. [Bassett]{} and T. H. [Lu]{}, Phys. Rev. Lett. [**101**]{} 011301 (2008). R. Durrer, Helv. Phys. Acta [**[69]{}**]{} 417 (1996). C. Desgrange, A. Heinesen and T. Buchert, “Dynamical spatial curvature as a fit to type Ia supernovae” IJMPD \[arXiv:1902.07915\]. J. Martin, “Cosmic Inflation: Trick or Treat?” \[arXiv:1902.05286\]; D. Chowdhury, J. Martin, C. Ringeval and V. Vennin, “Inflation after Planck: Judgment Day” \[arXiv:1902.03951\]. B. F. Roukema, J. J. Ostrowski, P. Mourier and Q. Vigneron, “Does spatial flatness forbid the turnaround epoch of collapsing structures?”, Aston Astrophys \[arXiv:1902.09064\]. I. Brown, A. Coley and J. Latta, Phys Rev D. [**87**]{} 043518 (2013)\[arXiv:1211.0802\]; I. A. Brown, A. A. Coley, D. L. Herman and J. Latta, Phys. Rev. [**88**]{} 083523 (2013) \[arXiv:1308.5072\]. S. R. Green and R. M. Wald, Class. Quant. Grav. [**[31]{}**]{} 234003 (2014). M. Kleban and L.Senatore, “Inhomogeneous Anisotropic Cosmology” \[arXiv:1602.03520\]; A. Linde, “On the problem of initial conditions for inflation” \[arXiv:1710.04278\]. \[The LIGO Scientific Collaboration, the Virgo Collaboration\] B. P. Abbott [*[et al.]{}*]{}, Astrophys. J. Lett. [**[876]{}**]{} L7 (2019); C. Guidorzi [*[et al.]{}*]{}, “Improved constraints on $H_0$ from a combined analysis of gravitational wave and electromagnetic emission from $GW170817$” \[arXiv:1710.06426\]. S. Carlip, “How to Hide a Cosmological Constant” \[arXiv:1905.05216\]. C. P. L. Berry [*[et al.]{}*]{}, “The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy” \[arXiv:1903.03686\]; S. T. McWilliams [*[et al.]{}*]{}, “Decadal Science White Paper: The state of gravitational-wave astrophysics in 2020” \[arXiv:1903.04592\]; D. Reitze [*[et al.]{}*]{}, “The US Program in Ground-Based Gravitational Wave Science: Contribution from the LIGO Laboratory, \[arXiv:1903.04615\]; R. Caldwell [*[et al.]{}*]{}, “Science White Paper: Cosmology with a Space-Based Gravitational Wave Observatory” \[arXiv:1903.04657\]. A. G. Riess, S. Casertano, W. Yuan, L. M. Macri and D. Scolnic, “Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics Beyond $\Lambda$CDM” \[arXiv:1903.07603\]. G. Obied, H. Ooguri, L. Spodyneiko and C. Vafa, “De Sitter Space and the Swampland" \[arXiv:1806.08362\]; U. H. Danielsson and T. Van Riet, “What if string theory has no de Sitter vacua?" \[arXiv:1804.01120\]; G. Dvali and C. Gomez, Annalen Phys. [**[528]{}**]{}, 68 (2016) \[arXiv:1412.8077\]; A. Castro, N. Lashkari and A. Maloney, Phys. Rev. 
D [**[83]{}**]{} 124027 (2011) \[arXiv:1103.4620\]. Y. Akrami [*[et al.]{}*]{}, *Planck 2018 results. I. Overview* \[arxiv:1807.06205\]; N. Aghanim [*[et al.]{}*]{}, *VI. Cosmological parameters* \[arxiv:1807.06209\]; Y. Akrami [*[et al.]{}*]{}, *X. Constraints on inflation* \[arxiv:1807.06211\]. D. Baumann and L. McAllister, [*[Inflation and String Theory]{}*]{} (Cambridge Monographs on Mathematical Physics: Cambridge University Press, 2015) \[arXiv:1404.2601\]. S. Kachru, R. Kallosh, A. D. Linde, J. M. Maldacena, L. P. McAllister and S. P. Trivedi, J. Cosmol. Astropart. Phys. [**[0310]{}**]{} 013 (2003) \[hep-th/0308055\]. R. Kallosh, A. Linde and Y. Yamada, “Planck 2018 and Brane Inflation Revisited” \[arXiv:1811.01023\]; Y. Akrami, R. Kallosh, A. Linde and V. Vardanyan, JCAP [**[1806]{}**]{} 041 (2018); R. Kallosh, A. Linde and D. Roest, J. High En. Phys. 11 198 (2013) \[arXiv:1311.0472\]. K. Maeda, S. Mizuno and R. Tozuka, “$\alpha$-attractor-type Double Inflation” \[arXiv:1810.06914\]. C. Kiefer, Int. Ser. Monogr. Phys. [**[124]{}**]{} 1 (2004) & Int. Ser. Monogr. Phys. [**[136]{}**]{} 1 (2007) & Int. Ser. Monogr. Phys. [**[155]{}**]{} 1 (2012); R. Gambini and J. Pullin, “A first course in loop quantum gravity” (p. 183, Oxford Univ. Press, UK, 2011); C. Rovelli, “Quantum gravity” (p. 455, Cambridge Univ. Press, UK, 2004). A. Bhardwaj, E. J. Copeland and J. Louko, “Inflation in LQC” \[arXiv:1812.06841\]. N. Arkani-Hamed, L. Motl, A. Nicolis and C. Vafa, J. High En. Phys. [**[0706]{}**]{} 060 (2007) \[hep-th/0601001\]; C. Cheung and G. N. Remmen, Phys. Rev. Lett. [**[113]{}**]{} 051601 (2014) \[arXiv:1402.2287\]. F. Denef, A. Hebecker and T. Wrase, “The dS swampland conjecture and the Higgs potential” \[arXiv:1807.06581\]; D. Andriot, “New constraints on classical de Sitter: flirting with the swampland” \[arXiv:1807.09698\]; C. Roupec and T. Wrase, “de Sitter extrema and the swampland” \[arXiv:1807.09538\]; A. Kehagias and A. Riotto, “A note on Inflation and the Swampland” \[arXiv:1807.05445\]; J. L. Lehners, “Small-Field and Scale-Free: Inflation and Ekpyrosis at their Extremes” \[arXiv:1807.05240\]. L. Heisenberg, M. Bartelmann, R. Brandenberger and A. Refregier, Phys. Rev. D [**[98]{}**]{} 123502 (2018) \[arXiv:1808.02877\]; Y. Akrami, R. Kallosh, A. Linde and V. Vardanyan, “The landscape, the swampland and the era of precision cosmology” \[arxiv:1808.09440\].

[^1]: [**[Acronyms]{}**]{} commonly used in this paper include: Cold dark matter (CDM). Cosmic microwave background (CMB). Einstein field equations (EFE). Friedmann-Lemaître-Robertson-Walker (FLRW). Gravitational waves (GW). General Relativity (GR). Large scale structure (LSS). Linear perturbation theory (LPT). Loop quantum cosmology (LQC). Loop quantum gravity (LQG). Primordial gravitational waves (PGW). Primordial non-Gaussianities (PNG). Quantum gravity (QG).

[^2]: Except in the degenerate cases of spacetimes of constant curvature (de Sitter, anti-de Sitter and Minkowski spacetimes). Such universe models do not correspond to the real Universe, which has preferred world lines everywhere [@Ellis1971].

[^3]: The commonly accepted solution to the mass hierarchy problem at the Planck scale necessitates an anti-de Sitter space-time and a negative $\Lambda$. However, if the sign of $\Lambda$ is allowed to have anthropic freedom, the concept of using Bayesian constraints to yield a non-zero value for $\Lambda$ from below must be discarded [@SilkLimits].
[^4]: A full proof of the linear stability of Schwarzschild spacetime has recently been established [@DHR]. The non-linear stability of the Schwarzschild spacetime is still elusive [@Heusler] (however, see [@KlainermanSzeftel]). Proving the non-linear stability of Kerr has become one of the primary areas of mathematical work in GR [@Christodoulou93; @Shlapentokh]. All numerical results, and current observational data, provide evidence that the Kerr (and Kerr-Newman) black holes are non-linearly stable [@Zilho]. [^5]: Often erroneously called the ‘Horizon’. It has nothing to do with causality, i.e. with effects related to the speed of light. [^6]: Note that preliminary calculations in quantum field theory suggest that vacuum fluctuations could induce an enormous cosmological constant [@Carlip19]. [^7]: An alternative to PBHs includes persistent (or “pre-big-bang”) black holes occurring in bouncing cosmologies [@CarrColey]. [^8]: For late-Universe observables there is significant degeneracy between $\Omega_{k}$ and dark energy parameters; the standard approach is to treat $\Omega_{k}$ and these parameters as independent quantities, and to marginalize over the dark energy parameters [@XuHuang].
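As an illustration of the marginalization described in footnote 8, a toy numerical sketch might look like the following in Python; the two-parameter Gaussian likelihood is invented for illustration and stands in for a real $\Omega_{k}$-dark-energy analysis:

```python
import numpy as np

# Toy version of the procedure in footnote 8: build a 2D likelihood over
# spatial curvature (Omega_k) and a dark-energy parameter (w), then
# marginalize over w to get a 1D posterior for Omega_k.
omega_k = np.linspace(-0.2, 0.2, 201)
w = np.linspace(-1.5, -0.5, 201)
OK, W = np.meshgrid(omega_k, w, indexing="ij")

# A correlated toy likelihood mimicking the Omega_k / dark-energy degeneracy
chi2 = ((OK - 0.3 * (W + 1.0)) / 0.05) ** 2 + ((W + 1.0) / 0.3) ** 2
likelihood = np.exp(-0.5 * chi2)

posterior_k = likelihood.sum(axis=1)                           # marginalize over w
posterior_k /= posterior_k.sum() * (omega_k[1] - omega_k[0])   # normalize
print(f"marginalized best-fit Omega_k = {omega_k[np.argmax(posterior_k)]:+.3f}")
```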
The first column handles the large-volume input of bodily fluids, and is followed by concentration on a mini column for a final elution of 50 µl to 100 µl. All components for purification and concentration are provided in one convenient, fast kit for easy processing of large input volumes of bodily fluids. The purified plasma/serum RNA is fully compatible with all downstream applications including PCR, qPCR, methylation-sensitive reverse transcription qPCR, reverse transcription PCR, Northern blotting, RNase protection and primer extension, expression array assays, and NGS.

Background: Plasma/serum cell-free circulating RNA or exosomal RNA has the potential to provide biomarkers for certain cancers and disease states. Exosomes are 40–100 nm membrane vesicles secreted by most cell types. Exosomes can be found in saliva, blood, urine, amniotic fluid and malignant ascitic fluids, among other biological fluids. Evidence has been accumulating that these vesicles act as cellular messengers, conveying information to distant cells and tissues within the body. These exosomes may play a functional role in mediating adaptive immune responses to infectious agents and tumours, tissue repair, neural communication and the transfer of pathogenic proteins. For this reason, exosomal RNAs may serve as biomarkers for various diseases, including cancer. Because the RNA molecules encapsulated within exosomes are protected from degradation by RNases, they can be efficiently recovered from biological fluids such as plasma or serum.
https://www.biocat.com/products/56100-NB
Banksia marginata, commonly known as the Silver Banksia, is a species of tree or woody shrub in the plant genus Banksia found throughout much of southeastern Australia. Highly variable in form, it can be encountered as a small shrub 20 cm (8 in) high or as a large tree 12 m (40 ft) tall. The narrow leaves are linear, and the yellow inflorescences (flower spikes) occur from late summer to early winter. The flower spikes fade to brown and then grey and develop woody follicles bearing the winged seeds. Many species of bird, in particular honeyeaters, visit the flower spikes, as do native and European honeybees. The response to bushfire varies; some forms are serotinous: they are killed by fire and regenerate from large stores of seed that have been held in cones in the plant canopy and are released after the fire, while other forms regenerate from underground lignotubers or sucker from lateral roots. Banksia marginata is commonly seen in cultivation, with dwarf forms being registered and sold. Banksia marginata is a highly variable species, usually ranging from a small shrub around a metre (3 ft) tall to a tree 12 m (40 ft) high. Unusually large trees of 15 to possibly 30 m (50–100 ft) have been reported near Beeac in Victoria's Western District as well as at several locations in Tasmania. Conversely, it has been recorded as a compact shrub 20 cm (8 in) high on coastal heathland in Tasmania (such as at Rocky Cape National Park). Shrubs reach only 2 m (7 ft) high in Gibraltar Range National Park. The bark is pale grey and initially smooth before becoming finely tessellated with age. The new branchlets are hairy at first but lose their hairs as they mature, the new growth pale or pinkish brown. The leaves are alternately arranged on the stems on 2–5 mm long petioles; juvenile or younger leaves (3–7 cm long) are characteristically toothed. The narrow adult leaves are dull green in colour and generally linear, oblong or wedge-shaped (cuneate), and measure 1.5–6 cm long and 0.3–1.3 cm wide. The margins become entire with age, and the tip is most commonly truncate or emarginate, but can be acute or mucronate. The cellular makeup of the leaves shows evidence of lignification, and the leaves themselves are somewhat stiff. Leaves also have sunken stomates. The leaf undersurface is white with a prominent midrib covered in brownish hairs. The complex flower spikes, known as inflorescences, appear generally from late summer to early winter (February to June) in New South Wales and Victoria, although flowering occurs in late autumn and winter in the Gibraltar Range. Cylindrical in shape, they are composed of a central woody spike or axis from which a large number of compact floral units arise perpendicularly, and they measure 5–10 cm tall and 4–6 cm wide. Pale yellow in colour, they are composed of up to 1000 individual flowers (784 recorded in the Gibraltar Range) and arise from nodes of branchlets three or more years old. Sometimes two may grow from successive nodes in the same flowering season. They can have a grey or golden tinge in late bud. As with most banksias, anthesis is acropetal; the opening of the individual buds proceeds up the flower spike from the base to the top. Over time the flower spikes fade to brown and then grey, and the old flowers generally persist on the cone. The woody follicles grow in the six months after flowering, with up to 150 developing on a single flower spike. In many forms, only a few follicles develop.
Small and elliptic, they measure 0.7–1.7 cm long, 0.2–0.5 cm high, and 0.2–0.4 cm wide. In coastal and floodplain forms, these open spontaneously and release seed, while in plants from heathland and montane habitats they remain sealed until burnt by fire. However, there are exceptions in each case. Each follicle contains one or two fertile seeds, between which lies a woody dark brown separator of similar shape to the seeds. Measuring 0.9–1.5 cm in length, the seed is egg- to wedge-shaped (obovate-cuneate), and composed of a dark brown 0.8–1.1 cm wide membranous 'wing' and a wedge- or sickle-shaped (cuneate-falcate) seed proper which measures 0.5–0.8 cm long by 0.3–0.4 cm wide. The seed surface can be smooth or covered in tiny ridges, and often glistens. Gallery: a small tree at Arthurs Seat State Park, Victoria; the trunk of a large tree at Beeac in Victoria's Western District; a large shrub at Revesby in suburban Sydney; a small shrub among rocks at Wilsons Promontory.
http://www.gardenology.org/wiki/Banksia_marginata
The first step to achieving both inner and world peace is understanding. Instead of pushing away or shunning what is different to you, what stands out, listening to and understanding the 'other' might teach you something new or even change your perception of something. And so, understanding what weight some words carry, where their usage originated, and how they got to where they are now is one way of picking the right words when it's our turn to speak. This week, we talk about hysteria: the origin of the word, the mental disorder, the misconceptions around it and what it means today. What is Hysteria? “Hysteria”, the “Ladies' Disease”, was the first mental disorder ever attributed to women (and only women), with the most interesting variety of symptoms I've heard of yet, including but not limited to: anxiety, hallucinations, emotional outbursts, hot flashes, insomnia, shortness of breath and, the most popular, sexually forward behavior. The term hysteria itself stems from the Greek hystera, which means uterus. The word itself is not ancient; the concept, however, is traced to the Greek founder of western medicine, Hippocrates, who in the 5th century BC attributed the condition's cause to abnormal movements of the womb in a woman's body. Ancient Greek medicine instead used the term “hysterical suffocation”, referring to the feeling of overheating and an inability to breathe. The Greeks believed that the uterus moved through a woman's body, eventually strangling her and inducing disease; by linking the symptoms to the uterus, they suggested that the disorder could only be found in women. The History of Hysteria By the late 19th century, however, it had started to be considered a mental disorder, with French neurologist Jean-Martin Charcot and famed psychoanalyst Sigmund Freud experimenting with hypnosis as treatment. “Today, psychology recognizes different types of disorders that were historically known as hysteria including dissociative disorders and somatoform disorders,” psychology educator Kendra Cherry writes. “Dissociative disorders are psychological disorders that involve a dissociation or interruption in aspects of consciousness, including identity and memory. These types of disorders include dissociative fugue, dissociative identity disorder, and dissociative amnesia. “Somatoform disorder is a class of psychological disorder that involves physical symptoms that do not have a physical cause. These symptoms usually mimic real diseases or injuries. Such disorders include conversion disorder, body dysmorphic disorder, and somatization disorder.” Hysteria may not be a valid psychiatric diagnosis today, but it is a good example of how concepts can emerge, change, and be replaced as we gain a greater understanding of how human beings think and behave. Modern medical professionals have officially abandoned the term as a diagnostic category, replacing it with more precise categories. Fainting, outbursts, nervousness and irritability weren't the only hallmarks of female hysteria; certain core aspects of female sexuality, desire and sexual frustration were also on the list. As Mother Jones reports, “excessive vaginal lubrication” and “erotic fantasy” were also both considered symptoms of the disease, which were “cured” using the most… interesting… methods. Cures Massaging the woman's pelvis (i.e., her genitals) to reach the “hysterical paroxysm” (i.e., orgasm) was embraced by many health experts as the cure for female hysteria.
Though the practice dates back to the Renaissance, it reached its peak and became a popular money-maker for the medical establishment during the Victorian era. “By the early 19th century, physician-assisted paroxysm was firmly entrenched in Europe and the U.S. and proved a financial godsend for many doctors,” Psychology Today explains. Treatment eventually moved on from being physician-assisted to the invention of horse-shaped vibrating machines for women to ride at home, and from electrical massagers and portable vibrators to high-powered douching hydrotherapy aimed straight at the female's private area. And as mentioned earlier, Mother Jones has published a timeline of the “female hysteria” phenomenon and the sex toys used to treat it. Hysteria in Today's World Into the 20th century, the label was applied to a mental rather than physical affliction, but today hysteria is no longer thought of as a real ailment. In modern usage, the term connotes panic and the inability to control emotions, often associated with events like the Salem Witch Trials, but that's for another time. According to the Huffington Post's Catherine Pearson, “It's easy to laugh off female hysteria as preposterous and antiquated pseudo-science, but the fact is, the American Psychiatric Association didn't drop the term until the early 1950s. And though it had taken on a very different meaning from its early roots, “hysterical neurosis” didn't disappear from the DSM — often referred to as the bible of modern psychiatry — until 1980. Sadly, we're still feeling the impact of this highly entrenched medical diagnosis today. The “crazy” and “hysterical” labels are hard ones for women to completely shake.” So next time you come across a girlfriend being disproportionately emotional, or just feeling hypersensitive, calling her ‘hysterical’ won't really help anything or anyone. Some words have weight to them, a history and a connotation we should at least be aware of before we decide to use them. Have something to say? Join the conversation in our Facebook Group: “The Empower Community”. Feel free to subscribe to our weekly newsletter to get the latest releases on all our articles and media. We include our celebrity guests' take on mental health and wellbeing.
https://empower-mag.com/what-is-hysteria-the-understandtoempower-series/
Bioinspired water collection methods to supplement water supply. Fresh water sustains human life and is vital for human health. Water scarcity affects more than 40% of the global population and is projected to rise. In some of the poorest countries, 1 in 10 people do not have access to safe and easily accessible water sources. Human water consumption continues to grow with increasing population. Furthermore, population growth and unsafe industrial practices, as well as climate change, have put a strain on the 'clean' water supply in many parts of the world, including the Americas. The current supply of fresh water needs to be supplemented to meet future needs. Living nature provides many lessons for water sourcing. It has evolved species that can survive in the most arid regions of the world by collecting water from fog and from condensation at night. These species have mechanisms to transport the collected water for storage or consumption before it evaporates, and they possess unique chemistry and structures on or within the body for the collection and transport of water. In this paper, we provide an overview of arid desert conditions, water collection from fog, and lessons from living nature for water collection. Data on various bioinspired surfaces for water collection are also presented, along with some bioinspired water purification approaches. Next, applications ranging from consumer to military and emergency use are discussed, and water collection projections are presented. This article is part of the theme issue 'Bioinspired materials and surfaces for green science and technology (part 2)'.
‘Geographical divides, education system reinforce gender stereotypes’ ISLAMABAD: A seminar on ‘Pakistan's Struggle to Redefine the Status of Women’ was held on Wednesday to discuss why laws are not making a difference, what is needed to challenge the patriarchal mindset, and the role of the media. The event was hosted by White Ribbon Pakistan, the National Commission on the Status of Women (NCSW) and Global Village Space. The panel included Federal Secretary for Human Rights Rabia Javeri, White Ribbon CEO Omer Aftab, International Islamic University Dean of the Sharia Department Ziaul Haq, NCSW member Sohail Warraich, senior journalist Ziauddin and MNA Maiza Hameed. The discussion was moderated by Dunya News anchor Moeen Pirzada. “Our geographical divides - rural, urban, tribal, feudal, provincial, ethnic and linguistic - together with our education system, reinforce stereotypes, particularly gender stereotypes,” said White Ribbon's Omer Aftab. He said the same stereotypes are also reinforced by religious leaders as well as in dramas, advertisements and the news media. “We need to see the situation holistically and explore how religion and the media can be used positively,” he added. Ms Javeri said Pakistan is possibly one of the most heavily legislated countries in the world and also has a lot of legislation for gender equality. “Despite laws, conventions and institutions, Pakistan ranks very low on the gender equality index. We need to determine what we are doing wrong,” she said. “The women of Pakistan can only seek redress through the laws that exist if they know about them, and they can only empower themselves if they have the skills and resources that allow them to move ahead,” she added. NCSW's Sohail Warraich said, “As an advocate for laws I have serious reservations when people say the laws exist and the issue is the implementation. Laws raise issues and express the State's commitment to deal with them, but there are contradictions here. Our Penal Code has a chapter called Anti-Women Practices, but the way we perceive certain issues in legal texts is flawed, even where the language of the law is clear. These offences have been made non-cognizable, the burden of proof is placed on the complainant woman, and the State becomes absent. Enabling mechanisms and an enabling environment are both missing.” Dr Ziaul Haq said: “It is mistakenly understood in many areas of the world, including in Muslim countries, that Islam is the reason for discrimination against women. This misunderstanding is further strengthened by people working in the name of religion.” Mr Pirzada observed that 1,100 years before the French Revolution, Islam was the ideology that mandated that women could own property and conduct transactions in their own right; for 1,100 years it was not possible elsewhere for a woman to conduct a transaction, hold property or conduct business in her own name. But where the rest of the world has changed, we have not. He said: “Under Pakistani law, if you do not have sons, other family members have a claim to your property after your death. Look at the contradiction in this situation: whereas 1,100 years before the French Revolution Islam appeared as the foremost philosophy, for the past 500 years Islam and Muslim societies have not produced any forward-looking ideas.” Mr Ziauddin said: “The media's role is critical, especially in our society. But the problem is that until about 2001 we had only print media.
The problem with this country has always been massive illiteracy. Perhaps that was one of the reasons why the media could not reach the target audience as far as this issue is concerned.”
https://hapka.info/archive/20170413/63828/geographical-divides-education-system-reinforce-gender-stereotypes/
Listed here are the topics in mathematics that students study the most. With these topics mastered, students know they will have a solid foundation in math. Arithmetic – Number operations and measurement are among the first topics in mathematics. Students learn the proper handling of operations on two numbers and how to carry the result of combining them; one of the central concepts of this topic is division. Algebra – Understanding how to express the relationship between two quantities is the heart of algebra. The working of addition, subtraction, multiplication and division is an essential part of this area. It is important for students to understand the problems of algebra so that they can solve them when required. There are also different branches of algebra, such as linear algebra. Geometry – Geometry covers shapes such as circles, lines and polygons, as well as straight lines, curved lines, cylinders and spheres. Students should be able to recognize basic shapes such as rectangles, squares, ovals and circles. Functions – Functions are another key topic. They relate different operations, and students need to be able to use them to see how one quantity can be transformed into another. Logarithms – Logarithms are among the most important topics in mathematics. Students should be able to work with numerals in base 10 and understand the fundamentals of logarithms, such as base and exponent. They should also be able to find the root of a number. Binary – Binary is another widely used topic. It involves adding and subtracting numbers written in base 2; students should be able to multiply and divide binary numbers as well. Linear equations – Students should know how to solve linear equations and be able to construct equations in order to calculate quantities. Examples of linear equations include x + y = z. Trigonometry – A topic usually taught at this level is trigonometry, which covers angles, lengths and areas. Students should be able to find the sine and cosine of a number. Parabola and hyperbola – Conic sections such as the parabola and hyperbola appear frequently and are used in geometry; their roots and properties should be studied thoroughly. Quadratic equations – Quadratic equations are very common, and students need to be able to solve them; several different methods can be used to find solutions. There are many other topics besides these, and it is up to the teacher to pick the topics that ought to be taught. Before teachers start teaching these topics, they should understand which areas of mathematics their students need to be familiar with.
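A few of these topics can be made concrete with a short, self-contained Python sketch; the specific numbers below are our own illustrative choices, not examples from any particular curriculum:

```python
import math

# Logarithms: base-10 logarithm and a root found via exponents
x = 1000
print(math.log10(x))   # 3.0, since 10**3 == 1000
print(math.sqrt(x))    # 31.62..., i.e. x**0.5

# Binary: adding and subtracting numbers written in base 2
a = 0b1011             # 11 in decimal
b = 0b0110             # 6 in decimal
print(bin(a + b))      # 0b10001 (17)
print(bin(a - b))      # 0b101   (5)

# A linear equation x + y = z: solve for y given x and z
x, z = 4, 14
y = z - x
print(y)               # 10

# A quadratic equation a*x**2 + b*x + c = 0 via the quadratic formula
a, b, c = 1, -5, 6
d = math.sqrt(b * b - 4 * a * c)
print((-b + d) / (2 * a), (-b - d) / (2 * a))  # 3.0 2.0
```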
https://www.parraphysio.com.au/many-widely-used-matters-in-mathematics/
We would like to thank our partners, who help us grow together as important institutions. Colegio Arenas and the IBO. The IB mission statement: The IB aims to develop caring, informed young people who are eager for knowledge and capable of contributing to a better and more peaceful world, within a framework of mutual understanding and intercultural respect. In pursuit of this objective, the IB Organization works with schools, governments and international organizations to create and develop demanding programmes of international education with rigorous methods of evaluation.
http://www.colegioarenassur.com/en/content/thank-our-partners
Both hyacinths and tulips belong to the Liliaceae family, and contain allergenic lactones or similar alkaloids. The toxic principle of these plants is very concentrated in the bulbs (versus the leaf or flower), and when ingested in large amounts can result in severe clinical signs. Are tulip leaves edible? The petals and the bulb of a tulip are both edible. It is not advisable to eat the stem and leaves of the tulip. Care should be taken when harvesting tulips for food, as they should not have been treated with chemicals or pesticides. Are tulip leaves poisonous to humans? Tulips contain alkaloid and glycoside compounds that are toxic and are concentrated in the bulb. Eating tulip bulbs can cause dizziness, nausea, abdominal pain and, rarely, convulsions and death. Are tulip leaves poisonous to cats? Tulips are beautiful, popular flowers that many of us have in our gardens. But it's important to note that the Tulipa genus of flowers is toxic to cats, dogs, and horses and can be fatal if ingested. Are tulip stems poisonous to kids? What's poisonous: The leaves, stems, roots and berries are all toxic, with the bulbs containing the greatest amount of toxic chemicals. Symptoms: Poisoning from eating tulips may cause skin and mouth irritation, as well as abdominal upset and dizziness. Are tulip petals poisonous to dogs? Tulips, hyacinths and irises are all considered toxic to both dogs and cats, and can cause vomiting, diarrhea and drooling if ingested. Can you make tea from tulips? Chamomile flowers are widely used medicinally to reduce anxiety and improve sleep. They have an earthy, slightly sweet flavor and may be used to make tea or other infusions. Are tulips safe to touch? Tulips should not be considered food. "Tulip fingers" is an irritating rash that can occur in people who handle tulips for work or pleasure. Do tulips have medicinal properties? Tulips are said to have diuretic and antiseptic properties, to be a remedy for cough and cold, to reduce the risk of cancer, and to be used for sinus pain, hay fever and headache. The benefits of tulip are also said to include cosmetic uses. What do tulips signify? The best-known meaning of tulips is perfect and deep love. As tulips are a classic flower that has been loved for centuries, they have become associated with love. They're ideal to give to someone you have a deep, unconditional love for, whether it's your partner, children, parents or siblings. Do cats try to eat tulips? Unfortunately, tulips are toxic to cats. The bulbs are the most toxic part, but any part of the plant can be harmful to your cat, so all tulips should be kept well away. Can tulips hurt cats? Tulips are very common and popular plants for outdoor gardens. The most toxic part of a tulip is the bulb, and poisoning could easily result if a cat (or dog) digs up a bulb. Eating the leaves and flowers can cause irritation to a cat's mouth and esophagus. What happens if you eat tulips? A fresh tulip bulb has a sweet, milky flavour that is actually not very bad. The tulip bulbs that were eaten during the war had a very bitter and dry taste instead. Eating tulip bulbs is not as bad as it sounds, as long as you eat fresh tulips that were not sprayed. What does tulip blight look like? Brown spots of dead tissue on leaves. In severe cases the spots enlarge and extensive areas become brown and withered, giving the impression of fire scorch.
A fuzzy grey mould may grow over the dead areas in damp conditions. Spots appear on flowers and, in wet weather, the petals rot rapidly. Which bulbs are poisonous? Toxic bulbs to be aware of include daffodil or Narcissus (the bulbs, flowers and leaves contain alkaloids which can cause an upset stomach or vomiting if eaten), tulips and hyacinths, and bluebells. How many times does a tulip bloom? Tulip bulbs are classified as early and mid-season tulips. Bloom times will depend on your location and the weather, but as a rule, early tulips will bloom from March to April and mid-season types will extend the blooming period later into spring. If the weather is cool, tulips may last 1–2 weeks. Can dogs smell tulips? Quite simply, yes, dogs can smell flowers! With their powerful noses, this doesn't come as a big surprise, and they can even help differentiate different types of them. Some dogs will stop by and sniff a flower in bloom, while others might just walk by and not react at all. How long does tulip poisoning last in dogs? Although symptoms of the poisoning usually last only a few hours, dogs that are recovering from anesthesia, as would be required for gastric lavage, may have coordination difficulties when they first get home. What kind of leaves are edible? Some of the most common edible leaves, also known as leafy greens, include spinach, kale, lettuce, chard, arugula, and microgreens. What flowers do we eat for food? Flowers you can eat include alliums (chives, leeks and garlic are all delicious in green salads, potato and pasta salads and dips), nasturtiums (the blossoms have a peppery flavor like watercress), marigolds, pansies and Johnny jump-ups, calendula, anise hyssop, honeysuckle and scarlet runner beans. Are hibiscus leaves edible? Both the foliage and flowers of 'Panama Red' hibiscus are edible. Young leaves have a tart, lemony flavor, are rich in vitamin C and thiamin, and may be eaten raw or cooked. A common use for cooked hibiscus leaves is in stir-fries – just be sure to add the leaves toward the end of preparation. Do tulips poison other flowers? Tulips and roses, in particular, are sensitive to the alkaloids released by daffodils. These flowers really are poisoned by Narcissus species. However, most cut flowers die an early death due to the mucilage released by cut daffodil stems; the problem has nothing to do with toxicity. What animal eats tulips? A. EVERYTHING eats tulips, Sharon – squirrels, chipmunks, rabbits, deer, groundhogs… the list of malicious masticators is virtually endless! But you don't need a positive ID – just a good strong deer repellent!
https://whatalls.com/are-tulip-leaves-poisonous/
The purpose of this chapter is to discuss and present a computational framework for detecting and analysing facial expressions efficiently. The approach is to identify the face and estimate the regions of facial features of interest using the optical flow algorithm. Once the regions and their dynamics are computed, a rule-based system can be utilised for classification. Using this framework, we show how it is possible to accurately identify and classify facial expressions to match FACS coding and to infer the underlying basic emotions in real time.
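The chapter's actual pipeline is not reproduced here, but a minimal sketch of the general idea (face detection, optical flow over facial regions, then a hand-written rule) might look like the following, assuming OpenCV's stock Haar cascade and Farneback dense optical flow; the region split and thresholds are invented placeholders rather than the chapter's FACS rules:

```python
import cv2

# Hypothetical sketch: estimate motion inside a detected face region and
# fire a toy rule. Region halves and thresholds are illustrative only.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):                       # process a few hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Dense optical flow (Farneback) restricted to the face region
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray[y:y + h, x:x + w], gray[y:y + h, x:x + w],
            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean vertical motion in the upper (brow) and lower (mouth) halves;
        # in image coordinates, negative y-flow means upward movement.
        brow = flow[: h // 2, :, 1].mean()
        mouth = flow[h // 2 :, :, 1].mean()
        # Toy rule: brows moving up while the mouth moves down
        if brow < -0.5 and mouth > 0.5:
            print("rule fired: possible surprise expression")
    prev_gray = gray

cap.release()
```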
https://rd.springer.com/chapter/10.1007%2F978-3-030-15381-6_2
THE emergence of China as the world's new economic and technological power is posing a threat to the hegemony of the United States, leading to political shifts in the international arena. Earlier this year, Pakistan's two neighbouring countries, China and Iran, signed a 25-year landmark deal, called the Strategic Cooperation Pact, to enhance interaction in a range of fields amid continued US sanctions on the latter. The New York Times had published the leaked 18-page agreement last year, which revealed that China will be investing hundreds of billions of dollars for development in various sectors of Iran, including energy, ports, railways, cyber security and transportation. A joint venture for weapon design and development is also part of the deal. A joint bank will also be established that will help the Iranian economy. In return, China stands to receive heavily discounted Iranian oil and gas for the duration of the pact. This deal will have far-reaching implications, particularly for India-Iran and US-Iran relations, while Pakistan will have to take a serious look at its own diplomatic engagements. India had agreed to invest heavily in Iran's port and railway infrastructure. Its main aim in building the Chabahar Port in Iran was to establish an alternative route to Afghanistan, bypassing Pakistan. It also agreed to build the Chabahar-Zahedan railway, which cannot materialise until the US lifts its sanctions on Iran. The new US administration, led by Joe Biden, appears little interested in relaxing the sanctions, which is a setback for India and has helped China gain a firm foothold in Iran. Pakistan, as the third party, will gain the most from this strategic partnership between Iran and China, as it reduces India's involvement in Iran. Moreover, the deal would improve relations between Tehran and Islamabad and will be mutually beneficial. Gwadar and Chabahar can be declared sister ports to increase the trade influx. Pakistan and Iran earlier in the year signed a memorandum of understanding (MoU) to establish ‘border markets’ that will not only enhance bilateral trade, but will also provide economic opportunities and sustenance to the local people on both sides of the border. The two countries are also trying to find different ways, including a free trade agreement, to boost bilateral trade. In addition, CPEC can witness a westward expansion, with the potential to facilitate trade among Pakistan, Turkey, Iran, Azerbaijan and China. In short, it is a win-win situation for Pakistan. US President Joe Biden, during his election campaign, had talked of returning to the Joint Comprehensive Plan of Action (JCPOA), an agreement signed between Iran and several world powers in 2015. Iran is now demanding the lifting of sanctions before agreeing to any new deal with the West. The Beijing-Tehran partnership is a shock to the US, while Iran is looking forward to better days ahead through smart diplomatic moves. Iran's strategic partnership with China is a blessing for several countries, while some others are not quite happy.
https://www.dawn.com/news/1630050/pakistan-and-the-china-iran-deal
Results for "special" Result 21 - 30 in 78 Cập Nhật 08-09-2018 Kien Trung Palace is the most typical and important work marking the unique and special period and complementing Hue royal architecture. Tag: Kien Trung Palace, Restoration, Project Cập Nhật 08-05-2018 An Dinh Palace is located on the South bank of Hue, with a favorable position - natural attraction - harmony and bearing the special marks of neo-classical architecture. Cập Nhật 22-04-2018 This was one of the key tasks directed by Vice Permanent Chairman of the Provincial People's Committee Phan Ngoc Tho in the documents to districts, towns, Hue city and specialized agencies under the Provincial People's Committee, so as to continue to improve the effectiveness and efficiency of the district-level Public Administrative Center on April 20. Tag: People, enterprise, centered, approach, service, Public Administrative Center Cập Nhật 06-04-2018 Introducing to the public new creations with many special features, the 3rd young art exhibition - 2018 has gradually affirmed its place as a professional space dedicated to young artists. Tag: A playground, young, artists Cập Nhật 20-03-2018 On the afternoon of March 19, Chairman of the Provincial People’s Committee Nguyen Van Cao had a meeting with Mr. Anders Krystad - the Special Advisor South East Asia, Norwegian Football Federation Tag: Nguyen Van Cao, Norwegian Football Federation, Mr. Anders Krystad Cập Nhật 11-03-2018 After the official spring visit to Hue, the departures of the Emperor and Empress of Japan was accompanied by a special gift made by “Truchigraphy” - with the symbolic image of Ngo Mon Gate crafted with sophistication and subtlety. Tag: Emperor Akihito, Empress Michiko, Hue, truc chi Cập Nhật 28-02-2018 Only a few days after the 2018 Lunar New Year, from February 28th to March 2nd, Hue University will receive a group of external experts specializing in training accreditation from the SHARE organization of the European Community. This is an opportunity for Hue University to assert its capabilities regionally and globally. Tag: Hue University, SHARE organization, Hoang Tinh Bao, Department of Testing and Educational Quality Assurance Cập Nhật 05-02-2018 Paying special care for trade unionists and workers at grassroots levels is the direction of Mr. Bui Thanh Ha, Deputy Standing Secretary of the Provincial Party Committee at the "Tết sum vầy" (a gathering lunar new year) program organized by the provincial Federation of Labor in Phong Dien Industrial Park on February 3 and 4. Tag: "Tết sum vầy”, program, workers, Phong Dien Industrial Park Cập Nhật 31-01-2018 This is the theme of an exhibition organized by the Museum of History in collaboration with the Museum of Special Operations, which opened on January 30 at the Museum of History. Tag: "Hue, Spring 1968 – the Spring of Vietnam, the Spring of Gallantry", exhibition, Mau Than General Offensive and Uprising in Spring 1968 Cập Nhật 05-01-2018 That is what was affirmed by Standing Deputy Secretary of the Provincial Party Committee Bui Thanh Ha at the reception of Salavan provincial delegation led by Deputy Secretary and Deputy Governor of Salavan province - Buathong Khounyotpanya on the morning of January 4.
http://news.baothuathienhue.vn/search/special.html?cate=&fd=&td=&l=vi&p=3
Cowen Park (Kid Friendly Park), 5849 15th Ave NE, Seattle, WA 98115, (206) 684-4075, www.seattle.gov/parks/park_detail.asp?id... A generally quiet enclave north of the University of Washington, Cowen Park is tacked onto the west end of Ravenna Park, adding grassy play and picnic space. Small, leafy Cowen Park features a decent play area, softball field, picnic tables, and restrooms. The 8.4-acre park probably isn't worth a trip on its own. Hours: 4am–11:30pm. Last verified: 7/2/2009. Comments about Cowen Park (via judysbook.com): "I'm often shocked by how often people who live in other parts of the city give me a blank stare when I mention Lincoln Park. In particular, Northen…"; "It's a great park to run through and it is located on the cliff of the beach. Take a run through its amazing trails. Soak in the sun from the c…"; "Seattle has many good parks. Lincoln Park is an often under appreciated park in West Seattle. The park has numerous trails, for casual walking and hik…". Kid friendly parks near Cowen Park: Froula Playground, a three-acre park (0.62 miles); Green Lake Park, with a playground just south of the main entrance (0.89 miles); Green Lake Park Wading Pool, a sprawling kiddie pool with a great view (0.91 miles); View Ridge Park (1.01 miles); Meridian Playground, a quiet neighborhood park (1.03 miles); Waldo Dahl Playfield, a 15-acre park (1.07 miles); Wallingford Playfield, refurbished and reopened to the public (1.45 miles); West Woodland Park, an innovatively designed playspace (1.90 miles); Gas Works Park, with a play area and a large play barn (2.13 miles); Greenwood Park, tucked into the Greenwood neighborhood (2.25 miles); Northgate Park, with a children's play area (2.36 miles).
http://kidsplayparks.com/spot_kid_friendly_parks_Cowen_Park_Seattle_WA_52736.aspx
Six months later, Bella, at the age of nine, was coloring and cutting paper next to me when she suddenly slapped her hand down on the table and slid towards me a set of paper wings she had cut and colored. At her urging, I taped the wings onto the backs of some of the other dragons we had bought after Merlin and Phoenix. The dragons ran around the room with the wings on, and I was amazed by how dragon-like they looked. The bearded dragons didn't even seem bothered by wearing them, and that was when I knew she had created something that could bring a whole new world to the bearded dragon community. After two years of development and many prototypes, we came up with a product that both fills a need and brings functionality and imagination to the public and their bearded dragons. This has opened up a new world and brought a great deal of possibility to this journey. We are excited to share our current and future products in the hope of creating an amazing, fun world where people experience owning their own pet dragons. As a father, I want to give my daughters the ability to live their dreams, and I want others, young and old, to be inspired by our journey and use it to fulfill their own dreams. Thank you for taking the time to read,
https://mypetdragon.com/blogs/news/our-story-2
Tablaphilia is a unique symphony of tabla, based on the central theme of Chaturashram, the four stages of human life. Composed and directed by the renowned tabla maestro Samir Chatterjee, this symphony of tabla brings together the power of 22 thundering tablas and four melodious vocals to produce a moving, inspiring and thought-provoking experience of the philosophy of sound. This 70-minute piece interprets the four stages of human life (ashrams) as perceived and maintained in ancient Indian society. The first stage, Bramhacharya, identified as the student life, is meant to be dedicated to the pursuit of Brahma, the creator, our source and destination. In this stage, we are meant to be engaged solely in the pursuit of real knowledge and wisdom. In the next stage, Garhastha, one chooses a vocation and begins a family life to apply the knowledge and observations acquired in Brahmacharya. Retirement, or Banaprastha, is a transition out of Garhastha after fulfilling life's obligations, with one step still in the city and the other extending into the forest. The final stage of Sanyasa brings complete renunciation of the material world together with a feeling of pure ecstasy. In Tablaphilia, we get a reminder of the differences we have created in our approach to all of these phases of our lives, losing their true significance. For example, in modern times, Brahmacharya is often devoted to a selected field of study geared toward a particular type of vocation, as opposed to being dedicated to the pursuit of truth and comprehensive knowledge about the governing phenomena of life. Garhastha is the stage we like the most and prefer to drag out to consume the whole of our lives. Retirement is most often painful, contrary to how it used to be perceived in the past, as release and preparation for complete freedom. Modern society has designated Sanyasa for only a particular group of people. Tablaphilia reestablishes these stages of our lives, which we live anyway, with greater value and significance. It relates to everyone living this life, irrespective of their varied backgrounds. The experiences of these different phases of life are expressed through the abstract drum-language of tabla. Tablaphilia uses many of the musical principles common in Western symphonic music, such as harmony, counterpoint and canon, and embraces them within a musical experience that is purely Indian. It uses different traditional taals (rhythmic cycles), ragas and compositions produced out of the sound of tabla, enhanced by vocal inserts, and creates a soundscape that naturally relates the theme to the audience, making it accessible even to the untrained listener. Listeners come away with a wholesome experience of life in those 70 minutes. Tablaphilia becomes more of a life-transforming experience than just a musical performance. This enthralling production with extraordinary tonal range made its debut performance at Chhandayan's All Night Concert in May 2009 in New York City, and grew to further perfection, particularly after its successful seven-city tour of Karnataka a year and a half later. In the words of some well-known musicians of this style of music, “Tablaphilia has certainly expanded the range of the tabla.” Another said, “it really excited me in a way I have not experienced in a long time.” In October 2011, Tablaphilia was performed in The Great Hall of the Metropolitan Museum of Art in New York City for veteran listeners of Indian classical music and newcomers alike.
Museum administrators raved that “Tablaphilia transformed the museum into a temple,” and called it “a life-changing concert!” From its exuberant beginnings to its meditative end, Tablaphilia transcends nationality and culture and speaks to all those who experience it. – Renu Nahata This event is part of the 2019 Annual All-Night Concert.
https://tabla.org/tablaphilia-1
Striking method to decode the words people have in their mind developed at UC Berkeley. Scientists at the University of California, Berkeley have demonstrated a striking method to reconstruct words based on the brain waves of patients thinking of those words. The method, reported in PLoS Biology, relies on gathering electrical signals directly from patients' brains. Based on signals from listening patients, a computer model was used to reconstruct the sounds of words that patients were thinking of. The technique may in future help comatose and locked-in patients communicate. Several approaches in recent years have suggested that scientists are closing in on methods to tap into our very thoughts. In a 2011 study, participants with electrodes in direct contact with the brain were able to move a cursor on a screen by simply thinking of vowel sounds. A technique called functional magnetic resonance imaging, which tracks blood flow in the brain, has shown promise for identifying which words or ideas someone may be thinking about. By studying patterns of blood flow related to particular images, Jack Gallant's group at the University of California, Berkeley showed in September that such patterns can be used to guess images being thought of, recreating “movies in the mind”. Now, Brian Pasley of the University of California, Berkeley and a team of colleagues have taken that “stimulus reconstruction” work one step further. “This is inspired by a lot of Jack's work,” Dr Pasley said. “One question was… how far can we get in the auditory system by taking a very similar modelling approach?” The team focused on an area of the brain called the superior temporal gyrus, or STG. This broad region is not just part of the hearing apparatus but one of the “higher-order” brain regions that help us make linguistic sense of the sounds we hear. The team monitored the STG brain waves of 15 patients who were undergoing surgery for epilepsy or tumours, while playing audio of a number of different speakers reciting words and sentences. The trick is disentangling the chaos of electrical signals that the audio brought about in the patients' STG regions. To do that, the team employed a computer model that helped map out which parts of the brain were firing at what rate when different frequencies of sound were played. With the help of that model, when patients were presented with words to think about, the team was able to guess which word the participants had chosen. The scientists were even able to reconstruct some of the words, turning the brain waves they saw back into sound on the basis of what the computer model suggested those waves meant. “There's a two-pronged nature of this work – one is the basic science of how the brain does things,” said Robert Knight of UC Berkeley, senior author of the study. “From a prosthetic view, people who have speech disorders… could possibly have a prosthetic device when they can't speak but they can imagine what they want to say,” Prof Knight explained. “The patients are giving us this data, so it'd be nice if we gave something back to them eventually.” The authors caution that the thought-translation idea must still be vastly improved before such prosthetics become a reality. But the benefits of such devices could be transformative, said Mindy McCumber, a speech therapist at Florida Hospital in Orlando. “As a therapist, I can see potential implications for the restoration of communication for a wide range of disorders,” she said.
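For intuition only, the stimulus-reconstruction idea (fit a model mapping brain activity to sound features on heard speech, then invert new activity back into a spectrogram) can be sketched on synthetic data; nothing below corresponds to the study's actual model or recordings:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy illustration of stimulus reconstruction: learn a linear map from
# multi-electrode activity to an audio spectrogram, then "decode" new
# recordings back into sound features. All data here are synthetic.
rng = np.random.default_rng(0)

n_samples, n_electrodes, n_freq_bins = 2000, 64, 32
true_map = rng.normal(size=(n_electrodes, n_freq_bins))

brain = rng.normal(size=(n_samples, n_electrodes))  # stand-in for STG signals
spectrogram = brain @ true_map + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# Fit on the first half (heard speech), reconstruct the second half
model = Ridge(alpha=1.0).fit(brain[:1000], spectrogram[:1000])
reconstructed = model.predict(brain[1000:])

corr = np.corrcoef(reconstructed.ravel(), spectrogram[1000:].ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```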
https://bellenews.com/2012/02/01/science-tech/striking-method-to-decode-the-words-people-have-in-their-mind-developed-at-uc-berkeley/
The depletion of disruptive variation caused by purifying natural selection (constraint) has been widely used to investigate protein-coding genes underlying human disorders, but attempts to assess constraint for non-protein-coding regions have proven more difficult. Here we aggregate, process, and release a dataset of 76,156 human genomes from the Genome Aggregation Database (gnomAD), the largest public open-access human genome reference dataset, and use this dataset to build a mutational constraint map for the whole genome. We present a refined mutational model that incorporates local sequence context and regional genomic features to detect depletions of variation across the genome. As expected, protein-coding sequences overall are under stronger constraint than non-coding regions. Within the non-coding genome, constrained regions are enriched for regulatory elements and variants implicated in complex human diseases and traits, facilitating the triangulation of biological annotation, disease association, and natural selection to non-coding DNA analysis. More constrained regulatory elements tend to regulate more constrained genes, while non-coding constraint captures additional functional information underrecognized by gene constraint metrics. We demonstrate that this genome-wide constraint map provides an effective approach for characterizing the non-coding genome and improving the identification and interpretation of functional human genetic variation. Competing Interest Statement: The authors have declared no competing interest.
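The observed-versus-expected logic behind a constraint map can be illustrated with a toy calculation; the expected counts below are invented stand-ins for a real context-dependent mutational model:

```python
import numpy as np

# Toy constraint score: compare observed variant counts in genomic windows
# to counts expected under a neutral mutational model. In a real analysis
# the expectations come from a calibrated, sequence-context-aware model;
# here they are randomly generated.
rng = np.random.default_rng(1)

expected = rng.uniform(20, 60, size=10)            # neutral expectation per window
observed = rng.poisson(expected * rng.uniform(0.4, 1.1, size=10))

# Deviation from expectation scaled by Poisson noise: a positive z means
# fewer variants than expected, i.e. stronger constraint.
z = (expected - observed) / np.sqrt(expected)
for e, o, score in zip(expected, observed, z):
    print(f"expected {e:5.1f}  observed {o:3d}  constraint z = {score:+.2f}")
```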
https://www.biorxiv.org/content/10.1101/2022.03.20.485034v1.abstract
Advocates for less-intense homework policies maintain that students should be able to balance school, activities and family life. High school students are better able to manage their time, stay focused and complete complex tasks, which enables them to tap the value of homework. While some researchers suggest reducing homework for high school students, most researchers agree that homework at this age level is important because it has been positively linked to academic achievement. Yet it's important to remember that the amount and type of homework matter, and teachers should strive to give less homework when possible so long as it promotes academic excellence. The first benefit of homework is that it allows students and teachers to work more closely together. They can discuss their assignments or any problems that they are having with parts of their textbooks, before or after classes. The second benefit is that it can bring families closer together, as students may ask their parents or siblings for help on their homework. Not only will this help the students get a better understanding of any parts they are stuck on, it will also allow parents to get more involved in their child's educational life. Thirdly, doing homework will prepare students for the big end-of-term tests. If a child does poorly on an assignment, they will learn what is necessary to do well on the next test without being punished. The goal of homework, especially in the high school years, is for students to spend more time studying a subject and engaging in the curriculum, assuming the homework is designed to be meaningful and engaging rather than passive activities that don't truly engage or promote understanding of new concepts. Purposeful homework should give students a deeper understanding of content and allow them to practice skills that they can master independently. In high school, the 10-minutes-per-grade-level rule still applies (students should receive 10 minutes of homework per night based on the grade level they are in). This rule allows up to 120 minutes of homework in the evening for upper-level students. Researchers agree that homework should serve a specific developmental or educational purpose. High school students should not get the impression their homework is just busy work; that increases resentment and reduces the likelihood they'll see homework as crucial to their education.
https://imgworld.ru/homework-at-school-16113.html
FORT FAIRFIELD, Maine — An Aroostook County town that has taken significant steps to reduce its overall energy consumption was recognized for its efforts Friday by a Maine organization that wants to prompt other towns to do the same. During a brief morning ceremony, Efficiency Maine presented Fort Fairfield with its Recognition Award for its commitment to reducing energy consumption and energy costs. The event at the Fort Fairfield Community Center was attended by town officials, representatives from Efficiency Maine, and business and community leaders. Efficiency Maine is an independent statewide agency created to promote energy efficiency and renewable energy programs. Peter Roehrig, community and government relations official with Efficiency Maine, presented the award to Town Manager Dan Foster. Foster said Friday it was the first time the organization recognized a town or city as a whole for energy efficiency. In the past, similar awards have gone to the Maine National Air Guard, the Lewiston School District and various businesses. Fort Fairfield was singled out for the award because of the energy projects it has undertaken in partnership with Efficiency Maine, and also because it has encouraged other organizations and businesses to do the same. The town used a $58,290 allocation from Efficiency Maine to replace 174 street lights with high-efficiency LED lights. Officials with Efficiency Maine estimated the move will save the town approximately $19,000 each year for at least the next 12 years. Annual energy costs for Fort Fairfield's street lighting system will be reduced from $23,600 down to $4,600 per year. Roehrig added that the town also encouraged the Mid-County School System, which covers Fort Fairfield, Bridgewater and Mars Hill, to undertake energy-saving projects with Efficiency Maine. Jim Everett, district operations manager for the Mid-County School System, said the organization has helped the school system reduce its energy consumption by more than 15 percent. “We are very pleased with the efforts of Fort Fairfield, and the Mid-County School System has helped lead the way to show other schools what can be done to improve energy efficiency,” said Roehrig. “Fort Fairfield is unique because Aroostook County has significant energy burdens because of its rural makeup. Despite that, they have managed to deliver significant cost savings and reduce energy consumption.” Roehrig said Friday's ceremony was the first step in a process that Efficiency Maine has instituted to recognize local towns and leaders for energy efficiency and to encourage other towns, businesses and organizations to follow suit. He added that partnering with towns and businesses allows Efficiency Maine to promote economic development by providing work for local businesses that offer and install energy-saving equipment. The partnerships also allow the money saved to be put toward other uses. Roehrig said Efficiency Maine is on a mission to grow the Maine economy by increasing energy efficiency across the state, not only by providing cash incentives for projects that reduce energy consumption but also by offering expert advice and guidance.
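The quoted figures are internally consistent, as a quick check shows; the payback-style division at the end is our own inference, not a claim made in the article:

```python
# Quick check of the street-light figures quoted above.
old_cost = 23_600      # annual street-lighting cost before LED conversion ($)
new_cost = 4_600       # annual cost after conversion ($)
grant = 58_290         # Efficiency Maine allocation ($)
years = 12             # minimum expected life of the savings

annual_savings = old_cost - new_cost
print(annual_savings)             # 19000, matching the ~$19,000 estimate
print(annual_savings * years)     # 228000 in total savings over 12 years
print(grant / annual_savings)     # ~3.07 years of savings to equal the allocation
```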
http://bangordailynews.com/2011/03/04/news/fort-fairfield-recognized-for-energy-efficiency/
Filed 2/18/16 Reis v. Time Warner NY Cable CA4/1

NOT TO BE PUBLISHED IN OFFICIAL REPORTS

California Rules of Court, rule 8.1115(a), prohibits courts and parties from citing or relying on opinions not certified for publication or ordered published, except as specified by rule 8.1115(b). This opinion has not been certified for publication or ordered published for purposes of rule 8.1115.

COURT OF APPEAL, FOURTH APPELLATE DISTRICT, DIVISION ONE, STATE OF CALIFORNIA

JOHN REIS, Plaintiff and Appellant, v. TIME WARNER NY CABLE, LLC, Defendant and Respondent. D069064 (Super. Ct. No. CIVRS1013457)

APPEAL from a judgment of the Superior Court of San Bernardino County, Janet M. Frangie, Judge. Reversed.

Law Office of Gerald Philip Peters and Gerald P. Peters, Andrew D. Stein & Associates and Rebecca A. Davis-Stein, and Casillas, Moreno & Associates and Arnoldo Casillas for Plaintiff and Appellant. London Fischer, Richard S. Endres and Jonathan F. Sher for Defendant and Respondent.

John Reis appeals from summary judgment entered in favor of defendant Time Warner NY Cable, LLC (Time Warner). Reis filed a lawsuit against Time Warner after he sustained injuries tripping over a Time Warner cable that had emerged from the ground in his yard. In its motion for summary judgment, Time Warner argued it could not be held liable because it did not install the cable that Reis tripped over; it did not breach any duty to Reis with respect to the cable or, alternatively, any alleged defect was trivial; and it did not have actual or constructive notice of a dangerous condition. On appeal, Reis asserts the trial court erred by granting the motion for summary judgment because Time Warner did not adequately establish a defense to Reis's negligence claim or show that Reis could not establish an essential element of his claim. Reis also contends triable issues of fact remain as to whether Time Warner can be held liable for Reis's accident. Reis also challenges the court's denial of his request for judicial notice. We conclude Time Warner did not meet its burden and agree with Reis that triable issues of fact remain with respect to whether Time Warner breached a duty of care owed to Reis. Accordingly, we reverse the judgment.

FACTUAL AND PROCEDURAL BACKGROUND

A. Factual Background

Reis purchased his home in Chino Hills in December 1997 from the original owner, who purchased the home new in 1976. At the time he moved into the home in January 1998, Reis had cable television service, which was provided by Adelphia Communications Corporation (Adelphia). In 2006, Time Warner acquired the franchise for cable service in Chino Hills from Adelphia as a result of Adelphia's bankruptcy. Time Warner has provided cable television service to Reis's home since that time. Reis does not dispute that the cable at issue was installed before Time Warner acquired the Chino Hills cable franchise. The record contains no evidence establishing who initially installed the cable.

A utility easement runs alongside Reis's property. The service cable that Reis tripped over runs from a utility pedestal near the street in Reis's front yard to Reis's house. On the day of the incident, January 10, 2010, Reis was working in his front yard. As Reis was raking leaves that had fallen from a hedge he had just trimmed a few feet from the pedestal, his foot caught on a portion of cable that had come unburied. Reis fell backward, hitting his head on a concrete curb. Reis lost consciousness and was taken to the hospital by ambulance.
Reis sustained a concussion and alleges debilitating long-term injuries from the fall, including persistent headaches, back pain, and cognitive and emotional disabilities, that have kept him from working since the incident.

Reis had last trimmed the hedge approximately six months before the incident, and had trimmed it approximately 25 times since he moved into the house. He raked the hedge clippings around the area the same way each time. Reis also estimated that he had mowed the lawn around the vicinity of where he fell, but not over the place where the cable emerged, over 300 times. Reis had never seen an exposed cable in his yard before the accident.

Time Warner employees were at Reis's home three times in the 90 days before Reis's fall. Technician Gabriel Gonzalez responded to service calls at Reis's home on October 13, 2009 and October 28, 2009. During the first service call, Gonzalez was at Reis's property for about an hour and a half and spent approximately 20 minutes working on the pedestal. During the second service call, Gonzalez was again at Reis's property for about an hour and a half and spent five minutes working at the pedestal. Gonzalez stated his general practice, and Time Warner's policy, was to inspect all cable equipment within the vicinity of where he performed work. Gonzalez remembered checking around the pedestal and did not recall seeing an exposed cable. Had he seen the cable above the ground, he would have reported it to Time Warner.

Time Warner Technician Dominic Gomez and a crew of other employees performed an upgrade to equipment within the pedestal on December 22, 2009. Gomez did not specifically recall working on the property, but like Gonzalez stated that it was his practice to inspect the area where he performed work and that he would have reported an exposed cable had he seen one.

After the accident, on January 13, 2010, Time Warner performed a site evaluation of Reis's property. The following day, the company removed and replaced 45 feet of cable in Reis's yard.

B. Procedural History

In June 2010, Reis filed a claim for damages with the City of Chino Hills (City), which the City rejected. In December, Reis brought the instant suit against the City and Time Warner. Reis's complaint asserted three causes of action: two against the City, for public entity liability and failure to protect against dangerous conditions, and a negligence cause of action against Time Warner. Reis's negligence claim alleged Time Warner owed Reis a duty to exercise ordinary and reasonable care with respect to its use of the utility easement and that Time Warner's failure to ensure the cable was buried 18 inches below ground was a breach of that duty. In January 2011, Reis amended his complaint without substantive change to his claim against Time Warner. Thereafter the parties engaged in discovery and the City brought an unsuccessful motion for summary judgment. In denying the motion, the trial court found the City failed to carry its burden to show (1) "that the conditions of public property alleged by [Reis] do not constitute a dangerous condition of public property [pursuant to Government Code sections 830 and 835] as a matter of law;" (2) that the City did not have constructive or actual notice, pursuant to Government Code section 835.2, of the condition alleged to have caused the harm; and (3) that the City is immune from liability under Government Code sections 818.6 and 821.4.
The trial court also concluded there were triable issues of fact as to (1) whether Time Warner was an independent contractor for the City (whose actions the City would be liable for under Government Code section 815.4) and (2) whether the City permitted Time Warner to install cable.

In April 2014, Reis was given leave to file an amended complaint adding Adelphia as a defendant. The second amended complaint asserted the negligence claim against Adelphia (in addition to Time Warner) and added an additional specific allegation that Time Warner and Adelphia owed a duty of care with regard to "inspecting, repairing and maintaining the cable system and equipment." Time Warner answered the second amended complaint in May 2014 and brought a motion for summary judgment in September.1

1 Time Warner brought a motion for summary judgment in January 2014, but withdrew that initial motion in April.

Time Warner asserted it was entitled to summary judgment because: (1) it did not install the cable and, therefore, could not have negligently created the allegedly hazardous condition; (2) it owed no duty to Reis under the factors set forth in Rowland v. Christian (1968) 69 Cal.2d 108 (Rowland), and even if a duty existed the defect was so trivial that liability could not be imposed; and, finally, (3) even if the condition of the cable was sufficiently dangerous or defective, Reis's claim was not viable because Time Warner did not have actual or constructive notice of the condition.

In his opposition to the motion, Reis contended that Time Warner failed to establish it owed no ordinary duty to Reis under the Rowland factors and that it owed no statutory duty of care under certain municipal ordinances. Reis also asserted that Time Warner had constructive notice of the defective condition because, even though it did not install the cable, as the successor to Adelphia it was obligated under California Public Utilities Commission (PUC) standards to conduct reasonable inspections of the cable and failed to do so. Reis also contended Time Warner did not establish, as a matter of law, that the exposed cable was a trivial defect. Reis alternatively argued that even if Time Warner had shifted the burden to him, he had raised triable issues of material fact as to whether Time Warner breached the standard of care by failing to adequately inspect its cable in Reis's yard, and whether Time Warner breached a duty to the City to properly maintain its cable facilities and provide insurance and indemnification to the City to compensate Reis for his injuries.

On February 19, 2015, the court granted Time Warner's motion, finding undisputed facts established that Time Warner "did not install the subject cable therefore, it cannot be liable for improperly installing the cable." The court further found that Time Warner "had no notice or knowledge of any exposed cable on [Reis's] residence" and "duly inspected the area of its equipment," and, therefore, was not "liable for failing to inspect, repair, or maintain its equipment, including the subject cable." The court entered judgment in favor of Time Warner on March 2, 2015.

DISCUSSION

I

On appeal, Reis contends the trial court erred by granting summary judgment because Time Warner did not establish an affirmative defense to Reis's negligence claim or show that Reis could not establish an essential element of his claim. Reis also contends that triable issues of material fact remain concerning Time Warner's liability.
A

A motion for summary judgment or adjudication shall be granted when "all the papers submitted show that there is no triable issue as to any material fact and that the moving party is entitled to a judgment as a matter of law." (Code Civ. Proc., § 437c, subd. (c).) A defendant moving for summary judgment or adjudication has the "initial burden of production to make a prima facie showing of the nonexistence of any triable issue of material fact." (Aguilar v. Atlantic Richfield Co. (2001) 25 Cal.4th 826, 850.) A defendant meets this burden either by showing that one or more elements of a cause of action cannot be established or by showing that there is a complete defense. (Id. at pp. 853-854.) Once a defendant has demonstrated the plaintiff's evidence is deficient, the plaintiff may successfully oppose the motion for summary judgment by showing the evidence permits conflicting inferences as to the particular element of the cause of action or by presenting additional evidence of its existence. (Code Civ. Proc., § 437c, subds. (c), (p)(1); Silva v. Lucky Stores, Inc. (1998) 65 Cal.App.4th 256, 261.)

The summary judgment procedure is directed at revealing whether there is evidence that requires the fact-weighing procedure of a trial. " '[T]he trial court in ruling on a motion for summary judgment is merely to determine whether such issues of fact exist, and not to decide the merits of the issues themselves.' [Citation.] The trial judge determines whether triable issues of fact exist by reviewing the affidavits and evidence before him or her and the reasonable inferences which may be drawn from those facts." (Morgan v. Fuji Country USA, Inc. (1995) 34 Cal.App.4th 127, 131.) However, a material issue of fact may not be resolved based on inferences if contradicted by other inferences or evidence. (Aguilar v. Atlantic Richfield Co., supra, 25 Cal.4th at p. 856.) "The evidence of the moving party [is] strictly construed, and that of the opponent liberally construed, and any doubts as to the propriety of granting the motion [are to] be resolved in favor of the party opposing the motion." (Branco v. Kearny Moto Park, Inc. (1995) 37 Cal.App.4th 184, 189.) The trial court does not weigh the evidence and inferences, but instead merely determines whether a reasonable trier of fact could find in favor of the party opposing the motion, and must deny the motion when there is some evidence that, if believed, would support judgment in favor of the nonmoving party. (Alexander v. Codemasters Group Limited (2002) 104 Cal.App.4th 129, 139.)

On appeal, we evaluate the respective evidentiary showings de novo to determine if the evidence permits conflicting inferences as to a particular element of the plaintiff's cause of action, or as to a defense to it. In performing our review, "we must view the evidence in a light favorable to plaintiff as the losing party [citation], liberally construing her evidentiary submission while strictly scrutinizing defendants' own showing, and resolving any evidentiary doubts or ambiguities in plaintiff's favor." (Saelzler v. Advanced Group 400 (2001) 25 Cal.4th 763, 768.) "We need not defer to the trial court and are not bound by the reasons for the summary judgment ruling; we review the ruling of the trial court, not its rationale." (Knapp v. Doherty (2004) 123 Cal.App.4th 76, 85.)
B

Reis asserts summary judgment was not proper because Time Warner did not present evidence that shifted the burden of proof to him with respect to whether Time Warner was liable for Adelphia's alleged negligent installation of the cable, or whether Time Warner itself was negligent for failing to properly maintain the cable. Time Warner responds that the evidence conclusively established that it did not install the cable and that Reis's failure to assert a claim or argue in opposition to its motion that Time Warner was liable for Adelphia's installation of the cable forecloses this court from considering this theory of liability. With respect to its own conduct, Time Warner argues (1) that it cannot be held liable for a breach of duty because it did not have actual or constructive notice of the alleged defect and (2) even if it did breach its duty to Reis it cannot be held liable because the defect was trivial. Time Warner also asserts that Reis's argument concerning negligence per se is misplaced.

1. Time Warner's Liability for Adelphia's Conduct

We agree with Time Warner that reversal is not warranted based on Time Warner's liability for Adelphia's conduct. This issue was not raised in the trial court. The trial court found Time Warner was not directly liable for improper installation because "the undisputed facts with supporting evidence establish that Time Warner Cable did not install the subject cable therefore, it cannot be liable for improperly installing the cable." It did not consider Time Warner's indirect liability for Adelphia's installation or maintenance of the cable.

We are cognizant of the principle Reis advances that an appellant may assert a new theory that pertains only to questions of law on undisputed facts for the first time on appeal. (Richmond v. Dart Industries, Inc. (1987) 196 Cal.App.3d 869, 879.) This rule cannot be invoked, however, " 'if the new theory contemplates a factual situation the consequences of which are open to controversy and were not put in issue . . . .' " (Ibid.) In such a case, considerations of fairness dictate that " 'the opposing party should not be required to defend against it on appeal.' " (Ibid.) Further, whether this court considers a new theory of liability on appeal is discretionary. An appellate court is not required to consider a new theory, even if it raises a pure question of law. (Greenwich S.F., LLC v. Wong (2010) 190 Cal.App.4th 739, 767.)

As Time Warner points out, the second amended complaint does not assert claims against Time Warner based on its assumption of Adelphia's liabilities. Further, despite his assertion to the contrary, Reis's opposition to Time Warner's motion for summary judgment did not include argument on this theory.2

2 To support his argument that the issue was raised below, Reis points solely to the fact that the trial court took judicial notice of the 2006 bankruptcy court order authorizing the sale of Adelphia's assets to Time Warner, which he submitted in support of his opposition to Time Warner's motion. The record, however, does not show that Reis made any assertion tying the document to Time Warner's liability for Adelphia's conduct.

Whether Time Warner could be held liable for Adelphia's conduct and whether the purchase agreement conveyed such potential liability requires at minimum an examination of the agreement between the two companies. That agreement is not in the record before this court. In light of this factual consideration, we decline to make an exception in this case to the general rule that a new theory of liability cannot be raised for the first time on appeal.

2. Time Warner's Liability for Its Own Conduct

Time Warner's principal argument in its motion for summary judgment was that it owed no duty to Reis under the Rowland factors. The issue of duty is a question of law for the trial court. (Ann M. v. Pacific Plaza Shopping Center (1993) 6 Cal.4th 666, 674.) The trial court, however, did not base its order granting summary judgment on a lack of duty. Instead, the trial court concluded Time Warner was not liable because there was no dispute that Time Warner did not install the cable, and liability based on Time Warner's breach of its duty of care with respect to maintaining the cable was foreclosed by a lack of actual or constructive notice of the defect.

On appeal, Reis asserts this ruling was error. In essence, he contends that Time Warner owed him a duty of care, Time Warner's evidence in support of its motion did not establish it did not breach that duty, and, even if it did, triable issues of material fact remain as to whether Time Warner had notice of the alleged defect. As evidence of Time Warner's breach of its duty of care, Reis points to Time Warner's failure to comply with PUC General Order (G.O.) 128 and Chino Hills Municipal Code section 5.52 et seq. In response to Reis's arguments, Time Warner does not argue it owed no duty to Reis. Instead, Time Warner asserts (1) Reis did not meet his burden to show a factual dispute concerning notice of the alleged defect and (2) the duty it owed to Reis was not breached because it conducted reasonable inspections of the cable or, alternatively, the alleged defect was trivial.

"In order to establish liability on a negligence theory, a plaintiff must prove duty, breach, causation and damages. [Citations.] A plaintiff meets the causation element by showing that (1) the defendant's breach of its duty to exercise ordinary care was a substantial factor in bringing about plaintiff's harm, and (2) there is no rule of law relieving the defendant of liability. [Citation.] These are factual questions for the jury to decide, except in cases in which the facts as to causation are undisputed." (Ortega v. Kmart Corp. (2001) 26 Cal.4th 1200, 1205 (Ortega).)

In this case, whether or not Time Warner had notice of the allegedly defective condition of the cable, exposing it to liability for failing to repair, remains an open question of fact reserved for the jury. "[W]here the plaintiff relies on the failure to correct a dangerous condition to prove the owner's negligence, the plaintiff has the burden of showing that the owner had notice of the defect in sufficient time to correct it." (Ortega, supra, 26 Cal.4th at p. 1206.)3 "The plaintiff need not show actual knowledge where evidence suggests that the dangerous condition was present for a sufficient period of time to charge the owner with constructive knowledge of its existence." (Ibid.) That knowledge, in turn, "may be shown by circumstantial evidence 'which is nothing more than one or more inferences which may be said to arise reasonably from a series of proven facts.' [Citation.] Whether a dangerous condition has existed long enough for a reasonably prudent person to have discovered it is a question of fact for the jury . . . . Each accident must be viewed in light of its own unique circumstances." (Id. at pp. 1206-1207; see also Beck v. Sirota (1941) 42 Cal.App.2d 551, 557 ["Ordinary care is a relative term. . . .
The amount of care must be in proportion to the danger to be avoided and the consequences reasonably to be anticipated."]; Bridgman v. Safeway Stores, Inc. (1960) 53 Cal.2d 443, 448 ["The care required must, of course, be commensurate with the particular risk involved, and the risks may vary with many different factors . . . ."].)

3 We note that Ortega and cases like it are not perfectly aligned to the situation presented here. Ortega involved store owner liability for failing to detect and correct a hazardous condition. (Ortega, supra, 26 Cal.4th at p. 1205.) In its briefing in the trial court, Time Warner drew an analogy to the store owner slip-and-fall liability cases and we agree this case can be analyzed under the notice framework.

As noted, Time Warner does not challenge the existence of a duty to Reis as the entity in possession and control of the cable. Instead, Time Warner argues that it established, as a matter of law, that the duty was not breached and it did not have constructive or actual notice of the alleged defect because its "technicians were at Reis's home three times within 90 days of Reis's accident" and Reis himself was in the yard frequently and never saw the cable. It further argues that "[n]o reasonable inspection would have detected the emerged cable" or "the depth of buried cable." The declarations of Time Warner's technicians and Reis's testimony, however, do not conclusively establish (and, therefore, shift the burden of proof to Reis to refute) that the defective condition was not there long enough for Time Warner to discover it. Nor does Time Warner's evidence conclusively establish that Time Warner's conduct complied with the standard of reasonable care. The questions of when the cable became exposed, whether the cable became exposed because of improper maintenance on the part of Time Warner, and whether Time Warner's inspections were reasonable in these circumstances are issues of fact for the jury.4

4 Although the issue is not raised by Reis, we also note the apparent inconsistency between the court's ruling denying the City's motion for summary judgment and the order under review granting Time Warner's motion. On the record before this court, it appears that the City remains subject to liability for Time Warner's conduct while Time Warner does not. The trial court's earlier order denies the City's motion "on the issue of whether City is liable for plaintiff's injuries, pursuant to [Government] Code section 815.4" and states that "[t]he City does not meet its threshold burden of showing there are no contracts between the city and [Time Warner] which would have created an independent contractor relationship." Government Code section 815.4 states that a "public entity is liable for injury proximately caused by a tortious act or omission of an independent contractor of the public entity to the same extent that the public entity would be subject to such liability if it were a private person. Nothing in this section subjects a public entity to liability for the act or omission of an independent contractor if the public entity would not have been liable for the injury had the act or omission been that of an employee of the public entity." Because the underlying briefing on the City's motion is not before this court and nothing in the record suggests that only the City and not Time Warner owed a duty to Reis with respect to the cable, the orders seem at odds with each other.

Time Warner also argues that even if it had constructive notice, it cannot be held liable for damages because the defective condition at issue is trivial. "The trivial defect doctrine . . . is an aspect of a landowner's duty which a plaintiff must plead and prove. [Citation.] The doctrine permits a court to determine whether a defect is trivial as a matter of law, rather than submitting the question to a jury. [Citation.] 'Where reasonable minds can reach only one conclusion—that there was no substantial risk of injury—the issue is a question of law, properly resolved by way of summary judgment.' " (Stathoulis v. City of Montebello (2008) 164 Cal.App.4th 559, 567 (Stathoulis).) The triviality rule " 'provides a check valve for the elimination from the court system of unwarranted litigation which attempts to impose upon a property owner what amounts to absolute liability for injury to persons who come upon the property. . . . [¶] The legal analysis involves several steps. First, the court reviews evidence regarding the type and size of the defect. If that preliminary analysis reveals a trivial defect, the court considers evidence of any additional factors such as the weather, lighting and visibility conditions at the time of the accident, the existence of debris or obstructions, and plaintiff's knowledge of the area. If these additional factors do not indicate the defect was sufficiently dangerous to a reasonably careful person, the court should deem the defect trivial as a matter of law and grant judgment for the landowner.' " (Stathoulis, supra, 164 Cal.App.4th at pp. 567-568.)

We do not agree with Time Warner that its evidence conclusively establishes summary judgment is appropriate on the basis of triviality. While some facts in the record might lend support for such a conclusion (the cable's location under the hedge and Reis's knowledge of the area), others (the nature of the fall and the severity of Reis's injuries, and Time Warner's removal and replacement of 45 feet of cable in Reis's yard days after the fall) support the opposite conclusion. Importantly, other relevant facts such as the size of the exposed cable and how far above the ground it protruded are not before this court. On this record we cannot say that the alleged defect was trivial as a matter of law.

3. Negligence Per Se

The doctrine of negligence per se creates a presumption that, where there was a violation of law, the violator was negligent. (Jacobs Farm/Del Cabo, Inc. v. Western Farm Service, Inc. (2010) 190 Cal.App.4th 1502, 1526; Evid. Code, § 669, subd. (a).) To establish negligence per se a " '[p]laintiff must show that (1) defendant violated a statute, ordinance or regulation of a public entity, (2) the violation proximately caused his injury, (3) the injury resulted from an occurrence of the nature which the statute was designed to prevent; (4) he was one of the class of persons for whose protection the statute was adopted.' " (DiRosa v. Showa Denko K.K. (1996) 44 Cal.App.4th 799, 805.) "The last two elements are determined by the trial court as a matter of law, since they involve statutory interpretation [citation], while the first two elements are regarded as factual matters to be determined by the jury." (Cade v. Mid-City Hosp. Corp. (1975) 45 Cal.App.3d 589, 597.) If the presumption of negligence is created, it is rebuttable by proof that "[t]he person violating the statute, ordinance, or regulation did what might reasonably be expected of a person of ordinary prudence, acting under similar circumstances, who desired to comply with the law . . . ." (Evid. Code, § 669, subd. (b)(1).)
There is no separate cause of action for negligence per se. (Das v. Bank of America, N.A. (2010) 186 Cal.App.4th 727, 737-738.) Rather, evidence of a defendant's violations of regulations or statutes may provide proof of negligence. Here, the parties dispute both what the PUC regulations require, and whether Time Warner complied with the regulations.5

5 Reis relies on PUC G.O. 128, which provides rules for "underground electrical supply and communications systems" to "insure adequate service and secure safety to all persons engaged in the construction, maintenance, operation or use of underground systems and to the public in general." The general order contains specific rules to be followed by utilities, including cable providers, in ensuring safety in the installation and maintenance of their facilities. Reis also points to Chino Hills Municipal Ordinance section 5.52.400, which states cable facilities must be maintained "in accordance with applicable [PUC] pole attachment standards, electrical codes and industry standards of the cable television industry generally . . . ." As discussed in the following section, the City's ordinances were not included as evidence before the trial court. The ordinance, however, relies on the PUC's regulation of cable providers.

In the trial court, Reis contended Time Warner's failure to comply with PUC regulations was evidence of Time Warner's breach of the duty of reasonable care it owed to Reis and that the regulations requiring maintenance of the cable provided Time Warner with constructive notice of the alleged defect. On remand, whether Time Warner violated the PUC's regulations should be addressed by the finder of fact in the context of determining whether Time Warner's conduct fell below the standard of reasonable care.6 (See Parsons v. Crown Disposal Co. (1997) 15 Cal.4th 456, 495 (dis. opn. of Kennard, J.) ["Deciding whether a person's conduct conformed to this standard of care on a particular occasion is generally a question of fact for the jury."].)

6 Of note, even if Time Warner can show that the rules were not applicable, or that it adequately complied with them, this would not establish due care as a matter of law. Rather it would merely relieve Time Warner " ' " 'of the charge of negligence per se. It does not affect the question of negligence due to the acts or omissions of the company as related to the particular circumstances of the case.' [Citation.]" [Citation.] Safety regulations prescribe only the minimum care required, "and it is usually a matter for the jury to determine whether something more than the minimum was required under the evidence in the case." ' " (Mata v. Pacific Gas and Electric Company (2014) 224 Cal.App.4th 309, 313.)

II

Reis also contends the trial court erred by denying his request for judicial notice of Government Code section 53066 and Chino Hills Municipal Code section 5.52 et seq. Time Warner opposed the request, arguing the provisions were irrelevant to the issues before the court. Time Warner asserted that as a holder of a state franchise under Public Utilities Code section 5840, subdivision (a), the statute and ordinances were not applicable to it and, therefore, properly excluded.7

7 Reis also asks this court to take judicial notice of Government Code section 53066 and Chino Hills Municipal Code section 5.52.090. California Rules of Court, rule 8.252(a) requires a party seeking judicial notice to serve and file a separate motion and proposed order. Because Reis failed to do so we decline to consider his request.

With respect to a California statute, Evidence Code section 451 generally requires a court to take judicial notice of the law. (See Evid. Code, § 451 ["Judicial notice shall be taken of the following: (a) The decisional, constitutional, and public statutory law of this state . . . ."]; Kasem v. Dion-Kindem (2014) 230 Cal.App.4th 1395, 1400 [court required to take judicial notice of state statutes and failure to do so is error].) With respect to a municipal ordinance, under Evidence Code sections 452 and 453 judicial notice is generally also required so long as the proponent of the request (1) provides each "adverse party sufficient notice of the request, through the pleadings or otherwise, to enable such adverse party to prepare to meet the request" and (2) furnishes "the court with sufficient information to enable it to take judicial notice of the matter." (Evid. Code, § 453.)

Judicial notice, however, " 'is always confined to those matters which are relevant to the issue at hand.' " (Mangini v. R. J. Reynolds Tobacco Co. (1994) 7 Cal.4th 1057, 1063.) " 'While Evidence Code, section 451, provides in mandatory terms that certain matters designated therein must be judicially noticed, the provisions contained therein are subject to the qualification that the matter to be judicially noticed must be relevant . . . .' " (Ibid.) A "decision of the judge not to take judicial notice will be upheld on appeal unless the reviewing court determines that the party furnished information to the judge that was so persuasive that no reasonable judge would have refused to take judicial notice of the matter." (Willis v. State of California (1994) 22 Cal.App.4th 287, 291.)

As noted, Time Warner's opposition to Reis's request below asserted that Government Code section 53066 and the City's ordinances are not relevant to the disputed issues because at the time of Reis's accident the ordinance was inapplicable under Public Utilities Code section 5840, subdivision (a). That provision states: "Neither the [PUC] nor any local franchising entity or other local entity of the state may require the holder of a state franchise to obtain a separate franchise or otherwise impose any requirement on any holder of a state franchise except as expressly provided in this division. Sections 53066, 53066.01, 53066.2, and 53066.3 of the Government Code shall not apply to holders of a state franchise." (Pub. Util. Code, § 5840, subd. (a).)

Reis concedes that Time Warner held a franchise from the State of California at the time of his accident and contends only that Public Utilities Code section 5840 did not preclude the application of the City's ordinances and Government Code section 53066 to Time Warner's operation of the Chino Hills franchise because Time Warner did not obtain the state franchise until July 16, 2009. Reis provides no basis, however, to justify application of Government Code section 53066 or the City's ordinances at the time of his fall. On this record, we cannot say that no reasonable judge would have found these provisions irrelevant and refused to take judicial notice of them.8

8 Further, Reis relies on these provisions primarily to support his contention that Time Warner should be held liable for Adelphia's conduct. As discussed, we decline to consider this theory of liability for the first time on appeal and the relevance of these rules is further called into question for this reason.

DISPOSITION

The judgment is reversed. Appellant is awarded costs on appeal.

O'ROURKE, J.

WE CONCUR:

HUFFMAN, Acting P. J.

HALLER, J.
Hounslow Symphony Orchestra 60th anniversary concert, to include the world premiere of John Woolrich's "The Tongs and The Bones". Saturday November 29th at 7pm, at St Mary's Church, Osterley Road, Isleworth TW7 4PW. Hounslow Symphony Orchestra marks its 60th season with this celebration concert. The programme includes arguably the greatest symphony by a British composer, Elgar's Symphony No 1, and the world premiere of a new commission by arguably the most important British composer living today, John Woolrich's "The Tongs and The Bones". The third piece is Sculthorpe's Beethoven Variations.
http://www.brentfordtw8.com/info/evhso006.htm
Different countries utilise various methods for disseminating information about the governance of maritime activities. Nevertheless, information 'is generally highly fragmented, sector-specific and/or poorly accessible.' The diversity of legal systems and the extensive regulatory and legal framework within each country add considerable complexity to implementing marine spatial planning (MSP) at a sea-basin level. The marine planning project website can address many of these regulatory and legal complexities, as it provides a single platform where users can investigate the planning and licensing processes for different activities within different countries or jurisdictions. It does this by providing the foundations upon which each country around a regional sea can implement marine planning within its own jurisdiction, bridging the gaps that currently exist by:

(1) explaining, in broad terms, the governance/legal system of the jurisdiction;
(2) explaining the marine planning system in operation in each jurisdiction;
(3) identifying the law and policies within each jurisdiction for each activity;
(4) providing, for each activity, details of the licences or authorisations required;
(5) providing regulator details and links to the regulator within each jurisdiction for each activity;
(6) providing links to data sources within each jurisdiction;
(7) identifying where each activity is currently taking place within each jurisdiction;*
(8) identifying where national planning authorities have highlighted an area where an activity could take place;*
(9) highlighting nuances within the law by providing legal definitions; and
(10) overcoming language barriers by defining what specific words mean in that jurisdiction and in that particular context.

The aim is to enable participating countries to adopt collective responsibility for shared sea basins while respecting the national boundaries and legal systems within each.

* This tool (Activity Locations) within the website currently displays example locations for aquaculture. The locations displayed are for demonstration purposes only.
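As a purely illustrative sketch (not the project's actual data model), the per-jurisdiction, per-activity information enumerated in points (1) to (10) above could be gathered into a record like the following; every field name and value here is a hypothetical example.

```python
# Hypothetical sketch of one entry in a jurisdiction/activity register,
# mirroring items (1)-(10) of the list above. All field names, values and
# URLs are illustrative assumptions, not the website's real schema.
aquaculture_entry = {
    "jurisdiction": "ExampleLand",                               # (1)-(2)
    "governance_summary": "Civil-law system; planning devolved to regions",
    "activity": "aquaculture",
    "law_and_policy": ["Marine Act 2010, s. 12",                 # (3)
                       "National Aquaculture Policy"],
    "licences_required": ["seabed lease", "discharge consent"],  # (4)
    "regulator": {"name": "Example Marine Authority",            # (5)
                  "url": "https://example.org/marine-authority"},
    "data_sources": ["https://example.org/marine-data-portal"],  # (6)
    "current_locations": "Activity Locations layer (demo data)", # (7)-(8)
    "definitions": {"aquaculture":                               # (9)-(10)
                    "the farming of aquatic organisms, as defined locally"},
}
```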
https://marineplanningexchange.com/pages/3
Memory is simultaneously cultural and personal, muscular and cerebral. So it is studied in a thrilling but daunting range of disciplines, and with a bewildering array of methods. From neurobiology to narrative theory, from artificial intelligence to anthropology, the articles in this issue embrace and analyse the many ways in which remembering tangles us in affect, time, and technology. The papers derive from talks originally presented at a series of workshops on memory, mind, and media held in Sydney (at Tusculum in Potts Point, and at Macquarie University) in late November and early December 2004; all have been subsequently refereed and revised. The impulse for the workshops was to treat memory as a test case for the optimistic vision of future “multidisciplinary alliances” put forward by the philosopher Andy Clark, who was one of the speakers: Much of what matters about human intelligence is hidden not in the brain, nor in the technology, but in the complex and iterated interactions and collaborations between the two. … The study of these interaction spaces is not easy, and depends both on new multidisciplinary alliances and new forms of modelling and analysis. The pay-off, however, could be spectacular: nothing less than a new kind of cognitive scientific collaboration involving neuroscience, physiology, and social, cultural, and technological studies in about equal measure. (Clark 2001: 154) As in the original workshops, some of these papers focus more directly on memory, others on the broader frameworks of ‘distributed cognition’ and the ‘extended mind’ within which this vision was situated. (This is the first of three special issues arising from these workshops: the December 2005 issue of the journal Cognitive Processing, and a 2006 issue of the journal Philosophical Psychology will be devoted to other articles derived from the same event.) The issue begins with Elizabeth A. Wilson’s "Can you think what I feel? Can you feel what I think?": Notes on affect, embodiment, and intersubjectivity in AI. Wilson argues that, contrary to critical stereotype, the cognitive sciences have never excluded emotion and embodiment. Picking out neglected strands of the histories of computation and of psychoanalysis, she shows that theorists imagining different modes of embodiment – robotic, for example – have always tightly coupled affect and machinery. Further historical depth to contemporary ideas about embodied cognition is offered in Evelyn Tribble’s paper "The Chain of Memory": Distributed Cognition in Early Modern England. Building on her recent study of actors’ remembering on the Elizabethan stage, Tribble examines changing technologies of memory in early Protestantism. In both church and theatre, the challenge to individuals of remembering large bodies of information was met by new ways of distributing the mnemonic load onto the environment: architecture, artifacts, and practices spread the burden of memory from the individual brain to various forms of physical and cultural scaffolding. The environmental dependence of temporal thought is demonstrated in a very different cultural context in Paul Memmott’s article Tangkic Orders of Time: an anthropological approach to time study. In a classic ethnographic survey, Memmott draws together an array of evidence about sociogeographic, semantic, cosmological, and economic aspects of the construction of time in the Tangkic cultures of the southern Gulf of Carpentaria in northern Australia. 
Rich anthropological material of this kind poses exciting challenges for any ambitious programme in memory studies which seeks to do justice to the variety of personal and cultural forms of thinking about the past. In the next two papers, phenomenological approaches to embodiment are used to enrich our understanding of memory and body image. In Body Memory in Muscular Action on Trapeze, Peta Tait draws on her ongoing work with circus aerialists to pinpoint some of the mysterious features of muscular memory in highly skilled performance. Even though, for many trapeze experts, the body must in some sense take over in the unfolding of complex fluid action sequences, a number of Tait's informants nevertheless offer intriguing clues about the residual and vital place of attention, imagination, and even self-conscious thinking in expert bodily remembering. Then Francine Hanley, in The Dynamic Body Image and the Moving Body: revisiting Schilder's theory for psychological research, revivifies Paul Schilder's sophisticated account of the plasticity of the body image. In reporting on interviews with dancers and aerobics instructors which highlight the subtle roles of kinaesthetic experience in the construction and maintenance of the body image, Hanley, like Tait, complicates our picture of the interactions of doing and knowing in embodied skill. The uneasy place of the cognitive sciences in our over-audited academic and cultural life is highlighted in the next paper, by Andrew Murphie. In The Mutation of 'Cognition' and the Fracturing of Modernity: cognitive technics, extended mind and cultural crisis, Murphie offers a powerful diagnosis of the internal fragmentation of contemporary cognitivism. Science studies, cultural theory, and dynamical approaches to cognition together threaten the unity and the sense of our residually individualistic educational and political systems. In place of identifiable and manageable thinking processes located within the brains of single subjects, Murphie sketches a directly political physiology which operates at the social and the subpersonal levels all at once, evidence of 'a quiet trauma in the ecology of extended mind'. Memory has long been at the heart of new media practice and theory, and in Indexing Audio-Visual Digital Media: the PathScape prototype, Mike Leggett describes the early stages of an exciting interactive navigational system. Setting his PathScape project in the context of other recent multimedia narrative walks, Leggett suggests that the deep context-sensitivity exhibited in such media art practice might act as a model for understanding path-dependent trajectories of remembering. Since personal and collective memory alike are selective and constructive, PathScape's topographic orientation offers an appropriately idiosyncratic form of interactive retrieval. The last two papers deal directly with personal memory. In Seeking Self-Consistency with Integrity: an interdisciplinary approach to the ethics of self and memory, Russell Downham integrates recent psychological studies of motivation in autobiographical narrative with a philosophical approach to the ethics of memory. Many emotions are intrinsically diachronic or temporal in nature: so the creative or constructive nature of remembering raises difficult questions about emotional consistency and integrity, which should not be ignored in mainstream moral theory.
James Ley’s paper, On the Likely Form of 'Autobiographical Memory' for Aristotle, also synthesizes philosophical and psychological approaches to memory. Ley counterposes the provocative modern idea of personal memory as ‘mental time travel’ with the action-centred account of narrative in Aristotle’s Poetics. There are principles of plot construction in remembering as in story-telling, and problems of genre are just as relevant to psychology as to literary theory. Acknowledgements Many people have been involved in this issue of Scan, by helping out with the original workshops, with refereeing, and with production. Thanks especially to Tim Bayne, Jennifer Biddle, Sue Campbell, Andy Clark, Steve Collins, Russell Downham, Philip Gerrans, Oliver Granger, Helen Groth, Greg Levine, Doris McIlwain, Adrian Mackenzie, Catherine Mills, Anne Monchamp, Andrew Murphie, Gerard O’Brien, Monte Pemberton, John Potts, Huw Price, Kate Stevens, and Carl Windhorst. We also gratefully acknowledge the support or assistance of the Australian Research Council, the Division of SCMP at Macquarie University, the Royal Australian Institute of Architects, and the Centre for Time at the University of Sydney.
http://scan.net.au/scan/journal/display_synopsis.php?j_id=5
In 1914 women in Britain could not vote, study at university or legally enter a profession. But women did work – as domestic servants, shop workers, in factories and of course at home. The movement for women's suffrage had gathered pace since 1910, but the outbreak of the First World War divided the movement and overtook its protests. Between 1914 and 1918, an estimated two million women replaced men in employment, raising the proportion of women in total employment from 24 per cent in July 1914 to 37 per cent by November 1918. Women were required to make a significant contribution during the First World War: as more men left for combat, women stepped in to take over 'men's work', and the government used propaganda films to encourage them to get involved. Jobs carried out by women included farm and factory work; many worked in munitions, allowing for a rapid rise in production; others maintained coal, gas and power supplies; still others took on work in transport or offices. For women, the First World War amounted to a social revolution.
https://www.lboro-history-heritage.org.uk/women-at-work-in-ww1/
On December 3, the United Nations recognized the annual International Day of Persons with Disabilities, a day committed to championing and advocating for a better world. The annual recognition has been promoted by the UN for 24 years, dating back to 1992, and is dedicated to cultivating awareness and understanding of issues involving disabilities while celebrating the remarkable contributions and achievements of individuals with disabilities across the world.

From 1983 to 1992, the UN encouraged governments and organizations alike to take action to improve the lives of disabled individuals across the globe. In 1992, the movement was extended when the UN General Assembly announced December 3 would be recognized as the International Day of Disabled Persons. Now, the Convention on the Rights of Persons with Disabilities (CRPD) dedicates efforts to removing barriers for the disabled, believing that the more society champions the disabled community to interact with the world, the less their disabilities set them apart. More than 15 percent of the population is considered disabled, with more than 80 percent living in developing nations that don't have the ability or infrastructure to support them in enjoying the same lifestyles as their able-bodied counterparts. The UN continues to raise awareness, funds and other resources to assist those with disabilities in gaining independence and living rewarding lives.

On International Day of Persons With Disabilities, state parties and advocates come together to emphasize the importance of the cause and discuss ways to push forward. But the movement wouldn't be possible without the help of countless others who are committed to joining. If you're interested in taking action and getting involved, below are some suggestions on how to allocate your resources.

Volunteers are always needed and welcomed, and there are myriad avenues to help. Some organizations assist individuals with disabilities in living with equality, dignity and independence. Others train service dogs to help those impacted gain autonomy in day-to-day living, while still others work selflessly for civil rights, advocacy and public policy. If you're interested in volunteering for the UN, you can head over to their page to learn more. But don't forget about local organizations: educating yourself on the local establishments dedicated to helping individuals with disabilities is a perfect way to make a difference in the community you're a part of.

Naturally, supporting individuals with disabilities can extend outside the boundaries of volunteering your time — you can even try finding ways to use your own gifts to develop avenues for advocacy. For example, you could channel your creative energy into organizing an art show, a film viewing event or an exhibit showcasing the works of disabled artists. Reminding your community that disabled individuals are just as talented and inspiring as able-bodied individuals will not only add community value, but create avenues to cultivate support, love and understanding.

In today's burgeoning media world, soundboards are available everywhere you look. Blogging, tweeting, Facebooking or the old-fashioned route of talking are all impactful avenues for spreading a message. Educate others about an organization that works to help improve the lives of others and show support by providing ways to join the cause or pitch in. The more education that is disseminated, the more opportunity there is for progress.
While there are many organizations helping individuals with disabilities to choose from, picking one that means something to you personally is a great place to start. Perhaps you've been impacted by a specific story, or by a family member battling a certain condition. If you're in a position that allows you to donate, it can be a great way to ensure adequate resources and to foster improved lives and inclusivity for disabled individuals. If you can't donate, you can always spread the message and inspire others to chip in when and where they can. UNICEF, for example, helps kids with disabilities live normal lives and protects them from violence.

While joining in to celebrate International Day of Persons With Disabilities is a phenomenal opportunity to come together as a planet to support and advocate for others, Aspire of WNY hopes you find some time to help support the disabled community. Whether it's autism in Western New York or any of the other special needs right here in Buffalo, NY, we're committed to establishing and improving community services for the developmentally disabled. Real progress is achieved at the ground level each and every day, and every hand pitching in matters in making sure the world knows that every PERSON matters.
https://www.aspirewny.org/blog/international-day-of-persons-with-disabilities-how-to-show-support/
Sarah Taylor to play for Welsh Fire in The Hundred

London [UK], April 6 (ANI): Former England wicketkeeper-batter Sarah Taylor on Tuesday signed up with Welsh Fire for the inaugural edition of The Hundred. Taylor had announced her retirement from international cricket in September 2019 following a long battle with anxiety, and now she will be returning to the field for Welsh Fire.

"There has been a real buzz about The Hundred, and especially the women's competition. We've got the best players from around the world involved and the temptation to be part of it was too great to resist," Taylor said in an official statement. "I'm so excited at the prospect of playing again. It'll be really special to get back out there and be part of a Welsh Fire side that hopefully can have a great first season," she added.

Taylor is among England's greatest ever keepers, having made 226 appearances for her country before retiring from international cricket in 2019. She has two World Cup titles to her name.

Beth Barrett-Wild, head of The Hundred's women's competition, said: "Sarah Taylor is a truly extraordinary cricketer, who has made a habit of breaking new ground in the game throughout her career. It's highly appropriate therefore that she will feature in The Hundred this summer -- a competition which has the potential to transform women's cricket."

Taylor will be joined by Australian spin bowler Georgia Wareham, who has already featured 48 times for Australia despite being only 21, and who took three wickets in the 2020 Women's T20 World Cup semi-final. Wareham replaces Jess Jonassen, who has withdrawn due to personal reasons. (ANI)
What do I need to consider when choosing metal for an antenna? How do I determine what thickness to use?

- Physical characteristics or electrical characteristics? Physical: yes, electrical: not really. – oh7lzb Oct 23 '13 at 4:08
- Any question asking for "Best" is doomed to the wrath of the Close vote. – Andrew M0YMA Oct 23 '13 at 7:29
- Thickness can help determine bandwidth. Even more surface area can do so due to skin effect, so your question might be asking about the difference between single-wire and multi-strand cables, or even designs like the one used in the Woodpecker: en.wikipedia.org/wiki/Duga_radar. I can't really tell what you are asking – is this about wire antennas? – SDsolar Sep 21 '17 at 21:08

My non-technical, over-simplified answer: yes, the type of metal used for an antenna will present different characteristics. The most obvious is conductivity. Greater conductivity means lower resistive loss and thus more of your power radiated, though the size and dimensions of the conductor also affect performance. Cost, weight, and ease of manipulation of the material are often MORE of an issue than a slight percentage of increased performance due to the metal type. Aluminum, for example, is a prime choice for beams as it is easy to cut and bend, provides great conductivity, and is lightweight. A copper beam would provide greater conductivity, but as a soft metal it would bend out of shape easily and become unusable. Likewise, a dipole is usually copper wire, as its flexibility provides better ease of mounting and wind resistance, and low cost for a long-wavelength antenna.

The type and thickness (up to a point) of the metal plays a key role in the efficiency of the antenna. Efficiency is important because it is efficiency times directivity that determines the gain of the antenna. Thus the greater the efficiency, the greater the gain. Efficiency is the ratio of radiation resistance divided by radiation resistance plus resistive losses. The RF resistance of the metals of the antenna is one of the causes of resistive losses. As the radiation resistance becomes lower, the resistive losses become more critical. So for a 70 ohm, 1/2 wavelength dipole, reasonable resistive losses do not change much. A vertical 1/4 wave antenna has about 22 ohms of radiation resistance, so resistive losses become more significant. For a small-diameter 40 meter loop antenna with a radiation resistance less than 1/10 ohm, the resistive losses become critical.

The resistive losses of the metal are determined by the type of metal, the surface area and thickness of the metal, and the frequencies involved. The two most common metals for antennas are copper and aluminum. While copper has lower resistive losses, aluminum is lighter and less expensive. However, since copper is more conductive, less material is required to achieve the same RF resistance as aluminum. By way of comparison, 100 feet of common 14 gauge copper wire has a resistive loss of ~7.1 ohms on 15 meters, while the same wire in aluminum has a resistive loss of ~8.9 ohms. For a 1/2 wave dipole, this difference would be negligible in terms of comparative gain. Five feet of 1/2 inch copper has an RF resistance of ~0.02 ohms, compared to ~0.03 ohms for aluminum; in a small loop antenna, this difference has a substantial influence on the gain of the antenna. For standard antenna work, five times the skin depth is the maximum thickness of the conducting material that is required.
Beyond this maximum, the material is essentially unused from an RF perspective, although it may contribute to other physical properties such as strength. This is why properly copper-coated steel wire can be used to make a wire antenna: as long as the copper has sufficient thickness, the RF current will never reach the much lossier inner core of steel. On 160 meters, 5 times the skin depth in copper is ~240 micrometers. As the frequency rises, the skin depth decreases; at 10 meters, 5 times the copper skin depth is only ~60 micrometers. This also highlights why hollow copper or aluminum tubing can be used to save cost and weight in many cases. RF resistance and skin depth calculators abound on the Internet, making quick comparative analyses possible.

- Nice answer, and it would also be interesting to see some numbers showing losses in permeable materials such as iron, steel, and some stainless steel wire. For example, the 400 series of SS are magnetic (not to mention that all SS has significant resistance), and hams have used this sort of stuff to make antennas. – Mike Waters♦ Sep 25 '17 at 13:19

Yes, but unless you get really carried away it doesn't matter. The amount of metal being used, whether it be a bar, pipe, or sheet, is enough to make any concerns about resistive losses meaningless.

- Not true, because of skin effect. The size of the material matters and the conductivity of the outer layer matters. – Walter Underwood K6WRU Oct 23 '13 at 5:18

Apart from the things already mentioned, I would also like to bring in the Q factor. Although it is a figure of merit for oscillators, it has some relevance to antennas as well. If you had a purely inductive antenna, it would radiate at a given frequency with a very narrow bandwidth. This page explains the Q factors of antennas quite well. Q = (2 * pi * f * L) / R. So your antenna material had better have some resistance.
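Since the answers above lean on skin-depth figures and the copper-versus-aluminum loss comparison, here is a minimal Python sketch of the standard formulas behind the calculators they mention: skin depth delta = sqrt(rho / (pi * f * mu)) and the thick-wire approximation R ≈ rho * L / (pi * d * delta). The resistivity constants are handbook values I have assumed rather than numbers from the answers, and proximity effect, stranding, and surface oxidation are ignored; run as-is, it reproduces the ~7.1 ohm (copper) and ~8.9 ohm (aluminum) figures quoted above.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

# Resistivities in ohm-metres (handbook values; an assumption, not from the answers)
RESISTIVITY = {"copper": 1.68e-8, "aluminum": 2.65e-8}

def skin_depth(rho, f_hz, mu_r=1.0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu0 * mu_r))."""
    return math.sqrt(rho / (math.pi * f_hz * MU0 * mu_r))

def wire_rf_resistance(rho, length_m, diameter_m, f_hz):
    """AC resistance of a round wire when diameter >> skin depth:
    the current flows in a surface annulus roughly one skin depth thick,
    so R ~ rho * L / (pi * d * delta)."""
    delta = skin_depth(rho, f_hz)
    return rho * length_m / (math.pi * diameter_m * delta)

if __name__ == "__main__":
    f = 21.2e6             # 15 m band
    length = 100 * 0.3048  # 100 feet in metres
    d14awg = 1.628e-3      # 14 AWG wire diameter in metres
    for metal, rho in RESISTIVITY.items():
        print(f"{metal}: 5x skin depth at 1.8 MHz = "
              f"{5 * skin_depth(rho, 1.8e6) * 1e6:.0f} um, "
              f"100 ft of 14 AWG on 15 m = "
              f"{wire_rf_resistance(rho, length, d14awg, f):.1f} ohms")
```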
https://ham.stackexchange.com/questions/133/metal-for-antenna-construction
Relationships demand care and attention. If either is missing, the relationship weakens. Sustaining a relationship with the time, effort, and devotion it deserves is important yet difficult, and both partners must play an equal role in enhancing its quality. Things get worse when one partner cheats on or deceives the other. The deceiving partner usually falls for someone else after finding something lacking in the original relationship, and usually prefers to conceal the affair from the partner being betrayed. But once the affair is exposed, the whole relationship is shattered and plunges into traumatic circumstances. In most cases, after the disclosure of an affair the betrayed partner tends to leave both the partner and the relationship. Yet if the couple's love holds more strength than the exposed infidelity, it is quite possible for the relationship to survive. Surviving infidelity is a difficult and extremely painful process for both members of the couple, and such chaos demands their full attention if the relationship is to endure.

Consequences of infidelity in a relationship: Severe distrust; a betrayed partner tends to wipe away all forms of trust from his or her life after experiencing such deceit. Disappointment and hatred; deep disappointment, and even hatred toward the partner, is experienced by nearly everyone who has been cheated on. Suicidal despair; if the relationship was a profound symbol of love to the betrayed partner, severe disappointment may even lead him or her toward thoughts of suicide.

How should the couple react? After a couple experiences infidelity, their relationship enters a new phase, and such a change can push the relationship toward its end. It is therefore very important for the couple to give their relationship a new beginning. Although it is extremely difficult for the betrayed partner to forgive at once, he or she must develop the will to at least try to understand what led the other partner to cheat. The couple must spend time together, starting afresh, to build a new relationship. Discussing things is a good way to resolve issues: the couple should work out their likes and dislikes and hold friendly conversations to avoid any further damage to the relationship.

Is it easy to survive a relationship after it undergoes infidelity? No, it is really not easy. It takes months or even years to rebuild trust between the couple from a new beginning. But if both partners are finally honest and true to the relationship and the bond between them, it becomes far easier for the couple to live their married life with pleasure together again. It is not impossible to bring a relationship back to its original state! Time is a great healer of wounds. As time passes, it becomes far easier for both partners to give honesty and time to the relationship, and that commitment of honesty and dedicated time can surely carry a relationship through all sorts of issues and problems.
Infidelity is no small issue, but if both partners bring true sincerity to their relationship, they will be able to live and enjoy their married life with ease and joy once again!
http://www.streetarticles.com/affairs/is-it-possible-to-survive-a-relationship-after-infidelity-yes-although-it-is-not-so-easy
BACKGROUND: Stretching improves the flexibility of skeletal muscles and increases the range of motion (ROM) of joints. It is important in the prevention of sport-related injuries and influences muscle strength and performance. The effects of Static Stretching (SS) and Cyclic Stretching (CS) have been assessed by examining ROM, muscle power, and vertical jump performance; however, comparisons of CS with SS have not provided evidence regarding postural control after landing. The aim of this study was to examine the effects of stretching on the ROM of the ankle joint and on dynamic postural stability upon landing on one leg. METHODS: Twenty healthy subjects participated in this study. Participants were randomly assigned to SS, CS, and control conditions. The ankle was stretched in each condition for two minutes in a standing position. To assess dynamic postural stability, the participant jumped and landed on one leg onto a force platform, and the Dynamic Postural Stability Index (DPSI) was measured. Stability indices for the medial-lateral, anterior-posterior, and vertical directions were calculated. The data were compared among the three conditions with repeated-measures ANOVA, and the correlations between ankle ROM, the DPSI, and the maximum vertical ground reaction force (vGRFmax) were calculated. RESULTS: The ROM was significantly greater after SS and CS than in the control condition. A significant improvement in the DPSI was observed after CS. CONCLUSIONS: Two minutes of CS had a positive influence on dynamic postural stability after landing on a single leg; CS may improve balance by increasing postural stability.
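The abstract does not spell out how the DPSI and its component indices are computed from the force-platform data. As a hedged illustration only, here is a minimal NumPy sketch of one published formulation of these indices (Wikstrom et al.'s stability indices), in which each ground-reaction-force component over a window after touchdown is compared against its reference value (zero for the horizontal axes, body weight for the vertical axis); the variable names, the 3-second window, and the synthetic example data are all assumptions, not details taken from this study.

```python
import numpy as np

def stability_indices(grf_x, grf_y, grf_z, body_weight_n):
    """One published formulation of force-plate stability indices
    (Wikstrom et al.): root mean squared deviation of each GRF component
    from its reference (0 for horizontal axes, body weight for vertical),
    normalised to body weight. Inputs are 1-D arrays of forces in newtons
    sampled over the window after touchdown; lower values = more stable."""
    n = len(grf_z)
    bw = body_weight_n
    mlsi = np.sqrt(np.sum((0 - grf_x) ** 2) / n) / bw   # medial-lateral
    apsi = np.sqrt(np.sum((0 - grf_y) ** 2) / n) / bw   # anterior-posterior
    vsi = np.sqrt(np.sum((bw - grf_z) ** 2) / n) / bw   # vertical
    dpsi = np.sqrt(np.sum((0 - grf_x) ** 2 + (0 - grf_y) ** 2
                          + (bw - grf_z) ** 2) / n) / bw  # composite index
    return {"MLSI": mlsi, "APSI": apsi, "VSI": vsi, "DPSI": dpsi}

# Hypothetical example: 3 s of data at 1000 Hz for a 700 N subject
rng = np.random.default_rng(0)
t = 3000
forces = {
    "grf_x": rng.normal(0, 20, t),
    "grf_y": rng.normal(0, 30, t),
    "grf_z": rng.normal(700, 150, t),
}
print(stability_indices(**forces, body_weight_n=700.0))
```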
https://www.minervamedica.it/it/riviste/sports-med-physical-fitness/articolo.php?cod=R40Y2016N06A0692
Taking Care of Your Paintings

These are several tips on how to take care of your paintings. Your paintings are your creations. You invest a lot of time, effort, and creativity in bringing your masterpieces to life. It is logical, then, that you should know a few basic tips on how to take care of your paintings so that they last a long time. And taking care of your paintings involves not only the painting itself but also the canvas. Not only should you know how to protect your painting canvas, but also - heaven forbid! - how to repair it if the worst-case scenario happens. You must also take care of the back of the canvas, particularly when sending your paintings to an exhibition. There is always the danger of a dent or a hole being made in a canvas when several are stacked together; this can happen in your studio as easily as when they arrive at an exhibition. A piece of strong, heavy cardboard, cut to the size of the painting and tacked to the wooden canvas strips, will minimize this danger.

If a canvas is damaged, it can be re-backed with fresh canvas. The canvas must be perfectly dry. Remove the tacks and separate the canvas from its stretchers. Cut a piece of new canvas with a margin (including the tacking area) about an inch larger than the damaged canvas. Place the fresh canvas right side up on the floor and spread a heavy layer of white lead, cut with linseed oil, evenly over all of it. Now place the back side of the damaged canvas against the new canvas and apply even pressure over the entire area. Any surplus white lead that oozes out will be deposited on the 1-inch margin of the fresh canvas and can easily be scraped off. Put a sheet of waxed paper cut to the size of the damaged painting over the face of the painting and place everything under a flat drawing board. Let it dry for several days. Remove the board, trim the margin, and restretch. It is not always necessary to re-back or reline the entire canvas; for a small tear, a patch made from new canvas can be applied. Whether the painting is patched or re-backed, some retouching will be necessary if the hole or tear is of any size.

Oil paintings that have been stored for some time in a closet or on a curtained rack may darken or yellow. They will brighten considerably if you place them where they will be exposed to constant daylight (not direct sunlight). Keep this in mind if you are planning to exhibit any older paintings, so that they will be shown to best advantage. It is also possible that a new coating of varnish will help; some dull spots may have developed because of color sinking into the canvas. It is good practice to keep a case history of paintings that are the result of experimentation. This experimentation can be the paints used for an underpainting, new colors that you have added to your palette, time allowed for paint layers to dry, or any new approach. This information can be written on the back of the stretcher strips and will often provide vital data for future paintings.

Water colors, for their part, can be stored in portfolios with hinged flaps to keep out the dust and placed in a horizontal position to prevent warping. Bread crumbs can be used as a gentle means of removing accumulated dust smears. If a water color has a crease in it, moisten the back with clear water on a sponge. Then rub the underside of a spoon gently over the crease to help smooth it. Put the sheet between two clean blotters and place it under a drawing board, using some books for added pressure. Allow it to press for a few days before rematting.
https://www.learn-to-draw-and-paint.com/taking-care-of-paintings.html
In 2012, the Association of American Colleges and Universities (AAC&U) expanded its mission to encompass both liberal education and the long-term project of making excellence inclusive. This expansion of our mission built on several decades of AAC&U work to support higher education institutions across all sectors in creating learning experiences through which all students engage productively with the diversity of ideas and experiences that characterize our world. Moreover, as our board of directors affirmed in a series of official statements beginning in 2002, AAC&U “has long been committed to fair and equal access to higher education as part of our contribution to democracy’s promise of equal opportunity.” Because of these longstanding commitments, AAC&U was eager and proud to support several amicus briefs submitted in support of the University of Texas as the university defended its admissions practices in the lawsuit brought by Abigail Fisher and ruled on today. In filing these briefs, we joined with dozens of other associations and higher education institutions, Fortune 500 companies, religious organizations, military leaders, and elected officials in voicing support for the University of Texas. Collectively, we all strongly affirm the compelling national interest in advancing campus diversity as a necessary component of educational excellence. In this context, we are thrilled today by the Supreme Court’s decision to reaffirm that compelling national interest. As I noted in a statement last November and reaffirm today, AAC&U remains “strongly committed to helping our members succeed in the long-term work of educating and graduating students—from all backgrounds—who will be both prepared and inspired to work for a more just and equitable democracy in the United States and for the expansion of human dignity and opportunity around the world.” This work depends on higher education institutions taking seriously our responsibilities to create diverse and educationally effective learning environments using all admissions review practices available under the law. Simply put, diversity remains an essential component of educational excellence and of liberal education in the twenty-first century. The announcement of this Supreme Court decision, however, should be taken as a moment to rededicate ourselves to the significant work on both equity and diversity yet to be done. What is that work? The first step, of course, is to ensure equitable access to quality degree programs—across all sectors of higher education—for students from diverse communities and low-income backgrounds. The United States must tackle the hard work of undoing the highly inequitable patterns that still persist across racial and ethnic groups and across income levels in access to college, in meaningful participation in college, and in the likelihood of completing college with a quality education. The next step is to ensure that quality learning prepares college students—all college students—to both thrive and contribute in a diverse democracy and in the multicultural global community. 
As noted in our earlier board statement, “persuasive research indicates that for all students, engaging diversity on campus and in the curriculum promotes intellectual development, enhances critical thinking, reduces prejudice, improves intergroup relations, and contributes to student academic success and satisfaction.” We know as well from much educational research that “exploring diversity also produces graduates more likely to engage as informed citizens in remedying unsolved social problems.” Given the value that engagement with diversity creates—both for democracy and for the economy—ensuring students’ rich engagement with diversity on campus ought to be a top priority for every institution. Unhappily, the research shows that too many students are not participating in practices intended to build their capacity to engage difficult differences and solve problems collaboratively with people whose experiences and views are decidedly different from their own. AAC&U’s own studies have shown how students themselves believe that college must do far more to prepare them both for the diversity of the world beyond college and to contribute as engaged democratic citizens (See Optimistic About the Future, But How Well Prepared?, Engaging Diverse Viewpoints, and Making Progress? What We Know About the Achievement of Liberal Education Outcomes.) As we witness new student calls for change within higher education institutions, it is imperative that we build strong educational foundations for engaging differences of all sorts. I believe that higher education institutions would be in a far better position to address controversies related to difference if educational leaders were as explicit as possible—from the day staff and faculty are hired and from the moment that students first apply—about the fact that one of the essential college learning outcomes is effectively engaging diverse perspectives, a proficiency which centrally includes taking seriously and respectfully the perspectives of others. Even as we celebrate this important Supreme Court decision affirming campus diversity as a compelling educational interest, therefore, I urge educators across the country to recommit to the hard work of holding our institutions, our students, our faculty, and ourselves responsible for helping students achieve this essential capacity—constructive engagement with difference—that a quality college education includes. Creating a diverse campus community is the first step to achieving this goal; preparing students to work productively across difference—whatever their major—is the next critical frontier in higher education’s long-term efforts to make excellence inclusive. AAC&U recently devoted its entire Centennial year to an exploration of the intersections of equity and quality learning. We have developed valuable resources to support campuses as they mobilize to address pressing questions related to these issues in their own institutional contexts (see below). As I noted in November, we must use all the research and tools available to us “to ensure that higher education … provide[s] inclusive, respectful, and supportive environments for learners from communities that are today—and that have always been—systemically underserved, not just in higher education, but at all levels of the United States educational system.” AAC&U’s board of directors has affirmed that “making excellence inclusive is a fundamentally democratic ideal. 
It expresses our confidence in the liberating power of education.” But education is only liberating when it prepares students to thrive and contribute in the world they inherit. In the twenty-first century, that wider world is diverse, contested, and still disfigured by the persistence of deep inequities. To contribute in that context, our students must deepen their engagement both with difficult difference and with the hard work of creating solutions—with diverse partners—to the many challenges we face in our world.
https://aacu.org/about/statements/2016/ut-fisher
Please provide the number of students/participants. If you are an educator, please provide the number of adult supervisors (1 per 15 students please). This program is appropriate for 7th grade and up only. Indicate 3 possible date and time preferences. Our speakers are able to make single one-hour presentations, so if multiple classes are to hear the speaker, they must be pulled together into one group. Briefly describe your Holocaust unit including the main texts, readings, and audio-visual materials you use. Please provide this information about your classroom instruction if you are a 7th-12th grade educator. Briefly describe how you plan to address this experience in your classroom following the program.
https://mchekc.org/second-generation-speakers-bureau/second-generation-speakers-bureau-form/
The NACI focuses on three major efforts—NACI Demonstration Projects, the NACI Strategic Partnership Program, and the NACI Champions Program. In communities across the country, these efforts are engaging health care professionals, patients and families, schools and childcare settings, professional associations, and many others to implement innovative, strategic interventions to overcome barriers to implementing clinical guidelines and reducing asthma disparities. Through such efforts, the NACI hopes to speed the adoption of these recommendations by clinicians and adherence to them by patients and their families and caregivers. The NACI seeks to produce high-impact solutions and meaningful change in asthma control by:
- Convening and energizing national, regional, state, and local leaders.
- Developing a communication infrastructure for information sharing and accessing resources.
- Mobilizing champion networks to implement and integrate clinical and community-based interventions with emphasis on sustainability.
- Demonstrating evidence-based and best-practice approaches for specific audiences in various settings with emphasis on closing the asthma disparity gap.
At the core of the NACI are six priority messages selected from the EPR-3 and detailed in the GIP Report. If practiced routinely and implemented widely, these action-oriented messages have the power to improve asthma control and transform the lives of people with asthma. Failure to control asthma diminishes physical, psychological, and social wellbeing and quality of life; increases health disparities, particularly among African American, Puerto Rican, and socioeconomically disadvantaged populations; and places added burden on families, schools, workplaces, and health care systems. It affects us all. Furthermore, the costs of asthma to the United States economy, ranging from hospitalizations to lost wages, are projected to reach $20.7 billion in 2010. That's why we encourage you, as a health care professional, community practitioner, educator, decision-maker, or concerned citizen, to become part of the NACI. Join the NACI's collaborative efforts to improve asthma care, asthma control, and quality of life for all people with asthma.
An encyclopedia article should have a definition at the outset, but this requirement presents unique difficulties in the case of Hinduism. This difficulty arises from Hinduism’s universal worldview and its willingness to accept and celebrate diverse philosophies, deities, symbols, and practices. A religion that emphasizes similarities and shared characteristics rather than differences has a difficult time setting itself apart—unless this very quality is considered its defining feature. This is not to say that there are no beliefs and practices that may be identified as Hindu, but rather that the Hindu tradition has concerned itself largely with the human situation rather than the Hindu situation. Instead of basing its identity on separating Hindu from non-Hindu or believer from nonbeliever, Hinduism has sought to recognize principles and practices that would lead any individual to become a better human being and understand and live in harmony with dharma. The distinction of dharma from the Western sense of religion is crucial to understanding Hindu religious identity. To the extent that Hinduism carries with it the Western meaning of being a religion the words distort Indian reality. In the West a religion is understood to be conclusive—that is, it is the one and only true religion. Second, a religion is generally exclusionary—that is, those who do not follow it are excluded from salvation. Finally, a religion is separative—that is, to belong to it, one must not belong to another. Dharma, however, does not necessarily imply any of these. Having made this point, this article will bow to convention and use the expression Hinduism. A – The Dharmic Tradition Dharma is an all-important concept for Hindus. In addition to tradition and moral order, it also signifies the path of knowledge and correct action. Because of Hinduism’s emphasis on living in accordance with dharma, anyone who is striving for spiritual knowledge and seeking the right course of ethical action is, in the broadest sense, a follower of sanātana dharma. Buddhism, Jainism, and Sikhism share with Hinduism the concept of dharma along with other key concepts, and the four religions may be said to belong to the dharmic tradition. At one level Hinduism can refer to the beliefs or practices of followers of any of the dharmic traditions. The word Hinduism retains this sense in some usages in the Indian Constitution of 1950. In the field of religious studies, however, Hinduism is used in a narrower sense to distinguish it from the other religions of Indian origin. A Hindu is thus identified by a dual exclusion. A Hindu is someone who does not subscribe to a religion of non-Indian origin, and who does not claim to belong exclusively to another religion of Indian origin—Buddhism, Jainism, or Sikhism. This effort at definition produces a rather artificial distinction between Hinduism and other dharmic traditions, which stems from an attempt to limit a system that sees itself as universal to an identity that is strictly religious. In many ways, labeling the other dharmic traditions as non-Hindu has a basis that derives more from politics than from philosophy. Indeed, greater differences of belief and practices lie within the broad family labeled as Hinduism than distinguish Hinduism from other dharmic systems. Indian historian Irfan Habib makes this point when he quotes an early Persian source that Hindus are those who have been debating with each other within a common framework for centuries. 
If they recognize another as somebody whom they can either support or oppose intelligibly, then both are Hindus. Despite the fact that Jains reject many Hindu beliefs, Jains and Hindus can still debate and thus Jains are Hindus. But such discourse does not take place between Hindus and Muslims because they do not share any basic terms. B – Sanātana Dharma Evidence from inscriptions indicates that Hindus had begun to use the word dharma for their religion by the 7th century. After other religions of Indian origin also began to use this term, Hindus then adopted the expression sanātana dharma to distinguish their dharma from others. The word sanātana, meaning immemorial as well as eternal, emphasized the unbroken continuity of the Hindu tradition in contrast to the other dharmas. The Buddhist, Jaina, and Sikh dharmas possess distinct starting points, whereas Hinduism has no historical founder. The Hindu tradition might be said to begin in the 4th century BC when the growth and separation of Buddhism and Jainism provided it with a distinctive sense of identity as sanātana dharma. Some scholars prefer to date its beginnings to about 1500 BC, the period when its earliest sacred texts originated, although recent evidence suggests these texts may be even older. Certain beliefs and practices that can clearly be identified as Hindu—such as the worship of sacred trees and the mother goddess—go back to a culture known as Harappan, which flourished around 3000 BC. Other Hindu practices are even older. For example, belief in the religious significance of the new and full moon can be traced to the distant proto-Australoid period, before 3000 BC. It is with good reason that Hinduism perceives itself as sanātana dharma or a cumulative tradition. Its origins are shrouded in the mist of antiquity, and it has continued without a break. C – A Comprehensive and Universal Tradition The Hindu tradition aims at comprehensiveness so far as religious beliefs and practices are concerned. First, it wishes to make the riches of Hinduism available to the Hindu and to any genuine seeker of truth and knowledge. But it does not limit Hindus to their tradition. Instead, it encourages them to explore all avenues that would lead to a realization of the divine, and it provides a system with many paths for such realization. Second, in the manner of science, Hinduism is constantly experimenting with and assimilating new ideas. Also like science, it is far less concerned with the origin or history of ideas than with their truth as demonstrated through direct experience. Hinduism’s openness to new ideas, teachers, and practices, and its desire for universality rather than exclusivity, set it apart from religions that distinguish their followers by their belief in particular historical events, people, or revelations. Two events in the life of Mohandas Gandhi exemplify aspects of the Hindu tradition. First, Gandhi entitled his autobiography The Story of My Experiments with Truth (1929). In doing so, he was practicing the Hindu willingness to experiment continually as a means of discovering truth and to record the results of such experiments. Although Gandhi was seeking spiritual truth, he approached it in the spirit of science. 
Second, when asked, “What is your religion?” in 1936, Gandhi answered, “My religion is Hinduism, which for me is the Religion of humanity and includes the best of all religions known to me.” Saintly figures such as Gandhi have periodically renewed Hinduism throughout its history and kept it abreast of the times. Because Hinduism has no central orthodoxy, and no belief in the need for one, renewal of its tradition has invariably come from sages in every age who base their knowledge on experience of the divine. Published originally at http://sankrant.org/2008/09/hinduism/. Reproduced with permission from @sankrant.
https://wayofdharma.com/2016/08/13/what-is-hinduism/
Roger Anguera was going to study biology when, in the last year of high school, he saw a VR headset for the first time. That changed the direction of his studies, and he joined the newly created Multimedia Engineering program at Ramon Llull University in Barcelona. During his studies he became interested in subjects such as Human Computer Interaction, Interactive Media Installations, Sound and Image Editing, Video Games, and everything related to the boundaries between Art, Technology, and Science. In his previous jobs, Roger Anguera worked as a developer in several programming languages, as a web project manager and analyst, and as a Creative Technologist developing for exhibitions, museums, and fairs. In the summer of 2013 he joined the Gazzaley Lab on a private grant as a multimedia developer, helping the lab gamify some of its experiments. Since December 2013 he has been a full-time member of the lab in charge of interaction design, scientific imaging, and video game development, using a variety of technologies and gadgets in fields such as motion capture, biofeedback, virtual reality, and augmented reality.
https://neuroscape.ucsf.edu/profile/roger-anguera/
Introduction {#ss1}
============

Since the first report of 2009 H1N1 influenza infection in Mexico,[^1^](#b1){ref-type="ref"} the infection spread rapidly around the world, leading to a pandemic. According to the May 2010 World Health Organization report, H1N1 infections occurred in more than 214 countries and brought about more than 18 000 deaths.[^2^](#b2){ref-type="ref"} Pneumonia, mixed bacterial infection, and aggravation of underlying conditions such as heart failure are well-known complications of influenza.[^3^](#b3){ref-type="ref"}, [^4^](#b4){ref-type="ref"}, [^5^](#b5){ref-type="ref"}, [^6^](#b6){ref-type="ref"} Among these complications, mixed bacterial infection in particular is known to increase the mortality and morbidity of influenza.[^7^](#b7){ref-type="ref"}, [^8^](#b8){ref-type="ref"} As pandemic influenza strains generally cause self-limiting illnesses, it is crucial to accurately diagnose concurrent mixed bacterial infections. An ideal inflammatory biomarker should discriminate accurately between infectious and non-infectious disease, aid early detection, and be easy to apply. Classic inflammatory mediators, including tumor necrosis factor (TNF)-α, interleukin (IL)-6, and C-reactive protein (CRP), and more recently developed markers such as procalcitonin (PCT), are widely used in the diagnosis of infectious and inflammatory diseases. PCT levels are known to be higher in bacterial, fungal, and parasitic infections than in viral infections,[^9^](#b9){ref-type="ref"} and this has led to PCT being used as a guide to antibiotic treatment in community-acquired pneumonia and acute exacerbations of chronic obstructive pulmonary disease.[^10^](#b10){ref-type="ref"}, [^11^](#b11){ref-type="ref"} Several studies regarding the use of inflammatory markers to predict mixed bacterial infection in 2009 H1N1 pneumonia exist,[^12^](#b12){ref-type="ref"}, [^13^](#b13){ref-type="ref"}, [^14^](#b14){ref-type="ref"} but the results are not consistent, especially for CRP, and studies in emergency departments (EDs) are lacking. The aim of the present study was to evaluate the role of two serum inflammatory markers, alone and in combination, in discriminating pneumonia caused by mixed bacterial and 2009 H1N1 influenza infection from pneumonia caused by 2009 H1N1 influenza infection alone.

Materials and methods {#ss2}
=====================

Study design {#ss3}
------------

This was a retrospective study of adult patients, 18 years and older, who visited the emergency department of a tertiary-care hospital in Seoul, Korea, during the 2009 H1N1 pandemic. Patients were eligible if they were diagnosed as having community-acquired pneumonia caused by 2009 H1N1 influenza infection between August 2009 and February 2010. Patients transferred from other institutions with prior antibiotic administration were excluded from this study. Upon arrival at the ED, laboratory exams including serum PCT and CRP, blood and sputum cultures, Gram stains, and chest X-rays were performed. The 2009 H1N1 influenza infection was confirmed by real-time reverse transcriptase polymerase chain reaction (RT-PCR) on nasopharyngeal swabs.[^15^](#b15){ref-type="ref"} The pneumonia severity index (PSI) was calculated for risk stratification. Serum PCT was measured by the VIDAS BRAHMS enzyme-linked fluorescence assay (measurement range 0·05--200 ng/ml; bioMerieux, Lyon, France), and CRP was measured with an automated multichannel analyzer (model TBA-30FR; Toshiba, Saitama, Japan).
Treatment outcomes were classified as improvement to discharge or death following admission.

Definition {#ss4}
----------

Community-acquired pneumonia was diagnosed when the patient had respiratory symptoms with lung infiltration on chest X-ray and rales on auscultation.[^10^](#b10){ref-type="ref"} Patients with pneumonia who were positive for 2009 H1N1 influenza PCR were diagnosed as having 2009 H1N1 pneumonia. Patients with pneumonia who were both positive for 2009 H1N1 influenza PCR and had detectable bacterial pathogens were defined as having pneumonia caused by mixed infection. The presence of bacterial pathogens was confirmed by positive Gram staining in respiratory samples, a pathogen concentration \>10^5^ colony-forming units/ml in tracheobronchial aspirates, or a blood culture revealing a bacterial pathogen in the absence of an extrapulmonary focus.[^16^](#b16){ref-type="ref"} Exams for atypical pathogens were performed by serum enzyme immunoassay for mycoplasma antibodies, the cold agglutinin test, and the legionella urine antigen test. Organisms detected in sputum were considered pathogenic when the sputum sample was qualified, with \<10 epithelial cells and more than 25 leukocytes per low-power field.[^10^](#b10){ref-type="ref"}

Statistical analysis {#ss5}
--------------------

Data are presented as mean ± standard deviation (SD) for continuous variables and as absolute or relative frequencies for categorical variables. Univariate analyses using contingency tables and basic descriptive statistics were performed. Statistical analysis was carried out using the chi-squared test for nominal data, the Mann--Whitney test for the medians of non-parametric data, and Student's *t*-test for parametric data. To demarcate the PCT and CRP cutoff values for distinguishing mixed infection pneumonia from 2009 H1N1 viral pneumonia, receiver-operating characteristic (ROC) curve analysis was carried out. All statistical analyses were performed with [spss]{.smallcaps} for Windows 11.0.1 (SPSS Inc, Chicago, IL, USA). All reported *P*-values are two-tailed, and *P*-values \<0·05 were considered statistically significant.

Results {#ss6}
=======

Baseline characteristics of total patients {#ss7}
------------------------------------------

During the study period, a total of 96 patients were diagnosed as having community-acquired pneumonia, and 60 of them were positive for the 2009 H1N1 influenza virus. Of these 60 patients, 44 had pneumonia caused by 2009 H1N1 infection alone and 16 had pneumonia caused by mixed bacterial infection. The most common bacterial organism in mixed bacterial infection pneumonia was *Streptococcus pneumoniae*, followed by *Staphylococcus aureus* and *Pseudomonas aeruginosa* ([Table 1](#t1){ref-type="table-wrap"}). The numbers of specimens revealing causative organisms were 10 blood cultures, five sputum cultures, and four urine pneumococcal antigen samples. The mean age of the 60 patients was 49·4 ± 18·9 years, and 33 (55·0%) were men. Ten (16·7%) patients died following admission. The mean initial PCT concentration was 4·3 ± 11·6 ng/ml, and the mean CRP was 10·6 ± 9·9 mg/dl. The initial serum white blood cell (WBC) count was 9·2 ± 5·1 (×10^3^/mm^3^).
######  Bacterial pathogens of mixed infection pneumonia

| Pathogen | Number |
|----------------------------|--------|
| *Streptococcus pneumoniae* | 6 |
| *Staphylococcus aureus* | 4 |
| *Pseudomonas aeruginosa* | 3 |
| *Haemophilus influenzae* | 2 |
| *Klebsiella pneumoniae* | 1 |

Comparison between 2009 H1N1 pneumonia and mixed infection {#ss8}
----------------------------------------------------------

Baseline characteristics, laboratory findings, and clinical presentations of 2009 H1N1 pneumonia and pneumonia with mixed bacterial infection were compared. Patients with 2009 H1N1 pneumonia were younger, but the difference was not statistically significant. No difference in gender was found between the two groups, and vital signs and radiologic findings were not significantly different. The median PCT value was significantly higher in patients with pneumonia caused by mixed bacterial infection than in patients with 2009 H1N1 pneumonia (3·45 versus 0·15 ng/ml, *P* = 0·019). The median CRP value was also significantly higher in the mixed infection group than in the H1N1 infection group (14·8 versus 4·6 mg/dl, *P* = 0·022) ([Table 2](#t2){ref-type="table-wrap"}). Other laboratory findings were similar in the two groups. A significantly higher proportion of patients in the 2009 H1N1 pneumonia group (77·3%) exhibited cough compared to the mixed infection group (43·8%, *P* = 0·014), but other clinical presentations did not differ significantly. The mortality rate was higher in the mixed infection group than in the H1N1 influenza group (25·0% versus 13·6%), but this difference was not statistically significant.

######  Comparison of characteristics and parameters between the 2009 H1N1 pneumonia and mixed bacterial and 2009 H1N1 influenza infection

| | 2009 H1N1 (*n* = 44) | Mixed infection (*n* = 16) | *P*-value |
|---|---|---|---|
| Age (years), mean ± SD | 47·2 ± 20·0 | 55·5 ± 14·5 | 0·070 |
| Male, *n* (%) | 24 (54·5) | 9 (56·3) | 0·907 |
| Underlying disease, *n* (%) | 22 (64·7) | 48 (77·4) | 0·180 |
|  Hypertension | 9 (20·5) | 3 (18·8) | 0·884 |
|  Diabetes | 13 (29·5) | 8 (50·0) | 0·142 |
|  Chronic lung disease | 20 (45·5) | 10 (62·5) | 0·243 |
|  Malignancy | 16 (36·4) | 9 (56·3) | 0·167 |
| Clinical presentations, *n* (%) | | | |
|  Sore throat | 13 (29·5) | 7 (43·8) | 0·302 |
|  Rhinorrhea | 24 (54·5) | 9 (56·3) | 0·907 |
|  Headache | 20 (45·5) | 7 (43·8) | 0·907 |
|  Cough | 34 (77·3) | 7 (43·8) | 0·014\* |
|  Myalgia | 17 (38·6) | 9 (56·3) | 0·223 |
|  Nausea/vomiting | 17 (38·6) | 7 (43·8) | 0·721 |
|  Diarrhea | 14 (31·8) | 9 (56·3) | 0·085 |
| Vital signs, mean ± SD | | | |
|  SBP (mmHg) | 121·4 ± 18·1 | 120·5 ± 23·2 | 0·785 |
|  DBP (mmHg) | 71·8 ± 14·8 | 71·6 ± 14·5 | 0·621 |
|  RR (/min) | 23·8 ± 6·2 | 22·8 ± 7·2 | 0·776 |
|  PR (/min) | 114·2 ± 24·0 | 105·8 ± 22·5 | 0·092 |
|  BT (°C) | 38·0 ± 1·0 | 38·2 ± 1·1 | 0·321 |
|  SpO~2~ (%) | 90·7 ± 13·8 | 91·3 ± 9·1 | 0·935 |
| Initial laboratory findings, median (range) | | | |
|  WBC (×10^3^/mm^3^) | 8·9 (0·8--22·0) | 8·6 (0·9--19·2) | 0·616 |
|  ANC (cells/mm^3^) | 6160 (0--18410) | 6295 (558--17310) | 0·786 |
|  Lymphocyte (%) | 13·5 (1·2--97·6) | 10·9 (2·2--46·8) | 0·927 |
|  Platelet (×10^3^/mm^3^) | 184·0 (11--402) | 162·0 (8--285) | 0·256 |
|  Procalcitonin (ng/ml) | 0·15 (0·05--44·4) | 3·45 (0·05--65·1) | 0·019\* |
|  C-reactive protein (mg/dl) | 4·6 (0·13--43·0) | 14·8 (1·21--34·6) | 0·022\* |
|  PaO~2~ (mmHg) | 69·5 (56·8--80·0) | 63·5 (51·5--85·0) | 0·703 |
|  pH | 7·5 (7·4--7·5) | 7·4 (7·4--7·5) | 0·304 |
| Radiologic findings | | | |
|  Bilateral infiltration, *n* (%) | 18 (40·9) | 9 (56·3) | 0·132 |
|  Pleural effusion, *n* (%) | 9 (20·5) | 2 (12·5) | 0·382 |
| PSI, median (range) | 59·0 (8--155) | 59·0 (13--192) | 0·423 |
| Death following admission, *n* (%) | 6 (13·6) | 4 (25·0) | 0·296 |

SD, standard deviation; SBP, systolic blood pressure; DBP, diastolic blood pressure; RR, respiratory rate; PR, pulse rate; BT, body temperature; SpO~2~, oxygen saturation; WBC, white blood cell; ANC, absolute neutrophil count; PSI, pneumonia severity index. \**P* \< 0·05.

In ROC curve analysis of PCT, the area under the curve was 0·698 \[95% confidence interval (CI) 0·523--0·873\]. A PCT cutoff of \>1·5 ng/ml best identified patients with mixed infection pneumonia (sensitivity 56%, specificity 84%, positive predictive value 56%, and negative predictive value 84%) ([Figure 1](#f1){ref-type="fig"}). For CRP, the area under the curve was 0·696 (95% CI 0·539--0·852), and a cutoff of \>10 mg/dl best identified patients with mixed infection pneumonia (sensitivity 69%, specificity 63%, positive predictive value 41%, and negative predictive value 54%) ([Figure 1](#f1){ref-type="fig"}). The distributions of PCT and CRP in pneumonia caused by 2009 H1N1 infection alone and pneumonia caused by mixed infection are also depicted ([Figure 2](#f2){ref-type="fig"}). When PCT and CRP concentrations were considered together, the accuracy of the diagnostic criteria for the detection of mixed infection pneumonia was as follows: sensitivity, 50%; specificity, 93%; positive predictive value, 73%; and negative predictive value, 84% ([Table 3](#t3){ref-type="table-wrap"}).

![Receiver-operating characteristic curve for discriminating between 2009 H1N1 pneumonia and mixed infection pneumonia for procalcitonin and C-reactive protein (CRP) on initial emergency department visit \[area under curve 0·698 (95% confidence interval 0·523--0·873) for procalcitonin, 0·696 (95% confidence interval 0·539--0·852) for CRP\].](IRV-5-0398-g001){#f1}

![Box plot of procalcitonin and C-reactive protein levels on initial emergency department visit for 2009 H1N1 pneumonia and mixed infection pneumonia. ○: minor outliers; \*: extreme outliers.](IRV-5-0398-g002){#f2}

######  Accuracy of diagnostic parameters

| | CRP \>10 mg/dl | PCT \>1·5 ng/ml | CRP \>10 mg/dl & PCT \>1·5 ng/ml |
|---|---|---|---|
| Sensitivity | 69% | 56% | 50% |
| Specificity | 63% | 84% | 93% |
| PPV | 41% | 56% | 73% |
| NPV | 84% | 84% | 84% |
| Accuracy | 64% | 77% | 82% |
| LR+ | 1·86 | 3·5 | 7·14 |
| LR− | 0·49 | 0·52 | 0·54 |

CRP, C-reactive protein; PCT, procalcitonin; PPV, positive predictive value; NPV, negative predictive value; LR, likelihood ratio.

Discussion {#ss9}
==========

This study showed that the serum PCT concentration was significantly higher in patients with mixed infection pneumonia than in those with 2009 H1N1 infection alone, indicating that this marker could be useful in discriminating between these conditions. Although the statistical evidence was weaker, CRP could also aid in discriminating mixed bacterial infection from viral pneumonia. When both criteria are considered together, owing to their high specificity and high negative predictive value, they may further improve the accuracy of discrimination between pneumonia caused by mixed infection and pneumonia caused by 2009 H1N1 influenza infection alone. Mixed bacterial infection is an important contributor to morbidity and mortality during influenza pandemics[^7^](#b7){ref-type="ref"}, [^8^](#b8){ref-type="ref"} and also during periods of seasonal influenza.[^17^](#b17){ref-type="ref"} This has been explained by an increased pathogenicity and virulence of bacterial organisms in such co-infection settings.[^18^](#b18){ref-type="ref"} Therefore, it is crucial to differentiate mixed bacterial infections from influenza viral infections.
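As a quick illustration of how the accuracy figures in Table 3 relate to one another, here is a minimal Python sketch that recomputes sensitivity, specificity, predictive values, and likelihood ratios from a 2×2 confusion matrix. The counts in the example are back-calculated from the reported percentages for the combined CRP-and-PCT criterion (TP = 8, FP = 3, FN = 8, TN = 41 over the 16 mixed-infection and 44 viral-pneumonia patients); they are an assumption, and small rounding differences from the published likelihood ratios are to be expected.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy metrics for a binary test."""
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv,
            "NPV": npv, "accuracy": acc, "LR+": lr_pos, "LR-": lr_neg}

# Counts back-calculated (an assumption) from the combined criterion
for name, value in diagnostic_metrics(tp=8, fp=3, fn=8, tn=41).items():
    print(f"{name}: {value:.2f}")
```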
In our study, the cutoff values that best differentiated mixed bacterial infection pneumonia from viral pneumonia were 1·5 ng/ml for PCT and 10 mg/dl for CRP. The cutoff value for PCT for discriminating bacterial infections from viral infections varies among previous studies: 0·4 ng/ml in a study by Chirouze *et al.*,[^19^](#b19){ref-type="ref"} 1·0 ng/ml in a study of severe acute respiratory syndrome by Chua *et al.*,[^20^](#b20){ref-type="ref"} and 0·8 ng/ml in a study by Ingram *et al.*[^13^](#b13){ref-type="ref"} The variety in reported cutoff values may be attributed to the large overlap in PCT concentrations between viral and bacterial infections.[^21^](#b21){ref-type="ref"} The utility of PCT and CRP as biomarkers has been discussed in various studies.[^22^](#b22){ref-type="ref"}, [^23^](#b23){ref-type="ref"}, [^24^](#b24){ref-type="ref"} The role of PCT and CRP in discriminating bacterial/mixed infection from 2009 H1N1 pneumonia was reported in a recent study of 38 patients by Guervilly *et al.*,[^12^](#b12){ref-type="ref"} whose results showed that only PCT values were statistically higher in patients with mixed bacterial infections. They measured PCT values in the 24 hours after admission. With a cutoff of 0·5 ng/ml for PCT, the sensitivity, specificity, negative predictive value, and positive predictive value were 100%, 52·5%, 100%, and 42%, respectively. Our study, in comparison, examined the PCT and CRP values at the initial ED visit. Our data showed that both PCT and CRP were useful in discriminating viral infection from mixed bacterial infection, with cutoffs of 1·5 ng/ml for PCT and 10 mg/dl for CRP. This difference in cutoffs and marker performance may be attributed to differences between the two studies in the timing of measurement of the inflammatory markers and in total patient numbers. Our study included 60 patients, all of whom had PCT and CRP measured on arrival at the ED. As mixed bacterial infection is an important contributor to poor outcome in influenza, pertinent antibiotic administration is crucial. However, indiscriminate antibiotic use may lead to bacterial resistance and complications of the drug itself. Therefore, deciding when to stop empirical antibiotics during the course of influenza poses real problems. Wright *et al.*[^25^](#b25){ref-type="ref"} described diffuse infiltration on chest X-ray and leukopenia as favoring infection caused by influenza alone, and lobar infiltration and leukocytosis as favoring mixed bacterial infection. However, discriminating mixed bacterial infection from influenza infection by this method would not be possible in most cases during a pandemic. In our study, no significant differences in radiographic findings were found. Furthermore, WBC, absolute neutrophil count, and lymphocyte counts were not significantly different between the two groups. Therefore, clinicians cannot rely on radiographic findings or blood cell counts; inflammatory markers such as serum PCT and CRP concentrations are needed to aid in the diagnosis and discrimination of the different types of pneumonia. According to the management protocols for H1N1 influenza at our institute, all patients were initially administered oseltamivir and prophylactic antibiotics, and in this setting the combination of a low PCT and a low CRP may be useful in reducing the duration of antibiotic administration. This study has several limitations.
The sample size was small, and retrospective studies have inherent limitations. Recent studies have shown that when the serum PCT concentration is used to make decisions concerning antibiotic administration and duration, the trend in PCT values over time may be more important than the initial PCT level itself.[^26^](#b26){ref-type="ref"}, [^27^](#b27){ref-type="ref"} However, our study only evaluated the initial PCT values at the ED visit. PCT determination is not covered by medical insurance in Korea; therefore, cost was a prohibitive factor in the routine use of this biomarker during follow-up. We also acknowledge that, by limiting the mixed infection group to those in whom a microbiologically confirmed bacterial diagnosis was made, we may have created a bias. In fact, causative bacterial organisms cannot always be confirmed solely with respiratory samples, blood cultures, and immunoassay, and a lack of demonstrated bacterial etiology does not rule out a bacterial role.

Conclusion {#ss10}
==========

Our study demonstrated that the biological markers PCT and CRP, alone and in combination, had a moderate ability to detect pneumonia caused by mixed bacterial infection during the 2009 H1N1 pandemic. Because of their high specificity, PCT and CRP used in combination may discriminate pneumonia caused by mixed infection from pneumonia caused by viral infection alone. This may then aid clinicians in more accurately identifying those patients in whom administration of empirical antibiotics could be stopped. Further prospective studies to validate this result are required.
Carnivorous Plants of the United States and Canada by Donald E. Schnell. Timber Press, Inc., 2002. ISBN 0881925403. It was a pleasure to see that Dr. Schnell has updated his earlier volume, Carnivorous Plants, first published by Blair Press in 1976. Both editions have been the only books devoted to the native carnivorous plants of the United States. The newer, updated second edition of Carnivorous Plants of the United States and Canada is an expansion of the previous one, including general notes and a short discussion on cultivation. Schnell delves into what a "true" carnivorous plant is and in what type of habitat each species thrives. He describes the different methods of trapping and how each species attracts its prey. Each species requires low-nutrient acidic soils and bright sunlight, and some species may also be dependent on fire. Many species live in or are associated with the acid-loving Sphagnum moss, much of which grows in savannas, bogs, and seepage areas. Schnell describes each of the forty-five species in detail and includes range maps and information on cultivation, pollination, and animal associates. Hybrids exist for some species where their ranges overlap. Dr. Schnell stresses the need for conservation, not only for the plants themselves but for the ecosystem in which they live. Many of the areas that once harbored these species have been greatly reduced by the ditching and draining of boggy areas, over-collecting, improper burning practices, and outright destruction of habitat. Many of the species are threatened or endangered in states where they once thrived.
In this article, I will cover how to build a basic movie recommendation system with an integrated graphical user interface. First and foremost, we will need data. In order to get a good idea of how well the recommendation system actually performs, we will need a sizable dataset. The dataset consists of six .csv files and a readme file explaining the dataset. Feel free to have a look at it if you wish. We will only be using these three: movies.csv, ratings.csv, and tags.csv. A couple of Python libraries will also be required and should be installed if you do not have them yet: numpy, pandas, progress (pip install progress), fuzzywuzzy (pip install fuzzywuzzy & pip install python-Levenshtein), and easygui. I believe they can all be pip installed; the exact commands will be OS dependent. Once we have the data and the libraries installed, we are good to go. Any Python IDE should work; I use Geany, which is a lightweight IDE for Raspbian. A quick peek at the dataset: the movies.csv file has 3 columns, namely movieId, title, and genres. All very handy and straightforward, and we will be using all 3. Then there is the tags.csv file. Here we will only be using the 'movieId' and 'tag' columns, which link each tag to the 'movieId' column also found in movies.csv and ratings.csv. A minimal sketch of these first steps follows below.
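Here is that sketch: loading the three CSVs with pandas and resolving a free-text title against the canonical titles in movies.csv with fuzzywuzzy, as the article describes. The file paths and the `match_title` helper name are my own assumptions (the MovieLens files are assumed to sit in the working directory), not code from the original article.

```python
import pandas as pd
from fuzzywuzzy import process  # pip install fuzzywuzzy python-Levenshtein

# Load the three files used in the article (paths are an assumption)
movies = pd.read_csv("movies.csv")    # movieId, title, genres
ratings = pd.read_csv("ratings.csv")  # userId, movieId, rating, timestamp
tags = pd.read_csv("tags.csv")        # userId, movieId, tag, timestamp

# tags and ratings link back to movies via the shared movieId column
tagged = tags[["movieId", "tag"]].merge(movies[["movieId", "title"]], on="movieId")

def match_title(query, limit=3):
    """Fuzzy-match a user-typed title against the canonical titles in
    movies.csv; returns (title, score) pairs, best match first."""
    return process.extract(query, movies["title"].tolist(), limit=limit)

print(match_title("toy story"))  # e.g. [('Toy Story (1995)', 90), ...]
```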
https://morioh.com/p/98ead40b6d7b
I am a Licensed Clinical Social Worker and NYS psychotherapist specializing in individual and family therapy, maintaining a private practice based in Melville, NY. My practice specializes in families, children, and young adults struggling with behavioral challenges that interfere with social, emotional, and academic growth. With over 15 years practicing psychotherapy, I can offer individuals and families a unique approach to resolving issues stemming from what I identify as "chronically inflexible" behavior. Throughout my career, I have worked with children and young adults who suffer from a wide range of neurobiological, emotional, and personality disorders, and I have identified successful strategies for working with the individual and family in home, school, and social settings. This could include working with a young adult to help them navigate a social situation, with a teen who typically struggles to clean their room, or with a couple, guiding them through a conversation about the effects of behaviors on their relationship. This functional approach to therapy, which includes but is not limited to Collaborative Problem Solving, Cognitive Behavioral Therapy, Social Thinking, and other therapeutic modalities, is individualized to suit the needs and goals of the individual and the people who care about them the most.
https://www.danielselmer.com/about-me/
Barbara Slusher is an Associate Professor of Neurology and Psychiatry and the Director of the Brain Science Institute (BSI) NeuroTranslational Drug Discovery Program. Before joining Johns Hopkins in September 2009, Dr. Slusher spent 18 years in the pharmaceutical industry, holding positions at companies including ICI Pharmaceuticals, Zeneca (now AstraZeneca), Guilford Pharmaceuticals, MGI Pharma, and the Eisai Research Institute, with several years at the level of Senior Vice President of Research and Translational Development. She has extensive experience in drug discovery through Phase I/IIa clinical development and has participated in multiple FDA meetings and both IND and NDA regulatory filings. She has also been involved in the successful development, launch and post-marketing support of four currently marketed medicines. At Johns Hopkins, Dr. Slusher leads the largest integrated drug discovery program on campus, responsible for translating basic science discoveries at Hopkins into new drug therapies for neurological and psychiatric disorders. Dr. Slusher is a member of the JHU NIMH Center for Novel Therapeutics of HIV-associated Cognitive Disorders, and serves as the Director of its Therapeutics Development Core. Dr. Slusher has published over 120 scientific articles and is the inventor on over 50 issued patents. She is a co-founder of Cerecor, a new biotechnology company that will commercialize Johns Hopkins drug discovery interventions, and is leading a national consortium of academic drug discovery scientists. Dr. Slusher received her Ph.D. in Pharmacology and Molecular Sciences from the Johns Hopkins School of Medicine and a Master's degree in Management from the Johns Hopkins Carey School of Business. Dr. Slusher's extensive track record in drug development, as well as her strong ties to industry, make her an exceptional leader for the Drugs/Biologics/Vaccines/Devices Translational Research Community.
https://ictr.johnshopkins.edu/collaboration/collaborations/translational-research-communities/barbara-slusher/
- Evaluate the design and operating effectiveness of technology controls (GITCs and ITACs) pertaining to clients' internal controls over financial reporting.
- Conduct process-understanding discussions with clients as part of assessing the risks arising from their use of technology, and identify control gaps within their processes.
- Perform SOC 1 and SOC 2 (System and Organization Controls) assessments in accordance with the attestation standards established by the AICPA (American Institute of Certified Public Accountants).
- Identify potential opportunities to drive standardization and efficiency across engagements through the use of automation.
Prior experience:
- The candidate must have a minimum of 2-4 years of experience in a similar role [Big 4 experience preferred].
- Working knowledge of frameworks including COSO, COBIT, ISO 27001, NIST CSF and NIST SP 800-53 is desirable.
- Project and team management exposure.
- Knowledge of security measures and auditing practices within various operating systems, databases and applications.
- Experience in assessing risks across a variety of business processes.
- Experience in identifying control gaps and communicating audit findings and control redesign recommendations to senior management and clients.
- Experience in evaluating SOC 1 reports for user organizations.
- Knowledge of business continuity and disaster recovery best practices.
- Knowledge of regulations impacting the privacy, integrity and availability of customer PII.
- Experience leading IT audit engagements.
- Technical skills: prior experience in evaluating the design and operating effectiveness of technology controls over varied IT platforms, including ERP suites, Windows, Unix/Linux, iSeries, Oracle Database, DB2 and SQL.
https://www.iimjobs.com/j/executive-senior-executive-assistant-manager-it-audit-assurance-2-8-yrs-813105.html?ref=pp
Special Issue: 'Italian Ecomedia: Archaeologies, Theories, Perspectives'
Please submit a 500-word proposal in English of original and unpublished research, outlining the topic, approach and theoretical bases, together with a filmography and bibliography, and a bio-note of about 150 words with a detailed list of publications, to Prof. Alessia Cervini (alessia.cerviniATunipa.it) and Prof. Giacomo Tagliani (giacomo.taglianiATunipa.it) by 5 September 2022. The outcome of the selection process will be communicated by 15 September 2022. Authors of the selected proposals will be invited to submit full-length articles by 8 January 2023 for double-blind peer review. Authors will be notified of the results of the peer review by 15 March 2023. In the last 25 years, the environmental humanities have gained a prominent position in academic research. Their core consists in providing historical, political and critical perspectives on topics traditionally pertaining to the STEM disciplines, such as extinction, species resurrection, biodiversity, rewilding, urban-wildland interfaces, land development and resource use (Hubbel and Ryan 2022), as specific questions emerging from the Anthropocene (Iovino and Opperman 2016; Emmet and Nye 2017). Reflecting on such topics from a humanities point of view means investigating their social and cultural implications (Morton 2010; Malm 2021), the narratives behind them, their political and semiotic effects and the imaginaries they elicit. It also creates beneficial interactions between established disciplinary domains such as philosophy, geography, history, literary and visual studies. Cinema and media studies are profoundly affected by this environmental turn, mainly from a thematic or a production studies perspective (Ingram 2000; Ivakhiv 2013). However, reflecting on the potentialities of images to create, broaden and develop specific aesthetic trajectories is a compelling task in understanding how the environmental question is transforming present audio-visual language and, in turn, how this very language could influence the environmental debate (Cubitt, Monani and Rust 2013). This Special Issue of the Journal of Italian Cinema & Media Studies aims to foster a transdisciplinary dialogue about the different forms through which the vast domain of 'green discourse' has been tackled by Italian cinema and media from a critical–aesthetic perspective. Despite its alternate fortunes, the environmental question has a long history in national audio-visual production, beginning with the 'miracolo economico' ('economic miracle', 1958–63), as evidenced for instance by Ivens's TV film L'Italia non è un paese povero ('Italy is not a poor country') (1960). It was developed in subsequent decades in both fiction and non-fiction productions, such as Ferreri's Il seme dell'uomo (The Seed of Man) (1973), De Seta's In Calabria (1993), and Vicari's documentary Il mio paese (My Country) (2006). Recently, some important scholarly contributions have started to consistently investigate this history (Past 2019), which, however, remains to be fully explored in terms of its aesthetic, political, theoretical and critical implications. From this point of view, the Italian case can be considered exemplary for several reasons (Iovino, Cesaretti and Past 2018).
First, Italian cinema has developed a long and original tradition in the depiction of and reflection on landscape, at least since neorealism (Bernardi 2002), as evidenced by films such as Rossellini’s Paisà (Paisan) (1946), Antonioni’s Il deserto rosso (Red Desert) (1964), and Frammartino’s Il buco (The Hole) (2021). Second, from at least the 1960s, Italy has conferred great yet ambivalent relevance to the environment, something that is both praised as a key asset due to its unique conditions and irremediably wasted, making it a place generating categories for investigating other experiences in other places (Iovino 2016). Media have widely contributed to this relevance, as testified by Ghirri’s Paesaggio italiano (‘Italian landscape’) (1980–89), Quilici’s L’Italia vista dal cielo (‘Italy seen from the sky’) (1966–78) or TV programmes such as Geo (1984–present). Third, as a result of specific historical, geographical and cultural conditions (e.g. ‘failed modernity’ or the north–south divide), Italy has developed an inner diversity that has shaped the relationship between subjects and environment in ways that are different from other countries (Armiero 2021). This diversity has been a recurrent issue for audio-visual objects dealing with the Italian landscape, as demonstrated for instance by the TV programme Meraviglie: La penisola dei tesori (‘Wonders: Treasure peninsula’) (2018–22) or by the recent editorial project Paesaggio Italia (‘Landscape Italy’) (2022) by National Geographic and la Repubblica. Within such a methodological framework, this call for papers invites contributions that can address this topic through two different approaches. First, we welcome examples that can contribute to starting an ‘archaeology’ of environmentalism and sustainability in Italian film and media of the twentieth century. Contributors are invited to submit proposals about specific case studies seeking to identify topics linked to ecocriticism across Italian visual culture. The aim is to start creating an atlas that critically collects the visual signs of the Anthropocene, highlighting the role of these objects in the narratives and society of the period or their relevance for the present collective imaginary. Second, contributors are encouraged to propose broader reflections about the theoretical specificity of Italian film and media for the field of the environmental humanities. In this case, proposals should deal with theoretical and methodological questions addressing the role of Italian visual studies in contributing to a more accurate definition of this new interdisciplinary field of research, as well as the importance of the environmental humanities in redesigning the trajectories, objects and priorities of Italian studies. The Special Issue is conceived as a first survey of investigations concerning the particular aesthetics of sustainability as developed by audio-visual objects and practices. Given the heterogeneous set of questions and perspectives arising from this topic, the call welcomes proposals from different disciplinary approaches (such as film and media studies, environmental studies, critical theory, postcolonial studies, semiotics, aesthetics), including analyses of different media formats (such as cinema, television, new and digital media, videoart). The guest editors welcome submissions that cover, but are not limited to, the following media, subjects and topics: • Documentaries dealing with climate change or sustainable practices. 
How have climate struggles been depicted in non-fiction production? What forms, genres, tropes and narratives have been employed? How have documentary films narrated sustainability and its broader environmental, economic, social and political implications?
• Feature films narrating particular relationships between subjects and environment. What has been the role of the landscape in shaping Italian audio-visual narratives? Is it possible to detect signs of concern for the irreversible transformation of the landscape during and after the economic miracle in Italian cinema?
• TV programmes about nature and the environment. How is nature depicted on television? What rhetoric is employed in conveying a particular image of the environment and the landscape?
• New media and video-activist practices representing and disseminating exemplary experiences of resistance and resilience. Can video be a fundamental instrument for climate struggles? Is it something belonging only to the present, or is it possible to trace a history of this relationship? Are forms and formats somehow connected with the topic addressed?
• Ecology and sustainability. Are there specific relationships between media ecology (Postman 1985) and media sustainability (Starosielski and Walker 2016)? To what extent do media help us in connecting the theoretical and political meanings of these two terms?
• Topics of denunciation. Are there recurrent narrative and visual strategies to expose the damage to the environment caused by industrialization or criminality?
• Topics of magnification. How have the beauty of nature and the landscape been portrayed by Italian audio-visual media?
• Imagining the future. Are there examples within the history of Italian visual culture that attempt to 'premediate' (Grusin 2011) future relationships between humans and the environment? What are the most important concerns emerging in these images?
• Authors. Have there been authors in the audio-visual field whose work consistently engages with environmental issues? What are the distinctive aesthetic features of their approach that make them exemplary from any perspective?
• Periods and movements. Are there periods or movements within film and TV history that have specifically dealt with environmental issues? What is their most important heritage for other periods or experiences, in Italy or abroad?
• Genres. Are there privileged genres in the audio-visual field dealing with environmental issues? Is ecocriticism a genre per se?
• Practices. How do media practices affect the material conditions of living in terms of social, economic and cultural sustainability within given communities? What is the role of film festivals and exhibitions in spreading a particular idea of sustainability in practical and aesthetic terms?
Guest editors: Alessia Cervini, University of Palermo, Italy (alessia.cerviniATunipa.it); Giacomo Tagliani, University of Palermo, Italy (giacomo.taglianiATunipa.it). Principal Editor:
https://nordmedianetwork.org/latest/call-for-papers/italian-ecomedia-archaeologies-theories-perspectives/
About This Source - Reuters. Reuters is an international news organization owned by Thomson Reuters. It employs some 2,500 journalists and 600 photojournalists in about 200 locations worldwide. The agency was established in London in 1851 by the German-born Paul Reuter. Reuters published this video item, entitled "Luxury cars, EVs to fuel Hyundai's sales in 2021; Q4 profit jumps" – below is their description. Hyundai Motor said on Tuesday it expects sales in the United States and China to surge this year, driven by the launch of new electric cars and sport utility vehicles (SUVs), after reporting its best quarterly profit in over three years.
https://theglobalherald.com/news/luxury-cars-evs-to-fuel-hyundais-sales-in-2021-q4-profit-jumps/
In line with Government advice, all AspinallVerdi staff will now be working from home over the coming weeks to protect their health and the health of their families and colleagues. This is also a business continuity measure, and we remain committed to delivering for our clients. We seek to maintain a normal service and ensure that projects are delivered effectively. The Company has from its inception invested in IT equipment and systems which facilitate remote working and allow us to communicate with clients and colleagues as normal. These systems enable our colleagues to continue to work effectively on projects as required. We hope that clients will understand that it may be necessary to reduce or rearrange in-person meetings, and we will communicate with you in advance should new arrangements be necessary. Please do continue your communication with us in the usual way. Alternatively, please call one of the Directors below:
Leeds - 0113 243 6644: Ben Aspinall – 07956 315142; Atam Verdi – 07956 315139
London - 0207 183 7580: Parm Dosanjh - 07432 716138; Stuart Cook – 07876 576307
Liverpool - 0151 329 2929:
http://www.aspinallverdi.co.uk/blog/2020/covid-19-announcement
Abstract: Image restoration is an important approach to image and video defogging. One of the most popular algorithms for image restoration is the dark channel prior. However, when the algorithm is applied to outdoor digital webcams with limited computing resources, its real-time performance cannot be guaranteed. To address this issue, this paper presents a fast video haze removal algorithm based on mixed optimised transmissivity, to improve the time performance of the dark channel prior algorithm. The proposed algorithm combines a guided filter and a median filter and replaces the soft matting procedure in the classical dark channel prior algorithm. A set of experiments is performed to evaluate the real-time performance and effectiveness of the algorithm. The results show that our proposed improved algorithm can significantly improve the speed of video defogging without sacrificing much effectiveness in the identification of target objects. Keywords: dark channel prior; guided filter; median filter; digital webcam; video defogging.
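As a rough illustration of the pipeline the abstract describes, here is a hedged numpy sketch of dark-channel-prior defogging. The function names, patch size and airlight heuristic are my assumptions, and a plain median filter stands in for the paper's mixed guided/median transmission refinement.

import numpy as np
from scipy.ndimage import minimum_filter, median_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a local minimum over a square patch.
    return minimum_filter(img.min(axis=2), size=patch)

def defog(img, patch=15, omega=0.95, t0=0.1):
    # img: float array in [0, 1], shape (H, W, 3).
    dc = dark_channel(img, patch)
    # Crude airlight estimate: mean colour of the haziest 0.1% of pixels.
    idx = np.unravel_index(np.argsort(dc, axis=None)[-max(1, dc.size // 1000):], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate t(x) = 1 - omega * dark_channel(I / A);
    # omega < 1 deliberately leaves a little haze so scenes look natural.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = median_filter(t, size=patch)       # fast refinement in place of soft matting
    t = np.clip(t, t0, 1.0)[..., None]
    # Scene radiance recovery: J = (I - A) / t + A.
    return np.clip((img - A) / t + A, 0.0, 1.0)

The point of the substitution the abstract makes is speed: soft matting solves a large linear system per frame, while guided and median filtering are local, roughly linear-time operations, which is what makes per-frame defogging on webcam-class hardware plausible.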
https://www.inderscience.com/info/inarticle.php?artid=97574
What I find most troubling about the current controversy over whether the divorced and civilly remarried can be allowed to receive Communion (while living in the state of adultery) is the way the debate seems to obscure the whole issue of free will and grace. This is not simply a dispute over moral norms or sacramental discipline; at its very heart is the whole question of the power of God's grace in the soul of the sinner. Those who say that a person living in adultery may find it "impossible" to obey the sixth commandment – by logical extension, any of the commandments in a difficult situation – are in effect demeaning either the operative power of the graces flowing from Christ or the operative freedom of the person struggling with temptation or living in sin. If the operative grace of Christ is not sufficient to enable the sinner to reject the sin, repent, and do what's necessary to change a sinful way of life, then just how powerful can that grace really be? When Paul begged Christ three times to remove the thorn in his flesh, which he attributed to Satan, Christ replied: "My grace is sufficient for thee: for my power is made perfect in weakness." (2 Cor 12:9) Now, we don't know exactly what this "thorn" from Satan was – perhaps a grave temptation or a serious health problem. What is important, regardless of the problem, is the solid assurance that His grace is not only sufficient to overcome it but is actually made perfect in the face of any human weakness. Moreover, Paul had already addressed this same issue of the power of grace when he reassured the Corinthians: "No temptation has overtaken you except what is common to mankind. And God is faithful; he will not let you be tempted beyond what you can bear. But when you are tempted, he will also provide a way out so that you can endure it." (1 Cor 10:13) The "way out" is by virtue of His grace, which is the reason why the Christian never boasts in his own power but in his weakness, because he trusts that God will come to our aid and help us overcome any temptation, any struggle with evil, if we surrender to His grace. When a Christian says that it is "impossible" for another Christian to obey a commandment, as a Roman Cardinal recently did, what is such an assertion but a practical denial of the power of grace and of Christ Himself? If His grace is ultimately conditioned in its effectiveness by human will and passions and circumstances, then it is weaker than they are. Thus, at least in practice, it becomes very much a secondary element in the moral life – somewhat like what the Pelagian heresy taught about grace: that it is effectively non-essential, even if helpful in some instances. Pelagianism was not simply an anthropological and moral heresy that denied the transmission of Original Sin and asserted that free will was sufficient to attain moral perfection. The denial of the necessity of Grace for Justification and moral perfection led theologically to an even more profound Christological and soteriological heresy, which undermined the whole redemptive mission of Christ and thus undermined the theological foundation for the Incarnation. What do we need Christ for, if man is perfectly capable of saving himself by properly exercising his free will? What is going on today, however, is not exactly a form of neo-Pelagianism, but rather a new form of quasi-determinism. While Pelagius exalted free will to the heavens, the modern denial of the power of grace is based on the reduction of free will to a slave of the passions.
Free will is so utterly weak, that in difficult situations, it cannot begin to cooperate with God’s grace. Thus obedience to the will of God becomes “impossible” in some cases, a position condemned at the Council of Trent for important theological reasons. If God’s grace is so weak that it cannot heal the weakness of the will and enable it to overcome temptations, or moral conundrums – especially those related to the flesh – “my grace is sufficient for thee” is reduced to a platitude or banality, a nice saying, but ultimately meaningless for real life. Maybe Christ should have said, “Sometimes my grace is sufficient for thee, and only sometimes it is made perfect in weakness, but not always, in tough cases.” Today, the penetration of the Christian ethos by various forms of determinism, especially by a rabid psychological determinism, has radically demeaned free will and human dignity and the power of actual operative grace, while absolutizing the grace of justification. It is more like a resurgence of extreme Calvinist determinism but without the element of negative predestination. In that view, man’s free will is totally corrupted, but, thanks to Christ’s redemption, most if not all men are positively predestined to heaven. So why agonize over the moral life so much, since many if not most men seem to find it “impossible” to overcome certain sins? The proponents of this strange combination of moral determinism and salvific universalism never seem to see just how these various denials of moral responsibility demean not only the operative grace of Christ but likewise the true dignity of man. How much more dignified is the man who confesses his responsibility for sin than the man who declares himself guiltless because he found it just impossible to follow God’s commandment, regardless of the grace of Christ?
https://www.thecatholicthing.org/2017/04/22/is-his-grace-truly-sufficient-or-not/
What had to be exhibited was not only that which was unique and irreducible in art in general, but also that which was unique and irreducible in each particular art. Each art had to determine, through its own operations and works, the effects exclusive to itself…. Greenberg's notion of modernism was effectively defined by a separation of the arts, in contrast with their synthesis as posited in Wagner's concept of the Gesamtkunstwerk ('total work of art'), whose impact on modernism was also considerable. In any case, Greenberg's ideas on the uniqueness of each artistic medium have been applied to dance by the likes of Roger Copeland and David Michael Levin. I do not have Copeland's whole essay to hand here, but he wrote in 1986 that "twenty years ago the reigning sensibility among serious experimental artists was the quest for 'purity' of the medium, the desire to determine what each art form can do uniquely well… Choreographers were expected to emphasise the barebones essence of their medium, the human body in motion, unembellished by theatrical trappings." (178) He was obviously thinking of the Judson Dance Theater here. Copeland has also suggested that Balanchine's purist works, which strip ballet of everything extraneous such as a story, décor, etc., exemplify Greenberg's notion of modernism, while the alliance of the arts in the works of the Ballets Russes typifies an approach more akin to Wagner's Gesamtkunstwerk. Now, although we moved away from modernism quite a while ago, I can see how these issues are still relevant in our era, and might more specifically be applied to our project…. I appreciate Greenberg's take on modernism. Traditionally, artists worked within a style, but we don't think of ourselves that way. They either equated their style with art itself or else offered universal reasons for stylistic choices. For example, classical styles were often defended on the ground that the ancient Greeks had discovered universal principles of beauty and representation. Then, it seems that, at the dawn of modernism, it became obvious that all art depicts the world through a style, that styles differ from time to time and place to place, and that there is no independent aesthetic standard that makes one better than all the others. We now think that the artists of other times and places struggled to address issues that seemed inevitable, but these questions were actually relative to their local cultures. It is possible to understand a Baroque dancer who views the world as a stage. But it is impossible to be like him: to address a question that seems intrinsic to art. Instead, beginning with the modern era, everyone is a stylist. There are no longer objective aesthetic questions. To make a dance becomes an entirely different matter. Every artist develops a manner of his or her own and creates works that appear, first, as art objects; second, as products of a particular artist; and last (if at all) as representations of something. Thus "modernism" means recognizing that all past ways of representing the world have been arbitrary and culturally relative styles.
http://rescen.net/blog5/2013/08/modernism-the-arts-and-greenberg/
ARCH 141 DESIGN IV. Professors Diane Lewis, Peter Schubert, Daniel Sherer, Mersiha Veledar, and Georg Windeck.
Architecture Inspired by the Cities of Catastrophe from Atlantis to Hiroshima: A Civic Architecture for Post-flood New Orleans
Many schools and architects in practice have done projects for New Orleans, predominantly housing for the post-flood population. It is clear that many of the proposals, from a wide range of sources, are reminiscent of a repetition of the failed post-war urban renewal housing and show no consciousness of the necessity to integrate civic program and inventive public space with a new vision of residential structure, in order to anchor and enrich a new incarnation of the rich and varied culture personified in the city of New Orleans. The psyche and the poetry of the city and its inhabitants were studied in parallel with the project development, in the form of the great literature inspired by the city by such authors as Tennessee Williams and William Faulkner. On the first day of studio, our team of five faculty presented a series of plans, maps, and satellite images in a discussion that revealed the relation between the founding city plan of New Orleans and architectural roots from the Roman plan to the bastide. The later city plans, from the founding to the present, revealed morphologic transformations relating to geography, flood plains, commerce, war, and other urban forces. The satellite images located the city within a global image of the Mississippi Delta, the Gulf and the weather. With this study as the initiative, each participant selected a city that had undergone a disaster, either natural or man-made, that precipitated a definitive architectural solution or urban vision. The catastrophes include: FIRE, FLOOD, FAMINE, EARTHQUAKE, VOLCANIC ERUPTION, DAMNATIO MEMORIAE, BOMBING, GENOCIDE, URBAN RENEWAL. Model projects from cities as far-ranging as the mythic Atlantis and the cultural evacuation of Matera were studied, along with the architectural visions inspired by the destruction of the city or its precincts.
http://archweb.cooper.edu/design/fall2006/designiv/designiv01.html
PDPA Policy
SuperSteam Personal Data Protection Statement
SuperSteam respects the privacy of individuals and recognizes the importance of the personal data you have entrusted to us, and believes that it is our responsibility to properly manage, protect, process and disclose your personal data. We are also committed to adhering to the provisions and principles of the Personal Data Protection Act 2012. As such, this Personal Data Protection Statement is to assist you in understanding how we collect, use and/or disclose your personal data. We will collect, use and disclose your personal data in accordance with the Personal Data Protection Act 2012 ("Act"). The Act establishes a data protection law that comprises various rules governing the collection, use, disclosure and care of personal data. It recognises both the rights of individuals to protect their personal data, including rights of access and correction, and the needs of organisations to collect, use or disclose personal data for legitimate and reasonable purposes. The Act takes into account the following concepts:
Consent – Organisations may collect, use or disclose personal data only with the individual's knowledge and consent (with some exceptions);
Purpose – Organisations may collect, use or disclose personal data in an appropriate manner for the circumstances, and only if they have informed the individual of the purposes for the collection, use or disclosure; and
Reasonableness – Organisations may collect, use or disclose personal data only for purposes that would be considered appropriate to a reasonable person in the given circumstances.
In projecting the three main concepts above, the Act contains nine main obligations with which organisations are expected to comply if they undertake activities related to the collection, use and/or disclosure of personal data. While we will not be going into the details of these obligations in this Personal Data Protection Statement, you can rest assured that we are constantly mindful of them in our collection, use and disclosure of personal data. Should you wish to know more about these obligations, an excellent summary can be found in the Advisory Guidelines of the Personal Data Protection Commission at:
The invention relates to a shaving razor that conforms to the surface being shaved. Shaving razors typically have straight cutting edges, while the surfaces being shaved have varying degrees of flatness or curvature and varying abilities to deform to provide a flat surface for the straight edge of the razor. Shaving an area of the body with pronounced curvature, e.g., an ankle or knee, using a razor having a straight cutting edge results in a localized area of contact. This requires repeated strokes to shave the entire area, and causes a high stress concentration at the localized area of contact, which can increase the possibility of a nick or cut at that area. In one aspect, the invention features, in general, a shaving razor including a handle, three blade units that are mounted at the end of the handle, and a mounting structure connecting each blade unit to the handle. Each blade unit includes a guard, at least one blade having a cutting edge, and a cap. The mounting structure provides a pivotal connection of the blade unit to the mounting structure about a pivot axis that is transverse to the cutting edge, and also provides up and down movement of the blade unit along a displacement direction that is transverse to a plane through the guard and cap, thereby permitting each blade unit to conform to the contour of a surface being shaved. In another aspect, the invention features, in general, a shaving razor including a handle and a blade unit that is mounted at the end of the handle by a parallelogram, four-bar linkage made of an integral plastic piece including two elongated members, a proximal end member connected to the handle, and a distal end member connected to the blade unit. The elongated members and proximal and distal end members are pivotally connected to each other via resilient living hinges permitting up and down movement of the blade unit. In another aspect, the invention features, in general, a shaving razor including a handle and three blade units that are mounted at the end of the handle by a mounting structure. The cutting edges of the blades of two of the blade units are generally aligned with each other and have a gap between them, and the third blade unit is offset with respect to the other two, with its blade overlapping the gap. The aligned blade units have facing cutout portions in respective caps, and the third blade unit is partially located in the region of the cutout portions. Embodiments of the invention may include one or more of the following features. The mounting structure for each blade unit is independent of the mounting structures for the other blade units, permitting each blade unit to pivot about a respective pivot axis and to be displaced along a respective displacement axis independently of the pivoting and displacement of the other blade units. The integral plastic piece of the four-bar linkage has an at-rest position in which the elongated members are spaced from each other and a stop position in which the elongated members contact each other, and the piece is resiliently deformed at the living hinges to provide a force resisting movement from the at-rest position to the stop position, the blade unit moving up and down along the displacement axis as the elongated members move toward and away from each other. The mounting structure has a second living hinge providing pivoting about the pivot axis, the second living hinge being resiliently deformed to provide a force resisting pivoting about the pivot axis from a neutral position.
Planes through the guards and caps of the blade units are generally coplanar when in an at-rest position with respect to the displacement axis and at a neutral position with respect to the pivot axis. The cutting edges of the blades of the first and second units are generally aligned with each other and have a gap between them, and a third blade unit is offset with respect to the first and second blade units, with its blade overlapping the gap during all positions of pivoting and up and down movement. The blades of the blade units are between ¼″ and ¾″ long (preferably between ⅜″ and ⅝″ long, and most preferably about ½″ long). The blade units are mounted to resist displacement from the at-rest position with a spring constant of between 5 and 30 (preferably between 10 and 29, and most preferably about 15) gram-force/mm. The blade units are mounted to resist pivoting about the pivot axis from the neutral position with a spring constant of between 3 and 20 gram-millimeters/radian. The plastic of the mounting structure is an elastomeric polymer, preferably a polyethylene block amide available under the PEBAX trade designation. The integral plastic piece is between 0.008 to 0.018 inch thick (preferably 0.012 to 0.014 inch) at the living hinges providing the up and down movement. The integral plastic piece is between 0.006 to 0.014 inch thick (preferably 0.009 to 0.011 inch) at the living hinge providing pivoting. The mounting structure can also provide pivoting about an axis parallel to the cutting edge. The mounting structure can be mounted at an angle with respect to the handle. Each blade unit has plural blades. In other aspects, the invention features, in general, a shaving razor handle having a shape that is comfortable and permits a variety of different grips to be used. In one aspect, the upper surface of the handle has an elongated index finger indent that is sufficiently long to support multiple segments of an index finger. In another aspect, the lower surface of the handle has an elongated thumb indent that is sufficiently long along a longitudinal axis to support both segments of a thumb oriented along the longitudinal axis. In another aspect, the upper surface of the handle is sufficiently long and the distal region is curved and shaped so as to fit in the palm of a user when an index finger is placed at a proximal region of the upper surface. In another aspect, side surfaces of the handle have a neck region between two wider regions, the neck region being sufficiently long to receive a thumb on one side and a plurality of fingers on the other side. The index finger indent is about ⅝″ wide and about 2¼″ long, and the thumb indent is about 1″ wide and about 3″ long. The thumb indent has a lip at its distal end to indicate the end of the indent to the user. The thumb indent is scooped along an axis that is transverse to the longitudinal axis with a sufficient curvature to receive the end segment of a thumb oriented along the transverse axis. Embodiments of the invention may include one or more of the following advantages. The razor provides a conforming blade system in which the force is evenly distributed over areas of pronounced curvature. There is more blade contact on curved surfaces, with the result that shaving is faster and more efficient. Lower stresses are developed, with the result that the razor glides smoothly across the surface. The razor is self-adjusting, making it easy to use.
The razor conforms to pronounced curvature with application of low forces on the blade units and adjusts to both convex and concave surfaces. The shaving razor maintains local shaving geometry on the skin (e.g., blade angle and exposure), at the same time that it provides more contact and adjusts to the curvature. The composite overall size of the series of blade units is similar in length to an ordinary cartridge. There are no unshaven stripes between the individual blade units. The footprint of the blade units fits into tight areas. The flexure arms deflect in a controlled manner. The individual blade units do not interfere with each other. The razor achieves even load distribution among the individual blade units, providing maximum percentage contact area for each blade unit. The razor has uniform load distribution across each blade unit. The stiffness of the arms is selected to maintain contact with the skin to thereby avoid vibration. The four-bar linkage provides up and down motion while maintaining the orientation of the plane of the blades' cutting edges. The shaving razor provides a smooth, safe and comfortable shave. The handle conforms to fit naturally in the user's hand and accommodates many grip styles. It has soft gripping materials in key locations. Other advantages and features of the invention will be apparent from the following description of a preferred embodiment thereof and from the claims.
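Since the text quotes concrete spring rates, a quick back-of-the-envelope calculation shows the scale of force involved; the deflection value below is a hypothetical input, not a figure from the patent.

# Illustrative check of the quoted blade-unit stiffness (rates from the text above).
G = 9.80665e-3            # newtons per gram-force
k_disp = 15.0             # gram-force per mm, the "most preferred" displacement spring rate
deflection_mm = 1.5       # hypothetical deflection while tracking a curved surface

force_gf = k_disp * deflection_mm
print(f"{force_gf:.1f} gf = {force_gf * G:.3f} N to deflect one unit {deflection_mm} mm")
# Because the three units are mounted independently, each one sees only its own
# small local load, which is how the razor spreads pressure over pronounced curvature.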
FIELD OF THE INVENTION
0001 The present invention relates to temperature compensation devices and more particularly, to temperature compensation packages for grating areas of optical fibers.
BACKGROUND OF THE INVENTION
0002 A fiber grating has many applications, including band rejection filters, semiconductor laser stabilizers, fiber laser wavelength selectors, fiber amplifier reflectors, fiber dispersion compensation, dense wavelength division multiplexing (DWDM) add and drop multiplexing, light pulse shape reforming, optical fiber switches, optical sensors and others. Very often it is necessary to use a temperature compensation package for the grating area of the fiber.
0003 The concept and working principle of the most popular temperature compensation packages is the use of two types of material to package fiber gratings. The optical fiber is located inside a longitudinal packaging body, usually a tube made of a very low coefficient of thermal expansion (CTE) material, such as quartz or invar, and is stretched between two end pieces of high-CTE material, such as aluminum, brass, copper, stainless steel or the like, located at both ends of that tube. When the temperature increases, the length of the two high-CTE end pieces increases, the tension inside the fiber reduces and the fiber grating pitch decreases, which compensates for the wavelength change of the fiber grating due to the increase of the optical fiber index.
0004 Using an ordinary tube creates many problems in assembling the temperature compensation package.
0005 Operators and robotic systems may experience difficulties in accurately inserting epoxy inside such a temperature compensation device for securing the fiber grating area inside the tube in a strained position. The process of inserting the fiber through the tube and then applying tension to the fiber grating is very difficult, and the optical fiber with its fiber grating can easily be broken during this process. At the same time this process is very long and time consuming, thus increasing the cost of the compensation device. This is a great limitation of the currently used compensation devices.
OBJECTS OF THE INVENTION
0006 An object of the present invention is to provide a fiber grating temperature compensation package that obviates the above mentioned disadvantages.
0007 Another object of the present invention is to provide a fiber grating temperature compensation package that allows for easy insertion of an optical fiber into and along an elongated body.
0008 A further object of the present invention is to provide a fiber grating temperature compensation package that allows an operator to easily insert epoxy with substantial accuracy.
0009 Still another object of the present invention is to provide a fiber grating temperature compensation package which allows for effective control of the induced tension of the optical fiber during its assembly.
0010 A still further object of the present invention is to provide a fiber grating temperature compensation package that allows for a less time-consuming assembly process.
0011 Other objects and advantages of the present invention will become apparent from a careful reading of the detailed description provided herein, with appropriate reference to the accompanying drawings.
SUMMARY OF THE INVENTION
0012 According to one aspect of the present invention, there is provided a fiber grating temperature compensation package that comprises an elongated body that has a longitudinal internal passage therethrough for receiving a grating area of an optical fiber therein, and first and second end pieces longitudinally slidably mounted on a respective first and second extremity of the body for securing the fiber in a stretched configuration therebetween, each having an extending section freely engaging the passage. The passage transversely extends to a perimeter of the body all along between the extremities of the same, forming a longitudinal exterior opening. That opening allows the optical fiber to be transversely inserted therethrough and to be fixed in the stretched configuration to the first and second end pieces using an adhesive means. The elongated body and end pieces are made out of materials with generally low and high coefficients of thermal expansion (CTE), respectively.
0013 Preferably, the adhesive means is an epoxy resin.
0014 Preferably, the elongated body is a hollowed body.
0015 Alternatively, the elongated body has a generally solid cross section.
0016 Preferably, the elongated body has a cylindrical shape.
0017 Alternatively, the elongated body has a polygonal shape.
0018 Alternatively, the extending sections of the first and second end pieces extend toward each other to form a common end piece.
0019 According to a second aspect of the present invention, the fiber grating temperature compensation package comprises an elongated body that has a longitudinal internal passage therethrough for receiving a grating area of an optical fiber therein, and first and second end pieces longitudinally slidably mounted on a respective first and second extremity of the body for securing the fiber in a stretched configuration therebetween, each having an extending section freely engaging the passage. The passage transversely extends to a perimeter of the first extremity of the body and forms a regional longitudinal exterior opening. That opening allows the optical fiber to be fixed in the stretched configuration to the first end piece using an adhesive means. The elongated body and end pieces are made out of materials with generally low and high coefficients of thermal expansion, respectively.
0020 Preferably, the passage also transversely extends to a perimeter of said body at said second extremity of the same, forming a second regional longitudinal exterior opening, said second opening allowing for said optical fiber to be fixed in said stretched configuration to said second end piece using an adhesive means.
BRIEF DESCRIPTION OF THE DRAWINGS
0021 In the annexed drawings, like reference characters indicate like elements throughout.
0022 FIG. 1 is a perspective view of an embodiment of a fiber grating temperature compensation package according to the present invention;
0023 FIG. 2 is a perspective view of the elongated body of the embodiment of FIG. 1;
0024 FIG. 3 is a view similar to FIG. 2, showing an elongated body with a polygonal shape;
0025 FIG. 4 is a view similar to FIG. 1, showing a second embodiment of a fiber grating temperature compensation package;
0026 FIG. 5 is a view similar to FIG. 2, showing a second embodiment of an elongated body;
0027 FIG. 6 is a view similar to FIG. 5, showing an elongated body with a polygonal shape;
0028 FIG. 7 is a view similar to FIG. 2, showing an elongated body having a generally solid cross section;
0029 FIG. 8 is a view similar to FIG.
7, showing an elongated body with a polygonal shape; and
0030 FIG. 9 is a view similar to FIGS. 1 and 4, showing the two end pieces joined to each other to form a common piece.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
0031 Referring to FIG. 1, there is shown an embodiment of a fiber grating temperature compensation package 10 according to the present invention. The package 10 includes an elongated body 20 made out of a low coefficient of thermal expansion (CTE) material, such as quartz or invar. As shown in FIGS. 1 and 2, the body 20 has a longitudinal internal passage 21 therethrough for receiving a grating area 31 of an optical fiber 30 therein. First and second end pieces 41 and 42 are longitudinally slidably mounted on a respective first and second extremity 22 and 24 of the body 20 for securing the fiber 30 in a stretched configuration therebetween. Each piece 41 and 42 has an extending section 43 and 44 freely engaging the passage 21. The passage 21 transversely extends to a perimeter 23 of the body 20 all along between extremities 22 and 24 of the same and forms a longitudinal exterior opening 25. The opening 25 allows for the optical fiber to be transversely inserted therethrough and to be fixed in the stretched configuration to the first and second end pieces 41 and 42 using an adhesive means 50, preferably an epoxy resin or the like material. End pieces 41 and 42 are made out of a material with generally high CTE, such as aluminum, brass, copper, stainless steel or the like.
0032 Referring to FIGS. 1 to 6, the elongated body 20 is a hollowed body or, as shown in FIGS. 7 to 9, has a generally solid cross section.
0033 Preferably, the elongated body 20 has a cylindrical shape (see FIGS. 1, 2, 4, 5, 7 and 9) or a polygonal shape (see FIGS. 3, 6 and 8).
0034 As illustrated in FIG. 9, extension sections 43 and 44 of first and second end pieces 41 and 42 extend toward each other to form a common end piece 45. In this case it is preferred that the common piece 45 is secured to only one of extremities 22 or 24 of the body 20.
0035 With reference to FIGS. 4, 5 and 6, there is shown a second embodiment 10a of the present invention in which the elongated body 20a has a longitudinal internal passage 21 therethrough for receiving a grating area 31 of an optical fiber 30 therein. The passage 21 transversely extends to a perimeter of the first extremity 22 of the body 20a and forms a regional longitudinal exterior opening 27. The opening 27 allows for the optical fiber 30 to be fixed in the stretched configuration to the first end piece 41 using an adhesive means such as epoxy 50. Preferably, the passage also transversely extends to the perimeter of the body 20a at its second extremity 24 and forms a second regional longitudinal exterior opening 29. The second opening 29 allows for the optical fiber 30 to be fixed in the stretched configuration to the second end piece 42 using the epoxy resin 50.
0036 The present invention is very easy to assemble and is strong in safely and securely holding the longitudinal optical fiber grating area 31 along the temperature compensation package 10 in a strained position.
0037 The optical fiber 30 with fiber grating 31 can be put inside the elongated body 20 through the longitudinal exterior opening 25, or inside the elongated body 20a through the regional longitudinal exterior openings 27 and/or 29, very easily. After applying tension on the optical fiber 30, the securing member 50, preferably a standard epoxy material, is also very easily applied at the location of end pieces 41 and 42, preferably on the extending sections 43 and 44 of the same. When the epoxy 50 is fully cured, another stainless or plastic tube (not shown) can be put over to embrace the temperature compensation package 10 or 10a.
0038 Although the present temperature compensating device 10 has been described with a certain degree of particularity, it is to be understood that the disclosure has been made by way of example only and that the present invention is not limited to the features of the embodiments described and illustrated herein, but includes all variations and modifications within the scope and spirit of the invention as hereinafter claimed.
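To make the compensation principle from the background section concrete, here is a hedged sketch of the standard athermal-package condition. The material constants and lengths below are typical textbook values, my assumptions rather than numbers from this patent.

# Hedged sketch of the CTE-mismatch athermal condition (illustrative values only).
zeta = 6.7e-6      # fractional thermo-optic coefficient (1/n)(dn/dT) of silica, 1/K
alpha_f = 0.55e-6  # CTE of the silica fiber, 1/K
p_e = 0.22         # effective photoelastic coefficient of the fiber
alpha_t = 0.5e-6   # CTE of the low-CTE tube (e.g. quartz), 1/K
alpha_e = 23e-6    # CTE of the high-CTE end pieces (e.g. aluminum), 1/K
L_t = 60.0         # tube length, mm (hypothetical)

# Bare grating drift: d(lambda)/lambda per kelvin = alpha_f + zeta (~11 pm/K at 1550 nm).
print("bare drift:", (alpha_f + zeta) * 1550e-9 * 1e12, "pm/K")

# Fiber gauge length is L_t - L_e; the package is athermal when the strain relief
# from the expanding end pieces cancels the thermal drift:
#   (1 - p_e) * (alpha_t*L_t - alpha_e*L_e) / (L_t - L_e) + alpha_f + zeta = 0
k = (alpha_f + zeta) / (1 - p_e)
L_e = L_t * (alpha_t + k) / (alpha_e + k)
print(f"end pieces must total ~{L_e:.1f} mm of the {L_t:.0f} mm tube")

Because the end-piece CTE exceeds the tube's, a rise in temperature slightly shortens the anchored fiber span, and the induced strain relief offsets the thermo-optic increase of the refractive index, which is exactly the mechanism paragraph 0003 describes.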
There is a lot more to laundry services than meets the eye. On the surface, it looks simple — collecting dirty linens, loading them in the washer, transferring them to the dryer, folding them and placing them in a cart. But laundry experts say it is a much more complex process than that. Without strict adherence to safety protocols, laundry can be a dangerous practice. “A facility’s first priority is for the personal safety and protection of employees,” said Jim Keeley, vice president of Healthcare Services Group. “The ever-increasing range of contaminants and infectious materials in linen that comes from the units to an in-house laundry for processing has made the need for effective infection control procedures an even bigger issue in recent times.” Another important safety concern is the potential for fire in the dryers, Keeley said, because “most dryer fires in long-term care facilities are caused by a failure to adequately clean lint filters or by drying mops or rags that, even when washed, still contain grease from cleanups and ignite in the confined and heated space of the dryer drum.” Ways to improve Nathan S. Gaubert, chemist and laundry specialist for Spartan Chemical, assesses the overall safety climate in long-term care as “pretty good relative to other industries,” but notes that “there is always room for improvement.” Gaubert also acknowledges the improper collection and transportation of soiled linen as a major risk, followed by improper lock-out/tag-out procedures and improper handling of chemicals. “When proper linen collection, sorting, washing and storage procedures aren’t followed, you open your facility up to a host of hazards — not only to your employees, but also your residents and any other visitors to your facility,” he said. “When you are washing linen contaminated by bodily fluids and waste, you need to make sure that the dirty and clean linens are completely isolated. Sorting of soiled linen should take place in the room where it is collected, while handling it with a minimum of agitation so as to not spread pathogens to other items.” Gaubert advises that facilities ensure linen collection bins are clearly and distinctly labeled as “dirty” or “clean” because “many times cross-contamination is due to clean linen being placed in a bin or cart that previously stored soiled linen.” All surfaces where soiled linens come into contact — including carts, hard surfaces and flooring — should be cleaned and disinfected on a routine basis, he said. “You need to take every step to ensure that dirty, soiled linen doesn’t contaminate the linen that has already been cleaned and processed,” he said. “Contamination of clean textiles can lead to outbreaks causing employee or patient illness.” Unsafe laundry practices can also present “huge potential” for liability, Gaubert says. “In the past few years, there have been well-publicized cases of either worker death or serious injury with regard to lock-out/tag-out accidents,” he said. “These cases have resulted in OSHA fines as well as large settlements in court cases. Even with the rise of ‘super bug’ publicity like the H1N1 or MRSA scares, not enough attention is paid to how soiled linen is gathered and transported.” Worker fallout To be sure, housekeeping staff tasked with collecting or processing soiled linen “are at a heightened risk of exposure to some pretty nasty pathogens,” said Steve Kovacs, research and development section head at Procter & Gamble Professional. 
“In the absence of proper safeguards and procedures, a facility could experience higher worker absenteeism from an increase in sick days, as well as a greater number of worker compensation claims. In a properly installed laundry room, an employee’s exposure to chemicals should be at a minimum; however, there will still be some chemicals onsite that can be hazardous when misused or mixed improperly.” Even so, Keeley maintains that liability risk for laundry has actually declined in recent years because of a general trend toward disposable diapers. “The volume of linen going through laundries has decreased dramatically and as a result, so has the time the laundries operate daily, lowering exposure and risk,” he said. “Whether it is the risk to CNAs when lifting residents, the risk to kitchen staff when working around stoves or knives or the risk to laundry workers, it all comes down to training and supervision of front line staff when trying to minimize these potential liabilities.”
Kicking bad habits
Difficult working conditions often lead to risky practices, and laundry experts concede that staff can be tempted to lapse into bad — and dangerous — habits. “In the current economic climate, everyone is trying to accomplish more with fewer people or resources, and that usually results in shortcuts being taken,” Gaubert said. “Many times, bad practices get started when an individual is simply trying to get things done faster to help out an overburdened system, and other times it is out of ignorance or a misunderstanding of the importance of the task at hand. Without proper training not only on how to do things, but why things need to be done a certain way, an employee will tend to do tasks in a manner that is easiest and fastest. Educate your workers on the reasons behind your procedures, train them on how to do them properly, actively monitor them to ensure it is being done, retrain those who aren’t compliant, and finally, penalize those who continue to ignore their training.” Poor supervision is another problem, Keeley said. “A laundry staff’s main goal is to get the linen needed for the next shift processed before their shift ends,” he said. “If the laundry process is not organized, scheduled properly and supervised regularly, staff will be left on their own to find ways to get what linen is needed by the next nursing shift.” Kovacs adds that facility managers should take some of the blame for bad practices as well, pointing to improper training practices. “Adequate procedures and focus on training are huge steps in the right direction to mitigate poor practices,” Kovacs said.
Educate, train right
Education and training on basic safety and health issues is the starting point for a sound laundry operation, according to Keeley. “Properly dealing with soiled linen, using proper storage barrels and carts and continual training and follow-up on the loading and unloading of machines are all major elements of a solid program for handling soiled linen in a fashion that leads to a safer environment in the laundry,” he explained. Kovacs adds that training programs emphasizing safe handling of soiled linens “will put the laundry operation in a position to control potentially problematic cross-contamination.” According to the Centers for Disease Control and Prevention, laundry areas should have handwashing facilities for employees, and staff should wear gloves and protective garments when sorting soiled materials.
It is also absolutely necessary, he said, “to maintain and inspect the laundry equipment used in the laundering of soiled material, as well as maintain proper water conditions and to use appropriate laundry detergent.” Some professional laundry companies will offer to conduct a site survey to gauge a laundry operation’s safety, Kovacs said. “Consultation with a professional laundry company will also help the facility establish proper and rigorous procedures,” he said. “Another step is a comprehensive preventative maintenance program aimed at keeping all equipment running effectively and preventing equipment from operating beyond capacity.” The best thing a facility administrator can do to promote safety is to create a culture where safety and proper technique are rewarded above all else, according to Gaubert. “A culture that rewards proper practices and safety measures — as well as one that penalizes those that cut corners — will see a quick turnaround in the realm of safety,” he said.
Laundry safety checklist
Laundry operations run safer and smoother with a preventative maintenance checklist. Among the items that should be reviewed regularly are:
– Routine maintenance checks and fine-tuning on washers and dryers to avoid future safety problems and hazards. Do not wait until there is a breakdown to keep equipment running at peak efficiency.
– Dryer cleaning is essential. Remove the dryer front to clean the lint that builds up between the drum and the wall of the dryer. There is usually a thermostat near the drum that tells the dryer the heat level inside. When that thermostat gets covered with lint, the lint acts as insulation and gives a false reading to the dryer, thereby delaying the cooling cycles. If the heat builds high enough, it can become a fire hazard.
– Clean ducts often to remove lint, dust and debris buildup.
– Create a solid quality assurance and inspection program for the laundry operation. A monthly walk-through by administrators also is a good way to keep the focus on safety within laundry units.
https://www.mcknights.com/news/laundry-duty-hazards/
Shush! - The Complete Series 1 And 2

The complete first and second series of the award-winning sitcom written by and starring Morwenna Banks (Absolutely) and Rebecca Front (The Thick Of It) set in, yes, a library - but no ordinary library. This is the library overseen and run by the most unlikely pairing since Mills met Boon. Meet Alice (Rebecca Front) - a former child prodigy who won a place at Oxford aged nine, but because Daddy went too she never needed to have any friends. She's scared of everything - everything that is, except libraries and Snoo (Morwenna Banks), a slightly confused individual with a have-a-go attitude to life, marriage, haircuts and reality. Snoo loves books and fully intends to read one someday. And forever popping into the library is Dr Cadogan (Michael Fenton Stevens), celebrity doctor to the stars and a man with his finger in every pie. Charming, indiscreet and quite possibly wanted by Interpol - if you want a discreet nip and tuck and then photos of it accidentally left on the photocopier, Dr Cadogan is your man. Their happy life is interrupted by the arrival of Simon Nielson (Ben Willbond), a man with a mission, a mission to close down inefficient libraries. Fortunately, he hates his mission. What he really wants to do is, once, just once, get even with his inexhaustible supply of high-achieving brothers...

First released: Thursday 11th February 2021 - Distributor: BBC Digital Audio - Download: 220mb
https://www.comedy.co.uk/radio/shush/shop/7021/the-complete-series/
Q: Evaluate $\int_0^\infty \frac{x^2}{x^4 + 6x^2 + 13}\,dx$

In the context of the residue theorem, I have this integral to evaluate. The function is even, and $\left|\int_0^\pi\frac{R^2e^{2i\theta}\,iRe^{i\theta}}{R^4e^{4i\theta}+6R^2e^{2i\theta}+13}\,d\theta\right| \leq \int_0^\pi \frac{2R^3}{R^4}\,d\theta \to 0$, so the problem is to find the residues in the upper half-plane.

$$\int_0^\infty\frac{x^2}{x^4 + 6x^2 + 13}\,dx = \frac12\int_{-\infty}^\infty\frac{x^2}{x^4 + 6x^2 + 13}\,dx = \pi i\sum_{\Im z > 0}\operatorname{res}\left(\frac{z^2}{z^4 + 6z^2 + 13}\right)$$

There are two residues to calculate:

$z = \sqrt{-3 + 2i}$: $\frac{\sqrt{-3 + 2i}}{4(-3+2i) + 12} = -\frac{i}{8}\sqrt{-3 + 2i}$

$z = \sqrt{-3 - 2i}$: $\frac{i}{8}\sqrt{-3 - 2i}$ (Wolfram Alpha if you don't want to trust me)

Giving me overall for the integral: $\frac{\pi}{8}\left(\sqrt{-3 + 2i} - \sqrt{-3 - 2i}\right) = 1.427346\ldots\, i$

But the answer is clearly not meant to be imaginary.

A: Let us try to avoid useless computations: $x^4+6x^2+13=(x^2+\alpha)(x^2+\beta)$ for a pair of conjugate complex numbers $\alpha,\beta$ with positive real part and such that $\alpha\beta=13$ and $\alpha+\beta=6$. By partial fraction decomposition we have
$$ \int_{0}^{+\infty}\frac{x^2}{(x^2+\alpha)(x^2+\beta)}\,dx = \frac{1}{\beta-\alpha}\int_{0}^{+\infty}\left(\frac{\beta}{x^2+\beta}-\frac{\alpha}{x^2+\alpha}\right)\,dx = \frac{\pi}{2\left(\sqrt{\beta}+\sqrt{\alpha}\right)}$$
and
$$\left(\sqrt{\alpha}+\sqrt{\beta}\right)^2 = \alpha+\beta+2\sqrt{\alpha\beta} = 6+2\sqrt{13} $$
hence the wanted integral equals $\frac{\pi}{2\sqrt{6+2\sqrt{13}}}$. Similarly
$$ \int_{0}^{+\infty}\frac{x^2\,dx}{x^4+Ax^2+B} = \frac{\pi}{2\sqrt{A+2\sqrt{B}}} $$
for any $A,B>0$. Lazy is good.
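A note appended for clarity (it is not part of the original exchange): the imaginary value in the question traces to branch choice, not to the residue method itself. Both poles must be taken in the upper half-plane. Writing $\sqrt{-3+2i} = a+bi$ with $a,b>0$, the upper half-plane root of $z^2 = -3-2i$ is $-a+bi$, not the principal value $a-bi$ that Wolfram Alpha returns. With that choice,
$$\frac{\pi}{8}\Big((a+bi)-(-a+bi)\Big) = \frac{\pi a}{4}, \qquad a^2 = \frac{\sqrt{13}-3}{2},$$
which is real, and since $\sqrt{\sqrt{13}-3}\cdot\sqrt{\sqrt{13}+3} = \sqrt{13-9} = 2$, it simplifies to $\frac{\pi}{2\sqrt{6+2\sqrt{13}}}$, in agreement with the answer above.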
General Certificate of Secondary Education (GCSE) examinations represent a significant source of worry and anxiety for students in their final two years of compulsory education, referred to as Key Stage 4 in the UK (Denscombe, 2000; Putwain, 2007). A small inverse relationship has been reported between the appraisal of examinations as threatening, as measured through the test anxiety construct, and GCSE achievement (Putwain, 2008). Test anxiety is hypothesised to have an interfering effect on achievement through occupying cognitive resources; however, it may not be the perception of examinations as threatening that is responsible for interference effects, per se, but how the student responds to that threat (Putwain, in press). Some students respond to test anxiety with a ‘catastrophic’ response in which they find it difficult to read and interpret questions, and to recall material required to respond to assessment demands. In contrast, other students respond to test anxiety with a positive response in which they will persist in trying to answer questions and experience a ‘return’ of material required. The present study aims to investigate this relationship further by examining whether the strength and/or magnitude of the test anxiety – GCSE achievement relationship is influenced by the tendency to catastrophise and draw negative conclusions about events, as measured through the cognitive distortions construct. Two schools were recruited following a mailshot inviting participation. Self-report data for test anxiety and cognitive distortions were collected from 224 students in their final year of compulsory schooling, approximately six weeks before GCSE examinations began. Test anxiety data was collected using the Revised Test Anxiety scale (Benson et al., 1992), whilst cognitive errors data was collected using the Children’s Negative Cognitive Error Questionnaire (Leitenberg et al., 1986). Questionnaire order was counterbalanced, and the questionnaires were presented in a single pack. Examination performance data was collected in Mathematics, English Language and Science. GCSE grades (A*-G) were converted to a numerical value (8-1). Results indicated an inverse relationship between GCSE achievement and two components of test anxiety: worry and bodily symptoms (headaches, muscle tension, etc.). The magnitude of the GCSE achievement – worry relationship was increased by catastrophising (a belief in the worst possible outcome) and selective abstraction (selectively focusing attention on the negative elements of a situation), and the GCSE achievement – bodily symptoms relationship was increased by selective abstraction only. These findings are broadly consistent with those reported in both UK and international contexts. They provide further evidence that in the high-stakes context provided by the GCSE, test anxious students who experience high levels of worry and/or bodily symptoms may be achieving less than their low test anxious counterparts. The findings in this exploratory study are useful in establishing the nature and direction of interactions between test anxiety and students’ response to this anxiety, which could be used to inform the development of subsequent research and treatment. They suggest that interventions which focus directly on reducing examination-related worries may not be as effective as those which address both the worry and bodily symptoms components of test anxiety. Secondly, the cognitive element of interventions may benefit from specifically and explicitly addressing a student’s response to test anxiety.
https://repository.edgehill.ac.uk/2355/
Dr. David E Nichols: The World’s Leading Expert on Psychedelic Pharmacology

Dr. David E. Nichols is one of the world’s leading experts in the field of psychedelic pharmacology. He’s a Professor Emeritus of Pharmacology at Purdue University and the co-founder of the Heffter Research Institute. He serves as a member of the Scientific Advisory Board for the Beckley Foundation and is the founder of Darpharma Inc. Nichols has had a long and decorated career in academia. He’s published over 250 academic articles in the fields of psychedelics, pharmacology, neurology, and policy. Dr. Nichols is famous for having coined the term entactogen in the mid-80s, in the course of his work on the psychedelic effects of MDMA.

What is David E Nichols Known For?

Dr. Nichols is interested in understanding new ways of identifying how molecules of different classes are able to interact with the same receptor binding sites. Much of this work focused on understanding how the structure of a molecule translates to its biological action. This concept is known as the structure-activity relationship, or SAR. He’s spent a great deal of time studying the 5-HT2A receptor, which is the primary target for the classical psychedelics, including mescaline, LSD, psilocybin, and DMT. His research has also explored related receptors, including 5-HT1A, 5-HT2C, and dopamine receptors. His work on dopamine eventually led to the formation of a biotech company called Darpharma Inc. The company is focused on producing dopaminergic drugs for the treatment of Parkinson’s disease, schizophrenia, and other neurological disorders. Through his research, Nichols has mapped out the binding affinities for numerous psychedelic substances, including some of the more obscure lysergamide derivatives (LSD alternatives). He’s also developed computer-based homology models for several G-protein-coupled receptors. This research has provided a substantial contribution to the way we understand these receptors and how binding affinity works for various compounds.

Entactogens vs. Empathogens

Ralph Metzner coined the term empathogens in 1983 to describe the class of compounds that included MDMA, MDA, MDEA, MBDB, and others. Dr. David Nichols suggested entactogen as a better term in 1986 because it removes the incorrect association with the negative aspects that come along with empathy or sympathy. Entactogen is derived from the words en (within) and tactus (to touch). Both terms are used interchangeably today.

Drug Discovery

Dr. Nichols began his path in drug discovery early in his career. While he was still a graduate student, Nichols patented the method used to make optical isomers of hallucinogenic amphetamines. This work led to the discovery of numerous psychedelic compounds. His work, along with that of another prominent chemist, Alexander Shulgin, has directly led to the formation of the designer drug market as we know it today.

Dr. David E Nichols’ drug discoveries include:
- Phenethylamines — Escaline, 2C-I-NBOMe, NBOMe-2C-B, NBOMe-2C-C, NBOMe-2C-D, 3C-Bromo-Dragonfly
- Lysergamides — LSZ, ETH-LAD, PRO-LAD, and AL-LAD
- Amphetamines — 6-APB, 4-MTA, 5-methyl-MDA, DOI
- Aminoindanes — MDAI

Note: Some of the compounds Nichols and his team discovered are dangerous — including the NBOMe class of compounds and several of his amphetamine derivatives.

The Heffter Institute

Dr. Nichols founded the Heffter Institute in 1993 along with Mark Geyer, Ph.D., George Greer, M.D., Charles Grob, M.D., and Dennis McKenna, Ph.D.
The Heffter Institute is a non-profit institution aimed at funneling funds from wealthy philanthropists into psychedelic research in the form of grants. It was formed at a time when research on psychedelics had been dormant for more than 20 years after psychedelics were prohibited around the world. The institute was named in honor of the late Arthur Heffter, who was the first person to discover the active hallucinogenic ingredient in the peyote cactus (mescaline). After its founding, the Heffter Institute focused on exploring the mechanisms of action of MDMA and ketamine, including clinical research in Russia. More recently, the Heffter Institute has been focused on funding research exploring the role of psychedelics in existential anxiety and depression in patients with a terminal illness, addiction, obsessive-compulsive disorder, and the value of mystical experiences and ego-death. Most of this recent research has focused specifically on psilocybin — the active ingredient in magic mushrooms. The institute also funds research on ayahuasca, ketamine, LSD, and other psychoactive substances such as kava.

Darpharma Inc.

Nichols is one of the leading researchers on the dopamine receptors. He discovered two selective D1 full agonist compounds, dihydrexidine and dinapsoline, as well as other dopamine agonists, including dinoxyline. These compounds are now patented and commercialized under a company he founded called Darpharma Inc. The company continues to explore new dopaminergic compounds for the treatment of Parkinson’s disease, schizophrenia, and other neurological disorders.

Dr. David Nichols’ Prominent Publications
- Potential psychotomimetics. Bromomethoxyamphetamines (1971)
- Asymmetric synthesis of psychotomimetic phenylisopropylamines (1973)
- Structure-activity relationships of phenethylamine hallucinogens (1981)
- Synthesis and LSD-like discriminative stimulus properties in a series of N(6)-alkyl norlysergic acid N,N-diethylamide derivatives (1985)
- Effects of enantiomers of MDA, MDMA, and related analogs on [3H] serotonin and [3H] dopamine release from superfused rat brain slices (1986)
- Discovery of novel psychoactive drugs: has it ended? (1987)
- A new potent and selective DA1 (vascular) dopamine receptor agonist (1990)
- Structure-activity relationships of MDMA and related compounds: a new class of psychoactive drugs? (1990)
- Neurotoxicity of MDMA (ecstasy): beyond metabolism (2005)
- Serotonin receptors (2008)
- Serotonin-related psychedelic drugs (2010)
- Comparison of the D₁ dopamine full agonists, dihydrexidine and doxanthrine, in the 6-OHDA rat model of Parkinson’s disease (2012)
- Effects of Schedule I drug laws on neuroscience research and treatment innovation (2013)
- Emerging Designer Drugs (2013)
- Psychedelics as Medicines: An Emerging New Paradigm (2016)
- Return of the lysergamides. Part IV: Analytical and pharmacological characterization of lysergic acid morpholide (LSM-775) (2017)
- Psychedelic Drugs in Biomedicine (2017)
- Microdosing psychedelics: More questions than answers? An overview and suggestions for future research (2019)

David Nichols Lectures

David Nichols has given countless lectures during his time as a professor at Purdue University. Some of his lectures have been recorded and are available for free on YouTube. Here are some of his best lectures over the years:
1. From Bench to Bedside: Progress in Psychedelic Research
2. Psychedelic Neuroscience: LSD Gives Up a Secret
3. LSD Neuroscience
4. Psychedelics & Racism
5. DMT & The Pineal Gland: Fact vs. Fantasy
Summary: Who is David Nichols?

Dr. David E. Nichols is arguably one of the most important researchers to enter the field of psychedelics. His research has led to several cardinal discoveries over the years, including the discovery of over a dozen new psychedelics and entactogens, serotonin receptor modelling, and much more. His contributions to science have influenced everything from drug development to public policy. He’s a co-founder of the Heffter Institute, which is one of the leading non-profits funding psychedelic research today. Dr. Nichols is now retired, but he continues his work in the form of giving public lectures and sitting on advisory boards for his biotech company Darpharma Inc., the Beckley Foundation, and others. His son, Charles D. Nichols, is following in his father’s footsteps. Charles is a professor at the LSU Health Sciences Center in New Orleans. His research is focused on exploring the relationship between psychedelics and inflammation.
https://tripsitter.com/people/david-nichols/
No matter where you live, America's lion needs your voice.

Public Comments Processing
Attn: Docket No. FWS-R5-ES-2015-0001
U.S. Fish and Wildlife Service, MS: BPHC
5275 Leesburg Pike
Falls Church, VA 22041-3803

OPPOSED: Removing Eastern Puma (Cougar) From the Federal List of Endangered and Threatened Wildlife, Docket No. FWS-R5-ES-2015-0001

Dear U.S. Fish and Wildlife Service,

The undersigned organizations and individuals are committed to protecting and restoring sustainable puma (cougar, mountain lion, or panther) populations across the historic range of the puma. We urge USFWS to draft a recovery plan that ensures protection for pumas and evaluates measures to reintroduce the species into suitable habitat throughout their historic range in the United States.

The best available science recognizes a single North American puma subspecies. The USFWS Eastern puma review notes: "Young and Goldman's (1946) taxonomy of pumas was inadequate, even by the standards of their time. Their results were based on very small sample sizes, the samples were from an extremely small portion of the alleged eastern puma's range (samples from Vermont and Quebec were available, but not examined), their work was not peer reviewed, their taxonomy lacked statistical analysis, and their work would likely be rejected...Young and Goldman's (1946) and Hall's (1981) conclusions concerning taxonomy of the eastern puma may be wrong." Recent analyses indicate that those animals once designated under the "eastern cougar" subspecies were in fact taxonomically indistinguishable from puma populations to the west (Culver et al., 2000). The USFWS must not declare the Eastern puma subspecies to be extinct when the best available science demonstrates that it simply never existed. Instead, we recommend the Service accept the findings of Culver et al. (2000) of a single North American puma subspecies.

Puma concolor has been extirpated from the U.S. east of the Missouri River and north of Florida. There is no documented verification of wild breeding populations of pumas in this region in more than a century. Pumas within their extirpated range meet all of the qualifications for protection under the Endangered Species Act. These include "(A) the present or threatened destruction, modification, or curtailment of the species' habitat or range, (B) overutilization for commercial, recreational, scientific, or educational purposes, and (D) the inadequacy of existing regulatory mechanisms through all of the North American puma's historic, extirpated range." (Assessment of Species Status, ESA Section 4) These qualifications exist in, and are not limited to, the states of Kansas, Oklahoma, Minnesota, Iowa, Missouri, Arkansas, Louisiana, Wisconsin, Illinois, Kentucky, Tennessee, Alabama, Michigan, Indiana, Ohio, New York, Pennsylvania, New Jersey, West Virginia, Virginia, Maryland, Delaware, Vermont, Massachusetts, Connecticut, New Hampshire, Maine, North Carolina, South Carolina, and Georgia.

Designation of pumas as a Distinct Population Segment is consistent with the intent of Congress in establishing the classification. Pumas that exist within or enter into their extirpated range meet all the requirements for classification as a DPS (61 Fed. Reg. 4722, 2/7/1996). First, such pumas are "distinct" as defined by USFWS since they are geographically isolated from breeding populations elsewhere in the United States. Second, the puma's former range east of the Mississippi River and north of Florida presents a significant gap in the range of the taxon.
There are ecological and public safety imperatives for classifying pumas as a Distinct Population Segment (DPS) within their extirpated range. Policy C (2) of the Interagency Policy for the Ecosystem Approach to the Endangered Species Act states that "recovery plans shall be developed and implemented in a manner that conserves the biotic diversity... of the ecosystems upon which the listed species depend." Ecosystems lacking breeding populations of pumas currently suffer from a severe overpopulation of ungulates, with attendant ecological impacts and loss of human life. Chronic white-tailed deer over-browsing has triggered biodiversity collapse (Goetsch et al. 2011), and declines in mast production (McShea et al. 2007) and in understory recruitment and ground-nesting habitat (U.S. Forest Service 2008) across eastern deciduous forests. Multiple long-range studies have demonstrated that apex predators such as pumas maintain biodiversity and ecosystem functioning (Ripple et al. 2014). Puma restoration would also significantly reduce the acute public safety risk of vehicle collisions with deer and the attendant mitigation costs (Gilbert et al. 2016), as well as the human health issues associated with deer ticks as a vector for Lyme disease (Kilpatrick et al. 2014).

We urge greater protection for puma populations in - and dispersers from - the prairie states. To allow the species to begin recovering breeding populations in the extirpated regions, we urge the USFWS to better protect pumas in the prairie states. Human-caused mortality of pumas in these states has reduced the viability of these populations and has limited puma dispersal to the east (Tucker, 2014; Cougar Rewilding Foundation, 2015). Dispersing individual pumas are essential for recolonization.

Any reclassification of pumas in Florida as a Distinct Population Segment must maintain their protected status. As the lone surviving population of the North American puma subspecies east of the Missouri River, pumas in Florida are essential to the recolonization of the entire extirpated range. Whether genetically or only geographically separate from western pumas, with fewer than 180 panthers remaining in the wild, these animals must remain protected as federally endangered, and their recovery must be ensured. Florida panthers must be considered as source animals for puma reintroduction efforts at any suitable locations in the Southeastern United States, including but not limited to urgently needed efforts to establish breeding populations of the panthers in central and north Florida.

We recommend the USFWS develop a recovery plan for the North American puma subspecies within its extirpated range by reclassification as a Distinct Population Segment (DPS) under the Endangered Species Act. As precedent for the federal recovery of species ranging across the United States, we reference federal law (16 U.S.C. 668-668d), international treaties (Migratory Bird Treaty Act) and the recovery plans developed to protect the bald eagle (USFWS Northern States Bald Eagle Recovery Plan) and American peregrine falcon (USFWS Monitoring Program for the American Peregrine Falcon). Also, we note that the gray wolf reintroduction into the Greater Yellowstone Ecosystem was successfully carried out by the USFWS under some of the same logic that can and should be applied to restoring pumas to the eastern United States. Gray wolves retained robust populations in Canada and Alaska, and yet it was clear they were extirpated from large portions of their historical range.
While the gray wolf restoration effort remains unfinished in other regions (Pacific Northwest, Southwest, Southern Rockies, and potentially the Northeast), the precedent remains for the USFWS to work to restore carnivores to regions where they have long been extirpated. In the case of pumas, such a recovery effort must explore such issues as habitat protection and connectivity, reducing mortality in current breeding populations of pumas, protection of dispersers, and the reintroduction of pumas into suitable habitat. In addition, it will be necessary to gauge public attitudes towards pumas, to encourage science-based education about the species, to engage in efforts to mitigate any loss of tolerance that may have resulted from extirpation, and to promote coexistence.

Pumas are an icon of the American wilderness and a critical component of healthy ecosystems; they contribute to human health and safety and hold priceless cultural value. By acknowledging the best available science, we can begin a national conversation and assemble the elements needed to restore this essential species throughout its historic range. We hope you will make the most of this opportunity to work towards a future where pumas once again roam the American landscape, coast to coast. Thank you for your consideration.

Sincerely,
YOUR NAME WILL GO HERE

The Mountain Lion Foundation, founded in 1986, is a national nonprofit organization protecting mountain lions and their habitat. We believe that mountain lions are in peril. Our nation is on the verge of destroying this apex species upon which whole ecosystems depend. Hunting mountain lions is morally unjustified, and killing lions to prevent conflicts is ineffective and dangerous. There is a critical need to know more about the biology, behavior, and ecology of mountain lions, and governments should base decisions upon truthful science, valid data, and the highest common good. Conserving critical lion habitat is essential.
http://mountainlion.org/actionalerts/070816easterncougar/070816easterncougar.asp
By Leila Dycus, staff writer

From high school volleyball player to recipient of one of her college team’s highest honors, Rosalie Keck is taking her sport to new levels. “Well it’s kind of surprising, I didn’t think that I as a freshman would be receiving this award,” said Brewton-Parker volleyball player Rosalie Keck. Recently, Keck was awarded “Offensive Player of the Year” by her coach at Brewton-Parker. Keck played four years of high school volleyball for the MCHS Bulldogs and was also a part of the Georgia Juniors club team. Her family and coaches talked about how deserving Rosalie was of the honor. “As a freshman player, I was just exhilarated; it was great to think that she could be able to accomplish something like this,” said Rosalie’s mother, Margaret Keck. “And that she could go on to do bigger and better things as the years go by.” After one year of college, Rosalie has put in 205 kills. Keck plays middle for Brewton-Parker, a position that rarely racks up the number of scores that she has. However, her mom said that this type of scoring was normal for Rosalie, as she did the same for the MCHS Bulldogs. “Rosalie was such a hard worker. I never saw her not smiling when she walked into the gym - she was genuinely excited to be there,” said MCHS junior varsity coach Pete Busenitz. “That’s why I knew she could get it done at the next level; her effort level was always there.” In high school, Rosalie was awarded Most Valuable Player her junior and senior years by coach Pam Hooten. She was known for her work ethic: if there was something that wasn’t working out for her, she would chip away at it until she figured it out. “Rosalie was an inspiration to all the players; she was a constant motivator,” said MCHS varsity volleyball coach Hooten. “Her leadership was unmatched - she led in her words as well as her actions.” Rosalie said that varsity coach Hooten always encouraged her to do her best on the court and encouraged her to be a leader. Hooten wanted Keck to back up the team, and she did. “Rosalie often put our team on her back and carried us places we could not go without her,” said Busenitz. She was a three-year starter for the varsity Spike Dogs. Keck had a way of being dominant on the court, often making her the most dynamic player in the game. Beyond her awards as MVP, Rosalie was also nominated for and made the All District team. Margaret Keck said that she played on two very competitive teams in high school, both for MCHS and the Georgia Juniors, which prepared her for her college career. After her senior volleyball season, Rosalie was scouted by several different colleges, including LaGrange College and Piedmont College. Brewton-Parker offered her a tryout towards the end of her senior season, but she was not even considering the school at that point. “I attended school at Brewton-Parker and I didn’t think she would ever think of attending school there,” said Margaret Keck. However, after the tryout, Rosalie chose her mom’s alma mater. Rosalie said that during her time at Brewton-Parker she has learned a lot about teamwork. Working with her back row and setter has been different than in high school. She also said that she has learned lessons through losing this season. “In high school and in club we were always used to winning, but I realized that it really didn’t matter if we won or lost as long as we were having fun,” said Rosalie Keck. Rosalie’s favorite part of her college volleyball career has been working with the girls on the team and traveling.
In true Rosalie fashion, she expressed how thankful she was to her coaches - Pete Busenitz, Pam Hooten, and Dr. Mal - who supported her and came to her signing day. “I never thought I’d be able to play in college but Pete always told me that he thought I would be able to,” said Rosalie. Her coaches also expressed just how proud they are of all of Rosalie’s accomplishments. Busenitz said that Rosalie is a blue-collar hero: it didn’t matter if you were one of the girls at the end of the bench on the JV team, you were an important part of the team to Rosalie. Coach Hooten added that it’s girls like Rosalie that make coaching worthwhile.
https://morgancountycitizen.com/2014/05/14/keck-wins-offensive-player-of-the-year-at-brewton-parker/
The Christensen Fund is a private foundation founded in 1957 and based in San Francisco, California. We are a non-profit, non-governmental organization governed by an independent Board of Trustees. The Christensen Fund believes in the power of biological and cultural diversity to sustain and enrich a world faced with great change and uncertainty. Since 2003, our grantmaking has centered on the “biocultural”—the rich, adaptive interweave of people and place, culture and ecology. Under this approach, we seek to support the resilience of biological and cultural diversity in partnership with Indigenous Peoples, Local Communities and their allies. Our grantmaking programs are focused around six geographic regions of the world and selected global biocultural initiatives. These regions currently include: the African Rift Valley, Central Asia, Melanesia, the US Southwest, Northwest Mexico and the San Francisco Bay Area. Throughout our programs and relationships, we give special attention to realizing the aspirations and enhancing the implementation of the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP). UNDRIP is central both to our work as a foundation and to our grantmaking processes. As an institution, we deeply value diversity, passion, creativity and excellence—all of which are reflected in our highly skilled staff that is based in San Francisco as well as in our grantmaking regions. Our staff comes from many parts of the world and speaks more than 40 languages.

Position Summary

The Christensen Fund Fellowship, a pilot at the Christensen Fund, will offer an early- to mid-career professional with strong Indigenous connections a unique opportunity to (1) learn deeply about philanthropy through extensive hands-on grantmaking experience and (2) contribute positively toward advancing the field of biocultural diversity in the Bay Area. The Fellowship is designed to support a Fellow’s holistic development as a leader and advocate for transformative social change (including within philanthropy) and to strengthen their work with and service to Indigenous Peoples and Local Communities. We acknowledge that the skills needed for paradigm-shifting leadership are broad and multifaceted, and that rooted leadership requires opportunities to be engaged and active in community and culture. The Fellowship will include dedicated time for personal cultural study/practice (such as language learning, art, music or dance)—to be defined by the Fellow. The Christensen Fund Fellow will work with our international Grantmaking teams, manage the Bay Area Portfolio and contribute to several projects across the Fund.
The Christensen Fund Fellow will be responsible for:
- Managing an annual grant budget of approximately $750,000 for the San Francisco Bay Area Portfolio ($500,000) and a Global Travel and Exchange Program ($250,000)
- Reviewing, analyzing and tracking grant proposals throughout the grantmaking process, including evaluating and reporting on individual grantee and portfolio-wide progress to Senior Management and the Board of Trustees through oral presentations and written reports
- Working collaboratively with Grants Management to resolve legal, financial and technical issues with grants when necessary
- Designing, implementing and refining grantmaking strategies to achieve defined programmatic goals and to advance substantive change on a variety of scales and arenas (e.g., legal, policy, social and community norms)
- Supporting grant partners to thoughtfully deepen their program impacts through connection and collaboration, and to strategically link across networks and movements and with other Christensen grantees, as appropriate
- Developing and implementing an explicit plan for enhancing grantee communications capacity and contributions to building the field of biocultural diversity
- Representing the Fund at a broad range of meetings and events

The Christensen Fund Fellow reports to the Director of Grantmaking and works in Grantmaking teams with Grants Manager(s) and consultants as appropriate.

The Christensen Fund Fellow will have access to:
- Individual coaching and mentorship, including with Christensen Fund Trustees, Senior Management and International Program Staff
- A diverse cohort of emerging leaders in philanthropy from a variety of local foundations
- A strong professional network, including access to local, national and international sector leaders from within philanthropy and beyond
- Dedicated time for personal cultural study/practice
- Travel support to attend local and national conferences and events

Traits and Desired Qualifications
- Deep commitment to Christensen’s core values of deep diversity, self-determination, cultural expression, critical thinking and learning, and a delight in thinking in integrated ways
- Understanding of the relationships between biological and cultural diversity
- Familiarity with the history, complexities and experiences of Indigenous Peoples in the United States, including the impact and legacy of Federal Termination and Relocation policies
- Experience working with Indigenous Peoples
- Ability to think innovatively about strategic partnerships, alliances and engagement with different types of actors – foundation, nonprofit, public and private individuals.
- Ability to articulate a vision that motivates internal and external stakeholders from multiple communities and in diverse settings
- Well-organized and self-sufficient, IT-savvy, and able to manage time and work under pressure
- High integrity and a commitment to personal and professional excellence
- A demonstrated ability and inclination to work as part of a team
- An ability to travel around the Bay Area and to seek out, listen to, understand and respect applicants, grantees and other stakeholders, and work in support of their passions and dreams

Applicants should possess:
- A graduate degree or 5-7 years of experience in an environmental or social science field, along with knowledge of biocultural diversity, environmental socio-ecological resilience and/or the issues facing Indigenous Peoples in the United States
- Demonstrated commitment to the visions and struggles of Indigenous Peoples to secure their development, cultural integrity and environmental heritage
- Superior written and oral communication skills

Terms of Employment
- Two-year full-time Fellowship based in San Francisco, California; this is a local hire with no relocation support
- $50,500 per year plus medical and dental benefits
- Interviews will begin in August/September 2016. It is expected that the Fellowship will begin October 1, 2016.

To Apply

Please submit an application packet to [email protected] by Monday, August 8, 2016 at 5pm PST. A complete application packet consists of the following four items:
- Cover letter outlining your relevant experience and why you are interested in the Fellowship
- Résumé (CV)
- Responses to essay questions
- Three references

All documents should be submitted together in one PDF that includes your name and the Fellowship title (i.e., LastName_ChristensenFundFellow.pdf). Review of credentials will begin immediately. Principals only. No phone calls please.

Essay Questions
- Much of our Bay Area work centers on supporting local Indigenous groups to reconnect and restore their relationships with their historic and ancestral territories and cultural/sacred sites, as well as to sustain and revitalize their cultural identities and traditions across generations. How would you explain the relationship between culture and environmental stewardship? How might these processes look in action? How might they be different in an urban context? Please share specific examples. (500 words max)
- Additionally, please select one of the following questions (500 words max):
  - Please tell us about an Indigenous or local movement that has inspired you, and share with us how you’ve seen this movement evolve over time.
  - What do you see as philanthropy’s role to affect the stewardship of wealth and the sharing of power and privilege? How do you see your role in this process? What aspect is most interesting? What is most challenging?

The Christensen Fund is an equal opportunity employer that embraces a diverse, multicultural work environment. People of all ethnic backgrounds, Indigenous Peoples, people with disabilities, and people of diverse sexual orientations and gender identities are encouraged to apply.
https://www.christensenfund.org/christensen-fund-fellowship/
Helping Others Access Truth, Courage, and Love Through the Practice of Nonviolent Communication

Communication is a fundamental part of human life, and it is one of the most important ways we as humans attempt to get our basic needs met. In spite of its omnipresence in the world around us, communication is not always successful in its mission. By allowing our fears of separation and scarcity to manifest in the way we speak to one another and ourselves, communication often incites blame, shame, and sorrow in our daily lives, rather than the hoped-for compassion, hope, and healing. Living in the awareness of nonviolent communication has given me and many others another way to hear and interpret messages that are difficult to hear. Nonviolent Communication (NVC) provides the necessary tools to live peacefully with myself and others. In the continuous work of living Nonviolence, we experience three gifts of life inherent in every person on the planet: Truth, Courage, and Love. I began studying NVC years ago with the intention of healing my life; through the journey I discovered the profound gift of being with others in their journeys of healing as well, and over time I have built a practice of supporting individuals, families, friends, and communities in their pursuits of living nonviolently. I have found that, with a willing heart, a committed and mindful practice, and the support of a mentor or model of honesty, vulnerability, and transparency, there is enough light for each step along this new path. I invite you to step onto the path with me, to join me in embracing integrity, empathy, and presence, so that we might together enjoy the beauty of life that NVC reminds us to see. -Carlene Robinson

Mission Statement

From the place of my own personal transformation through Nonviolent Communication, to teach and to live nonviolence authentically, so that others may benefit from this life-changing modality.

Values

These are the values I hold closest to me, and they guide my actions concerning my work and Nonviolent Communication:

Integrity - I am committed to being honest with myself and others, and to upholding the principles of nonviolence at all times. Integrity includes courage, honesty, transparency, authenticity, and self-responsibility.

Vulnerability - I believe that the human spirit is interwoven through us all, and we are able to genuinely connect to each other through the power of vulnerability. I cherish the freedom, liberation, and truth that vulnerability brings.

Empathy - I am committed to connecting with others where they are and experiencing their emotions with them; empathy encompasses the practices of peace, love, compassion, and sensitivity.

Presence - I deeply value being present in the here and now, which allows me to cultivate awareness, curiosity, and nonjudgement in myself, as well as model it to others.

Beauty - I bear witness to the beauty present in all aspects of life—even during times of loss, sorrow, and unease. Beauty encourages our human creativity, imagination, and vitality.

Vision

Nonviolent Communication brings freedom and liberation into the space between the self and others, serving as a gateway to awareness of life without the blame, criticism, and judgement that create the illusion of separation and scarcity.
https://www.carlenerobinson.com/welcome
Keeping the Road to School Safe – Tips for Upcoming School Bus Safety Week

Children and teenagers around the country are well past the new school year formalities of meeting teachers and classmates and reviewing forgotten material; they are in the thick of their studies. That makes October a great month to remind students, parents, and the community about safety issues regarding school buses on the road. National School Bus Safety Week is October 17-21. Deaths and injuries in crashes involving school buses were at their lowest rate in the last 10 years, a 50% decrease compared with the year before. School closures related to the pandemic are likely to have contributed to the decrease. Zutobi has analyzed the latest National Highway Traffic Safety Administration (NHTSA) data regarding school bus-related crashes and has prepared some useful tips to keep drivers, students, and the community safe. These tips will help keep the crash rate as low as it was during the pandemic.

A Safe Mode of Transportation for School Kids

Although we hear of the occasional school bus-related accident, school buses are generally the safest way to transport our children to and from school each day. According to School Bus Fleet, just under half a million school buses transported about 23 million elementary and secondary students to and from school in 2018-19. The National Safety Council (NSC) reports that buses are safer than walking or riding in a car to school and advocates for lap-and-shoulder belt safety restraints to improve their safety. While school bus-related accidents killed 54 people and injured 4,800 people in the U.S. in 2020, that number was a 50% decrease compared with the year before, according to the National Highway Traffic Safety Administration (NHTSA). Between 2011 and 2020, about 70% of the deaths in school bus-related accidents were passengers of non-bus vehicles and 16% were pedestrians. Only 5% were students, 5% were drivers, and 3% were bicyclists. Injuries during this period, on the other hand, were distributed differently: about one-third were students, 8% were drivers, and more than half were occupants of other vehicles. For example, in the year 2020, 3,400 people injured in school bus-related accidents were drivers or passengers of other cars.

School Bus Safety Tips

Part of the reason for School Bus Safety Week is to publicly educate and remind all of us to boost bus-related safety behavior. Zutobi offers the following reminders, not only for parents who rely on school buses to transport their children to and from school, but also for all other drivers who share the roads with school buses on a daily basis.

Parental Reminders

Here are a few things parents can do, as well as enforce with their children, to keep them safe while taking the school bus:
- Pack student belongings in a backpack to avoid dropping items.
- Dress them in bright colors to improve visibility.
- Encourage them to leave early or on time.
- Walk with your child to the bus stop.
- Practice and model safe pedestrian behavior.
- Have children wait where drivers can see them.
- Prohibit playing in the street or with items like balls that can roll into traffic.
- Teach children to talk with drivers about any dropped items.
- Meet returning children on the side of the street where they dismount.
- Limit mobile devices to seated students with muted headphones.
Share the Road: Tips for the Broader Community

Children are heading off to school around the same time that many adults in the community are driving to work. It’s important that other drivers are extra aware and cautious during these early morning hours, as well as in the afternoon when children are returning home. These precautions not only protect school bus passengers but also everyone sharing the roads.
- Come to a complete stop while school buses are loading and unloading.
- Obey speed limits in school zones.
- Yield to pedestrians, especially children, and obey any crossing guards.
- Increase following distances when driving behind a school bus to ensure you have adequate time to stop.
- Never pass a stopped bus, from either direction.
- If you are walking, biking, or driving near a bus, maintain adequate distance and be extra aware.

School Bus Driver Shortages

More than 77% of State of School Transportation survey respondents “reported that the ongoing bus driver shortage is having a negative impact on school district transportation operations,” according to HopSkipDrive. This problem began years before the global pandemic and has gotten worse as a result. The top five reasons cited for the shortages were recruiting issues, low driver pay, drivers retiring, COVID-19 concerns and losing drivers to the private sector. As a result, existing bus routes can be extra long, with students riding buses for more than an hour, overcrowding, and stressed-out bus drivers. There is only one way to address the shortage – train more bus drivers. Zutobi is an FMCSA-approved theory training provider, offering online resources for those interested in obtaining a school bus license, making it easier and safer for children to get to and from school.
https://zutobi.com/us/driver-guides/school-bus-safety
Understand what users really think and how your prototypes, wireframes, and products hold up in relation to what they care most about. Decide which concepts are worth moving forward based on interviews, focus groups, user surveys, and other research sources. Minimize time spent analyzing and interpreting data – easily streamline your research workflow across different departments.
https://atlasti.com/ux-designers-and-product-teams
The tropical islands of the South Pacific are home to immense biodiversity, but their inaccessible environment – jagged peaks, hot and humid conditions and remote locations – has limited the ability to document what exists. But the secrets of biodiversity are finally being revealed for the Polynesian volcanic island of Mo’orea. Scientists spent months trekking across its tricky terrain to gather specimens as part of the Mo’orea Biocode Project and have presented the first detailed description of the impressive array of fungi that call it home. Gathering a total of 553 fungal specimens, and sequencing the DNA of 433 of them, they’ve discovered that only a handful are exact genetic matches with other known species. In other words, Mo’orea’s fungi likely contain completely new-to-science species, according to the new study published in the Journal of Biogeography. The collection includes more than 200 species of macrofungi — that is, fungi producing visible fruiting bodies. “It’s like a treasure trove,” says study lead author Matteo Garbelotto, cooperative extension specialist and adjunct professor of environmental science, policy and management at the University of California Berkeley, U.S. “It’s truly uncharted territory in evolutionary biology and biodiversity of the fungal kingdom, and this is one of the first attempts to generate baseline information on fungal diversity, not just for Mo’orea, but for the entire and vast Insular Oceania region.” The research was part of the Mo’orea Biocode Project, which ran from 2007 to 2010 and aimed to catalogue every form of life larger than bacteria on and around the island – from mountaintops to the sea floor. The fungal specimens were collected from the soil, roots and leaves of plants, and even the air, then cultured and compared to databases of known species. As part of the biocode project, the research team also obtained DNA sequences of a specific gene that can be used as a unique “barcode” to differentiate one species from another. “We were really interested in the biodiversity of the island,” says first author Dr Todd Osmundson, who completed the work as a postdoctoral researcher at UC Berkeley. “Mo’orea is an island in the middle of the ocean, and it’s a geologically young volcanic island. It’s never touched another piece of land. How did fungi get there, and where did they come from?” The researchers were able to piece together where this incredible fungal diversity had originated by comparing the DNA sequences of the fungi from Mo’orea to those from other species around the world. Their findings suggest that the majority of the species, or their ancestors, were carried across the ocean from Australia or other South Pacific islands by easterly winds, before finally coming to rest on Mo’orea. A small number might even have been brought to the island by humans from far-flung locations like East Asia, Europe, and South America. Understanding the biodiversity of fungi on Mo’orea, and how different species have journeyed around the world to arrive at this remote location, can help scientists better understand the ongoing impacts of global travel and trade on biodiversity. Originally published by Cosmos as Mo’orea’s hidden treasure trove of fungi. Imma Perfetto is a science writer at Cosmos. She has a Bachelor of Science with Honours in Science Communication from the University of Adelaide.
https://cosmosmagazine.com/nature/plants/mooreas-new-fungi-biodiversity/
Cultural and educational rights of sections of society are protected under Article 29 and Article 30 of the Indian Constitution. They vary in the extent and nuances of their protection. Both are aimed at minority rights protection, although the meaning of “minority” varies between the two provisions, as we shall see hereunder. On certain points, these two provisions also seem like an extension of the Right to Equality provisions.

Article 29 of the Indian Constitution

The verbatim reproduction of the section is as hereunder:

Protection of interests of minorities.-
(1) Any section of the citizens residing in the territory of India or any part thereof having a distinct language, script or culture of its own shall have the right to conserve the same.
(2) No citizen shall be denied admission into any educational institution maintained by the State or receiving aid out of State funds on grounds only of religion, race, caste, language or any of them.

Notes
- Article 29 has the title of “protection of interests of minorities”; however, it is to be noted that the term “minority” is not used in Article 29’s body. This was the intention of the Drafting Committee of the Constituent Assembly. The Advisory Committee proposed to include the term ‘minority’ in the body of the provision, but the Drafting Committee had it changed to “sections of citizens” to enable a wider interpretation. They wanted the clause to be interpreted widely enough to include even Maharashtrians in Bengal as a minority if a case in point begs of it. In Ahmedabad St. Xavier’s College Society v. State of Gujarat (AIR 1974 SC 1389), the bench went one step further and held that even majorities could claim protection under Article 29 of the Constitution of India, even though the term ‘minorities’ is mentioned in the title of the provision.
- The “right to conserve” granted under Article 29(1) has been held, in Jagdev Singh Sidhanti v. Partap Singh (AIR 1965 SC 183, 188), to include the freedom to agitate for the protection of the language, even in the political arena.

Article 30 of the Indian Constitution

The section is reproduced as hereunder:

Right of minorities to establish and administer educational institutions -
(1) All minorities, whether based on religion or language, shall have the right to establish and administer educational institutions of their choice.
(1A) In making any law providing for the compulsory acquisition of any property of an educational institution established and administered by a minority referred to in clause (1), the State shall ensure that the amount fixed by or determined under such law for the acquisition of such property is such as would not restrict or abrogate the right guaranteed under that clause.
(2) The State shall not, in granting aid to educational institutions, discriminate against any educational institution on the ground that it is under the management of a minority, whether based on religion or language.

Notes
- Article 30 of the Indian Constitution deals with the rights of linguistic and religious minorities. In this provision, there is an express use of the term “minorities” instead of the term “sections of citizens” as in Article 29. That is because, unlike Article 29, Article 30 deals only with the rights of minorities. Both provisions emphasize the protection of minorities, even though they confer separate rights (Father Proost v. State of Bihar, AIR 1969 SC 465).
- Interestingly, the term “minorities” has been left undefined. The Motilal Nehru Report (1928) showed a prominent desire to afford protection to minorities but did not define the expression. The Sapru Report (1945) also proposed, inter alia, a Minorities Commission but did not define ‘minority.’ The U.N. Sub-Commission on Prevention of Discrimination and Protection of Minorities has defined ‘minority’ (by an inclusive definition) as under:
  - The term ‘minority’ includes only those non-dominant groups in a population which possess and wish to preserve stable ethnic, religious or linguistic traditions or characteristics markedly different from those of the rest of the population;
  - such minorities should properly include a number of people sufficient by themselves to preserve such traditions or characteristics;
  - and such minorities must be loyal to the State of which they are nationals.
- Article 27 of the International Covenant on Civil and Political Rights does not define the expression but gives the following right to them: “In those States in which ethnic, religious or linguistic minorities exist, persons belonging to such minorities shall not be denied the right, in community with the other members of the group, to enjoy their own culture, to profess and practice their own religion or to use their own language.”
- In Joynal Abunil v. State (AIR 1990 Cal 193, 201, 202) the court summarised the provisions under Article 30 as:
  - Freedom to establish;
  - Freedom to administer an educational institution of their own choice – free from external control with regard to these two aspects. This autonomy cannot be taken away (St. Stephen’s College v. University of Delhi (1992) SCC 558). However, it can be regulated by the State through legislation, provided the legislation serves social welfare and similar regulatory ends. The bottom line is that it should not infringe upon the rights under Article 30 unnecessarily, especially to the extent that the power of administration of the institution is taken away. Legislation that serves the purpose of setting a standard for education and maintaining excellence is valid, as has been held in Islamic Karimia Society, Indore v. Devi Ahilya Vishvavidyalaya, Indore (AIR 1988 MP 200), even where it makes changes to the syllabus.

Conclusion

In conclusion, the protection of educational and cultural rights is the main import of Article 29 and Article 30. It is an earnest effort at achieving inclusiveness; however, the lack of a clear definition of “minority” is felt.
https://legodesk.com/legopedia/cultural-and-educational-rights/
This section draws on teachings from The Neuroscience of Learning, a lecture by Dr. Britt Andreatta on Lynda.com.

LEARNING: HABIT-FORMING FOR VIRTUOUS LOOPS

Growth v. fixed mindsets & learning. According to Dr. Britt Andreatta in her lecture, The Neuroscience of Learning, “Potential is defined as the capacity to develop into something in the future, or to do something more than you can do now.” Potential is unrealized ability. And, crucially, the process of learning is how any potential may become fulfilled: by learning something new, a being’s capacities grow. Cosmos hopes to foster a “growth mindset,” meaning the perception that growth is possible and neither realities nor futures are “fixed.” Philosophically, Cosmos embodies the belief that there is always untapped potential (in the form of natural, intellectual, cultural, creative or other forms of capital) to be utilized for optimal or positive transformations. This ties in with the notion of Cosmos as a representation, and frame, for “playing the infinite game.” A growth mindset aligns with and celebrates the eternal force of change, aka flow (this further ties in with having a process orientation). A fixed mindset emphasizes structure, and feels exaggerated pain and resistance when structures are destroyed or substantially reconfigured. As Dr. Britt Andreatta reports in The Neuroscience of Learning, a growth mindset is conducive to learning; in fact, a growth mindset can lead to a “culture of continuous improvement.” Moreover, she says that people “substantially improve” and better fulfill their potential when they are told “you will be compared to yourself” (meaning: your prior scores on this task) and “we’re looking for improvement,” rather than “you will be compared to others.” According to Andreatta, a fixed mindset has been shown not to be conducive to learning, or to self-actualization. It suggests things just are as they are, and one can do little to change them.

Reflections on the reinforcement of “fixed mindsets” in the present-day dominant culture, and the radical need for “growth mindsets” as reflected in Cosmos’ design. Let it be noted that the status quo system champions the notion that humans are helpless to change the present-day system, that capitalism is natural and inevitable, and so on (thereby preserving the present-day, suboptimal, system). The status quo culture profits from coercing people to constantly compare themselves to others; one’s low self-esteem correlates closely with one’s high rate of consumerism, which is how the profiteers in the status quo system capitalize on your artificial/manufactured misery. The depletion of ALL forms of a person’s natural capital (including one’s self-respect) is directly tied to the profit margins of the wealthy elite. Such profiteers are quite literally extracting people’s hope of attaining authentic happiness from them, and selling it back to people at a marked-up price on the consumer market. This process gradually erodes the natural capital of most of life on Earth in exchange for the greed-driven hoarding of abstract financial capital in the coffers of a surprisingly also unhappy few. What a sub-optimal system! Not surprisingly, the status quo culture emphasizes structure because hegemonic/hierarchical structures are how inequitable power distributions are maintained.
Through aggressive, ubiquitous and manufactured means of enculturation, the population stays convinced that present structures cannot be dissolved, and so the unsustainable, maladapted (some say “cancerous,” i.e. self-consuming) status quo system secures a desperate extension of its short-term “life” at severe long-term expense. Despite its maladaptation (its inappropriate development of an infinite-growth profit game within a finite system of natural capital, and one that is dependent upon ongoing exploitation—see also, cancer), all it knows how to do is secure its own continuation. So, we must be doomed? Unless there is learning. By contrast to the above: a “growth mindset” involves comparing your present status only to your own past status; this mindset posits that there is always unmanifest potential intrinsically dwelling in the present conditions, ripe to be leveraged for transformational change. It is radical in light of the status quo context, and yet extremely organic/human/alive, to embrace and ally with change, to say “The world needs continuous remaking as our understanding and insight grows. We have the right, the honor and the duty to participate meaningfully in the natural flows of change with the tides of time.”
Bloom’s hierarchy of learning. Dr. Andreatta deepens the understanding of how learning occurs through Bloom’s hierarchy of learning (see image below). In fostering learning processes among its members (especially with regard to building out system capacities that require distributed and engaged member intelligence in the system—that is, strengthening the human “web” of knowledge and skills distributed throughout the user base for improved functionality and system resilience), Cosmos would ensure that members are engaging with all six ways of working with new knowledge (to amplify the impact of undertaking member training for the long term). Dr. Andreatta elaborated on the hierarchy using examples from applying it in her own grad-student curricula. These generalized applications of each segment of the hierarchy could be studied with respect to Cosmos’ development of curricula for its need for self-training (i.e., training members in its systems, even as members are in the process of innovating said systems).
Developing new habits. In developing good habits, positive and negative reinforcement, when transparent and unified, work wonders to effect learning. Cosmos intends to provide generous rewards for progress, as well as implement penalties for inappropriate conduct. Dr. Andreatta shared an interesting story about how both, working in tandem, can optimize for the desired outcomes. In Stockholm, Sweden, punishing speeding drivers in the traditional ways (receiving a ticket, being forced to pay a fine, performing community service or doing jail time) was doing nothing to discourage speeders. So stakeholders “actually took all the research on habits and used it to design a new way of handling an issue.” They made a device that monitors a driver’s speed as the car passes through an intersection. If you go over the speed limit, it shows you a thumbs down, takes a picture of your license plate, and sends you a ticket with a fine to pay. AND: if you are not speeding as you drive through the intersection, it shows you a green thumbs up, takes a picture of your license plate, and enters you into a lottery to win the pool of the fines collected from speeding drivers! As a result of this new system, speeding went down 22%.
It was not only the punishment that motivated drivers; the chance of a positive reward motivated them as well. People don’t even have to get the reward: the chance of getting a positive reward is enough. We could experiment with mirroring the success gleaned in this story in Cosmos. For example: say a member completes a task (tied to a goal) successfully. The member may be rewarded directly by peers through LC, AND/OR can expect to earn C> as a result of the “meaningful impact” of their task’s completion. The degree of C> a task earns is kind of like entering a lottery, because it’s hard for an individual to parse ahead of time what the likely C> value for their completed task will be (in other words, what impact Cosmos will assess the task to have had, and thus the reward attached to it in retrospect). Also, re: C>, there may be multiplier factors at play affecting your rate of reward—also hard for an individual to parse upfront. Therefore, it’s kind of a fun unknown—not exactly like a chance to win a lottery, but a similar “hope of a personally gratifying outcome for positive contribution.”
HABIT DESIGN. According to Dr. Andreatta, we can modify existing behaviors and habitual patterns in our lives through a simple, accessible process (listed below). The idea behind why this model is supposed to work is that “neurons that fire together, wire together.” Retrieval is crucial: the more you engage with a task or a goal, the more you develop those pathways in the brain.
- A clear cue that starts the behavior. Something you see or hear. That is the trigger.
- Break the goal into a sequence of baby steps. Immediately after the trigger, if you do the next one small thing, that is the action.
- If the action is completed, you receive a reward.
Imagine this elegant process framed through Cosmos’ interface. After naming a goal, perhaps the system helps you break down some of the steps to get there (or, at the very least, the very next step, with the remainder left ambiguous; the overarching goal would be your “Guiding Star” and the immediate next step would be your “Near Star”). Then, perhaps the system prompts you to program triggers into your user experience, tied to these goals. Perhaps triggers look like notifications, emerging at key moments, that prompt the user to undertake one little action in the direction of their goal. Or maybe when you enter a chosen “space” for working on something, a countdown clock or progress bar brings your attention to the main task. Or maybe the dashboard is programmed for a ritual “sequence” of spaces or activities that are offered each time you log in, or at different times of the day or week, reflecting all your multiple goals and enabling progress toward them in dynamic “balance” with one another. The system may even inquire about (or simply observe) existing habitual patterns of the user, and collaboratively explore opportunities to “attach” triggers to existing patterns. For example, if a user’s goal is to write a book and they want to integrate more writing time into their day, AND the user is already in the daily habit of fraternizing in social writing-support forums/discussion groups on the platform, perhaps the system would suggest (or the user would manually program) that the user be prompted to spend just 10 minutes writing immediately before, or immediately after, a writing-group socializing session.
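To make that cue → action → reward loop concrete, here is a minimal sketch in Python. It is purely illustrative: the `HabitLoop` class, the notification text, and the C> reward message are hypothetical stand-ins, not actual Cosmos design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HabitLoop:
    """One cue -> action -> reward cycle, per Andreatta's model."""
    cue: str                      # the trigger the user sees or hears
    action: Callable[[], bool]    # one small "baby step" toward the goal
    reward: Callable[[], None]    # reinforcement, fired only on completion

    def run(self) -> None:
        print(f"Trigger: {self.cue}")
        if self.action():         # did the user complete the baby step?
            self.reward()         # "neurons that fire together, wire together"

# Hypothetical usage: a writing goal attached to an existing routine.
loop = HabitLoop(
    cue="You just left the writing-support forum -- write for 10 minutes?",
    action=lambda: True,          # stand-in for "user logged a 10-minute session"
    reward=lambda: print("Reward: +C> earned; your Near Star gets closer"),
)
loop.run()
```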
Rewarding.
First and foremost, the system (mindfulAI) would want to ascertain “What rewards are compelling for you?” Or, as Andreatta says, “What rewards would make your brain go, ‘Yeah, I should do this again!’” In keeping with the responsivity principle, the system would strive to tailor rewards to what users express as having value to them. This holds true even when the system “prompts the taking of rewards” that are not offered by, or inherent to, the Cosmos system itself. [This reflects the notion that Cosmos can interact with and have effects on people’s lives holistically, with effects not strictly limited to embodiment ON the virtual platform.] For example: “You did it! Congrats! You’ve earned yourself a hot bath!” Or “You’ve earned some C>!” Or “You’ve earned a 30-minute break listening to THIS favorite podcast at the top of your media queue!” The system could reward with C> for actions taken, and also “announce” to your sponsors/followers/peers that you are making progress on your goals, through which announcement your peers can choose to send you LitCoin. Rewards do not have to be “tangible” capital, either: they can be as simple as hearing a tone or seeing a “reward screen” that lights up the pleasure centers of the brain. One key takeaway, per Andreatta, is that you don’t have to reward forever—only until the habit is formed. This is relevant for building up system-essential, but not necessarily intuitive, habits: for instance, encouraging the user to regularly vote (or otherwise contribute meaningfully) on pending proposals that may affect their own user experience, or training members to perform key functions/roles within the social systems, e.g., community guide, editor, moderator, etc. This is not to suggest that Cosmos would cease rewarding essential contributors for valuable work; simply that, sometimes, the habit of serving a certain social role is self-rewarding (aka intrinsically rewarding), e.g. serving as a mentor or mediator to beloved peers, but tangible rewards may need to be put in place to initiate the new participatory pathway in a user. According to Andreatta, “if you want someone to learn a new system fast, plan for multiple short exposures” to the practices of using the new system, using learning-enhancement principles (e.g. social rewards, music), using retrieval (i.e. can you remember what you did last time? Show me, and then I’ll take you to the next step of learning), and rewarding the user for successfully doing things with the system. This is how intensive, rapid learning can take place, and thus how an overall market edge can be gained by a company utilizing these methods.
(COSMOS) HABIT-FORMING. REWARDING. POSITIVE-FEELING-GENERATING (BEING “KIND” AND SUPPORTIVE AS A NORM, modeled in social and technological ways.) “Kindness that’s habit-forming.”
A key distinction: Cosmos is concerned with amplifying feedback loops that lead to self-realization, and in parallel is concerned with replacing patterns not conducive to human self-actualization (many of which will creep in by osmosis through exposure to the status-quo cultural environment of the “outside world”). Cosmos does strive to help effect new habit formation and learning in members. The platform itself seeks to be “habit-forming” insofar as it is gratifying to use, “fun to play the game.” But Cosmos values “off-platform” success just as much as on-platform success. For example, say Member A (as an inspired result of using Cosmos) authentically determines that they want to spend less time on the platform and more time focused on writing.
Or, Member B had defined on Cosmos that she was looking for work in her field, and then (perhaps indirectly through her focus on that goal, or through serendipity in her social network) found that work outside of Cosmos. Member A might ramp up how often Cosmos is “muted” or “turned off”; Member B might update the goals & parameters in her dashboard to reflect these life changes and how they intersect with her goal attainment. In both cases, Cosmos would attribute value/success to itself, because even though the linked activities are occurring off the platform, they correspond to users’ progress toward stated goals on the platform. And since Cosmos is ultimately concerned with whether its members’ needs are getting met… there you have it. (There is an admittedly fine line between habit-forming and addictive: it is important that Cosmos not seek to intensify a member’s overall use, per se, but rather that the value measured by participant interactions/activities intensifies, and/or that progress toward goals, and successive goal-striving and goal-attainment, shows a growth trend. See also Addiction in Limiting/Negative Feedback Loops.) Even just witnessing a progress bar “level up” can be inherently rewarding… continue on to Leveling Up!
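One more sketch before leveling up: Andreatta’s point above, that rewards are only needed until the habit is formed, could be modeled as a reward probability that decays with each completion. Everything here is an assumption for illustration; the 21-completion threshold is folk shorthand for habit formation, not a Cosmos parameter.

```python
import random

def maybe_reward(completions: int, habit_threshold: int = 21) -> bool:
    """Decide whether to grant an extrinsic reward for this completion.

    Early completions are always rewarded; the chance then decays
    linearly, and once the (assumed) habit threshold is reached, the
    intrinsic reward of the role is left to carry the habit.
    """
    if completions >= habit_threshold:
        return False                             # habit formed: stop rewarding
    chance = 1.0 - completions / habit_threshold
    return random.random() < chance

# e.g. a new moderator: rewarded almost every time at first, rarely later
for n in (1, 5, 15, 25):
    print(n, maybe_reward(n))
```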
https://www.infiniteconversations.com/t/learning-habit-loops/2548
The Educational Technology Centre, with its three sections, namely the Computer Services Section (CSS), the Educational Services Section (ESS) and the Library Services Section (LSS), is an integral hub for the development and integration of technology and innovation. The CSS provides IT support and technical services for HCT departments, both academic and administrative, for staff and students alike. Services include network support, technical support, hardware support, software support, web development and software development. The ESS meets the educational needs of all departments, providing equipment, reproduction of learning materials and a portal. The LSS lends educational books, periodicals, subscription materials and other learning resources, and provides internet access for the research work of students and staff. The Educational Technology Centre gives appropriate support to teachers and students to equip them with a full knowledge of Information and Communication Technologies (ICTs). Its services include the following: Academic Technology Services meets the evolving information technology needs of HCT academic programs, providing support for faculty in integrating information technology into teaching and learning as well as exploring, developing, and promoting the use of next-generation technologies to support and enrich instruction at HCT. Student Computing Services provides facilities and support, including public computer labs, wireless access, and wired network access in residence hall rooms, to meet students’ needs for technology in their class work and, to the extent possible, for personal learning projects. Administrative Systems supports efficient and effective administrative operations through technical support and other information management systems in the College, in order to ensure data security and to provide timely, accurate data as the basis for management decisions. Help Services solves problems for faculty, staff, and students, helping them to use technology to meet their needs in learning, research, or administration. Helpdesk Services is the campus-wide point of contact for assistance from ETC for services, projects, and supported software and hardware issues. Desktop Support Services assists teaching, learning, research and administrative operations by providing standardized, up-to-date, and reliable hardware and software configurations for faculty and staff desktop workstations. Data Network Services provides a flexible, secure, manageable, and reliable network infrastructure and network-based services to support teaching and learning, research, and administrative operations. It manages central systems and servers that support teaching and research, collaboration, communication, data storage, administrative and academic operations, and connectivity to Internet resources. Information and Communication Services reduces the risk of miscommunication through consistent messages; improves response times and coordination; allows messages to be sent to students from anywhere, at any time, within minutes through the Student Messaging System; carries no additional charges for Web-based services; and automates time-intensive, error-prone manual processes, saving time and keeping information accurate and up to date.
https://www.hct.edu.om/centers/etc/about-etc/
HACT has published new research highlighting the need for a new approach to customer satisfaction in the social housing sector. Rethinking Customer Insight: Moving beyond the numbers is published today by HACT following a two-year research project funded by seven leading housing associations. The report highlights two key issues for the future of customer satisfaction:
- First, rather than collecting data to demonstrate how good social housing providers are, they need to collect data to improve how good they are.
- Second, communicating with customers needs to be more targeted, more intelligent and more responsive.
The report concludes that there is no one-size-fits-all model for customer satisfaction. Rather than a new model, we are recommending the sector adopt a new approach to customer satisfaction:
- Only ask a question if it’s relevant to your business and your business objectives.
- If you’re not going to use the responses to the question to develop actionable insights, don’t ask the question.
- If you’re going to use the data, tell your customers why you’re asking the question, what you’re doing with their responses, and how they can continue to be involved.
- Make it as easy as possible for your customers to engage with the process using their choice of channel, at a time of their choosing.
- If satisfaction rates for a particular service plateau, perhaps it’s time to stop asking questions, at least until changes are made to the service. Start asking again to gauge the impact of the changes.
- Review, reflect and redesign, and start again.
The research was funded and supported by bpha, Catalyst, Equity, OneManchester, Peabody, settle and Trafford Housing Trust. The report is available to download from the HACT web site. The research report evolved out of a scoping study from 2015 that found:
- There were concerns about inconsistency and the implications of using a specific survey mode.
- Survey responses can be impressionistic and not indicative of business performance.
- Current methods of aggregated benchmarking were flawed; results did not provide meaningful business insights to enable informed decision-making and were of limited value.
- Massive amounts of data are being collected, but this data is not being used, leading to significant levels of resource wasted both on collection and analysis.
https://www.hact.org.uk/news/rethinking-customer-insight
Self-regulation is your child’s ability to control their emotional and behavioral responses to any given situation. It’s the ability to calm themselves down when they are upset. It’s being able to wait to be called upon to give an answer. It’s the ability to handle frustration, adapt to changes in routines, share materials, wait a turn, or solve a social problem or conflict. It’s any opportunity your child has to practice self-control. And boy, is it difficult. While self-regulation should be the #1 focus of all preschool programs, unfortunately, it often gets brushed aside as an afterthought. All too often teachers (and parents) take on the role of solving all of their children’s conflicts, rather than facilitating interactions and helping the child build resilience. Little ones are put in time-out for bad behavior, instead of being given tools and strategies to help them be successful the next time. In an effort to “prepare” kids for kindergarten, many preschools put free play on the back burner and focus on teaching academics. But free play is how children learn to manage their impulses and gain self-control. Without free play, children are at a major disadvantage for kindergarten. Kindergarten is becoming more and more academically rigorous each year, so your child’s ability to self-regulate is more important now than ever before. Without being able to keep their emotions and behaviors in check, children cannot possibly focus on the difficult academic concepts they are expected to learn. Little ones who struggle with self-regulation often spend their entire day trying to keep themselves from falling apart, rather than absorbing any academic content. Self-regulation is not a skill that can be acquired through rote memorization or drills. Children need many, many social opportunities to practice their ability to wait, calm themselves down, and problem-solve without emotional outbursts. When your child is going to be faced with a new experience, make sure you talk about what to expect and how to handle the new situation. For example, if your child is going over to a friend’s house for a playdate for the first time, talk to them about the expectations. You can also make your child their own book about whatever it is they are struggling with (these are commonly known as “Social Stories”). Take pictures of your child and create your own simple picture book talking about the problem your child is facing and the solution or strategies they can use to make it better. Children love seeing their own images in books, so this idea is very effective! Spend time each day practicing calming strategies with your child – and practice when they aren’t upset! Model for them how to take big, deep breaths, and use your own experiences to show them how to put that into practice. You can also help your child recognize what has helped them calm down in the past. Say, “I noticed that last time you were upset it really helped when you squeezed your bear.” Help your child develop a toolbox of strategies to use when they are upset. This seems obvious, but it’s so critical: no matter how much you talk about it or read about it, if your child is tired or fresh-air deprived, their self-regulation skills are going to suffer. Five-year-olds should be getting between 10 and 13 hours of sleep each day. Some 5-year-olds still need to nap, but most children get all of these hours of sleep at night.
Children also need lots of opportunities for active movement and outdoor play in order to be at their best for keeping themselves regulated throughout the day.
https://ourdailymess.com/2018/08/01/the-1-skill-your-child-needs-for-success-in-kindergarten/
While many reports recognise the advantage of an outcomes focus in government, too often our public bodies concentrate on individual outputs. Many agencies are still working on short-term or one-year planning and budgeting cycles, and in some cases confusing their outputs with outcomes. As relatively simple measures of what an organisation has achieved, it is unsurprising that outputs can be attractive indicators of success. For example, the outputs from the body that looks after roadworks might include the number of repairs completed, miles of roadway refurbished or newly built, and at what cost. Outcomes can be more complex. They require an organisation to consider whether what it is doing is making a difference and to what. From our previous example, this could include looking to see whether traffic flow is better, journey times shorter, if there are fewer accidents, or if the works contributed to wider economic development. Successfully re-orientating our public bodies to an outcomes focus requires leadership from the top. Unfortunately, lack of certainty in government funding has made the long-term and even medium-term planning required to achieve this outcomes focus increasingly difficult. However, there are examples within the UK of organisations shifting their focus. The Scottish Government in its national performance framework, and the Welsh Government in its national strategy, are both now pivoting towards a focus on outcomes. These documents make clear statements of long-term outcomes that set the tone for what public bodies should plan to contribute towards, with a focus on social, economic, environmental and cultural wellbeing. Within individual public bodies, there are also examples of outcomes-based approaches. Highways England in its delivery plan for 2015-20, for instance, aims to deliver on wider strategic outcomes, such as supporting economic growth. Such a shift is not a simple task – once outcomes have been set, you will frequently find that important factors are not under the control of any single body. This raises questions about how the resources for those interventions are pooled and shared widely enough to ensure that changes in outcomes are obtained. This puts forward a strong case not only for cross-government collaboration but potentially partnerships and collaboration with the wider community and private sector. For example, improving the wellbeing of children in a certain area might require an improvement across a range of areas – from housing standards to access to pre-school care and education. Another challenge comes with monitoring our progress. If no one organisation has the full picture, judgements on effectiveness must come from a higher level. So governance and scrutiny arrangements must be moved away from a particular organisation to a custodian of the overall plan for outcomes. This is a significant change from the traditional, functional, organisation-based and mainly short-term budgeting focus used by many organisations. However, with continued pressure on available resources, we need to look at delivery differently. As finance professionals, we are well placed to offer a range of tools to look at costs and service planning in different ways. In future, we must support the long-term sustainability of services by helping public bodies to step outside traditional silos.
https://www.publicfinance.co.uk/opinion/2019/01/lets-not-muddle-outputs-outcomes
Given the rigour and hard work required in an evaluation of intangible assets conducted by the company itself, which has full access to all relevant information, what should we make of the annual ‘hit parade’ charts which appear in the economics press, giving new values for the top worldwide brands? Why such big differences between valuations? The Interbrand research company, which is overwhelmingly the main producer of such data, has used two methods over time. Historically, it attempted to derive values for brand EVA from public information in the annual reports of stock-exchange-listed companies and a variety of other public sources. Not being able to work with a business plan, given the confidentiality of company plans, Interbrand instead analysed data from the last two years. So how did it make the leap from EVA to brand value? It used an estimate of the share of EVA attributable to the brand, multiplied by a figure (the ‘multiple’), itself derived from a statistical model based on the analysis of the price/earnings ratio (p/e) for stock-exchange-listed companies such as Gillette. The price/earnings ratio is actually a multiple itself. It compares the stock value with the profits associated with that stock: it will indicate, say, that a stock is worth 10 times its annual earnings. Interbrand configured its statistical model using stock-exchange-listed companies. Knowing the multiple (p/e) for each company, it performed a strategic analysis of its brands, following a method similar to the one we have described for our strategic audit of the brand. The end result of Interbrand’s strategic evaluation of the brand is an overall score for the brand, measuring the strength of the brand (the ‘brand strength index’). This is the sum of the partial scores obtained from each of the individual audit criteria (leadership, stability and so on). It is then easy to identify the statistical relationship between the recalculated strength of the brands and the virtual multiple approximated by the price/earnings ratio (p/e) on the stock exchange. This statistical relationship has never been published, but it has been represented graphically (the figure is not reproduced here). Having produced an external estimate of the EVA for each brand, it was then easy for Interbrand to calculate the brand strength index which, when factored into the statistical model, identifies the virtual multiple. All that remained at this point was to apply this virtual multiple to the share of estimated EVA allocated to the brand. Several remarks can be made about this external procedure, which is used to produce the published ‘league tables’ of global brand value. The tables are based on this logic, except that their producers are not in possession of all the relevant information (as opposed to, say, an auditor appointed by the company to value its brands). They are thus obliged to work from an external estimation based on the accounts published by stock-exchange-listed companies, and the figures are subject to a wide margin of error. Furthermore, these league tables cannot measure the value of brands belonging to family-run companies such as Mars, Levi’s and Lacoste, which do not release public figures. Nor can they include brands belonging to companies producing consolidated accounts that are not broken down by brand. Lastly, they exclude cases in which sales may be attributable to factors other than pure demand.
Consider air transport, for example, where the policy of alliances means that it is possible to end up flying with Delta Airlines after having bought an Air France ticket. Also, a significant part of demand is influenced by exit barriers such as frequent-flyer cards: this is not pure demand driven by customer preference. Other critical remarks may be made about this approach, as we have already seen, including its sensitivity to variations in the multiple and the validity of the underlying graph. Recently, Interbrand has changed its method of producing its ‘global brand value’ league tables, moving towards a more conventional financial and economic approach. Although its methodology has not been explicitly published, reference has been made to the ‘net present value of future brand earnings’, which would be more in line with our recommended nine-step process.
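Since Interbrand’s strength-to-multiple curve was never published, the mechanics can only be illustrated under assumptions. The short Python sketch below uses an invented logistic S-curve to map a brand strength index (0 to 100) onto an earnings multiple; the function names, the curve’s shape and parameters, and the example figures are all hypothetical, not Interbrand’s actual model:

```python
import math

def strength_to_multiple(strength: float, max_multiple: float = 20.0) -> float:
    """Map a brand strength index (0-100) onto an earnings multiple.

    Interbrand's real curve has never been published; this logistic
    S-curve is an illustrative assumption only.
    """
    return max_multiple / (1.0 + math.exp(-0.1 * (strength - 50.0)))

def brand_value(eva: float, brand_share: float, strength: float) -> float:
    """Brand value = (share of EVA attributed to the brand) x multiple."""
    return eva * brand_share * strength_to_multiple(strength)

# Hypothetical brand: EVA of 400m, 60% of it attributed to the brand,
# and a strength index of 70, giving a multiple of roughly 17.6.
print(f"{brand_value(400e6, 0.60, 70.0) / 1e6:.0f}m")  # ~4228m
```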
https://www.wisdomjobs.com/e-university/strategic-brand-management-tutorial-350/what-about-the-brand-values-published-annually-in-the-press-11275.html
SpaceX's Starlink project will make the detection of killer asteroids very difficult, said Dr Jonathan McDowell, an astronomer at the Harvard Smithsonian Centre for Astrophysics, in an interview with the Express. McDowell noted that while the current number of satellites launched by Elon Musk’s company is merely a "pain in the neck" that the scientific community can handle, the final number will make scientific research impossible. "It’s unclear just how bad it’s going to be right now, but it’s not an order of magnitude away from the really bad case. So while a constellation of 300 is a pain in the neck, we can handle it – 12,000 is going to make it very difficult, especially for these asteroids", McDowell told the Express. The asteroids McDowell refers to are near-Earth objects: celestial bodies that may pose a danger to the Earth because of the close proximity in which they move. Most scientific research is conducted late at night, when the sky is dark. But when it comes to the detection of near-Earth objects, which travel close in the sky to the Sun, researchers look for them just after sunset, and that is precisely when hundreds of Starlink’s satellites illuminate the sky. McDowell called for regulation, which he said would help solve the issue and allow the scientific community and SpaceX to work together. However, he stressed that the measures being proposed at the moment are not effective. "SpaceX has said they’re going to try and reduce the brightness of their satellites which might help a bit but it doesn’t seem like they’ve been successful on their first try at this. Maybe you could require them to have a little higher orbits, but we’re not sure if that’s going to help yet", McDowell told the Express. SpaceX has already launched 180 satellites and plans to increase this number to 1,400 by the end of the year. The company hopes to finish the project in 2027; by then, it hopes to have launched 42,000 satellites. Activists and astronomers say the number of satellites and their proximity to Earth pose a risk to life on Earth and hamper scientific work.
https://sputniknews.com/20200315/elon-musks-starlink-project-will-make-detection-of-killer-asteroids-very-diffiult---astronomer-1078572796.html
Franklin D. Roosevelt: Signed Sheet of Stamps as President.
Description: Franklin D. Roosevelt: Signed Sheet of Stamps as President. Sheet of 50 U.S. five-cent stamps from the "Occupied Nations" series, this one featuring the flag of Luxembourg. 8.5" x 10.75". Co-signed by Secretary of the Treasury Henry Morgenthau, Jr. Full margins, two tiny perforations along the right edge, imprinted top and bottom "From the Franklin D. Roosevelt Collection / Auctioned Feb. - April, 1946 by H. R. Harmer Inc. N.Y." Overall excellent condition. A very late Roosevelt signature from early 1945.
More Information: The extended description below was supplied by the consignor. We are making it available to our web bidders who are interested in more in-depth research and broader historical perspective. Please note that presentation (i.e. framing), lot divisions, and interpretations of condition and content may occasionally differ from our descriptions. Assertions of fact and subjective observations contained in this description represent the opinion of the consignor. These remarks have not been checked for accuracy by Heritage Auctions, and we assume no responsibility for their accuracy; they are offered purely to allow the bidder insight into the way the consignor has viewed the item(s) in question. No right of return or claim of lack of authenticity or provenance based upon this extended description will be granted.
A splendid item, especially given the fact that Franklin D. Roosevelt was a lifelong philatelist! A full sheet from the United States stamp series on "occupied nations" from 1944. In this five-cent stamp series, a stamp honored each country occupied by the Axis powers during World War II. This is a Luxembourg sheet. On the left margin are the signatures of "Franklin D. Roosevelt" as President and of "Henry Morgenthau, Jr.", FDR's friend and Secretary of the Treasury, the latter to the left of FDR's bold signature. The sheet is mint, never hinged and intact with full selvedge. At the top and bottom of the sheet is the notation: "From the Franklin D. Roosevelt Collection / Auctioned April 12, 1946 by H.R. Harmer, N.Y." Beyond the fact that both FDR and his Secretary of the Treasury signed this sheet of fifty stamps honoring occupied Luxembourg during World War II, FDR's signature on the sheet most probably dates to 1945, perhaps to his last months as President! A comparison of FDR's full signature on this stamp sheet with other signatures contained in the Study and Development of the Signature of Franklin Delano Roosevelt, 1911-1945, prepared under the auspices of Mr. John Reznikoff, Founder and President of University Archives, shows the fountain-pen signature to be consistent with the February to April 1945 time period, considering slant, flow, pen pressure, letter size, and other characteristics.
Auction Info: Buyer's Premium per Lot: 19.5% of the successful bid (minimum $9) per lot.
https://historical.ha.com/itm/autographs/u.s.-presidents/franklin-d-roosevelt-signed-sheet-of-stamps-as-president/a/6001-53246.s?ic16=ViewItem-Auction-Archive-JumpLot-PreviousLot-050318
The reservoir is 70 cm wide, 70 cm long and 50 cm high. The wall thickness is 10 cm. Which is the smallest volume? The reservoir lays on a concrete foundation and you do not need to take into account the bottom. Please identify the largest area. A symmetrical figure can be folded in half and have both sides match exactly. Of the 26 capital letters, how many are symmetrical along the vertical axis? A wooden cube measures 10 cm on an edge. What is the greatest number of small cubes that can be obtained by cutting the big cube? What is the area of the colored shape? In the figure, each number represents the length of the nearest segment. All corner angles are right angles. What is the area of the figure? The figure shows a pentagon and its exterior angles. Find the sum of the exterior angles. Which of these shapes has the smallest area? What is the total number of lines of symmetry that can be drawn on the shape? There are two points A and B on the grid. We want to plot two more points and then connect all four points to form a square. Which two points should we plot to form a square? What is the area, in square units, of the polygon? Which shape has only one pair of parallel sides? An underground pipe drains water from a house. The pipe must drop 11 centimeters for each 1 meter of the pipe's length. Find the total drop d of an 11-meter-long section of this pipe. Is the dotted line a line of symmetry? Find the two similar shapes. Which shape has 3 lines of symmetry? Anna and Bob are playing tic-tac-toe on their game board. The picture shows the last position. Which picture shows how the game board would look after being rotated (turned) 90° counterclockwise (the opposite direction of clockwise)? None of these answers is correct. Which figure has the largest perimeter? If you look at this object from the front, what will you see? The perimeter of the star is equal to the perimeter of the regular hexagon. Each side of the hexagon is 15 cm long. How long is each side of the star? A rectangular greeting card has an area of 330 cm2. One side of the rectangle is 7 cm longer than the other side. How long is the shorter side? Green Tennis Club members are making a logo. The logo of the club is blue squares with a green shape. How many blue squares are covered (partially or completely) by the green shape? How can you go from A to B? Remember, there may be more than one correct answer. Which angle measures more than a right angle? Two triangles form a parallelogram with an area of 100 square meters. What is the area, in square meters, of the left triangle? There is not enough information to answer this question. What is the approximate measure of this angle in degrees? We will increase the height of the photograph to 22.5 cm while maintaining the ratio of height to width. What will be the width of the enlarged photograph? Anna wraps ribbon around a present. She also needs 34 more centimeters (cm) of ribbon to tie a bow. How much ribbon does she need to wrap the present and to tie the bow? What is the perimeter of the shaded figure? The street blocks on State Street have the same length. What is the distance between the midpoint of AB and the midpoint of CD? Which new figure can be formed by placing these triangles together? Which two roads on this map appear to be parallel? Central St. and Coral St. River St. and State St. State St. and Central St. River St. and Port St. A 7x7x7 cube is painted, and then cut into 1x1x1 cubes. How many of these cubes are painted on 3 sides? 
Which line segments are equal? What is the maximum number of boxes measuring 0.3m x 0.2m x 1m that can be packed into a container measuring 1.01m x 2m x 3m? For which letter did I use the smallest amount of green paint? Which arc comes from the larger circle? Select the shape with the largest perimeter. The tower shown is made of cubes stacked on top of each other. Some of the cubes are not visible. How many cubes in all are used to form the tower? In a 3 x 3 grid, there are 8 lines that contain at least 3 points. How many such lines are there in a 3 x 5 grid? How many triangles are there? How many different lines can you draw through points C, D, E and F that are parallel to the line AB? Which area is the smallest? How many squares of any size exist in the figure? Which of these parts has the smallest area? How many different lines can you draw through these five points? The picture shows a regular polygon. Which line is parallel to the line AB? Removing which square results in an increase of the perimeter of this figure? The line is 0.25 millimeters thick and 8 kilometers long. What area does the line cover? Which of the following could be the graph showing the perimeter Y of a square with a side length X? John ate a share of a pizza and gave the other part to his friends Jim and Jack. The diagram shows what Jim and Jack equally shared from the pizza. How much of the pizza does Jack eat? Estimate how many times larger the red area is compared with the blue area. I cut a triangle from a square piece of cardboard as shown in the picture. Which shape can I not form by fitting the pieces together? For which shape are all sides the same length? Which two shapes have the largest common area? How many axes of symmetry does the figure have? Find the top view of the figure. Which rectangle has the smallest area? The game Battleship (Battleships or Sea Battle) is a guessing game played by two people. How many four-space battleships can you place on the 10 x 10 grid? They should be placed in straight lines, either horizontally or vertically, but must not overlap. Find a pair of beads that are exactly the same distance apart as the length of the line segment AB. The distance between the beads is the distance between the beads' centers. Which point is the closest to the line AB? Which shape has the smallest perimeter? How many times larger than the area of letter T is the area of the square? Which point is the closest to point A? For which drawing did I use the greatest amount of ink? Which shape has the smallest area? What two pieces will fit together perfectly to form the same shape as the shadow shown on the right? You may rotate the shapes. I am using red and yellow tiles to make the pattern shown on the right. If I continue the pattern in the same way, how many tiles will be in the next "circle"? I drew a straight chalk line across the 8x8 chessboard. What is the largest number of squares that the line can go through? I draw a line through each pair of the vertices of a square. Into how many regions do I divide the plane? Find the maximum possible number. How many lines can you draw through these six points? Each line has to pass through at least two points. A4 paper measures 210 x 297 millimeters. How many A4 sheets can you cut from an A0 sheet? If I fold a square paper vertically, the new rectangle has an area of 32 square units. What is the perimeter of the original square? How many upper case letters look the same when rotated 180 degrees?
Remember, a rotation of 180 degrees is the same thing as rotating something until it is upside down. Russia has an area of 17,075,200 square kilometers. Canada has an area of 9,976,140 square kilometers. The United States of America has an area of 9,629,091 square kilometers. China has an area of 9,596,960 square kilometers. India has an area of 3,287,590 square kilometers. Which area is the smallest among the options below? The letter 'A' is symmetrical. A symmetrical figure can be folded in half and have both halves match exactly. Of the 26 capital letters of the alphabet, how many are not symmetrical? Which segment has a length of 9 units? Which shape has the smallest angle? I assemble rectangles from colored squares. All the squares on the edges are red, and the inner squares are blue. For example, the figure shows a 5 x 9 rectangle with 24 red and 21 blue squares. Find the rectangle where the number of red squares is equal to the number of blue squares. Which of these shapes has the largest area? Which shape has 4 lines of symmetry? None of these options is correct. Which picture shows how the game board would look after being rotated (turned) 180° clockwise? Each side of the hexagon is 12 cm long. Which line segment is the smallest? Tim expands his rectangular-shaped garage. The garage is 5 meters wide and 7 meters long. He wants to triple the width and to double the length. How does this affect the area of the garage? The area is 7 times greater. The area is 8 times greater. The area is 5 times greater. The area is 6 times greater. How many of the small triangles would be needed to cover the trapezoid? Which angle is the smallest? In the picture, figures ABCDEF and UVWXYZ have the same shape and size. Which side must have the same length as side CD? Which shape below has the same perimeter as the shape at the right? How many cubes are not visible? The diagram shows where the balloons were found. How many balloons traveled more than 15 km? From which direction did the wind blow? A pentagon (from pente, which is Greek for the number 5) is any five-sided polygon. How many pentagons are there? A hexagon (from hex, which is Greek for the number 6) is any six-sided polygon. How many hexagons are there? How many letters in the word MATH have only one line of symmetry? How many cubes with side length 0.3 can fit in the box? All angles are right and the lengths of the sides are given in miles in the diagram. Squares 1, 2, and 3 have sides of length 1, 2, and 3 units, respectively. What is the perimeter of the entire figure? How many small cubes were used to build this solid figure? What is the minimum number of straight lines needed to draw two squares? What is the maximum number of times two equal hexagons can intersect? What is the area of the squares (each division in the graph represents 1 mile)? Estimate the ratio of areas of the two parts. How many squares are there? Find the sum of the angles. How many small squares are needed to completely cover the surface of the green triangle? You can cut the squares. Can we use each of the four small shapes on the left exactly once to make the triangle on the right without any cutting or overlapping? How far apart are the marked point and the end of the scale? A shop owner installs a security camera on the ceiling of his shop. The camera can turn up-down and right round through 360°. The picture shows the design of the shop. Where can a thief be hidden without being detected by the camera? I would like to add more blocks to form a cube.
I don't move the original blocks in the pyramid. How many blocks must be added to form a cube? The sculpture is built with identical cubes. If the sculpture weighs a total of 1540 tons, how much does one cube weigh? In geometry, a hexagon is any six-sided shape. The tower is made up of three horizontal layers of cubes with no gaps. How many more small blue cubes must be added to form a large 3x3x3 cube? Which of these parts has the largest area? The large cube is composed of smaller cubes. How many small blue cubes are there? If I wish to double the area of the rectangle, what is the minimum number of sticks I need to add? Tree frogs can jump 7 feet, which is 50 times the length of their body. Jonny the Frog is lazy today. He makes successive jumps of length exactly 2 feet. What is the smallest possible number of jumps he can make to end exactly 7 feet eastward from the start point? Which triangle will have the largest perimeter? Although it doesn't look like it, the distances along any of the blue strings are equal because it is a 3-dimensional object. What is the shortest distance between blobs 03 and 09? You can only travel along a blue string. I would like to open a ring so that all rings are separated. Which ring do I need to open? The links look like Borromean rings that are used in the coat of arms of the aristocratic Borromeo family in Northern Italy. Which shape has the area of 20 square units? Which line segment is parallel to AB? How many more cubes would be needed to make the pyramid 8 steps high? Gerry divides a cube into equal parts using 4 vertical planar cuts. How many parts does he get? I can choose 36 different 3x3 squares on the 8x8 grid. How many of them don't contain green cell(s)? I draw a shape on squared paper. Find the perimeter of the composite rectilinear shape. An ant goes from point A to point B along the indicated path. Does the insect turn left or right more frequently? The number of left turns is the same as the number of right turns. It depends how fast the ant goes.
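As a worked example of one of the puzzles above, take the 7x7x7 painted cube. The standard counting argument generalizes to any n x n x n cube (for n = 7, exactly the 8 corner cubes have three painted faces), and the short Python sketch below encodes it; the function name is invented here for illustration.

```python
def painted_face_counts(n: int) -> dict[int, int]:
    """For an n x n x n cube painted on the outside and cut into unit
    cubes, count how many small cubes have 3, 2, 1, or 0 painted faces."""
    return {
        3: 8,                    # corners
        2: 12 * (n - 2),         # edge cubes, excluding corners
        1: 6 * (n - 2) ** 2,     # face-interior cubes
        0: (n - 2) ** 3,         # fully hidden interior cubes
    }

counts = painted_face_counts(7)
print(counts)                    # {3: 8, 2: 60, 1: 150, 0: 125}
assert sum(counts.values()) == 7 ** 3   # sanity check: 343 unit cubes
```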
https://www.aplusclick.org/k/k6-geometry.htm