Key words: climate change, global warming, energy policy, cap and trade, climate change policy, Waxman-Markey, offsets, Congress Over-Allocation of Pollution Permits Would Result in No Emissions Reduction Requirement in Early Years of Climate Program 28 Sep, 2009 08:10 am The large decline in U.S. emissions in 2008 and 2009 due to the economic recession means that if the House-passed Waxman-Markey climate legislation becomes law, the bill's emissions reduction cap will require no reduction of carbon emissions over the first two to five years of the program. The resulting oversupply of emissions permits will allow regulated firms to continue business-as-usual emissions through as late as 2018, according to a new analysis by the Breakthrough Institute based on new Energy Information Administration emissions projections that take into account the impacts of the global recession. This substantial over-allocation of emission allowances in the early years of the program, combined with the free allocation of a high percentage of emissions allowances under Waxman-Markey and lower global demand for offsets from recession-hit EU and U.S. firms, will likely lead to a cap and trade program awash in both cheap emissions allowances and offsets over at least the first decade of implementation. Under such conditions, the functional carbon price for the first decade or more under Waxman-Markey is likely to hover at or even below the $10 per ton floor on allowance auction prices (rising slowly each year) established by the bill. The reason for this projected over-allocation of pollution allowances is that the House cap and trade legislation would initially distribute allowances based on 2005 emissions levels, which were much higher than 2008 or 2009 levels. By the time the Waxman-Markey emissions cap would go into effect in 2012, U.S. emissions may still be recovering toward pre-recession levels and may remain substantially lower than historic 2005 levels. To test potential economic and emissions recovery scenarios, the Breakthrough Institute used economic recovery forecasts from the EIA and the U.S. Congressional Budget Office (CBO), and greenhouse gas emissions projections from the EIA and U.S. Environmental Protection Agency (EPA), to construct two emissions recovery scenarios -- a "slow recovery" scenario and a "fast recovery" scenario -- to provide a range of outcomes. Under the slow recovery scenario, U.S. emissions remain seven percent below 2005 levels in 2012; under the fast recovery scenario they are three percent below 2005 levels in 2012. Under the slow recovery scenario, relatively low business-as-usual (BAU) emissions projections and the banking of excess permits by firms mean that the Waxman-Markey cap would not require firms to reduce emissions at all -- either themselves or through purchasing offsets -- until 2018. Under the fast recovery scenario, emissions reductions would not be required until 2014. Slack demand and the resulting low price for pollution permits would create a strong incentive for firms to hedge their future carbon liabilities by buying and banking emissions credits while they are in excess and prices are low, building up a bank of permits for the future while continuing business-as-usual practices. Furthermore, even if they utilized just a fraction of the available offsets each year, U.S. firms would not be required to reduce their own emissions until as late as 2030 or beyond. 
Depending on how quickly the economy recovers and emissions rise, firms would need to use just 6 percent to 25 percent of the total amount of offsets permitted under the House bill for emissions levels to rise at business-as-usual rates through 2020, and only 43 percent to 66 percent of total offsets to continue emissions growth through 2030. The Waxman-Markey floor on permit prices would prevent the auction price of carbon dioxide in the primary auction market from dropping below $10/ton. However, with the majority of permits given away for free in the first decades of the Waxman-Markey cap and trade program and a substantial over-allocation of permits projected in the early years of the program, the correspondingly slack demand for permits may result in a large secondary market for emissions allocations in which permits trade for substantially less than $10/ton. The result of such an outcome would be a functional carbon price substantially below the nominal $10/ton floor established by the statute for the primary allowance market. Even at the auction floor price, the carbon price is unlikely to be high enough to act as a strong incentive for firms to improve their energy efficiency at above-BAU rates, or to shift to low-carbon power sources. A carbon dioxide price of $40 per ton in the EU in 2008, for example, was not high enough to derail European plans to build 50 new conventional coal-fired power plants over the following five years. Under the slow recovery scenario, just six percent of the total offsets permitted under the House legislation would need to be purchased for firms to meet their 2020 emissions reduction requirements, and 43 percent would need to be purchased for firms to meet their 2030 requirements. Under the fast recovery scenario, 25 percent of the total permitted offsets would need to be utilized for firms to meet 2020 emissions reduction requirements, and 66 percent to meet their 2030 requirements. Both the CBO and EPA analyses of the ACES legislation project significant utilization of emissions offsets. CBO's relatively conservative offset projections forecast that 26% of total permitted offsets will be used through 2020 and 38% through 2030, while EPA's more generous projections forecast that 61% of total permitted offsets will be utilized through 2020 and 64% through 2030. Under either set of assumptions, the total supply of permits and offsets created under the ACES cap and trade program would legally permit U.S. emissions to continue at BAU rates through most if not all of the next two decades. And with emissions down in the EU and across the globe due to the recession, demand for offsets from the European ETS and other emissions trading programs will be down as well, increasing the likely supply of offsets at affordable prices. This year, the global recession is expected to drive the biggest annual drop in global greenhouse gas emissions in forty years. With full economic recovery in the U.S. and globally likely to take several years, the latest projections from the EIA have revised expected greenhouse gas emissions levels downwards. EIA projects U.S. emissions from the burning of fossil fuels will fall six percent in 2009, after dropping three percent in 2008. Emissions would rebound slightly as the economy begins to recover, rising less than one percent (0.9%) in 2010 in the EIA's projections, but may not fully rebound to historic 2005 levels until well past 2012. In the slow recovery scenario, 
U.S. emissions follow the projections in the new EIA Short-Term Energy Outlook (September 2009) through 2010, before returning to the long-term growth rates projected in the EIA's Annual Energy Outlook 2009 from 2011 through 2030. The fast recovery scenario reflects the more optimistic economic outlook in the CBO's most recent forecasts (August 2009) and the slightly higher emissions intensity rates (the amount of CO2 emitted per unit of economic activity) used in the EPA's analysis of the ACES legislation. By Jesse Jenkins, Ted Nordhaus and Michael Shellenberger, originally published at WattHead and at the Breakthrough Institute. Jesse Jenkins is the director of energy and climate policy at the Breakthrough Institute and the founder and chief editor of WattHead - Energy News and Commentary.
http://www.scitizen.com/climate-change/over-allocation-of-pollution-permits-would-result-in-no-emissions-reduction-requirement-in-early-years-of-climate-program_a-13-3000.html
But it has since been revealed that 1.11 million mental health treatment plans were accessed between January and September this year, only a little more than the 1.08 million used in the same period last year. That is despite a rise in self-harm presentations to emergency departments during the pandemic and huge increases in calls to support lines like Beyond Blue and Lifeline. Former national mental health commissioner and co-director of Sydney University's Brain and Mind Centre Ian Hickie said the numbers indicate the Government is investing in the wrong scheme. "I think the emphasis in the budget on doubling psychology sessions is a mistake," Professor Hickie said. "While meeting the needs of some people it puts at risk a much greater issue — access to care. "That's the real challenge we face — not providing more care to those already in care, but providing rapid, accessible care to those already in considerable danger as a consequence of the economic and social dislocation due to the pandemic." Clinical Associate Professor from the Australian National University Medical School Louise Stone said the increase in Medicare-subsidised sessions was only useful for those who could afford the co-payments, or who lived in areas where psychologists were available. While private practice fees are discretionary, the Australian Psychological Society's recommended consultation fee for a 45- to 60-minute session is $260, while the rebate typically peaks at $128. Associate Professor Stone said the large gap payment meant the system favoured the more privileged members of a community. "We've subsidised the services that at the moment are reaching the more advantaged parts of the population, and we're missing the disadvantaged," she said. "We know there's plenty of evidence to say the lowest fifth of patients in terms of income have much higher mental health problems and much lower access to services."

People aren't accessing treatment because the system is under stress

Pre-eminent mental health advocate and executive director of youth mental health organisation Orygen Pat McGorry welcomed the decision to expand the number of sessions to 20, saying the care and effectiveness of treatment was compromised when it was limited to just 10. But he said the fact that the same number of people had been accessing treatment plans during the pandemic suggested the system was under stress. "There's a certain number of practitioners and the system doesn't have the capacity to accommodate surges," he said. "It can't get through the eye of the needle into this system that's not big enough to cope with the need." Dr McGorry said the shortage of mental health workers like psychologists was exacerbated by the expansion of subsidised sessions, as practitioners would be tied up with one patient for longer. "You're solving one problem, but you're creating another," he said. "The solution is to expand the workforce, that's what we really need." Deputy Chief Medical Officer Michael Kidd said the Government had been meeting with allied health providers on a weekly basis. 
"[There is] strong advocacy for the need to increase the number of sessions in order to meet the mental health needs of many, many people who are seeking support from psychologists and from other therapists," Dr Kidd said. "At the moment, the demand appears to be met according to our colleagues." Money for the 'missing middle' Both Dr McGorry and Professor Hickie said they would like more money targeted at what's often dubbed the "missing middle" — those often too unwell to merely be treated by a primary health practitioner like a GP, but not deemed sick enough to be admitted in to a mental health unit. Professor Hickie said those people typically languished on waiting lists for services, before presenting to emergency departments. "We're seeing that in young people in particular, what you have is greater pressure in emergency departments and also the risk that people engage in dangerous behaviours while they're waiting for care," he said. Last month, the Government opened 15 dedicated mental health clinics, or "hubs", across Victoria, giving people access to multidisciplinary teams comprised of allied health workers like psychologists, mental health nurses, social workers and occupational therapists. Professor Ian Hickie says a newly opened model of care introduced in Victoria could be a "game changer".(ABC News: David Collins) Professor Hickie said that form of care could be a game changer. "If those hubs effectively link people to real services, that is they don't just refer people on, but they provide new clinical services — that may well be a major innovation," he said. However, it also found just 10 per cent of people would benefit from the increase in the session cap, with the average person using just 4.6 of their 10 sessions. Health Minister Greg Hunt defended the decision to increase the number of Medicare-subsidised sessions, saying it was one element of support being provided across the sector. "Overwhelmingly the AMA, the College of General Practitioners, the psychiatric and psychological communities, have welcomed it," he said. "The fact that there's an extra 430,000 MBS [Medicare Benefits Schedule] subsidised services, an extra million mental health services or support calls, indicates that I think what we're doing is important."
About this book This book is written as a companion book to the Statistical Inference Coursera class as part of the Data Science Specialization. However, if you do not take the class, the book mostly stands on its own. A useful component of the book is a series of YouTube videos that comprise the Coursera class. The book is intended to be a low cost introduction to the important field of statistical inference. The intended audience is students who are numerically and computationally literate and who would like to put those skills to use in Data Science or Statistics. The book is offered for free as a series of markdown documents on GitHub and in more convenient forms (epub, mobi) on LeanPub and retail outlets. This book is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, which requires author attribution for derivative works, non-commercial use of derivative works, and that changes are shared in the same way as the original work. About the picture on the cover The picture on the cover is a public domain image taken from Wikipedia's article on Francis Galton's quincunx. Francis Galton was a 19th century polymath who invented many of the key concepts of statistics. The quincunx was an ingenious invention for illustrating the central limit theorem using a pinball setup. 1. Introduction Before beginning This book is designed as a companion to the Statistical Inference Coursera class as part of the Data Science Specialization, a ten course program offered by three faculty, Jeff Leek, Roger Peng and Brian Caffo, at the Johns Hopkins University Department of Biostatistics. The videos associated with this book can be watched in full here, though the relevant links to specific videos are placed at the appropriate locations throughout. Before beginning, we assume that you have a working knowledge of the R programming language. If not, there is a wonderful Coursera class by Roger Peng that can be found here. The entirety of the book is on GitHub here. Please submit pull requests if you find errata! In addition, the course notes can also be found on GitHub here. While most code is in the book, all of the code for every figure and analysis in the book is in the R markdown files (.Rmd) for the respective lectures. Finally, we should mention swirl (statistics with interactive R programming). swirl is an intelligent tutoring system developed by Nick Carchedi, with contributions by Sean Kross and Bill and Gina Croft. It offers a way to learn R in R. Download swirl here. There's a swirl module for this course! Try it out; it's probably the most effective way to learn. Statistical inference defined Watch this video before beginning. We'll define statistical inference as the process of generating conclusions about a population from a noisy sample. Without statistical inference we're simply living within our data. With statistical inference, we're trying to generate new knowledge. Knowledge and parsimony (using the simplest reasonable models to explain complex phenomena) go hand in hand. Probability models will serve as our parsimonious description of the world. The use of probability models as the connection between our data and a population represents the most effective way to obtain inference. Motivating example: who's going to win the election? In every major election, pollsters would like to know, ahead of the actual election, who's going to win. 
Here, the target of estimation (the estimand) is clear, the percentage of people in a particular group (city, state, county, country or other electoral grouping) who will vote for each candidate. We can not poll everyone. Even if we could, some polled may change their vote by the time the election occurs. How do we collect a reasonable subset of data and quantify the uncertainty in the process to produce a good guess at who will win? Motivating example, predicting the weather When a weatherman tells you the probability that it will rain tomorrow is 70%, they’re trying to use historical data to predict tomorrow’s weather - and to actually attach a probability to it. That probability refers to population. Motivating example, brain activation An example that’s very close to the research I do is trying to predict what areas of the brain activate when a person is put in the fMRI scanner. In that case, people are doing a task while in the scanner. For example, they might be tapping their finger. We’d like to compare when they are tapping their finger to when they are not tapping their finger and try to figure out what areas of the brain are associated with the finger tapping. Summary notes These examples illustrate many of the difficulties of trying to use data to create general conclusions about a population. Paramount among our concerns are: Is the sample representative of the population that we’d like to draw inferences about? Are there known and observed, known and unobserved or unknown and unobserved variables that contaminate our conclusions? Is there systematic bias created by missing data or the design or conduct of the study? What randomness exists in the data and how do we use or adjust for it? Here randomness can either be explicit via randomization or random sampling, or implicit as the aggregation of many complex unknown processes. Are we trying to estimate an underlying mechanistic model of phenomena under study? Statistical inference requires navigating the set of assumptions and tools and subsequently thinking about how to draw conclusions from data. The goals of inference You should recognize the goals of inference. Here we list five examples of inferential goals. Estimate and quantify the uncertainty of an estimate of a population quantity (the proportion of people who will vote for a candidate). Determine whether a population quantity is a benchmark value (“is the treatment effective?”). Infer a mechanistic relationship when quantities are measured with noise (“What is the slope for Hooke’s law?”) Determine the impact of a policy? (“If we reduce pollution levels, will asthma rates decline?”) Talk about the probability that something occurs. Several tools are key to the use of statistical inference. We’ll only be able to cover a few in this class, but you should recognize them anyway. Randomization: concerned with balancing unobserved variables that may confound inferences of interest. Random sampling: concerned with obtaining data that is representative of the population of interest. Sampling models: concerned with creating a model for the sampling process, the most common is so called “iid”. Hypothesis testing: concerned with decision making in the presence of uncertainty. Confidence intervals: concerned with quantifying uncertainty in estimation. Probability models: a formal connection between the data and a population of interest. Often probability models are assumed or are approximated. 
Study design: the process of designing an experiment to minimize biases and variability. Nonparametric bootstrapping: the process of using the data to, with minimal probability model assumptions, create inferences. Permutation, randomization and exchangeability testing: the process of using data permutations to perform inferences. Different thinking about probability leads to different styles of inference We won’t spend too much time talking about this, but there are several different styles of inference. Two broad categories that get discussed a lot are: Frequency probability: is the long run proportion of times an event occurs in independent, identically distributed repetitions. Frequency style inference: uses frequency interpretations of probabilities to control error rates. Answers questions like “What should I decide given my data controlling the long run proportion of mistakes I make at a tolerable level.” Bayesian probability: is the probability calculus of beliefs, given that beliefs follow certain rules. Bayesian style inference: the use of Bayesian probability representation of beliefs to perform inference. Answers questions like “Given my subjective beliefs and the objective information from the data, what should I believe now?” Data scientists tend to fall within shades of gray of these and various other schools of inference. Furthermore, there are so many shades of gray between the styles of inferences that it is hard to pin down most modern statisticians as either Bayesian or frequentist. In this class, we will primarily focus on basic sampling models, basic probability models and frequency style analyses to create standard inferences. This is the most popular style of inference by far. Being data scientists, we will also consider some inferential strategies that rely heavily on the observed data, such as permutation testing and bootstrapping. As probability modeling will be our starting point, we first build up basic probability as our first task. Exercises The goal of statistical inference is to? Infer facts about a population from a sample. Infer facts about the sample from a population. Calculate sample quantities to understand your data. To torture Data Science students. The goal of randomization of a treatment in a randomized trial is to? It doesn’t really do anything. To obtain a representative sample of subjects from the population of interest. Balance unobserved covariates that may contaminate the comparison between the treated and control groups. To add variation to our conclusions. Probability is a? Population quantity that we can potentially estimate from data. A data quantity that does not require the idea of a population. 2. Probability Watch this video before beginning. Probability forms the foundation for almost all treatments of statistical inference. In our treatment, probability is a law that assigns numbers to the long run occurrence of random phenomena after repeated unrelated realizations. Before we begin discussing probability, let’s dispense with some deep philosophical questions, such as “What is randomness?” and “What is the fundamental interpretation of probability?”. One could spend a lifetime studying these questions (and some have). For our purposes, randomness is any process occurring without apparent deterministic patterns. Thus we will treat many things as if they were random when, in fact they are completely deterministic. 
In my field, biostatistics, we often model disease outcomes as if they were random when they are the result of many mechanistic components whose aggregate behavior appears random. Probability for us will be the long run proportion of times some event occurs in repeated unrelated realizations. So, think of the proportion of times that you get a head when flipping a coin. For the interested student, I would recommend the books and work by Ian Hacking to learn more about these deep philosophical issues. For us data scientists, the above definitions will work fine. Where to get a more thorough treatment of probability In this lecture, we will cover the fundamentals of probability at a low enough level to provide a basic understanding for the rest of the series. For a more complete treatment see the class Mathematical Biostatistics Boot Camp 1, which can be viewed on YouTube here. In addition, there's the actual Coursera course that I run periodically (this is the first Coursera class that I ever taught). Also there are a set of notes on GitHub. Finally, there's a follow up class, uninspiringly named Mathematical Biostatistics Boot Camp 2, that is more devoted to biostatistical topics and has an associated YouTube playlist, Coursera class and GitHub notes. Kolmogorov's Three Rules Watch this lecture before beginning. Given a random experiment (say rolling a die), a probability measure is a population quantity that summarizes the randomness. The brilliant discovery of the father of probability, the Russian mathematician Kolmogorov, was that to satisfy our intuition about how probability should behave, only three rules were needed. Consider an experiment with a random outcome. Probability takes a possible outcome from an experiment and: assigns it a number between 0 and 1; requires that the probability that something occurs is 1; requires that the probability of the union of any two sets of outcomes that have nothing in common (mutually exclusive) is the sum of their respective probabilities. From these simple rules all of the familiar rules of probability can be developed. This all might seem a little odd at first and so we'll build up our intuition with some simple examples based on coin flipping and die rolling. I would like to reiterate the important definition that we wrote out: mutually exclusive. Two events are mutually exclusive if they cannot both simultaneously occur. For example, we cannot simultaneously get a 1 and a 2 on a die. Rule 3 says that since the events of getting a 1 and a 2 on a die are mutually exclusive, the probability of getting at least one (the union) is the sum of their probabilities. So if we know that the probability of getting a 1 is 1/6 and the probability of getting a 2 is 1/6, then the probability of getting a 1 or a 2 is 2/6, the sum of the two probabilities, since they are mutually exclusive. Consequences of The Three Rules Let's cover some consequences of our three simple rules. Take, for example, the fact that the probability that something occurs is 1 minus the probability of the opposite occurring. Let $A$ be the event that we get a 1 or a 2 on a rolled die. Then $A^c$ is the opposite, getting a 3, 4, 5 or 6. Since $A$ and $A^c$ cannot both simultaneously occur, they are mutually exclusive. So the probability that either $A$ or $A^c$ occurs is $P(A) + P(A^c)$. Notice that the probability that either occurs is the probability of getting a 1, 2, 3, 4, 5 or 6, or in other words, the probability that something occurs, which is 1 by rule number 2. So we have that $1 = P(A) + P(A^c)$, or that $P(A^c) = 1 - P(A)$. 
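The complement rule is easy to check empirically. The short R sketch below (my own, not part of the original text) simulates a large number of fair die rolls and compares the observed proportion of rolls landing in $A = \{1, 2\}$ with one minus the proportion landing in the complement; with enough rolls both should be close to 1/3.

Checking the complement rule by simulation
set.seed(1)                                    # for reproducibility
rolls <- sample(1:6, 10000, replace = TRUE)    # simulate 10,000 fair die rolls
pA  <- mean(rolls %in% c(1, 2))                # estimated P(A), with A = {1, 2}
pAc <- mean(rolls %in% 3:6)                    # estimated P(A^c)
pA + pAc                                       # exactly 1: every roll is in A or in A^c
c(pA, 1 - pAc)                                 # both estimate P(A) = 1/3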
We won't go through the tedious exercise of deriving the rest (since Kolmogorov already did it for us). Instead here's a list of some of the consequences of Kolmogorov's rules that are often useful. The probability that nothing occurs is 0. The probability that something occurs is 1. The probability of something is 1 minus the probability that the opposite occurs. The probability of at least one of two (or more) things that can not simultaneously occur (mutually exclusive) is the sum of their respective probabilities. For any two events, the probability that at least one occurs is the sum of their probabilities minus the probability of their intersection. This last rule shows what the issue is with adding probabilities that are not mutually exclusive. If we do this, we've added the probability that both occur in twice! (Watch the video where I draw a Venn diagram to illustrate this). Example of Implementing Probability Calculus The National Sleep Foundation (www.sleepfoundation.org) reports that around 3% of the American population has sleep apnea. They also report that around 10% of the North American and European population has restless leg syndrome. Does this imply that 13% of people will have at least one sleep problem of these sorts? In other words, can we simply add these two probabilities? Answer: No, the events can simultaneously occur and so are not mutually exclusive. To elaborate, let $A_1$ be the event that a person has sleep apnea and $A_2$ be the event that a person has restless leg syndrome. Then $P(A_1 \cup A_2) = P(A_1) + P(A_2) - P(A_1 \cap A_2) = 0.13 - P(A_1 \cap A_2)$. Given the scenario, it's likely that some fraction of the population has both. This example serves as a reminder: don't add probabilities unless the events are mutually exclusive. We'll have a similar rule for multiplying probabilities and independence. Random variables Watch this video before reading this section. Probability calculus is useful for understanding the rules that probabilities must follow. However, we need ways to model and think about probabilities for numeric outcomes of experiments (broadly defined). Densities and mass functions for random variables are the best starting point for this. You've already heard of a density since you've heard of the famous "bell curve", or Gaussian density. In this section you'll learn exactly what the bell curve is and how to work with it. Remember, everything we're talking about up to this point is a population quantity, not a statement about what occurs in our data. Think about the fact that a 50% probability for heads is a statement about the coin and how we're flipping it, not a statement about the percentage of heads we obtained in a particular set of flips. This is an important distinction that we will emphasize over and over in this course. Statistical inference is about describing populations using data. Probability density functions are a way to mathematically characterize the population. In this course, we'll assume that our sample is a random draw from the population. So our definition is that a random variable is a numerical outcome of an experiment. The random variables that we study will come in two varieties, discrete or continuous. Discrete random variables are random variables that take on only a countable number of possibilities. Mass functions will assign probabilities that they take specific values. Continuous random variables can conceptually take any value on the real line, or some subset of the real line, and we talk about the probability that they lie within some range. Densities will characterize these probabilities. Let's consider some examples of measurements that could be considered random variables. 
First, familiar gambling experiments like the tossing of a coin and the rolling of a die produce random variables. For the coin, we typically code a tail as a 0 and a head as a 1. (For the die, the number facing up would be the random variable.) We will use these examples a lot to help us build intuition. However, they aren't inherently interesting, in the sense that they can seem rather contrived. Nonetheless, the coin example is particularly useful since many of the experiments we consider will be modeled as if tossing a biased coin. Modeling any binary characteristic from a random sample of a population can be thought of as a coin toss, with the random sampling playing the role of the toss and the population percentage of individuals with the characteristic playing the role of the probability of a head. Consider, for example, logging whether or not subjects were hypertensive in a random sample. Each subject's outcome can be modeled as a coin toss. In a similar sense the die roll serves as our model for phenomena with more than one level, such as hair color or rating scales. Consider also the random variable of the number of web hits for a site each day. This variable is a count, but is largely unbounded (or at least we couldn't put a specific reasonable upper limit on it). Random variables like this are often modeled with the so called Poisson distribution. Finally, consider some continuous random variables. Think of things like lengths or weights. It is mathematically convenient to model these as if they were continuous (even if measurements were truncated liberally). In fact, even discrete random variables with lots of levels are often treated as continuous for convenience. For all of these kinds of random variables, we need convenient mathematical functions to model the probabilities of collections of realizations. These functions, called mass functions and densities, take possible values of the random variables and assign the associated probabilities. These entities describe the population of interest. So, consider the most famous density, the normal distribution. Saying that body mass indices follow a normal distribution is a statement about the population of interest. The goal is to use our data to figure out things about that normal distribution: where it's centered, how spread out it is and even whether our assumption of normality is warranted! Probability mass functions A probability mass function evaluated at a value corresponds to the probability that a random variable takes that value. To be a valid pmf, a function $p$ must satisfy: It must always be larger than or equal to 0. Its values, summed over the possible values of the random variable, must add up to one. Example Let $X$ be the result of a coin flip where $X = 0$ represents tails and $X = 1$ represents heads. Then $p(x) = (1/2)^x (1/2)^{1-x}$ for $x = 0, 1$. Suppose that we do not know whether or not the coin is fair; let $\theta$ be the probability of a head expressed as a proportion (between 0 and 1). Then $p(x) = \theta^x (1 - \theta)^{1-x}$ for $x = 0, 1$. Probability density functions Watch this video before beginning. A probability density function (pdf) is a function associated with a continuous random variable. Because of the peculiarities of treating measurements as having been recorded to infinite decimal expansions, we need a different set of rules. 
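Before moving on to densities, here is a quick sketch (mine, not the book's) verifying that the biased-coin mass function above really is a valid pmf for an arbitrary $\theta$: its values are nonnegative and they sum to one over the two possible outcomes. With that check done, we return to densities.

Checking that the biased-coin pmf is valid
theta <- 0.3                                        # any probability of a head would do here
pmf <- function(x) theta^x * (1 - theta)^(1 - x)    # p(x) = theta^x (1 - theta)^(1 - x)
pmf(0:1)                                            # both values are nonnegative
sum(pmf(0:1))                                       # sums to 1, so this is a valid pmf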
This need leads us to the central dogma of probability density functions: areas under pdfs correspond to probabilities for that random variable. Therefore, when one says that intelligence quotients (IQ) in a population follow a bell curve, they are saying that the probability of a randomly selected person from this population having an IQ between two values is given by the area under the bell curve. Not every function can be a valid probability density function. For example, if the function dips below zero, then we could have negative probabilities. If the function contains too much area underneath it, we could have probabilities larger than one. The following two rules tell us when a function is a valid probability density function. Specifically, to be a valid pdf, a function must satisfy: It must be larger than or equal to zero everywhere. The total area under it must be one. Example Suppose that the proportion of help calls that get addressed in a random day by a help line is given by the density $f(x) = 2x$ for $0 < x < 1$. The R code for plotting this density is

Code for plotting the density
x <- c(-0.5, 0, 1, 1, 1.5)
y <- c(0, 0, 2, 0, 0)
plot(x, y, lwd = 3, frame = FALSE, type = "l")

The result of the code is given below. Help call density Is this a mathematically valid density? To answer this we need to make sure it satisfies our two conditions. First, it's clearly nonnegative (it's at or above the horizontal axis everywhere). The area is similarly easy. Being a right triangle in the only section of the density that is above zero, we can calculate the area as 1/2 the length of the base times the height. This is $\frac{1}{2} \times 1 \times 2 = 1$. Now consider answering the following question. What is the probability that 75% or fewer of calls get addressed? Remember, for continuous random variables, probabilities are represented by areas underneath the density function. So, we want the area from 0.75 and below, as illustrated by the figure below. Help call density This again is a right triangle, with length of the base 0.75 and height 1.5. The R code below shows the calculation.

> 1.5 * 0.75 / 2
[1] 0.5625

Thus, the probability of 75% or fewer calls getting addressed in a random day for this help line is about 56%. We'll do this a lot throughout this class and work with more useful densities. It should be noted that this specific density is a special case of the so called beta density. Below I show how to use R's built in evaluation function for the beta density to get the probability.

> pbeta(0.75, 2, 1)
[1] 0.5625

Notice the syntax pbeta. In R, a prefix of p returns probabilities, d returns the density, q returns the quantile and r returns generated random variables. (You'll learn what each of these does in subsequent sections.) CDF and survival function Certain areas of pdfs and pmfs are so useful, we give them names. The cumulative distribution function (CDF) of a random variable, $X$, returns the probability that the random variable is less than or equal to the value $x$. Notice the (slightly annoying) convention that we use an upper case $X$ to denote a random, unrealized, version of the random variable and a lowercase $x$ to denote a specific number that we plug in. (This notation, as odd as it may seem, dates back to Fisher and isn't going anywhere, so you might as well get used to it. Uppercase for unrealized random variables and lowercase as placeholders for numbers to plug in.) So we could write the following to describe the distribution function $F$: $F(x) = P(X \leq x)$. This definition applies regardless of whether the random variable is discrete or continuous. 
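As an aside (my own sketch, not part of the original text), the "areas are probabilities" idea can be checked numerically: integrating the help-call density $f(x) = 2x$ from 0 up to a point reproduces the value that pbeta gives for the CDF at that point.

Checking the area interpretation numerically
f <- function(x) 2 * x                          # the help-call density on (0, 1)
F_num <- function(x) integrate(f, 0, x)$value   # area under the density from 0 to x
F_num(0.75)                                     # about 0.5625, matching 1.5 * 0.75 / 2
pbeta(0.75, 2, 1)                               # the same probability from the beta(2, 1) CDF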
The survival function of a random variable $X$ is defined as the probability that the random variable is greater than the value $x$: $S(x) = P(X > x)$. Notice that $S(x) = 1 - F(x)$, since the survival function evaluated at a particular value of $x$ is calculating the probability of the opposite event (greater than as opposed to less than or equal to). The survival function is often preferred in biostatistical applications while the distribution function is more generally used (though both convey the same information). Example What are the survival function and CDF from the density considered before? The CDF is $F(x) = P(X \leq x) = \frac{1}{2} \times x \times 2x = x^2$ for $0 < x < 1$. Notice that calculating the survival function is now trivial given that we've already calculated the distribution function: $S(x) = 1 - x^2$. Again, R has a function that calculates the distribution function for us in this case, pbeta. Let's try calculating $F(0.4)$, $F(0.5)$ and $F(0.6)$.

> pbeta(c(0.4, 0.5, 0.6), 2, 1)
[1] 0.16 0.25 0.36

Notice, of course, these are simply the numbers squared. By default the prefix p in front of a density in R gives the distribution function (pbeta, pnorm, pgamma). If you want the survival function values, you could always subtract them from one, or give the argument lower.tail = FALSE to the function, which asks R to calculate the upper area instead of the lower. Quantiles You've heard of sample quantiles. If you were the 95th percentile on an exam, you know that 95% of people scored worse than you and 5% scored better. These are sample quantiles. But you might have wondered, what are my sample quantiles estimating? In fact, they are estimating the population quantiles. Here we define these population analogs. The $\alpha$-th quantile of a distribution with distribution function $F$ is the point $x_\alpha$ so that $F(x_\alpha) = \alpha$. So the 0.95 quantile of a distribution is the point so that 95% of the mass of the density lies below it. Or, in other words, the point so that the probability of getting a randomly sampled point below it is 0.95. This is analogous to the sample quantiles, where the 0.95 sample quantile is the value so that 95% of the data lies below it. A percentile is simply a quantile with $\alpha$ expressed as a percent rather than a proportion. The (population) median is the 50th percentile. Remember that percentiles are not probabilities! Remember that quantiles have units. So the population median height is the height (in inches say) so that the probability that a randomly selected person from the population is shorter is 50%. The sample, or empirical, median would be the height in a sample so that 50% of the people in the sample were shorter. Example What is the median of the distribution that we were working with before? We want to solve $0.5 = F(x) = x^2$, resulting in the solution $x = \sqrt{0.5}$.

> sqrt(0.5)
[1] 0.7071

Therefore, about 0.7071 of calls being answered on a random day is the median. Or, the probability that roughly 70% or fewer calls get answered is 50%. R can approximate quantiles for you for common distributions with the prefix q in front of the distribution name.

> qbeta(0.5, 2, 1)
[1] 0.7071

Exercises 3. Conditional probability Conditional probability, motivation Watch this video before beginning. Conditioning is a central subject in statistics. If we are given information about a random variable, it changes the probabilities associated with it. For example, the probability of getting a one when rolling a (standard) die is usually assumed to be one sixth. If you were given the extra information that the die roll was an odd number (hence 1, 3 or 5), then conditional on this new information, the probability of a one is now one third. 
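A quick simulation (mine, not the book's) makes that claim concrete: among simulated die rolls that came up odd, about one third are ones, while only about one sixth of all rolls are ones.

Checking the conditional probability by simulation
set.seed(2)
rolls <- sample(1:6, 10000, replace = TRUE)   # simulate fair die rolls
odd <- rolls %% 2 == 1                        # condition on the roll being odd
mean(rolls[odd] == 1)                         # estimates P(one | odd), about 1/3
mean(rolls == 1)                              # unconditional P(one), about 1/6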
This is the idea of conditioning: taking away the randomness that we know to have occurred. Consider another example, such as the result of a diagnostic imaging test for lung cancer. What's the probability that a person has cancer given a positive test? How does that probability change under the knowledge that a patient has been a lifetime heavy smoker and both of their parents had lung cancer? Conditional on this new information, the probability has increased dramatically. Conditional probability, definition We can formalize the definition of conditional probability so that the mathematics matches our intuition. Let $B$ be an event so that $P(B) > 0$. Then the conditional probability of an event $A$ given that $B$ has occurred is: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$. If $A$ and $B$ are unrelated in any way, or in other words independent (discussed more later in the lecture), then $P(A \mid B) = \frac{P(A)P(B)}{P(B)} = P(A)$. That is, if the occurrence of $B$ offers no information about the occurrence of $A$ - the probability conditional on the information is the same as the probability without the information - we say that the two events are independent. Example Consider our die roll example again. Here we have that $B = \{1, 3, 5\}$ and $A = \{1\}$, so that $P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{1/6}{3/6} = \frac{1}{3}$, which exactly mirrors our intuition. Bayes' rule Watch this video before beginning. Bayes' rule is a famous result in statistics and probability. It forms the foundation for large branches of statistical thinking. Bayes' rule allows us to reverse the conditioning set provided that we know some marginal probabilities. Why is this useful? Consider our lung cancer example again. It would be relatively easy for physicians to calculate the probability that the diagnostic method is positive for people with lung cancer and negative for people without. They could take several people who are already known to have the disease and apply the test, and conversely take people known not to have the disease. However, for the collection of people with a positive test result, the reverse probability is more of interest: "given a positive test, what is the probability of having the disease?", and "given a negative test, what is the probability of not having the disease?". Bayes' rule allows us to switch the conditioning event, provided a little bit of extra information. Formally, Bayes' rule is: $P(B \mid A) = \frac{P(A \mid B) P(B)}{P(A \mid B) P(B) + P(A \mid B^c) P(B^c)}$. Diagnostic tests Since diagnostic tests are a really good example of Bayes' rule in practice, let's go over them in greater detail. (In addition, understanding Bayes' rule will be helpful for your own ability to understand medical tests that you see in your daily life.) We require a few definitions first. Let $+$ and $-$ be the events that the result of a diagnostic test is positive or negative respectively. Let $D$ and $D^c$ be the events that the subject of the test has or does not have the disease respectively. The sensitivity is the probability that the test is positive given that the subject actually has the disease, $P(+ \mid D)$. The specificity is the probability that the test is negative given that the subject does not have the disease, $P(- \mid D^c)$. So, conceptually at least, the sensitivity and specificity are straightforward to estimate. Take people known to have and not have the disease and apply the diagnostic test to them. However, the reality of estimating these quantities is quite challenging. For example, are the people known to have the disease in its later stages, while the diagnostic will be used on people in the early stages where it's harder to detect? Let's put these subtleties to the side and assume that they are known well. The quantities that we'd like to know are the predictive values. 
The positive predictive value is the probability that the subject has the disease given that the test is positive, $P(D \mid +)$. The negative predictive value is the probability that the subject does not have the disease given that the test is negative, $P(D^c \mid -)$. Finally, we need one last thing, the prevalence of the disease - which is the marginal probability of disease, $P(D)$. Let's now try to figure out a PPV in a specific setting. Example A study comparing the efficacy of HIV tests reports on an experiment which concluded that HIV antibody tests have a sensitivity of 99.7% and a specificity of 98.5%. Suppose that a subject, from a population with a .1% prevalence of HIV, receives a positive test result. What is the positive predictive value? Mathematically, we want $P(D \mid +)$ given the sensitivity, $P(+ \mid D) = 0.997$, the specificity, $P(- \mid D^c) = 0.985$, and the prevalence $P(D) = 0.001$. By Bayes' rule, $P(D \mid +) = \frac{0.997 \times 0.001}{0.997 \times 0.001 + 0.015 \times 0.999} \approx 0.062$. In this population a positive test result only suggests a 6% probability that the subject has the disease (the positive predictive value is 6% for this test). If you were wondering how it could be so low for this test, the low positive predictive value is due to the low prevalence of disease and the somewhat modest specificity. Suppose it was known that the subject was an intravenous drug user and routinely had intercourse with an HIV infected partner? Our prevalence would change dramatically, thus increasing the PPV. You might wonder if there's a way to summarize the evidence without appealing to an often unknowable prevalence? Diagnostic likelihood ratios provide this for us. Diagnostic Likelihood Ratios The diagnostic likelihood ratios summarize the evidence of disease given a positive or negative test. They are defined as: The diagnostic likelihood ratio of a positive test, labeled $DLR_+$, is $P(+ \mid D) / P(+ \mid D^c)$, which is the sensitivity divided by one minus the specificity. The diagnostic likelihood ratio of a negative test, labeled $DLR_-$, is $P(- \mid D) / P(- \mid D^c)$, which is one minus the sensitivity divided by the specificity. How do we interpret the DLRs? This is easiest when looking at so called odds ratios. Remember that if $p$ is a probability, then $p / (1 - p)$ is the odds. Consider now the odds in our setting. Using Bayes' rule, we have $P(D \mid +) = \frac{P(+ \mid D) P(D)}{P(+)}$ and $P(D^c \mid +) = \frac{P(+ \mid D^c) P(D^c)}{P(+)}$. Therefore, dividing these two equations we have: $\frac{P(D \mid +)}{P(D^c \mid +)} = \frac{P(+ \mid D)}{P(+ \mid D^c)} \times \frac{P(D)}{P(D^c)}$. In other words, the post test odds of disease is the pretest odds of disease times the $DLR_+$. Similarly, $DLR_-$ relates the decrease in the odds of the disease after a negative test result to the odds of disease prior to the test. So, the DLRs are the factors by which you multiply your pretest odds to get your post test odds. Thus, if a test has a $DLR_+$ of 6, regardless of the prevalence of disease, the post test odds is six times that of the pretest odds. HIV example revisited Let's reconsider our HIV antibody test again. Suppose a subject has a positive HIV test. Here $DLR_+ = 0.997 / (1 - 0.985) \approx 66$. The result of the positive test is that the odds of disease is now 66 times the pretest odds. Or, equivalently, the hypothesis of disease is 66 times more supported by the data than the hypothesis of no disease. Suppose instead that a subject has a negative test result. Then $DLR_- = (1 - 0.997) / 0.985 \approx 0.003$. Therefore, the post-test odds of disease is now 0.3% of the pretest odds given the negative test. Or, the hypothesis of disease is supported 0.003 times that of the hypothesis of absence of disease given the negative test result. Independence Watch this video before beginning. Statistical independence of events is the idea that the events are unrelated. Consider successive coin flips. Knowledge of the result of the first coin flip tells us nothing about the second. We can formalize this into a definition. Two events $A$ and $B$ are independent if $P(A \cap B) = P(A) P(B)$, or equivalently if $P(A \mid B) = P(A)$. Note that since $A$ is independent of $B$, we know that $A^c$ is independent of $B$, $A$ is independent of $B^c$, and $A^c$ is independent of $B^c$. 
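Returning to the HIV screening example worked above, here is a short R sketch (my own, under the stated sensitivity, specificity, and prevalence) that computes the positive predictive value directly from Bayes' rule and the positive diagnostic likelihood ratio.

Computing the PPV and DLR+ for the HIV example
sens <- 0.997; spec <- 0.985; prev <- 0.001                    # stated test characteristics and prevalence
ppv <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))   # Bayes' rule for P(D | +)
ppv                                                            # about 0.062, i.e. roughly a 6% chance of disease
dlr_pos <- sens / (1 - spec)                                   # DLR+ = sensitivity / (1 - specificity)
dlr_pos                                                        # about 66: post-test odds are 66 times the pre-test odds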
While the definition of independence above works for events (sets), remember that random variables are really the things that we are interested in. Two random variables, $X$ and $Y$, are independent if for any two sets $A$ and $B$, $P(X \in A, Y \in B) = P(X \in A) P(Y \in B)$. We will almost never work with these definitions directly. Instead, the important principle is that probabilities of independent things multiply! This has numerous consequences, including the idea that we shouldn't multiply non-independent probabilities. Example Let's cover a very simple example: "What is the probability of getting two consecutive heads?" Here $A$ is the event of getting a head on flip 1, $B$ is the event of getting a head on flip 2, and $A \cap B$ is the event of getting heads on flips 1 and 2. Then independence would tell us that $P(A \cap B) = P(A) P(B) = 0.5 \times 0.5 = 0.25$. This is exactly what we would have intuited, of course. But it's nice that the mathematics mirrors our intuition. In more complex settings, it's easy to get tripped up. Consider the following famous (among statisticians at least) case study. Case Study Volume 309 of Science reports on a physician who was on trial for expert testimony in a criminal trial. Based on an estimated prevalence of sudden infant death syndrome (SIDS) of 1 out of 8,543, the physician testified that the probability of a mother having two children with SIDS was $(1/8{,}543)^2$. The mother on trial was convicted of murder. Relevant to this discussion, the principal mistake was to assume that the events of having SIDS within a family are independent. That is, $P(\text{SIDS}_1 \cap \text{SIDS}_2)$ is not necessarily equal to $P(\text{SIDS}_1) P(\text{SIDS}_2)$. This is because biological processes that have a believed genetic or familial environmental component, of course, tend to be dependent within families. Thus, we can't just multiply the probabilities to obtain the result. There are many other interesting aspects to the case. For example, there is the idea of a low probability of an event representing evidence against a defendant. (Could we convict all lottery winners of fixing the lottery since the chance that they would win is so small?) IID random variables Now that we've introduced random variables and independence, we can introduce a central modeling assumption made in statistics. Specifically, the idea of a random sample. Random variables are said to be independent and identically distributed (iid) if they are independent and all are drawn from the same population. The reason iid samples are so important is that they are a model for random samples. This is a default starting point for most statistical inferences. The idea of having a random sample is powerful for a variety of reasons. Consider that in some study designs, such as in election polling, great pains are taken to make sure that the sample is randomly drawn from a population of interest. The idea is to expend a lot of effort on design to get robust inferences. In these settings assuming that the data is iid is both natural and warranted. In other settings, the study design is far more opaque, and statistical inferences are conducted under the assumption that the data arose from a random sample, since it serves as a useful benchmark. Most studies in the fields of epidemiology and economics fall under this category. Take, for example, studying how policies impact countries' gross domestic product by looking at countries before and after enacting the policies. The countries are not a random sample from the set of countries. Instead, conclusions must be made under the assumption that the countries are a random sample, and the interpretation of the strength of the inferences adapted in kind. 
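Since the "probabilities of independent things multiply" principle is so central, a small simulation (mine, not the book's) is a useful check: for two independent fair coin flips, the proportion of simulated pairs with two heads is close to 0.5 x 0.5 = 0.25, and it also matches the product of the marginal proportions.

Checking independence by simulation
set.seed(3)
flip1 <- rbinom(10000, 1, 0.5)       # first flip: 1 = head, 0 = tail
flip2 <- rbinom(10000, 1, 0.5)       # second flip, generated independently of the first
mean(flip1 == 1 & flip2 == 1)        # estimates P(two heads), about 0.25
mean(flip1 == 1) * mean(flip2 == 1)  # product of the marginal estimates, also about 0.25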
Exercises I pull a card from a deck and do not show you the result. I say that the resulting card is a heart. What is the probability that it is the queen of hearts? The odds associated with a probability, $p$, are defined as? The probability of getting two sixes when rolling a pair of dice is? The probability that a manuscript gets accepted to a journal is 12% (say). However, given that a revision is asked for, the probability that it gets accepted is 90%. Is it possible that the probability that a manuscript has a revision asked for is 20%? Watch a video of this problem getting solved and see the worked out solutions here. Suppose 5% of housing projects have issues with asbestos. The sensitivity of a test for asbestos is 93% and the specificity is 88%. What is the probability that a housing project has no asbestos given a negative test, expressed as a percentage to the nearest percentage point? Watch a video solution here and see the worked out problem here. 4. Expected values Watch this video before beginning. Expected values characterize a distribution. The most useful expected value, the mean, characterizes the center of a density or mass function. Another expected value summary, the variance, characterizes how spread out a density is. Yet another expected value calculation is the skewness, which considers how much a density is pulled toward high or low values. Remember, in this lecture we are discussing population quantities. It is convenient (and of course by design) that the identically named sample analogs estimate the associated population quantities. So, for example, the sample or empirical mean estimates the population mean; the sample variance estimates the population variance and the sample skewness estimates the population skewness. The population mean for discrete random variables The expected value or (population) mean of a random variable is the center of its distribution. For a discrete random variable $X$ with PMF $p(x)$, it is defined as follows: $E[X] = \sum_x x \, p(x)$, where the sum is taken over the possible values of $x$. Where did they get this idea from? It's taken from the physical idea of the center of mass. Specifically, $E[X] = \sum_x x \, p(x)$ represents the center of mass of a collection of locations and weights, $\{x, p(x)\}$. We can exploit this fact to quickly calculate population means for distributions where the center of mass is obvious. The sample mean It is important to contrast the population mean (the estimand) with the sample mean (the estimator). The sample mean estimates the population mean. Not coincidentally, since the population mean is the center of mass of the population distribution, the sample mean is the center of mass of the data. In fact, it's exactly the same equation: $\bar X = \sum_{i=1}^n x_i p(x_i)$, where $p(x_i) = 1/n$. Example Find the center of mass of the bars. Let's go through an example illustrating how the sample mean is the center of mass of observed data. Below we plot Galton's fathers and sons data:

Loading in and displaying the Galton data
library(UsingR); data(galton); library(ggplot2); library(reshape2)
longGalton <- melt(galton, measure.vars = c("child", "parent"))
g <- ggplot(longGalton, aes(x = value)) + geom_histogram(aes(y = ..density.., fill = variable), binwidth = 1, color = "black") + geom_density(size = 2)
g <- g + facet_grid(. ~ variable)
g

Galton's Data Using RStudio's manipulate package, you can try moving the histogram around and see what value balances it out. Be sure to watch the video to see this in action. 
Using manipulate to explore the mean
library(manipulate)
myHist <- function(mu){
  g <- ggplot(galton, aes(x = child))
  g <- g + geom_histogram(fill = "salmon", binwidth = 1, aes(y = ..density..), color = "black")
  g <- g + geom_density(size = 2)
  g <- g + geom_vline(xintercept = mu, size = 2)
  mse <- round(mean((galton$child - mu)^2), 3)
  g <- g + labs(title = paste('mu = ', mu, ' MSE = ', mse))
  g
}
manipulate(myHist(mu), mu = slider(62, 74, step = 0.5))

Going through this exercise, you find that the point that balances out the histogram is the empirical mean. (Note there's a small distinction here that comes about from rounding with the histogram bar widths, but ignore that for the time being.) If the bars of the histogram are from the observed data, the point that balances it out is the empirical mean; if the bars are the true population probabilities (which we don't know of course) then the point is the population mean. Let's now go through some examples of mathematically calculating the population mean. The center of mass is the empirical mean Histogram illustration Example of a population mean, a fair coin Watch the video before beginning here. Suppose a coin is flipped and $X$ is declared 0 or 1 corresponding to a head or a tail, respectively. What is the expected value of $X$? $E[X] = 0.5 \times 0 + 0.5 \times 1 = 0.5$. Note, if thought about geometrically, this answer is obvious; if two equal weights are spaced at 0 and 1, the center of mass will be 0.5. Fair coin mass function What about a biased coin? Suppose that a random variable, $X$, is so that $P(X = 1) = p$ and $P(X = 0) = 1 - p$. (This is a biased coin when $p \neq 0.5$.) What is its expected value? $E[X] = 0 \times (1 - p) + 1 \times p = p$. Notice that the expected value isn't a value that the coin can take, in the same way that the sample proportion of heads will also likely be neither 0 nor 1. This coin example is not exactly trivial, as it serves as the basis for a random sample of any population for a binary trait. So, we might model the answer from an election polling question as if it were a coin flip. Example Die Roll Suppose that a die is rolled and $X$ is the number face up. What is the expected value of $X$? $E[X] = 1 \times \frac{1}{6} + 2 \times \frac{1}{6} + \dots + 6 \times \frac{1}{6} = 3.5$. Again, the geometric argument makes this answer obvious without calculation. Bar graph of die probabilities Continuous random variables Watch this video before beginning. For a continuous random variable, $X$, with density, $f$, the expected value is again exactly the center of mass of the density. Think of it like cutting the continuous density out of a thick piece of wood and trying to find the point where it balances out. Example Consider a density where $f(x) = 1$ for $x$ between zero and one. Suppose that $X$ follows this density; what is its expected value? Uniform Density The answer is clear: since the density looks like a box, it would balance out exactly in the middle, 0.5. Facts about expected values Recall that expected values are properties of population distributions. The expected value, or mean, height is the center of the population density of heights. Of course, the average of ten randomly sampled people's heights is itself a random variable, in the same way that the average of ten die rolls is itself a random number. Thus, the distribution of heights gives rise to the distribution of averages of ten heights, in the same way that the distribution associated with a die roll gives rise to the distribution of the average of ten dice. An important question to ask is: "What does the distribution of averages look like?". 
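Before turning to that question, here is a quick numerical check (my own sketch, not from the text) of the expected values computed above: the center of mass of a fair die and the mean of the uniform density on the unit interval.

Numerically checking the die and uniform means
x <- 1:6
p <- rep(1/6, 6)                            # fair die: each face has probability 1/6
sum(x * p)                                  # E[X] = 3.5, the center of mass of the faces
integrate(function(x) x * 1, 0, 1)$value    # mean of the uniform density f(x) = 1 on (0, 1); equals 0.5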
This question is important, since it tells us things about averages, the best way to estimate the population mean, when we only get to observe one average. Consider the die rolls again. If wanted to know the distribution of averages of 100 die rolls, you could (at least in principle) roll 100 dice, take the average and repeat that process. Imagine, if you could only roll the 100 dice once. Then we would have direct information about the distribution of die rolls (since we have 100 of them), but we wouldn’t have any direct information about the distribution of the average of 100 die rolls, since we only observed one average. Fortunately, the mathematics tells us about that distribution. Notably, it’s centered at the same spot as the original distribution! Thus, the distribution of the estimator (the sample mean) is centered at the distribution of what it’s estimating (the population mean). When the expected value of an estimator is what its trying to estimate, we say that the estimator is unbiased. Let’s go through several simulation experiments to see this more fully. Simulation experiments Standard normals Consider simulating a lot of standard normals and plotting a histogram (the blue density). Now consider simulating lots of averages of 10 standard normals and plotting their histogram (the salmon colored density). Notice that they’re centered in the same spot! It’s also more concentrated around that point. (We’ll discuss that more in the next lectures). Simulation of normals Averages of x die rolls Consider rolling a die a lot of times and taking a histogram of the results, that’s the left most plot. The bars are equally distributed at the six possible outcomes and thus the histogram is centered around 3.5. Now consider simulating lots of averages of 2 dice. Its histogram is also centered at 3.5. So is it for 3 and 4. Notice also the distribution gets increasing Gaussian looking (like a bell curve) and increasingly concentrated around 3.5. Simulation of die rolls Averages of x coin flips For the coin flip simulation exactly the same occurs. All of the distributions are centered around 0.5. Simulation of coin flips Summary notes Expected values are properties of distributions. The population mean is the center of mass of population. The sample mean is the center of mass of the observed data. The sample mean is an estimate of the population mean. The sample mean is unbiased: the population mean of its distribution is the mean that it’s trying to estimate. The more data that goes into the sample mean, the more. concentrated its density / mass function is around the population mean. Exercises A standard die takes the values 1, 2, 3, 4, 5, 6 with equal probability. What is the expected value? Consider a density that is uniform from -1 to 1. (I.e. has height equal to 1/2 and looks like a box starting at -1 and ending at 1). What is the mean of this distribution? If a population has mean , what is the mean of the distribution of averages of 20 observations from this distribution? You are playing a game with a friend where you flip a coin and if it comes up heads you give her dollars and if it comes up tails she gives you $Y$ dollars. The odds that the coin is heads is . What is your expected earnings? Watch a video of the solution to this problem and look at the problem and the solution here.. If you roll ten standard dice, take their average, then repeat this process over and over and construct a histogram what would it be centered at? 
Watch a video solution here and see the original problem.

5. Variation

The variance

Watch this video before beginning. Recall that the mean of a distribution was a measure of its center. The variance, on the other hand, is a measure of spread. To get a sense, the plot below shows a series of increasing variances.

Distributions with increasing variance

We saw another example of how variances changed in the last chapter when we looked at the distribution of averages; they were always centered at the same spot as the original distribution, but were less spread out. Thus, it is less likely for sample means to be far away from the population mean than it is for individual observations. (This is why the sample mean is a better estimate of the population mean than any single observation.)

If $X$ is a random variable with mean $\mu$, the variance of $X$ is defined as $Var(X) = E[(X - \mu)^2] = E[X^2] - E[X]^2$. The rightmost equation is the shortcut formula that is almost always used for calculating variances in practice. Thus the variance is the expected (squared) distance from the mean. Densities with a higher variance are more spread out than densities with a lower variance. The square root of the variance is called the standard deviation. The main benefit of working with standard deviations is that they have the same units as the data, whereas the variance has the units squared.

In this class, we'll only cover a few basic examples for calculating a variance. Otherwise, we're going to use the ideas without the formalism. Also remember, what we're talking about is the population variance. It measures how spread out the population of interest is, unlike the sample variance which measures how spread out the observed data are. Just like the sample mean estimates the population mean, the sample variance will estimate the population variance.

Example

What's the variance from the result of a toss of a die? First recall that $E[X] = 3.5$, as we discussed in the previous lecture. Then let's calculate the other bit of information that we need, $E[X^2] = 1^2 \times \frac{1}{6} + 2^2 \times \frac{1}{6} + \ldots + 6^2 \times \frac{1}{6} \approx 15.17$. Thus now we can calculate the variance as: $Var(X) = E[X^2] - E[X]^2 \approx 15.17 - 3.5^2 \approx 2.92$.

Example

What's the variance from the result of the toss of a (potentially biased) coin with probability of heads (1) of $p$? First recall that $E[X] = 0 \times (1 - p) + 1 \times p = p$. Secondly, recall that since $X$ is either 0 or 1, $X^2 = X$. So we know that: $E[X^2] = E[X] = p$. Thus we can now calculate the variance of a coin flip as $Var(X) = E[X^2] - E[X]^2 = p - p^2 = p(1 - p)$. This is a well known formula, so it's worth committing to memory. It's interesting to note that this function is maximized at $p = 0.5$. The plot below shows this by plotting $p(1 - p)$ by $p$.

Plotting the binomial variance
p = seq(0, 1, length = 1000)
y = p * (1 - p)
plot(p, y, type = "l", lwd = 3, frame = FALSE)

Plot of the binomial variance

The sample variance

The sample variance is the estimator of the population variance. Recall that the population variance is the expected squared deviation around the population mean. The sample variance is (almost) the average squared deviation of observations around the sample mean. It is given by $S^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n - 1}$. The sample standard deviation is the square root of the sample variance. Note again that the sample variance is almost, but not quite, the average squared deviation from the sample mean since we divide by $n - 1$ instead of $n$. Why do we do this you might ask? To answer that question we have to think in terms of simulations. Remember that the sample variance is a random variable, thus it has a distribution and that distribution has an associated population mean. That mean is the population variance that we're trying to estimate if we divide by $n - 1$ rather than $n$.
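Why this matters can be seen in a small simulation sketch (the choices of 1000 simulations and samples of size 10 are arbitrary and just for illustration). We repeatedly simulate standard normal samples, whose true variance is 1, and compare the average of sample variances using the $n - 1$ divisor with the same quantity rescaled to use an $n$ divisor.

Simulating the bias of the two variance divisors
set.seed(1234)                            # for reproducibility
nosim <- 1000; n <- 10                    # arbitrary simulation settings
x <- matrix(rnorm(nosim * n), nosim, n)
varNminus1 <- apply(x, 1, var)            # var() uses the n - 1 divisor
varN <- varNminus1 * (n - 1) / n          # rescale to the n divisor
mean(varNminus1)                          # close to the true variance, 1
mean(varN)                                # systematically below 1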
It is also nice that as we collect more data the distribution of the sample variance gets more concentrated around the population variance that it's estimating.

Simulation experiments

Watch this video before beginning.

Simulating from a population with variance 1

Let's try simulating collections of standard normals and taking the variance. If we repeat this over and over, we get a sense of the distribution of sample variances.

Simulation of variances of samples of standard normals

Notice that these histograms are always centered in the same spot, 1. In other words, the sample variance is an unbiased estimate of the population variance. Notice also that they get more concentrated around the 1 as more data goes into them. Thus, sample variances comprised of more observations are less variable than sample variances comprised of fewer.

Variances of x die rolls

Let's try the same thing, now only with die rolls instead of simulating standard normals. In this experiment, we simulated samples of die rolls, took the variance and then repeated that process over and over. What is plotted are histograms of the collections of sample variances.

Simulated distributions of variances of dice

Recall that we calculated the variance of a die roll as 2.92 earlier on in this chapter. Notice each of the histograms is centered there. In addition, they get more concentrated around 2.92 as the variances are computed from more dice.

The standard error of the mean

At last, we finally get to a perhaps very surprising (and useful) fact: how to estimate the variability of the mean of a sample, when we only get to observe one realization. Recall that the average of a random sample from a population is itself a random variable having a distribution, which in simulation settings we can explore by repeatedly computing sample averages. We know that this distribution is centered around the population mean, $\mu$. We also know the variance of the distribution of means of random samples. The variance of the sample mean is: $Var(\bar X) = \sigma^2 / n$, where $\sigma^2$ is the variance of the population being sampled from. This is very useful, since we don't have repeated sample means with which to estimate its variance directly from the data. We already know a good estimate of $\sigma^2$ via the sample variance. So, we can get a good estimate of the variability of the mean, even though we only get to observe 1 mean. Notice also this explains why in all of our simulation experiments the variability of the sample mean kept getting smaller as the sample size increased: the standard error has the square root of the sample size in its denominator. Often we take the square root of the variance of the mean to get the standard deviation of the mean. We call the standard deviation of a statistic its standard error.

Summary notes

The sample variance, $S^2$, estimates the population variance, $\sigma^2$. The distribution of the sample variance is centered around $\sigma^2$. The variance of the sample mean is $\sigma^2 / n$. Its logical estimate is $S^2 / n$. The logical estimate of the standard error is $S / \sqrt{n}$. $S$, the standard deviation, talks about how variable the population is. $S / \sqrt{n}$, the standard error, talks about how variable averages of random samples of size $n$ from the population are.

Simulation example 1: standard normals

Watch this video before beginning. Standard normals have variance 1. Let's try sampling means of standard normals.
If our theory is correct, they should have standard deviation $1/\sqrt{10} \approx 0.316$.

Simulating means of random normals
> nosim <- 1000
> n <- 10
## simulate nosim averages of 10 standard normals
> sd(apply(matrix(rnorm(nosim * n), nosim), 1, mean))
[1] 0.3156
## Let's check to make sure that this is sigma / sqrt(n)
> 1 / sqrt(n)
[1] 0.3162

So, in this simulation, we simulated 1000 means of 10 standard normals. Our theory says the standard deviation of averages of 10 standard normals must be $1/\sqrt{10}$. Taking the standard deviation of the 1000 means yields nearly exactly that. (Note that it's only close, 0.3156 versus 0.3162. To get it to be exact, we'd have to simulate infinitely many means.)

Simulation example 2: uniform density

Standard uniforms have variance $1/12$. Our theory mandates that means of random samples of $n$ uniforms have sd $1/\sqrt{12 \times n}$. Let's try it with a simulation.

Simulating means of uniforms
> nosim <- 1000
> n <- 10
> sd(apply(matrix(runif(nosim * n), nosim), 1, mean))
[1] 0.09017
> 1 / sqrt(12 * n)
[1] 0.09129

Simulation example 3: Poisson

Poisson(4) random variables have variance 4. Thus means of random samples of $n$ Poisson(4) random variables should have standard deviation $2/\sqrt{n}$. Again let's try it out.

Simulating means of Poisson variates
> nosim <- 1000
> n <- 10
> sd(apply(matrix(rpois(nosim * n, 4), nosim), 1, mean))
[1] 0.6219
> 2 / sqrt(n)
[1] 0.6325

Simulation example 4: coin flips

Our last example is an important one. Recall that the variance of a coin flip is $p(1 - p)$. Therefore the standard deviation of the average of $n$ coin flips should be $\sqrt{p(1 - p)/n}$. Let's just do the simulation with a fair coin. Such coin flips have variance 0.25. Thus means of random samples of $n$ coin flips have sd $1/(2\sqrt{n})$. Let's try it.

Simulating means of coin flips
> nosim <- 1000
> n <- 10
> sd(apply(matrix(sample(0 : 1, nosim * n, replace = TRUE), nosim), 1, mean))
[1] 0.1587
> 1 / (2 * sqrt(n))
[1] 0.1581

Data example

Watch this before beginning. Now let's work through a data example to show how the standard error of the mean is used in practice. We'll use the father.son height data from Francis Galton.

Loading the data
library(UsingR); data(father.son)
x <- father.son$sheight
n <- length(x)

Here's a histogram of the sons' heights from the dataset. Let's calculate different variances and interpret them in this context.

Histogram of the sons' heights

Calculating the variance and standard error estimates
> round(c(var(x), var(x) / n, sd(x), sd(x) / sqrt(n)), 2)
[1] 7.92 0.01 2.81 0.09

The first number, 7.92, and its square root, 2.81, are the estimated variance and standard deviation of the sons' heights. Therefore, 7.92 tells us exactly how variable sons' heights were in the data and estimates how variable sons' heights are in the population. In contrast, 0.01, and its square root 0.09, estimate how variable averages of sons' heights are. Therefore, the smaller numbers discuss the precision of our estimate of the mean of sons' heights. The larger numbers discuss how variable sons' heights are in general.

Summary notes

The sample variance estimates the population variance. The distribution of the sample variance is centered at what it's estimating. It gets more concentrated around the population variance with larger sample sizes. The variance of the sample mean is the population variance divided by $n$. The square root of this quantity is the standard error. It turns out that we can say a lot about the distribution of averages from random samples, even though we only get one to look at in a given data set.

Exercises
6. Some common distributions

The Bernoulli distribution

The Bernoulli distribution arises as the result of a binary outcome, such as a coin flip. Thus, Bernoulli random variables take (only) the values 1 and 0 with probabilities of (say) $p$ and $1 - p$, respectively. Recall that the PMF for a Bernoulli random variable $X$ is $P(X = x) = p^x (1 - p)^{1 - x}$. The mean of a Bernoulli random variable is $p$ and the variance is $p(1 - p)$. If we let $X$ be a Bernoulli random variable, it is typical to call $X = 1$ a "success" and $X = 0$ a "failure". If a random variable follows a Bernoulli distribution with success probability $p$ we write that $X \sim$ Bernoulli$(p)$. Bernoulli random variables are commonly used for modeling any binary trait for a random sample. So, for example, in a random sample whether or not a participant has high blood pressure would be reasonably modeled as Bernoulli.

Binomial trials

The binomial random variables are obtained as the sum of iid Bernoulli trials. So if a Bernoulli trial is the result of a coin flip, a binomial random variable is the total number of heads. To write it out as mathematics, let $X_1, \ldots, X_n$ be iid Bernoulli$(p)$, then $X = \sum_{i=1}^n X_i$ is a binomial random variable. We write out that $X \sim$ Binomial$(n, p)$. The binomial mass function is $P(X = x) = \binom{n}{x} p^x (1 - p)^{n - x}$, where $x = 0, \ldots, n$. Recall that the notation $\binom{n}{x}$ (read "$n$ choose $x$") counts the number of ways of selecting $x$ items out of $n$ without replacement disregarding the order of the items. It turns out that $n$ choose 0 is 1, and $n$ choose 1 and $n$ choose $n - 1$ are both $n$.

Example

Suppose a friend has 8 children, 7 of which are girls and none are twins. If each gender has an independent 50% probability for each birth, what's the probability of getting 7 or more girls out of 8 births?

Calculating a binomial probability
> choose(8, 7) * 0.5^8 + choose(8, 8) * 0.5^8
[1] 0.03516
> pbinom(6, size = 8, prob = 0.5, lower.tail = FALSE)
[1] 0.03516

The normal distribution

Watch this video before beginning. The normal distribution is easily the handiest distribution in all of statistics. It can be used in an endless variety of settings. Moreover, as we'll see later on in the course, sample means follow normal distributions for large sample sizes. Remember the goal of probability modeling. We are assuming a probability distribution for our population as a way of parsimoniously characterizing it. In fact, the normal distribution only requires two numbers to characterize it. Specifically, a random variable is said to follow a normal or Gaussian distribution with mean $\mu$ and variance $\sigma^2$ if the associated density is: $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-(x - \mu)^2 / (2\sigma^2)}$. If $X$ is a RV with this density then $E[X] = \mu$ and $Var(X) = \sigma^2$. That is, the normal distribution is characterized by the mean and variance. We write $X \sim N(\mu, \sigma^2)$ to denote a normal random variable. When $\mu = 0$ and $\sigma = 1$ the resulting distribution is called the standard normal distribution. Standard normal RVs are often labeled $Z$.

Consider an example: if we say that intelligence quotients are normally distributed with a mean of 100 and a standard deviation of 15, then we are saying that if we randomly sample a person from this population, the probability that they have an IQ of say 120 or larger is governed by a normal distribution with a mean of 100 and a variance of $15^2 = 225$. Taken another way, if we know that the population is normally distributed then to estimate everything about the population, we need only estimate the population mean and variance. (Estimated by the sample mean and the sample variance.)

Reference quantiles for the standard normal

The normal distribution is so important that it is useful to memorize reference probabilities and quantiles. The image below shows reference lines at 0, 1, 2 and 3 standard deviations above and below 0.
This is for the standard normal; however, all of the rules apply to non-standard normals as 0, 1, 2 and 3 standard deviations above and below $\mu$, the population mean.

Standard normal reference lines.

The most relevant probabilities are: approximately 68%, 95% and 99% of the normal density lies within 1, 2 and 3 standard deviations of the mean, respectively. -1.28, -1.645, -1.96 and -2.33 are the 10th, 5th, 2.5th and 1st percentiles of the standard normal distribution, respectively. By symmetry, 1.28, 1.645, 1.96 and 2.33 are the 90th, 95th, 97.5th and 99th percentiles of the standard normal distribution, respectively.

Shifting and scaling normals

Since the normal distribution is characterized by only the mean and variance, which are a shift and a scale, we can transform normal random variables to be standard normals and vice versa. For example, if $X \sim N(\mu, \sigma^2)$ then: $Z = \frac{X - \mu}{\sigma}$ is standard normal. If $Z$ is standard normal then $X = \mu + \sigma Z$ is $N(\mu, \sigma^2)$. We can use these facts to answer questions about non-standard normals by relating them back to the standard normal.

Example

What is the 95th percentile of a $N(\mu, \sigma^2)$ distribution? Quick answer in R: qnorm(.95, mean = mu, sd = sigma). Alternatively, because we have the standard normal quantiles memorized, and we know that 1.645 is its 95th percentile, the answer has to be $\mu + 1.645\sigma$. In general, the answer is $\mu + z_{0.95}\sigma$, where $z_{0.95}$ is the appropriate standard normal quantile. To put some context on our previous setting, population mean BMI for men is reported as 29 with a standard deviation of 4.73. Assuming normality of BMI, what is the population 95th percentile? The answer is then: $29 + 1.645 \times 4.73 \approx 36.8$. Or alternatively, we could simply type qnorm(.95, 29, 4.73) in R.

Now let's reverse the process. Imagine asking what's the probability that a randomly drawn subject from this population has a BMI less than 24.27? Notice that $(24.27 - 29) / 4.73 = -1$. Therefore, 24.27 is 1 standard deviation below the mean. We know that roughly 16% of the normal density lies more than 1 standard deviation below the mean. Thus about 16% lies below 24.27. Alternatively, pnorm(24.27, 29, 4.73) yields the result.

Example

Assume that the number of daily ad clicks for a company is (approximately) normally distributed with a mean of 1020 and a standard deviation of 50. What's the probability of getting more than 1,160 clicks in a day? Notice that $(1160 - 1020) / 50 = 2.8$. Therefore, 1,160 is 2.8 standard deviations above the mean. We know from our standard normal quantiles that the probability of being larger than 2 standard deviations is 2.5% and 3 standard deviations is far in the tail. Therefore, we know that the probability has to be smaller than 2.5% and should be very small. We can obtain it exactly as pnorm(1160, 1020, 50, lower.tail = FALSE), which is 0.3%. Note that we can also obtain the probability as pnorm(2.8, lower.tail = FALSE).

Example

Consider the previous example again. What number of daily ad clicks would represent the one where 75% of days have fewer clicks (assuming days are independent and identically distributed)? We can obtain this as:

Finding a normal quantile
> qnorm(0.75, mean = 1020, sd = 50)
[1] 1054

The Poisson distribution

Watch this video before beginning. The Poisson distribution is used to model counts. It is perhaps second only to the normal distribution in usefulness. In fact, the Bernoulli, binomial and multinomial distributions can all be modeled by clever uses of the Poisson. The Poisson distribution is especially useful for modeling unbounded counts or counts per unit of time (rates), like the number of clicks on advertisements, or the number of people who show up at a bus stop. (While these are in principle bounded, it would be hard to actually put an upper limit on it.)
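Before getting into the details of the Poisson distribution, here is a short R sketch that double-checks the standard normal reference quantiles and the worked normal examples from this section; every number used is one quoted above.

Checking the reference quantiles and the worked examples
round(qnorm(c(0.90, 0.95, 0.975, 0.99)), 2)      # 1.28 1.64 1.96 2.33
round(pnorm(1 : 3) - pnorm(-(1 : 3)), 3)         # about 0.68, 0.95, 0.997 within 1, 2, 3 SDs
qnorm(0.95, mean = 29, sd = 4.73)                # BMI 95th percentile, roughly 29 + 1.645 * 4.73
pnorm(1160, mean = 1020, sd = 50, lower.tail = FALSE)  # ad click example, about 0.3%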
There is also a deep connection between the Poisson distribution and popular models for so-called event-time data. In addition, the Poisson distribution is the default model for so-called contingency table data, which is simply tabulations of discrete characteristics. Finally, when $n$ is large and $p$ is small, the Poisson is an accurate approximation to the binomial distribution.

The Poisson mass function is: $P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!}$ for $x = 0, 1, 2, \ldots$. The mean of this distribution is $\lambda$. The variance of this distribution is also $\lambda$. Notice that $x$ ranges from 0 to $\infty$. Therefore, the Poisson distribution is especially useful for modeling unbounded counts.

Rates and Poisson random variables

The Poisson distribution is useful for rates, counts that occur over units of time. Specifically, if $X \sim \text{Poisson}(\lambda t)$, where $\lambda$ is the expected count per unit of time and $t$ is the total monitoring time.

Example

The number of people that show up at a bus stop is Poisson with a mean of 2.5 per hour. If watching the bus stop for 4 hours, what is the probability that 3 or fewer people show up for the whole time?

Calculating a Poisson probability
> ppois(3, lambda = 2.5 * 4)
[1] 0.01034

Therefore, there is about a 1% chance that 3 or fewer people show up. Notice the multiplication by four in the function argument. Since lambda is specified as events per hour we have to multiply by four to consider the number of events that occur in 4 hours.

Poisson approximation to the binomial

When $n$ is large and $p$ is small the Poisson distribution is an accurate approximation to the binomial distribution. Formally, if $X \sim \text{Binomial}(n, p)$ then $X$ is approximately Poisson$(\lambda)$ where $\lambda = np$, provided that $n$ is large and $p$ is small.

Example, Poisson approximation to the binomial

We flip a coin with success probability 0.01 five hundred times. What's the probability of 2 or fewer successes?

Comparing the binomial and Poisson probabilities
> pbinom(2, size = 500, prob = 0.01)
[1] 0.1234
> ppois(2, lambda = 500 * 0.01)
[1] 0.1247

So we can see that the probabilities agree quite well. This approximation is often done as the Poisson model is a more convenient model in many respects.

Exercises

Your friend claims that changing the font to comic sans will result in more ad revenue on your web sites. When presented in random order, 9 pages out of 10 had more revenue when the font was set to comic sans. If it was really a coin flip for these 10 sites, what's the probability of getting 9 or 10 out of 10 with more revenue for the new font? A software company is doing an analysis of documentation errors of their products. They sampled their very large codebase in chunks and found that the number of errors per chunk was approximately normally distributed with a mean of 11 errors and a standard deviation of 2. When randomly selecting a chunk from their codebase, what's the probability of fewer than 5 documentation errors? The number of search entries entered at a web site is Poisson at a rate of 9 searches per minute. The site is monitored for 5 minutes. What is the probability of 40 or fewer searches in that time frame? Suppose that the number of web hits to a particular site are approximately normally distributed with a mean of 100 hits per day and a standard deviation of 10 hits per day. What's the probability that a given day has fewer than 93 hits per day, expressed as a percentage to the nearest percentage point? Watch a video solution and see the problem. Suppose that the number of web hits to a particular site are approximately normally distributed with a mean of 100 hits per day and a standard deviation of 10 hits per day.
What number of web hits per day represents the number so that only 5% of days have more hits? Watch a video solution and see the problem and solution. Suppose that the number of web hits to a particular site are approximately normally distributed with a mean of 100 hits per day and a standard deviation of 10 hits per day. Imagine taking a random sample of 50 days. What number of web hits would be the point so that only 5% of averages of 50 days of web traffic have more hits? Watch a video solution and see the problem and solution. You don’t believe that your friend can discern good wine from cheap. Assuming that you’re right, in a blind test where you randomize 6 paired varieties (Merlot, Chianti, …) of cheap and expensive wines. What is the change that she gets 5 or 6 right? Watch a video solution and see the original problem. The number of web hits to a site is Poisson with mean 16.5 per day. What is the probability of getting 20 or fewer in 2 days? Watch a video solution and see a written solution. 7. Asymptopia Asymptotics Watch this video before beginning. Asymptotics is the term for the behavior of statistics as the sample size limits to infinity. Asymptotics are incredibly useful for simple statistical inference and approximations. Asymptotics often make hard problems easy and difficult calculations simple. We will not cover the philosophical considerations in this book, but is true nonetheless, that asymptotics often lead to nice understanding of procedures. In fact, the ideas of asymptotics are so important form the basis for frequency interpretation of probabilities by considering the long run proportion of times an event occurs. Some things to bear in mind about the seemingly magical nature of asymptotics. There’s no free lunch and unfortunately, asymptotics generally give no assurances about finite sample performance. Limits of random variables We’ll only talk about the limiting behavior of one statistic, the sample mean. Fortunately, for the sample mean there’s a set of powerful results. These results allow us to talk about the large sample distribution of sample means of a collection of iid observations. The first of these results we intuitively already know. It says that the average limits to what its estimating, the population mean. This result is called the Law of Large Numbers. It simply says that if you go to the trouble of collecting an infinite amount of data, you estimate the population mean perfectly. Note there’s sampling assumptions that have to hold for this result to be true. The data have to be iid. A great example of this comes from coin flipping. Imagine if is the average of the result of coin flips (i.e. the sample proportion of heads). The Law of Large Numbers states that as we flip a coin over and over, it eventually converges to the true probability of a head. Law of large numbers in action Let’s try using simulation to investigate the law of large numbers in action. Let’s simulate a lot of standard normals and plot the cumulative means. If the LLN is correct, the line should converge to 0, the mean of the standard normal distribution. Finding a normal quantile n <- 10000 means <- cumsum ( rnorm ( n )) / ( 1 : n ) library ( ggplot2 ) g <- ggplot ( data.frame ( x = 1 : n , y = means ), aes ( x = x , y = y )) g <- g + geom_hline ( yintercept = 0 ) + geom_line ( size = 2 ) g <- g + labs ( x = "Number of obs" , y = "Cumulative mean" ) g Cumulative average from standard normal simulations. 
Law of large numbers in action, coin flip

Let's try the same thing, but for a fair coin flip. We'll simulate a lot of coin flips and plot the cumulative proportion of heads.

Law of large numbers for coin flips
means <- cumsum(sample(0 : 1, n, replace = TRUE)) / (1 : n)
g <- ggplot(data.frame(x = 1 : n, y = means), aes(x = x, y = y))
g <- g + geom_hline(yintercept = 0.5) + geom_line(size = 2)
g <- g + labs(x = "Number of obs", y = "Cumulative mean")
g

Cumulative proportion of heads from a sequence of coin flips.

Discussion

An estimator is called consistent if it converges to what you want to estimate. Thus, the LLN says that the sample mean of an iid sample is consistent for the population mean. Typically, good estimators are consistent; it's not too much to ask that if we go to the trouble of collecting an infinite amount of data that we get the right answer. The sample variance and the sample standard deviation of iid random variables are consistent as well.

The Central Limit Theorem

Watch this video before beginning. The Central Limit Theorem (CLT) is one of the most important theorems in statistics. For our purposes, the CLT states that the distribution of averages of iid variables, properly normalized, becomes that of a standard normal as the sample size increases. Consider this fact for a second. We already know the mean and standard deviation of the distribution of averages from iid samples. The CLT gives us an approximation to the full distribution! Thus, for iid samples, we have a good sense of the distribution of the average even though: (1) we only observed one average and (2) we don't know what the population distribution is. Because of this, the CLT applies in an endless variety of settings and is one of the most important theorems ever discovered. The formal result is that $\frac{\bar X_n - \mu}{\sigma / \sqrt{n}}$ has a distribution like that of a standard normal for large $n$. Replacing the standard error by its estimated value doesn't change the CLT. The useful way to think about the CLT is that $\bar X_n$ is approximately $N(\mu, \sigma^2 / n)$.

CLT simulation experiments

Let's try simulating lots of averages from various distributions and showing that the resulting distribution looks like a bell curve.

Die rolling

Simulate a standard normal random variable by rolling $n$ (six sided) dice. Let $X_i$ be the outcome for die $i$. Then note that $\mu = E[X_i] = 3.5$. Recall also that $Var(X_i) = 2.92$, so SE $= \sqrt{2.92 / n} = 1.71 / \sqrt{n}$. Let's roll $n$ dice, take their mean, subtract off 3.5, divide by $1.71 / \sqrt{n}$, and repeat this over and over.

Result of the die CLT simulation.

It's pretty remarkable that the approximation works so well with so few rolls of the die. So, if you're stranded on an island, and need to simulate a standard normal without a computer, but you do have a die, you can get a pretty good approximation with 10 rolls even.

Coin CLT

In fact the oldest application of the CLT is to the idea of flipping coins (by de Moivre). Let $X_i$ be the 0 or 1 result of the $i$th flip of a possibly unfair coin. The sample proportion, say $\hat p$, is the average of the coin flips. We know that: $E[X_i] = p$, $Var(X_i) = p(1 - p)$, $E[\hat p] = p$, $Var(\hat p) = p(1 - p)/n$, and the standard error is $\sqrt{p(1 - p)/n}$. Furthermore, because of the CLT, we also know that: $\frac{\hat p - p}{\sqrt{p(1 - p)/n}}$ will be approximately normally distributed. Let's test this by flipping a coin $n$ times, taking the sample proportion of heads, subtracting off 0.5 and multiplying the result by $2\sqrt{n}$ (that is, dividing by $1/(2\sqrt{n})$).

Results of the coin CLT simulation.

This convergence doesn't look quite as good as the die, since the coin has fewer possible outcomes. In fact, among coins of various degrees of bias, the convergence to normality is governed by how far $p$ is from 0.5.
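The code behind these CLT figures isn't shown in the text, so here is one way the fair-coin experiment could be simulated; the simulation sizes below are arbitrary choices for illustration. It draws sample proportions from n = 10 flips, standardizes them as described above, and overlays a standard normal density on the histogram.

A sketch of the coin CLT simulation
set.seed(2015)                             # arbitrary seed for reproducibility
nosim <- 1000; n <- 10; p <- 0.5           # illustrative settings
phats <- rbinom(nosim, size = n, prob = p) / n
z <- (phats - p) / sqrt(p * (1 - p) / n)   # subtract the mean, divide by the standard error
hist(z, breaks = 20, freq = FALSE, main = "Coin CLT sketch")
curve(dnorm(x), add = TRUE, lwd = 2)       # standard normal overlay

Changing p in this sketch is all it takes to explore coins of various degrees of bias.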
Let’s redo the simulation, now using instead of like we did before. Results of the simulation when p=0.9 Notice that the convergence to normality is quite poor. Thus, be careful when using CLT approximations for sample proportions when your proportion is very close to 0 or 1. Confidence intervals Watch this video before beginning. Confidence intervals are methods for quantifying uncertainty in our estimates. The fact that the interval has width characterizes that there is randomness that prevents us from getting a perfect estimate. Let’s go through how a confidence interval using the CLT is constructed. According to the CLT, the sample mean, , is approximately normal with mean and standard deviation . Furthermore, is pretty far out in the tail (only 2.5% of a normal being larger than 2 sds in the tail). Similarly, is pretty far in the left tail (only 2.5% chance of a normal being smaller than 2 standard deviations in the tail). So the probability is bigger than or smaller than is 5%. Or equivalently, the probability that these limits contain is 95%. The quantity: is called a 95% interval for . The 95% refers to the fact that if one were to repeatedly get samples of size , about 95% of the intervals obtained would contain . The 97.5th quantile is 1.96 (so I rounded to 2 above). If instead of a 95% interval, you wanted a 90% interval, then you want (100 - 90) / 2 = 5% in each tail. Thus your replace the 2 with the 95th percentile, which is 1.645. Example CI Give a confidence interval for the average height of sons in Galton’s data. Finding a confidence interval. > library ( UsingR ) > data ( father.son ) > x <- father.son $ sheight > ( mean ( x ) + c ( -1 , 1 ) * qnorm ( 0.975 ) * sd ( x ) / sqrt ( length ( x ))) / 12 [ 1 ] 5.710 5.738 Here we divided by 12 to get our interval in feet instead of inches. So we estimate the average height of the sons as 5.71 to 5.74 with 95% confidence. Example using sample proportions In the event that each is 0 or 1 with common success probability then . The interval takes the form: Replacing by in the standard error results in what is called a Wald confidence interval for . Remember also that is maximized at 1/4. Plugging this in and setting our quantile as 2 (which is about a 95% interval) we find that a quick and dirty confidence interval is: This is useful for doing quick confidence intervals for binomial proportions in your head. Example Your campaign advisor told you that in a random sample of 100 likely voters, 56 intent to vote for you. Can you relax? Do you have this race in the bag? Without access to a computer or calculator, how precise is this estimate? > 1 / sqrt ( 100 ) [ 1 ] 0.1 so a back of the envelope calculation gives an approximate 95% interval of (0.46, 0.66) . Thus, since the interval contains 0.5 and numbers below it, there’s not enough votes for you to relax; better go do more campaigning! The basic rule of thumb is then, gives you a good estimate for the margin of error of a proportion. Thus, for about 1 decimal place, 10,000 for 2, 1,000,000 for 3. > round ( 1 / sqrt ( 10 ^ ( 1 : 6 )), 3 ) [ 1 ] 0.316 0.100 0.032 0.010 0.003 0.001 We could very easily do the full Wald interval, which is less conservative (may provide a narrower interval). Remember the Wald interval for a binomial proportion is: Here’s the R code for our election setting, both coding it directly and using binom.test . 
> 0.56 + c ( -1 , 1 ) * qnorm ( 0.975 ) * sqrt ( 0.56 * 0.44 / 100 ) [ 1 ] 0.4627 0.6573 > binom.test ( 56 , 100 ) $ conf.int [ 1 ] 0.4572 0.6592 Simulation of confidence intervals It is interesting to note that the coverage of confidence intervals describes an aggregate behavior. In other words the confidence interval describes the percentage of intervals that would cover the parameter being estimated if we were to repeat the experiment over and over. So, one can not technically say that the interval contains the parameter with probability 95%, say. So called Bayesian credible intervals address this issue at the expense (or benefit depending on who you ask) of adopting a Bayesian framework. For our purposes, we’re using confidence intervals and so will investigate their frequency performance over repeated realizations of the experiment. We can do this via simulation. Let’s consider different values of and look at the Wald interval’s coverage when we repeatedly create confidence intervals. Code for investigating Wald interval coverage n <- 20 pvals <- seq ( 0.1 , 0.9 , by = 0.05 ) nosim <- 1000 coverage <- sapply ( pvals , function ( p ) { phats <- rbinom ( nosim , prob = p , size = n ) / n ll <- phats - qnorm ( 0.975 ) * sqrt ( phats * ( 1 - phats ) / n ) ul <- phats + qnorm ( 0.975 ) * sqrt ( phats * ( 1 - phats ) / n ) mean ( ll < p & ul > p ) }) Plot of Wald interval coverage. The figure shows that if we were to repeatedly try experiments for any fixed value of , it’s rarely the case that our intervals will cover the value that they’re trying to estimate in 95% of them. This is bad, since covering the parameter that its estimating 95% of the time is the confidence interval’s only job! So what’s happening? Recall that the CLT is an approximation. In this case isn’t large enough for the CLT to be applicable for many of the values of . Let’s see if the coverage improves for larger . Code for investigating Wald interval coverage n <- 100 pvals <- seq ( 0.1 , 0.9 , by = 0.05 ) nosim <- 1000 coverage2 <- sapply ( pvals , function ( p ) { phats <- rbinom ( nosim , prob = p , size = n ) / n ll <- phats - qnorm ( 0.975 ) * sqrt ( phats * ( 1 - phats ) / n ) ul <- phats + qnorm ( 0.975 ) * sqrt ( phats * ( 1 - phats ) / n ) mean ( ll < p & ul > p ) }) Output of simulation with . Now it looks much better. Of course, increasing our sample size is rarely an option. There’s exact fixes to make this interval work better for small sample sizes. However, for a quick fix is to take your data and add two successes and two failures. So, for example, in our election example, we would form our interval with 58 votes out of 104 sampled (disregarding that the actual numbers were 56 and 100). This interval is called the Agresti/Coull interval. This interval has much better coverage. Let’s show it via a simulation. Code for investigating Agresti/Coull interval coverage when n=20. n <- 20 pvals <- seq ( 0.1 , 0.9 , by = 0.05 ) nosim <- 1000 coverage <- sapply ( pvals , function ( p ) { phats <- ( rbinom ( nosim , prob = p , size = n ) + 2 ) / ( n + 4 ) ll <- phats - qnorm ( 0.975 ) * sqrt ( phats * ( 1 - phats ) / n ) ul <- phats + qnorm ( 0.975 ) * sqrt ( phats * ( 1 - phats ) / n ) mean ( ll < p & ul > p ) }) Coverage of the Agresti/Coull interval with The coverage is better, if maybe a little conservative in the sense of being over the 95% line most of the time. If the interval is too conservative, it’s likely a little too wide. To see this clearly, imagine if we made our interval to . 
Then we would always have 100% coverage in any setting, but the interval wouldn’t be useful. Nonetheless, the Agrestic/Coull interval gives a much better trade off between coverage and width than the Wald interval. In general, one should use the add two successes and failures method for binomial confidence intervals with smaller . For very small consider using an exact interval (not covered in this class). Poisson interval Since the Poisson distribution is so central for data science, let’s do a Poisson confidence interval. Remember that if then our estimate of is . Furthermore, we know that and so the natural estimate is . While it’s not immediate how the CLT applies in this case, the interval is of the familiar form So our Poisson interval is: Example A nuclear pump failed 5 times out of 94.32 days. Give a 95% confidence interval for the failure rate per day. Code for asymptotic Poisson confidence interval > x <- 5 > t <- 94.32 > lambda <- x / t > round ( lambda + c ( -1 , 1 ) * qnorm ( 0.975 ) * sqrt ( lambda / t ), 3 ) [ 1 ] 0.007 0.099 A non-asymptotic test, one that guarantees coverage, is also available. But, it has to be evaluated numerically. Code for exact Poisson confidence interval > poisson.test ( x , T = 94.32 ) $ conf [ 1 ] 0.01721 0.12371 Simulating the Poisson coverage rate Let’s see how the asymptotic interval performs for lambda values near what we’re estimating. Code for evaluating the coverage of the asymptotic Poisson confidence interval lambdavals <- seq ( 0.005 , 0.1 , by = 0.01 ) nosim <- 1000 t <- 100 coverage <- sapply ( lambdavals , function ( lambda ) { lhats <- rpois ( nosim , lambda = lambda * t ) / t ll <- lhats - qnorm ( 0.975 ) * sqrt ( lhats / t ) ul <- lhats + qnorm ( 0.975 ) * sqrt ( lhats / t ) mean ( ll < lambda & ul > lambda ) }) Coverage of Poisson intervals for various values of lambda The coverage can be low for low values of lambda. In this case the asymptotics works as we increase the monitoring time, t. Here’s the coverage if we increase to 1,000. Coverage of Poisson intervals for various values of lambda and t=1000 Summary notes The LLN states that averages of iid samples. converge to the population means that they are estimating. The CLT states that averages are approximately normal, with distributions. centered at the population mean. with standard deviation equal to the standard error of the mean. CLT gives no guarantee that $n$ is large enough. Taking the mean and adding and subtracting the relevant. normal quantile times the SE yields a confidence interval for the mean. Adding and subtracting 2 SEs works for 95% intervals. Confidence intervals get wider as the coverage increases. Confidence intervals get narrower with less variability or larger sample sizes. The Poisson and binomial case have exact intervals that don’t require the CLT. But a quick fix for small sample size binomial calculations is to add 2 successes and failures. Exercises I simulate 1,000,000 standard normals. The LLN says that their sample average must be close to? About what is the probability of getting 45 or fewer heads out 100 flips of a fair coin? (Use the CLT, not the exact binomial calculation). Consider the father.son data. Using the CLT and assuming that the fathers are a random sample from a population of interest, what is a 95% confidence mean height in inches? 
The goal of a a confidence interval having coverage 95% is to imply that: If one were to repeated collect samples and reconstruct the intervals, around 95% percent of them would contain the true mean being estimated. The probability that the sample mean is in the interval is 95%. The rate of search entries into a web site was 10 per minute when monitoring for an hour. Use R to calculate the exact Poisson interval for the rate of events per minute? Consider a uniform distribution. If we were to sample 100 draws from a a uniform distribution (which has mean 0.5, and variance 1/12) and take their mean, . What is the approximate probability of getting as large as 0.51 or larger? Watch this video solution and see the problem and solution here.. 8. t Confidence intervals Small sample confidence intervals Watch this video before beginning. In the previous lecture, we discussed creating a confidence interval using the CLT. Our intervals took the form: In this lecture, we discuss some methods for small samples, notably Gosset’s t distribution and t confidence intervals. These intervals are of the form: So the only change is that we’ve replaced the Z quantile now with a t quantile. These are some of the handiest of intervals in all of statistics. If you want a rule between whether to use a t interval or normal interval, just always use the t interval. Gosset’s t distribution The t distribution was invented by William Gosset (under the pseudonym “Student”) in 1908. Fisher provided further mathematical details about the distribution later. This distribution has thicker tails than the normal. It’s indexed by a degrees of freedom and it gets more like a standard normal as the degrees of freedom get larger. It assumes that the underlying data are iid Gaussian with the result that follows Gosset’s t distribution with degrees of freedom. (If we replaced by the statistic would be exactly standard normal.) The interval is where is the relevant quantile from the t distribution. Code for manipulate You can use rStudio’s manipulate function to to compare the t and Z distributions. Code for investigating t and Z densities. k <- 1000 xvals <- seq ( -5 , 5 , length = k ) myplot <- function ( df ){ d <- data.frame ( y = c ( dnorm ( xvals ), dt ( xvals , df )), x = xvals , dist = factor ( rep ( c ( "Normal" , "T" ), c ( k , k )))) g <- ggplot ( d , aes ( x = x , y = y )) g <- g + geom_line ( size = 2 , aes ( color = dist )) g } manipulate ( myplot ( mu ), mu = slider ( 1 , 20 , step = 1 )) The difference is perhaps easier to see in the tails. Therefore, the following code plots the upper quantiles of the Z distribution by those of the t distribution. Code for investigating the upper quantiles of the t and Z densities. pvals <- seq ( .5 , .99 , by = .01 ) myplot2 <- function ( df ){ d <- data.frame ( n = qnorm ( pvals ), t = qt ( pvals , df ), p = pvals ) g <- ggplot ( d , aes ( x = n , y = t )) g <- g + geom_abline ( size = 2 , col = "lightblue" ) g <- g + geom_line ( size = 2 , col = "black" ) g <- g + geom_vline ( xintercept = qnorm ( 0.975 )) g <- g + geom_hline ( yintercept = qt ( 0.975 , df )) g } manipulate ( myplot2 ( df ), df = slider ( 1 , 20 , step = 1 )) Summary notes In this section, we give an overview of important facts about the t distribution. The t interval technically assumes that the data are iid normal, though it is robust to this assumption. It works well whenever the distribution of the data is roughly symmetric and mound shaped. 
Paired observations are often analyzed using the t interval by taking differences. For large degrees of freedom, t quantiles become the same as standard normal quantiles; therefore this interval converges to the same interval as the CLT yielded. For skewed distributions, the spirit of the t interval assumptions is violated. Also, for skewed distributions, it doesn't make a lot of sense to center the interval at the mean. In this case, consider taking logs or using a different summary like the median. For highly discrete data, like binary, other intervals are available.

Example of the t interval, Gosset's sleep data

Watch this video before beginning. In R, typing data(sleep) brings up the sleep data originally analyzed in Gosset's Biometrika paper, which shows the increase in hours slept for 10 patients on two soporific drugs. R treats the data as two groups rather than paired.

The data

Loading the sleep data
> data(sleep)
> head(sleep)
  extra group ID
1   0.7     1  1
2  -1.6     1  2
3  -0.2     1  3
4  -1.2     1  4
5  -0.1     1  5
6   3.4     1  6

Here's a plot of the data. In this plot paired observations are connected with a line.

A plot of the pairs of observations from Gosset's sleep data.

Now let's calculate the t interval for the differences from baseline to follow up. Below we give four different ways for calculating the interval.

Calculating the t interval for the sleep data
g1 <- sleep$extra[1 : 10]; g2 <- sleep$extra[11 : 20]
difference <- g2 - g1
mn <- mean(difference); s <- sd(difference); n <- 10
## Calculating directly
mn + c(-1, 1) * qt(.975, n - 1) * s / sqrt(n)
## using R's built in function
t.test(difference)
## using R's built in function, another format
t.test(g2, g1, paired = TRUE)
## using R's built in function, another format
t.test(extra ~ I(relevel(group, 2)), paired = TRUE, data = sleep)
## Below are the results (after a little formatting)
       [,1] [,2]
[1,] 0.7001 2.46
[2,] 0.7001 2.46
[3,] 0.7001 2.46
[4,] 0.7001 2.46

Therefore, since our interval doesn't include 0, our 95% confidence interval estimate for the mean change (follow up - baseline) is 0.70 to 2.46.

Independent group t confidence intervals

Watch this video before beginning. Suppose that we want to compare the mean blood pressure between two groups in a randomized trial; those who received the treatment to those who received a placebo. The randomization is useful for attempting to balance unobserved covariates that might contaminate our results. Because of the randomization, it would be reasonable to compare the two groups without considering further variables. We cannot use the paired t interval that we just used for Gosset's sleep data, because the groups are independent. Person 1 from the treated group has no relationship with person 1 from the control group. Moreover, the groups may have different sample sizes, so taking paired differences may not even be possible; in any case it isn't advisable in this setting. We now present methods for creating confidence intervals for comparing independent groups.

Confidence interval

A $(1 - \alpha) \times 100\%$ confidence interval for the mean difference between the groups, $\mu_y - \mu_x$, is: $\bar Y - \bar X \pm t_{n_x + n_y - 2, 1 - \alpha/2} \, S_p \left(\frac{1}{n_x} + \frac{1}{n_y}\right)^{1/2}$. The notation $t_{n_x + n_y - 2, 1 - \alpha/2}$ means a t quantile with $n_x + n_y - 2$ degrees of freedom. The pooled variance estimator is: $S_p^2 = \frac{(n_x - 1) S_x^2 + (n_y - 1) S_y^2}{n_x + n_y - 2}$. This variance estimate is used if one is willing to assume a constant variance across the groups. It is a weighted average of the group-specific variances, with greater weight given to whichever group has the larger sample size.
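To see the weighting concretely, here is a tiny sketch with made-up group summaries; the sample sizes and standard deviations below are invented purely for illustration.

Pooled variance as a weighted average (hypothetical numbers)
n1 <- 5;  s1 <- 4                          # small group with the larger spread
n2 <- 50; s2 <- 2                          # large group with the smaller spread
sp2 <- ((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2)
sp2                                        # about 4.9, much closer to s2^2 = 4 than to s1^2 = 16
sqrt(sp2)                                  # the pooled standard deviation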
If there is some doubt about the constant variance assumption, assume a different variance per group, which we will discuss later.

Mistakenly treating the sleep data as grouped

Let's first go through an example where we treat paired data as if it were independent. Consider Gosset's sleep data from before. In the code below, we do the calculation for grouped data directly, and using the t.test function.

The sleep data treated as grouped and independent
n1 <- length(g1); n2 <- length(g2)
sp <- sqrt(((n1 - 1) * sd(g1)^2 + (n2 - 1) * sd(g2)^2) / (n1 + n2 - 2))
md <- mean(g2) - mean(g1)
semd <- sp * sqrt(1 / n1 + 1 / n2)
rbind(
  md + c(-1, 1) * qt(.975, n1 + n2 - 2) * semd,
  t.test(g2, g1, paired = FALSE, var.equal = TRUE)$conf,
  t.test(g2, g1, paired = TRUE)$conf
)

The results are:
        [,1]  [,2]
[1,] -0.2039 3.364
[2,] -0.2039 3.364
[3,]  0.7001 2.460

Notice that the paired interval (the last row) is entirely above zero. The grouped interval (first two rows) contains zero. Thus, acknowledging the pairing explains variation that would otherwise be absorbed into the variation for the group means. As a result, treating the groups as independent results in wider intervals. Even if it didn't result in a shorter interval, the paired interval would be correct, as the groups are not statistically independent!

ChickWeight data in R

Now let's try an example with actual independent groups. Load in the ChickWeight data in R. We are also going to manipulate the dataset to have a total weight gain variable using dplyr.

library(datasets); data(ChickWeight); library(reshape2)
## define weight gain or loss
wideCW <- dcast(ChickWeight, Diet + Chick ~ Time, value.var = "weight")
names(wideCW)[-(1 : 2)] <- paste("time", names(wideCW)[-(1 : 2)], sep = "")
library(dplyr)
wideCW <- mutate(wideCW, gain = time21 - time0)

Here's a plot of the data.

Chickweight data over time.

Here's a plot only of the weight gain for the diets.

Violin plots of chickweight data by weight gain (final minus baseline) by diet.

Now let's do a t interval comparing groups 1 and 4. We'll show the two intervals, one assuming that the variances are equal and one assuming otherwise.

Code for t interval of the chickWeight data
wideCW14 <- subset(wideCW, Diet %in% c(1, 4))
rbind(
  t.test(gain ~ Diet, paired = FALSE, var.equal = TRUE, data = wideCW14)$conf,
  t.test(gain ~ Diet, paired = FALSE, var.equal = FALSE, data = wideCW14)$conf
)
       [,1]   [,2]
[1,] -108.1 -14.81
[2,] -104.7 -18.30

For the time being, let's interpret the equal variance interval. Since the interval is entirely below zero it suggests that group 1 had less weight gain than group 4 (at 95% confidence).

Unequal variances

Watch this video before beginning. Under unequal variances our t interval becomes: $\bar Y - \bar X \pm t_{df} \left(\frac{S_x^2}{n_x} + \frac{S_y^2}{n_y}\right)^{1/2}$, where $t_{df}$ is the t quantile calculated with degrees of freedom: $df = \frac{\left(S_x^2 / n_x + S_y^2 / n_y\right)^2}{\frac{(S_x^2 / n_x)^2}{n_x - 1} + \frac{(S_y^2 / n_y)^2}{n_y - 1}}$, which will be approximately a 95% interval. This works really well. So when in doubt, just assume unequal variances. Also, we present the formula only for completeness. In practice, it's easy to mess up, so make sure to do t.test.

Referring back to the previous ChickWeight example, the violin plots suggest that considering unequal variances would be wise. Recall the code is

> t.test(gain ~ Diet, paired = FALSE, var.equal = FALSE, data = wideCW14)$conf
[2,] -104.7 -18.30

This interval remains entirely below zero. However, it is wider than the equal variance interval.
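For completeness, here is what the unequal-variance calculation looks like if done by hand for this comparison, using the Welch-Satterthwaite degrees of freedom given above. This is only a sketch: it assumes the wideCW14 data frame from the previous chunk is available, and, as noted, in practice it is safer to let t.test do the work.

Welch interval for the chickWeight data, done by hand
gain1 <- wideCW14$gain[wideCW14$Diet == 1]
gain4 <- wideCW14$gain[wideCW14$Diet == 4]
n1 <- sum(!is.na(gain1)); n4 <- sum(!is.na(gain4))
v1 <- var(gain1, na.rm = TRUE); v4 <- var(gain4, na.rm = TRUE)
se <- sqrt(v1 / n1 + v4 / n4)
## Welch-Satterthwaite approximate degrees of freedom
df <- (v1 / n1 + v4 / n4)^2 /
  ((v1 / n1)^2 / (n1 - 1) + (v4 / n4)^2 / (n4 - 1))
mean(gain1, na.rm = TRUE) - mean(gain4, na.rm = TRUE) +
  c(-1, 1) * qt(0.975, df) * se              # should roughly match the var.equal = FALSE row above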
Summary notes The t distribution is useful for small sample size comparisons. It technically assumes normality, but is robust to this assumption within limits. The t distribution gives rise to t confidence intervals (and tests, which we will see later) For other kinds of data, there are preferable small and large sample intervals and tests. For binomial data, there’s lots of ways to compare two groups. Relative risk, risk difference, odds ratio. Chi-squared tests, normal approximations, exact tests. For count data, there’s also Chi-squared tests and exact tests. We’ll leave the discussions for comparing groups of data for binary and count data until covering glms in the regression class. In addition, Mathematical Biostatistics Boot Camp 2 covers many special cases relevant to biostatistics. Exercises 9. Hypothesis testing Hypothesis testing is concerned with making decisions using data. Hypothesis testing Watch this video before beginning. To make decisions using data, we need to characterize the kinds of conclusions we can make. Classical hypothesis testing is concerned with deciding between two decisions (things get much harder if there’s more than two). The first, a null hypothesis is specified that represents the status quo. This hypothesis is usually labeled, . This is what we assume by default. The alternative or research hypothesis is what we require evidence to conclude. This hypothesis is usually labeled, or sometimes (or some other number other than 0). So to reiterate, the null hypothesis is assumed true and statistical evidence is required to reject it in favor of a research or alternative hypothesis Example A respiratory disturbance index (RDI) of more than 30 events / hour, say, is considered evidence of severe sleep disordered breathing (SDB). Suppose that in a sample of 100 overweight subjects with other risk factors for sleep disordered breathing at a sleep clinic, the mean RDI was 32 events / hour with a standard deviation of 10 events / hour. We might wa
- 1Combine flax seeds and water. Let thicken for 10-12 minutes, stirring occasionally.·Preheat oven to 350°F. Grease muffin tin or line with baking cups. - 2In a small bowl, combine flax egg, pumpkin puree, almond butter, avocado oil, maple syrup, and vanilla. - 3In a separate, larger bowl, combine almond flour, coconut flour, baking soda, pumpkin pie spice, and salt. - 4Add wet ingredients to dry and mix until fully blended. Fold in chocolate chips. - 5Transfer muffin batter to muffin tin. Bake for 20-22 minutes, or until a toothpick comes out clean when inserted.·Let cool for a few minutes in the muffin tin. Then transfer to a cooling rack to cool. Dessert, Breakfast, or Snack! It’s hard to find an easier-to-make and more versatile food than muffins, and these vegan pumpkin chocolate chip muffins are no exception! Loaded with healthy ingredients, these muffins can be eaten as a breakfast, dessert or snack with only 176 calories each – plus 12 grams of healthy fats from avocado oil and almond butter, fiber from flaxseed, and 3 grams of protein from almond flour and pumpkin. Vegan baking has its challenges, namely not being able to use eggs. However, ground flaxseed and water mixed together thickens up in just about 10 minutes and mimics the texture of a whisked egg very nicely. In this vegan pumpkin chocolate chip muffin recipe, not only is flaxseed important structurally, but it also has great nutritional benefits. Flaxseed is an excellent source of Omega-3s, in addition to being high in fiber and antioxidants. Pumpkin puree adds moistness to this vegan pumpkin chocolate chip muffin recipe, in addition to being full of antioxidants like beta-carotene, and vitamins A and C. High quality maple syrup is the only sweetener used and lends a rich flavor profile that cane sugar can’t. The combination of almond flour and coconut flour gives the muffins the perfect texture – not too dense but not too airy. This flour combination is also far more nutritious than any all-purpose flour – almond flour contributes protein and coconut flour contains medium-chain triglycerides, a.k.a. MCTs, a beneficial fat that’s been tied to weight loss. Almond butter is extremely high in magnesium and vitamin E, and is essential for the texture of these muffins. Avocado oil is used instead of coconut oil, but if you don’t have any, coconut oil can be used in its place. These pumpkin chocolate chip muffins come together in just 35 minutes; a short amount of time that will give you snacks or breakfast for a week. If you are looking for a more well-rounded meal, these muffins would be the perfect accompaniment to our weekly vegan meal delivery service.
https://www.freshnlean.com/recipes/vegan-pumpkin-chocolate-chip-muffins/
Carry on Spiney! The way I view it is I pour a full glass and drink half , it’s half empty. Though if I only pour half a glass it’s half full . Lol Lol, I'm the same, Harpy!! An optimist sees the glasses as 1/2 full. A pessimist sees the glasses as 1/2 empty. An optometrist sees the glasses as 1/2 off with the purchase of a second pair. Haaaaa! We once spent an entire hour on this in Gen. Chem lecture. My Chem 201 professor called this 'How to Blow Up Someone's Brain in One Paragraph 101' but I call it 'Empty is a Bald-Faced Lie'. It goes like this: The glass is always completely full, but the proportion of liquid to gas fluctuates at any given second due to atmospheric conditions. The halfway point where the gas to air ratio is 1:1 is purely perceptional and the same glass can be viewed to be many different percentages of gas to air based upon the viewpoint of the person looking at it. Therefore, the glass is never really at complete equilibrium for more than a fraction of a second and emptiness is perceived because of the inability of the human eye to see at a molecular level - emptiness is therefore a construct of the human mind. Our perception of what we cannot see in this instance is that there is a lack of visible matter, so we came up with the concept of emptiness to explain what we cannot perceive visually. Basically, we make up 'empty' to explain our inability to see all of the matter that is present inside the glass - the space is always occupied completely with water, whether in a gas or as a liquid but we only perceive the liquid. We call the matter we can't see 'emptiness' and see it as a lack of liquid. The 'empty' isn't actually a complete absence of water - it's there but just not in a form visible to us. There is no empty, just 'visible to humans' and 'not visible to humans'. The glass is full, there's always matter in one state or another inside of it. As for the comparison to life, I think the Doctor Who episode 'Vincent and The Doctor' explains it the best:
https://forum.veritashealth.com/discussion/112296/living-well/wellness-forum/how-you-view-your-cup
1 SITUATION RESPONSE FLOW CHART
SUPERVISOR'S ACTIONS
SITUATION OCCURS: Direct observation, complainant reports, third party reports. Document initial knowledge and action.
HANDLE YOURSELF (HR is available as a reference), based on: preference; less serious/complex; no additional facts needed; type of conduct is clear; no objectivity concerns. 1. Take appropriate action to stop conduct and prevent recurrence. 2. Follow up with employee if appropriate: training, Performance Notice, CAP/Letter of Expectations (in partnership with HR). 3. Support employees and department. Resolved? Yes: document and send file to HR. No: contact HR. Seek guidance from HR as needed.
REFER TO HR, based on: preference; more serious/complex; additional facts needed - investigation; type of conduct unclear; objectivity concerns; you are the subject of the complaint. 1. Participate in investigation. 2. Receive recommendations from HR and provide input into resolution (CAP). 3. HR drafts Performance Notice, Corrective Action/Letter of Expectations. 4. Supervisor finalizes Performance Notice/CAP and administers. 5. Monitor situation (Supervisor and HR). 6. Support department.
2 Employee Relations Process
Employee relations concerns, which often result in workplace conflict, usually fall into one of these areas: personnel policies; department policies or operations; distribution of duties; relationships with co-workers; relationships with supervisors; legal compliance issues such as workplace discrimination. Human Resources is responsible for ensuring that employee relations problems are addressed and that employees receive answers to their questions. The actual resolution of any situation is normally the responsibility of the department management team and may involve HR depending on the circumstance (i.e. Letter of Expectation/CAP). Human Resources works with employees and supervisors to find solutions which meet both employee and department needs, consistent with policy and legal requirements. Every attempt is made to find solutions which reflect Gonzaga's mission values of human dignity and justice. Employees are first encouraged to bring their issues or concerns directly to their supervisor. Employees may also request Human Resources' assistance at any time to accompany them through the process.
3 Supervisor's Role
Supervisors are responsible for addressing employee relations or conflict issues within their areas and for responding to employees' concerns or questions in a timely manner so that resolution is at the lowest possible level. A supervisor's involvement in and response to employee relations issues or conflicts should be consistent with University personnel policy and mission values. Supervisors seek guidance from their department head, Dean, or vice president or Human Resources as appropriate, and monitor action plans to ensure that resolution achieves desired results. Supervisors must balance employee needs with department requirements and the University's common good.
4 Human Resources Role
Human Resources staff members serve as facilitators of the conflict resolution process, bringing together involved parties and others as appropriate to work toward resolution. A Human Resources representative is the first point of contact for employees bringing issues to Human Resources for resolution. The Human Resources representative then oversees the process to ensure that employees receive answers to their questions and concerns in a timely manner.
Human Resources does not take sides in an issue, but rather assists in interpreting the parties' positions as well as University policies and practices which affect the outcome. Human Resources balances the needs and desires of individuals (employees and supervisors) with the University's common good.
5 Human Resources Role Cont.
Specifically, Human Resources serves as a resource to employees and supervisors in the following ways: help employees clarify problems or issues and how to present them to supervisors; assist supervisors to clarify performance standards and positively communicate them to employees, including performance/conduct problems or deficiencies, and develop plans to help employees meet standards; clarify the University's expectations of the respective roles of supervisors and employees as vital contributors to the Gonzaga community; interpret and explain personnel policies and practices; mediate conflicts or communication problems between or among individuals at their request; address with senior administrators unresolved conflicts or University policies/practices which contribute to employee relations issues.
6 SUPERVISOR CHECKLIST FOR MANAGING PERFORMANCE
Review job description with each staff member. Outline and document expectations. Conduct weekly/bi-weekly meetings with staff. Receive weekly updates from staff. Document meetings, tasks assigned, progress on tasks. Provide employee with any additional training/resources to ensure s/he has the skills necessary to perform his/her job. Communicate with employee and document when not meeting expectations - give specifics of task assigned and dates. Communicate and document if the employee is making progress or not meeting expectations so there are “no surprises”. Performance Notice. Corrective action/letter of expectations. Amendment process to document additional issues. Evaluation of progress memo. Administrative Leave. Suspension. Resignation. Resignation in lieu of Dismissal (Progressive issues or Serious Misconduct).
https://slideplayer.com/slide/3556487/
James Buchanan: Autograph Letter Signed "James Buchanan". One page, 8" x 10", March 23, 1850, Wheatland, to Hon. Edmund Burke, marked "confidential". In full: "My dear Sir, I desire to recall to your memory a fact which seemed to have escaped the recollection of every person. I mean the letter addressed by Col. Benton & carried by Col. Fremont to the people of California. To counteract the effect of this letter was one of the strongest reasons why my letter to Mr. Voorhies of the 7th October 1848 was written. I am very desirous to obtain a copy of Col. Benton's letter for my own archives. It was published in the New York Herald a short time before the date of my letter. I do not think I ever saw it in any other paper. Could you please procure a copy of it for me? I shall of course pay for the copying. I do not wish to be the instrument of making it public, nor do I desire its publication at the present moment. I am very far from entertaining any unkind feelings towards Col. Benton, and wish the copy merely for my own satisfaction. It is a remarkable fact, however, that whilst Southern members of Congress are barely engaged in discourse, the persons who incited the people of California to form a government independent of the agency of Congress, this important letter should have been entirely overlooked". The California gold rush in 1848 intensified questions about slavery in the new territory. President Taylor believed statehood could become a solution to the issue of slavery in the territories. Admission of California would tip the balance of power in the Senate in favor of free states. In 1849, Californians sought statehood and, after heated debate in the U.S. Congress arising out of the slavery issue, California entered the Union as a free, nonslavery state by the Compromise of 1850. In January 1850, two months prior to Buchanan writing this letter, Henry Clay presented a bill to Congress with five provisions for California's statehood. The recipient, Hon. Edmund Burke, was a lawyer who was appointed commissioner of patents in 1846 by President Polk, a position he held until 1850. Original mailing folds present, else fine. A highly important James Buchanan ALS about the California question, regarding future presidential candidate John C. Fremont and Col. Benton. Auction Info: Buyer's Premium per Lot: 19.5% of the successful bid (minimum $9) per lot.
https://historical.ha.com/itm/autographs/james-buchanan-autograph-letter-signed-james-buchanan-one-page-8-x-10-march-23-1850-wheatland-to-hon-edmund/a/675-30414.s
Brown sugar, also known as finished cane sugar, is a sugar that retains the cane's molasses; it is made by extracting and concentrating sugar cane juice. It has high nutritional value: in addition to functioning as a sugar, it contains vitamins and trace elements such as iron, zinc, manganese and chromium. But how does it work as an iron supplement? The Adequate Intake (AI) of dietary iron for Chinese residents, formulated by the Chinese Nutrition Society in 2000, is 15 mg per day for adult males, 20 mg for adult females, 25-35 mg for pregnant women, and 25 mg for nursing mothers; the tolerable upper intake level (UL) is 50 mg for both men and women. But the iron content of brown sugar is only 2.2 mg/100 g. Iron is generally divided into heme iron and non-heme iron. Heme iron is easily absorbed by the human body and is found mainly in red meat, liver and blood from animals. Non-heme iron is found mainly in plant foods; it must be separated from other organic components and reduced to ferrous ions before it can be absorbed. The iron in brown sugar is this poorly absorbed non-heme iron. So the iron contribution of brown sugar is negligible, and iron supplementation through animal foods is recommended instead. Pork liver contains 31.1 mg of iron per 100 grams, beef contains 3.2 mg per 100 grams, and pork contains 3.4 mg per 100 grams. Iron from animal foods is not only well absorbed but also plentiful, making them an excellent choice for iron supplementation.
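As a minimal sketch of the arithmetic behind "negligible" (restating only the figures above, and ignoring the poor absorption of non-heme iron):

```python
# How much brown sugar would be needed to reach the adult female AI of 20 mg of iron?
iron_per_100g = 2.2          # mg of iron per 100 g of brown sugar
target_ai = 20.0             # mg of iron per day (adult female AI)

grams_needed = target_ai / iron_per_100g * 100
print(f"{grams_needed:.0f} g of brown sugar per day")   # ~909 g - clearly impractical
```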
https://en-health.articles01.com/can-drinking-brown-sugar-supplement-iron/
December 18, 2019

There’s always more work to do than time available to do it. Effective prioritisation is very important to provide the focus to be successful at work. In general prioritisation is some product of effort x cost x time, or value x complexity, applied to possible tasks. The tasks should be aligned to your strategy. There are some well known frameworks for prioritisation that I find useful. I’ll describe them here and talk about how to choose one with your team. You might find that one prioritisation framework suits the team but you prefer to use a different one for your own tasks. I suggest gathering your team together and applying your specific list of work to each framework and see what works best for you. The agenda would look something like this.

This is a popular form of prioritisation. It looks at the value and complexity of the piece of work. Create a quadrant break down of value and complexity. Place the various work items in the appropriate quadrant. Only work on the items in 1 and 2 (in that order). You might be able to work on the items in 3 but only after the items in 1 and 2 are completed.

Value vs Complexity

|                 | Low value | High value |
| High complexity | X         | 2          |
| Low complexity  | 3         | 1          |

High value / Low complexity are your quick wins. High value / high complexity are the strategic features. Low value / low complexity things can be worked on usually if there are other factors pushing it but be cautious. Low value / High complexity work should be avoided.

Reach x Impact x Confidence / Effort

This is a numerical framework from intercom. Numbers are nice because it’s not easy to argue with a number. Of course there is still some subjectivity in this method. The way this works can make it difficult to apply for teams that have customer facing and internal work. The reach number will vary significantly in that case. The reach is the number of customers expected to be affected by the work. So a change to our authentication system could expect to reach 100 customers. A change to a feature used only by large enterprise customers would only reach 10 customers. The impact is very subjective and should be decided as a group with some external input. Use 3 for “massive”, 2 for “high”, 1 for “medium”, 0.5 for low and 0.25 for minimal. These factors will scale the number appropriately. The confidence is also subjective but don’t overthink it. I use 100% for “high”, 70% for “medium” and 50% for “low”. For effort use the timeframe that gives you the closest to whole numbers that matches your work. For my team and I that is person-weeks. For teams constantly working on larger pieces of work that might be person-months. It doesn’t matter as long as you use whole numbers and are consistent for the set of items you’re prioritising. See the post from intercom for more on this one: https://www.intercom.com/blog/rice-simple-prioritization-for-product-managers/

Here we have a list of columns that represent the various things that are important to you right now. This is probably related to your strategy or your mission. They should be written in a binary format. Then for each of your items to prioritise you fill in the columns. The matrix provides a visual representation of what should be prioritised. e.g.

| | Speed up database team | Speed up support team | Speed up product team | Improves brand |
| Project A | Y | N | N | Y |
| Project B | Y | N | Y | Y |
| Project C | N | N | N | Y |

In this example I would do B, A, C in that order. Often the matrix has many more columns.
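As a minimal sketch, assuming Python, the Reach x Impact x Confidence / Effort scoring above can be written down directly; the items and their numbers below are invented purely for illustration:

```python
# Reach x Impact x Confidence / Effort scoring, as described above.
items = [
    # name, reach (customers), impact (0.25-3), confidence (0-1), effort (person-weeks)
    ("Change to authentication system", 100, 1.0, 1.0, 4),
    ("Enterprise-only feature tweak",    10, 2.0, 0.7, 2),
    ("Internal tooling cleanup",         30, 0.5, 0.5, 1),
]

def rice_score(reach, impact, confidence, effort):
    """(Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

ranked = sorted(items, key=lambda i: rice_score(*i[1:]), reverse=True)
for name, *factors in ranked:
    print(f"{rice_score(*factors):7.1f}  {name}")
```

The resulting order depends entirely on the subjective impact and confidence values, which is exactly why they should be agreed as a group rather than by one person.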
This method forces a group decision between just two items and you just keep repeating until all items have been sorted next to each other. It’s slower but can be useful in groups with different contexts and aims. It encourages discussion (and hopefully empathy) between the various groups. Start by taking two random items from the pool and deciding as a group which is higher priority. Say A is higher priority than B. Now you have:

-- A -- B --

Now take another random item from the pool and run the same process with B. Say item C is higher priority than B. Now you have:

-- A -- C -- B --

So now you have to run the same process between A and C. If it turns out that the group feels C is higher priority you should put that to the left of A.

-- C -- A -- B --

And now you have a sorted list of prioritised items. This takes a long time but the discussions that arise are usually very valuable. You should take notes of the discussions! It’s a great method for cross-team prioritisation (see the sketch below).

There are many ways to prioritise work and different methods will work for different teams. However always think about why you NEED to prioritise right now. The most important thing by far is that the entire organisation has the SAME vision and is following the same overall strategy to get there. Ensure this is in place along with introducing any prioritisation tools.

Hi! I'm Darragh ORiordan. I live and work in Auckland, New Zealand enjoying the mountains and the ocean. I build and support happy teams that create high quality software for the web. Contact me on Twitter!
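As a minimal sketch, assuming Python, the pairwise-comparison ordering above is essentially an insertion sort whose comparison step is the group's decision between two items; the scoring rule standing in for that decision below is invented for illustration:

```python
# Pairwise-comparison prioritisation: the "group decision" is stubbed out as a
# function; in a real session it is the outcome of a discussion between two items.
from typing import Callable, List

def pairwise_prioritise(items: List[str],
                        higher_priority: Callable[[str, str], bool]) -> List[str]:
    """Return items ordered highest priority first, using only pairwise decisions."""
    ordered: List[str] = []
    for item in items:
        # Walk the already-ordered list until the new item wins a comparison,
        # then insert it there (an insertion sort driven by group decisions).
        pos = 0
        while pos < len(ordered) and higher_priority(ordered[pos], item):
            pos += 1
        ordered.insert(pos, item)
    return ordered

# Example with a made-up scoring rule standing in for the group's judgement.
scores = {"A": 2, "B": 1, "C": 3}
result = pairwise_prioritise(["A", "B", "C"], lambda x, y: scores[x] > scores[y])
print(result)  # ['C', 'A', 'B'], matching the worked example above
```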
https://www.darraghoriordan.com/2019/12/18/4-prioritization-frameworks-for-your-team/
In the season 3 premiere, we find ourselves in a world where Reverse-Flash didn’t kill Barry’s mother. Now, Barry walks with a spring in his step as he almost has everything he’s ever wanted. However, none of his friends in the other timeline remember who he is. Barry starts by asking Iris out on a date and she accepts. There are two obstacles in their path, one being Joe who is a drunkard and heavily disapproves of them (Iris puts him in his place). They are talking when word reaches them that Kid Flash and The Rival are locked in battle. Iris runs off, as does Barry. He arrives on the scene and tries to save Kid Flash, who has been thrown out a window, but Barry’s powers short out. Kid Flash falls into a dumpster, unconscious. Barry goes to him and removes his mask, revealing Wally West. Wally gained his powers after his enhanced-engine car was struck by lightning. He and Iris have joined forces in order to fight crime. Barry wants to help them take down The Rival and they say that the only person who can is billionaire Cisco Ramon. Cisco, however, doesn’t want anything to do with them. Barry tries to convince him, but the memories of his and Cisco’s friendship start to disappear. After that, Barry decides to tell them all the truth. But first, there’s someone else missing from the team. Barry runs off and “kidnaps” Caitlin, who is now an ophthalmologist. Barry explains everything that has happened and his changing of time. Nobody but Iris believes him. Iris tells Barry that she feels like something has been missing from her life and only felt at peace when Barry appeared. Barry and Wally team up to take on The Rival, but Wally doesn’t want to be a sidekick. He ignores Barry’s plan and attacks Rival on his own. This ends in Wally taking a lead pipe to the chest. Barry and Rival face off, but once again, Barry’s powers fail. Iris gives him a pep talk, and it is enough to get Barry running. He defeats Rival and goes to Wally, but the Rival doesn’t stay down. He is about to kill Barry when there’s a gunshot. Joe arrives on the scene and shoots the evil speedster. Barry removes his cowl and tells Joe that Wally needs help. They get him back to the others at Ramon Industries, but he isn’t healing like he should be. Barry realizes that he can’t continue in this world. He goes to where he has kept Reverse-Flash imprisoned all these months and tells him that they have to go back and kill Nora. Reverse-Flash willingly agrees. Barry finds himself back where he was when he changed time back in the season 2 finale. He goes into the West house and gives Wally a big hug. Joe and Wally are confused by Barry’s pep since Henry just died. Barry says that he is okay and then asks where Iris is. Joe tells him that isn’t funny and storms out of the house. Iris and Joe had a falling out and haven’t spoken in ages. The final scene of the episode is the man behind The Rival mask, Edward Clariss, being woken up by a creepy voice and a name being etched into a mirror: Alchemy. In the comics, Dr. Alchemy was a foe of the Flash. He told Clariss to “wake up,” and this could either mean to embrace his identity as the Rival or Dr. Alchemy. Flashpoint has seemingly come and gone, but not without further consequences. I am incredibly intrigued by what else has been affected in this new timeline, as well as how it trickles to the other CW shows. It also looks as though Barry and Iris’ relationship will take a front seat this season, and makes you wonder what it actually looks like in this new timeline. 
Are they even friends? What other adverse effects will present themselves?
https://fanfest.com/the-flash-recap-flashpoint/
--- abstract: 'We regard the classification of rational homotopy types as a problem in algebraic deformation theory: any space with given cohomology is a perturbation, or deformation, of the formal space with that cohomology. The classifying space is then a moduli space — a certain quotient of an algebraic variety of perturbations. The description we give of this moduli space links it with corresponding structures in homotopy theory, especially the classification of fibre spaces $F \to E \overset{p}{ \rightarrow} B$ with fixed fibre $F$ in terms of homotopy classes of maps of the base $B$ into a classifying space constructed from $Aut(F)$, the monoid of homotopy equivalences of $F$ to itself. We adopt the philosophy, later promoted by Deligne in response to Goldman and Millson, that any problem in deformation theory is “controlled” by a differential graded Lie algebra, unique up to homology equivalence (quasi-isomorphism) of dg Lie algebras. Here we extend this philosophy further to control by $L_\infty$-algebras.' author: - Mike Schlessinger and Jim Stasheff title: Deformation Theory and Rational Homotopy Type ---

*In memory of Dan Quillen who established the foundation on which this work rests.*

Examples and computations {#examples}
=========================

Although some of our results are of independent theoretical interest, we are concerned primarily with reducing the problem of classification to manageable computational proportions. One advantage of the miniversal variety is that it allows us to read off easy consequences for the classification from conditions on $H(L)$. For the remainder of this section, let $(S Z,d)$ be the minimal model for a gca ${\mathcal H}$ of finite type and let $L_{{\mathcal H}} \subset {Der}\, L^c ({\mathcal H})$ be the corresponding [dg Lie algebra ]{}of weight decreasing derivations, which is appropriate for classifying homotopy type. The following theorems follow from \[summarythm\] and the remarks following it. If $H^1(L_{{\mathcal H}}) = 0$, then ${\mathcal H}$ is intrinsically formal, i.e., no perturbation of $(S Z,d)$ has a different homotopy type; $M_{{\mathcal H}}$ is a point. If $H^0 (L_{{\mathcal H}}) = 0$, then $M_{{\mathcal H}}$ is the quotient of the miniversal variety by ${Aut}\, {\mathcal H}$. If $H^2 (L_{{\mathcal H}}) = 0$, then the miniversal variety is $H^1(L_{{\mathcal H}})$. If $L_{{\mathcal H}}$ is formal in degree 1 (in the sense of \[in degree 1\]), then $M_{{\mathcal H}}$ is the quotient of a pure quadratic variety by the group of outer automorphisms of $(S Z,d)$ (cf. §\[miniversal\]). The following examples give very simple ways in which these conditions arise. Let ${Der}^n_k$ denote derivations which raise top(ological) degree by $n$ and decrease the weight = top degree plus resolution degree by $k$. For ${Der}^n_k L^c ({\mathcal H})$, this specializes as follows: ${Der}\, L^c ({\mathcal H})$ can be identified with ${Hom}\, (L^c({\mathcal H}) ,{\mathcal H})$ and hence with a subspace of ${Hom}\, (T^c ({\mathcal H}),{\mathcal H})$ where $T^c({\mathcal H})$ is the tensor coalgebra. Then each $\theta_k \in {Der}^n_k$ corresponds to an element of ${Hom}\, ({\mathcal H}^{\otimes k+p+1},{\mathcal H})$ which lowers weight by $k$. In particular, $\theta_k$ of top degree 1 and weight $-k$ can be identified with an element of ${Hom}\, (\bar {\mathcal H}^{\otimes k+2},{\mathcal H})$ which lowers the internal ${\mathcal H}$-degree by $k$ (e.g., $d=m : {\mathcal H}\otimes {\mathcal H}\to {\mathcal H}$ preserves degree). 
Thus examples of the theorems above arise because of gaps in ${Der}^n_k L^c ({\mathcal H})$ for $n=0, 1, $ or $2.$

Shallow spaces {#shallow}
--------------

By a [**shallow space**]{}, we mean one whose cohomological dimension is a small multiple of its connectivity. [@felix:Bull.Soc.Math.France80; @NeiMil; @halperin-stasheff] If ${\mathcal H}^i = 0$ for $i < n$ (with $n > 1$) and $i \ge 3n-1$, then ${\mathcal H}$ is intrinsically formal, i.e., $M_{\mathcal H}$ consists of one point. From our point of view or many others, this is trivial. We have $\bar {\mathcal H}^{\otimes k+2}= 0$ up to degree $(k+2)n$, so that ${Image}\ \theta_k$ lies in degree at least $(k+2)n-k$ where ${\mathcal H}$ is zero for $k \ge 1$. A simple example is ${\mathcal H}= H(S^n \lor S^n \lor S^{2n+1} )$ for $n>2$. If ${\mathcal H}^i = 0$ for $i<n$ and $i \ge 4n-2$, then the space of homotopy types $M_{\mathcal H}$ is $H^1(L)/{Aut}\, {\mathcal H}$ [@felix:Bull.Soc.Math.France80; @felix:diss]. Now $L_{\mathcal H}^1 = L^1_1$, i.e., $\theta_1$ may be non–zero but $\theta_k= 0$ for $k \ge 2$. Similarly $L_{\mathcal H}^2 = 0$. Thus, $W = V_{L_{\mathcal H}} = Z^1 (L)$. Consider $L_{\mathcal H}$ and its action. The brackets have image in dimension at least $3n-2$, thus in computing $( exp \, \phi)(d + \theta)$ the terms quadratic in $\phi$ lie in ${\mathcal H}^i$ for $i \ge 4n-2$ and hence are zero. Thus, $(exp\, \phi)(d + \theta)$ reduces to $(1 + ad\, \phi)(d + \theta)$. The mixed terms $[\phi,\theta]$ again lie in dimension at least $4n-2$ and are also zero, so that $(exp\, \phi)(d + \theta)$ is just $d + \theta + [\phi,d]$. Therefore, $W_{L_{\mathcal H}} / exp\, L_{\mathcal H}$ is just $H^1 (L_{\mathcal H})$. Here a simple example is ${\mathcal H}= H(S^2 \lor S^2 \lor S^5)$ [@halperin-stasheff] §6.6. Let the generators be $x_2 ,y_2 , z_5$. We have $L^2 = 0$ for the same dimensional reasons, so $W_L = V_L = H^1 (L)$ which is ${{\mathbf}Q}^2$. Finally, ${Aut}\, {\mathcal H}= GL(2) \times GL(1)$ acts on $H^1 (L)$ so as to give two orbits: $(0,0)$ and the rest. The space $M_{\mathcal H}$ is $$\bigodot\ \cdot$$ meaning the non-Hausdorff two-point space with one open point and one closed. For later use, we will also want to represent this as $\ \ \ \cdot \to \cdot\ \ \ $, meaning one orbit is a limit point of the other. If ${\mathcal H}^i=0$ for $i<n$ and $i \ge 5n-2$, then $W = V_L = Z^1(L)$ still, but now the action of $L$ may be quadratic and much more subtle. We will return to this shortly, but first let us consider the problem of invariants for distinguishing homotopy types.

Cell structures and Massey products {#cells}
------------------------------------

We have mentioned that ${Der}\ L({\mathcal H})$ can be identified with ${Hom}\,({\mathcal H}^*,L({\mathcal H}))$. This permits an interpretation in terms of attaching maps which is particularly simple in case the formal space is a wedge of spheres $X = \bigvee S^{n_i}$. The rational homotopy groups $\pi_*(\Omega X) \otimes {{\mathbf}Q}$ are then isomorphic to $L(H(X))$ [@hilton:spheres]. In terms of the obvious basis for ${\mathcal H}$, the restriction of a perturbation $\theta$ to ${\mathcal H}_{n_i}$ can be described as iterated Whitehead products which are the attaching maps for the cells $e^{n_i}$ in the perturbed space. In more detail, here is what’s going on: attaching a cell by an ordinary Whitehead product $[S^p,S^q]$ means the cell carries the product cohomology class. 
Massey (and Uehara) [@massey-uehara; @massey:mex] introduced Massey products in order to detect cells attached by iterated Whitehead products such as $[S^p,[S^q,S^r]]$. If we identify a perturbation $\theta_k$ with a homomorphism $\theta_k : H^{\otimes k+2} \to H$, this suggestion of a $(k+2)$–fold Massey product can be made more precise as follows: Consider the term $\theta$ of least weight $k$ in the perturbation. By induction, we assume all $j$-fold Massey products are identically zero for $3 \leq j < k+2$. Now a $(k+2)$–fold Massey product would be defined on a certain subset $M_{k+2} \subset H^{\otimes k+2}$, namely, the kernel of $\Sigma (-1)^j(1\otimes \dots \otimes m \otimes \dots \otimes 1)$ which is to say $$\quad x_0\otimes \dots \otimes x_{k+1} \in M_{k+2}\qquad \text{ iff } \overset {k}{\underset {j=0}{\Sigma}}(-1)^jx_0 \otimes \dots \otimes x_jx_{j+1}\otimes \dots \otimes x_{k+1} = 0.$$ We can then define $\langle x_0,\dots ,x_{k+1}\rangle$ as the coset of $\theta (x_0\otimes \dots \otimes x_{k+1})$ in $H$ modulo $x_0H+Hx_{k+1}$. Moreover, if $\theta = [d,\phi]$ for some $\phi \in L$, then $\langle x_0,\dots ,x_{k+1}\rangle$ will be the zero coset because $$\begin{aligned} \quad \theta (x_0 \otimes\dots \otimes x_{k+1}) =& x_0\phi(x_1\otimes \dots \otimes x_{k+1})\\ &\pm \Sigma (-1)^j\phi (x_0\otimes \dots \otimes x_jx_{j+1} \otimes \dots \otimes x_{k+1})\\ &\pm \phi (x_0\otimes \dots \otimes x_k)x_{k+1}, \end{aligned}$$ the latter sum being zero on $M_{k+2}$. Notice that $\phi$ makes the notion of uniform vanishing precise. The first example of continuous moduli, i.e., of a one–parameter family of homotopy types, was mentioned to us by John Morgan (cf. [@neis; @felix:Bull.Soc.Math.France80]). Let ${\mathcal H}= H(S^3 \lor S^3 \lor S^{12})$, so that the attaching map $\alpha$ is in $\pi_{11}(S^3 \lor S^3) \otimes {{\mathbf}Q}$ which is of dimension 6, while ${Aut}\,{\mathcal H}= GL(2) \times GL(1)$ is of dimension 5. Alternatively, the space of 5–fold Massey products ${\mathcal H}^{\otimes 5} \to {\mathcal H}$ is of dimension 6 and so distinguishes at least a 1–parameter family. The Massey product interpretation is particularly helpful when only one term $\theta_k$ is involved. All of the examples of Halperin and Stasheff can be rephrased significantly in this form. To do so, we use the following: [**Notation**]{}: Fix a basis for ${\mathcal H}$. For $x$ in that basis and $y \in L({\mathcal H})$, denote by $y \partial x$ the derivation which takes $x$ to $y$ (think of $\partial x$ as $\partial / \partial x$) and sends the complement of $x$ in the basis to zero.

Moderately shallow spaces {#moderately}
--------------------------

Returning to the range of ${\mathcal H}^i = 0, i < n \text{ and } i \ge 4n-2$, consider Example 6.5 of [@halperin-stasheff], i.e., ${\mathcal H}= H((S^2 \lor S^2) \times S^3) $ with generators $x_1,x_2,x_3$. Again $L^1$ is all of weight $-1$; any $\theta _1$ is a linear combination: $$\begin{aligned} \mu_1[x_1,[x_1,x_2]]\partial x_1x_3 \, &+ \, \mu_2[x_1,[x_1,x_2]]\partial x_2x_3\\ &+\sigma_1[x_2,[x_1,x_2]]\partial x_1x_3+\sigma_2[x_2,[x_1,x_2]]\partial x_2x_3.\end{aligned}$$ As for $L$, it has basis $[x_i,x_j]\partial x_3$ for $1 \leq i \leq j \leq 2$. Computing $d_L$, it is easy to see that $H^1(L)$ has basis: $[x_1,[x_1,x_2]]\partial x_1x_3 = -[x_2,[x_1,x_2]]\partial x_2x_3$. The action of ${Aut}\, {\mathcal H}$ again gives two orbits: $\mu_1 = \sigma_2 \text{ and } \mu_1 \not= \sigma_2 $. 
In terms of the spaces, we have respectively $(S^2\lor S^2) \times S^3$ and $S^2 \lor S^3 \lor S^2\lor S^3 \cup e^5 \cup e^5$ where one $e^5$ is attached by the usual Whitehead product and the other $e^5$ is attached by the usual Whitehead product plus a non–zero iterated Whitehead product. Notice the individual Massey products $\langle x_1,x_1,x_2\rangle$ and $\langle x_1,x_2,x_2\rangle$ are all zero modulo indeterminacy (i.e., $x_1{\mathcal H}^3 + {\mathcal H}^3x_2$), but the classification of homotopy types reflects the uniform behavior of all Massey products. For example, changing the choice of bounding cochain for $x_1x_2$ changes $\langle x_1,x_1,x_2\rangle$ by $x_1x_3$ and simultaneously changes $\langle x_1,x_2,x_2\rangle$ by $x_2x_3$, accounting for the dichotomy between $\mu_1 = \sigma_2$ and $\mu_1 \not= \sigma_2$. The language of Massey products is thus suggestive but rather imprecise for the classification we seek. Our machinery reveals that the superficially similar ${\mathcal H}= H((S^3 \lor S^3)\times S^5)$ behaves quite differently. There is only one basic element in $L^1$, namely $\phi = [x_1,x_2]\partial x_5$, with again $[d,\phi] = [x_1,[x_1,x_2]]\partial x_1x_5 + [x_2,[x_1,x_2]]\partial x_2x_5 , \ \text{ so } V_L/ exp \, L {\cong }{{\mathbf}Q}^3$. If we choose as basic $$\begin{aligned} \qquad p &= [x_1,[x_1,x_2]]\partial x_2x_5 \\ \qquad q &= [x_2,[x_1,x_2]]\partial x_1x_5 \\ \qquad r &= 1/2[x_1,[x_1,x_2]]\partial x_1x_5 - 1/2[x_2,[x_1,x_2]]\partial x_2x_5 \\\end{aligned}$$ then $GL(2,{{\mathbf}Q}) = {Aut}\ {\mathcal H}^3$ acts by the representation $sym\ 2$, the second symmetric power, that is, as on the space of quadratic forms in two variables. Since ${Aut}\,{\mathcal H}^5 = {{\mathbf}Q}^*$ further identifies any form with its non–zero multiples, the rank and discriminant (with values in ${{\mathbf}Q}^*/({{\mathbf}Q}^*)^2$) are a complete set of invariants. Thus there are countably many objects parameterized by $\{0\} \cup {{\mathbf}Q}/({{\mathbf}Q}^*)^2$; in more detail, we have ${{\mathbf}Q}^*/({{\mathbf}Q}^*)^2 \to 0 \to 0$, meaning one zero is a limit point of the other which is a limit point of each of the other points (orbits) in ${{\mathbf}Q}^*/({{\mathbf}Q}^*)^2$. Schematically we have $$\begin{aligned} \searrow\quad&\downarrow \quad \quad\swarrow \\ \quad\quad\longrightarrow \ &\ \cdot\ \longleftarrow \\ &\downarrow \\\end{aligned}$$ This can be seen most clearly by using ${Aut} H$ to choose a representative of an orbit to have form $$\begin{aligned} x^2 + dt^2, d \not= 0 \ &\text{(rank 2)} \quad \text{or}\\ x^2 \ &\text{(rank 1)} \quad \text{or}\\ 0 \ &\text{ (rank 0).} \end{aligned}$$

More moderately shallow spaces {#more moderately}
-------------------------------

Now consider $H^i = 0$ for $i < n$ and $i \ge 5n-2$. We find $V_L = Z^1(L)$, but there may be a non–trivial action of $L$. Of course for this to happen, we must have $H^i \not= 0$ for at least three values of $i$, e.g., $H(X)$ for $X= S^3 \lor S^3 \lor S^5 \lor S^{10}$. Spaces with this cohomology are of the form $S^3 \lor S^3 \lor S^5 \cup e^{10}$. We have $V_L = Z^1(L) = L_1$ with basis $$\begin{aligned} [x_i,[x_j,x_5]]\partial x_{10}\ \qquad &\text{for}\ L_1^1,\\ [x_i,[x_j,[x_1,x_2]]]\partial x_{10}\ \qquad &\text{with}\ i \ge j \ \text{for}\ L_2^1.\end{aligned}$$ Thus $L_1^1$ corresponds to the space of bilinear forms (Massey products) on $H^3$: $$\langle \ ,\ ,x_5 \rangle : H^3 \otimes H^3 \to H^{10} = {{\mathbf}Q}$$ and thus decomposes into symmetric and antisymmetric parts. 
On the other hand, $L$ has basis $[x_1,x_2]\partial x_5$ and acts nontrivially on $L_1^1$ except for the antisymmetric part spanned by $$[x_1,[x_2,x_5]]\partial x_{10} - [x_2,[x_1,x_5]]\partial x_{10} = [[x_1,x_2],x_5]\partial x_{10}.$$ (The $ exp\, \ L$ action corresponds to a one–parameter family of maps of the bouquet to itself which are the identity in cohomology but map $S^5$ nontrivially into $S^3 \lor S^3$.) Now $L_1^1$ is isomorphic over $SL(2,{{\mathbf}Q}) = {Aut}\, H^3$ to the space of symmetric bilinear forms on $H^3$. If we represent $L^1 = L_2^1 \otimes L_1^1$ as triples $(u,v,w)$ with $u,w$ symmetric and $v$ anti–symmetric, then $ exp \, L$ maps $(u,v,w)$ to $(u,v,tu+w)$. We have ${Aut}\ H^5 = {{\mathbf}Q}^*$ and ${Aut}\ H^{10} = {{\mathbf}Q}^*$ acting independently on $L^1$. If we look at the open set in $L^1$ where $v \not= 0$, we find the discriminant of $u$ is a modulus. (In fact, even over the complex numbers, it is a nontrivial invariant on the quotient which can be represented as the $SL(2,{\Bbb C})$–quotient of $\{(u,\tilde w)\vert u \in \text{sym}^2, \tilde w \in P^2({\Bbb C}), \text{discriminant}\, (\tilde w) = 0\}$.) The rational decomposition of the degenerate orbits proceeds as before. On the other hand, the obstructions above can be avoided by adding $S^{10}$ to $S^3 \lor S^3 \lor S^8$ and then attaching $e^{13}$ so as to realize $x_2x_{10}$. Then the class of $[\theta_1,\theta_1]$ is zero in $H^2(L)$, namely, $[\theta_1,\theta_1] = [d,\theta_2]$ for $$\theta_2 = [x_1,[x_1,[x_1,x_2]]]\partial x_{10}.$$ Other computations ------------------ Clearly, further results demand computational perseverance and/or machine implementation by symbolic manipulation and/or attention to spaces of intrinsic interest. Tanr' e [@tanre:stunted] has studied stunted infinite dimensional complex projective spaces ${\Bbb C}P^{\infty}_n = {\Bbb C}P^{\infty}/ {\Bbb C}P^n$. Initial work on machine implementation has been carried out by Umble and has led, for example, to the classification of rational homotopy types $X$ having $H^*(X) = H^*({\Bbb C}P^n\vee {\Bbb C}P^{n+k})$ for $k$ in a range [@umble:CPwedge]. At the next level of complexity, he and Lupton [@lupton-umble] have classified rational homotopy types $X$ having $H^*(X) = H^*({\Bbb C}P^n/ {\Bbb C}P^k)$ for all $n$ and $k$: For further results, both computational and theoretical, consult the extensive bibliography created by F' elix building on an earlier one by Bartik. Classification of rational fibre spaces {#fibrations} ======================================= The construction of a rational homopy model for a classifying space for fibrations with given fibre was sketched briefly by Sullivan [@sullivan:inf]. Our treatment, in which we pay particular attention to the notion of equivalence of fibrations, is parallel to our classification of homotopy types. Indeed, the natural generalization of the classification by path components of $C(L)$ provides a classification in terms of homotopy classes of maps $[C,C(L)]$ of a dgc coalgebra into $C(L)$ of an appropriate [dg Lie algebra ]{}$L$. However, the comparison with the topology is more subtle; the appropriate $C(L)$ has terms in positive and negative degrees, because $L$ does, unlike the chains on a space. Because of the convenience of Sullivan’s algebra models of a space and because of the applications to classical algebra, we present this section largely in terms of dgca’s and in particular use $A(L)$ rather than $C(L)$ to classify. 
The price of course is the need to keep track of finiteness conditions. Tanré carried out the classification in terms of Quillen models [@tanre:modeles] (Corollaire VII.4. (4)). with slightly more restrictive hypotheses in terms of connectivity. Algebraic model of a fibration ------------------------------ The algebraic model of a fibration is a twisted tensor product. For motivation, consider topological fibrations, i.e., maps of spaces $$F \to E \overset{p}{ \rightarrow} B$$ such that $p^{-1}(*) = F$ and $p$ satisfies the homotopy lifting property. We have not only the corresponding maps of dgca’s $$A^*(B) \to A^*(E) \to A^*(F)$$ but $A^*(E)$ is an $A^*(B)$–algebra and, assuming $A^*(B) \text{and} A^*(F)$ of finite type, there is an $A^*(B)$–derivation $D$ on $A^*(B) \otimes A^*(F)$ and an equivalence $$\begin{aligned} & A^*(E)\\ \nearrow \qquad &\qquad \qquad\searrow\\ A^*(B) \qquad &\qquad \qquad A^*(F)\\ \searrow \qquad &\qquad \qquad \nearrow\\ (A^*(B) &\otimes A^*(F),D)\,.\end{aligned}$$ To put this it our algebraic setting, let $F$ and $B$ be dgca’s (concentrated in non–negative degrees) with $B$ [ **augmented**]{}. A sequence $$B \to E \to F$$ of dgca’s such that $F$ is isomorphic to the quotient $E/\bar BE$ (where $\bar B$ is the kernel of the augmentation $B \to {{\mathbf}Q}$) is an [**F fibration over B**]{} if it is equivalent to one which as graded vector spaces is of the form $$B \overset{i}{\longrightarrow} B \otimes F \overset{p}{\longrightarrow} F$$ with $i$ being the inclusion $b \to b \otimes 1$ and $p$ the projection induced by the augmentation. . Two such fibrations $B \to E_i \to F$ are [**strongly equivalent**]{} if there is a commutative diagram $$\begin{matrix} &B\ \rightarrow &E_1\rightarrow &F\\ &id\downarrow &\downarrow &\downarrow id\\ &B\ \rightarrow &E_2\rightarrow &F \end{matrix}$$ (It follows by a Serre spectral sequence argument that $H(E_1) \cong H(E_2)$.) Both the algebra structure and the differential may be twisted, but if we *assume that $F$ is free as a cga*, then it follows that $E$ is strongly equivalent to $$B \overset{i}{\longrightarrow} B \otimes F \overset{p}{\longrightarrow} F$$ with the $\otimes$–algebra structure. The differential in $E = B \otimes F$ then has the form $$d_\otimes + \tau,$$ where $$d_\otimes = d_B\otimes + 1\otimes d_F.$$ The twisting term $\tau$ lies in ${Der}^1(F,\bar B\otimes F)$, the set of derivations of $F$ into the $F$-module $\bar B \otimes F$. This is the sub-[dg Lie algebra ]{}of ${Der}(B\otimes F)$ consisting of those derivations of $B\otimes F$ which vanish on $B$ and reduce to $0$ on $F$ via the augmentation. Assuming $B$ is connected, $\tau$ does not increase the $F$–degree so we regard $\tau$ as a perturbation of $d_{\otimes}$ on $B \otimes F$ with respect to the filtration by $F$ degree. The twisting term must satisfy the integrability conditions: $$\label{integrability} (d + \tau)^2 = 0\ \text{ or }\ [d,\tau] + \frac{1}{2}[\tau,\tau] = 0.$$ To obtain strong equivalence classes of fibrations, we must now factor out the action of automorphisms $\theta\ \text{of}\ B \otimes F$ which are the identity on $B$ and reduce to the identity on $F$ via augmentation. Assuming $B$ is connected, then $\theta - 1$ must take $F\ \text{to}\ \bar B \otimes F$ and therefore lowers $F$ degree, so that $\phi = \text{log}\,\theta = \text{log}\, (1+\theta -1)$ exists; thus $\theta = exp \,(\phi)$ for $\phi$ in ${Der}^0(F,B \otimes F)$. 
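(Explicitly, the equivalence of the two forms of the integrability condition \[integrability\] is the usual graded-commutator check: since $d$ and $\tau$ are both derivations of degree $1$ of $B \otimes F$, $$(d + \tau)^2 = \tfrac{1}{2}[d + \tau, d + \tau] = \tfrac{1}{2}[d,d] + [d,\tau] + \tfrac{1}{2}[\tau,\tau] = [d,\tau] + \tfrac{1}{2}[\tau,\tau],$$ using $[d,d] = 2d^2 = 0$ and $[\tau,\tau] = 2\tau^2$.)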
If we set $L = L(B,F) = {Der}\, (F,\bar B \otimes F)$, then for $B$ connected, we may apply the considerations of §\[invariance\] to the [dg Lie algebra ]{}$L$, because the action of $L$ on $L^1$ is complete in the filtration induced by $F$–degree. The variety $V_L = \{\tau \in L^1 \vert (d+\tau)^2 = 0\}$ is defined with an action of $ exp \, L$ as before. For $B$ connected, $F$ free of finite type and $L = {Der}\, (F,\bar B \otimes F)$, there is a one–to–one correspondence between the points of the quotient $M_L = V_L/ exp\, L$ and the strong equivalence classes of $F$ fibrations over $B$. Notice that if $F$ is of finite type, we may identify ${Der}\, (F,F \otimes \bar B)$ with ${(Der}\ F) \hat \otimes \bar B$, i.e., $${Der}^k(F,F \otimes \bar B) {\cong }\underset {n}{\Pi} {Der}^{k-n}(F) \otimes \bar B^n.$$ If $H^1(L) = \Pi H^{1-n}({Der}\, F) \otimes H^n(B)(n>0)$ is $0$, then every fibration is trivial. If $H^2(L) = 0$, then every infinitesimal fibration $[\tau] \in H^1(L)$ comes from an actual fibration (i.e., there is $\tau^\prime \in [\tau]$ satisfying integrability, i.e. the Maurer-Cartan equation). We now proceed to simplify $L$ without changing $H^1(L)$, along the lines suggested by our classification of homtopy types. First consider $F = (S Y,\delta)$, not necessarily minimal. Combining Theorems \[T:Cquism\] and \[T:semi-iso\], we can replace ${Der}\, F$ by $D = sLH\, \sharp \, {Der}\ LH$ where $LH$ is the free Lie algebra on the positive homology of $F$ provided with a suitable differential. If $dim\, H(F)$ is finite, $D$ will have finite type, so we apply the $A$–construction to obtain $A(D)$. Let $[A(D),B]$ denote the set of augmented homotopy classes of dgca maps (cf. Definition \[D:dgahmtpy\]). Classification theorem {#fiberclass} ---------------------- The proof of Theorem \[mainht\] carries over to \[A(D)classifies\] There is a canonical bijection between $M_{D \hat \otimes \bar B}$ and $[A(D),B]$, that is, $A(D)$ classifies fibrations in the homotopy category. However, $A(D)$ now has negative terms and can hardly serve as the model of a space. To reflect the topology more accurately, we first truncate $D$. For a $Z$ graded complex $D$ (with differential of degree +1), we define the $n^{th}$ truncation of $D$ to be the complex whose component in degree $k$ is $$\begin{aligned} D^k \cap\, ker\, d\ \quad &\text{if }\ k =n,\\ D^k\ \quad &\text{if }\ k<n,\\ 0\ \quad &\text{if }\ k>n.\end{aligned}$$ We designate the truncations for $n=0$ and $n=-1$ by $D_c$ and $D_s$ respectively (connected and simply connected truncations). \[ADc-classify\] Let $F$ be a free dgca and $D = sLH \, \sharp \, {Der}\ LH$ as above. If $B$ is a connected (respectively simply connected) dgca, there is a one–to–one correspondence between classes of $F$ fibrations over $B$ and augmented homotopy classes of dga maps $[A(D_c),B]$ (resp. $[A(D_s),B]$). Thus $A(D_c)$ corresponds to a classifying space $B \ {Aut}\, \mathcal F$ where ${Aut}\, \mathcal F$ is the topological monoid of self–homotopy equivalence of the space $\mathcal F$. Similarly, $A(D_s)$ corresponds to the simply connected covering of this space, otherwise known as the classifying space for the sub–monoid $S\, {Aut}\, F$ of homotopy equivalences homotopic to the identity. (Connected Case) $D \to {Der}\, F$ is a cohomology isomorphism. 
For $K = D_c \hat \otimes \bar B \text{ and } L = {Der}\, F \hat \otimes \bar B$, it is easy to check that $K \to L$ is a *homotopy equivalence in degree one* in the sense of \[in degree 1\], so that $M_K$ is homeomorphic to $M_L$. By \[ADc-classify\], $[A(D_c),B]$ is isomorphic to $M_K$, which corresponds to fibration classes by \[A(D)classifies\]. If we set $\mathfrak g =D_c/D_s$, so that $H(\mathfrak g) = H^0(D)$, then the exact sequence $$0 \to D_s \to D_c \to \mathfrak g \to 0$$ corresponds to a fibration $$B\, S\, {Aut}\, F \to B\, {Aut}\, F \to K(G,1)$$ where $G$ is the group of homotopy classes of homotopy equivalences of $F$, otherwise known as outer automorphisms [@sullivan:inf]. (Simply connected case) Since $A(\ )$ is free, the comparison of maps $A(D_c) \to B$ and $A(D_s) \to B$ can be studied in terms of the twisting cochains of the duals $D_c^* \to B$ and $D_s^* \to B$. Since $D_c^*$ and $D_s^*$ differ only in degrees $0$ and $1$, if $B$ is simply connected, the twisting cochains above are the same, as are the homotopies. The $k^{th}$ rational homotopy groups $(k> 1)$ of $A(D_s)$ and $A(D_c)$ are the same (namely, $H^{-k+1}({Der}\ F)$), but the cohomology groups are not. In the examples below, we will need the following:

Examples
--------

Consider $\mathcal F = {\Bbb C}P^n$ and $F = S(x,y)$ with $\vert x \vert = 2$, $\vert y \vert = 2n+1$ and $dy = x^{n+1}$. Since $F$ is free and finitely generated, we take $D = {Der}\ F$ and obtain $$D = \{\theta^0,\theta^{-2},\phi^{-1},\phi^{-3},\dots ,\phi^{-2n-1} \}$$ with indexing denoting degree and $$\begin{aligned} \theta^0 &= 2x\partial x + (2n+2)y\partial y\\ \theta^{-2} &= \partial x\\ \phi^{-(2k+1)} &=x^{n-k} \partial y.\end{aligned}$$ The only nonzero differential is $d\theta^{-2} = \phi^{-1}$ and the nonzero brackets are $$\begin{aligned} [\theta^0,\theta^{-2}] &= 2\theta^{-2}\\ [\theta^0,\phi^{\nu}] \, \, &= (\nu -1)\phi^{\nu}\ \ \text{ and}\\ [\theta^{-2},\phi^{\nu}] &= (n-k) \phi^{\nu -2}. \end{aligned}$$ We then have the sub-[dg Lie algebra ]{}$D_c =\{\theta^0,\phi^{-3},\dots\}$ which yields $$A(D_c) \simeq S (v^1,w^4,w^6,\dots ,w^{2n+2}),$$ the free algebra with $dv^1 = 0, dw^4 = \frac{1}{2}v^1w^4$, etc., and $v^1, w^4, \dots $ dual to $\theta^0,\phi^{-3},\dots .$ The cohomology of this dgca is that of the subalgebra $S(v^1)$ (by the theorem above), which here is a model of $BG = K(G,1)$ for $G = GL(1)$, the (discrete) group of homotopy classes of homotopy equivalences of ${\Bbb C}P^n$. (These automorphisms are represented geometrically by the endomorphisms $(z_0,\dots ,z_n) \mapsto (z_0^\lambda ,\dots ,z_n^\lambda )$ of the rationalization of ${\Bbb C}P^n$. This formula is not well defined on ${\Bbb C}P^n$ unless $\lambda$ is an integer, but does extend to the rationalization for all $\lambda$ in ${{\mathbf}Q}^*$. This follows from general principles, but may also be seen explicitly as follows. The rationalization is the inverse limit of ${\Bbb C}P^n_s$, indexed by the positive integers $s$, which are ordered by divisibility. The transition maps ${\Bbb C}P^n_s \to {\Bbb C}P^n_t$ are given by $z_i \mapsto z_i ^{s/t}$. The $m$-th root of the sequence $(x_s)$ is then $(y_s)$, where $y_s = x_{ms}$. Algebraically these grading automorphisms of the formal dga $F$ are given by $a \mapsto t^w a$ ($w$ = weight of $a$) for $a$ in $F$.) Thus, the characteristic classes in $HA(D_c)$ have detected only the fibrations over $S^1$, not the remaining fibrations given by $$[A(D_c),S^{2k}] = H^{-2k-1}(D) = \{[\phi^{-2k-1}] \}$$ dual to $w^{2k}$. 
These other fibrations are, however, detected by $$H(A(D_s)) = S(w^4,w^6,\dots ,w^{2n+2}),$$ since $D_s = \{\theta^{-2},\phi^{-1},\phi^{-3},\dots\}$ has the homotopy type of $\{\phi^{-3},\dots\}$. These last fibrations come from standard ${\Bbb C}^{n+1}$ vector bundles over $S^{2k}$, and the $w^{2i}$ correspond to Chern classes $c_i$ via the map $BGL(n+1,{\Bbb C})\to BS\, {Aut}\, {\Bbb C}P^n$. (The fibration for $c_1$ is missing because, for $n=0$, the map $BGL(1,{\Bbb C})\to BS\, {Aut}^*\, \simeq * $ is trivial; a projectivized line bundle is trivial.) To look at some other examples, we use computational machinery and the notation of Section \[examples\]. If the positive homology of $F$ is spanned by $x_1,\dots,x_r$ of degrees $\nu_1,\dots,\nu_r, r > 1$, then ${Der}\, L({\mathcal H})$ is spanned by symbols of the form $$[x_1,[x_2,[\dots,x_{m+1}]\dots]\partial x_{m+2}$$ of degree $\nu_{m+2} - (\nu_1 +\dots+\nu_{m+1}) + m$. If we take the fibre to be the bouquet $S^\nu \vee \dots \vee S^\nu$ ($r$ times), then $\nu_i = \nu, d = 0$ in $LH$ and in $D$, and the weight $0$ part of $D_c$ is $$\mathfrak g = \{x_i \partial x_j\} \simeq \mathfrak{gl}(r)$$ and $$H(A(D_c)) = H(A(\mathfrak g)) = S(v^1,v^3,\dots,v^{2r-1})$$ (superscripts again indicate degrees) and detects over $S^1$ the fibrations (with fibre the bouquet) which are obtained by twisting with an element of $GL(r)$. The model $A(D_c)$ has homotopy groups in degree $p = m(\nu -1) + 1$, spanned by symbols as above (mod $ ad\, L({\mathcal H}))$. Such a homotopy group is generated by a map corresponding to a fibration over $S^p$ with twisting term $$\tau \in H^p(S^p) \otimes H^{1-p}({Der}\ F)$$ which has weight $1-m$ in $H(S^p \otimes F)$. For $m>1$, this is negative and gives a perturbation of the homotopy type of $S^p \times F$ (fixing the cohomology); we thus have, for $m>1$, a surjection from fibration classes $F \to E \to S^p$ to homotopy types with cohomology $H(S^p \otimes F)$, the kernel being given by the orbits of $GL(r)$ acting on the set of fibration classes. For $m=1, p = \nu$, the twisting term gives a new graded algebra structure to $H(S^p \otimes F)$ via the structure constants $a^k_{ij}$ which give $x_ix_j = \Sigma a^k_{ij}yx_k$ where $y$ generates $H^p(S^p)$. If we replace the base $S^\nu$ by $K({{\mathbf}Q},\nu)$ (for $\nu =2, {\Bbb C}P^2$ will suffice), then the integrability condition $[\tau,\tau] = 0$ is no longer automatic; it corresponds to the associativity condition on the $r$ dimensional vector space $H^\nu(F)$ with multiplication given by structure constants $a^k_{ij}$. The cohomology of $B\, S\, {Aut}\, (S^\nu \vee \dots \vee S^\nu)$ generated by degree $\nu$ is the coordinate ring of the (miniversal) variety of associative commutative unitary algebras of dimension $r+1$; that is, it is isomorphic to the polynomial ring on the symbols $a^k_{ij}$ modulo the quadratic polynomials expressing the associativity condition, and the $r$ linear polynomials arising from the action of $ad\, x_i$ (translation of coordinates). Apart from these low degree generators and relations, the cohomology of $A(D_s)$ remains to be determined. For example, is it finitely generated as an algebra? Already for the case of $S^2 \vee S^2$, there is, beside the above classes in $H^2( A(D_s))$, an additional generator in $H^3$ dual to $$\theta = [x_1,[x_1,x_2]]\partial x_1 - [x_2,[x_1,x_2]]\partial x_2 \in D^{-2}$$ (which gives the nontrivial fibration $S^2 \vee S^2 \to E \to S^3$ considered before). 
Since $\theta \in [D_s,D_s]$, it yields a nonzero cohomology class. In this last example, we saw that $H^*(E)$ need not be $H^*(B) \otimes H^*(F)$ as an algebra. Included in our classification are fibrations in which $H(E)$ is not even additively isomorphic to $H(B) \otimes H(F)$. Consider the case in which the fibre is $S^\nu$. For $\nu$ odd, we have $F = S(x)$ and ${Der}\ F = S(x)\partial x$ with $D_s = \{\partial x\}$. The universal simply connected fibration is $$S^\nu \to E \to K({{\mathbf}Q},\nu +1)$$ or $$S (x) \gets (S (x,u),dx = u) \gets S (u)$$ with $E$ contractible. Here $\tau = \partial x \otimes u$ and the transgression is not zero. By contrast, when $\nu$ is even, $F = S (x,y)$ with $dy = x^2$ and we get $D_s = \{y\partial x,\partial x,\partial y\}$ which is homotopy equivalent to $\{\partial y\}$. The universal simply connected $S^2$–fibration is then $$S^\nu \to E \to K({{\mathbf}Q},2\nu)$$ where $E = (S (x,y,u),dy=x^2 - u) \simeq S (x)$ is the model for $K({{\mathbf}Q},\nu)$. Here $\tau = \partial y \otimes u$ gives a deformation of the algebra $H(B) \otimes H(F) = S (x,u)/x^2$ to the algebra $H(E) = S (x,u)/(x^2-u) = S (x)$. (Fibrations $$\bigvee^n_1 S^{2(n+i)} \to E \to K({{\mathbf}Q},2(n+2))$$ occur in Tanr' e’s analysis [@tanre:stunted] of homotopy types related to the stunted infinite dimensional complex projective spaces ${\Bbb C}P^{\infty}_n$.) A neat way to keep track of these distinctions is to consider the Eilenberg–Moore filtration of $B \otimes F$ where $F$ is a filtered model, i.e., $\text{weight}\ (b \otimes f) = \text{degree}\ b \, + \, \text{weight}\ f = \text{deg} \ b \, + \, \text{deg}\ f + \text{resolution degree}\ f$. (Cf. [@thomas:fibrations].) In general, $\tau \in ({Der}\ F \hat\otimes B)1$ will have weight $\leq 1$ since weight $f \leq\ \text{degree}\ f$ and $\tau$ does not increase $F$ degree. If, in fact, weight $\tau \leq 0$, then $H(E)$ is isomorphic to $H(B) \otimes H(F)$ as $H(B)$–module but not necessarily as $H(B)$–algebra. Finally, if we can accept dgca’s with negative degrees (without truncating so as to model a space), we can obtain a uniform description of fibrations and perturbations of the homotopy type $F$. Consider in ${Der}\, F$, the sub-[dg Lie algebra ]{}$D_-$ of negatively weighted derivations, then $[A(D_-),B]$ is for $B =S^0$, the space of homotopy types underlying $H(F)$ while for connected $B$, it is the space of strong equivalence classes of $F$–fibrations over $B$. Open questions {#questions} --------------- We turn now to the question of realizing a given quotient variety $M = V/G$ as the set of fibrations with given fibre and base, or as the set of homotopy types with given cohomology. The structure of $M$ appears to be arbitrary, except that $V$ must be conical (and for fibrations, $G$ must be pro-unipotent). The fibrations of an odd dimensional sphere $S^\nu$ over $B$ form an affine space $M = V = H^{\nu +1}(B)$, though it is not clear how to make $V$ have general singularities or pro-unipotent group action. For homotopy types, we consider the following example, provoked by a letter from Clarence Wilkerson. Take $F$ to be the model of $S^\nu \vee S^\nu$, for $\nu$ even. As we have seen, the model $D$ of ${Der}\, F$ contains a derivation $\theta = [x_1,[x_1,x_2]]\partial x_1 - [x_2,[x_1,x_2]]\partial x_2$ of degree $-2\nu +2$ and weight $-2\nu$, which generates $H^{-2\nu +2}(D)$. 
If $B = S^3 \times (CP^\infty)^n$, then $H^{-2\nu +2}(D) \otimes H^{2\nu -1}(B) = V$ has weight $-1$ and may be identified with the homogeneous polynomials of degree $\nu -2$ in $r$ variables. If we truncate $B$ suitably, so that we have $H^1((D \otimes B)_-) = V$, then the set of rational homotopy types with cohomology equal to $H(F) \otimes H(B)$ is the quotient $V/GL(r)$, i.e., equivalence classes of polynomials of (even) degree $\nu =2$ in $r$ variables. We may ask, similarly, which dgca’s occur, up to homotopy types, as classifying algebras $A(D_c)$ or $A(D_s)$. The general form of the representation problem is the following: given a finite type of dgL $D$, does there exist a free [dg Lie algebra ]{}$\pi$ such that $D \sim {Der}\ \pi/ad \, \pi$? Postscript ========== Some $n$ years after a preliminary version of this preprint first circulated, there have been major developments of the general theory and significant applications, many inspried by the interaction with physics. We have not tried to describe them; a book would be more appropriate to address properly this active and rapidly evolving field.
Chaudhary Charan Singh was born on December 23, 1902. He served as the fifth Prime Minister of India between 28 July 1979 and 14 January 1980. He is popularly known as the ‘champion of India’s peasants’. The farmers’ leader was born in a rural peasant Jat family at village Noorpur in Hapur district of Uttar Pradesh. Charan Singh’s entry into politics began with his participation in the Independence Movement, motivated by the Father of the Nation, Mahatma Gandhi. Charan Singh was active from 1931 in the Ghaziabad District Arya Samaj as well as the Meerut District Indian National Congress, for which he was arrested twice by the British. As a member of the Legislative Assembly of the United Provinces in 1937, Singh took an interest in laws that were damaging to the village economy. Gradually, he took a stand against the exploitation of tillers of the land by landlords. In 1939, Singh introduced the Debt Redemption Bill to give relief to the peasantry from moneylenders. Born in a village in a farmer’s family, Charan Singh had seen the plight of farmers and how moneylenders exploited them. Introducing the bill was one of the significant steps in making the farmers’ lives better. Further, in April 1939, he drafted a Land Utilisation Bill, whose aim was to ‘transfer … the proprietary interest in agricultural holdings of UP to such of the tenants or actual tillers of the soil who chose to deposit an amount equivalent to ten times the annual rent in the government treasury to the account of the landlord’. Charan Singh began his fight against the landlords. Following that, in June he published a newspaper article which discussed the blueprint of the land reform he would pursue after Independence. Charan Singh was one of the three main leaders in Congress state politics between 1952 and 1967. He became popular in Uttar Pradesh from the 1950s for drafting one of the most revolutionary land reform laws under the guidance of the then Chief Minister Pandit Govind Ballabh Pant. The peasants’ leader became more popular from 1959 when he publicly opposed the first Prime Minister of India Jawaharlal Nehru’s land policies in the Nagpur Congress Session. Even though his position in the UP Congress was enfeebled, the middle farmer communities in North India started looking up to him as their spokesperson and leader. In April 1967, Charan Singh deserted the Congress. He then joined the opposition and became the first non-Congress chief minister of UP. This was the period, from 1967 to 1971, when non-Congress governments were a strong force in India. After the 1974 elections, Charan Singh moved to politics at the Centre. Soon the Emergency was imposed and he was imprisoned. Later, he played a significant role in the formation of the Janata Party. Though Charan Singh failed to become the Prime Minister of India in 1977, later, because of the efforts of Raj Narain, he got the opportunity to serve the country as the Prime Minister in 1979. However, he resigned just after six months in office when Indira Gandhi’s Congress Party withdrew support from the government. Afterwards, Charan Singh continued to lead the Lok Dal in opposition till his death in 1987. Charan Singh married Gayatri Devi, with whom he had six children. His son Ajit Singh is currently the president of the Rashtriya Lok Dal political party, a former Union Minister and a multiple-term Member of Parliament. Ajit Singh’s son Jayant Chaudhary was elected to the 15th Lok Sabha from Mathura, a seat he lost to Hema Malini in the 2014 election. 
Charan Singh died of cardiovascular collapse on May 29, 1987. Chaudhary Charan Singh Achievements The National Farmers’ Day or Kisan Divas is a good day to remember Chaudhary Charan Singh. It is observed in India to commemorate the birth anniversary of Chaudhary Charan Singh, a simple and humble man who led an extremely simple life. As the Prime Minister of the country, he introduced many policies to improve the lives of the peasants. He brought farmers’ issues into electoral politics, first at the state level and then at the national level, during the 1960s and 1970s. In 1939, Charan Singh proposed the idea of a 50% quota for the sons of farmers in government jobs before the executive committee of the Congress parliamentary group in the Uttar Pradesh assembly. During his tenure as agriculture minister, Singh led the Uttar Pradesh government in drafting, articulating and implementing the Uttar Pradesh Zamindari and Land Reforms Bill in 1952, which he considered one of the main achievements of his life. The implementation of the bill gave ownership of land to tenants belonging to the middle and lower castes. This became one of the reasons for his rising popularity among people of the middle castes, both in western and eastern Uttar Pradesh. Soon, Charan Singh became the representative of farmers’ politics, first in Uttar Pradesh and then at the national level in the 1960s and 1970s. The emergence of Charan Singh both at the state level and at the Centre led to the politicisation of the peasantry and its assertion in electoral politics. As a result, no government was in a position to ignore the interests and issues of the farmers from 1960 to the late 1980s. A politicised peasantry was able to exert pressure on governments to reduce taxes and to provide subsidies, water and free electricity, among several other things. Today, Charan Singh’s legacy may seem to have faded from public memory, but his imprint on India’s national politics continues.
https://blog.indiacontent.in/politics/buy-images-remembering-chaudhary-charan-singh-on-his-birth-anniversary_4905/
North Dakota voters decided in 2010 to set aside 30 percent of state oil and gas tax revenues for the Legacy Fund. The trust fund was inaccessible until 2017, but now – and every two years into the future – the Legislature can use the fund’s earnings with a simple majority vote. So, we find ourselves at a pivotal point in North Dakota’s history: What will be the legacy of the Legacy Fund? Will we use it to transform our future? Or will we use it to fund ongoing government operations? When revenues fell short last biennium, we used $200 million in Legacy Fund earnings to close the gap and balance the budget. Yet most North Dakotans would likely agree that funding day-to-day operations isn’t what they had in mind when they voted for a “legacy,” given that the state today is still collecting billions of dollars in oil tax revenue. In the Governor’s Office, we took a thoughtful, high impact-based approach to investing $300 million in Legacy Fund earnings when preparing our executive budget recommendation for the 2019-21 biennium. Our first rule was to leave the Legacy Fund’s rapidly growing $5.3 billion principal untouched so it can continue to grow for future generations. The projects we’ve proposed do just that, from $80 million for permanent revolving loan funds to spur $535 million in infrastructure and school construction; to $30 million for a statewide UAS infrastructure network that will help diversify our economy; to $50 million for a Theodore Roosevelt Presidential Library and Museum that will catalyze 2-to-1 federal and private matches to create a world-class tourism and workforce draw.
http://www.minotdailynews.com/opinion/community-columnists/2019/02/what-will-be-the-legacy-of-the-legacy-fund/
“Our measurements are a bucket of cold water for designers of molecular nanomachines.” An innovative measurement method was used at the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw to estimate the power generated by motors the size of a single molecule, comprising only a few dozen atoms. The findings of the study are of crucial importance for the construction of future nanometer-scale machines – and they do not instil optimism. Nanomachines are devices of the future. Composed of a very small number of atoms, they would measure billionths of a meter in size. The construction of efficient nanomachines would most likely lead to another civilizational revolution. That is why researchers around the world are looking at various molecules and trying to make them do mechanical work. Researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw were among the first to measure the efficiency of molecular machines composed of a few dozen atoms. “Everything points to the conclusion that the power of motors composed of single, relatively small molecules is considerably less than expected”, says Dr Andrzej Żywociński from the IPC PAS, one of the co-authors of the paper published in the journal “Nanoscale”. The molecular motors studied at the IPC PAS are molecules of smectic C*-type liquid crystals, composed of a few tens of atoms (each molecule is 2.8 nanometers long). After being deposited on the surface of water, the molecules, under appropriate conditions, spontaneously form the thinnest layer possible – a monomolecular layer with a specific structure and properties. Each liquid crystal molecule is composed of a chain with its hydrophilic end anchored on the surface of the water. A relatively long, tilted hydrophobic part protrudes above the surface. So the monomolecular layer resembles a forest with trees growing at a certain angle. The free end of each chain includes two crosswise-arranged groups of atoms of different sizes, forming a two-blade propeller with blades of different lengths. When evaporating water molecules strike the “propellers”, the entire chain starts to rotate around its “anchor” due to this asymmetry. Specific properties of liquid crystals and the conditions of the experiment give rise to an in-phase motion of adjacent molecules in the monolayer. It is estimated that “tracts of the forest” of up to one trillion (10^12) molecules, forming millimeter-sized areas on the surface of the water, are able to synchronise their rotations. “Moreover, the molecules we studied were rotating very slowly. One rotation could take from a few seconds up to a few minutes. This is a much desired property. If the molecules were rotating at, for instance, megahertz frequencies, their energy could hardly be transferred to larger objects”, explains Dr Żywociński. Earlier power estimates for molecular nanomotors were related either to much larger molecules or to motors powered by chemical reactions. In addition, these estimates did not account for the resistance of the medium in which the molecules worked. The free, collective rotations of liquid crystal molecules on the surface of water can be easily observed and measured. Researchers from the IPC PAS checked how the speed of rotation changes as a function of temperature; they also estimated changes in the (rotational) viscosity of the system under study. It turned out that the energy generated by the motion of a single molecule during one rotation is very low: just 3.5·10^-28 joule. 
This value is as much as ten million times lower than the energy of thermal motion. “Our measurements are a bucket of cold water for designers of molecular nanomachines”, notes Prof. Robert Hołyst (IPC PAS). In spite of generating low power, rotating liquid crystal molecules may still find practical applications. This is because a large ensemble of collectively rotating molecules generates a correspondingly higher power. Moreover, a single square centimeter of the water surface can accommodate many such ensembles of trillions of molecules each.
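As a rough consistency check (an illustrative back-of-the-envelope estimate, not part of the original report): assuming room temperature, T ≈ 300 K, the thermal energy scale is k_B·T ≈ 1.38·10^-23 J/K × 300 K ≈ 4·10^-21 J, and 4·10^-21 J divided by 3.5·10^-28 J is roughly 10^7, i.e. about ten million, in line with the comparison above.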
https://www.innovationtoronto.com/2013/09/molecular-motors-power-much-less-expected/?responsive=false
Dr. Denise K. Sommers Dr. Denise K. Bockmier-Sommers is an Associate Professor in the Social Service Administration Concentration within the department of Human Services. Denise earned her doctorate of education in counselor education from the University of Missouri at Saint Louis. Her dissertation was entitled The Use of Service Learning to Enhance Multicultural Counseling Competencies. Denise earned her bachelor’s degree in human growth and development from the University of Illinois in Urbana/Champaign and her master’s degree in rehabilitation from East Carolina University in Greenville, NC. Along with her academic accomplishments she brings 25 years of counseling, supervisory, and administrative experience to the HMS Program. She has extensive professional experience with for-profits, non-profits, and state government organizations. She has actively served on various agency and community boards in St. Louis, MO and Springfield, IL. She is a member of the National Organization of Human Services. Her monograph chapter about service learning was published in the Council for Standards in Human Service Education (CSHSE) Monograph in 2011. She joined the UIS faculty in August 2007. Prior to 2007, she taught for a total of two years as adjunct faculty at UIS and UMSL. Denise’s research interests include the development of online pedagogy, the use of service learning to enhance administrative, management, and supervisory competencies, and the development of organizational diversity competencies. Denise has presented at several regional and national academic conferences on the use of service learning to enhance diversity and management-related competencies. Prior to joining the UIS faculty, Denise did public speaking in the areas of addiction, mental health, rehabilitation, and GLBTQ issues. Contact Dr. Bockmier-Sommers at:
https://www.uis.edu/humanservices/faculty/sommers/
The world’s population is projected to reach 9.7 billion in 2050 and 11.2 billion in 2100. To meet the food demands of this exponentially increasing population, a massive increase in food production is necessary. Agricultural production on land and in aquatic systems has negative impacts on the earth’s ecosystems. The combined effects of climate change, land degradation, cropland losses, water scarcity and species infestations can reduce agricultural yields by up to 25%. Therefore, the world needs a paradigm shift in agricultural development towards sustainable food production and security through green-revolution and eco-friendly approaches. Agricultural practices must be sustained by the ability of farmland to produce food that satisfies human needs indefinitely while limiting impacts on the broader environment. The real agricultural challenges of today and of the future differ according to their geopolitical and socioeconomic contexts. Therefore, sustainable agriculture must be inclusive and must remain adaptable and flexible over time in order to respond to the demand for food. With these points in mind, this book has been prepared to generate awareness of food security and to present perspectives on sustainable food production and security for human society. The book describes classical and recent advances in sustainable technologies and strategies of plant and animal origin, including breeding, pest management, tissue culture, transgenic techniques, bio- and phytoremediation, environmental stress and resistance, plant-growth-enhancing microbes, bio-fertilizers and integrated approaches to food nutrition. The chapters provide a new dimension for discussing the issues, challenges and strategies of agricultural sustainability in a comprehensive manner. The book aims to educate students and both advanced and budding researchers in developing novel, environmentally sound approaches to sustainability.
https://rd.springer.com/book/10.1007%2F978-981-10-6647-4
--- abstract: 'The sparse spike estimation problem consists in estimating a number of off-the-grid impulsive sources from under-determined linear measurements. Information theoretic results ensure that the minimization of a non-convex functional is able to recover the spikes for adequately chosen measurements (deterministic or random). To solve this problem, methods inspired from the case of finite dimensional sparse estimation where a convex program is used have been proposed. Also greedy heuristics have shown nice practical results. However, little is known on the ideal non-convex minimization method. In this article, we study the shape of the global minimum of this non-convex functional: we give an explicit basin of attraction of the global minimum that shows that the non-convex problem becomes easier as the number of measurements grows. This has important consequences for methods involving descent algorithms (such as the greedy heuristic) and it gives insights for potential improvements of such descent methods.' author: - | Yann Traonmilin$^{1,2,}$[^1] and Jean-François Aujol$^{2}$\ $^1$CNRS,\ $^2$Univ. Bordeaux, Bordeaux INP, CNRS, IMB, UMR 5251,F-33400 Talence, France.\ bibliography: - 'working\_paper\_SR.bib' title: 'The basins of attraction of the global minimizers of the non-convex sparse spike estimation problem' --- Introduction ============ Context ------- Sums of sparse off-the-grid spikes can be used to model impulsive sources in signal processing (e.g. in astronomy, microscopy,...). Estimating such signals from a finite number of Fourier measurements is known as the super-resolution problem [@Candes_2014]. In the space ${\mathcal{M}}$ of finite signed measure over ${\mathbb{R}}^d$, we aim at recovering $x_0 = \sum_{i=1,k} a_i \delta_{t_i}$ from the measurements $$y= Ax_0 + e,$$ where $\delta_{t_i}$ is the Dirac measure at position $t_i$, the operator $A$ is a linear observation operator, $y \in {\mathbb{C}}^m$ are the $m$ noisy measurements and $e$ is a finite energy observation noise. Recent works have shown that it is possible to estimate spikes from a finite number of adequately chosen Fourier measurements as long as their locations are sufficiently separated, using convex minimization based variational methods in the space of measures [@Candes_2013; @Bhaskar_2013; @Tang_2013; @Castro_2015; @Duval_2015]. Other general studies on inverse problems have shown that an ideal non-convex method (unfortunately computationally inefficient) can be used to recover these signals as long as the linear measurement operator has a restricted isometry property (RIP) [@Bourrier_2014]. In the case of super-resolution, adequately chosen random compressive measurements have been shown to meet the sufficient RIP conditions for separated spikes, thus guaranteeing the success of the ideal non-convex decoder [@Gribonval_2017]. These RIP results are based on an adequate kernel metric on ${\mathcal{M}}$. It must be noted that, according to the work of [@Bourrier_2014], the success of the convex decoders as described in [@Candes_2013] for regular Fourier sampling implies a (lower) restricted isometry property of $A$ with respect to such a kernel metric (and not with the natural total variation metric: in this case no RIP is possible with finite regular Fourier measurements, see e.g. [@Boyer_2017]). Greedy heuristics have also been proposed to approach the non-convex minimization problem and they have shown good practical utility [@Keriven_2016; @Keriven_2017; @Traonmilin_2017]. 
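To make the setting concrete, here is a minimal numerical sketch (not taken from the paper) of the measurement model $y = Ax_0 + e$ for $d=1$, using the weighted random Fourier sampling operator recalled later in the Notations. All numerical choices (number of spikes $k$, number of measurements $m$, frequency distribution, weights, noise level) are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D sketch of the super-resolution measurement model y = A x0 + e,
# where x0 = sum_i a_i * delta_{t_i} and A is a weighted Fourier sampling operator
# (A u)_l = (c_l / sqrt(m)) * integral exp(-1j * w_l * t) du(t).
# All numerical values below are assumptions made for illustration only.

rng = np.random.default_rng(0)

k, m = 3, 128                        # number of spikes / number of measurements
a0 = np.array([1.0, -0.7, 0.5])      # true amplitudes
t0 = np.array([-0.4, 0.1, 0.6])      # true, well-separated locations in B_2(R), R = 1
w = rng.normal(scale=20.0, size=m)   # random sampling frequencies w_l
c = np.ones(m)                       # frequency-dependent weights c_l (constant here)

def forward(a, t):
    """Apply A to the discrete measure sum_i a_i delta_{t_i}."""
    E = np.exp(-1j * np.outer(w, t))          # E[l, i] = exp(-1j * w_l * t_i)
    return (c / np.sqrt(m)) * (E @ a)

e = 1e-3 * (rng.normal(size=m) + 1j * rng.normal(size=m))   # finite-energy noise
y = forward(a0, t0) + e

def g(a, t):
    """Non-convex objective ||A phi(theta) - y||_2 in the parameters theta = (a, t)."""
    return np.linalg.norm(forward(a, t) - y)
```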
While giving theoretical recovery guarantees, the convex-based method is non-convex in the space of parameters (amplitudes and locations) due to a polynomial root finding step. Also, it is difficult to implement in dimensions larger than one in practice [@Duval_2017]. Greedy heuristics based on orthogonal matching pursuit are implemented in higher dimension (they can practically be used up to $d=50$), but they still miss theoretical recovery guarantees [@Keriven_2016]. It would be possible to overcome the limitations of such methods if it were possible to perform the ideal non-convex minimization: $$\label{eq:minimization} x^* \in \underset{x \in \Sigma}{{\mathrm{argmin}}} \|Ax-y\|_2$$ where $\Sigma$ is a low-dimensional set modeling the separation constraints on the $k$ Diracs. While simple in its formulation, properties of this minimization procedure have not yet been thoroughly studied. In this article, as a first important step towards the understanding of the non-convex sparse spike estimation problem , we study its formulation in the parameter space (the space of amplitudes and locations of the Diracs). We observe that a smooth non-convex optimization can be performed. We place ourselves in a context where the number of measurements, either deterministic or random, guarantees the success of the ideal non-convex decoder with respect to a kernel metric $\|\cdot\|_h$, i.e. when we can ensure that: $$\label{eq:perf_bound} \|x^*-x_0\|_h\leq C \|e\|_2,$$ where $C$ is an absolute constant with respect to $e$ and $x_0 \in \Sigma_{k,\epsilon}$, the set $\Sigma_{k,\epsilon}$ is the set of sums of $k$ spikes separated by $\epsilon$ on a given bounded domain. Qualitatively, the kernel metric can be viewed as a measure of the energy at a given resolution set by a kernel $h$ (see Section \[sec:kernel\_dipole\]). The bound  is guaranteed by a restricted isometry property of $A$ defined using such kernel metric [@Gribonval_2017]. This RIP setting is verified in the deterministic (see Section \[sec:kernel\_dipole\]) and random weighted Fourier measurement contexts [@Gribonval_2017]. We link this RIP of measurement operators with the conditioning of the Hessian of the global minimum, and we give an explicit basin of attraction of the global minimum . This study has direct consequences for the theoretical study of greedy approaches. Indeed a basin of attraction permits to give recovery guarantees for the gradient descent (the initialization must fall within the basin), which is a step in the iterations of the greedy approach. Parametrization of the model set $\Sigma$ ----------------------------------------- Let $\Sigma \subset {\mathcal{M}}$ be a model set (union of subspaces) and $x_0 \in \Sigma$. Let $f(x) =\|Ax-y\|_2 $. A parametrization of $\Sigma$ is a function $\phi$ such that $\Sigma \subset \phi({\mathbb{R}}^d) = \{\phi(\theta) : \theta \in {\mathbb{R}}^d \}$. The point $\theta \in {\mathbb{R}}^d$ is a local minimum of $g : {\mathbb{R}}^d \to {\mathbb{R}}$ if there is $\epsilon > 0 $ such that for any $\theta' \in {\mathbb{R}}^d$ such that $\|\theta-\theta'\|_2 \leq \epsilon$, we have $ g(\theta) \leq g(\theta')$. 
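Continuing the illustrative sketch above (and reusing its variables `w`, `c`, `m`, `k`, `y`, `a0`, `t0`, `rng`), the fixed-step gradient descent analysed in the next subsection can be written down explicitly for this Fourier example, with the gradient of the squared residual computed in closed form. The step size and iteration count are illustrative assumptions; convergence is only expected when the initialization lies in a basin of attraction of the kind studied in this paper.

```python
def grad(a, t):
    """Closed-form gradient of F(a, t) = ||A phi(theta) - y||_2^2 for Fourier sampling."""
    E = (c / np.sqrt(m))[:, None] * np.exp(-1j * np.outer(w, t))   # columns: A delta_{t_i}
    r = E @ a - y                                                  # residual A phi(theta) - y
    grad_a = 2 * np.real(E.conj().T @ r)                           # dF/da_i
    grad_t = 2 * np.real((-1j * w[:, None] * E * a).conj().T @ r)  # dF/dt_i (chain rule)
    return grad_a, grad_t

# Fixed-step descent theta_{n+1} = theta_n - tau * grad(theta_n), started near theta_0
# (i.e. inside a presumed basin of attraction of the global minimum).
a = a0 + 0.1 * rng.normal(size=k)     # perturbed amplitudes
t = t0 + 0.02 * rng.normal(size=k)    # perturbed locations (still separated)
tau = 5e-4                            # step size; should satisfy tau < 1/L (Lipschitz bound below)
for _ in range(10000):
    ga, gt = grad(a, t)
    a, t = a - tau * ga, t - tau * gt

print("estimated amplitudes:", np.round(a, 3))
print("estimated locations :", np.round(t, 3))
```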
In the following, we consider the model of $\epsilon$-separated Diracs with $\epsilon >0$: $$\begin{split} \Sigma = \Sigma_{k,\epsilon} := \{ \phi(\theta)= \sum_{r=1,k} a_r \delta_{t_r} : \; &\theta = (a, t_1,..,t_k) \in {\mathbb{R}}^{k(d+1)}, a \in {\mathbb{R}}^k, t_r \in {\mathbb{R}}^d,\\ &\forall r \neq l, \|t_r-t_l\|_2> \epsilon, t_r \in {\mathcal{B}}_2(R)\},\\ \end{split}$$ where $$\label{eqB2} {\mathcal{B}}_2(R) = \{t \in {\mathbb{R}}^d : \|t\|_2 \leq R\}.$$ Note that, in this paper, the Dirac distributions could be supported on any compact set. We use ${\mathcal{B}}_2(R)$ for the sake of simplicity. For $t_r \in {\mathbb{R}}^d$, we write $t_r =(t_{r,j})_{j=1,d}$. We consider the following parametrization of $\Sigma_{k,\epsilon}$: $\sum_{i=1,k} a_i \delta_{t_i} = \phi(\theta) $ with $\theta= (a_{1},.., a_{k}, t_{1},..,t_{k})$. We define $$\Theta_{k,\epsilon}:= \phi^{-1}(\Sigma_{k,\epsilon}).$$ We consider the problem $$\label{eq:minimization2} \theta^* \in \arg \min_{ \theta \in E} g(\theta) = \arg \min_{ \theta \in E} \|A\phi(\theta)-y\|_2 .$$ where $E= {\mathbb{R}}^{k(d+1)}$ or $E= \Theta_{k,\epsilon}$ and $g(\theta)= f(\phi(\theta))$. Note that when $E = \Theta_{k,\epsilon}$, performing minimization  allows to recover the minima of the ideal minimization , yielding stable recovery guarantees under a RIP assumption. Hence we are particularly interested in this case. When $E = {\mathbb{R}}^{k(d+1)}$, we speak about unconstrained minimization for minimization . The objective of this paper is to study the shape of the basin of attraction of the global minimum of  when $E = \Theta_{k,\epsilon}$. Basin of attraction and descent algorithms {#sec:basin_def} ------------------------------------------ In this work, we are interested in minimizing $g$ defined in . Since $g$ is a smooth function, a classical method to minimize $g$ is to consider a fixed step gradient descent. The algorithm is the following. Consider an initial point $\theta_0 \in {\mathbb{R}}^d$ and a step size $\tau >0$. We define by recursion the sequence $\theta_n$ by $$\label{def_thetan} \theta_{n+1} = \theta_n - \tau \nabla g(\theta_n)$$ We now give the definition of basin of attraction that we will use in this paper. \[defbassin\] We say that a set $\Lambda \subset {\mathbb{R}}^d$ is a basin of attraction of $g$ if there exists $\theta^* \in \Lambda$ and $\tau>0$, such that if $\theta_0 \in \Lambda$ then the sequence $\theta_n$ defined by converges to $\theta^*$. This definition of basin of attraction is related to the following classical optimization result (see e.g. [@ciarlet1989introduction]): Assume $g$ to be a smooth coercive convex function, whose gradient is $L$ Lipschitz. Let $\theta_0 \in {\mathbb{R}}^d$. Then, if $\tau < \frac{1}{ L}$, there exists $\theta^* \in {\mathbb{R}}^d$ such that the sequence $\theta_n$ defined by converges to $\theta^*$. An immediate consequence of the previous proposition is the following corollary. \[cor:convergence\_gradient\_descent\] Assume $g$ to be a smooth function. Assume that $g$ has a minimizer $\theta^* \in {\mathbb{R}}^d$. Assume that there exists an open set $\Lambda \subset {\mathbb{R}}^d$ such that $\theta^* \in \Lambda$ , $g$ is convex on $\Lambda$ with $L$ Lipschitz gradient. Then, if $\theta_0 \in \Lambda$ and $\tau < \frac{1}{ L}$, the sequence $\theta_n$ defined by converges to $\theta^*$. Assume that $g$ is in ${\mathcal{C}}^2$. Let $\lambda_{\max}(t)$ the largest eigenvalue of the Hessian matrix of $g(t)$. Let $\Theta \subset {\mathbb{R}}^d$ an open set. 
If there exists $L>0$ such that for all $t$ in $\Theta$, $\lambda_{\max}(t) \leq L$, then $g$ has an $L$-Lipschitz gradient in $\Theta$. Related work ------------ While original for the sparse spike estimation problem, it must be noted that the study of non-convex optimization schemes for linear inverse problems has gained traction recently for different kinds of low-dimensional models. For low-rank matrix estimation, a smooth parametrization of the problem is possible and it has been shown that a RIP guarantees the absence of spurious minima [@Zhao_2015; @Bhojanapalli_2016]. In [@Waldspurger_2018], a model for phase recovery with alternating projections and smart initialization is considered. Conditions on the number of measurements guarantee the success of the technique. In the area of blind deconvolution and bi-convex programming, recent works have exploited similar ideas [@Ling_2017; @Cambareri_2018]. In the case of super-resolution, the idea of gradient descent has been studied in an asymptotic regime ($k\to \infty$) in [@Chizat_2018] with theoretical conditions based on Wasserstein gradient flow for the initialization. In our case, we study the particular super-resolution problem with a fixed number of spikes and we place ourselves in conditions where stable recovery is guaranteed, leading to explicit conditions on the initialization. The objective of this article is to investigate to what extent these ideas can be applied to the theoretical study of spike super-resolution estimation. The question of projected gradient descent raised in the last Section has been explored for general low-dimensional models [@Blumensath_2011]. It has been shown that the RIP guarantees the convergence of such algorithms with an ideal (often impractical) projection. Approximate projected gradient descents have also been studied and shown to be successful for some particular applications [@Golbabaee_2018]. The spike super-resolution problem adds the parametrization step to these problems. Contributions and organization of the paper ------------------------------------------- After a precise description of the setting, the definition of the kernel metric of interest and the associated restricted isometry for the spike estimation problem at the beginning of Section \[sec:Hessian\], this article gives the following original results: 1. A bound on the conditioning of the Hessian at a global minimum of the minimization in the parameter space is given in Section \[sec:Hessian\]. This bound shows that the better the RIP constants are (RIP constants improve with the number of measurements), the better the non-convex minimization problem behaves. It also shows that there is a basin of attraction of the global optimum where no separation constraints are needed (for descent algorithms with an initialization close to the minimum, separation constraints can be discarded). 2. An explicit shape of the basin of attraction of global minima is given in Section \[sec:basin\]. The size of the basin of attraction increases when the RIP constant gets better. To conclude, we discuss the role of the separation constraint in descent algorithms in Section \[sec:projected\_gradient\], and we explain why enforcing a separation might improve them. Conditioning of the Hessian {#sec:Hessian} =========================== This section is devoted to the study of the Hessian matrix of $g$. 
In particular, we provide a bound on the conditioning of the Hessian at a global minimum of the minimization in the parameter space. Notations --------- The operator $A$ is a linear operator modeling $m$ measurements in ${\mathbb{C}}^m$ ( ${\mathrm{Im}}A \subset {\mathbb{C}}^m$ ) on the space of measures on ${\mathbb{R}}^d$ defined by: for $l=1,m$, $$\label{eq:distribution} (Au)_l = \int_{{\mathbb{R}}^d} \alpha_l(t) {\mathop{}\!\mathrm{d}}u(t)$$ where $(\alpha_l)_l$ is a collection of functions in ${\mathcal{C}}^2({\mathcal{B}}_2(R))$ (twice continuously differentiable functions on ${\mathcal{B}}_2(R)$ defined in ). Notice that the integral used in is in fact a duality product between a function in ${\mathcal{C}}^2({\mathcal{B}}_2(R))$ and a finite signed measure over ${\mathbb{R}}^d$. As the $\alpha_l$ are in ${\mathcal{C}}^2({\mathcal{B}}_2(R))$, this duality product is well defined. While a lot of results for spike super-resolution are expressed on the $d$-dimensional Torus ${\mathbb{T}}^d$, we prefer the setting of Diracs with bounded support on ${\mathbb{R}}^d$, which is often closer to the physics of the considered phenomenon. However, our work extends directly to the Torus setting by replacing ${\mathbb{R}}^d$ by ${\mathbb{T}}^d$ and ${\mathcal{B}}_2(R)$ by ${\mathbb{T}}^d$. In ${\mathbb{C}}^m$, we consider the Hermitian product ${\langle}x,y{\rangle}= \sum x_i \bar{y}_i$. An example of such a measurement operator is the (weighted) Fourier sampling: $(Au)_l = \frac{1}{\sqrt{m}} \int_{{\mathbb{R}}^d} c_l e^{-j {\langle}\omega_l,t{\rangle}} {\mathop{}\!\mathrm{d}}u(t)$ for some chosen frequencies $\omega_l \in {\mathbb{R}}^d$ and frequency dependent weights $c_l \in {\mathbb{R}}$. Let $x = \sum_{i=1,k} a_i \delta_{t_i}$. By linearity of $A$, we have $$(Ax)_l = \sum_{i=1}^k (A\delta_{t_i})_l =\sum_{i=1}^k a_i \alpha_l(t_i).$$ With $g(\theta)=f(\phi(\theta))= \|A \phi(\theta)-y\|_2^2$, we get: $$g(\theta)=\sum_{l=1}^m \left|\sum_{i=1}^k a_i\alpha_l(t_i)-y_l\right|^2.$$ In the following, the notion of directional derivative will be important. Let $f$ be a ${\mathcal{C}}^1$ function, and $v \in {\mathbb{R}}^d$ such that $\|v\|_2 = 1$. Then we can define the directional derivative of $f$ in direction $v$ by: $$f_v'(t):=\langle v, \nabla f(t) \rangle=\lim_{h \to 0^+} \frac{f(t+hv)-f(t)}{h}$$ Let $f$ be a ${\mathcal{C}}^2$ function, and $(v_1,v_2) \in {\mathbb{R}}^{2d}$ such that $\|v_1\|_2 = \|v_2\|_2 = 1$. Then we can define the second order directional derivative of $f$ in directions $v_1$ and $v_2$ by: $$f_{v_1,v_2}''(t):=\langle v_1, \nabla^2 f(t) v_2 \rangle$$ Notice that of course $f_{v_1,v_2}''(t)=f_{v_2,v_1}''(t)$. If $v_1=v_2$, we write $f_{v_1}''(t):=f_{v_1,v_1}''(t)$. In particular, these directional derivatives allow us to introduce derivatives of Dirac measures supported on ${\mathbb{R}}^d$. Let $v \in {\mathbb{R}}^d$ such that $\|v\|_2 = 1$. The distribution $\delta_{t_0,v}^\prime $ is defined by . It is the limit of $\nu_\eta=-\frac{\delta_{t_0+\eta v} -\delta_{t_0}}{\eta}$ for $\eta \to 0^+$ in the distributional sense: for all $h \in {\mathcal{C}}^1 ({\mathbb{R}}^d)$, $\int_{{\mathbb{R}}^d} h(t) {\mathop{}\!\mathrm{d}}\nu_\eta(t) \to_{\eta\to 0^+} {{\boldsymbol \langle}}\delta_{t_0,v}^\prime , h{{\boldsymbol \rangle}}$. 
Similarly, the distribution $\delta_{t_0,v}^{\prime \prime}$ is defined by for $f \in {\mathcal{C}}^2 ({\mathbb{R}}^d)$ and the distribution $\delta_{t_0,v_1,v_2}^{\prime \prime}$ is defined by for $f \in {\mathcal{C}}^2 ({\mathbb{R}}^d)$ where $f_{v_1,v_2}''$ is the derivative of $f$ in direction $v_1$ chained with the derivative of $f$ in direction $v_2$. When $v = e_i$ is a vector of the canonical basis of ${\mathbb{R}}^d$ , we write $\delta_{t_0,i}^{\prime}=\delta_{t_0,e_i}^{ \prime}$ and $\delta_{t_0,i}^{\prime \prime}= \delta_{t_0,e_i,e_i}^{\prime \prime}$. We now have the necessary tools to start the study of the Hessian of $g$. Gradient and Hessian of the objective function $g$ {#sec:gradient_Hessian} -------------------------------------------------- We calculate the gradient and Hessian of $g$ in the two following propositions. We start with the gradient of $g$. \[prop:gradient\] For any $\theta \in {\mathbb{R}}^{2k}$, we have: $$\begin{split} \frac{\partial g(\theta)}{\partial a_r}&= 2 {\mathcal{R}e}{\langle}A\delta_{t_r}, A \phi(\theta)-y{\rangle}, \\ \end{split}$$ $$\begin{split} \frac{\partial g(\theta)}{\partial t_{r,j}}&= - 2 a_r{\mathcal{R}e}{\langle}A \delta_{t_r,j}^{\prime}, A \phi(\theta)-y{\rangle}.\\ \end{split}$$ See Appendix \[proof21\]. The next proposition gives the values of the Hessian matrix of $g$ which has a simple expression with the use of derivatives of Diracs. \[prop:Hessian\] For any $\theta \in {\mathbb{R}}^{k(d+1)}$ $$\begin{split} H_{1,r,s}=\frac{\partial^2 g(\theta)}{\partial a_r \partial a_s}&= 2 {\mathcal{R}e}{\langle}A\delta_{t_r}, A \delta_{t_s} {\rangle}. \\ \end{split}$$ $$\begin{split} H_{2,r,j_1,s,j_2}=\frac{\partial^2 g(\theta)}{\partial t_{r,j_1} \partial t_{s,j_2}}&= 2 a_r a_s {\mathcal{R}e}{\langle}A \delta_{t_r,j_1}^{\prime}, A \delta_{t_s,j_2}^{\prime}{\rangle}\\ & + {\mathbf{1}}(r=s)2a_r{\mathcal{R}e}{\langle}A \delta_{t_r,j_1,j_2}^{\prime \prime}, A\phi(\theta)-y {\rangle}.\\ \end{split}$$ $$\begin{split} H_{12,r,s,j} =\frac{\partial^2 g(\theta)}{\partial a_r \partial t_{s,j}}&= -2 a_s {\mathcal{R}e}{\langle}A \delta_{t_r}, A \delta_{t_s,j}^{\prime}{\rangle}-{\mathbf{1}}(r=s) 2{\mathcal{R}e}{\langle}A \delta_{t_s,j}^{\prime}, A\phi(\theta)-y {\rangle}. \\ \end{split}$$ Hence the Hessian can be decomposed as the sum of two matrices $H= G + F$ with $$\begin{split} G_{1,r,s}&= 2 {\mathcal{R}e}{\langle}A\delta_{t_r}, A \delta_{t_s} {\rangle}, \\ G_{2,r,j_1,s,j_2}&= 2 a_r a_s {\mathcal{R}e}{\langle}A \delta_{t_r,j_1}^{\prime}, A \delta_{t_s,j_2}^{\prime}{\rangle}, \\ G_{12,r,s,j}&= - 2 a_s {\mathcal{R}e}{\langle}A \delta_{t_r}, A \delta_{t_s,j}^{\prime}{\rangle}.\\ \end{split}$$ and $$\begin{split} F_{1,r,s}&= 0,\\ F_{2,r,j_1,s,j_2}&= {\mathbf{1}}(r=s)2a_r{\mathcal{R}e}{\langle}A \delta_{t_r,j_1,j_2}^{\prime \prime}, A\phi(\theta)-y {\rangle},\\ F_{12,r,s,j}&= -{\mathbf{1}}(r=s) 2{\mathcal{R}e}{\langle}A \delta_{t_s,j}^{\prime}, A\phi(\theta)-y {\rangle}.\\ \end{split}$$ See Appendix \[proof21\]. Kernel, dipoles and the RIP {#sec:kernel_dipole} ---------------------------- In order to be able to build an operator $A$ with a RIP, we define a reproducible kernel Hilbert space (RKHS) structure on the space of measures as in [@Gribonval_2017], see also [@Sriperumbudur_2010]. The natural metric on the space of finite signed measures, the total variation of measures, is not well suited for a RIP analysis of the spikes super-resolution problems, as it does not measure the spacing between Diracs. 
When using the RIP, fundamental objects appear in the calculations: dipoles of Diracs. In this section we show that the typical RIP implies a RIP on dipoles and their generalization. For finite signed measures over ${\mathbb{R}}^d$, the Hilbert structure induced by a kernel $h$ (a smooth function from ${\mathbb{R}}^d \times {\mathbb{R}}^d \to {\mathbb{R}}$) is defined by the following scalar product between 2 measures $\pi_1,\pi_2$ $${\langle}\pi_1,\pi_2 {\rangle}_h = \int_{{\mathbb{R}}^d}\int_{{\mathbb{R}}^d} h(t_1,t_2) {\mathop{}\!\mathrm{d}}\pi_1(t_1){\mathop{}\!\mathrm{d}}\pi_2(t_2).$$ We can consequently define $$\| \pi_1 \|_h^2 = {\langle}\pi_1,\pi_1 {\rangle}_h.$$ We have the relation $$\| \pi_1 +\pi_2\|_h^2 = \| \pi_1 \|_h^2 +2{\langle}\pi_1,\pi_2 {\rangle}_h +\| \pi_2\|_h^2.$$ Measuring distances with the help of $\|\cdot\|_h$ can be viewed as measuring distances at a given resolution set by $h$. Typically we use Gaussian kernels where the sharper the kernel is, the more accurate it is. The next definition is taken from [@Gribonval_2017]. An $\epsilon$-dipole (noted dipole for simplicity) is a measure $\pi = a_1\delta_{t_1}-a_2 \delta_{t_2}$ where $\|t_1-t_2\|_2 \leq \epsilon$. Two dipoles $\pi_1 = a_1\delta_{t_1}-a_2 \delta_{t_2}$ and $\pi_2 = a_3\delta_{t_3}-a_4 \delta_{t_4}$ are $\epsilon$-separated if their support are strictly $\epsilon$-separated (with respect to the $\ell^2$-norm on ${\mathbb{R}}^d$), i.e. if $\|t_1-t_3\|_2 > \epsilon$, $\|t_2-t_3\|_2 > \epsilon$ and $\|t_1-t_4\|_2 > \epsilon$ and $\|t_2-t_4\|_2 > \epsilon$. Compared to [@Gribonval_2017], we need to introduce a new definition. A generalized dipole $\nu$ is either a dipole or a distribution of order 1 of the form $a_1\delta_{t}+ a_2 \delta_{t,v}^{\prime}$. Two generalized dipoles are $\epsilon$-separated if their support are strictly $\epsilon$-separated (with respect to the $\ell^2$-norm on ${\mathbb{R}}^d$). In this article we use regular, symmetrical, translation invariant kernels. Most recent developments to non translation invariant kernels [@Poon_2018] could be considered to generalize this work, but they are out of the scope of this article for the sake of simplicity. \[assum:kernel\_prop\] A kernel $h$ follows this assumption if - $h \in {\mathcal{C}}^2({\mathbb{R}}^d,{\mathbb{R}}^d)$. - $h$ is symmetrical with respect to $0$, translation invariant, i.e. we can write $h(t_1,t_2)= \rho(\|t_1-t_2\|_2)$ where $\rho \in {\mathcal{C}}^2({\mathbb{R}})$. - . - - there is a constant $\mu_h$ such that, for all two $\epsilon$-separated dipoles, ${\langle}\nu_1,\nu_2 {\rangle}_h \leq \mu_h \|\nu_1\|_h \|\nu_2\|_h$ (mutual coherence). Note that the assumption that $h \in {\mathcal{C}}^2$ guarantees the existence of integrals with respect to finite signed measures and duality product with distribution of order 1 with bounded supports. #### Example The now almost canonical well behaved kernel is the Gaussian kernel. From [@Gribonval_2017], for $\epsilon=1$, using $h_0(t,s)= e^{-(t-s)^2/(2\sigma_k^2)} $ with $\sigma_k^2 = \frac{1}{2.4log(2k-1) +24}$, we have that $h_0$ follows Assumption \[assum:kernel\_prop\] with $\mu_{h_0} =\frac{3}{4(k-1)}$.\ The following Lemma and definition extend the scalar product induced by $h$ to generalized dipoles. \[def:scalar\_dip\] Let $\nu_1 = a_1 \delta_{t_1} +b_1 \delta_{v_1,t_1}^{\prime}, \nu_2 = a_2 \delta_{t_2} +b_2 \delta_{v_2,t_2}^{\prime} $ be two generalized dipoles. 
Then $\nu_1$ and $\nu_2$ are limits (in the distributional sense) of two sequences of dipoles $\nu_1^{\eta_1} $ and $\nu_2^{\eta_2}$ for $\eta_1,\eta_2 \to 0$, the quantity ${\langle}\nu_1^{\eta_1},\nu_2^{\eta_2} {\rangle}_h $ converges, the limit is unique (does not depend on the choice of $\nu_1^{\eta_1} $ and $\nu_2^{\eta_2}$) and $$\begin{split} \lim_{\eta_1,\eta_2 \to 0} {\langle}\nu_1^{\eta_1},\nu_2^{\eta_2}{\rangle}_h =& a_1 a_2f( t_1-t_2 ) - a_2 b_1 f_{v_1}^{\prime}(t_1-t_2) -a_1b_2f_{v_2}^{\prime}(t_2-t_1)\\ &-b_1b_2 f_{v_1,v_2}^{\prime \prime}(t_1- t_2 ) \\ \end{split}$$ where $f(t) = \rho(\|t\|_2)$. See Appendix \[proof22\]. Let $\nu_1 = a_1 \delta_{t_1} +b_1 \delta_{v_1,t_1}^{\prime}, \nu_2 = a_2 \delta_{t_2} +b_2 \delta_{v_2,t_2}^{\prime} $ be two generalized dipoles. With the previous Lemma, we define $$\begin{split} {\langle}\nu_1,\nu_2 {\rangle}_h &:= \lim_{\eta_1,\eta_2 \to 0} {\langle}\nu_1^{\eta_1},\nu_2^{\eta_2}{\rangle}_h \\ \end{split}$$ where $\nu_1^{\eta_1} $ and $\nu_2^{\eta_2}$ are two sequences of dipoles that converge to $\nu_1$ and $\nu_2$ (in the distributional sense) for $\eta_1,\eta_2 \to 0$. We have the following properties that are immediate consequences of Lemma \[def:scalar\_dip\]. \[lem:kernel\_dirac\_properties\] Let $h$ be a kernel meeting Assumption \[assum:kernel\_prop\]. We have the following properties for any $t \in {\mathbb{R}}^d$: $$\|\delta_t\|_h^2 = \rho(0) = 1$$ $${\langle}\delta_t, \delta_{t,v}^{\prime}{\rangle}_h = -\rho'(0) = 0$$ $$\|\delta_{t,v}^{\prime}\|_h^2 =|\rho''(0)|$$ See Appendix \[proof22\]. From [@Gribonval_2017 Lemma 6.5], we have the following Lemma: \[lem:pyth\_dipole\] Suppose that for any two $\epsilon$-separated dipoles, ${\langle}\pi_1,\pi_2 {\rangle}_h \leq \mu \|\pi_1\|_h \|\pi_2\|_h$ (mutual coherence). Then for $k$ $\epsilon$-separated dipoles $\pi_1, ..., \pi_k$ such that $\max_i \|\pi_i\|_h>0$, we have $$1 - (k-1)\mu \leq \frac{\|\sum_{i=1,k} \pi_i\|_h^2}{\sum_{i=1,k}\| \pi_i\|_h^2} \leq 1+ (k-1) \mu.$$ We can generalize the previous result to generalized dipoles. \[lem:dip2Generalizeddip\] Let $\nu_1,\nu_2$ be two $\epsilon$-separated **generalized** dipoles. Suppose that for any two $\epsilon$-separated dipoles $\pi_1,\pi_2$, ${\langle}\pi_1,\pi_2 {\rangle}_h \leq \mu \|\pi_1\|_h \|\pi_2\|_h$ (mutual coherence). Then we have: $${\langle}\nu_1, \nu_2{\rangle}_h \leq \mu \|\nu_1\|_h \|\nu_2\|_h$$ See Appendix \[proof22\]. A consequence of the previous result is the following Lemma: \[lem:mutual\_Generalized\_dipoles\] Suppose that for any two $\epsilon$-separated generalized dipoles, ${\langle}\nu_1,\nu_2 {\rangle}_h \leq \mu \|\nu_1\|_h \|\nu_2\|_h$ (mutual coherence). Then for $k$ $\epsilon$-separated generalized dipoles $\nu_1, ..., \nu_k$ such that $\max_i \|\nu_i\|_h>0$, we have $$1 - (k-1)\mu \leq \frac{\|\sum_{i=1,k} \nu_i\|_h^2}{\sum_{i=1,k}\| \nu_i\|_h^2} \leq 1+ (k-1) \mu.$$ See Appendix \[proof22\]. We are now able to define the Restricted Isometry Property (RIP). The secant set of the model set $\Sigma$ is $\Sigma - \Sigma := \{ x-y : x \in \Sigma, y \in \Sigma\}$. $A$ has the RIP on $\Sigma-\Sigma$ with respect to $\|\cdot\|$ with constant ${\gamma}$ if for all $x \in \Sigma -\Sigma$: $$\label{eq:DefRIP} (1-{\gamma})\|x\|^2 \leq \|A x\|_2^2 \leq (1+{\gamma})\|x\|^2.$$ In the following we will suppose that $A$ has RIP ${\gamma}$ on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$ with respect to $\|\cdot\|_h$, i.e. 
for $\sum_{r=1,k}a_r\delta_{t_r}-\sum_{r=1,k} b_r \delta_{s_r} \in \Sigma_{k,\epsilon} -\Sigma_{k,\epsilon}$, we have $$\begin{aligned} (1-{\gamma})\left\|\sum_{r=1,k} (a_r\delta_{t_r}-b_r\delta_{s_r} )\right\|_h^2 & \leq & \left\|A\sum_{r=1,k} (a_r\delta_{t_r}- b_r\delta_{s_r})\right\|_2^2 \\ & \leq & (1+{\gamma}) \left\|\sum_{r=1,k}a_r\delta_{t_r}-b_r\delta_{s_r}\right\|_h^2. \nonumber\end{aligned}$$ From [@Gribonval_2017], with a Gaussian kernel $h$ it is possible to build a random $A$ with RIP constant ${\gamma}$. With this choice of $A$, the ideal minimization  yields a stable and robust estimation of $x_0$ with respect to the $\|\cdot\|_h$. In [@Candes_2013], stable recovery for $\epsilon$-separated Diracs is guaranteed on the Torus with the metric $\|K_{hi}*\cdot\|_{L^1}$ where $K_{hi}*$ is the convolution with a Fejér kernel. From [@Bourrier_2014 IV.A], this guarantees a lower RIP with respect to this metric. This guarantees the existence of a lower RIP with respect to a kernel metric for the conventional deterministic spike super-resolution setting. The RIP on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$ implies a RIP on $\epsilon$-separated generalized dipoles. \[lem:RIP\_derivative\] Suppose $A$ has the RIP on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$ with constant ${\gamma}$. Let $(\nu_r)_{r=1,k}$, $k$ $\epsilon$-separated dipoles supported in $\text{rint} {\mathcal{B}}_2(R)$, we have $$\begin{split} (1-{\gamma})\left\| \sum_{r=1,k}\nu_r\right\|_h^2 \leq \left\|A(\sum_{r=1,k} \nu_r)\right\|_2^2 \leq (1+{\gamma})\left\|\sum_{r=1,k} \nu_r \right\|_h^2. \\ \end{split}$$ See Appendix \[proof22\]. Finally, we will need a last estimate. To state it, we need first to introduce the following definition: Let $A$ such that the $\alpha_l$ are in ${\mathcal{C}}^2({\mathcal{B}}_2(R))$. We define $$\label{defDAR} D_{A,R} :=\sup_{1\leq l \leq m ; v\in {\mathbb{R}}^d,w\in{\mathbb{R}}^d: \|v\|_2= \|w\|_2=1; t \in {\mathcal{B}}_2(R)} | \alpha_{l,v,w}^{\prime \prime}(t)| .$$ The constant $D_{A,R}$ is finite, and it is thus a bound of the directional second derivatives of the $\alpha_l$ over ${\mathcal{B}}_2(R)$. \[lem:upper\_RIP\_second\_deriv\] Let $A$ such that the $\alpha_l$ are in ${\mathcal{C}}^2({\mathcal{B}}_2(R))$. Then, for any $t \in {\mathcal{B}}_2(R)$, with directions $v_1,v_2$, we have $$\|A\delta_{t,v_1,v_2}^{\prime \prime}\|_2 \leq \sqrt{m}D_{A,R}.$$ where $D_{A,R}$ is defined in Equation . See Appendix \[proof22\]. Control of the conditioning of the Hessian with the restricted isometry property {#sec:control_Hessian} -------------------------------------------------------------------------------- We can now give a lower (resp. upper) bound for the highest (resp. lowest) eigenvalues of the Hessian matrix $H$ of $g$ (computed in Proposition \[prop:Hessian\]). \[th:min\_max\_eigen\_control\_H\] Let $\theta = (a_1,..,a_k, t_1,..t_k) \in \Theta_{k,\epsilon}$ with $t \in \text{rint}{\mathcal{B}}_2(R)$ and $\theta^* \in \Theta_{k,\epsilon}$ a minimizer of . Suppose $h$ follows Assumption \[assum:kernel\_prop\]. Let $H$ the Hessian of $g$ at $\theta$. Suppose $A$ has RIP ${\gamma}$ on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$. 
We have $$\label{largeeigenvalue} \sup_{\|u\|_2 =1} u^THu \leq 2(1+{\gamma})(1+(k-1)\mu)\max(1,(a_r^2|\rho''(0)|)_{r=1,l}) + \xi; \\$$ $$\label{smalleigenvalue} \inf_{\|u\|_2 =1} u^THu \geq 2(1-{\gamma})(1-(k-1)\mu)\min(1,(a_r^2|\rho''(0)|)_{r=1,l})-\xi\\$$ where $\xi = 2(d+1)\max( \max_{r}|a_r| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (\|A\phi(\theta)-A\phi(\theta^*)\|_2+ \|e\|_2)$, the constant $D_{A,R}$ is defined in and $e$ is the finite energy measurement noise. See Appendix \[proof23\]. Notice that, in the noiseless case, ensures in particular that $g$ has a positive Hessian matrix in $\theta^* $. Moreover, if $\min_r |a_r| >0$, there exists a neighbourhood of $\theta^* $, in which $g$ remains convex. We will give an explicit size for this neighbourhood in the next section. Notice also that gives an upper bound on the Lipschitz constant of the gradient of $g$. This implies the existence of a basin of attraction (see Definition \[defbassin\]) with a uniform bound for the step size. With the method to choose $A$ from [@Gribonval_2017 Lemma 6.5], for any ${\gamma}$ and $m \gtrsim k^2d \text{polylog}(k,d)/{\gamma}^2$, we can find $A$ that has RIP with high probability with a kernel $h_0$ having the right properties. We can control the conditioning of the Hessian matrix $\kappa(H)$ at a global minimum as the term $\|A\phi(\theta)-A\phi(\theta^*)\|_2$ vanishes in the control from Theorem \[th:min\_max\_eigen\_control\_H\]. Particularly, in the noiseless case we have the following Corollary. The lower bound is useful to confirm the dependency on the ratio of amplitudes when it converges to $+\infty$. For this next result, we make the additional assumption that $\min_r |a_r| > 0$. In practice, this amounts to assuming that when estimating the Diracs, we do not over-estimate their number (which will often be the case, in particular in the presence of noise). When the number of Diracs is overestimated, the minimizers of  are points that are not isolated, the notion of basin of attraction would have to be generalized to a basin of attraction of a set of minimizers (when $a_r = 0$, $g(\theta)$ does not depend on $t_r$), which is out of the scope of this article for clarity purpose. \[cor:control\_Hessian\] Let $x_0 = \sum_{r=1,k} a_r \delta_{t_r} \in \Sigma_{k,\epsilon} = \phi(\theta_0)$ and $e=0$. Suppose $h$ follows Assumption \[assum:kernel\_prop\]. Let $H$ the Hessian of $g$ at $\theta_0$. Suppose $A$ has RIP ${\gamma}$ on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$, and that $\min_r |a_r| > 0$. We have $$\begin{split} \frac{(1-{\gamma})\max(1,(a_r^2|\rho''(0)|)_{r=1,l})}{(1+{\gamma})\min(1,(a_r^2|\rho''(0)|)_{r=1,l})} & \leq \kappa(H) \\ & \leq \frac{(1+{\gamma})(1+(k-1)\mu)\max(1,(a_r^2|\rho''(0)|)_{r=1,l})}{(1-{\gamma})(1-(k-1)\mu)\min(1,(a_r^2|\rho''(0)|)_{r=1,l})}. \end{split}$$ See Appendix \[proof23\]. It is easy to see that for a noise $e$ with small enough energy (i.e. such that $\xi$ is strictly lower than $2(1-{\gamma})(1-(k-1)\mu)\min(1,(a_r^2|\rho''(0)|)_{r=1,l})$, if $\min_r |a_r| > 0$, then the Hessian at a global minimum is strictly positive. Of course, this may require a very small noise since the ratio of amplitudes at the global minimum can be large. We remark that for a same maximal ratio of amplitudes in $\theta^*$, a better conditioning bound is achieved when $\max_{r=1,l}a_r^2 |\rho''(0)| \geq 1 \geq \min_{r=1,l}a_r^2 |\rho''(0)|$. We attribute this to the fact that we estimate amplitudes and locations at the same time. 
The amplitudes must be appropriately scaled to match the variations of $g$ with respect to locations. Intuitively, alternate descent with respect to amplitudes and locations might be better than the classical gradient descent for easily setting the descent step. As $g$ is ${\mathcal{C}}^2$, ensuring the strict positivity of the Hessian at the global minimum guarantees the *existence* of a basin of attraction as emphasized in Section \[sec:basin\_def\]. In the next Section, we give an explicit formulation of a basin of attraction. Explicit basin of attraction of the global minimum {#sec:basin} ================================================== Let $\theta_1 \in {\mathbb{R}}^d$. Can we guarantee, for some notion of distance $d$, that $d(\theta_1,\theta^*)\leq C$ and $\theta_1 \neq \theta^*$, with $C$ an explicit constant, imply $\nabla g(\theta_1) \neq 0$? The following theorems show that it is in fact the case. With a strong RIP assumption, we can give an explicit basin of attraction of the global minimum for minimization  without separation constraints. Uniform control of the Hessian ------------------------------ In the noiseless case, a global minimum $\theta^*$ of the constrained minimization of $g$ over $\Theta_{k,\epsilon}$ is also a global minimum of the unconstrained minimization because $g(\theta^*) = 0$. In the presence of noise, we can no longer guarantee that the minimizer of the constrained problem $\theta^*$ is a global minimum of the unconstrained problem. However, the shape of the constraint guarantees that it is a local minimum (see next Lemma). \[lem:link\_unconstrained\_constrained\] Suppose $\theta^* =(a_1,..,a_k,t_1,..,t_k)$ is a result of constrained minimization  with $t_i \in \text{rint} {\mathcal{B}}_2(R)$. Then $\theta^*$ is a local minimum of $g$. Let $\theta^*=(a_1,..,a_k,t_1,..,t_k)$. As for all $i \neq j$, $\|t_i-t_j\|_2> \epsilon$, there exists $\eta >0$ such that for all $\theta = (b_1,..,b_k,s_1,..,s_k)$ such that $\|s_i-t_i\|_\infty < \eta$, we have $\theta \in \Theta_{k,\epsilon}$. Hence, $\theta^* + B_\infty(\eta) \subset \Theta_{k, \epsilon}$, and $\theta^* \in \arg \min_{\theta \in \theta^* + B_\infty(\eta) } g(\theta)$. Hence we can still calculate a basin of attraction of $\theta^*$ (for the unconstrained minimization). The expression of the basin in the next Section is a direct consequence of the following Theorem, which uniformly controls the Hessian of $g$ in an explicit neighbourhood of $\theta^*$. \[th:basin\] Suppose $A$ has RIP ${\gamma}$ on $\Sigma_{k,\frac{\epsilon}{2}}-\Sigma_{k,\frac{\epsilon}{2}}$ and that $h$ follows Assumption \[assum:kernel\_prop\] and has mutual coherence constant $\mu$ on $\frac{\epsilon}{2}$-separated dipoles. Let $\theta^*=(a_1,..,a_k,t_1,..,t_k) \in \Theta_{k,\epsilon}$ be a result of constrained minimization  such that $t_i \in \text{rint} {\mathcal{B}}_2(R)$. Suppose $0<|a_1| \leq |a_2| ... \leq |a_k|$. Let $\beta \geq 0$ and $$\begin{split} \Lambda_{\theta^*,\beta} := \{& \theta: \|\theta-\theta^*\|_2 < \beta \}. 
\end{split}$$ If $\theta \in \Lambda_{\theta^*,\beta}$, then the Hessian $H$ of $g$ at $\theta$ satisfies the following bounds: $$\sup_{\|u\|_2 =1} u^THu \leq 2(1+{\gamma})(1+(k-1)\mu)\max(1,(|a_k|+ \beta)^2|\rho''(0)|) + \xi; \\$$ $$\inf_{\|u\|_2 =1} u^THu \geq 2(1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1| -\beta )^2|\rho''(0)|)-\xi\\$$ where $\xi =2(d+1)\max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (\sup_{\theta \in \Lambda_{\theta^*,\beta}} \|A\phi(\theta)-A\phi(\theta^*)\|_2+ \|e\|_2)$, the constant $D_{A,R}$ is given in  and $e$ is the finite energy measurement noise. See Appendix \[proof3\]. We observe that we require a stronger RIP than the usual one on $\Sigma_{k,\epsilon} - \Sigma_{k,\epsilon}$ to guarantee that unconstrained minimization converges in the basin of attraction $ \Lambda_{\theta^*,\beta} $. When the separation constraint is added for the basin of attraction (we look for potential critical points in $\Sigma_{k,\epsilon}$), we can provide better bounds. We will discuss what we could expect from constrained descent algorithms in Section \[sec:projected\_gradient\]. \[th:basin\_with\_constraint\] Suppose $A$ has RIP ${\gamma}$ on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$ and that $h$ follows Assumption \[assum:kernel\_prop\] and has mutual coherence constant $\mu$ on $\epsilon$-separated dipoles. Suppose $0<|a_1| \leq |a_2| ... \leq |a_k|$. Let $\theta^*=(a_1,...,a_k,t_1,..t_k)\in \Theta_{k,\epsilon}$ be a result of constrained minimization  such that $t_i \in \text{rint} {\mathcal{B}}_2(R)$. Let $ \beta \geq 0 $ and $$\begin{split} \Lambda_{\theta^*,\beta} := \{& \theta: \|\theta-\theta^*\|_2 < \beta \}. \end{split}$$ Then, for $\theta \in \Theta_{k,\epsilon} \cap \Lambda_{\theta^*,\beta} $, the Hessian $H$ of $g$ at $\theta$ satisfies the following bounds: $$\sup_{\|u\|_2 =1} u^THu \leq 2(1+{\gamma})(1+(k-1)\mu)\max(1,(|a_k|+ \beta)^2|\rho''(0)|) + \xi; \\$$ $$\inf_{\|u\|_2 =1} u^THu \geq 2(1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1| -\beta)^2|\rho''(0)|)-\xi\\$$ where $\xi =2(d+1)\max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (\sup_{\theta \in \Lambda_{\theta^*,\beta}} \|A\phi(\theta)-A\phi(\theta^*)\|_2+ \|e\|_2)$, the constant $D_{A,R}$ is given in and $e$ is the finite energy measurement noise. See Appendix \[proof3\]. Explicit basin of attraction in the noiseless and noisy case ------------------------------------------------------------- With the help of this uniform control of the Hessian, we give an explicit (yet suboptimal) basin of attraction. \[cor:basin\_noiseless\] Under the hypotheses of Theorem \[th:basin\], let $\theta^* \in \Theta_{k,\epsilon}$ be a result of constrained minimization . Let $a^* =(a_1,a_2...,a_k)$. Take $$\beta_{max} := \min \left( c_h, \frac{|a_1|}{2}, C_1C_2 \right)$$ where $C_1 = \frac{ (1-{\gamma})(1-(k-1)\mu) }{ (d+1)\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} } $ and $C_2 = \frac{ \min(1,|a_1|^2|\rho''(0)|/4) }{ \max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) \sqrt{ 1 + 2|\rho''(0)| \|a^*\|_2^2}}$. Then the set $\Lambda_{\theta^*,\beta_{max}}$ is a basin of attraction of $\theta^*$. See Appendix \[proof3\]. The parameter $\beta$ controls the distance between a parameter and the optimal parameter. When the RIP constant $\gamma$ decreases (and generally as the number of measurements increases), the size of the basin of attraction increases. 
In both the regular and the random Fourier sampling contexts, the constant $D_{A,R}$ is bounded when $m$ increases. When the mutual coherence constant $\mu$ decreases, the basin of attraction also increases. Finally, we note that the smaller $\beta$ is, the smaller the upper bound on the operator norm of the Hessian. When the noise contaminating the measurements is small enough, we have similar results with a smaller basin of attraction. \[cor:basin\_noisy\] Under the hypotheses of Theorem \[th:basin\], let $\theta^* \in \Theta_{k,\epsilon}$ be a result of constrained minimization . Let $a^* =(a_1,a_2...,a_k)$. Take $$\beta_{max} := \min \left( c_h, \frac{|a_1|}{2}, C_1C_3 \right)$$ where $C_1 = \frac{ (1-{\gamma})(1-(k-1)\mu) }{ (d+1)\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} } $ and $C_3 = \frac{ \min(1,|a_1|^2|\rho''(0)|/4) }{ \max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (1+ \sqrt{ 1 + 2|\rho''(0)| \|a^*\|_2^2})}$. Suppose $\|e\|_2 \leq \sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \beta $. Then the set $\Lambda_{\theta^*,\beta_{max}}$ is a basin of attraction of $\theta^*$. See Appendix \[proof3\]. Towards new descent algorithms for sparse spike estimation? {#sec:projected_gradient} =========================================================== We have shown that, given an appropriate measurement operator for separated Diracs, a good initialization is sufficient to guarantee the success of a simple gradient descent. Such a gradient descent is used in the practical setting of compressive statistical learning [@Keriven_2017]. If we could additionally guarantee that, by greedily estimating Diracs, we fall within the basin of attraction, we would have a full non-convex optimization technique with guarantees of convergence to a global minimum. In other works [@Duval_2017thin1; @Duval_2017thin2], it has been shown that discretization (on grids) of convex methods has a tendency to produce spurious spikes at Dirac locations. Our results seem to indicate that merging spikes that are close to each other when performing a gradient descent might break the barrier between continuous and discrete methods. Theorem \[th:basin\_with\_constraint\] raises another question, as the Hessian of $g$ is more easily controlled in $\Theta_{k,\epsilon}$. More generally, can we build a simple descent algorithm that stays in $\Theta_{k,\epsilon}$ to get larger basins of attraction? Consider the problem for $d=1$ in the noiseless case for the sake of clarity. We want to use the following descent algorithm: $$\theta_{i+1} = P_{\Theta_{k,\epsilon}}(\theta_i - \tau \nabla g(\theta_i))$$ where $P_{\Theta_{k,\epsilon}}$ is a projection onto the separation constraint. Notice that since $\Theta_{k,\epsilon}$ is not a convex set, we cannot easily define the orthogonal projection onto it (it may not even exist). If we suppose that the gradient descent step decreases $g$ (i.e. $g(\theta_i - \tau \nabla g(\theta_i))< g(\theta_i)$), is it possible to guarantee that applying the projection step keeps decreasing $g$? Consider: $$\label{pseudoproj} P_{\Theta_{k,\epsilon}}(\theta) \in \arg \min_{\tilde{\theta} \in \Theta_{k,\epsilon}} \left|\|A \phi(\tilde{\theta})-y \|_2 - \|A \phi(\theta)-y \|_2\right|$$ First consider the following Lemma: \[lem:pseudo\_conv\] Let $d=1$. Let $\theta_0,\theta_1 \in \Theta_{k,\epsilon}$. Let $g(\theta) = \|A\phi(\theta) - A \phi(\theta_0)\|$. 
Then for all $\alpha$ such that $0 = g(\theta_0) \leq \alpha \leq g(\theta_1)$, there exists $\theta^* \in \Theta_{k,\epsilon}$ such that $g(\theta^*) = \alpha$. See Appendix \[proof4\]. Lemma \[lem:pseudo\_conv\] essentially guarantees that is is possible to continuously map the interval $[0,g(\theta_1)]$ by $g$ with elements of $ \Theta_{k,\epsilon}$. Hence, at a step $i+1$, we have $$|g(\theta_{i+1}) -g(\theta_i)| = |g(\theta_i - \tau \nabla g(\theta_i)) -g(\theta_i)|.$$ The projection $P_{\Theta_{k,\epsilon}}$ defined by is not easy to calculate (in fact, it is a similar optimization problem as the main problem). Other more “natural” projections on $\Theta_{k,\epsilon}$ could be defined as : $$P_{\Theta_{k,\epsilon}}(\theta) \in \phi^{-1} (\arg \inf_{x \in \Sigma_{k,\epsilon}}\|A x-A \phi(\theta) \|_2)$$ or $$P_{\Theta_{k,\epsilon}}(\theta) \in \phi^{-1}( \arg \inf_{x \in \Sigma_{k,\epsilon}} \|x- \phi(\theta) \|_h).$$ However they suffer from the same calculability drawback. This suggests to build a new family of heuristic algorithms of spike estimation where we propose heuristics to approach the projection of $\hat{\theta}_{i+1}$ on $\Theta_{k,\epsilon}$. Recovery guarantees would be obtained by guaranteeing that the projection heuristic does not increase the value of $g$ by too much compared to the gradient descent step. Annex ===== Proofs for Section \[sec:gradient\_Hessian\] {#proof21} -------------------------------------------- $$\begin{split} \frac{\partial g(\theta)}{\partial a_r} &= \frac{\partial }{\partial a_r} \sum_{l=1}^m \left|\sum_{i=1}^k a_i\alpha_l(t_i)-y_l\right|^2\\ &= \sum_{l=1}^m 2 {\mathcal{R}e}\left(\alpha_l(t_{r,j}) \left(\overline{\sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\right) \\ &= 2 {\mathcal{R}e}{\langle}A\delta_{t_r}, A \phi(\theta)-y{\rangle}.\\ \end{split}$$ Similarly, $$\begin{split} \frac{\partial g(\theta)}{\partial t_{r,j}} &= \frac{\partial }{\partial t_r} \sum_{l=1}^m \left|\sum_{i=1}^k a_i\alpha_l(t_i)-y_l\right|^2\\ &= \sum_{l=1}^m 2 {\mathcal{R}e}\left( a_r \partial_j\alpha_l(t_r) \left(\overline{\sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\right)\\ &= - 2 a_r{\mathcal{R}e}{\langle}A \delta_{t_r,j}^{\prime}, A \phi(\theta)-y{\rangle}.\\ \end{split}$$ For $H_{1,r,s}$, $$\begin{split} \frac{\partial^2 g(\theta)}{\partial a_r \partial a_s} &= \frac{\partial }{\partial a_s } \sum_{l=1}^m 2 {\mathcal{R}e}\left( \alpha_l(t_r) \left(\overline{ \sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\right)\\ &= \sum_{l=1}^m 2 {\mathcal{R}e}\left( \alpha_l(t_r) \overline{ \alpha_l(t_s)}\right).\\ \end{split}$$ For $H_{2,r,j_1,s,j_2}$, $$\begin{split} \frac{\partial^2 g(\theta)}{\partial t_{r,j_1} \partial t_{s,j_2}} &= \frac{\partial }{\partial t_{s,j_1}} \sum_{l=1}^m 2 {\mathcal{R}e}\left( a_r \partial_{j_1}\alpha_l(t_r) \left(\overline{ \sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\right)\\ &= \sum_{l=1}^m 2 {\mathcal{R}e}\left( a_r \partial_{j_1}\alpha_l(t_r) \left(\overline{ a_s \partial_{j_2}\alpha_l(t_s) }\right) \right) \\ &+ {\mathbf{1}}(r=s) \sum_{l=1}^m 2 {\mathcal{R}e}\left( a_r \partial_{j_2}\partial_{j_1}\alpha_l(t_r) \left(\overline{ \sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\right).\\ \end{split}$$ For $H_{12,r,s,j}$ $$\begin{split} \frac{\partial^2 g(\theta)}{\partial a_r \partial t_{s,j}} &= \frac{\partial}{\partial t_{s,j}} \sum_{l=1}^m 2 {\mathcal{R}e}(\alpha_l(t_r)) \left(\overline{ \sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\\ &= \sum_{l=1}^m 2 {\mathcal{R}e}\left( \alpha_l(t_r) \left(\overline{ a_s\partial_{j}\alpha_l(t_s)}\right)\right)\\ &+ 
{\mathbf{1}}(r=s) \sum_{l=1}^m 2 {\mathcal{R}e}\left( \partial_{j}\alpha_l(t_r) \left(\overline{ \sum_{i=1}^k a_i\alpha_l(t_i)-y_l}\right)\right).\\ \end{split}$$ Proofs for Section \[sec:kernel\_dipole\] {#proof22} ----------------------------------------- First remark that a generalized dipole $\nu= a \delta_{t} + b\delta_{t,v}^{\prime}$ with $\|v\|_2=1$ is the limit in the distributional sense of the dipoles $\nu^{\eta} = a\delta_t - b\frac{\delta_{t+\eta v}-\delta_t}{\eta}$ when $\eta \to 0$. Now let two generalized dipoles $\nu_1 = a_1 \delta_{t_1} +b_1 \delta_{t_1,v_1}^{\prime}, \nu_2 =a_2 \delta_{t_2} +b_2 \delta_{t_2,v_2}^{\prime}$. The $\nu_i$ are the limit (in the distributional sense) of a family of dipole $\nu_i^{\eta_i}$ for $\eta_i \to 0^+$. Let $f(t) = \rho(\|t\|_2)$. We have $${\langle}\nu_1^{\eta_1}, \nu_2^{\eta_2}{\rangle}_h = \int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}^d} f(t-s){\mathop{}\!\mathrm{d}}\nu_1^{\eta_1}(t) {\mathop{}\!\mathrm{d}}\nu_2^{\eta_2}(s).$$ Remark that by construction $ g_{\eta_1}(s) := \int_{{\mathbb{R}}^d} f(t-s) {\mathop{}\!\mathrm{d}}\nu_1^{\eta_1}(t) \to_{\eta_1 \to 0^+} g(s) := a_1 f(t_1-s) + b_1{{\boldsymbol \langle}}\delta_{t_1,v_1}^{\prime}, f(\cdot-s) {{\boldsymbol \rangle}}< +\infty$ where $g_{\eta_1}$ is in ${\mathcal{C}}^2$ and $g$ is in ${\mathcal{C}}^1$ thanks to the assumption on $h$ and $\rho$. Hence by boundedness of the integrals and the dominated convergence theorem, for any $\eta_2$, $${\langle}\nu_1^{\eta_1}, \nu_2^{\eta_2}{\rangle}_h \to_{\eta_1 \to 0^+} \int_{{\mathbb{R}}^d} g(s) {\mathop{}\!\mathrm{d}}\nu_2^{\eta_2}(s).$$ Moreover, by construction of $\nu_2^{\eta_2} $, and symmetry of $f$ (i.e. $f(t_1-t_2) = f(t_2-t_1)$), $$\begin{split} \int_{{\mathbb{R}}^d} g(s) {\mathop{}\!\mathrm{d}}\nu_2^{\eta_2}(s) &\to_{\eta_2 \to 0^+} a_1 a_2f( t_1-t_2 ) \\ & + a_2 b_1 {{\boldsymbol \langle}}\delta_{t_1,v_1}^{\prime},f(\cdot-t_2) {{\boldsymbol \rangle}}+a_1b_2{{\boldsymbol \langle}}\delta_{t_2,v_2}^{\prime},f(t_1-\cdot) {{\boldsymbol \rangle}}\\ &-b_1b_2 {{\boldsymbol \langle}}\delta_{t_2,v_2}^{\prime}, f_{v_1}^{\prime}(t_1- \cdot ) {{\boldsymbol \rangle}}\\ & = a_1 a_2f( t_1-t_2 ) - a_2 b_1 f_{v_1}^{\prime}(t_1-t_2) -a_1b_2f_{v_2}^{\prime}(t_2-t_1)\\ &-b_1b_2 f_{v_1,v_2}^{\prime \prime}(t_1- t_2 )\\ \end{split}$$ We define ${\langle}\nu_1, \nu_2{\rangle}_h := a_1 a_2f( t_1-t_2 ) - a_2 b_1 f_{v_1}^{\prime}(t_1-t_2) -a_1b_2f_{v_2}^{\prime}(t_2-t_1) -b_1b_2 f_{v_1,v_2}^{\prime \prime}(t_1- t_2 ) $. We just showed that $${\langle}\nu_1^{\eta_1}, \nu_2^{\eta_2}{\rangle}_h \to_{\eta_1 \to 0^+,\eta_2 \to 0+} {\langle}\nu_1, \nu_2{\rangle}_h .$$ Note that the value of ${\langle}\nu_1, \nu_2{\rangle}_h $ only depends on $\rho, \nu_1, \nu_2$. Using Lemma \[def:scalar\_dip\] with $t_1=t_2=t$, $b_1 = b_2 = 0$ and $a_1 =a_2 =1$ gives $$\|\delta_t\|_h^2 = \rho(0).$$ Using Lemma \[def:scalar\_dip\] with $t_1=t_2=t$, $b_1 =a_2 = 0$ and $a_1 =b_2 =1$ gives $$\begin{split} {\langle}\delta_t, \delta_{t,v}^{\prime}{\rangle}_h := -f_{v}^{\prime}(0) = -\lim_{\eta\to 0^+ } \frac{\rho(\eta\|v\|)-\rho(0)}{\eta} = -\rho'(0)=0. \end{split}$$ Using Lemma \[def:scalar\_dip\] with $t_1=t_2=t$, $b_1 = b_2 = 1$ and $a_1 =a_2 =0$ gives $$\|\delta_{t,v}^{\prime}\|_h^2 := -f_v''(0)= |\rho''(0)|.$$ Using the construction from the proof of Lemma \[def:scalar\_dip\], let two $\epsilon$-separated generalized dipole $\nu_1 , \nu_2$. The $\nu_i$ are the limit (in the distributional sense) of a family of $\epsilon$-separated dipole $\nu_i^{\eta_i}$ for $\eta_i \to 0^+$. 
With the hypothesis, we have $$\label{eq:dip2Generalizeddip1} {\langle}\nu_1^{\eta_1}, \nu_2^{\eta_2}{\rangle}_h \leq \mu \|\nu_1^{\eta_1}\|_h \|\nu_2^{\eta_2}\|_h.$$ Furthermore, $${\langle}\nu_1^{\eta_1}, \nu_2^{\eta_2}{\rangle}_h \to_{\eta_1 \to 0^+,\eta_2 \to 0+} {\langle}\nu_1, \nu_2{\rangle}_h .$$ Let $\nu= a \delta_{t} + b\delta_{t,v}^{\prime}$ with $\|v\|_2=1$ and $\nu^{\eta} = a\delta_t - b\frac{\delta_{t+\eta v}-\delta_t}{\eta} =\left(a+\frac{b}{\eta}\right)\delta_t - b\frac{\delta_{t+\eta v}}{\eta}$.We have $\|\nu\|_h^2 = a^2 +b^2|\rho''(0)| $ (with Lemma \[lem:kernel\_dirac\_properties\]) and $$\begin{split} \|\nu^\eta\|_h^2 &=\left(a+\frac{b}{\eta}\right)^2 +\left(\frac{b}{\eta}\right)^2 -2\left(a+\frac{b}{\eta}\right)\frac{b}{\eta}\rho(\eta) \\ & = a^2 +2\left(\frac{b}{\eta}\right)^2 +2\frac{ab}{\eta} -2\frac{ab}{\eta}\rho(\eta) - 2\left(\frac{b}{\eta}\right)^2\rho(\eta)\\ &= a^2 + 2\frac{ab}{\eta}(1-\rho(\eta)) +2\frac{b^2}{\eta^2}(1-\rho(\eta)).\\ \end{split}$$ But $\frac{1-\rho(\eta)}{\eta}=\frac{\rho(0)-\rho(\eta)}{\eta} \to -\rho'(0)$ when $\eta \to 0^+$, and $\rho'(0)=0$. Moreover, $\rho(\eta)=h(0)+\eta \rho'(0) + \frac{\eta^2}{2}\rho''(0)+o(\eta^2)=1- \frac{\eta^2}{2}|\rho''(0)|+o(\eta^2)$. Hence $\frac{1-\rho(\eta)}{\eta^2}\to_{\eta \to 0^+} \frac{1}{2}|\rho''(0)|$. We thus deduce that $\|\nu^\eta\|_h^2 \to a^2 + b^2 |\rho''(0)|=\|\nu\|_h$ when $\eta \to 0^+$. Hence, with such choice of $\nu_1^{\eta_1} $ $ \nu_2^{\eta_2}$, we can take the limit $\eta_1,\eta_2 \to 0$ in Equation  to get the result. Using Lemma \[lem:dip2Generalizeddip\], and the same proof as in Lemma \[lem:pyth\_dipole\], we get the result. Let $\nu_r = a_r\delta_{t_r}+b_r\delta_{t_r,v}^{\prime}$ the $\epsilon$-separated generalized dipoles. Similarly to Lemma \[lem:dip2Generalizeddip\], take $\nu_r^\eta= (a_r+\frac{b_r}{\eta})\delta_{t_r} - b_r\frac{\delta_{t_r +\eta v}}{\eta}$. For sufficiently small $\eta$ the $ \nu_r^\eta$ are $\epsilon$-separated dipoles, hence $\sum \nu_r^\eta \in \Sigma-\Sigma$ and $$\label{eq:RIP_int_dirac_deriv} \begin{split} (1-{\gamma}) \left\| \sum_{r=1,k}\nu_r^\eta \right\|_h^2 &\leq \left\|A( \sum_{r=1,k}\nu_r^\eta)\right\|_2^2 \leq (1+{\gamma})\left\| \sum_{r=1,k}\nu_r^\eta \right\|_h^2. \\ \end{split}$$ Now remark that $g_1(\eta)=\| \sum_{r=1,k}\nu_r^\eta \|_h^2$ and $g_2(\eta)=\|A( \sum_{r=1,k}\nu_r^\eta )\|_2^2 $ are continuous functions of $\eta$ that converge to $\| \sum_{r=1,k} (a_r\delta_{t_r}+b_r\delta_{t_r,v}^{\prime})\|_h^2 $ and $\|A(\sum_{r=1,k} (a_r\delta_{t_r}+b_r\delta_{t_r,v}^{\prime}))\|_2^2 $ when $\eta \to 0$: - For $g_1$, use the same proof as in Lemma \[lem:dip2Generalizeddip\] with the linearity of the limit. - For $g_2$: $$\begin{split} g_2(\eta) &= \sum_{l=1,m} \left|\sum_{r=1,k} \int \alpha_l(t)(a_r {\mathop{}\!\mathrm{d}}\delta_{t_r}(t)-\frac{b_r}{\eta}({\mathop{}\!\mathrm{d}}\delta_{t_r+\eta v}(t)-{\mathop{}\!\mathrm{d}}\delta_{t_r}(t)))\right|^2\\ &= \sum_{l=1,m} \left|\sum_{r=1,k} \left( \alpha_l(t_r)a_r -\frac{b_r}{\eta}(\alpha_l(t_r+\eta v)-\alpha_l(t_r)) \right)\right|^2\\ &\to_{\eta\to 0^+} \sum_{l=1,m} \left|\sum_{r=1,k} \left( \alpha_l(t_r)a_r -b_r(\alpha_l)_v'(t_r)\right)\right|^2\\ &=\left\|A(\sum_{r=1,k} a_r\delta_{t_r}+b_r\delta_{t_r,v}^{\prime})\right\|_2^2. \end{split}$$ Taking the limit of Equation  for $\eta \to 0$ yields the result. 
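As a quick numerical sanity check of the dipole-norm limit $\|\nu^\eta\|_h^2 \to a^2 + b^2|\rho''(0)|$ computed above, the following sketch assumes a Gaussian radial profile $\rho(r) = e^{-r^2/(2\sigma^2)}$ (which satisfies $\rho(0)=1$, $\rho'(0)=0$ and $|\rho''(0)| = 1/\sigma^2$); the kernel choice and the values of $a$, $b$, $\sigma$ are assumptions made only for this check.

```python
import numpy as np

sigma = 0.3
rho = lambda r: np.exp(-r**2 / (2 * sigma**2))   # Gaussian radial profile, rho(0)=1, rho'(0)=0
rho_pp0 = 1.0 / sigma**2                         # |rho''(0)| for this profile

a, b = 1.0, 0.4
limit = a**2 + b**2 * rho_pp0                    # predicted limit of ||nu^eta||_h^2

for eta in [1e-1, 1e-2, 1e-3]:
    # ||nu^eta||_h^2 for nu^eta = (a + b/eta) delta_t - (b/eta) delta_{t + eta v}, ||v||_2 = 1
    norm_sq = (a + b/eta)**2 + (b/eta)**2 - 2*(a + b/eta)*(b/eta)*rho(eta)
    print(f"eta = {eta:.0e}   ||nu^eta||_h^2 = {norm_sq:.6f}   (limit {limit:.6f})")
```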
We have $$\begin{split} \|A\delta_{t,v_1,v_2}^{\prime \prime}\|_2^2 & = \sum_{l=1,m} | (A \delta_{t,v_1,v_2}^{\prime \prime})_l|^2\\ & = \sum_{l=1,m} | \alpha_{l,v_1,v_2}^{\prime \prime}(t)|^2\\ &\leq m \sup_{l=1,m; t \in {\mathcal{B}}_2(R)} | \alpha_{l,v_1,v_2}^{\prime \prime}(t)|^2 \leq m D_{A,R}^2 \end{split}$$ where $D_{A,R}$ is given in , i.e. $D_{A,R}$ is the supremum of directional second derivatives of the $\alpha_l$ over ${\mathcal{B}}_2(R)$. We have $D_{A,R} <+\infty$ because the $\alpha_l$ are supposed to be in ${\mathcal{C}}^2({\mathcal{B}}_2(R))$. \[lem:rkhs\_convol\] Let $K$ be a symmetrical convolution kernel in ${\mathcal{C}}^2$ and $h_K: (t_1,t_2) \to h_K(t_1,t_2) = [K*K](t_1-t_2)$ (the convolution of $K$ by itself) then for any $x \in \Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$, we have $$\|x\|_{h_K}^2 = \|K* x\|_{L^2}^2.$$ Write $x = \sum a_i \delta_{t_i}$ and use the symmetry of $K$: $$\begin{split} \|K*x\|_{L^2}^2 = \int \left|\sum a_i K(t-t_i) \right|^2 {\mathop{}\!\mathrm{d}}t &= \sum_{i,j} a_i a_j \int K(t-t_i)K(t-t_j) {\mathop{}\!\mathrm{d}}t\\ &= \sum_{i,j} a_i a_j \int K(t)K(t+t_i-t_j) {\mathop{}\!\mathrm{d}}t\\ &= \sum_{i,j} a_i a_j [K*K] (t_i-t_j) = \|x\|_{h_K}^2. \end{split}$$ Proofs for Section \[sec:control\_Hessian\] {#proof23} ------------------------------------------- We will use the following Lemma on directional derivatives of Diracs. \[lem:sum\_directional\_dirac\_derivative\] Let $u, t_0\in {\mathbb{R}}^d$. Suppose $u \neq 0$. Then, $\sum_{i=1,d} u_i \delta_{t_{0},j}^{\prime} = \|u\|_2\delta_{t_{0},\frac{u}{\|u\|_2}}^{\prime}$. Let $f$ a function in ${\mathcal{C}}^2({\mathbb{R}}^d)$, we have $\int_{t\in{\mathbb{R}}^d}f(t) \sum_{i=1,d} u_i {\mathop{}\!\mathrm{d}}\delta_{t_{0},i}^{\prime}(t) = -\sum_{i=1,d} u_i\partial_i f(t_0) = -{\langle}u_i, \nabla f(t_0) {\rangle}= -\|u\|_2f_{\frac{u}{\|u\|_2}}'(t_0) $. Hence, $\sum_{i=1,d} u_i \delta_{t_{0},i}^{\prime} = \|u\|_2\delta_{t_{r},\frac{u}{\|u\|_2}}^{\prime}$ To prove Theorem \[th:min\_max\_eigen\_control\_H\], we control first the eigenvalues of $G$ in the decomposition $H = G +F$. \[le:min\_max\_eigen\_control\_G\] Suppose $h$ follows Assumption \[assum:kernel\_prop\]. Let $\theta = (a_1,..,a_k, t_1,..t_k) \in \Theta_{k,\epsilon}$ with $t \in \text{rint}{\mathcal{B}}_2(R)$. Let $H$ the Hessian of $g$ at $\theta$. Suppose $A$ has RIP $\gamma$ on $\Sigma_{k,\epsilon}-\Sigma_{k,\epsilon}$. We have $$\sup_{\|u\|_2 =1} u^TGu \leq 2(1+{\gamma})(1+(k-1)\mu)\max(1,(a_r^2|\rho''(0)|)_{r=1,l}); \\$$ $$\inf_{\|u\|_2 =1} u^TGu \geq 2(1-{\gamma})(1-(k-1)\mu)\min(1,(a_r^2|\rho''(0)|)_{r=1,l}).\\$$ where $G$ is defined in Proposition \[prop:Hessian\]. Let $u \in {\mathbb{R}}^{k(d+1)} $ such that $\|u\|_2=1$. We index $u$ as follows: $u_r \in {\mathbb{R}}$ for $r=1,k$. $u_r\in{\mathbb{R}}^{d}$ for $r=k+1,2k$ (it follows the indexing of $H$ and $G$ we used). 
Remark that $$\begin{split} u^TGu =& \sum_{r,s=1,k} u_r u_s G_{1,r,s} + \sum_{r=k+1,2k;j_1=1,d;s=k+1,2k;j_2=1,d} u_{r,j_1} u_{s,j_2} G_{2,r,j_1,s,j_2}\\ &+ \sum_{r=1,k;s=k+1,2k;j=1,d} u_r u_{s,j} G_{12,r,s,j} + \sum_{r=k+1,2k;j=1,d;s=1,k} u_{r,j} u_s G_{21,r,j,s}\\ =& 2\sum_{r,s=1,k} {\mathcal{R}e}{\langle}Au_r\delta_{t_r}, Au_s\delta_{t_s} {\rangle}\\ & + 2\sum_{r=k+1,2k;j_1=1,d;s=k+1,2k;j_2=1,d} {\mathcal{R}e}{\langle}Au_{r,j_1}a_{r-k}\delta_{t_{r-k},j_1}^{\prime}, Au_{s,j_2}a_{s-k}\delta_{t_{s-k},j_2}^{\prime} {\rangle}\\ &-2\sum_{r=1,k;s=k+1,2k;j=1,d} {\mathcal{R}e}{\langle}Au_r\delta_{t_r}, Au_{s,j}a_{s-k}\delta_{t_{s-k},j}^{\prime} {\rangle}\\ & - 2\sum_{r=k+1,2k;j=1,d;s=1,k} {\mathcal{R}e}{\langle}Au_{r,j}a_{r-k}\delta_{t_{r-k},j}^{\prime}, Au_s\delta_{t_{s}} {\rangle}\end{split}$$ Thus we have $$\begin{split} u^TGu =& 2\left\|A\sum_{r=1,k} u_r \delta_{t_r}\right\|_2^2 + 2\left\|A\sum_{r=k+1,2k;j=1,d} u_{r,j} a_{r-k}\delta_{t_{r-k},j}^{\prime}\right\|_2^2 \\ &-2{\mathcal{R}e}\left{\langle}A\sum_{r=1,k} u_r \delta_{t_r}, A\sum_{r=k+1,2k;j=1,d} u_{r,j} a_{r-k}\delta_{t_{r-k},j}^{\prime}\right{\rangle}\\ &-2{\mathcal{R}e}\left{\langle}A\sum_{r=k+1,2k;j=1,d} u_{r,j} a_{r-k}\delta_{t_{r-k},j}^{\prime}, A\sum_{r=1,k} u_r \delta_{t_{r}}\right{\rangle}\\ =& 2\left\|A\left( \sum_{r=1,k}u_r\delta_{t_r}- \sum_{r=k+1,2k;j=1,d} u_{r,j}a_{r-k}\delta_{t_{r-k},j}^{\prime}\right) \right\|_2^2 \\ =& 2\left\|A \left( \sum_{r=1,k}\left( u_r\delta_{t_r}- a_{r}\sum_{j=1,d}u_{r+k,j}\delta_{t_{r},j}^{\prime}\right)\right) \right\|_2^2. \\ \end{split}$$ Using Lemma \[lem:sum\_directional\_dirac\_derivative\], we have $\sum_{j=1,d} w_j \delta_{t_{r},j}^{\prime} = \|w\|_2\delta_{t_{r},\frac{w}{\|w\|_2}}^{\prime}$ and $$\label{eq:expr_G} \begin{split} u^TGu =& 2\left\|A\sum_{r=1,k} (u_r\delta_{t_r}- a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime})\right\|_2^2. \\ \end{split}$$ We use the lower RIP in Lemma \[lem:RIP\_derivative\], $$\begin{split} u^TGu & \geq 2(1-{\gamma})\left\|\sum_{r=1,k}( u_r\delta_{t_r} - a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime})\right\|_h^2 .\\ \end{split}$$ Then the hypothesis on $\|\cdot\|_h$ and Lemma \[lem:mutual\_Generalized\_dipoles\] yields $$\begin{split} \|\sum_{r=1,k}( u_r\delta_{t_r} - a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime})\|_h^2 & \\ \geq (1-(k-1)\mu) \sum_{r=1,k} &\| u_r\delta_{t_r} - a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime}\|_h^2 \end{split}$$ and $$\begin{split} u^TGu &\geq 2(1-{\gamma})(1-(k-1)\mu) \sum_{r=1,k} \| u_r\delta_{t_r}- a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime}\|_h^2\\ & \geq 2(1-{\gamma})(1-(k-1)\mu) \sum_{r=1,k} \left(| u_r|^2 -2a_r u_r \|u_{k+r}\|_2{\langle}\delta_{t_r},\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime} {\rangle}_h \right. \\ & \left. + a_r^2\|u_{k+r}\|_2^2 \|\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime}\|_h^2\right).\\ \end{split}$$ Then using Lemma \[lem:kernel\_dirac\_properties\]: $$\begin{split} u^TGu & \geq 2(1-{\gamma})(1-(k-1)\mu) \sum_{r=1,k} \left(| u_r|^2 + a_r^2 \|u_{k+r}\|_2^2 |\rho''(0)| \right)\\ &\geq 2(1-{\gamma})(1-(k-1)\mu) \inf_{\|u\|_2=1} \sum_{r=1,k} \left(| u_r|^2 + \| u_{k+r}\|_2^2a_r^2|\rho''(0)| \right). 
\\ & = 2(1-{\gamma})(1-(k-1)\mu)\min(1,(a_r^2|\rho''(0)|)_{r=1,l}) .\\ \end{split}$$ Similarly, using the upper RIP in Lemma \[lem:RIP\_derivative\]: $$\begin{split} u^TGu & \leq 2(1+{\gamma})\|\sum_{r=1,k}( u_r\delta_{t_r}- a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime})\|_h^2 .\\ \end{split}$$ Then the hypothesis on $\|\cdot\|_h$ yields (Lemma \[lem:mutual\_Generalized\_dipoles\]) $$\begin{split} \|\sum_{r=1,k}( u_r\delta_{t_r}- a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime})\|_2^2 & \\ \leq (1+(k-1)\mu) \sum_{r=1,k}& \| u_r\delta_{t_r}{\textcolor{black}{- a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime}}}\|_h^2 \\ \end{split}$$ and $$\begin{split} u^TGu &\leq 2(1+{\gamma})(1+(k-1)\mu) \sum_{r=1,k} \| u_r\delta_{t_r}{\textcolor{black}{- a_{r}\|u_{r+k}\|_2\delta_{t_{r},\frac{u_{r+k}}{\|u_{r+k}\|_2}}^{\prime}}}\|_h^2.\\ \end{split}$$ Then using Lemma \[lem:kernel\_dirac\_properties\]: $$\begin{split} u^TGu & \leq 2(1+{\gamma})(1+(k-1)\mu) \sum_{r=1,k} \left(| u_r|^2 + a_r^2 \|u_{k+r}\|_2^2 |\rho''(0)| \right) \\ &\leq 2(1+{\gamma})(1+(k-1)\mu) \sup_{\|u\|_2=1} \sum_{r=1,k} \left(| u_r|^2 + \| u_{k+r}\|_2^2a_r^2|\rho''(0)| \right) \\ &= 2(1+{\gamma})(1+(k-1)\mu)\max(1,(a_r^2|\rho''(0)|)_{r=1,l}). \\ \end{split}$$ Let $\theta^*$ a minimizer of . Consider $H$ the Hessian of $g$ at $\theta$. We recall that $H=G+F$ (see Proposition \[prop:Hessian\]). Using Lemma \[le:min\_max\_eigen\_control\_G\], we just need to bound the operator norm of $F$ and then to combine it with the bounds on the eigenvalues of $G$ to get bounds on eigenvalues of $H=G+F$. We use Lemma \[lem:upper\_RIP\_second\_deriv\], the Cauchy-Schwartz and triangle inequalities. We have $\|A \delta_{t_r,j_1,j_2}^{\prime \prime} \|_2 \leq \sqrt{m}D_{A,R} $ and $$\begin{split} |F_{2,r,j_1,s,j_2}| & \leq{\mathbf{1}}(r=s)2|a_r|\| A \delta_{t_r,j_1,j_2}^{\prime \prime}\|_2 \|A\phi(\theta)-y\|_2 .\\ & \leq{\mathbf{1}}(r=s)2|a_r|\sqrt{m}D_{A,R}\|A\phi(\theta)-A\phi(\theta^*) + A\phi(\theta^*)-y\|_2 .\\ & \leq{\mathbf{1}}(r=s)2|a_r|\sqrt{m}D_{A,R} (\|A\phi(\theta)-A\phi(\theta^*)\|_2+ \|e\|_2).\\ \end{split}$$ Similarly, with Lemma \[lem:RIP\_derivative\], $$\begin{split} F_{12,r,s,j} &\leq {\mathbf{1}}(r=s) 2\sqrt{1+{\gamma}}\|\delta_{t_r,j}^{\prime}\|_h \|A\phi(\theta)-y\|_2 \\ &\leq {\mathbf{1}}(r=s) 2\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} (\|A\phi(\theta)-A\phi(\theta^*)\|_2+ \|e\|_2). \\ \end{split}$$ Let $\|\cdot\|_{op}$ be the $\ell^2$ operator norm of a matrix. With Gerschgorin circle theorem [@Golub_2012], we have $$\|F\|_{op} \leq \max_{l} \|F_{l,:}\|_1$$ where $F_{l,:}$ is the $l$-th row of $F$. We get $$\begin{split} \|F\|_{op} &\leq \max( d\max_{r,s,j} |F_{12,r,s,j}| , \max_{r,s,j} |F_{12,r,s,j}| + d \max_{r,j_1,s,j_2} |F_{2,r,j_1,s,j_2} | )\\ &\leq (d+1)\max( \max_{r,s,j} |F_{12,r,s,j}| , \max_{r,j_1,s,j_2} |F_{2,r,j_1,s,j_2} |)\\ &\leq 2(d+1)\max( \max_{r}|a_r| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (\|A\phi(\theta)-A\phi(\theta^*)\|_2+ \|e\|_2).\\ \end{split}$$ Hence, using Weyl’s perturbation inequalities on $H = G + F$, i.e. $\lambda_{min}(H) \geq \lambda_{min}(G)-\lambda_{max}(F)$ and $\lambda_{max}(H) \leq \lambda_{max}(G) + \lambda_{max}(F)$, we get the result. First, observe that at $\theta_0$, $F =0$. The upper bound is a direct consequence of Theorem \[le:min\_max\_eigen\_control\_G\]. 
We show the result in the case $ \max(1,(a_r^2|\rho''(0)|)_{r=1,l}) \neq 1$ and $ \min(1,(a_r^2|\rho''(0)|)_{r=1,l}) \neq 1 $ (the proof is similar in the other case). For the lower bound let $v \in {\mathbb{R}}^{k(d+1)}$ and $i_0 = \arg \max_{r=1,l} (a_r^2|\rho''(0)|)$, set $\|v_{i_0}\|_2 =1$ and $v_j = 0$ for $j \neq i_0$. With Equation , we have $$\sup_{\|u\|_2 =1} u^THu \geq v^THv \geq 2(1-{\gamma}) \max(1,(a_r^2|\rho''(0)|)_{r=1,l}). \\$$ Similarly, let $v \in {\mathbb{R}}^{k(d+1)}$ and $i_0 = \arg \min((a_r^2|\rho''(0)|)_{r=1,l})$, $\|v_{i_0}\|_2 =1$ and $v_j = 0$ for $j \neq i_0$. $$\inf_{\|u\|_2 =1} u^THu \leq 2(1+{\gamma})\min(1,(a_r^2|\rho''(0)|)_{r=1,l}).\\$$ Proofs for Section \[sec:basin\] {#proof3} -------------------------------- Let $\theta^* =(a_1,...,a_k,t_1,..t_k)\in \Theta_{k,\epsilon}$ the global minimum of $g$ and $\theta = (b_1,...,b_k,s_1,..s_k) \in \Lambda_{\theta^*,\beta} $. First notice that $ \|\theta-\theta^*\|_2^2 \leq \beta^2$ implies that for any $j$, we have $|a_j- b_j|^2 \leq \beta^2 $ and $$\label{ineq:th2} |a_1|-\beta \leq |a_j|-\beta \leq |b_j| \leq |a_j|+\beta\leq |a_k|+\beta.$$ We also have $\|s_j- t_j\|_2 <\beta \leq \frac{\epsilon}{4}$. Hence for $i\neq j$ we have $\|s_i-s_j\|_2 = \|s_i -t_i +t_i -t_j +t_j-s_j\|_2 \geq \|t_i -t_j\|_2 -\|t_i-s_i\|_2 -\|t_j-s_j\|_2 > \epsilon - 2\epsilon/4 = \epsilon/2$ and $\phi(\theta) \in \Sigma_{k,\frac{\epsilon}{2}}$. We use Theorem \[th:min\_max\_eigen\_control\_H\] to get the bound on the min and max eigenvalues of the Hessian. We can then plug Inequality  into the one of Theorem \[th:min\_max\_eigen\_control\_H\]. Finally we notice the fact that $\sup_{\theta \in \Lambda_{\theta^*,\beta}} \|A\phi(\theta)-A\phi(\theta^*)\|_2$ exists because $\Lambda_{\theta^*,\beta}$ is bounded. This is a direct consequence of Theorem \[th:min\_max\_eigen\_control\_H\]. The proof follows the same lines as the one of Theorem \[th:basin\]. The set $\Lambda =\Lambda_{\theta^*,\beta}$ is an open set where the Hessian of $g$ at $ \Lambda$ is positive as long as $\xi \leq 2 (1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1|-\beta)^2|\rho''(0)|)$ with Theorem \[th:basin\]. In this case $g$ is convex on $\Lambda$. Theorem \[th:basin\] also gives a uniform bound for the operator norm of the Hessian: $\|H\|_{op} \leq 2(1+{\gamma})(1+(k-1)\mu)\max(1,(|a_k|+\beta)^2|\rho''(0)|)+\xi$ and $g$ has Lipschitz gradient. and we deduce from Corollary \[cor:convergence\_gradient\_descent\] that $\Lambda$ is a basin of attraction. Hence we just need to show that $\xi \leq 2(1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1|-\beta)^2|\rho''(0)|)$. Let $\theta \in \Lambda$, we have, with the RIP hypothesis, $$\begin{split} \xi(\theta)&:=2(d+1)\max( \max_{r}|a_r| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) \|A\phi(\theta)-A\phi(\theta^*)\|_2 \\ \leq& 2(d+1)\max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) \sqrt{1+{\gamma}} \|\phi(\theta)-\phi(\theta^*)\|_h \\ \leq& 2(d+1)\max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) \sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \sqrt{\sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2} \\ \end{split}$$ where we wrote $\theta^* = \sum_i a_i \delta_{t_i}$ and $\theta = \sum_i b_i \delta_{s_i}$ such that $|s_i-t_i| \leq \epsilon/4$. 
We now bound the term $\sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2$: $$\begin{split} \sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2 = & \sum_i a_i^2 +b_i^2 -2 a_i b_i\rho(\|s_i-t_i\|_2) \\ =& \sum_i \rho(\|s_i-t_i\|_2) |a_i- b_i|^2 + (1-\rho(\|s_i-t_i\|_2) )\sum_i a_i^2 +b_i^2 \\ \end{split}$$ Using the hypothesis that $\|\theta-\theta^*\|^2 \leq \beta^2$ and $\beta \leq |a_1|/2$, we have $|b_i| \leq |a_i| + \beta \leq \frac{3}{2} |a_i|$. With the assumption on $h$ (and $\rho$), $$\begin{split} \sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2 \leq & \beta^2 + \frac{|\rho''(0)|}{2}\beta^2 \frac{13}{4}\|a^*\|_2^2 \\ \leq & \beta^2 + \frac{|\rho''(0)|}{2}\beta^2 4\|a^*\|_2^2 \\ \leq & \beta^2( 1 + 2|\rho''(0)| \|a^*\|_2^2 ) \\ \end{split}$$ where $a^* = (a_1,...,a_k)$. The fact that $\beta \leq |a_1|/2$ implies $$\begin{split} &\frac{\xi(\theta)}{\min(1,(|a_1|-\beta)^2|\rho''(0)|)}\\ & \leq \frac{2 (d+1)\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) \sqrt{ 1 + 2|\rho''(0)|\|a^*\|_2^2 }\beta }{\min(1,|a_1|^2|\rho''(0)|/4)} \end{split}$$ Hence using the hypothesis that $$\beta \leq \frac{ (1-{\gamma})(1-(k-1)\mu)\min(1,|a_1|^2|\rho''(0)|/4) }{ (d+1)\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) \sqrt{1 + 2|\rho''(0)|\|a^*\|_2^2}}$$ we have $$\begin{split} \xi(\theta)& \leq 2 (1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1|(1-\beta))^2|\rho''(0)|). \end{split}$$ Following the same argument as Corollary \[cor:basin\_noisy\], we just need to show that $\xi \leq 2(1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1|-\beta))^2|\rho''(0)|)$. Let $\theta \in \Lambda_{\theta^*,\beta}$, we have, with the RIP hypothesis, $$\begin{split} \xi(\theta)&:=2(d+1)\max( \max_{r}|a_r| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} )( \|A\phi(\theta)-A\phi(\theta^*)\|_2 +\|e\|_2)\\ \leq& 2(d+1)\max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (\sqrt{1+{\gamma}} \|\phi(\theta)-\phi(\theta^*)\|_h +\|e\|_2) \\ \leq& 2(d+1)\max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \sqrt{\sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2}+\|e\|_2) \\ \end{split}$$ where we wrote $\theta^* = \sum_i a_i \delta_{t_i}$ and $\theta = \sum_i b_i \delta_{s_i}$ such that $|s_i-t_i| \leq \epsilon/4$. Similarly as in Corollary \[cor:basin\_noisy\], we bound the term $\sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2$: $$\begin{split} \sum_i\|a_i\delta_{t_i}-b_i\delta_{s_i}\|_h^2 \leq & \beta^2( 1 + 2|\rho''(0)| \|a^*\|_2^2 ) \\ \end{split}$$ The fact that $\beta \leq |a_1|/2$ and $\|e\|_2 \leq \sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \beta$ implies $$\begin{split} &\frac{\xi(\theta)}{\min(1,(|a_1|-\beta)^2|\rho''(0)|)}\\ & \leq \frac{2 (d+1)\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (1+ \sqrt{ 1 + 2|\rho''(0)| \|a^*\|_2^2 })\beta }{\min(1,|a_1|^2|\rho''(0)|/4)} \end{split}$$ Hence using the hypothesis that $$\beta \leq \frac{ (1-{\gamma})(1-(k-1)\mu)\min(1,|a_1|^2|\rho''(0)|/4) }{ (d+1)\sqrt{1+{\gamma}} \sqrt{1+(k-1)\mu} \max( |a_k| \sqrt{m}D_{A,R} ,\sqrt{1+{\gamma}}\sqrt{|\rho''(0)|} ) (1+ \sqrt{ 1 + 2|\rho''(0)| \|a^*\|_2^2 })},$$ we have $$\begin{split} \xi(\theta)& \leq 2 (1-{\gamma})(1-(k-1)\mu)\min(1,(|a_1|(1-\beta))^2|\rho''(0)|). \end{split}$$ Proofs for Section \[sec:projected\_gradient\] {#proof4} ---------------------------------------------- Remark that $g(\theta)$ does not depend on the ordering of the positions. 
Reorder $\theta_0 =(a,t )$ and $\theta_1=(b,s)$ such that $t_1< t_2...<t_k$ and $s_1<s_2...<s_k$. Consider the function $g_1(\lambda) = g(\theta_\lambda)$ with $\theta_\lambda = (1-\lambda) \theta_0 + \lambda \theta_1$. Remark that $g_1$ is a continuous function of $\lambda$ taking values $g_1(0)= g(\theta_0)$ and $g_1(1)=g(\theta_1)$. Hence, with the intermediate value theorem, there is $\lambda$ such that $g(\theta_\lambda)=g_1(\lambda) = \alpha$. Moreover, denoting $\theta_\lambda = (a_\lambda, t_\lambda)$, we have, using the sorting of $t$ and $s$, for $1 \leq i < k $, $$\begin{split} |t_{\lambda,i+1} -t_{\lambda,i}| &= |(1-\lambda)t_{i+1} + \lambda s_{i+1} -(1-\lambda)t _{i} - \lambda s_{i}| \\ &= (1-\lambda)|t_{i+1}-t_i| +\lambda |s_{i+1}-s_i| > (1-\lambda)\epsilon + \lambda \epsilon = \epsilon.\\ \end{split}$$ Hence $\theta_\lambda \in \Theta_{k,\epsilon}$. [^1]: Contact author : [[email protected]]([email protected])
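The following purely illustrative Python sketch complements Section \[sec:projected\_gradient\]: it runs the descent $\theta_{i+1} = P_{\Theta_{k,\epsilon}}(\theta_i - \tau \nabla g(\theta_i))$ for $d=1$ with a heuristic merge-type projection onto the separation constraint. The Fourier measurement model, the step size, the initialization, and the merging rule are assumptions made for the sketch, not constructions from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup (all values are assumptions): m random Fourier measurements of k spikes.
m, k, eps = 64, 3, 0.1
omega = rng.normal(scale=5.0, size=m)            # measurement frequencies
t_true = np.array([0.2, 0.5, 0.8])
a_true = np.array([1.0, -0.7, 1.3])

def forward(a, t):
    """A phi(theta) for Fourier features alpha_l(t) = exp(i omega_l t) / sqrt(m)."""
    return (np.exp(1j * np.outer(omega, t)) @ a) / np.sqrt(m)

y = forward(a_true, t_true)                      # noiseless measurements

def grad_g(a, t):
    """Gradient of g(theta) = ||A phi(theta) - y||_2^2 with respect to (a, t)."""
    E = np.exp(1j * np.outer(omega, t)) / np.sqrt(m)          # m x k matrix of alpha_l(t_r)
    res = E @ a - y
    grad_a = 2 * np.real(E.conj().T @ res)
    grad_t = 2 * a * np.real((1j * omega[:, None] * E).conj().T @ res)
    return grad_a, grad_t

def merge_close(a, t, eps):
    """Heuristic 'projection' toward Theta_{k,eps}: merge spikes closer than eps."""
    order = np.argsort(t)
    a, t = a[order], t[order]
    a_out, t_out = [a[0]], [t[0]]
    for ai, ti in zip(a[1:], t[1:]):
        if ti - t_out[-1] < eps:
            w = abs(a_out[-1]) + abs(ai) + 1e-12
            t_out[-1] = (abs(a_out[-1]) * t_out[-1] + abs(ai) * ti) / w   # weighted position
            a_out[-1] += ai                                               # summed amplitude
        else:
            a_out.append(ai)
            t_out.append(ti)
    return np.array(a_out), np.array(t_out)

# Gradient descent from an initialization assumed to lie in the basin of attraction.
a_est = a_true + 0.1 * rng.standard_normal(k)
t_est = t_true + 0.01 * rng.standard_normal(k)
tau = 5e-3                                       # small step size, chosen by hand for stability
for _ in range(3000):
    ga, gt = grad_g(a_est, t_est)
    a_est, t_est = a_est - tau * ga, t_est - tau * gt
    a_est, t_est = merge_close(a_est, t_est, eps)

print("estimated positions :", np.round(np.sort(t_est), 3))
print("true positions      :", np.round(np.sort(t_true), 3))
```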
This work presents a new model to predict complex time series. It is based on two concepts coming from both Chaos Theory and Artificial Intelligence. Specifically, it uses both the phase space representation of observables and an Artificial Neural Network (ANN) for predicting the resulting variables in such a space. For the case when a chaotic dynamical behavior is identified via nonlinear time series analysis, the problem reduces to training the ANN. Although it is not required to identify such a behavior in order to apply the model, it is highly suitable given the results obtained in this work. In this light, it is noted that for an observable that does not come from a dynamical system showing low-dimensional chaos, the results suggest a poor efficiency in the prediction application. In general, the model implies an optimization problem, since in order to achieve an adequate phase space representation it is necessary to estimate the embedding dimension (m) and the time delay (t). Such parameters, along with two others related to the topology of the ANN, form a four-dimensional search space which in this case was explored in an exhaustive way.
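As a rough illustration of the approach described above (not the authors' implementation), the following Python sketch builds a delay embedding of a scalar observable and fits a small feed-forward network to predict the next value. The chaotic series (a logistic map), the embedding dimension, the time delay, and the network topology are all stand-in assumptions; in the work summarized here these parameters form the search space that is explored exhaustively.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Scalar chaotic observable used as a stand-in series: the logistic map.
N = 2000
x = np.empty(N)
x[0] = 0.3
for n in range(N - 1):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

def delay_embed(series, m_dim, tau):
    """Phase-space reconstruction: row j is (x_j, x_{j+tau}, ..., x_{j+(m-1)tau}),
    with the one-step-ahead value as the prediction target."""
    n = len(series) - (m_dim - 1) * tau - 1
    X = np.column_stack([series[i * tau : i * tau + n] for i in range(m_dim)])
    y = series[(m_dim - 1) * tau + 1 : (m_dim - 1) * tau + 1 + n]
    return X, y

m_dim, tau = 3, 1                                # embedding dimension and time delay (searched over)
X, y = delay_embed(x, m_dim, tau)
split = int(0.8 * len(X))

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)  # ANN topology (searched over)
net.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((net.predict(X[split:]) - y[split:]) ** 2))
print("one-step-ahead test RMSE:", rmse)
```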
--- abstract: 'A new class of cylindrically symmetric inhomogeneous cosmological models for perfect fluid distribution with electromagnetic field is obtained in the context of Lyra’s geometry. We have obtained two types of solutions by considering the uniform as well as time dependent displacement field. The source of the magnetic field is due to an electric current produced along the z-axis.Only $F_{12}$ is a non-vanishing component of electromagnetic field tensor. To get the deterministic solution, it has been assumed that the expansion $\theta$ in the model is proportional to the shear $\sigma$. It has been found that the solutions are consistent with the recent observations of type Ia supernovae and the displacement vector $\beta(t)$ affects entropy. Physical and geometric aspects of the models are also discussed in presence and absence of magnetic field.' --- \ \ \ \ \ \ PACS: [98.80.Jk, 98.80.-k]{}\ Keywords: [Cosmology; cylindrically symmetric universe; inhomogeneous models; Lyra’s geometry ]{} Introduction and Motivations ============================ The inhomogeneous cosmological models play a significant role in understanding some essential features of the universe such as the formation of galaxies during the early stages of evolution and process of homogenization. The early attempts at the construction of such models have been done by Tolman [@ref1] and Bondi [@ref2] who considered spherically symmetric models. Inhomogeneous plane-symmetric models were considered by Taub [@ref3; @ref4] and later by Tomimura [@ref5], Szekeres [@ref6], Collins and Szafron [@ref7], Szafron and Collins [@ref8]. Senovilla [@ref9] obtained a new class of exact solutions of Einstein’s equations without big bang singularity, representing a cylindrically symmetric, inhomogeneous cosmological model filled with perfect fluid which is smooth and regular everywhere satisfying energy and causality conditions. Later, Ruiz and Senovilla [@ref10] have examined a fairly large class of singularity free models through a comprehensive study of general cylindrically symmetric metric with separable function of $r$ and $t$ as metric coefficients. Dadhich et al. [@ref11] have established a link between the FRW model and the singularity free family by deducing the latter through a natural and simple in-homogenization and anisotropization of the former. Recently, Patel et al. [@ref12] have presented a general class of inhomogeneous cosmological models filled with non-thermalized perfect fluid assuming that the background space-time admits two space-like commuting Killing vectors and has separable metric coefficients. Singh, Mehta and Gupta [@ref13] obtained inhomogeneous cosmological models of perfect fluid distribution with electro-magnetic field. Recently, Pradhan et al. [@ref14] have investigated cylindrically-symmetric inhomogeneous cosmological models in various contexts. The occurrence of magnetic field on galactic scale is a well-established fact today, and its importance for a variety of astrophysical phenomena is generally acknowledged as pointed out by Zeldovich et al. [@ref15]. Also Harrison [@ref16] suggests that magnetic field could have a cosmological origin. As a natural consequences, we should include magnetic fields in the energy-momentum tensor of the early universe. The choice of anisotropic cosmological models in Einstein system of field equations leads to the cosmological models more general than Robertson-Walker model [@ref17]. 
The presence of primordial magnetic field in the early stages of the evolution of the universe is discussed by many [@ref18]$-$[@ref27]. Strong magnetic field can be created due to adiabatic compression in clusters of galaxies. Large-scale magnetic field gives rise to anisotropies in the universe. The anisotropic pressure created by the magnetic fields dominates the evolution of the shear anisotropy and decays slowly as compared to the case when the pressure is held isotropic [@ref28; @ref29]. Such fields can be generated at the end of an inflationary epoch [@ref30]$-$[@ref34]. Anisotropic magnetic field models have significant contribution in the evolution of galaxies and stellar objects. Bali and Ali [@ref35] obtained a magnetized cylindrically symmetric universe with an electrically neutral perfect fluid as the source of matter. Pradhan et al. [@ref36] have investigated magnetized cosmological models in various contexts. In 1917 Einstein introduced the cosmological constant into his field equations of general relativity in order to obtain a static cosmological model since, as is well known, without the cosmological term his field equations admit only non-static solutions. After the discovery of the red-shift of galaxies and explanation thereof Einstein regretted for the introduction of the cosmological constant. Recently, there has been much interest in the cosmological term in context of quantum field theories, quantum gravity, super-gravity theories, Kaluza-Klein theories and the inflationary-universe scenario. Shortly after Einstein’s general theory of relativity Weyl [@ref37] suggested the first so-called unified field theory based on a generalization of Riemannian geometry. With its backdrop, it would seem more appropriate to call Weyl’s theory a geometrized theory of gravitation and electromagnetism (just as the general theory was a geometrized theory of gravitation only), instead a unified field theory. It is not clear as to what extent the two fields have been unified, even though they acquire (different) geometrical significance in the same geometry. The theory was never taken seriously inasmuchas it was based on the concept of non-integrability of length transfer; and, as pointed out by Einstein, this implies that spectral frequencies of atoms depend on their past histories and therefore have no absolute significance. Nevertheless, Weyl’s geometry provides an interesting example of non-Riemannian connections, and recently Folland [@ref38] has given a global formulation of Weyl manifolds clarifying considerably many of Weyl’s basic ideas thereby. In 1951 Lyra [@ref39] proposed a modification of Riemannian geometry by introducing a gauge function into the structure-less manifold, as a result of which the cosmological constant arises naturally from the geometry. This bears a remarkable resemblance to Weyl’s geometry. But in Lyra’s geometry, unlike that of Weyl, the connection is metric preserving as in Remannian; in other words, length transfers are integrable. Lyra also introduced the notion of a gauge and in the “normal” gauge the curvature scalar in identical to that of Weyl. In consecutive investigations Sen [@ref40], Sen and Dunn [@ref41] proposed a new scalar-tensor theory of gravitation and constructed an analog of the Einstein field equations based on Lyra’s geometry. 
It is, thus, possible [@ref40] to construct a geometrized theory of gravitation and electromagnetism much along the lines of Weyl’s “unified” field theory, however, without the inconvenience of non-integrability length transfer. Halford [@ref42] has pointed out that the constant vector displacement field $\phi_i$ in Lyra’s geometry plays the role of cosmological constant $\Lambda$ in the normal general relativistic treatment. It is shown by Halford [@ref43] that the scalar-tensor treatment based on Lyra’s geometry predicts the same effects within observational limits as the Einstein’s theory. Several authors Sen and Vanstone [@ref44], Bhamra [@ref45], Karade and Borikar [@ref46], Kalyanshetti and Wagmode [@ref47], Reddy and Innaiah [@ref48], Beesham [@ref49], Reddy and Venkateswarlu [@ref50], Soleng [@ref51], have studied cosmological models based on Lyra’s manifold with a constant displacement field vector. However, this restriction of the displacement field to be constant is merely one for convenience and there is no [*a priori*]{} reason for it. Beesham [@ref52] considered FRW models with time dependent displacement field. He has shown that by assuming the energy density of the universe to be equal to its critical value, the models have the $k=-1$ geometry. Singh and Singh [@ref53]$-$ [@ref56], Singh and Desikan [@ref57] have studied Bianchi-type I, III, Kantowaski-Sachs and a new class of cosmological models with time dependent displacement field and have made a comparative study of Robertson-Walker models with constant deceleration parameter in Einstein’s theory with cosmological term and in the cosmological theory based on Lyra’s geometry. Soleng [@ref51] has pointed out that the cosmologies based on Lyra’s manifold with constant gauge vector $\phi$ will either include a creation field and be equal to Hoyle’s creation field cosmology [@ref56]$-$ [@ref60] or contain a special vacuum field, which together with the gauge vector term, may be considered as a cosmological term. In the latter case the solutions are equal to the general relativistic cosmologies with a cosmological term. Recently, Pradhan et al. [@ref61], Casama et al. [@ref62], Rahaman et al. [@ref63], Bali and Chandani [@ref64], Kumar and Singh [@ref65], Singh [@ref66] and Rao, Vinutha and Santhi [@ref67] have studied cosmological models based on Lyra’s geometry in various contexts. With these motivations, in this paper, we have obtained exact solutions of Einstein’s field equations in cylindrically symmetric inhomogeneous space-time within the frame work of Lyra’s geometry in the presence and absence of magnetic field for uniform and time varying displacement vector. This paper is organized as follows. In Section $1$ the motivation for the present work is discussed. The metric and the field equations are presented in Section $2$, in Section $3$ the solution of field equations, the Section $4$ contains the solution of uniform displacement field ($\beta = \beta_{0}$, constant). The Section 5 deals with the solution with time varying displacement field ($\beta = \beta(t)$). Subsections $5.1, 5.2$ and $5.3$ describe the solutions of Empty Universe, Zeldovich Universe and Radiating Universe with the physical and geometric aspects of the models respectively. The solutions in absence of magnetic field are given in Section $6$. Sections $7$ and $8$ deal with the solutions for uniform and time dependent displacement field. Finally, in Section $9$ discussion and concluding remarks are given. 
The Metric and Field Equations ============================== We consider the cylindrically symmetric metric in the form $$\label{eq1} ds^{2} = A^{2}(dx^{2} - dt^{2}) + B^{2} dy^{2} + C^{2} dz^{2},$$ where $A$ is the function of $t$ alone and $B$ and $C$ are functions of $x$ and $t$. The energy momentum tensor is taken as has the form $$\label{eq2} T^{j}_{i} = (\rho + p)u_{i} u^{j} + p g^{j}_{i} + E^{j}_{i},$$ where $\rho$ and $p$ are, respectively, the energy density and pressure of the cosmic fluid,and $u_{i}$ is the fluid four-velocity vector satisfying the condition $$\label{eq3} u^{i} u_{i} = -1, ~ ~ u^{i} x_{i} = 0.$$ In Eq. (\[eq2\]), $E^{j}_{i}$ is the electromagnetic field given by Lichnerowicz [@ref68] $$\label{eq4} E^{j}_{i} = \bar{\mu}\left[h_{l}h^{l}\left(u_{i}u^{j} + \frac{1}{2}g^{j}_{i}\right) - h_{i}h^{j}\right],$$ where $\bar{\mu}$ is the magnetic permeability and $h_{i}$ the magnetic flux vector defined by $$\label{eq5} h_{i} = \frac{1}{\bar{\mu}} \, {^*}F_{ji} u^{j},$$ where the dual electromagnetic field tensor $^{*}F_{ij}$ is defined by Synge [@ref69] $$\label{eq6} ^{*}F_{ij} = \frac{\sqrt{-g}}{2} \epsilon_{ijkl} F^{kl}.$$ Here $F_{ij}$ is the electromagnetic field tensor and $\epsilon_{ijkl}$ is the Levi-Civita tensor density.\ The co-ordinates are considered to be co-moving so that $u^{1}$ = $0$ = $u^{2}$ = $u^{3}$ and $u^{4} = \frac{1}{A}$. If we consider that the current flows along the $z$-axis, then $F_{12}$ is the only non-vanishing component of $F_{ij}$. The Maxwell’s equations $$\label{eq7} F_[ij;k] = 0,$$ $$\label{eq8} \left[\frac{1}{\bar{\mu}}F^{ij}\right]_{;j} = 4 \pi J^{i},$$ require that $F_{12}$ is the function of x-alone. We assume that the magnetic permeability is the functions of $x$ and $t$ both. Here the semicolon represents a covariant differentiation\ The field equations (in gravitational units $c = 1$, $G = 1$), in normal gauge for Lyra’s manifold, obtained by Sen [@ref4] as $$\label{eq9} R_{ij} - \frac{1}{2} g_{ij} R + \frac{3}{2} \phi_i \phi_j - \frac{3}{4} g_{ij} \phi_k \phi^k = - 8 \pi T_{ij},$$ where $\phi_{i}$ is the displacement field vector defined as $$\label{eq10} \phi_{i} = (0, 0, 0, \beta),$$ where $\beta$ is either a constant or a function of $t$. The other symbols have their usual meaning as in Riemannian geometry.\ For the line-element (\[eq1\]), the field Eq. (\[eq9\]) with Eqs. 
(\[eq2\]) and (\[eq10\]) lead to the following system of equations $$\frac{1}{A^{2}}\left[- \frac{B_{44}}{B} - \frac{C_{44}}{C} + \frac{A_{4}}{A} \left(\frac{B_{4}}{B} +\frac{C_{4}}{C}\right) - \frac{B_{4} C_{4}}{B C} + \frac{B_{1}C_{1}}{BC}\right] - \frac{3}{4}\beta^{2}$$ $$\label{eq11} = 8 \pi \left(p + \frac{F^{2}_{12}}{2\bar{\mu} A^{2} B^{2}} \right),$$ $$\label{eq12} \frac{1}{A^{2}}\left(\frac{A^{2}_{4}}{A^{2}}- \frac{A_{44}}{A} - \frac{C_{44}}{C} + \frac{C_{11}}{C} \right) - \frac{3}{4}\beta^{2} = 8 \pi \left(p + \frac{F^{2}_{12}} {2\bar{\mu} A^{2} B^{2}} \right),$$ $$\label{eq13} \frac{1}{A^{2}}\left(\frac{A^{2}_{4}}{A^{2}} - \frac{A_{44}}{A} - \frac{B_{44}}{B} + \frac{B_{11}}{ B}\right) - \frac{3}{4}\beta^{2} = 8 \pi \left(p - \frac{F^{2}_{12}} {2\bar{\mu} A^{2} B^{2}} \right),$$ $$\frac{1}{A^{2}}\left[- \frac{B_{11}}{B} - \frac{C_{11}}{C} + \frac{A_{4}}{A} \left(\frac{B_{4}}{B} +\frac{C_{4}}{C}\right) - \frac{B_{1}C_{1}}{BC} + \frac{B_{4} C_{4}}{B C}\right] + \frac{3}{4}\beta^{2}$$ $$\label{eq14} = 8 \pi \left(\rho + \frac{F^{2}_{12}}{2\bar{\mu} A^{2} B^{2}}\right),$$ $$\label{eq15} \frac{B_{14}}{B} + \frac{C_{14}}{C} - \frac{A_{4}}{A}\left(\frac{B_{1}}{B} + \frac{C_{1}}{C}\right) = 0,$$ where the subscript indices $1$ and $4$ in A, B, C and elsewhere denote ordinary differentiation with respect to $x$ and $t$ respectively. Solution of Field Equations =========================== Equations (\[eq11\])-(\[eq15\]) are five independent equations in seven unknowns $A$, $B$, $C$, $\rho$, $p$, $\beta$ and $F_{12}$. For the complete determinacy of the system, we need two extra conditions which are narrated hereinafter. The research on exact solutions is based on some physically reasonable restrictions used to simplify the field equations.\ To get determinate solution we assume that the expansion $\theta$ in the model is proportional to the shear $\sigma$. This condition leads to $$\label{eq16} A = \left(\frac{B}{C}\right)^{n},$$ where $n$ is a constant. The motive behind assuming this condition is explained with reference to Thorne [@ref70], the observations of the velocity-red-shift relation for extragalactic sources suggest that Hubble expansion of the universe is isotropic today within $\approx 30$ per cent [@ref71; @ref72]. To put more precisely, red-shift studies place the limit $$\frac{\sigma}{H} \leq 0.3$$ on the ratio of shear, $\sigma$, to Hubble constant, $H$, in the neighbourhood of our Galaxy today. Collins et al. [@ref73] have pointed out that for spatially homogeneous metric, the normal congruence to the homogeneous expansion satisfies that the condition $\frac{\sigma}{\theta}$ is constant.\ From Eqs. (\[eq11\])-(\[eq13\]), we have $$\label{eq17} \frac{A_{44}}{A} - \frac{A^{2}_{4}}{A^{2}} + \frac{A_{4}B_{4}}{AB} + \frac{A_{4}C_{4}} {AC} -\frac{B_{44}}{B} - \frac{B_{4}C_{4}}{BC} = \frac{C_{11}}{C} - \frac{B_{1}C_{1}}{BC} = \mbox{K (constant)}$$ and $$\label{eq18} \frac{8\pi F^{2}_{12}}{\bar{\mu}B^{2}} = - \frac{C_{44}}{C} + \frac{C_{11}}{C} + \frac{B_{44}}{B} - \frac{B_{11}}{B}.$$ We also assume that $$B = f(x)g(t),$$ $$\label{eq19} C = f(x)k(t).$$ Using Eqs. 
(\[eq16\]) and (\[eq19\]) in (\[eq15\]) and (\[eq17\]) lead to $$\label{eq20} \frac{k_{4}}{k} = \frac{(2n - 1)}{(2n + 1)}\frac{g_{4}}{g},$$ $$\label{eq21} (n - 1)\frac{g_{44}}{g} - n\frac{k_{44}}{k} - \frac{g_{4}}{g}\frac{k_{4}}{k} = K,$$ $$\label{eq22} f f_{11} - f^{2}_{1} = Kf^{2}.$$ Equation (\[eq20\]) leads to $$\label{eq23} k = cg^{\alpha},$$ where $\alpha = \frac{2n - 1}{2n + 1}$ and $c$ is the constant of integration. From Eqs. (\[eq21\]) and (\[eq23\]), we have $$\label{eq24} \frac{g_{44}}{g} + \ell \frac{g^{2}_{4}}{g^{2}} = N,$$ where $$\ell = \frac{n\alpha(\alpha - 1) + \alpha}{n(\alpha - 1) + 1}, \, \, N = \frac{K}{n(1 - \alpha) - 1}.$$ Equation (\[eq22\]) leads to $$\label{eq25} f = \exp{\left(\frac{1}{2}K(x + x_{0})^{2}\right)},$$ where $x_{0}$ is an integrating constant. Equation (\[eq24\]) leads to $$\label{eq26} g = \left(c_{1}e^{bt} + c_{2}e^{-bt}\right)^{\frac{1}{(\ell + 1)}},$$ where $b = \sqrt{(\ell + 1)N}$ and $c_{1}$, $c_{2}$ are integrating constants. Hence from (\[eq23\]) and (\[eq26\]), we have $$\label{eq27} k = c\left(c_{1}e^{bt} + c_{2}e^{-bt}\right)^{\frac{\alpha}{(\ell + 1)}}.$$ Therefore we obtain $$\label{eq28} B = \exp{\left(\frac{1}{2}K(x + x_{0})^{2}\right)} \left(c_{1}e^{bt} + c_{2}e^{-bt} \right)^{\frac{1}{(\ell + 1)}},$$ $$\label{eq29} C = \exp{\left(\frac{1}{2}K(x + x_{0})^{2}\right)} c \left(c_{1}e^{bt} + c_{2}e^{-bt} \right)^{\frac{\alpha}{(\ell + 1)}},$$ $$\label{eq30} A = a \left(c_{1}e^{bt} + c_{2}e^{-bt}\right)^{\frac{n(1 - \alpha)}{(\ell + 1)}},$$ where $a = \frac{c_{3}}{c}$, $c_{3}$ being a constant of integration.\ After using suitable transformation of the co-ordinates, the model (\[eq1\]) reduces to the form $$ds^{2} = a^{2}(c_{1}e^{bT} + c_{2}e^{-bT})^{\frac{2n(1 - \alpha)}{(\ell + 1)}} (dX^{2} - dT^{2}) + e^{KX^{2}}(c_{1}e^{bT} + c_{2}e^{-bT})^{\frac{2}{(\ell + 1)}} dY^{2}$$ $$\label{eq31} + e^{KX^{2}}(c_{1}e^{bT} + c_{2}e^{-bT})^{\frac{2\alpha}{(\ell + 1)}} dZ^{2},$$ where $x + x_{0} = X$, $t = T$, $y = Y$, $cz = Z$.\ For the specification of displacement vector $\beta$ within the framework of Lyra geometry and for realistic models of physical importance, we consider following two cases described in Sections $4$ and $5$. When $\beta$ is a constant i.e. $\beta = \beta_{0}$ (constant) ============================================================== Using Eqs. (\[eq28\]), (\[eq29\]) and (\[eq30\]) in Eqs. (\[eq11\]) and (\[eq14\]) the expressions for pressure $p$ and density $\rho$ for the model (\[eq31\]) are given by $$8 \pi p = \frac{1}{a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[K^{2}X^{2} - \frac{2(3 + \alpha)b^{2}c_{1}c_{2}} {(\ell + 1)\psi_{2}^{2}}$$ $$\label{eq32} - \frac{(2n \alpha^{2} + \alpha^{2} + 2\alpha - 2n + 3)b^{2}}{2(\ell + 1)^{2}} \frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr] - \frac{3}{4}\beta_{0}^{2},$$ $$8 \pi \rho = \frac{1}{a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[- 3 K^{2}X^{2} - 2K + \frac{2b^{2} (\alpha - 1)c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}}$$ $$\label{eq33} - \frac{(2n \alpha^{2} - \alpha^{2} - 2\alpha - 2n + 1)b^{2}}{2(\ell + 1)^{2}} \frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr] + \frac{3}{4}\beta_{0}^{2},$$ where $$\psi_{1} = c_{1}e^{bT} - c_{2}e^{-bT},$$ $$\psi_{2} = c_{1}e^{bT} + c_{2}e^{-bT}.$$ From Eq. 
\[eq18\]) the non-vanishing component $F_{12}$ of the electromagnetic field tensor is obtained as $$\label{eq34} F_{12}^{2} = \frac{\bar{\mu}}{8\pi}\frac{b^{2}(1 - \alpha)}{(\ell + 1)^{2}}e^{KX^{2}} \psi_{2}^{\frac{2}{(\ell + 1)}} \Biggl[\frac{4(\ell + 1)c_{1}c_{2} + (1 + \alpha)\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr].$$ From above equation it is observed that the electromagnetic field tensor increases with time.\ The reality conditions (Ellis [@ref74]) $$(i) \rho + p > 0, ~ ~ (ii) \rho + 3p > 0,$$ lead to $$\label{eq35} \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}}{\psi_{2}^{2}} - \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} > K (KX^{2} + 1),$$ and $$\frac{b^{2}(4n - 4n\alpha^{2} - \alpha^{2} - 2\alpha - 5)}{(\ell + 1)^{2}} \frac{\psi_{1}^{2}}{\psi_{2}^{2}} - \frac{4b^{2}(\alpha + 5)c_{1}c_{2}} {(\ell + 1)\psi_{2}^{2}}$$ $$\label{eq36} > 2K + \frac{3}{2}\beta_{0}^{2}a^{2} \psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}},$$ respectively. The dominant energy conditions (Hawking and Ellis [@ref75]) $$(i) \rho - p \geq 0, ~ ~ (ii) \rho + p \geq 0,$$ lead to $$\label{eq37} \frac{b^{2}(\alpha + 1)^{2}}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}}{\psi_{2}^{2}} + \frac{4b^{2}(\alpha + 1)c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} + \frac{3}{2} \beta_{0}^{2}a^{2}\psi_{2}^{\frac{2n(1 - \alpha)} {(\ell + 1)}} \geq 2K(2K X^{2} + 1),$$ and $$\label{eq38} \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}}{\psi_{2}^{2}} - \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} \geq K(KX^{2} + 1),$$ respectively. The conditions (\[eq36\]) and (\[eq37\]) impose a restriction on constant displacement vector $\beta_{0}$. When $\beta$ is a function of $t$ i.e. $\beta$ = $\beta(t)$ =========================================================== In this case to find the explicit value of displacement field $\beta(t)$, we assume that the fluid obeys an equation of state of the form $$\label{eq39} p = \gamma \rho,$$ where $\gamma(0 \leq \gamma \leq 1)$ is a constant. Using Eqs. (\[eq28\]) - (\[eq30\]) and (\[eq39\]) in equations (\[eq11\]) and (\[eq14\]) we obtain $$\label{eq40} 4 \pi(1 + \gamma)\rho = \frac{1}{a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[- K^{2}X^{2} - K - \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} - \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr],$$ and $$(1 + \gamma)\beta^{2}{(t)} = \frac{4}{3a^{2}\psi_{2}^{\frac{2n(1 - \alpha)} {(\ell + 1)}}}\Biggl[K^{2}X^{2}(1 + \gamma) + 2 K\gamma$$ $$+ \, \frac{2b^{2}c_{1}c_{2}\{(1 - \alpha)(1 - \gamma) - 4 \}}{(\ell + 1)\psi_{2}^{2}}$$ $$\label{eq41} + \, \frac{b^{2}}{(\ell + 1)^{2}}\{(2n\alpha^{2} - \alpha^{2} - 2\alpha - 2n + 1)(1 + \gamma) - 2(n \alpha^{2} - n + 1)\}\frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr].$$ Here we consider the three cases of physical interest in following Subsections $5.1$, $5.2$ and $5.3$. Empty Universe -------------- Putting $\gamma = 0$ in (\[eq39\]) reduces to $p = 0$. Thus, from Eqs. 
(\[eq40\]) and (\[eq41\]), we obtain the expressions for physical parameters $\rho$ and $\beta^{2}{(t)}$ $$\label{eq42} 4 \pi \rho = \frac{1}{a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[- K^{2}X^{2} - K - \frac{4b^{2}c_{1}c_{2}}{(\ell + 1) \psi_{2}^{2}} + \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}} \frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr],$$ $$\label{eq43} \beta^{2}(t) = \frac{4}{3a^{2}\psi_{2}^{\frac{2n(1 - \alpha)} {(\ell + 1)}}}\Biggl[K^{2} X^{2} - \frac{2b^{2}(\alpha + 4)c_{1}c_{2}} {(\ell + 1)\psi_{2}^{2}} - \frac{b^{2} (\alpha + 1)^{2}}{(\ell + 1)^{2}} \frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr].$$ From Eqs. (\[eq42\]) and (\[eq43\]), we observe that $\rho > 0$ and $\beta^{2}{(t)} > 0$ according as $$\label{eq44} \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\psi_{1}^{2} - K(KX^{2} + 1) \psi_{2}^{2} > \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)},$$ and $$\label{eq45} K^{2}X^{2}\psi_{2}^{2} - \frac{b^{2}(\alpha + 1)^{2}}{(\ell + 1)^{2}}\psi_{1}^{2} > \frac{2b^{2}(\alpha + 4)c_{1}c_{2}}{(\ell + 1)},$$ respectively. Halford [@ref6] has pointed out that the displacement field $\phi_i$ in Lyra’s geometry plays the role of cosmological constant $\Lambda$ in the normal general relativistic treatment. From Eq. (\[eq43\]), it is observed that the displacement vector $\beta(t)$ is a decreasing function of time which is corroborated with Halford as well as with the recent observations [@ref76; @ref77] leading to the conclusion that $\Lambda(t)$ is a decreasing function of $t$. Zeldovich Universe ------------------ Putting $\gamma = 1$ in Eq. (\[eq39\]) reduces to $\rho = p$. In this case the expressions for physical quantities are given by $$\label{eq46} \beta^{2}(t) = \frac{4}{3a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[K^{2} X + K - \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} - \frac{b^{2}(\alpha + 1)^{2}}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr].$$ $$8\pi p = 8\pi \rho = \frac{1}{a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[- K^{2} X^{2} - K - \frac{4b^{2}c_{1}c_{2}} {(\ell + 1)\psi_{2}^{2}}$$ $$\label{eq47} + \, \frac{b^{2}(n - n\alpha^{2} - 1)} {(\ell + 1)^{2}}\frac{\psi_{1}^{2}}{\psi_{2}^{2}}\Biggr].$$ The reality condition (Ellis [@ref74]) $$(i) \rho + p > 0, ~ ~ (ii) \rho + 3p > 0,$$ lead to $$\label{eq48} \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\psi_{1}^{2} - K(KX^{2} + 1) \psi_{2}^{2} > \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)}.$$ Radiating Universe ------------------ Putting $\gamma = \frac{1}{3}$ in Eq. (\[eq39\]) reduces to $p = \frac{1}{3}\rho$. In this case the expressions for $\beta(t)$, $p$ and $\rho$ are obtained as $$\beta^{2}(t) = \frac{2}{3a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[2K^{2}X^{2} + K + \frac{2 b^{2}(\alpha + 5)c_{1}c_{2}}{(\ell + 1) \psi_{2}^{2}}$$ $$\label{eq49} + \frac{b^{2}(n\alpha^{2} - 2\alpha^{2} - 4\alpha - n - 1)}{3(\ell + 1)^{2}} \frac{\psi_{1}^{2}} {\psi_{2}^{2}}\Biggr],$$ $$\label{eq50} 8\pi p = \frac{1}{2a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[- K^{2} X^{2} - K - \frac{4 b^{2}c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} + \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}} {\psi_{2}^{2}}\Biggr],$$ $$\label{eq51} 8\pi \rho = \frac{3}{2a^{2}\psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \Biggl[- K^{2} X^{2} - K - \frac{4 b^{2}c_{1}c_{2}}{(\ell + 1)\psi_{2}^{2}} + \frac{b^{2}(n - n \alpha^{2} - 1)}{(\ell + 1)^{2}}\frac{\psi_{1}^{2}} {\psi_{2}^{2}}\Biggr].$$ From Eq. 
(\[eq49\]), it is observed that displacement vector $\beta$ is decreasing function of time and therefore it behaves as cosmological term $\Lambda$. The reality conditions (Ellis [@ref74]) $$(i) \rho + p > 0, ~ ~ (ii) \rho + 3p > 0,$$ are satisfied under condition (\[eq48\]). The dominant energy conditions (Hawking and Ellis[@ref75]) $$(i) \rho - p \geq 0, ~ ~ (ii) \rho + p \geq 0,$$ lead to $$\label{eq52} \frac{b^{2}(n - n\alpha^{2} - 1)}{(\ell + 1)^{2}}\psi_{1}^{2} - K(KX^{2} + 1) \psi_{2}^{2} \geq \frac{4b^{2}c_{1}c_{2}}{(\ell + 1)}.$$\ [**[Some Geometric Properties of the Model]{}**]{}\ The expressions for the expansion $\theta$, shear scalar $\sigma^{2}$, deceleration parameter $q$ and proper volume $V^{3}$ for the model (\[eq31\]) are given by $$\label{eq53} \theta = \frac{b\{n(1 - \alpha) + (1 + \alpha)\}}{(\ell + 1)a \psi_{2}^ {\frac{n(1 - \alpha)}{(\ell + 1)}}} \frac{\psi_{1}}{\psi_{2}},$$ $$\label{eq54} \sigma^{2} = \frac{b^{2}\left[\{n(1 - \alpha) + (1 + \alpha)\}^{2}- 3n(1 - \alpha) (1 + \alpha) - 3\alpha\right]} {3(\ell + 1)^{2}a^{2} \psi_{2}^{\frac{2n(1 - \alpha)}{(\ell + 1)}}} \frac{\psi_{1}^{2}}{\psi_{2}^{2}},$$ $$\label{eq55} q = - 1 - \frac{6 c_{1}{c_{2}}(\ell + 1)}{n(1 - \alpha^{2})\left(c_{1}e^{bT} - c_{2}e^{-bT}\right)^{2}},$$ $$\label{eq56} V^{3} = \sqrt{-g} = a^{2}\psi_{2}^{\frac{2n(1 + \alpha)(1 - \alpha)} {(\ell + 1)}}e^{KX^{2}}.$$ From Eqs. (\[eq53\]) and (\[eq54\]) we obtain $$\label{eq57} \frac{\sigma^{2}}{\theta^{2}} = \frac{\{n(1 - \alpha) + (1 + \alpha)\}^ {2} - 3n(1 - \alpha^{2}) - 3\alpha } {3\{n(1 - \alpha) + (1 + \alpha)\}^{2}} = \mbox{constant}.$$ The rotation $\omega$ is identically zero.\ The rate of expansion $H_{i}$ in the direction of x, y and z are given by $$H_{x} = \frac{A_{4}}{A} = \frac{nb(1 - \alpha)}{(\ell + 1)}\frac{\psi_{1}}{\psi_{2}},$$ $$H_{y} = \frac{B_{4}}{B} = \frac{b}{(\ell + 1)}\frac{\psi_{1}}{\psi_{2}},$$ $$\label{eq58} H_{z} = \frac{C_{4}}{C} = \frac{b\alpha}{(\ell + 1)}\frac{\psi_{1}}{\psi_{2}}.$$ Generally the model (\[eq31\]) represents an expanding, shearing and non-rotating universe in which the flow vector is geodetic. The model (\[eq31\]) starts expanding at $T > 0$ and goes on expanding indefinitely when $\frac{n(1 - \alpha)}{(\ell + 1)} < 0$. Since $\frac{\sigma}{\theta}$ = constant, the model does not approach isotropy. As $T$ increases the proper volume also increases. The physical quantities $p$ and $\rho$ decrease as $F_{12}$ increases. However, if $\frac{n(1 - \alpha)}{(\beta + 1)} > 0$, the process of contraction starts at $T>0$ and at $T = \infty$ the expansion stops. The electromagnetic field tensor does not vanish when $b \ne 0$, and $\alpha \ne 1$. It is observed from Eq. (\[eq55\]) that $q < 0$ when $c_{1} > 0$ and $c_{2} > 0$ which implies an accelerating model of the universe. Recent observations of type Ia supernovae [@ref76; @ref77] reveal that the present universe is in accelerating phase and deceleration parameter lies somewhere in the range $-1 < q \leq 0$. It follows that our models of the universe are consistent with recent observations. Either when $c_{1} = 0$ or $c_{2} = 0$, the deceleration parameter $q$ approaches the value $(-1)$ as in the case of de-Sitter universe. Solution in Absence of Magnetic Field ===================================== In absence of magnetic field, the field Eq. (\[eq9\]) with Eqs. 
(\[eq2\]) and (\[eq10\]) for metric (\[eq1\]) read as $$\label{eq59} \frac{1}{A^{2}}\left[- \frac{B_{44}}{B} - \frac{C_{44}}{C} + \frac{A_{4}}{A} \left(\frac{B_{4}}{B} + \frac{C_{4}}{C}\right) - \frac{B_{4} C_{4}}{B C} + \frac{B_{1}C_{1}}{BC}\right] = 8 \pi p + \frac{3}{4}\beta^{2} ,$$ $$\label{eq60} \frac{1}{A^{2}}\left(\frac{A^{2}_{4}}{A^{2}}- \frac{A_{44}}{A} - \frac{C_{44}}{C} + \frac{C_{11}}{C} \right) = 8 \pi p + \frac{3}{4}\beta^{2},$$ $$\label{eq61} \frac{1}{A^{2}}\left(\frac{A^{2}_{4}}{A^{2}} - \frac{A_{44}}{A} - \frac{B_{44}}{B} + \frac{B_{11}}{ B}\right) = 8 \pi p + \frac{3}{4}\beta^{2},$$ $$\label{eq62} \frac{1}{A^{2}}\left[- \frac{B_{11}}{B} - \frac{C_{11}}{C} + \frac{A_{4}}{A} \left(\frac{B_{4}}{B} + \frac{C_{4}}{C}\right) - \frac{B_{1}C_{1}}{BC} + \frac{B_{4} C_{4}}{B C}\right] = 8 \pi \rho - \frac{3}{4}\beta^{2},$$ $$\label{eq63} \frac{B_{14}}{B} + \frac{C_{14}}{C} - \frac{A_{4}}{A}\left(\frac{B_{1}}{B} + \frac{C_{1}}{C}\right) = 0,$$ Eqs. (\[eq60\]) and (\[eq61\]) lead to $$\label{eq64} \frac{B_{44}}{B} - \frac{B_{11}}{B} - \frac{C_{44}}{C} + \frac{C_{11}}{C} = 0.$$ Eqs. (\[eq19\]) and (\[eq64\]) lead to $$\label{eq65} \frac{g_{44}}{g} - \frac{k_{44}}{k} = 0.$$ Eqs. (\[eq23\]) and (\[eq65\]) lead to $$\label{eq66} \frac{g_{44}}{g} + \alpha \frac{g^{2}_{4}}{g^{2}} = 0,$$ which on integration gives $$\label{eq67} g = (c_{4} t + c_{5})^{\frac{1}{(\alpha + 1)}},$$ where $c_{4}$ and $c_{5}$ are constants of integration. Hence from (\[eq23\]) and (\[eq67\]), we have $$\label{eq68} k = c(c_{4} t + c_{5})^{\frac{\alpha}{(\alpha + 1)}}.$$ In this case (\[eq17\]) leads to $$\label{eq69} f = \exp{\left(\frac{1}{2}K(x + x_{0})^{2}\right)}.$$ Therefore, we have $$\label{eq70} B = \exp{\left(\frac{1}{2}K(x + x_{0})^{2}\right)} (c_{4} t + c_{5})^{\frac{1} {(\alpha + 1)}},$$ $$\label{eq71} C = \exp{\left(\frac{1}{2}K(x + x_{0})^{2}\right)} c (c_{4} t + c_{5})^{\frac{\alpha} {(\alpha + 1)}},$$ $$\label{eq72} A = a (c_{4} t + c_{5})^{\frac{n(1 - \alpha)}{(1 + \alpha)}},$$ where $a$ is already defined in previous section.\ After using suitable transformation of the co-ordinates, the metric (\[eq1\]) reduces to the form $$ds^{2} = a^{2}(c_{4}T)^{\frac{2n(1 - \alpha)}{(1 + \alpha)}}(dX^{2} - dT^{2}) + e^{KX^{2}}(c_{4} T)^{\frac{2}{(\alpha + 1)}} dY^{2}$$ $$\label{eq73} + e^{KX^{2}}(c_{4} T)^{\frac{2\alpha}{(\alpha + 1)}} dZ^{2},$$ where $x + x_{0} = X$, $y = Y$, $cz = Z$, $t + \frac{c_{5}}{c_{4}} = T$.\ For the specification of displacement field $\beta(t)$ within the framework of Lyra geometry and for realistic models of physical importance, we consider the following two cases given in Sections $7$ and $8$. When $\beta$ is a constant i.e. $\beta = \beta_{0}$ (constant) ============================================================== Using Eqs. 
(\[eq70\])-(\[eq72\]) in Eqs.\[eq59\]) and (\[eq62\]) the expressions for pressure $p$ and density $\rho$ for the model (\[eq73\]) are given by $$\label{eq74} 8 \pi p = \frac{1}{a^{2}(c_{4}T)^{\frac{2n(1 - \alpha)}{(1 + \alpha)}}} \Biggl[\left\{\frac{n(1 - \alpha^{2}) + \alpha}{(\alpha + 1)^{2}}\right \} \frac{1}{T^{2}} + K^{2}X^{2}\Biggr] - \frac{3}{4}\beta_{0}^{2},$$ $$\label{eq75} 8 \pi \rho = \frac{1}{a^{2}(c_{4}T)^{\frac{2n(1 - \alpha)}{(1 + \alpha)}}} \Biggl[\left\{\frac{n(1 - \alpha^{2}) + \alpha}{(\alpha + 1)^{2}}\right \} \frac{1}{T^{2}} - K(2 + 3KX^{2})\Biggr] + \frac{3}{4}\beta_{0}^{2},$$ The dominant energy conditions (Hawking and Ellis [@ref75]) $$(i) \rho - p \geq 0, ~ ~ (ii) \rho + p \geq 0,$$ lead to $$\label{eq76} \frac{3}{4}\beta_{0}^{2}a^{2}(c_{4}T)^{\frac{2n(1 - \alpha)}{(1 + \alpha)}} \geq K(1 + 2KX^{2}),$$ and $$\label{eq77} \left\{\frac{n(1 - \alpha^{2}) + \alpha}{(1 + \alpha)^{2}}\right\} \frac{1}{T^{2}} \geq K(1 + KX^{2}).$$ respectively. The reality conditions (Ellis [@ref74]) $$(i) \rho + p > 0, ~ ~ (ii) \rho + 3p > 0,$$ lead to $$\label{eq78} \left\{\frac{n(1 - \alpha^{2}) + \alpha}{(1 + \alpha)^{2}}\right\}\frac{1}{T^{2}} > K(1 + KX^{2}),$$ and $$\label{eq79} \frac{2[n(1 - \alpha^{2}) + \alpha]}{(1 + \alpha)^{2}}\frac{1}{T^{2}} > K + \frac{3}{4}\beta_{0}^{2} (c_{4}T)^\frac{2n(1 - \alpha)}{(1 + \alpha)}.$$ The condition (\[eq76\]) and (\[eq79\]) impose a restriction on $\beta_{0}$. When $\beta$ is a function of $t$ ================================= In this case to find the explicit value of displacement field $\beta(t)$, we assume that the fluid obeys an equation of state given by (\[eq39\]). Using Eqs. (\[eq70\]) - (\[eq72\]) and (\[eq39\]) in Equations (\[eq59\]) and (\[eq62\]) we obtain expressions for $\rho(t)$ and $\beta{(t)}$ given by $$\label{eq80} 8 \pi (1 + \gamma)\rho = \frac{1}{a^{2}(c_{4}T)^{\frac{2n(1 - \alpha)}{(1 + \alpha)}}} \Biggl[\left\{\frac{n(1 - \alpha^{2}) + \alpha}{(\alpha + 1)^{2}}\right \} \frac{2}{T^{2}} - 2K(1 + KX^{2})\Biggr],$$ $$(1 + \gamma)\beta^{2}{(t)} = \frac{4}{3a^{2}(c_{4}T)^{\frac{2n(1 - \alpha)}{(1 + \alpha)}}} \Biggl[\left\{\frac{n(1 - \alpha^{2}) + \alpha}{(\alpha + 1)^{2}}\right \} \frac{(1 - \gamma)}{T^{2}}$$ $$\label{eq81} + 2K \gamma + KX^{2} (1 + 3\gamma)\Biggr].$$ It is observed that $\rho > 0$ and $\beta^{2}{(t)} > 0$ according as $$\label{eq82} \left\{\frac{n(1 - \alpha^{2}) + \alpha}{(\alpha + 1)^{2}}\right \} \frac{1}{T^{2}} > K(1 + KX^{2}),$$ and $$\label{eq83} \left\{\frac{n(1 - \alpha^{2}) + \alpha}{(\alpha + 1)^{2}}\right \} \frac{1}{T^{2}} > K^{2} X^{2},$$ respectively. It is worth mention here that by putting $\gamma = 0, 1, \frac{1}{3}$ in Eqs. (\[eq80\]) and (\[eq81\]), one can derive the expressions for energy density $\rho(t)$ and displacement vector $\beta{(t)}$ for empty universe, Zeldovich universe and radiating universe respectively. It is also observed that these three types of models have similar properties as we have already discussed above. 
Therefore, we have not mentioned the expressions for the physical quantities of these models.\ [**[Some Geometric Properties of the Model]{}**]{}\ The expressions for the expansion $\theta$, Hubble parameter $H$, shear scalar $\sigma^{2}$, deceleration parameter $q$ and proper volume $V^{3}$ for the model (\[eq73\]) in the absence of the magnetic field are given by $$\label{eq84} \theta = 3H = \frac{n(1 - \alpha) + (1 + \alpha)}{a(1 + \alpha)c_{4}^{\frac{n(1 - \alpha)} {(1 + \alpha)}}} \frac{1}{T^{\frac{n(1 - \alpha) + (1 + \alpha)}{(1 + \alpha)}}}$$ $$\label{eq85} \sigma^{2} = \frac{\{n(1 - \alpha) + (1 + \alpha)\}^{2} - 3n(1 - \alpha^{2}) - 3\alpha} {3 a^{2}(1 + \alpha)^{2}c_{4}^{\frac{n(1 - \alpha)}{(1 + \alpha)}}} \frac{1}{T^{\frac{2n(1 - \alpha) + 2(1 + \alpha)}{(1 + \alpha)}}}$$ $$\label{eq86} q = - 1 + \frac{3(\alpha + 1)}{2n(1 - \alpha) + 2(1 + \alpha)},$$ $$\label{eq87} V^{3} = \sqrt{-g} = a^{2} e^{KX^{2}} (c_{4}T)^{\frac{2n(1 - \alpha) + (1 + \alpha)} {(1 + \alpha)}}.$$ From Eqs. (\[eq84\]) and (\[eq85\]) we obtain $$\label{eq88} \frac{\sigma^{2}}{\theta^{2}} = \frac{\{n(1 - \alpha) + (1 + \alpha)\}^{2} - 3n(1 - \alpha^{2}) - 3\alpha } {3\{n(1 - \alpha) + (1 + \alpha)\}^{2}} = \mbox{constant}.$$ The rotation $\omega$ is identically zero.\ The rates of expansion $H_{i}$ in the directions of $x$, $y$ and $z$ are given by $$H_{x} = \frac{A_{4}}{A} = \frac{n(1 - \alpha)}{(1 + \alpha)}\frac{1}{T},$$ $$H_{y} = \frac{B_{4}}{B} = \frac{1}{(1 + \alpha)}\frac{1}{T},$$ $$\label{eq89} H_{z} = \frac{C_{4}}{C} = \frac{\alpha}{(1 + \alpha)}\frac{1}{T}.$$ The model (\[eq73\]) starts expanding with a big bang at $T = 0$ and it stops expanding at $T = \infty$. It should be noted that the universe exhibits a Point-type initial singularity at $T = 0$. The space-time is well behaved in the range $0 < T < T_{0}$. In the absence of the magnetic field the model represents a shearing and non-rotating universe in which the flow vector is geodetic. At the initial moment $T = 0$, the parameters $\rho$, $p$, $\beta$, $\theta$, $\sigma^{2}$ and $H$ tend to infinity. So the universe starts from the initial singularity with infinite energy density, infinite internal pressure, an infinitely large gauge function, and infinite rates of shear and expansion. Moreover, $\rho$, $p$, $\beta$, $\theta$, $\sigma^{2}$ and $H$ are monotonically decreasing toward a non-zero finite quantity for $T$ in the range $0 < T < T_{0}$ in the absence of the magnetic field. Since $\frac{\sigma}{\theta}$ = constant, the model does not approach isotropy. As $T$ increases the proper volume also increases. It is observed that for all three models, i.e. the empty universe, the Zeldovich universe and the radiating universe, the displacement vector $\beta(t)$ is a decreasing function of time and therefore behaves like the cosmological term $\Lambda$. It is observed from Eq. (\[eq86\]) that $q < 0$ when $\alpha < \frac{2n - 1}{2n + 1}$, which implies an accelerating model of the universe. When $\alpha = -1$, the deceleration parameter $q$ approaches the value $(-1)$ as in the case of the de-Sitter universe. Thus, also in the absence of the magnetic field, our models of the universe are consistent with recent observations. Discussion and Concluding Remarks ================================= In this paper, we have obtained a new class of exact solutions of Einstein’s field equations for cylindrically symmetric space-time with perfect fluid distribution within the framework of Lyra’s geometry, both in the presence and in the absence of a magnetic field.
The solutions are obtained using the functional separability of the metric coefficients. The source of the magnetic field is an electric current produced along the z-axis, and $F_{12}$ is the only non-vanishing component of the electromagnetic field tensor. The electromagnetic field tensor is given by Eq. (\[eq34\]); $\bar{\mu}$ remains undetermined as a function of both $x$ and $t$. The electromagnetic field tensor does not vanish if $b \ne 0$ and $\alpha \ne 1$. It is observed that in the presence of the magnetic field the rate of expansion of the universe is faster than in its absence. The idea of primordial magnetism is appealing because it can potentially explain all the large-scale fields seen in the universe today, especially those found in remote proto-galaxies. As a result, the literature contains many studies examining the role and the implications of magnetic fields for cosmology. In the presence of the magnetic field the model (\[eq31\]) represents an expanding, shearing and non-rotating universe in which the flow vector is geodetic. In the absence of the magnetic field, however, it is found for the model (\[eq73\]) that all the matter and radiation in the universe are concentrated at the big bang epoch and the cosmic expansion is driven by the big bang impulse. The universe has a singular origin and exhibits power-law expansion after the big bang impulse. The rate of expansion slows down and finally stops as $T \to \infty$. In the absence of the magnetic field, the pressure, energy density and displacement field become zero, whereas the spatial volume becomes infinitely large, as $T \to \infty$. It is possible to discuss entropy in our universe. In thermodynamics the expression for entropy is given by $$\label{eq90} TdS = d(\rho V^{3}) + p(dV^{3}),$$ where $V^{3} = A^{2}BC$ is the proper volume in our case. To solve the entropy problem of the standard model, it is necessary to have $dS > 0$ for at least a part of the evolution of the universe. Hence Eq. (\[eq90\]) reduces to $$\label{eq91} TdS = \rho_{4} + (\rho + p)\left(2\frac{A_{4}}{A} + \frac{B_{4}}{B} + \frac{C_{4}} {C}\right) > 0.$$ The conservation equation $T^{j}_{i:j} = 0$ for (\[eq1\]) leads to $$\label{eq92} \rho_{4} + (\rho + p)\left(\frac{A_{4}}{A} + \frac{B_{4}}{B} + \frac{C_{4}} {C}\right) + \frac{3}{2}\beta \beta_{4} + \frac{3}{2}\beta^{2}\left(2\frac{A_{4}}{A} + \frac{B_{4}}{B} + \frac{C_{4}}{C}\right) = 0.$$ Therefore, Eqs. (\[eq91\]) and (\[eq92\]) lead to $$\label{eq93} \frac{3}{2}\beta \beta_{4} + \frac{3}{2}\beta^{2}\left(2\frac{A_{4}}{A} + \frac{B_{4}}{B} + \frac{C_{4}}{C}\right) < 0,$$ which gives $\beta < 0$. Thus, the displacement vector $\beta(t)$ affects the entropy, since $dS > 0$ requires $\beta(t) < 0$. In spite of its homogeneity on large scales, our universe is inhomogeneous on small scales, so position-dependent physical quantities are quite natural in the observable universe as long as we do not go to very large scales; the present result is of physical importance in this respect. It is observed that the displacement vector $\beta(t)$ behaves like the cosmological constant $\Lambda$, a conclusion supported by the work of several authors, as discussed in connection with the physical behaviour of the models in Sections $5$ and $8$. In recent times the $\Lambda$-term has attracted the attention of theoreticians and observers for many reasons. The nontrivial role of the vacuum in the early universe generates a $\Lambda$-term that leads to an inflationary phase.
Observationally, this term provides an additional parameter to accommodate conflicting data on the values of the Hubble constant, the deceleration parameter, the density parameter and the age of the universe (for example, see Refs. [@ref78] and [@ref79]). Assuming that $\Lambda$ owes its origin to vacuum interaction, as suggested particularly by Sakharov [@ref80], it follows that it would, in general, be a function of space and time coordinates, rather than a strict constant. In a homogeneous universe $\Lambda$ will be at most time dependent [@ref81]. In the case of inhomogeneous universe this approach can generate $\Lambda$ that varies both with space and time. In considering the nature of local massive objects, however, the space dependence of $\Lambda$ cannot be ignored. For details, reference may be made to Refs. [@ref82], [@ref83], [@ref84]. In recent past there is an upsurge of interest in scalar fields in general relativity and alternative theories of gravitation in the context of inflationary cosmology [@ref85; @ref86; @ref87]. Therefore the study of cosmological models in Lyra’s geometry may be relevant for inflationary models. Also the space dependence of the displacement field $\beta$ is important for inhomogeneous models for the early stages of the evolution of the universe. In the present study we also find $\beta(t)$ as both space and time dependent which may be useful for a better understanding of the evolution of universe in cylindrically symmetric space-time within the framework of Lyra’s geometry. There seems a good possibility of Lyra’s geometry to provide a theoretical foundation for relativistic gravitation, astrophysics and cosmology. However, the importance of Lyra’s geometry for astrophysical bodies is still an open question. In fact, it needs a fair trial for experiment. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank the Harish-Chandra Research Institute, Allahabad, India for local hospitality where this work is done. [000]{} R. C. Tolman, Proc. Nat. Acad. Sci. [**20**]{}, 169 (1934). H. Bondi, Mon. Not. R. Astro. Soc. [**107**]{}, 410 (1947). A. H. Taub, Ann. Math. [**53**]{}, 472 (1951). A. H. Taub, Phy. Rev. [**103**]{}, 454 (1956). N. Tomimura, II Nuovo Cimento B [**44**]{}, 372 (1978). P. Szekeres, Commun. Math. Phys. [**41**]{}, 55 (1975). C. B. Collins and D. A. Szafron, J. Math. Phy. [**20**]{}, 2347 (1979a); J. Math. Phy. [**20**]{} (1979b) 2362. D. A. Szafron and C. B. Collins, J. Math. Phy. [**20**]{}, 2354 (1979). J. M. M. Senovilla, Phy. Rev. Lett. [**64**]{}, 2219 (1990). E. Ruiz and J. M. M. Senovilla, Phy. Rev. D [**45**]{}, 1995 (1990). N. Dadhich, R. Tikekar and L. K. Patel, Curr. Sci. [**65**]{}, 694 (1993). L. K. Patel, R. Tikekar and N. Dadhich, Pramana-J. Phys. [**49**]{}, 213 (1993). G. Singh, P. Mehta and S. Gupta, Astrophys. Space Sc. [**281**]{}, 677 (1990). A. Pradhan, P. K. Singh and K. R. Jotania, Czech. J. Phys. [**56**]{}, 641 (2006).\ A. Pradhan, A. Rai and S. K. Singh, Astrophys. Space Sci. [**312**]{}, 261 (2007).\ A. Pradhan, Fizika B, [**16**]{}, 205 (2007).\ A. Pradhan, K. Jotania and A. Singh, Braz. J. Phys. [**38**]{}, 167 (2008).\ A. Pradhan, V. Rai and K. Jotania, Comm. Theor. Phys. (2008), in press. Ya. B. Zeldovich, A. A. Ruzmainkin and D. D. Sokoloff, [*Magnetic field in Astrophysics*]{}, (Gordon and Breach, New York, 1993). E. R. Harrison, Phys. Rev. Lett. [**30**]{}, 188 (1973). H. P. Robertson and A. G. Walker, Proc. London Math. Soc. [**42**]{}, 90 (1936). C. W. 
Misner, K. S. Thorne and J. A. Wheeler, [*Gravitation*]{}, (W. H. Freeman, New York, 1973). E. Asseo and H. Sol, Phys. Rep. [**6**]{}, 148 (1987). M. A. Melvin, Ann. New York Acad. Sci. [**262**]{}, 253 (1975). R. Pudritz and J. Silk, Astrophys. J. [**342**]{}, 650 (1989). K. T. Kim, P. G. Tribble and P. P. Kronberg, Astrophys. J. [**379**]{}, 80 (1991). R. Perley and G. Taylor, Astrophys. J. [**101**]{}, 1623 (1991). P. P. Kronberg, J. J. Perry and E. L. Zukowski, Astrophys. J. [**387**]{}, 528 (1991). A. M. Wolfe, K. Lanzetta and A. L. Oren: , Astrophys. J. [**388**]{}, 17 (1992). R. Kulsrud, R. Cen, J. P. Ostriker and D. Ryu, Astrophys. J. [**380**]{}, 481 (1997). E. G. Zweibel and C. Heiles, Nature [**385**]{}, 131 (1997). J. D. Barrow, Phys. Rev. D [**55**]{}, 7451 (1997). Ya. B. Zeldovich, Sov. Astron. [**13**]{}, 608 (1970). M. S. Turner and L. M. Widrow, Phys. Rev. D [**30**]{}, 2743 (1988). J. Quashnock, A. Loeb and D. N. Spergel, Astrophys. J. [**344**]{}, L49 (1989). B. Ratra, Astrophys. J. [**391**]{}, L1 (1992). A. D. Dolgov and J. Silk, Phys. Rev. D [**47**]{}, 3144 (1993). A. D. Dolgov, Phys. Rev. D [**48**]{}, 2499 (1993). R. Bali and M. Ali, Pramana-J. Phys. [**47**]{}, 25 (1996). I. Chakrabarty, A. Pradhan and N. N. Saste, Int. J. Mod. Phys. D [**5**]{}, 741 (2001).\ A. Pradhan and O. P. Pandey, Int. J. Mod. Phys. D [**7**]{}, 1299 (2003).\ A. Pradhan, S. K. Srivastav and K. R. Jotania, Czech. J. Phys. [**54**]{}, 255 (2004).\ A. Pradhan and S. K. Singh, Int. J. Mod. Phys. D [**13**]{}, 503 (2004).\ A. Pradhan, P. Pandey and K. K. Rai, Czech. J. Phys. [**56**]{}, 303 (2006).\ A. Pradhan, A. K. Yadav and J. P. Singh, Fizika B, [**16**]{}, 175 (2007). H. Weyl, Sber. Preuss. Akad. Wiss. Berlin, 465 (1918). G. Folland, J. Diff. Geom. [**4**]{}, 145 (1970). G. Lyra, Math. Z. [**54**]{}, 52 (1951). D. K. Sen, Z. Phys. [**149**]{}, 311 (1957). D. K. Sen and K. A. Dunn, J. Math. Phys. [**12**]{}, 578 (1971). W. D. Halford, Austr. J. Phys. [**23**]{}, 863 (1970). W. D. Halford, J. Math. Phys. [**13**]{}, 1399 (1972). D. K. Sen and J. R. Vanstone, J. Math. Phys. [**13**]{}, 990 (1972). K. S. Bhamra, Austr. J. Phys. [**27**]{}, 541 (1974). T. M. Karade and S. M. Borikar, Gen. Rel. Gravit. [**9**]{}, 431 (1978). S. B. Kalyanshetti and B. B. Waghmode, Gen. Rel. Gravit. [**14**]{}, 823 (1982). D. R. K. Reddy and P. Innaiah, Astrophys. Space Sci. [**123**]{}, 49 (1986). A. Beesham, Astrophys. Space Sci. [**127**]{}, 189 (1986). D. R. K. Reddy and R. Venkateswarlu, Astrophys. Space Sci. [**136**]{}, 191 (1987). H. H. Soleng, Gen. Rel. Gravit. [**19**]{}, 1213 (1987). A. Beesham, Austr. J. Phys. [**41**]{}, 833 (1988). T. Singh and G. P. Singh, J. Math. Phys. [**32**]{}, 2456 (1991a). T. Singh and G. P. Singh, Il. Nuovo Cimento [**B106**]{}, 617 (1991b). T. Singh and G. P. Singh, Int. J. Theor. Phys. [**31**]{}, 1433 (1992). T. Singh, and G. P. Singh, Fortschr. Phys. [**41**]{}, 737 (1993). G. P. Singh and K. Desikan, Pramana-journal of physics, [**49**]{}, 205 (1997). F. Hoyle, Monthly Notices Roy. Astron. Soc. [**108**]{}, 252 (1948). F. Hoyle and J. V. Narlikar, Proc. Roy. Soc. London Ser. A, [**273**]{}, 1 (1963). F. Hoyle and J. V. Narlikar, Proc. Roy. Soc. London Ser.A, [**282**]{}, 1 (1964). A. Pradhan, I. Aotemshi and G. P. Singh, Astrophys. Space Sci. [**288**]{}, 315 (2003).\ A. Pradhan and A. K. Vishwakarma, J. Geom. Phys. [**49**]{}, 332 (2004).\ A. Pradhan, L. Yadav and A. K. Yadav, Astrophys. Space Sci. [**299**]{}, 31 (2005).\ A. Pradhan, V. Rai and S. 
Otarod, Fizika B, [**15**]{}, 23 (2006).\ A. Pradhan, K. K. Rai and A. K. Yadav, Braz. J. Phys. [**37**]{}, 1084 (2007). R. Casama, C. Melo and B. Pimentel, Astrophys. Space Sci. [**305**]{}, 125 (2006). F. Rahaman, B. Bhui and G. Bag, Astrophys. Space Sci. [**295**]{}, 507 (2005).\ F. Rahaman, S. Das, N. Begum, M. Hossain, Astrophys. Space Sci. [**295**]{}, 507 (2005). R. Bali and N. K. Chandani, J. Math. Phys. [**49**]{}, 032502 (2008). S. Kumar and C. P. Singh, Int. Mod. Phys. A [**23**]{}, 813 (2008). J. K. Singh, Astrophys. Space Sci. [**314**]{}, 361 (2008). V. U. M. Rao, T. Vinutha and M. V. Santhi, Astrophys. Space Sci. [**314**]{}, 213 (2008). A. Lichnerowicz, [*Relativistic Hydrodynamics and Magnetohydrodynamics*]{}, W. A. Benzamin. Inc. New York, Amsterdam, p. 93 (1967). J. L. Synge, [*Relativity: The General Theory*]{}, North-Holland Publ., Amsterdam, p. 356 (1960). K. S. Thorne, Astrophys. J. [**148**]{}, 51 (1967). R. Kantowski and R. K. Sachs, J. Math. Phys. [**7**]{}, 433 (1966). J. Kristian and R. K. Sachs, Astrophys. J. [**143**]{}, 379 (1966). C. B. Collins, E. N. Glass and D. A. Wilkinson, Gen. Rel. Grav. [**12**]{}, 805 (1980). G, F, R. Ellis, [*General Relativity and Cosmology*]{}, ed. R. K. Sachs Clarendon Press, p. 117, (1973). S. W. Hawking and G, F, R. Ellis, [*The Large-scale Structure of Space Time*]{}, Cambridge University Press, Cambridge, p. 94, (1973). S. Perlmutter et al., Astrophys. J. [**483**]{}, 565 (1997).\ S. Perlmutter et al., Nature [**391**]{}, 51 (1998).\ S. Perlmutter et al., Astrophys. J. [**517**]{}, 565 (1999). A. G. Reiss et al., Astron. J. [**116**]{}, 1009 (1998).\ A. G. Reiss et al., Astron. J. [**607**]{}, 665 (2004). J. Gunn and B. M. Tinsley, Nature, [**257**]{}, 454 (1975). E. J. Wampler and W. L. Burke, in [*New Ideas in Astronomy*]{}, Eds. F. Bertola, J. W. Sulentix and B. F. Madora, Cambridge University Press, P. 317 (1988). A. D. Sakharov, Doklady Akad. Nauk. SSSR [**177**]{}, 70 (1968)\[translation, Soviet Phys. Doklady, [**12**]{}, 1040 (1968)\]. P. J. E. Peeble and B. Ratra, Astrophys. J. [**325**]{}, L17 (1988). J. V. Narlikar, J. C. Pecker and J. P. Vigier, J. Astrophys. Astr. [**12**]{}, 7 (1991). S. Ray and D. Ray, Astrophys. Space Sci. [**203**]{}, 211 (1993). R. N. Tiwari, S. Ray and S. Bhadra, Astrophys. Space Sci. [**303**]{}, 211 (1993). G. F. R. Ellis, [*Standard and Inflationary Cosmologies*]{}, Preprint SISSA, Trieste, 176/90/A (1990). D. La and P. J. Steinhardt, Phys. Rev. Lett. [**62**]{}, 376 (1989). J. D. Barrow, Phys. Lett. B [**235**]{}, 40 (1990).
NEH LAUNCHES WE THE PEOPLE READING LIST ON "COURAGE" | Lynne Cheney and NEH Chairman Bruce Cole announce new book list for young readers; new NEH program will provide books to libraries that offer public programs WASHINGTON, D.C., June 3, 2003 - The National Endowment for the Humanities (NEH) today issued a new list of recommended books for young readers (K-12) on the theme of "courage" as part of the Endowment's We the People initiative. Lynne Cheney and NEH Chairman Bruce Cole announced the first "We the People Bookshelf" to a group of local schoolchildren at the Vice President's Residence. "Acts of courage have shaped our nation throughout its history," said Cheney, who served as NEH chairman from 1986-93. "By reading these books, young readers can gain greater understanding of how people from all walks of life - facing challenges large and small - can find strength to do what is right." Cole also announced that this fall the Endowment plans to offer complete sets of the 15 "We the People Bookshelf" books to more than 500 libraries across the country. These libraries will use the selections in programs for young readers in their communities. "Stories from history and literature can teach us a great deal," said Cole. "Young readers will find inspiration in these stories about characters, real and fictional, who demonstrated personal courage when faced with difficult situations in uncertain times." With the Endowment's plans to issue a short list of books each year on themes related to American ideas and ideals, this year's "We the People Bookshelf" includes 15 books that examine the theme of courage. A blue-ribbon panel of experts recommended these books from a new edition of Summertime Favorites, an NEH book list also issued today with 300 titles recommended for young readers and divided into four educational reading levels. The following titles appear on the 2003 "We the People Bookshelf": Grades K - 3 Grades 4 - 6 Grades 7 - 8 Grades 9 - 12 A fact sheet on the "We the People Bookshelf" is also available. During today's event, Cheney and Cole talked about the importance of reading good books and learning more about our history with young public school students who participate in "Everybody Wins! DC," a local program that pairs public elementary school students with adult reading mentors. Cheney also discussed with the students stories from Narrative of the Life of Frederick Douglass, one of the titles on the new list. A complete list of books included in Summertime Favorites and additional information about NEH and its We the People initiative can be found on the Internet at www.neh.gov and www.WethePeople.gov. The We the People initiative supports projects that strengthen the teaching, study, and understanding of American history and culture. President Bush, who announced the launch of the initiative last September, has requested $100 million in additional funding for NEH over the next three years to support the initiative, beginning with $25 million included in the President's budget request for FY 2004. The National Endowment for the Humanities gratefully acknowledges support for its "We the People Bookshelf" provided by the National Trust for the Humanities and Bristol-Myers Squibb Company. NEH looks forward to working with the American Library Association to disseminate information and to encourage libraries to take part in the "We the People Bookshelf" grant program.
https://webarchive.library.unt.edu/eot2008/20090112122034/http://www.neh.gov/news/archive/20030603.html
Q: How to interpolate data in a range in Google Sheets

I have an array with data:

X   Y
3   50
5   60
9   120
11  130
18  90
20  150

The data is entirely non-linear. X is guaranteed to be sorted. Now for any given value, I'd like to have linear interpolation between the numbers (so for example, 3 => 50, 4 => 55, 5 => 60). A bilinear interpolation would be even nicer, but I'm keeping my expectations low.

A: I found a way to do it - there may be a better way, but this is what I came up with. Assuming the data is in A1:B10 and $C$1 contains the key to look for:

=FORECAST($C$1, OFFSET(B$1,MATCH($C$1,A$1:A$10,1)-1,0,2,1), OFFSET(A$1,MATCH($C$1,A$1:A$10,1)-1,0,2,1))

In detail: FORECAST does a linear interpolation, but it assumes a straight line. So we need to find the two values that enclose the value we're looking for. We use MATCH with a match type of 1 to find the position of the largest value that is less than or equal to the one we're looking for. FORECAST expects a data range, so we use OFFSET to create a reference to a data range; MATCH is one-indexed, so we need to subtract one first. We create a range that is one column wide and two rows high. This range is guaranteed to enclose $C$1, our search value.

A: This script will do the same (plus a little bit more).

Code

function myInterpolation(x, y, value) {
  if (value > Math.max.apply(Math, x) || value < Math.min.apply(Math, x)) {
    throw "value can't be interpolated !!";
  }

  var check = 0, index;
  for (var i = 0, iLen = x.length; i < iLen; i++) {
    if (x[i][0] == value) {
      return y[i][0];
    } else if (x[i][0] < value && ((x[i][0] - check) < (value - check))) {
      check = x[i][0];
      index = i;
    }
  }

  var xValue, yValue, xDiff, yDiff, xInt;
  yValue = y[index][0];
  xDiff = x[index + 1][0] - check;
  yDiff = y[index + 1][0] - yValue;
  xInt = value - check;
  return (xInt * (yDiff / xDiff)) + yValue;
}

Explained

At the beginning of the script there is some basic error handling. After that it finds the closest entry below the input value. Once found, it does some math and presents the result.

Note

If the selected value equals 20, the script returns 150, whereas the formula yields #DIV/0!.

Formula

Use the following formula to take all values into account:

=IF(
  ISNA(MATCH(C2,A2:A7,0)),
  FORECAST($C$2, OFFSET(B$2,MATCH($C$2,A$2:A$7,1)-1,0,2,1), OFFSET(A$2,MATCH($C$2,A$2:A$7,1)-1,0,2,1)),
  INDEX(B2:B7, MATCH(C2,A2:A7,0), 0)
)

copy / paste

=IF(ISNA(MATCH(C2, A2:A7, 0)), FORECAST($C$2,OFFSET(B$2,MATCH($C$2,A$2:A$7,1)-1,0,2,1),OFFSET(A$2,MATCH($C$2,A$2:A$7,1)-1,0,2,1)), INDEX(B2:B7, MATCH(C2, A2:A7, 0), 0))

Example

Add the script under Tools > Script editor and press the save button (no authentication needed). I've created an example file for you: How to interpolate data in a range in Google Sheets
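For readers who want to sanity-check either solution outside of Sheets, here is a minimal Python sketch of the same piecewise-linear interpolation. The x/y values are the sample data from the question; numpy's interp performs the same lookup-and-interpolate step that the formula builds out of MATCH, OFFSET and FORECAST.

import numpy as np

# Sample data from the question; x must be sorted in ascending order.
x = [3, 5, 9, 11, 18, 20]
y = [50, 60, 120, 130, 90, 150]

# np.interp does piecewise-linear interpolation between the bracketing points.
print(np.interp(4, x, y))   # 55.0, matching the 4 => 55 example
print(np.interp(19, x, y))  # 120.0, halfway between 90 and 150
print(np.interp(20, x, y))  # 150.0, exact match at the upper bound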
About Gregory Blue I am an observer. When I enrolled in art school, we were taught that in order to paint, you must first learn how to see. I have spent most of my life observing the beauty in the world around me and studying the colors, patterns and textures revealed by the light of the sun. When we take the time to look closely, to look with the awe and wonder of a child, we come to understand our connection to the world. I have always enjoyed being outside, hiking in the woods, walking along a frozen stream or cresting a hill in an open field. I spent a great deal of my youth hiking the mountain trails of central Pennsylvania. When I moved to Bucks County, I was captivated by the light and color along the towpath of the Delaware Canal and the river. I began to paint plein air and felt as though I was walking in the footsteps of the New Hope Impressionists, kindred spirits of a bygone era. Painting has become an essential part of my life. I cannot imagine what it would be like not to make paintings. It is both a satisfying and frustrating endeavor, but I cannot not do it. At whatever compromise, I have always maintained a studio, a place to work where I am able to cajole the images in my imagination out onto the canvas. It is a constant experiment of mixing colors and making marks. Out of necessity, I adapted my work process to my life. The demands of a growing family impacted the time I could devote to pursuing my work, and so paintings were conjured from memories, photographs, sketchbooks and drawings. Landscape was still the foundation, but my work grew more simplified, and my colors more expressive. Life took an unexpected turn for me, and in the wake of the turmoil, I was no longer able to conceive the vivid color and dynamic compositions of my memories. I was distracted- my creative side clouded over and the emotional trauma of events changed everything about the life I had known. Painting became a struggle. It has taken more than a decade and along the journey, I found my way back to the beginning. Now, the painting reflects what I have learned along the way; the focus on light and color still entices me, and through it all, my love for the landscape continues to serve as the through-line. My daughter told me of a wonderful place to hike and walk not far from our home, and I was introduced to Stroud Preserve; over 500 acres of protected open space. It has become a sanctuary, a reconnection to the natural world that has always been the haven for my imagination, a place where I can think, the place where I feel most at home. The purpose of my painting is to share my vision of the world I observe. It is one that many of us see but rarely stop to experience. My hope is to express the whole of my experience of that moment in time with you. I strive to present that vision in much the same way as we would have a conversation. My work is an impression, intentionally incomplete to enable you to fill in the blanks. It is a process that can shift and change indefinitely with every viewing. In a world turned upside down by a pandemic, divided by political turmoil, and racked with extreme weather and natural disasters, many of us have returned to nature as a remedy. Spending time under the open sky and among the trees is grounding and helps us live mindfully in each moment. I hope my work will inspire the same awareness and appreciation for our world and the natural beauty that surrounds us all. 
It is that spirit of appreciation and gratitude that inspired me to partner with the Natural Lands Trust. I am donating 10% of the purchase price of all the work inspired by Stroud Preserve in support of their ongoing commitment to protecting open land for future generations.
https://gregoryblue.com/about/
Q: Remove dictionary values based on regex?

I have the following dictionary in Python

dict1 = {"key1": 2345, "key2": 356, "key3": 773, "key44": 88, "key333": 12, "key3X": 13}

I want to delete keys that do not follow the pattern "xxx#" or "xxx##". That is, three characters followed by a one-digit integer or a two-digit integer. Using the above example, this is:

new_dict = {"key1": 2345, "key2": 356, "key3": 773, "key44": 88}

For one or two keys, the way I would create a new dictionary would be with a dict comprehension:

small_dict = {k:v for k,v in dict1.items() if k not in ["key333", "key3X"]}

However, how would I use regex/other string methods to remove these strings? Separate question: What if there's a special exception, e.g. one key I would like to keep called "helloXX"?

A: This should match all the keys in your example as well as your exception case:

import re

new_dict = {k:dict1[k] for k in dict1 if re.match(r'[^\d\s]+\d{1,2}$', k)}

Using a new example dict with your exception in it:

>>> dict1 = {"key1": 2345, "key2": 356, "key3": 773, "key44": 88, "key333": 12, "key3X": 13, "hello13": 435, "hello4325": 345, "3hi33":3}
>>> new_dict = {k:dict1[k] for k in dict1 if re.match(r'[^\d\s]+\d{1,2}$', k)}
>>> print(new_dict)
{'hello13': 435, 'key44': 88, 'key3': 773, 'key2': 356, 'key1': 2345}
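If you need the literal "exactly three characters, then one or two digits" rule from the question, plus an explicit exception, a slightly stricter pattern combined with an allow-list works. This is a small sketch, not part of the accepted answer: the dict is the question's sample with a hypothetical "helloXX" entry added, and the keep_keys set is made up for illustration.

import re

dict1 = {"key1": 2345, "key2": 356, "key3": 773, "key44": 88,
         "key333": 12, "key3X": 13, "helloXX": 7}

# Exactly three non-digit, non-space characters followed by one or two digits.
pattern = re.compile(r'^[^\d\s]{3}\d{1,2}$')

# Hypothetical allow-list of keys to keep regardless of the pattern.
keep_keys = {"helloXX"}

new_dict = {k: v for k, v in dict1.items() if pattern.match(k) or k in keep_keys}
print(new_dict)
# {'key1': 2345, 'key2': 356, 'key3': 773, 'key44': 88, 'helloXX': 7}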
Pencor Services, Inc. Penn’s Peak, Jim Thorpe, PA ARM performed a wind energy feasibility study to assess the viability of siting a commercial-scale wind energy installation to supply electricity to multiple facilities operating at Penn’s Peak located in Jim Thorpe, Pennsylvania. Site Suitability and Wind Resource Definition ARM evaluated the available land area, exposure to the wind, existing land uses, and proximity to residences and property boundaries. Based on this initial assessment, ARM determined that a single wind turbine would be able to meet the onsite electrical loads. Once the turbine location on the site was determined, ARM estimated the wind resource at the site, based on the site’s exposure, topography, elevation, and upon an analysis of existing regional climatological data. Preliminary Energy Production Estimation Using the estimated long-term average wind speed and site suitability information developed from the previous task, the annual energy production potential from selected wind turbine models was determined using wind modeling software that is licensed to ARM. Transmission Options and Interconnection Viability ARM examined the electrical transmission and interconnection options for delivering the wind-generated power to the Penn’s Peak facilities. This analysis involved determining the necessary transmission voltage, identifying the transmission equipment (e.g., utility poles and cables), and evaluating transmission corridor feasibility. Environmental Due Diligence A preliminary environmental and ecological evaluation was conducted. This evaluation consisted of determining the potential presence of State or Federal threatened or endangered species or habitats of concern. Potential wetlands that could conflict with proposed wind turbine locations or transmission corridors were identified. Federal Aviation Administration Screening ARM used the FAA Radar Impact Pre-screening Tool to assess the site rating relative to potential impacts to air defense and homeland security radar systems and whether an aeronautical study is required. Engineering Feasibility Assessment ARM’s civil engineers examined the feasibility of constructing a commercial-scale wind energy installation at the prospective site. ARM then provided comments related to the site’s suitability to accommodate wind turbine foundations, access roads, crane paths, crane pads, and electrical infrastructure. Economic and Financial Viability Analysis A preliminary financial analysis was prepared to assess the economic and financial viability of the proposed wind energy project based on a net metering model. Zoning Issues ARM evaluated the local zoning ordinance and outlined the tasks associated with successfully acquiring zoning approval. Recommendations ARM recommended a commercial-scale wind energy system that is technically and financially feasible for the project site. The economic analysis indicated that a 750 kW wind turbine is the size that provides the best financial return. Additionally, this turbine size should provide enough electricity to meet most, if not all, of the electricity loads at the site. ARM has also completed a 12-month wind resource assessment at the site.
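As a rough illustration of the kind of preliminary energy-production and load-coverage arithmetic described above, here is a back-of-the-envelope sketch. Only the 750 kW turbine size comes from the study; the capacity factor and annual site load are placeholder assumptions, not figures from ARM's analysis.

# Back-of-the-envelope annual energy estimate for a single wind turbine.
rated_kw = 750            # recommended turbine size from the study
capacity_factor = 0.25    # assumed; the real value depends on the measured wind resource
hours_per_year = 8760

annual_kwh = rated_kw * capacity_factor * hours_per_year
print(f"Estimated annual generation: {annual_kwh:,.0f} kWh")  # ~1,642,500 kWh

assumed_site_load_kwh = 1_500_000   # hypothetical annual electricity use at the site
coverage = annual_kwh / assumed_site_load_kwh
print(f"Share of assumed site load covered: {coverage:.0%}")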
https://armgroup.net/projects/pencor-services-inc/
It really stands out. Lovely. Sunny Comments Really, really good, super painting Sarah and love your brushwork Love your style and sense of light Sarah. Love the flowers and jug. Wow Sarah this is rather good and beautifully painted. Beautiful! Those Sunflowers droop over so well and contrast with the blue vase, they bow their heads and shed light from their sun! Thanks for the comments A beautiful painting, Sarah! Gorgeous painting This is beautiful Sarah, I lovely the light and colours, and your brushstrokes too. A reworked oil on canvas board 12x9in. I wasn't happy with the original painting, so changed some of the colours to create a more harmonious and warmer painting.
https://www.painters-online.co.uk/gallery/sarahjstanley/oil-paintings/430276/
Devaluation Starts With Us Over the “Christmas Break” I had an insightful conversation with my 9 year old daughter about devaluation. It started simply about a penny. Yes, that little copper coin so dissed for only being one cent. The conversation centered around kids at my daughter’s school saying that pennies are worthless. As we counted (and rolled) the many pennies we had stashed away over 2019 I asked my daughter: “So are pennies really worthless?” She said, “They’re not worthless dad, they’re just worth one cent.” I asked, “Are you sure?” thinking now’s a chance to discuss the value of money and investing. She asked, “How can a penny be worth more than a penny?” I asked, “How many pennies in a dollar?” “100” she replied. “How many pennies in $5 dollars?” “500” she replied. “Do you think with $5 dollars we could go to Walmart and buy some lemons, sugar and cups?” I asked. “Yeah” she said…”but, why would we do that?” “How much can you sell a cup of lemonade for?” I prodded. “$.50” she said. Knowing that her soccer season just ended a month ago I asked: “How many cups could you sell in a day at one of your soccer matches at the park?” “50” she said. “How much money would you make at $.50 a glass selling 50 glasses?” “$25 dollars” she said quickly, excited to use her multiplication skills. “Why?” she asked. Getting to the root of the point, “How much did we make off that $5 dollars in worthless pennies?” “$20 dollars” she said. “Not so worthless, huh?” I stated. “Nope dad, I get your point.” she said. A simple conversation about basic investing principles got me thinking about business relationships. How many of our relationships have turned into “unintended pennies”? – relationships we just take for granted since we don’t invest in them anymore. The “one cent” only mentality. Our team members, clients, colleagues, networking relationships, vendors, family, friends…those relationships that haven’t even blossomed yet – how many have been tarnished by our devaluation over the years. It is interesting that devaluation starts with us. We have to tarnish the value of a relationship to start down the road to devaluation. Either we invest in our relationships or the value goes away. Think through your relationships heading into 2020 and ask yourself – “Am I stepping over pennies that could be invested in my future?” and “Am I the penny that needs to be polished to help invest in someone else’s future?” How much are you going to invest this year in relationships that have been tarnished by your devaluation? I’m reassessing my relationships and I’m looking for those pennies I’ve allowed to tarnish and my goal is to polish them up and invest in them wisely. I know the returns in friendship, collaboration and yes, money, will be there in 2020.
https://www.xpleomedia.com/a-shift-in-mindset-breaking-legacy-2/
Q: Logistics of a hovering watercraft in a fantasy setting So in my fantasy world, there exists a substance that (when applied to the surface of an object) repels water in the same way a very strong magnet would. This allows boats to hover above the surface of the water. The people that use this substance are a small tropical island culture, and use their boats to hunt large creatures that can manipulate the water to defend themselves. My question is this: what would be the optimal design for a boat that uses this substance in terms of balance, propulsion, and handling large waves? Assume pre-industrial technology, but any materials you’d like, since I haven’t nailed down that part of the world yet anyway. A: Strangely enough, your boats aren't going to be THAT different from the boats we already have in terms of propulsion and wave handling. Balance is a completely different matter, but let's deal with each of these criteria one at a time. Propulsion You really have two options with your level of technology and they're the same options that everyone else had; oars and sails. Depending on how far above the water you sit though, oars are problematic because they have to be longer to get into the water meaning you need to be stronger to pull the oar. For open sea journeys this is an issue because putting the rowers close enough to the water to make oars effective is counter productive to being able to handle large waves, where you want high watertight 'walls' on the side of your boat (more on that later) so I'd stick with sails. Sails could be managed by the people you describe technologically, especially as pre-industrial doesn't mean pre-science. There are plenty of examples in history of pre-industrial sailors who used sophisticated means to get their boats from one place to another in terms of both navigation and wind management so this is the best option. Just bear in mind, these kinds of large sailing vessels were a massive expenditure prior to industrialisation and they would be for your world as well. Large Waves Ultimately the best defence against high waves is high walls. On conventional ships, they sat very tall in the water (with massive ballast reserves in the hull to keep them upright) so as to survive high seas. Your floating boat will need the same walls, so your boat will still have a number of decks on it with walls on the side to stave off wave strike. Balance (and Navigation because they're related) These boats will have flat bottoms. It's that simple. You don't need to keep the boat from drifting in current because your boat floats above it in the first place, so there's no need for deep hulls. You'll need keels (and rudders) though, because the sails are only part of the propulsion equation in that boats actually rely on some resistance against the hull to change direction. Rudders for instance need contact with the water to reorient the boat. So, your boat probably has a large rectangular square bottom to maximise the repulsion area against the water, hence maximising the balance of the boat. BUT, it also has a series of long keels that dip into the water (not coated with the repulsive material) that help with steering, and at least some of these will be on swivels that can be controlled from within the boat for steering as a rudder. Ideally, these would be on the outer edges of the flat surface to preserve stability, like a catamaran. As Aron points out in comments, this may not work. 
Certainly, the resistance that a keel can generate is minimal by comparison to a hull, so the idea that you could successfully do anything other than use a rudder-like control surface is in doubt and, as such, should be taken as speculative. So, your boat would look like a very large floating bathtub, with keels and sails below and above it respectively. That will allow it to balance, survive high seas, and move & navigate on the open sea.

A: Alright, I think a flat hull will be by far the most stable design. You want a constant repelling force, and a flat wide hull gives you stability against waves. At the same time you do want a somewhat high front against waves.

A - A tall front to steer against waves. This also gives you a vantage point from which to engage your prey if they're large enough. Obviously this comes equipped with a railing; in fact your whole boat will be.

B - Back/front view. A wide flat hull for stability with a high railing on the main deck.

C - I was considering using your water repellent for some sort of ingenious propulsion mechanic before I realized you need to push it down into the water with more force than the boat weighs. But maybe it will inspire you or someone else to make it work with some force multipliers.

D - An upgrade could be extra hulls, not unlike a catamaran. The idea being you place those far and wide to give additional stability against waves from the sides. This will require some very strong water repellent to work. Extra strength comes from joining them with the mast against the central line of the ship.

E - Sails will be your best propulsion system. With no contact with the water you should be able to get some frightening speeds.

Now for more specific details I turn you to regular ship design. You might want more storage space than a single space between your hull and the main deck. You probably want regular rudders in the back to aid with steering your vessel; regular rudders should work, uncoated. You might want multiple sails; again, regular sources on sail design should have you covered.
As long as you can keep your means of moving and navigating intact, your craft should survive just about anything. As other answers suggest, the wider base will provide stability and since you have basically a sink proof material lining the bottom it won't really matter if the craft is totally swallowed by a wave. As long as the bottom is heavier than the top, the craft will rise back up from underwater. Maybe some kind of air bladder/attic area(s) sealed into the upper/highest parts of the ship. As long as your decks drained you could temporarily be fully submerged and probably come out ok.
Our Society and Higher Education

Colleges and universities, and the system of which they are a part, face extraordinary pressures from society and from within the institutions themselves and their communities of learners and teachers. These forces include rapid demographic change, shrinking state budgets, advances in information and telecommunications technology, globalization, increasing market competition, and the rise of knowledge-driven economies in which learning is treated as a commodity. The changes are fundamental, raising basic questions about the public purposes of higher education and its accountability in addressing the pressing problems of communities and the wider society. Any one of these challenges would be significant on its own; together they add enormously to the complexity and difficulty of maintaining or advancing the core work of serving the public good.

Through discussion forums on education we can agree on this much: strengthening the relationship between higher education and society will require a broad-based effort, one that encompasses not only individual institutions, departments, and associations but education as a whole. A movement for change, rather than a narrowly "organizational" approach, promises a more far-reaching response. Such a movement will require alliances, networks, and partnerships with a widening range of education stakeholders inside and outside the academy. The Common Agenda is specifically designed to support the vision of a "movement" for change by encouraging the emergence of strategic alliances among individuals and organizations that embrace democratic ideals through educational methods, relationships, and service, and that care about the role of higher education in advancing society.

Common agenda

The Common Agenda is meant to be a "living" document and an open process that guides collective action. As a living record, it is a collection of focused activities developed to advance the civic, social, and cultural roles of higher education in society. Jointly formulated and pursued, the Common Agenda respects the diversity of activities and programmatic focus of individuals, organizations, and networks, while recognizing their common interests.

Issues

The benefits of higher education have come to be equated with getting a "good job" and receiving a "higher salary." Public and higher education leaders need to discuss critically and honestly the place of higher education in society.

Objectives

Develop this shared language within and across institutions, and broaden its themes in discussion with the wider public. Share scholarly work on, and responsibilities toward, the public good. Gather scholarship concerning the public good, examine its themes, and identify the remaining questions. Promote national awareness of the importance of higher education to the public's well-being through sustained communication efforts.
https://theppsc.com/2021/07/04/our-society-and-higher-education/
Houdini was a great escape artist, but not so good a conjurer. (04/09) Houdini was a lousy magician. I know that this one statement will probably get me hate mail from the four corners of the globe, but it's the conclusion you reach after reading Jim Steinmeyer's book Hiding the Elephant. Though most people would not recognize his name, Steinmeyer is one of the most brilliant, modern creators of magic tricks (he's the guy who taught David Copperfield how to make the Statue of Liberty disappear). Steinmeyer has built tricks for some of the best in the business, including Lance Burton, Doug Henning and Rick Thomas. In addition to making magic tricks, however, Steinmeyer is also a student of magic history. His book tells about the golden era of magic starting in the mid-19th century and running through the 1940s. Houdini lived in the middle of this time period, from 1874 to 1926. What interested me in Steinmeyer's book, however, was not just the history of magic, but some parallels that can be drawn from it to learning and education. I think the difference between a great teacher and a mediocre teacher is a lot like the difference between a great magician and a mediocre magician. Most teachers have the same basic knowledge to give to students in a similar way to how all magicians have the same basic illusions to display. What makes a magician great is how he presents his illusion. The same is true in teaching. Before we get into that discussion, however, we need to go back to Houdini. Despite what I've said up to this point, after reading Hiding the Elephant you don't come to the conclusion that a Houdini show was dull or boring. Not at all. As an escape artist, a subset of magic he practically invented, he had no equal. As a child Orson Welles, who wasn't a bad magician himself, saw Houdini perform his escapes and pronounced them "thrilling." However, when Houdini's act moved on to conjuring - that is making things appear and disappear - Welles was disappointed. "It was awful stuff," he recalled. Why did Houdini have such a hard time with this type of magic? It required a finesse which he simply didn't have. Steinmeyer writes in his book "Watching him play the part of an elegant conjurer was a bit like watching a wrestler play the violin." Perhaps the best example of Houdini's problem in this area is the story of the vanishing elephant. New York's largest theater in 1918, the Hippodrome. Houdini was well aware of his shortcomings as a magician and very much wanted to show the public and his fellow prestidigitators that he was a world-class conjurer. In 1918 he got his chance. Early that year Houdini was engaged for 19 weeks as a feature player at New York's Hippodrome. At that time the Hippodrome was the largest theater in the city, seating almost 6,000 people. The immense stage featured lavish spectacles complete with circus animals, diving horses, dancing girls and choruses. The entire stage could be turned into a massive pool that was sometimes used to re-stage famous naval battles. According to Houdini, it was while he was watching the elephants perform at the Hippodrome that he had a wonderful idea. One of the most impressive feats of the conjurer was to make a live animal appear or disappear. Because magicians were constantly traveling from location to location, the creature used was usually something small like a bird, rabbit or dog. Some of the more impressive tricks, however, had involved a larger animal like a donkey.
Here Houdini would have elephants available to him for the entire 19 weeks he was working at the Hippodrome. Suppose he made one of them disappear? He would be considered the greatest conjurer of all time! It took Houdini several weeks to work out the details (in conjunction with Charles Morritt, another magician and illusion designer), but by the time he debuted at the Hippodrome the trick was ready. Houdini and Jennie in a publicity still. Houdini would appear on the stage and Jennie, a 10,000 pound elephant, would be led out by her trainer. Houdini would introduce her as "The Vanishing Elephant" while 15 assistants rolled a giant box out onto the stage. The box was eight feet high and eight feet wide and probably about fourteen feet in length. It was also elevated more than two feet in the air so that the audience could see underneath it and know that the elephant wasn't going through a trap door in the floor. Houdini would have the small end of the box turned to face the spectators. He would then open doors on either end so they could see through it and be assured it was empty. The box would then be turned so the long part faced the audience and a ramp would be put up by the door. The trainer would lead the elephant inside and the doors would be closed. The assistants would then turn the box again so it faced the audience and the doors would be opened. The spectators looking through could see out the other side. No elephant. It had apparently vanished. The Hippodrome being of such a colossal size, only those sitting directly in front got the real benefit of the deception. The few hundred people sitting around me took Houdini's word for it that the "animile" had gone - we couldn't see into the box at all! The now demolished Egyptian Hall in London exhibited Stodare's "Sphinx" illusion. Compare this with the "Sphinx" illusion presented by an earlier Victorian magician named Colonel Joseph Stodare. Stodare was working a theater called Egyptian Hall in London. The theater had once been a museum and was adorned with Egyptian sculpture and hieroglyphics. When a new illusion mechanism became available to him, Stodare decided to work the theater's theme into the trick. For weeks before unveiling his illusion, Stodare placed cryptic ads on the front page of the London Times like "The Sphinx has left Egypt," and "The Sphinx has arrived and will soon appear." When he finally premiered the trick, the hall was packed with the curious. When the curtains were opened the audience was greeted with a small, round, thin, bare three-legged table with no tablecloth. Stodare would walk onto the stage carrying a fabric covered traveling case about a foot high, wide and deep. After placing it on the center of the table, he would open the hinged doors in front to reveal what appeared to be a sculpture of a head in Egyptian headdress. Stodare would move to the edge of the stage, then command, "Sphinx, awake!" The eyes of the sculpture would pop open and look around, slowly appearing to become aware of its surroundings. Suddenly it became clear to the audience that they were viewing a disembodied human head. Stodare would ask it questions and the Sphinx would answer. The head finally gave a short speech and closed its eyes. Stodare would then return to the table, close the doors and spend a moment reflecting on the mysterious nature of the Sphinx. When he reopened the box the head was gone, replaced with a pile of ashes. 
Stodare then carried the box to the edge of the stage so the audience could get a better look at it. The Egyptian Hall rang with applause and the next day the papers were filled with acclaim. "The Sphinx is the most remarkable deception ever included in a conjurer's programme," proclaimed the Daily News. The following month Stodare found himself performing for royalty at Windsor Castle. Why did Stodare's illusion work so well and Houdini's didn't? From a technical point of view, both tricks were amazing for their times. One, however, created a sense of wonder with the audience and the other didn't. A drawing of Stodare's Sphinx Illusion. First, Stodare got people engaged in thinking about the illusion before he did it. His cryptic ads caught their imagination. They made people think about the sphinx. What did they already know about it? What did it mean that it was coming to London? Secondly, Stodare's presentation had a story arc. He placed the sculpture onto the table. He awakened it. He engaged it in conversation. Finally he closes his presentation reminding his audience about the mystical nature of the sphinx and when he opens the box again it has turned to ashes. Finally, Stodare not only made sure the entire audience could clearly see the Sphinx during the performance, at the end of the performance he carried the open box to the edge of the stage so they could get a better look. This way they could see it with their own eyes and be absolutely sure it was empty. This differed from what Houdini did with the elephant. He did plenty of advertising, but it didn't really entice people to think about elephants the way Stodare's cryptic statements did about the Sphinx. Houdini had no story behind his trick; he simply shoved the elephant in the box and presto it was gone. Finally, and most importantly, lots of people couldn't really see the elephant disappear. They only knew it had because Houdini had told them so. What does this have to do with teaching? Everything. A good lesson is like a good magic trick. As Stodare enticed his audience with his riddle-like statements even before they got in the theater, a good teacher must grab the student's attention at the beginning of the lesson. I've heard a couple of terms for this, but where I got my degree they called it "the hook." Hooks can be as simple as an intriguing story, joke or riddle or as complicated as a fun quiz or video clip. The key is that they grab the student and are somehow linked into your subject. The best hooks also "activate" the student's current knowledge. That is, they remind the student of what he already knows about a subject so he will be more prepared to learn the new material. Jim Steinmeyer's book on the history of magic tells you something about education too. Stodare's trick had a whole story to it. A good lesson, in the same way, is like a story. It has a beginning, a middle and an end. There is a flow to it. One item logically follows another in quick succession. A good lesson may even have elements of drama in it. While this is easier when dealing with social studies and history which lend themselves to good stories, it is also possible with the sciences. A couple years ago I integrated a 40-year-old science fiction TV drama about scientists lost in time into learning the mechanics of reading clocks and calendars for my 4th grade class (Time Tunnel to Fourth Grade). It was one of the best units I ever created and one of the most fun and memorable for my kids. 
A good storyteller also makes sure that he gives his audience the necessary background information needed to understand his tale before launching into it. He needs to make sure his listeners aren't lost from the get go. This kind of thing sounds obvious, but when doing a lesson many teachers find they have assumed their students have come to class with more background information than they actually have. It is very easy for a 40-year-old teacher to forget that his 4th grade students weren't even alive a decade ago. They might well know that Hillary Clinton ran for President, but not be aware that her husband, Bill, actually was the president back in the 90's. Finally, in a good lesson, the student must be able to see (or discover) for themselves the point of it. In education we call this "constructing your own knowledge." Stodare's audience was able to construct their own knowledge about the Sphinx disappearing because they saw it for themselves, not because he told them it disappeared. The same idea works in a classroom. For example, in a science lesson a good science teacher doesn't just tell his class that you can separate hydrogen and oxygen from water by electrolysis. He demonstrates it for them in front of their own eyes. Even better, if he has the time and the equipment, he lets them do the experiment and figure out how the reaction works by themselves, rather than just giving them the information. People remember things they have done and have figured out on their own. Things they've just heard about they often forget. A fall leaf can always be a source of wonder to a student if presented right. A lesson done right, like a magic trick done right, leaves the audience with a sense of wonder. In his book, Steinmeyer tells a story about how as a boy he was fascinated with why the leaves of tree turned red in the fall. When he reached the 4th grade, his teacher told him that the chlorophyll in the leaf dies in the autumn, leaving a bright color. "I appreciated that the mystery had been completely solved," he writes, "and I could stop worrying about it." However Steinmeyer also writes that "Unfortunately, science often serves the purpose of actively teaching us to stop wondering about things, causing us to lose interest." I submit that this is true of science, and learning in general, only if it is done wrong. Good science, like good learning, answers questions at the same time it poses more for the student to think about. Wondering what it would look like to ride a bicycle at the speed of light helped Einstein to create his theory of relativity. Wondering why apples fall helped Newton think about how the laws of gravity and motion might work. Like good magic, good teaching should always leave the viewer with a sense of wonder.
http://www.unmuseum.org/notescurator/magicteaching.htm
Ghirardelli Peppermint Hot Chocolate Cookies make one festive holiday dessert, perfect for socially distanced gift-giving. Prep Time 15 mins Course Dessert Servings 40 cookies Ingredients 1/2 cup granulated sugar 1/2 cup light brown sugar 1 cup shortening (Crisco)* 1 tsp vanilla extract 2 large eggs 2 cups flour 1 pouch Ghirardelli peppermint hot chocolate mix (This is approximately 1/4 cup) 1 tsp salt 1 tsp baking soda 2 (3.5 ounce) Ghirardelli peppermint bark bars** Instructions Add 1/2 cup granulated sugar, 1/2 cup light brown sugar, and 1 cup shortening to a mixing bowl. Cream sugar and shortening together with a hand mixer until well combined, approximately 2 minutes. Add 1 teaspoon of vanilla extract and 1 large egg. Mix until combined. Add the second large egg and mix until combined. Combine 2 cups flour, 1 package of Ghirardelli peppermint hot cocoa mix, 1 teaspoon of salt, and 1 teaspoon of baking soda in a separate bowl. Stir. Add the dry ingredients to the mixing bowl methodically. I add the dry ingredients in four equal-ish additions. After each addition, mix until just combined with a hand mixer. Repeat until the dry ingredients are completely mixed in. Chop up those Ghirardelli peppermint bark bars, roughly. Try to keep eating bits and pieces of it to a minimum. Add the chopped Ghirardelli peppermint bark to the mixing bowl, and mix by hand with a spoon until just combined. Cover with plastic wrap, pushing the plastic wrap down tightly to the cookie batter to keep the air out. Refrigerate for at least 4 hours or overnight. Preheat the oven to 325 degrees Fahrenheit. Remove the cookie dough from the refrigerator. Using a cookie scoop, scoop 12 cookies onto a baking sheet, leaving a generous amount of space between each (or however many will fit; I was able to fit 12 at a time on a large aluminum baking sheet). Bake for 13-14 minutes. Remove the cookies from the oven and let sit on the baking sheet for 2-3 minutes. Remove each cookie with a spatula and let cool on a cooling rack. Repeat until you've used up all of the dough. Makes 40 delicious and festive AF cookies. Notes *You can use butter instead, but I've only made this recipe with Crisco. **You can use plain peppermint bark, dark chocolate peppermint bark, or a mix of both :) Keyword christmas cookies, cookies, dessert, ghirardelli, holiday cookies, hot chocolate, peppermint
https://www.midwexican.com/wprm_print/recipe/1442
The Nash equilibrium [Nash] is applied. The "market" is represented by a collection of independent servers. Each server tries to maximize its profit by setting optimal service prices and optimal server rates. Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row. The same holds for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns. This said, the actual mechanics of finding equilibrium cells is obvious: find the maximum of a column and check whether the second member of the pair is the maximum of the corresponding row. If these conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. Stability. The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria. A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold: the player who did not change has no better strategy in the new circumstance, and the player who did change is now playing with a strictly worse strategy. If these conditions are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed. In the "driving game" example above there are both stable and unstable equilibria. If either player changes their probabilities slightly, they will both be at a disadvantage, and their opponent will have no reason to change their strategy in turn. Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from the statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium. The Nash equilibrium defines stability only in terms of unilateral deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition. In fact, a strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium. A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE) occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreements to deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE. CPNE is related to the theory of the core. Finally, in the eighties, building with great depth on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens-stable equilibria satisfy both forward induction and backward induction. Theorem 1 (Nash) There exists a mixed Nash equilibrium. Here is a short self-contained proof. We will define a function Φ over the space of mixed strategy profiles. (Nash) Every finite game has a mixed strategy Nash equilibrium. Implication: the matching pennies game necessarily has a mixed strategy equilibrium. Why is this important?
Without knowing the existence of an equilibrium, it is difficult (perhaps meaningless) to try to understand its properties. If p_i(s, α) ≤ 0 for every i and every s ∈ S_i, then α must be a Nash equilibrium. This concludes the proof of the existence of a Nash equilibrium. Reny's equilibrium existence theorem states that every compact, quasiconcave, better-reply secure game has a Nash equilibrium in pure strategies. Theorem 1 (Reny) If G = (X_i, u_i), i ∈ I, is compact, quasiconcave, and better-reply secure, then it possesses a pure strategy Nash equilibrium. The existence of a Nash equilibrium for two-person finite zero-sum games is a linear programming problem. The existence of a symmetric equilibrium for a two-person finite game with symmetric payoff matrices is a quadratic programming problem. Game Theory: Lecture 5, Existence Results. We start by analyzing the existence of a Nash equilibrium in finite (strategic form) games, i.e., games with finite strategy sets. Theorem (Nash) Every finite game has a mixed strategy Nash equilibrium. Implication: the matching pennies game necessarily has a mixed strategy equilibrium.
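The cell-checking procedure described above is easy to mechanize. Below is a minimal sketch (not part of the quoted notes): it scans a two-player payoff table and reports every pure-strategy Nash equilibrium. The payoff numbers and the table layout are illustrative assumptions of mine, not the game the notes refer to.

    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // payoff[r][c] = {row player's payoff, column player's payoff}.
        // Example numbers only; the table from the notes is not reproduced here.
        std::vector<std::vector<std::pair<int, int>>> payoff = {
            {{25, 40}, {5, 10}, {15,  5}},
            {{40, 25}, {10, 5}, { 5, 15}},
            {{10, 10}, {5,  5}, {30, 30}},
        };
        const int rows = static_cast<int>(payoff.size());
        const int cols = static_cast<int>(payoff[0].size());

        for (int r = 0; r < rows; ++r) {
            for (int c = 0; c < cols; ++c) {
                // Row player deviates by switching rows: is the first payoff
                // the maximum of column c?
                bool bestForRow = true;
                for (int r2 = 0; r2 < rows; ++r2)
                    if (payoff[r2][c].first > payoff[r][c].first) bestForRow = false;
                // Column player deviates by switching columns: is the second
                // payoff the maximum of row r?
                bool bestForCol = true;
                for (int c2 = 0; c2 < cols; ++c2)
                    if (payoff[r][c2].second > payoff[r][c].second) bestForCol = false;
                if (bestForRow && bestForCol)
                    std::cout << "Pure-strategy Nash equilibrium at cell ("
                              << r << ", " << c << ")\n";
            }
        }
        return 0;
    }

A cell survives only if neither player can gain by deviating unilaterally, which is exactly the "maximum of the column, maximum of the row" test described in the passage.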
https://lanagaqelizipidat.attheheels.com/nash-equilibrium-existence-44345ap.html
Based on this schedule, the total cost of a DeSales education for the 2019-20 school year is $14,434. This total includes the cost of apps and an iPad, which is retained by the student upon graduation. With the exception of the non-refundable deposit, this total cost may be divided into equal payments over the whole year. This option helps relieve the burden of having to make large payments at the same time that other fees come due. The school requires a $75 non-refundable deposit (as listed above in the Tuition Breakdown) that must accompany applications for enrollment or re-enrollment. The school will not consider a student to be enrolled until all forms within the application for enrollment or re-enrollment are complete and the deposit has been received. The school's tuition covers the cost of locker rental, administrative record-keeping, iPad/laptop usage, and standardized testing. Some additional fees are assessed for items such as parking permits, admittance to social and extracurricular events, school lunches, field trips, Immersion Week activities, participation in sports and retreats, and Advance Placement (AP) classes. Single Payment Plan – This plan requires payment made-in-full on or before July 1 of each year. Families who choose this option make their one-time payment directly to the school and receive a discount if the payment is received in the school office no later than July 1. Monthly Payment Plan – This plan requires payment to be made in twelve installments, beginning in June and completing in May of the following year. The school utilizes an automatic withdrawal payment plan that may deduct from either a checking or savings account. The monthly payment plan requires that the family sign a Payment Debit Authorization Form. The school assesses an annual fee for this service and automatically withdraws the fee directly from the designated checking or savings account. The school is required by the Archdiocese of Louisville to assess a non-parish fee of $50.00 for any student whose family is not a registered and contributing member of an Archdiocese of Louisville parish. Please note that we do not seek this information or make this determination but rather, it is a matter administered by the Diocese.
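For readers who want to see the installment arithmetic implied above, here is a small illustrative calculation: the published 2019-20 total, less the non-refundable deposit, spread over twelve monthly withdrawals. This is a sketch of the description in the text, not the school's official billing formula, and it ignores the annual plan fee and any single-payment discount.

    #include <iostream>
    #include <iomanip>

    int main() {
        const double totalCost = 14434.00;  // published 2019-20 total
        const double deposit   = 75.00;     // non-refundable enrollment deposit
        const int    months    = 12;        // June through May

        // The deposit is paid up front, so only the remainder is divided
        // into equal monthly payments.
        const double monthly = (totalCost - deposit) / months;

        std::cout << std::fixed << std::setprecision(2)
                  << "Approximate monthly installment: $" << monthly << '\n';
        return 0;
    }

Run as written, this prints an installment of roughly $1,196.58 per month.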
http://desaleshighschool.com/admissions/tuition-fees
A FEW NIGHTS AGO I woke up to a strange but familiar smell. It was a scent that had woken me up in the middle of the night once before, almost a year ago: a warm, electric, incense-like presence. It was so overpowering I had to get out of the room in order to breathe fresh air. My partner at the time woke up and I said: can you smell that? It kept recurring so I asked some cunning men of my acquaintance what they thought of the situation. They both felt it was a daimonic spiritual presence, and what is more, it was apparently very keen to contact me. They suggested communication. The proprietor of the esoteric perfumery, The House of Orpheus, gave me some incenses to use which matched my description of the smell. Warm, solar, electric. On Halloween, I burnt the incenses and performed a rite of insight meditation to commune directly with this most aromatic of daimons. It revealed itself as a solar spirit. It gave me its name and its symbols. I recognised them from the Greek alchemical manuscripts which I had been translating at the time. But more specifically, it admonished me to take up certain practices that I had previously worked with but which had fallen into neglect. These practices were a series of daily solar meditations—invocations to Ra-Helios—that I used to perform before the rising sun. But the most important practice that I was specifically admonished to resume was not any external rite. It was the crux of inner alchemy itself: the cultivation of the golden elixir. Yogi practicing tummo (cultivation of inner fire). In Taoist alchemy, the golden elixir (jindan) is the vital force cultivated in the “field of cinnabar” (dantien): the storehouse of essence and energy located at the body’s centre of gravity. Frequently likened to an embryo, the golden elixir represents the nascence of our divine nature: the emergence of the immortal principle at the root of our very being. Its seat at the body’s centre of gravity correlates closely with the Indo-Tibetan practice of generating yogic heat through an “inner fire” cultivated just beneath the navel. This “inner fire” (Sanskrit caṇḍālī, Tibetan tummo) is generated by drawing the breath down through the subtle body’s serpentine channels, compressing them at the vital centre, and then radiating it through the body. Ultimately, this practice serves to purify the axis of being through which consciousness surpasses death. M. C. Escher, Self Portrait in Spherical Mirror, 1935. In the nights before this strange smell (and its daimonic reminder) returned to my life, I had two distinctly alchemical dreams. In the first dream I was adding a yellow, gunpowder-like substance to a large glass vessel containing a liquid substance. I sealed the vessel with a heavy glass stopper, but I’d put too much of the yellow powder in. When it reacted, it blew the glass stopper right through the roof. The next night I fell asleep reading, oscillating between the Latin text and English translation of the Rosarium Philosophorum—the “Rose Garden of the Philosophers” (a sixteenth-century alchemical text). I dreamt that I was holding a sphere of pure metallic gold. It was perfectly round, and polished to mirror-like clarity. It was heavy. It had very real gravity, and as I held it I could feel its profound weight. Aurum nostrum non est aurum vulgi. Rosarium Philosophorum, Frankfurt am Main, 1550. "Aurum nostrum non est aurum vulgi". In Chinese inner alchemy, the vital essence cultivated in the lower dantien is visualized as a golden ball of energy radiating like a sun. 
In these practices you “hold” the sphere of energy in your hands as you guide its radiant presence. Significantly, the energy generated has to be properly contained. Before you cultivate this energy in the vessel of your body, you must first stop the leaks in your vessel, otherwise the energy dissipates. The message of the two dreams was thus perfectly clear: You have a powerful mix. Seal it, hermetically. Otherwise you’ll blow the lid off. Sky-high. Seal the vessel and you will hold the golden elixir. As these pieces were falling together, I had been corresponding with Austin Coppock, an astrological colleague who was aflame with an alchemical influx of his own. Among other things, we were discussing the cross-cultural intricacies of eastern and western alchemies. Austin pointed out that the present planetary emphasis—the conjunction of sun and moon, as well as Mercury—were all currently congealing in the second decan of Cancer. This decan acts precisely as a philosopher’s womb—a vessel and furnace. And like the alchemical receptacle, it must be sealed to accomplish its purpose. “The inner alchemy of China”, he writes, “emphasizes the importance of locating these leaks within both the body and the psyche. These traditions describe emotionally errant states of mind as thieves, because they steal from us. These burglars steal our attention and our energy—the inner resources required for our great work”. While I was familiar with this idea from my own study of these traditions, it was clearly being brought to my attention via a series of obliquely connected events. Through synchronous channels, a singular message was being reinforced: the way of the golden elixir pivots on a fundamental vigilance. Inner and outer behaviours that dissipate energy and awareness must be uprooted. Rudderless thoughts, emotional sinkholes, fruitless behaviours and habitudes—all must be calmly yet rigorously relinquished in order to bring our power back to its rightful centre. The approach is essentially meditative, and yet it is a meditation free of all formal structure, for it permeates all aspects of life. When our very body and being becomes the vessel of the great work, every act, inner and outer, potentially reveals (or conceals) our primordially immortal nature. Aaron Cheak, PhD, is a scholar of comparative religion, philosophy, and esotericism. He is the author and editor of Alchemical Traditions: From Antiquity to the Avant-Garde (Numen Books, 2013), and founding director of Rubedo Press.
http://www.aaroncheak.com/news/2016/7/17/our-gold
QuestionWhat are some of the reasons why people would be afraid of heights and/or flying?Community AnswerIt can be the element of danger involved. Being so high off the ground can bring to mind the possibility of a a fall that kills. For many, it's easier and safer to avoid those situations in the first place than to be exposed to the anxiety it brings on. - QuestionHow do I cure my fear of heights quickly?Community AnswerYou probably can't. Try these steps and just don't let your fear stop you from doing something you want to do. - QuestionHow did I get a fear of heights?Community AnswerMost people instinctively have a fear of heights. It's an evolutionary protective mechanism. Heights can be very dangerous, so fear helps us to avoid them instinctively. - QuestionWhen I'm at the mall with my friends, I get scared riding the escalator. Should I stop riding it?Community AnswerI wouldn't. Since you seem to be able to tolerate it to some extent, just breathe slowly and focus your thoughts on something else, something positive and relaxing, or count your way through it. Tell yourself "I will be off this escalator before I can count to 50" and start counting. If you avoid everything you're afraid of, you'll be letting your fears run your life and missing out on a lot. - QuestionFlying doesn't make me anxious, but climbing a ladder does. What causes this?Community AnswerIt could be a semi-rational fear. A plane could seem very secure, tested and designed to be of highest safety -- there are other people working just for that safety. A ladder, however, could be nothing to be sure about. It's up to you how you check its placement and climb it. So, it may just be a simple lack of trust in your balancing skills. - QuestionWhy are people afraid of heights?Community AnswerIt can be many reasons. Maybe because they are scared that they will fall from that great height. Or maybe the can get dizzy by just looking down a cliff or something, and they prefer not to stick around and find out what will happen. - QuestionHow is it that some people are terrified of heights but don't mind roller coasters or zip lining?Community AnswerIt could be because when you are zip lining instead of falling straight down, you go tilted vertically a bit down. It is a different experience. - QuestionI want to be able to jump off a bridge into water with my friends, but anxiety always gets the best of me. What can I do?Community AnswerFirst of all, check the water to see if it's actually safe. So many young people have died jumping off high points into unsafe waters. If it's safe, then learn how to land in the water so as the minimize the discomfort of the impact. That should lessen your anxiety. - QuestionI am an organist and turning my back against the ledge of the choir loft is quite intimidating. What can I do?Community AnswerPerhaps try training your mind to forget about what is behind you as you play. - QuestionHow can I ride a roller coaster with acrophobia?Community AnswerMany people successfully ride roller coasters by simply covering or closing their eyes. You might also have to imagine yourself somewhere else/doing something else. - QuestionIs it accurate to say that people develop a fear of something that they've not had much experience with simply by observing other peoples' reactions to those things?Community AnswerIf a mother screams and jumps up on the furniture to escape a mouse, then the children may develop a fear of mice as well, even though mice are actually harmless. So yes, some fears are learned. 
Irrational hysteria can definitely be contagious. But scientists have proven that a fear of heights is innate in some people. (They have it from birth.) Even tiny babies will "startle" at the fear of falling before they are old enough to understand what heights are. It's a natural instinct that we have developed in the wild, to protect us from straying too close to high objects from which we could fall. - QuestionCan hypnotism cure my fear of heights?Community AnswerIt does work on some people, so if you are able to by hypnotized, give it a shot. - QuestionHow do I overcome my fear of heights while I am standing on the starting block for swimming?Community AnswerJust look out across the pool instead of down. - QuestionHow do I get over my fear of roller coasters?Community AnswerRoller coasters are often not as bad as you expect them to be, and they are completely safe to ride on. If you’re really scared, start small and work your way up, but just taking the plunge can help you get over your fear and enjoy roller coasters. - QuestionWhat do I do if I am in airborne school in the military and I have a fear of heights?Community AnswerFace your fear. That is the best way to get over it. - QuestionHow do I overcome my fear of heights when I'm climbing up a rock wall?Community AnswerIn the beginning, try not to look down, and when you get up to a high point, stay there for a while until you get acclimated. If you start climbing regularly, your of heights will go away. - QuestionWhat could be the reason why I am afraid of heights when I didn't used to be?Community AnswerMaybe you had a bad experience, or heard about someone else having a bad experience related to heights that you just don't remember. Sometimes the brain represses things we find disturbing. It's also kind of a mystery how these types of things develop; it could just the result of a change in your brain chemistry. - QuestionHow do I know what's caused my fear of heights?Community AnswerVery often there's no simple, linear, straightforward answer, and sometimes there's more than one answer. Also, researchers haven't reached a consensus, but there's a theory that it might not be learned or adopted, but innate - in which case you wouldn't have had anything to do with it. A therapist might be able to help you get to the root of the problem, though. - QuestionI have a huge fear of heights, but I love flying and roller coasters and I don’t mind tall buildings. Is that unusual?Community AnswerNot at all, this is most likely because you know you’re safe and won’t fall off. While looking down a cliff is more risky and might leave you feeling afraid. Unanswered Questions - When I went to the grand canyon, I was afraid of getting too close to the edge. Do I have acrophobia?
https://www.wikihow.com/Questions/Overcome-a-Fear-of-Heights
Juan Catalan loves baseball and the hit HBO show Curb Your Enthusiasm, which is not surprising considering that both had played a pivotal role in clearing him of a murder charge. A new Netflix documentary Long Shot, available for streaming beginning September 29, tells the unlikely tale of how comedian Larry David helped Catalan regain his freedom by furnishing his lawyer with outtakes from an episode of his show that placed the accused man at the Los Angeles Dodger Stadium on the day of the murder in 2003. Catalan's six-month-long nightmare began unfolding in August 2003 when the then-24-year-old machinist from Los Angeles was arrested for the drive-by shooting of 16-year-old Martha Puebla, who was a witness in another gang-related murder case, in which Juan's brother was a co-defendant. Pretty, pretty, pretty good: Juan Catalan, pictured left in 2004, owes his freedom to comedian Larry David (right) and his HBO series, which played a key role in clearing him of a murder charge Juan Catalan, left, and his attorney Todd Melnick hold screenshots from footage shot by the Curb Your Enthusiasm film crew at the Los Angeles Dodger Stadium on the night of a teen's killing, with which Catalan had been charged From the outset, Catalan maintained that he had an alibi: on the night of the murder, May 12, 2003, he took his six-year-old daughter to the Dodger Stadium to watch a game against the Atlanta Braves. Catalan produced ticket stubs from the game and offered on three separate occasions to undergo a polygraph test, but prosecutors rejected his alibi, refused to administer a lie detector test and pressed on with building a capital murder case against him, which, had he been convicted, could have landed him on death row. While awaiting trial in jail, Catalan recalled that there was a film crew shooting something during the Dodgers-Braves game on the fateful night of Puebla's murder. 'I wasn't supposed to be at that game, and that would replay in my head over and over,' Catalan says in the trailer for the new Netflix documentary. Martha Puebla, 16, was killed in San Fernando Valley, California, on May 12, 2003 (pictured), just 10 days before she was set to testify against a gang member On the same night, Catalan, then aged 24, took his six-year-old daughter to a Dodgers-Atlanta Braves game (pictured) at Dodger Stadium Wrongly accused: Catalan produced ticket stubs from the game and offered on three separate occasions to undergo a polygraph test, but prosecutors rejected his alibi During an appearance on Good Morning America in 2004, Catalan's defense attorney Todd Melnick recounted that his client remembered that 'Super Dave Osbourne' - a character created and played by comedian Bob Einstein - was part of the show that was being filmed at the ballpark, reported ABC News. Armed with that clue, Melnick reached out to the Dodgers, who, in turn, told him to contact the HBO production of Larry David's Curb Your Enthusiasm, which was in its fourth season in 2003. The attorney convinced David, the celebrated creator of Seinfeld, to help him look through unaired footage from the episode titled 'The Carpool Lane.' David explains in the Long Shot trailer that the crux of the episode was that his character picks up a prostitute so he could use the carpool lane and takes her to Dodger Stadium. Melnick proceeded to look through tape after tape recorded at the sporting venue, which on the night of the murder was packed with 56,000 fans, one of whom was his client.
Defender: Catalan's defense attorney Todd Melnick reached out to HBO and Larry David, asking to review outtakes from footage shot on May 12, 2003, at the LA ballpark Headed to death row: Catalan was charged with capital murder and, if convicted, could have been sentenced to death After reviewing the footage for some time, with David at his side in the editing bay, Melnick was stunned when he spotted a familiar figure in the corner of the screen, eating a hot dog and watching the game with his daughter. 'I jumped out of my chair and said, "Roll that tape back!"' Melnick told Court TV, as reported by CNN. 'It was him.' In January 2004, after Catalan had spent more than five months behind bars for a murder he did not commit, a judge released him, citing insufficient evidence to take him to trial. Against all odds: After reviewing the footage from the stadium, Melnick spotted his client in the stands, eating a hot dog and watching the game 'This experience was a nightmare,' he told Courttv.com. 'It was the worst time of my life.' Catalan later sued the City of Los Angeles and the LAPD, claiming false imprisonment, misconduct and defamation of character, and in March 2007 he reached a $320,000 settlement in the civil case. A year later, Jose Ledesma, a member of the notorious Vineland Boyz street gang, was sentenced to life in prison after pleading guilty to a slew of crimes, among them ordering the murder of Martha Puebla, who was shot dead 10 days before she was scheduled to testify against him, reported Mercury News in 2008. Prior to his arrest, Juan Catalan had never watched Curb Your Enthusiasm, but after his release from jail, he said he became an avid fan of the series. As for David, he quipped at the time that he was quitting his show 'to devote the rest of my life to freeing those unjustly incarcerated.' Curb Your Enthusiasm returns to HBO for Season 9 on September 16. Catalan is seen in the trailer for Netflix's Long Shot, standing in the stands at Dodger Stadium more than a dozen years after his arrest.
First, I will explain the reason why I learn German. I have always wanted to learn foreign languages other than English, as I found learning languages is very interesting for me. It is not just learning vocabulary or grammar, but I can learn new ways of seeing things and expressions. It seems that as long as people can speak English, there is no point in learning any other languages because it is common language and we can communicate with foreigners by using it in almost all cases. However, when people want to get to know the country, the locals, the cultures, etc., it is necessary to know the language. I took German lessons when I was a freshman in college just because the grammar is similar to English and I think it is easy to learn it, and also I like watching soccer games (soccer is very popular in Germany). It was a little bit hard at first but I was really interested in it, especially when speaking in German with my teacher and classmates. When I was a sophomore, I participated in a one-month language program in Munich during summer vacation. I remember those days well as it was my first time in Germany, everything was new and interesting. As I learned German in class, I started to think about studying abroad. I wanted to improve my German skills so that I could communicate with locals closer and get to know their cultures. Besides that, in Germany, there are many people who came from different countries all over the world and have various backgrounds, I wanted to get to know them and improve communication skills and problem-solving ability. Thanks to my parents, teachers and SAF, I was able to study abroad in Leipzig. What I Want To Do In Germany? First, I want to improve my German skills and be able to communicate with locals fluently. After I finish language school, I want to study German literature and culture at Leipzig University. Not only language skills, but I also want to grow mentally by creating relationships with people who have various values, ideas and cultures that I have never encountered before. My future dream job is to work for an international company that connects the countries. Therefore, it is necessary to have flexible thinking and an understanding of different cultures, rather than being confined to narrow perspectives by the stereotypes formed in life in Japan. And also I want to share my real experiences both good and bad in Leipzig, and encourage many people to study abroad as the experience will be food for the future!
https://www.safscholars.com/student-blog/off-to-a-life-changing-experience-in-leipzig
Life is filled with tedious but necessary tasks. As a lazy natural, washday is high on my list. During various points in my natural life, the amount of time to cleanse, detangle, and style my hair has stretched out in front of me like a vast, empty wasteland. I’ve spent as many as eight hours in a stretch on my quest for healthy, full curls and coils. Luckily, those days are long gone. While I haven’t yet figured out how to spend, say zero time on my regimen (if anyone has this covered, please let me know), I have streamlined the process to two hours or less from start to finish. Here’s how I keep washday as simple and painless as possible. 1. SchedulingMy elaborate ritual of washing, deep conditioning, detangling, moisturizing, and styling is a monthly affair. I co-wash in between to keep sweat, dirt, and tangles at bay. I make an appointment with myself to get the job done without interruptions or distractions. Dedicating time for your hair is half the battle. 2. Detangling Before beginning my shampoo, I saturate my hair with a sexy cocktail of oil, cheap conditioner (Tresemme Naturals Nourishing Moisture Conditioner for the win), and gently remove any knots, mats, and tangles. I never attempt to detangle without first saturating my strands. As a tender headed American, combing my hair dry has never brought me anything but headaches and tears. After quickly running my fingers through my curls, I divide my hair into four sections to prepare for washing. 3. Cleansing in sectionsIt might seem like cleansing your hair in multiple sections would add time to a washday regimen, but I’ve found that parting my tresses into four quadrants cuts down on both the time and tangles. After dousing my head with water, I use a sulfate-free clarifying shampoo such as SheaMoisture African Black Soap Deep Cleansing Shampoo. I apply the shampoo to my scalp and work it in with my fingers. I then massage the shampoo down the length of each twist without unraveling, and rinse. This takes approximately 10 minutes. 4. Conditioner detangling (30 minutes) I gently smooth my conditioner or deep treatment (try the Karen's Body Beautiful Luscious Locks Hair Mask) down the freshly washed twists, then untangle them to get into the nitty-gritty of detangling. If I need more conditioner, I slather it on. I’m definitely not stingy with my conditioner, because it’s the easiest way for me to detangle my hair. I divide each twist into two separate parts and gently work my way backwards, from ends to the roots with a comb. After the hair is completely detangled, I cover my head with a shower cap and prep for the next steps while I let the deep conditioner penetrate. During the late fall and winter, I sit under my steamer for an extra moisture boost. 5. Cool Water Rinsing (<5 minutes)I finish off my wash process by rinsing with cool or cold water, which closes the pores, blocks dirt, and smoothes cuticles. It also seals in all the fabulous moisture from the conditioning treatment. A smoother, flatter cuticle reduces frizz and potential tangles due to friction. 6. T-Shirt Drying (<5 minutes)Using an old t-shirt to squeeze (not blot) the excess water from my curls dramatically cuts down my total drying time. Best of all, it keeps my hair smooth and frizz free for styling. 7. Moisturizing and Styling (45 minutes) I add a leave in (currently loving Camille Rose Naturals Moisture Milk) while styling, as kind of a two-in-one step. 
I section my hair into five or six parts, apply the leave-in from root to tip, and then twist or braid as desired. My go-to style is two strand twists, which I braid at the root for more stability. There it is, a wash day regimen that doesn’t actually take all day. What’s your secret to a fast hair care regimen? Share your secrets below.
https://www.naturallycurly.com/curlreading/curl-products/lazy-natural-wash-day-routine
Q: Undefined behavior when exceeding 64 bits I have written a function that converts a decimal number to a binary number. I enter my decimal number as a long long int. It works fine with small numbers, but my task is to determine how the computer handles overflow, so when I enter (2^63) - 1 the function outputs that the decimal value is 9223372036854775808 and in binary it is equal to -954437177. When I input 2^63, which is a value a 64-bit machine can't hold, I get warnings that the integer constant is so large that it is unsigned and that the decimal constant is unsigned only in ISO C90, and the output of the decimal value is negative 2^63 and the binary number is 0. I'm using gcc as a compiler. Is that outcome correct? The code is provided below:

    #include <iostream>
    #include <sstream>
    using namespace std;

    int main()
    {
        long long int answer;
        long long dec;
        string binNum;
        stringstream ss;

        cout << "Enter the decimal to be converted:" << endl;
        cin >> dec;
        cout << "The dec number is: " << dec << endl;

        while (dec > 0)
        {
            answer = dec % 2;
            dec = dec / 2;
            ss << answer;
            binNum = ss.str();
        }

        cout << "The binary of the given number is: ";
        for (int i = sizeof(binNum); i >= 0; i--)
        {
            cout << binNum[i];
        }
        return 0;
    }

A: First, “on a 64-bit computer” is meaningless: long long is guaranteed at least 64 bits regardless of computer. If you could press a modern C++ compiler onto a Commodore 64 or a Sinclair ZX80, or for that matter a KIM-1, a long long would still be at least 64 bits. This is a machine-independent guarantee given by the C++ standard. Secondly, specifying a too-large value is not the same as “overflow”. The only thing that makes this question a little bit interesting is that there is a difference, and that the standard treats these two cases differently. For the case of initialization of a signed integer with an integer value, a conversion is performed if necessary, with implementation-defined effect if the value cannot be represented, … C++11 §4.7/3: “If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined”, while for the case of e.g. a multiplication that produces a value that cannot be represented by the argument type, the effect is undefined (e.g., might even crash) … C++11 §5/4: “If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.” Regarding the code: I only discovered it after writing the above, but it does look like it will necessarily produce overflow (i.e. Undefined Behavior) for a sufficiently large number. Put your digits in a vector or string. Note that you can also just use a bitset to display the binary digits. Oh, the KIM-1. Not many are familiar with it, so here's a photo: It was, reportedly, very nice, in spite of the somewhat restricted keyboard.
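A minimal sketch in the spirit of the answer's suggestion (collect the digits in a string, or just print a bitset); this is my own illustration, not code from the original post. Converting the input to unsigned long long keeps the arithmetic well defined even for negative or "too large" inputs, and building the string most-significant-digit-first avoids the reversed, out-of-bounds printing in the question's loop.

    #include <bitset>
    #include <iostream>
    #include <string>

    int main() {
        long long dec;
        std::cout << "Enter the decimal to be converted: ";
        std::cin >> dec;

        // Work on the unsigned bit pattern; unsigned arithmetic never overflows
        // in the undefined-behavior sense.
        unsigned long long u = static_cast<unsigned long long>(dec);

        std::string bin;
        do {
            // Prepend the next digit so the string ends up most-significant-first.
            bin.insert(bin.begin(), static_cast<char>('0' + (u % 2)));
            u /= 2;
        } while (u > 0);

        std::cout << "Binary (minimal digits): " << bin << '\n';
        // Alternative mentioned in the answer: print the full 64-bit pattern.
        std::cout << "Binary (all 64 bits):    "
                  << std::bitset<64>(static_cast<unsigned long long>(dec)) << '\n';
        return 0;
    }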
Note: Materials are for educational purposes only. Students can illustrate a type of good or service that they will need when they grow up. In addition, they could write the definition of a good. As a class, each student could bring in a picture from a magazine or newspaper of a good or service and create a collage of goods or services that they think will exist when they are grown. Each student would be responsible for identifying one good or service.
http://www.sycamorehistory.org/education/second-grade/
Hi everyone! My name is Peter Smyth and I am a final year Music and French student in Trinity College, Dublin. This year I am happy to have been selected to work with 100minds to raise funds for Temple Street Children's University Hospital. Set up in 2013, 100minds is a programme which challenges 100 college students throughout Ireland to each raise 1,000 euro in 100 days. The hope is to raise at least 100,000 euro for Temple Street. This year the money raised will be used to purchase 8 new nasopharyngeal endoscopes as well as a VuMax HD A/B Ultrasound Scanner. This equipment is not cheap and can be potentially lifesaving for some children. The rest of the money raised will go towards family support and the Play Therapy Department. These two resources are fundamental in providing care for the children as well as their families during their stay and afterwards. Many of us are lucky to say we have never had to be in this circumstance, but for the unfortunate few who do, we should do everything in our power to help them. So over the next 100 days I will be updating my page about events and fundraisers I'll be hosting, but in the meantime feel free to donate online and help me towards my goal. Even the smallest amount can make a huge difference and change the life of a sick child or their family. Please feel free to contact me at [email protected] if you would like to help out or have more questions regarding the programme; I will be more than happy to help.
https://www.100minds.org/campaigns/2018/participants/peter-smyth
1.4 Describing shapes When you are describing a geometrical figure or shape, you often need to refer to a particular line or angle on the diagram, so others know what you are referring to. This can be done by labelling the diagram with letters. For example, Figure 10 shows a triangle labelled clockwise at the corners with A, B, C. This is then known as triangle ABC, in which the longest side is AB and the angle ACB is a right angle. ∠ACB is the angle formed by the lines AC and CB. The point where two lines meet is known as a vertex (the plural is vertices). So A, B and C are vertices of the triangle. Note that you can use the shorthand notation '△ABC' for 'the triangle ABC' if you wish. There is a lot of new maths vocabulary in these last few sections, so you might find it useful to make a note of these terms to refer back to when completing this next activity, or for this week's quiz and the badged quiz in Week 8.
Activity 1 What can you see? Look at the image below and then answer the following questions using the letters shown.
a. Which sets of lines appear to be parallel?
Answer: AB is parallel to DC. AD is parallel to GI and BC. AI is parallel to EH. BD is parallel to HJ.
b. Which lines are perpendicular?
Answer: AB is perpendicular to AD and to BC. DC is perpendicular to AD and BC. EH is perpendicular to DB and HJ. AI is perpendicular to HJ and DB.
c. How many triangles can you see?
Answer:
d. What other shapes can you see?
Answer: The parallelogram, GBJI. The squares, ABCD and EFIH. The trapeziums, DHIF, EHIG, DBJH, BJHE and DGIH.
This completes your work on defining shapes and how to label them in order to describe them clearly to others. The next section looks at the different ways of measuring shapes.
https://www.open.edu/openlearn/ocw/mod/oucontent/view.php?id=19188&section=1.4
before its short five-month Mercosul presidency ends in December. End Summary. 2. (SBU) At various times in the past, Brazil and Argentina have been perhaps only one or two steps away from dollarizing their economies (either de jure or de facto). However, in 2003 the shortage of dollars in both markets pushed the two governments in the opposite direction, i.e., towards greater reliance on local currencies. That year the Sao Paulo Futures and Mercantile Exchange (BMF) proposed to then-Argentine Central Bank President Alfonso Prat Gay the idea of real/peso denominated trade. However, negotiations between Brasilia and Buenos Aires tailed off in 2004 and later stalled completely in 2005 when former Brazilian Finance Minister Antonio Palocci became enmeshed in unrelated scandals. Now current FinMin Mantega and his Argentine counterpart have revived the idea. 3. (SBU) In 2005, Brasil exported US$9.9 billion worth of goods to Argentina, while Argentina sent US$6.2 billion to Brazil. Under the contemplated local currency mechanism, payment for these transactions would be made in reais and pesos. Proponents argue that such a move would strengthen both countries by: a) making the two currencies more widely accepted, and b) lowering operational costs. To the extent that trade flows match, reliance on reais/pesos to make payments would be foreign exchange neutral as the respective Central Banks and currency futures trading via the BMF could be used as a conduit to allow exporters to recycle the local currency to importers. Still unanswered, though, is the question of whether the two central banks would purchase businesses' excess pesos or reais. Given that Brazil over recent years has run a consistent trade surplus with Argentina, the BCB could expect to have to purchase pesos from Brazilian businesses, presumably for immediate resale to the Argentine Central Bank. And if the BCB did so, what currency would it use to effectuate payment? 4. (SBU) An advisor to the Board of the BCB told us that the idea echoes in many ways currency settlements mechanisms created in the 1970s (and then abandoned in the early 1980's) to facilitate trade between hard-currency starved countries. In those arrangements, the central banks served as clearing houses, purchasing the foreign currency (zlotys, pesos, etc) from local exporters using local currency (e.g. Reais) and then settling accounts quarterly or so with the counterpart central bank. The central banks bore the exchange rate risk until the settlement took place and any surplus that one side had was paid in hard currency. According to our BCB contact, the arrangements collapsed in large part because the central banks could not continue to bear the exchange rate risk as more and more countries liberalized capital movements and moved to flexible or floating exchange rates. While overcoming this problem would be the biggest challenge for the renewal of such local currency settlement mechanisms, the BCB advisor noted that it could be done to the extent that the currency risk was hedged immediately (by the exporter or importer) using BMF futures contracts. BRASILIA 00001655 002 OF 002 5. (SBU) Argentine diplomats tell us that, based upon their conversations with GOA Central Bank staffers, the idea of conducting bilateral trade in reais/pesos is one that is not yet ripe. Notwithstanding Miceli's initial contact with Mantega, the true demandeur on this issue, they say, is Brazil. 
The BMF, they note, would gain by selling peso/real currency futures to businesses as might Brazilian small and medium-sized enterprises which sometimes find the cost of conducting dollar-based transactions, which entails two currency trades (from Reals to dollars and then to pesos or vice versa), too expensive. However, in the short-term Argentine exporters want to keep the dollars they receive from Brazilian buyers and thus may look at the scheme askance. Meanwhile, a technical-level meeting between staff of the two countries' central banks is scheduled for August 14 in Sao Paulo for what a BCB contact said are "very preliminary" discussions. According to our contacts, no concrete proposals are likely to be floated; instead, the two sides will take the opportunity to "brainstorm." 6. (SBU) Notwithstanding the fact that bilateral Brazil-Argentina local currency trade may not be ready for prime-time, we were told that Brazil intends to move further and propose that all Mercosul transactions be conducted in local currency. Our Argentine Embassy contacts speculated that Brasilia was charging forward on this because it wanted to local currency trade to be one of its highlight achievements during its current 5-month Mercosul presidency, which terminates at the end of 2006. 7. (SBU) Comment. While local currency trade within Mercosul is a laudable goal, post does not see it coming to pass anytime soon. The lion's share of commerce is between Brazil and Argentina. If those two countries can't agree on the concept, it will be all the more difficult to forge consensus with Uruguay, Paraguay and Venezuela. End Comment.
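As a rough numerical illustration of the clearing-house mechanics sketched in paragraphs 3 and 4, here is a worked example of my own using the cable's 2005 trade figures (it is not text from the cable, and it simplifies away timing, exchange-rate movements, and hedging): if bilateral payments were made in reais and pesos and the two central banks netted positions at settlement, offsetting flows would cancel and only the trade imbalance would need to be covered in hard currency.

    #include <iostream>

    int main() {
        // 2005 bilateral trade, in billions of US dollars (figures from para 3).
        const double brazilExportsToArgentina = 9.9;
        const double argentinaExportsToBrazil = 6.2;

        // Flows that offset each other can be cleared entirely in local currency.
        const double cleared = (brazilExportsToArgentina < argentinaExportsToBrazil)
                                   ? brazilExportsToArgentina
                                   : argentinaExportsToBrazil;

        // The surplus left over is what one central bank would have to settle
        // in hard currency, as in the 1970s arrangements described above.
        const double hardCurrencySettlement =
            brazilExportsToArgentina - argentinaExportsToBrazil;

        std::cout << "Cleared in local currency: ~$" << cleared << " billion each way\n";
        std::cout << "Net surplus settled in hard currency (owed to Brazil): ~$"
                  << hardCurrencySettlement << " billion\n";
        return 0;
    }

On these figures, roughly $6.2 billion of trade each way would clear in reais and pesos, leaving about $3.7 billion to be settled in hard currency.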
https://www.scoop.co.nz/stories/WL0608/S00382.htm
Q: Android crash when app is closed and reopened I have a really simple android app that just displays a blank white screen. When I close the app by pressing the HOME button, then try to open the app again, it crashes and I get the "Force Close" button. In Eclipse I'm getting this error, "ActivityManager: Warning: Activity not started because the current activity is being kept for the user.". How do I fix this crash? public class HelloAndroid extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); getWindow().setFlags( WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); setContentView(new Panel(this)); } class Panel extends SurfaceView implements SurfaceHolder.Callback { private TutorialThread _thread; public Panel(Context context) { super(context); // register our interest in hearing about changes to our surface SurfaceHolder holder = getHolder(); holder.addCallback(this); _thread = new TutorialThread(holder, this); setFocusable(true); } @Override public void onDraw(Canvas canvas) { // Clear the background canvas.drawColor(Color.WHITE); } @Override public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { // resize canvas here } @Override public void surfaceCreated(SurfaceHolder holder) { _thread.setRunning(true); _thread.start(); } @Override public void surfaceDestroyed(SurfaceHolder holder) { // simply copied from sample application LunarLander: // we have to tell thread to shut down & wait for it to finish, or else // it might touch the Surface after we return and explode boolean retry = true; _thread.setRunning(false); while (retry) { try { _thread.join(); retry = false; } catch (InterruptedException e) { // we will try it again and again... 
} } } } class TutorialThread extends Thread { private SurfaceHolder _surfaceHolder; private Panel _panel; private boolean _run = false; public TutorialThread(SurfaceHolder surfaceHolder, Panel panel) { _surfaceHolder = surfaceHolder; _panel = panel; } public void setRunning(boolean run) { _run = run; } @Override public void run() { Canvas c; while (_run) { c = null; try { c = _surfaceHolder.lockCanvas(null); synchronized (_surfaceHolder) { _panel.onDraw(c); } } finally { // do this in a finally so that if an exception is thrown // during the above, we don't leave the Surface in an // inconsistent state if (c != null) { _surfaceHolder.unlockCanvasAndPost(c); } } } } } } Adding the LogCat 03-15 15:36:05.579: INFO/AndroidRuntime(4441): NOTE: attach of thread 'Binder Thread #2' failed 03-15 15:36:05.719: DEBUG/AndroidRuntime(4449): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 03-15 15:36:05.719: DEBUG/AndroidRuntime(4449): CheckJNI is OFF 03-15 15:36:05.719: DEBUG/dalvikvm(4449): creating instr width table 03-15 15:36:05.759: DEBUG/AndroidRuntime(4449): --- registering native functions --- 03-15 15:36:05.969: INFO/ActivityManager(1294): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.helloandroid/.HelloAndroid } 03-15 15:36:05.979: DEBUG/Launcher(1371): onPause+ 03-15 15:36:05.979: DEBUG/Launcher.DragController(1371): +endDrag: false 03-15 15:36:05.979: DEBUG/Launcher.DragController(1371): mDragging == false 03-15 15:36:05.979: DEBUG/Launcher.DragController(1371): -endDrag: false 03-15 15:36:05.979: DEBUG/Launcher(1371): onPause- 03-15 15:36:05.999: DEBUG/AndroidRuntime(4428): Shutting down VM 03-15 15:36:05.999: DEBUG/AndroidRuntime(4449): Shutting down VM 03-15 15:36:05.999: WARN/dalvikvm(4428): threadid=1: thread exiting with uncaught exception (group=0x4001d7e0) 03-15 15:36:06.009: DEBUG/dalvikvm(4449): Debugger has detached; object registry had 1 entries 03-15 15:36:06.009: INFO/AndroidRuntime(4449): NOTE: attach of thread 'Binder Thread #3' failed 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): FATAL EXCEPTION: main 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): java.lang.IllegalThreadStateException: Thread already started. 
03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at java.lang.Thread.start(Thread.java:1322) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at com.example.helloandroid.HelloAndroid$Panel.surfaceCreated(HelloAndroid.java:55) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.SurfaceView.updateWindow(SurfaceView.java:538) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.SurfaceView.onWindowVisibilityChanged(SurfaceView.java:206) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.View.dispatchWindowVisibilityChanged(View.java:3888) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.ViewGroup.dispatchWindowVisibilityChanged(ViewGroup.java:725) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.ViewGroup.dispatchWindowVisibilityChanged(ViewGroup.java:725) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.ViewRoot.performTraversals(ViewRoot.java:748) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.view.ViewRoot.handleMessage(ViewRoot.java:1737) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.os.Handler.dispatchMessage(Handler.java:99) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.os.Looper.loop(Looper.java:123) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at android.app.ActivityThread.main(ActivityThread.java:4627) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at java.lang.reflect.Method.invokeNative(Native Method) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at java.lang.reflect.Method.invoke(Method.java:521) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 03-15 15:36:06.029: ERROR/AndroidRuntime(4428): at dalvik.system.NativeStart.main(Native Method) 03-15 15:36:06.039: WARN/ActivityManager(1294): Force finishing activity com.example.helloandroid/.HelloAndroid 03-15 15:36:06.541: WARN/ActivityManager(1294): Activity pause timeout for HistoryRecord{450300c0 com.example.helloandroid/.HelloAndroid} 03-15 15:36:06.549: DEBUG/Launcher(1371): onResume+ 03-15 15:36:06.549: DEBUG/Launcher.DragController(1371): +endDrag: false 03-15 15:36:06.549: DEBUG/Launcher.DragController(1371): mDragging == false 03-15 15:36:06.549: DEBUG/Launcher.DragController(1371): -endDrag: false 03-15 15:36:06.549: DEBUG/Launcher(1371): onResume- 03-15 15:36:08.645: ERROR/KINETO(1370): KLOG0C3- xmk_QueryOSQueue SDL Queue empty : WAIT_FOREVER A: I've answered a question like this here. The error you're getting is probably caused by your Thread (without seeing the full Logcat it's hard to tell though). You're starting it every time the surface is created, which will make your application crash because you can't call Thread.start() twice. Look at my link above for a more in-depth description of the problem and how you should solve it. Since my explanation wasn't enough I will post the whole solution: Inside your Runnable/Thread: private Object mPauseLock = new Object(); private boolean mPaused; // Constructor stuff. // This should be after your drawing/update code inside your thread's run() code. synchronized (mPauseLock) { while (mPaused) { try { mPauseLock.wait(); } catch (InterruptedException e) { } } } // Two methods for your Runnable/Thread class to manage the thread properly. 
public void onPause() { synchronized (mPauseLock) { mPaused = true; } } public void onResume() { synchronized (mPauseLock) { mPaused = false; mPauseLock.notifyAll(); } } In your SurfaceView class: private boolean mGameIsRunning; @Override public void surfaceCreated(SurfaceHolder holder) { // Your own start method. start(); } public void start() { if (!mGameIsRunning) { thread.start(); mGameIsRunning = true; } else { thread.onResume(); } } A: The solution below was tested. The code checks thread state, and if terminated creates a new one. No crashes, the only problem I can see is the game state is not saved so basically getting back from a HOME key press, starts the game again. p.s. remember to pass through the context from Lunarview view and set to mContextLunarView. Hope this helps. These forums are awesome. Keep it up. public void surfaceCreated(SurfaceHolder holder) { // start the thread here so that we don't busy-wait in run() // waiting for the surface to be created if(thread.getState() == Thread.State.TERMINATED) { //LunarView Thread state TERMINATED..make new...under CheckCreateThread thread = new LunarThread(holder, mContextLunarView, new Handler() { @Override public void handleMessage(Message m) { mStatusText.setVisibility(m.getData().getInt("viz")); mStatusText.setText(m.getData().getString("text")); } }); } thread.setRunning(true); thread.start(); }
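A minimal consolidated sketch of the fix discussed in both answers above, reusing the Panel/TutorialThread names from the question (the exact structure is an assumption, not the poster's final code): a Java Thread can only be started once, so the surface callbacks should stop and discard the thread in surfaceDestroyed() and create a fresh one whenever the surface is created again.

```java
// Sketch only: surfaceCreated()/surfaceDestroyed() pair that avoids the
// IllegalThreadStateException by never calling start() twice on the same Thread.
@Override
public void surfaceCreated(SurfaceHolder holder) {
    // After HOME + relaunch the old thread is TERMINATED, so build a new one.
    if (_thread == null || _thread.getState() == Thread.State.TERMINATED) {
        _thread = new TutorialThread(getHolder(), this);
    }
    _thread.setRunning(true);
    _thread.start();
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    boolean retry = true;
    _thread.setRunning(false);      // ask the drawing loop to exit
    while (retry) {
        try {
            _thread.join();         // wait until run() has returned
            retry = false;
        } catch (InterruptedException e) {
            // keep retrying the join
        }
    }
    // the thread is now terminated; surfaceCreated() will create a new one next time
}
```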
015? 5 What is the units digit of 121986? 6 What is the hundreds digit of 83378? 3 What is the units digit of 1908? 8 What is the units digit of 227? 7 What is the thousands digit of 32090? 2 What is the tens digit of 111501? 0 What is the units digit of 982? 2 What is the thousands digit of 37974? 7 What is the units digit of 3713? 3 What is the ten thousands digit of 38368? 3 What is the thousands digit of 17248? 7 What is the hundreds digit of 258? 2 What is the thousands digit of 12050? 2 What is the units digit of 86905? 5 What is the thousands digit of 4968? 4 What is the tens digit of 286? 8 What is the tens digit of 131276? 7 What is the thousands digit of 6283? 6 What is the hundreds digit of 3719? 7 What is the units digit of 512? 2 What is the tens digit of 2716? 1 What is the hundreds digit of 12040? 0 What is the hundreds digit of 10839? 8 What is the units digit of 19063? 3 What is the tens digit of 27062? 6 What is the tens digit of 1048? 4 What is the tens digit of 67597? 9 What is the units digit of 10539? 9 What is the hundreds digit of 368? 3 What is the units digit of 4483? 3 What is the tens digit of 193602? 0 What is the thousands digit of 1512? 1 What is the units digit of 82? 2 What is the tens digit of 5457? 5 What is the thousands digit of 7984? 7 What is the units digit of 750? 0 What is the tens digit of 666? 6 What is the tens digit of 21020? 2 What is the tens digit of 2267? 6 What is the units digit of 1913? 3 What is the tens digit of 73? 7 What is the thousands digit of 4516? 4 What is the ten thousands digit of 112760? 1 What is the hundreds digit of 1171? 1 What is the units digit of 17301? 1 What is the ten thousands digit of 35610? 3 What is the tens digit of 34615? 1 What is the thousands digit of 34408? 4 What is the tens digit of 23365? 6 What is the thousands digit of 1698? 1 What is the thousands digit of 18174? 8 What is the thousands digit of 3244? 3 What is the hundreds digit of 1863? 8 What is the tens digit of 96? 9 What is the tens digit of 3578? 7 What is the units digit of 5? 5 What is the hundreds digit of 208? 2 What is the thousands digit of 6726? 6 What is the hundreds digit of 983? 9 What is the tens digit of 5369? 6 What is the thousands digit of 2616? 2 What is the units digit of 25598? 8 What is the thousands digit of 109885? 9 What is the hundreds digit of 6139? 1 What is the hundreds digit of 626? 6 What is the tens digit of 8210? 1 What is the tens digit of 129128? 2 What is the hundreds digit of 1571? 5 What is the units digit of 7606? 6 What is the units digit of 489? 9 What is the hundreds digit of 81354? 3 What is the hundreds digit of 4222? 2 What is the thousands digit of 1336? 1 What is the thousands digit of 1606? 1 What is the units digit of 2606? 6 What is the hundreds digit of 1042? 0 What is the tens digit of 1685? 8 What is the tens digit of 650? 5 What is the thousands digit of 52130? 2 What is the units digit of 18278? 8 What is the ten thousands digit of 15221? 1 What is the units digit of 2319? 9 What is the units digit of 18421? 1 What is the tens digit of 373? 7 What is the tens digit of 118? 1 What is the hundreds digit of 931? 9 What is the hundreds digit of 5660? 6 What is the hundreds digit of 29586? 5 What is the thousands digit of 3905? 3 What is the hundreds digit of 3694? 6 What is the tens digit of 4073? 7 What is the units digit of 3403? 3 What is the units digit of 420? 0 What is the tens digit of 35490? 9 What is the ten thousands digit of 40963? 4 What is the thousands digit of 4208? 
4 What is the units digit of 2323? 3 What is the units digit of 691? 1 What is the hundreds digit of 3682? 6 What is the hundreds digit of 133315? 3 What is the hundreds digit of 106? 1 What is the hundreds digit of 14335? 3 What is the tens digit of 37039? 3 What is the units digit of 929? 9 What is the tens digit of 40? 4 What is the units digit of 1019? 9 What is the units digit of 1724? 4 What is the thousands digit of 7582? 7 What is the units digit of 38778? 8 What is the tens digit of 2735? 3 What is the units digit of 2047? 7 What is the tens digit of 22344? 4 What is the units digit of 785? 5 What is the hundreds digit of 83069? 0 What is the hundreds digit of 1536? 5 What is the tens digit of 3369? 6 What is the hundreds digit of 857? 8 What is the hundreds digit of 1407? 4 What is the tens digit of 655? 5 What is the hundreds digit of 8634? 6 What is the tens digit of 8079? 7 What is the tens digit of 46116? 1 What is the tens digit of 5128? 2 What is the units digit of 958? 8 What is the units digit of 6188? 8 What is the tens digit of 71686? 8 What is the tens digit of 1318? 1 What is the tens digit of 458? 5 What is the ten thousands digit of 18587? 1 What is the hundreds digit of 6945? 9 What is the units digit of 856? 6 What is the units digit of 58096? 6 What is the units digit of 201? 1 What is the units digit of 9500? 0 What is the units digit of 40500? 0 What is the thousands digit of 36815? 6 What is the hundreds digit of 689? 6 What is the tens digit of 717? 1 What is the hundreds digit of 2765? 7 What is the units digit of 143? 3 What is the thousands digit of 2420? 2 What is the units digit of 922? 2 What is the hundreds digit of 1318? 3 What is the thousands digit of 15183? 5 What is the hundreds digit of 2226? 2 What is the units digit of 2499? 9 What is the hundreds digit of 11510? 5 What is the tens digit of 864? 6 What is the hundreds digit of 23662? 6 What is the units digit of 919? 9 What is the ten thousands digit of 21273? 2 What is the hundreds digit of 2217? 2 What is the units digit of 592? 2 What is the hundreds digit of 9076? 0 What is the units digit of 81476? 6 What is the tens digit of 2805? 0 What is the hundreds digit of 38552? 5 What is the tens digit of 3049? 4 What is the tens digit of 1579? 7 What is the units digit of 1307? 7 What is the hundreds digit of 24137? 1 What is the tens digit of 629? 2 What is the thousands digit of 11054? 1 What is the hundreds digit of 29357? 3 What is the hundreds digit of 3835? 8 What is the hundreds digit of 1711? 7 What is the hundreds digit of 1401? 4 What is the units digit of 970? 0 What is the units digit of 42869? 9 What is the hundreds digit of 558? 5 What is the ten thousands digit of 20721? 2 What is the hundreds digit of 8019? 0 What is the units digit of 2392? 2 What is the hundreds digit of 14620? 6 What is the units digit of 871? 1 What is the tens digit of 6011? 1 What is the tens digit of 937? 3 What is the hundreds digit of 973? 9 What is the thousands digit of 27150? 7 What is the thousands digit of 2197? 2 What is the ten thousands digit of 33463? 3 What is the units digit of 769? 9 What is the units digit of 2725? 5 What is the thousands digit of 63718? 3 What is the hundreds digit of 103? 1 What is the units digit of 6136? 6 What is the hundreds digit of 3231? 2 What is the tens digit of 74639? 3 What is the units digit of 2614? 4 What is the units digit of 4943? 3 What is the tens digit of 3831? 3 What is the tens digit of 118082? 8 What is the units digit of 1162? 
2 What is the units digit of 89? 9 What is the thousands digit of 3811? 3 What is the units digit of 699? 9 What is the units digit of 4763? 3 What is the units digit of 70881? 1 What is the ten thousands digit of 13854? 1 What is the hundreds digit of 76950? 9 What is the tens digit of 828? 2 What is the thousands digit of 138465? 8 What is the units digit of 693? 3 What is the units digit of 142? 2 What is the tens digit of 8229? 2 What is the thousands digit of 2581? 2 What is the ten thousands digit of 23654? 2 What is the hundreds digit of 837? 8 What is the tens digit of 12622? 2 What is the units digit of 165169? 9 What is the hundreds digit of 16565? 5 What is the hundreds digit of 244? 2 What is the thousands digit of 26460? 6 What is the hundreds digit of 1226? 2 What is the hundreds digit of 1850? 8 What is the thousands digit of 11742? 1 What is the units digit of 109? 9 What is the tens digit of 1483? 8 What is the thousands digit of 1076? 1 What is the units digit of 261? 1 What is the units digit of 128? 8 What is the hundreds digit of 46873? 8 What is the tens digit of 7336? 3 What is the te
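Every one of these drills reduces to the same integer arithmetic: repeatedly divide by 10 to shift the wanted place into the units position, then take the remainder mod 10. A small illustrative helper (hypothetical, not from the source):

```java
// Illustrative only: returns the decimal digit of n at the given place
// (0 = units, 1 = tens, 2 = hundreds, 3 = thousands, 4 = ten thousands, ...).
static int digitAt(long n, int place) {
    long shifted = Math.abs(n);
    for (int i = 0; i < place; i++) {
        shifted /= 10;               // drop one trailing digit per step
    }
    return (int) (shifted % 10);     // the digit now sitting in the units position
}

// Example: digitAt(38368, 4) == 3, matching "the ten thousands digit of 38368" above.
```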
Ideas For Using Wood Pallets In Bathroom or Toilet Wood pallets are a common type of shipping platform used to transport goods and materials. They are typically made from either hardwood or softwood, and are designed to be strong, durable, and easy to handle. In addition to being used for transportation, pallets can also be used for other purposes such as construction, DIY projects, or firewood, as mentioned earlier. But it’s also important to consider that most pallets are used only once and are then discarded, which can lead to a significant amount of waste. Reusing or recycling wood pallets can help to conserve resources and reduce the amount of waste generated. There are several ways that you can use wood pallets in a bathroom or toilet. Here are a few ideas:
- Shelving: You can disassemble a wood pallet and use the planks to create floating shelves for your bathroom. This can be a great way to add storage and display space for items like towels, toiletries, and decor.
- Vanity: If you’re looking to add a rustic touch to your bathroom, you could build a vanity using reclaimed wood pallets. You can use the pallet planks to create the top of the vanity and the framework, then add a sink, faucet, and other hardware.
- Toilet paper holder: You can build a holder for your toilet paper rolls using a reclaimed wood pallet. This can be a simple and inexpensive way to add a rustic touch to your bathroom.
Remember to keep in mind the humidity and moisture in the bathroom; you may need to add extra coats of sealant to protect the pallets from warping or rotting. Keep in mind that if you’re going to be using wood pallets in a bathroom or toilet, it’s important to make sure that the pallets are clean and free of any chemicals or pests. Additionally, it is important to work safely with tools and consider the weight of the pallets you are handling.
https://www.woodpalletcreations.com/ideas-for-using-wood-pallets-in-bathroom-or-toilet/
Source: CoreLogic. 248 Illinbah Road, Illinbah QLD 4275 is a House, with 3 bedrooms, 1 bathroom, and 2 parking spaces. This House is estimated to be worth around $610k, with a range from $520k to $700k. The Domain property ID is UK-3446-WZ, and the Government legal property description is 2/RP169294. 248 Illinbah Road last sold 9 years ago, for $480k. It has been listed for rent since it was last purchased, indicating that it may be an investment property. It was most recently listed for rent in 2019 with an asking price of $500 per week. It was listed by Canungra Valley Real Estate for 22 days. View Street Profile for Illinbah Road, Illinbah QLD 4275. In the same street, 565 Illinbah Road, Illinbah QLD 4275 has just been advertised for sale.
https://www.domain.com.au/property-profile/248-illinbah-road-illinbah-qld-4275
Click here for tips on how to use the schedule. Please let us know if you find a bad link in the schedule by reporting it in the comments section below. Click here to return to the Science of Seasons schedule home page. |Topics:| Art: Making a Papier-mâché globe Bible: The parable of the sower Geography: continents & oceans Math: angles, using a protractor Science: soil temperature, germination, photosynthesis (basic), hydroponics, tilt of the earth |Day 1||Day 2||Day 3||Day 4||Day 5| |Books| |The Science of Seasons||Have your kids see if they can find the pages that match this week’s activities as they do them.| |The Science of Seasons Learn-and-Play Activities||p. 30 The Science of Seasons| p. 31 Cut out the pieces and store for tomorrow |p. 33 Protractor activity||p. 34-37 Papier-mâché globe project| Supplies: flour, scissors, round balloon, newspaper or thin paper, blue tempera paint, water, glue, paintbrush, scissors, continent cutouts |Let globe continue drying.||Let globe continue drying.| Review the Greek & Latin root cards. |Explore Spring||Read: Chapter 2 Green, Green, Green| Activity: Soil Warm up Supplies: 2 soil thermometers Activity: The Great Sprout Race Supplies: 2 pieces black construction paper, water, 2 clear plastic cups or glass jars, a few seeds like peas, pumpkin, or beans, science journal |Printable: seed growth (paste in order)| Design your own seed packet Video: time-lapse mung beans sprouting Video: time-lapse corn |Activity: These Seeds are All Wet!| Supplies: 2 pieces black construction paper, 2 clear plastic cups or glass jars, water, a few seeds like peas, pumpkin or beans, 1 cup potting soil, science journal 2nd option: Hydroponics experiment Supplies: 2 liter soda bottle, strip of fabric, growing media (rock, marbles, sand, Legos, shredded paper, etc.), scissors, fertilizer, plant |Notes: Day 2: Exploring Spring – The soil warm up test can be adapted for any time of the year. Just pick a sunny spot of the yard vs. a shady spot and compare soil temperatures. The activity says to use soil thermometers, but you can probably use a digital thermometer.| |Miracles on Maple Hill||Chapter 6||Chapter 7||Chapter 8||Chapter 9||Chapter 10| |Read each book on the day it’s listed:||What’s Your Angle, Pythagoras?||Great Migrations Whales| Seaworld Whales Guide grades 4-8 K-3 guide |The Greedy Triangle||Explore Earth’s Five Oceans| |Printables| |Types of angles||Angle steering puzzle| Whale lapbook – Choose items you’d like to do this week. You don’t need to do all of it. Whale vocabulary |Laser Angles| Continents and Oceans (color and label) Younger students: Label the continents |Spider web angles| Weaving a web angles Other angle worksheets: https://www.education.com/worksheets/angles/ Just put in your child’s grade on the left-hand side to get worksheets at his level.
https://guesthollow.com/science-of-the-seasons-online-curriculum-schedule/science-of-seasons-week-3/
Are we headed for a sixth mass extinction? Reference: Rothman, Daniel H. “Thresholds of Catastrophe in the Earth System.” Science Advances, vol. 3, no. 9, 2017, doi:10.1126/sciadv.1700906. A mass extinction is an event in which the world very rapidly loses a large number of its living species. You’ve probably heard of the mass extinction that occurred sixty-five million years ago, when an asteroid crashed near Mexico and led to the extinction of the dinosaurs. There have been four other mass extinctions in the last 500 million years, and each has resulted in the loss of at least 60% of living species. In a recent study, Professor Daniel Rothman at the Massachusetts Institute of Technology argues that human activities – specifically our inundating the atmosphere with carbon – may result in a sixth mass extinction. Why do mass extinctions occur? The theory of evolution tells us that organisms evolve in ways that promote success in their environment. However, if a species’ environment changes too quickly, evolution can’t keep up. For example, when dodos were discovered and subsequently hunted by humans, they were suddenly confronted with a new predator and could not adapt in time. The last dodo was seen in 1662. In order to trigger a mass extinction, a sudden change has to occur for not just one, but most living species. This requires a global catastrophe, like the asteroid that killed the dinosaurs or the volcanic eruptions that created the Siberian Traps and likely resulted in the end-Permian extinction 252 million years ago. The end-Permian extinction is also known as the “Great Dying” because 90-95% of all species were lost. All five mass extinctions that have occurred to this date are believed to have been associated with huge, rapid changes in the amount of greenhouse gas in the atmosphere. Such changes can occur due to a variety of reasons, including volcanism, increased plant activity, or an asteroid impact. Why do greenhouse gases matter? In a recent study on mass extinctions, Daniel Rothman suggests that the ocean-atmosphere system is a bit like the dodo: if you change the system slowly, it can adjust and maintain equilibrium, but if you change it too quickly, the whole system spins out of control. In the context of Rothman’s study, this “change” refers to a substantial shift in atmospheric CO2, which can have a profound effect on our climate. If atmospheric CO2 increases too rapidly, it becomes difficult for the ocean to adjust, and the earth’s climate can become unstable. Since CO2 is a greenhouse gas, it warms our climate. Oceans mitigate this effect by absorbing roughly a quarter of our CO2 emissions. The CO2 gets dissolved in the oceans, kind of like how CO2 is dissolved in seltzer. Generally speaking, if you put more CO2 in the atmosphere, then the oceans will also absorb more CO2, which protects our climate from changing too rapidly. Unfortunately, the oceans also become more acidic when they absorb CO2. Ocean acidification is dangerous for two reasons. Firstly, acidic oceans are lethal to marine wildlife like snails and coral, and secondly, the ocean might eventually become so acidic that it can no longer absorb atmospheric CO2, which means that it can no longer mitigate the effects of greenhouse warming. The ocean’s pH actually can return back to normal, but this takes thousands of years. If atmospheric CO2 increases faster than that, then the ocean will be unable to adjust in time, putting the earth at risk of ocean acidification and rapid climate change. 
Both of these can have catastrophic consequences for the earth's inhabitants, especially for those species which can't adapt to their new climate quickly enough. How much carbon is too much? Rothman's study indicates that a rapid change in atmospheric CO2 is a key component of mass extinctions in the last 500 million years. Using carbon isotope ratios of ocean-floor sediments, Rothman examined 31 different disruptions to the carbon cycle. Only those that occurred exceptionally rapidly led to mass extinctions. Rothman also used these results to deduce the amount of carbon that the oceans would have to absorb in the present day in order to match the rapid changes of past mass extinctions. His results suggest that, relative to 1850 (the Industrial Revolution), the oceans would have to pick up an additional 310 gigatons of carbon in order to set the stage for a new mass extinction. According to current estimates, our fossil fuel emissions have already added about 150 gigatons of carbon to the ocean since 1850. Scientists can estimate how much more carbon the oceans will absorb by first estimating how much CO2 we'll add to the atmosphere in the next century. When Rothman examined the IPCC carbon emissions projections for the year 2100, he found that we will stay under the 310 gigaton limit only if we are exceptionally proactive about reducing our fossil fuel use over the next century. Are we making any other rapid changes? Rothman's analysis suggests that, by 2100, we may have emitted so much CO2 that a global mass extinction will be imminent. Unfortunately, this is not the only rapid change we've caused. In The Sixth Extinction, Elizabeth Kolbert reviews several other changes we've made, and a few of these are described below. 1) Globalization. When we travel the earth, we bring animals, plants, and pathogens with us. This results in a disruption of native species' biomes, which are unprepared to deal with "invaders" like the brown tree snake. 2) Fragmentation. When we build roads through forests, each resulting section is smaller and less likely to recover from random disasters. 3) Overpopulation. As of 2016, there are 7.4 billion of us, and space for humans comes at the cost of habitats for other species. These changes result in a loss of biodiversity, which means that the total number of species is dropping. We're experiencing a loss, not just in the number of species, but also in the number of individual animals: according to the World Wildlife Fund, the number of animals has dropped by 50% in the last 40 years. At the time of that report, the cause was not yet climate change, but other human activities such as over-predation and habitat destruction.
Secondly, as discussed by Elizabeth Kolbert, the impact of human activities is not limited to climate change. Everyday aspects of our lives, including traveling, living in previously undeveloped areas, and driving on roads that fragment natural habitats, also have serious consequences for global biodiversity. In order to prevent another mass extinction, we’ll have to reduce our dependence on fossil fuels, and we may also need to reconsider how we interact with our environment in the first place.
https://envirobites.org/2017/12/05/are-we-headed-for-a-sixth-mass-extinction/
LeaperLad wakes in LatticeLand on a disk suspended above a lake of lava. Leaper spies his HeloPak on one of the disks; with it he knows he can escape this nefarious trap. The disks are quite far apart, however; without some momentum, he can only jump to an immediately adjacent disk. Once he has acquired the speed to make that jump, he can accelerate on every disk he touches. He notices the disks are laid out in a rectangular grid, with a disk on each grid point. He calculates that on each disk he can accelerate or decelerate his speed by one unit in the horizontal or vertical direction (but not both on the same disk). Alternatively, he can just maintain his speed when stepping on a disk. Thus, in a straight line, from a standing start, he can jump one unit, then two units, then three, then two, then one. Some pairs of disks are joined by walls of fire that he knows he must not touch. He can get arbitrarily close to one of these walls, but he must not touch one. Nor can he fall off the edge of the grid. How quickly can LeaperLad reach his HeloPak and stop on that disk? Input Input will have one problem per input line. The input line will contain a sequence of integers, each separated by a single space. The first two integers will be w and h, the width and height of the grid. Each of these values will be between 1 and 64, inclusive. Following that will be two integers representing the coordinates of the disk that LeaperLad wakes on. After that will be two integers representing the coordinates of the disk that the HeloPak is on. The next integer will be f, the number of fire walls. There will be six or fewer fire walls. After that will be f sets of 4 integers, representing the two coordinates of the end points of the walls. For all coordinates, the first number will be between 0 and w − 1, inclusive, and the second number will be between 0 and h − 1, inclusive. All fire walls will be at least one unit long. The HeloPak and LeaperLad will never start on the same disk, nor will either start on a disk that is on a fire wall. There will always be a way for LeaperLad to reach his HeloPak. There will be no more than 50 problems. Output For each input line, print a single integer indicating the minimal number of moves needed for LeaperLad to reach his HeloPak. Pay close attention to the first couple of examples; they clarify how moves are counted.
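A standard way to attack this kind of problem is a breadth-first search over (position, velocity) states, since each move changes the velocity by at most one unit in one axis and the number of reachable states is small. The sketch below is illustrative only: the names are made up, the fire-wall test is left as a stub, and how the final stop is counted must be checked against the sample cases mentioned in the statement.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative BFS sketch for the LeaperLad problem (not a complete accepted solution).
class LeaperLadSketch {
    // Grid is at most 64 wide/high, so speeds stay small; 12 is a safe bound.
    static final int VMAX = 12, OFF = VMAX;

    static int minMoves(int w, int h, int sx, int sy, int gx, int gy, int[][] walls) {
        boolean[][][][] seen = new boolean[w][h][2 * VMAX + 1][2 * VMAX + 1];
        Deque<int[]> queue = new ArrayDeque<>();   // entries: x, y, vx, vy, moves
        queue.add(new int[] {sx, sy, 0, 0, 0});
        seen[sx][sy][OFF][OFF] = true;
        // Per move: change speed by 1 in one axis (or keep it), then jump by the new speed.
        int[][] deltas = {{0, 0}, {1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] s = queue.poll();
            int x = s[0], y = s[1], vx = s[2], vy = s[3], d = s[4];
            // "Stopped on the HeloPak disk": at the goal with zero velocity.
            // Whether the final deceleration counts as a move must match the samples.
            if (x == gx && y == gy && vx == 0 && vy == 0) return d;
            for (int[] dv : deltas) {
                int nvx = vx + dv[0], nvy = vy + dv[1];
                if (Math.abs(nvx) > VMAX || Math.abs(nvy) > VMAX) continue;
                int nx = x + nvx, ny = y + nvy;
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;   // off the grid
                if (crossesWall(x, y, nx, ny, walls)) continue;         // touches a fire wall
                if (!seen[nx][ny][nvx + OFF][nvy + OFF]) {
                    seen[nx][ny][nvx + OFF][nvy + OFF] = true;
                    queue.add(new int[] {nx, ny, nvx, nvy, d + 1});
                }
            }
        }
        return -1; // the statement guarantees a path exists
    }

    // Stub: should return true if the straight-line jump from (x1,y1) to (x2,y2)
    // touches any fire-wall segment (2D segment intersection, including endpoints).
    static boolean crossesWall(int x1, int y1, int x2, int y2, int[][] walls) {
        return false; // assumption: real implementation needed
    }
}
```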
http://sharecode.io/sections/problem/problemset/2582
M60 Bullet Belt - Brass Nickel Tip, Black Link (POWER VIOLENCE) 019 Belt Description and Specs: Height: 2 3/4 inches. Base: 1/2 inch. Shell Length: 2 inches. Color: brass shell with copper tips. Link: black-gray link. Belt width options: 34 inches with 58 bullets. 38 inches with 65 bullets. 42 inches with 72 bullets. 46 inches with 79 bullets.
https://shop.metaldevastation.com/M60-Bullet-Belt-Brass-Nickel-Tip-Black-Link-POWER-VIOLENCE-019-BULBELT019.htm
Manchester Academies This project involves the provision of additional floor space at the University's Steve Biko Building, home of the University Students' Union. The design seeks to build on the quality of the commercial areas of the building whilst enhancing and extending the facilities for the student-facing areas, which are the essence of the Students' Union. Strong infrastructure within flexible open plan spaces allows for simplified orientation whilst providing identity at each level. This approach facilitates the opportunity for the building to evolve and reflect the transitional nature of the students. The University of Manchester Students' Union has committed funding to ensure the spaces in the extended building deliver the objectives set by the trustee board. This has been achieved through the provision of fast-casual food, coffee, grab-and-go retail, branded merchandise, and stationery outlets. Alongside the new retail spaces – and as one of Europe's largest combined gig and venue spaces – the works look to modernise and extend its performance spaces with a new purpose-built theatre. Manchester Museum The Study within Manchester Museum creates an innovative interaction space inviting the public to use the collections, tools, and resources to research their own interests or discover new ones entirely. The scheme was designed and delivered by Wilson Mason in collaboration with Ben Kelly Design, and re-imagines a forgotten space within the Grade II listed Alfred Waterhouse building. Taking advantage of the impressive existing structure, the transformation of the entire top floor created a 'think space' that incorporated design elements of the original features within a stunning new contemporary interior. "The Study is an extremely important project for Manchester Museum. It will be a wonderful and inspiring new space, which combines cutting edge research into contemporary science and human and natural history, with the widest possible access and opportunities for new thinking." Dr Nick Merriman, Director of Manchester Museum. PC Workshop The creation of a workshop for a PC repair facility provided a retail frontage for a service that was originally contained within another business. The primary focus of the project was to provide a contemporary environment from which the PC workshop could operate. Accessibility was also a key consideration within the design, creating an inclusive and accessible facility for all campus inhabitants. The fit-out was centred around the creation of a lighting system that echoed the punched tape/card programming systems of bygone computing. John Lewis Cheltenham This complex refurbishment and remodelling of John Lewis & Partners' 51st store was successfully delivered to demanding timescales from concept design through to the much-anticipated opening. Extensively adapting the former Beechwood Shopping Centre, the expansive 150,000 ft² floor area was stripped back to the building's framework to incorporate a new façade, and a complete refurbishment and fit out was carried out in line with the John Lewis branding. Detailed remodelling of the building's upper floor public car parks was also integrated into the development. The use of BIM was fundamental to the success of this project, with Wilson Mason acting as BIM Manager on behalf of the client and developing the model with the contractor's team through to completion. Cheltenham embraced the new store, and much of the immediate public vicinity was upgraded in parallel.
Russell Dean The site and existing store suffered significant flood damage during storm Desmond in 2015. This led the client to reconsider the viability of the site and commission a design to mitigate the impact of probable future flooding events. Due to the site lying within the flood plain between the river and the Rochdale Canal, appropriate flood resisting measures became an integral part of the proposals negotiated with the local authority. Discussion related to the acquisition of additional land was integral in allowing the scheme to provide the resistance required. The design developed lifted the functional floor above ground level and created retail space at first and second floor level. The ground level is occupied by car parking in order to support the retail activities. High levels of natural light and an integrated natural ventilation strategy provide a sustainable building solution on what was considered a previously undevelopable site. Burscough The project involved the fit-out of a pre-constructed shell for Booths Supermarket. Using the 3D steel fabricators IFC model and 2D CAD drawings, the existing shell was modelled in Revit. Through the use of these programmes, the team was able to accurately model the fit-out with a clear understanding of the existing structure and the junctions between old and new, hence minimising the time on site and limiting the number of unknowns in the delivery of the project. Families within the model were created to represent the standard fixtures and fittings used by the client based on supplier’s information and these families have been stored for use in future schemes. The Revit model was made accessible by the design team through the use of online storage and cloud technology, with the data provided allowing coordination between the structural, mechanical, and electrical elements of the design. The model has been exported into various 3D imaging packages for use in presentations and it has also been used to undertake sun path analysis due to the design of the existing heavily glazed façade. Retail Park Re-Brand Wilson Mason produced a design for the Crown Point Retail Park re-brand. The idea was to create an aspirational outlook for both the centre and visitors, lifting the profile of the existing buildings and consequently stimulating redevelopment in this corner of the development and the wider Crown Point retail park. The canopy soffits and structural supports were formed from timber which maintained continuity with the existing park canopy soffits. The organic materials, detailing, and form of the proposal incorporated non-vertical elements and a range of timber species to provide diversity and visual interest across the development. The stores to either side of the corner store incorporated covered colonnade spaces which encourage visitors to dwell and be drawn in to the stores. The colonnade produced a presence to this corner of Crown Point, creating a distinctive, simple, and elegant identity. The removal of the existing low-level canopy and the introduction of mezzanine level glazing visually connected the mezzanine level with the park and maximised daylight penetration from the east facing glazing. The enhanced visibility in to the stores also provided the opportunity for increased display and branding. Design Designing for discovery At Wilson Mason, we believe architecture has the power to change the world. Design goes beyond meeting people’s needs; it changes how we think and feel. 
Combining technical innovation with design excellence, we uncover inspirational, cost effective buildings that enhance the lives of those who experience them. Our rare expertise has been shaped by our well-established relationship with science and research. We hold a wealth of intelligence and experience that we apply to a range of specialist sectors including science, education, healthcare, workplace, manufacturing, and retail. By manipulating space, colour, material, sound and light we produce spaces that seamlessly blend architectural logic with architectural poetry. Our designs amplify the sensory experience, scripting the interaction between people and space.
https://www.idshowcase.co.uk/designers/william-mason/
23 January 2019 New Technology Uses Lasers to Transmit Audible Messages to Specific People Photoacoustic communication approach could send warning messages through the air without requiring a receiving device WASHINGTON — Researchers have demonstrated that a laser can transmit an audible message to a person without any type of receiver equipment. The ability to send highly targeted audio signals over the air could be used to communicate across noisy rooms or warn individuals of a dangerous situation such as an active shooter. Caption: Ryan M. Sullenberger and Charles M. Wynn developed a way to use eye- and skin-safe laser light to transmit a highly targeted audible message to a person without any type of receiver equipment. Image Credit: Massachusetts Institute of Technology’s Lincoln Laboratory In The Optical Society (OSA) journal Optics Letters, researchers from the Massachusetts Institute of Technology’s Lincoln Laboratory report using two different laser-based methods to transmit various tones, music and recorded speech at a conversational volume. “Our system can be used from some distance away to beam information directly to someone's ear,” said research team leader Charles M. Wynn. “It is the first system that uses lasers that are fully safe for the eyes and skin to localize an audible signal to a particular person in any setting.” Creating sound from air The new approaches are based on the photoacoustic effect, which occurs when a material forms sound waves after absorbing light. In this case, the researchers used water vapor in the air to absorb light and create sound. “This can work even in relatively dry conditions because there is almost always a little water in the air, especially around people,” said Wynn. “We found that we don't need a lot of water if we use a laser wavelength that is very strongly absorbed by water. This was key because the stronger absorption leads to more sound.” One of the new sound transmission methods grew from a technique called dynamic photoacoustic spectroscopy (DPAS), which the researchers previously developed for chemical detection. In the earlier work, they discovered that scanning, or sweeping, a laser beam at the speed of sound could improve chemical detection. “The speed of sound is a very special speed at which to work,” said Ryan M. Sullenberger, first author of the paper. “In this new paper, we show that sweeping a laser beam at the speed of sound at a wavelength absorbed by water can be used as an efficient way to create sound.” Caption: The researchers use water vapor in the air to absorb light and create sound. By sweeping the laser they can create an audio signal that can only be heard at a certain distance from the transmitter, allowing it to be localized to one person. Image Credit: Massachusetts Institute of Technology’s Lincoln Laboratory For the DPAS-related approach, the researchers change the length of the laser sweeps to encode different frequencies, or audible pitches, in the light. One unique aspect of this laser sweeping technique is that the signal can only be heard at a certain distance from the transmitter. This means that a message could be sent to an individual, rather than everyone who crosses the beam of light. It also opens the possibility of targeting a message to multiple individuals. Laboratory tests In the lab, the researchers showed that commercially available equipment could transmit sound to a person more than 2.5 meters away at 60 decibels using the laser sweeping technique. 
They believe that the system could be easily scaled up to longer distances. They also tested a traditional photoacoustic method that doesn't require sweeping the laser and encodes the audio message by modulating the power of the laser beam. "There are tradeoffs between the two techniques," said Sullenberger. "The traditional photoacoustics method provides sound with higher fidelity, whereas the laser sweeping provides sound with louder audio." Next, the researchers plan to demonstrate the methods outdoors at longer ranges. "We hope that this will eventually become a commercial technology," said Sullenberger. "There are a lot of exciting possibilities, and we want to develop the communication technology in ways that are useful." Paper: R. M. Sullenberger, S. Kaushik, C. M. Wynn. "Photoacoustic communications: delivering audible signals via absorption of light by atmospheric H2O," Opt. Lett., 44, 3, 622-625 (2019). DOI: https://doi.org/10.1364/OL.44.000622. About Optics Letters Optics Letters offers rapid dissemination of new results in all areas of optics with short, original, peer-reviewed communications. Optics Letters covers the latest research in optical science, including optical measurements, optical components and devices, atmospheric optics, biomedical optics, Fourier optics, integrated optics, optical processing, optoelectronics, lasers, nonlinear optics, optical storage and holography, optical coherence, polarization, quantum electronics, ultrafast optical phenomena, photonic crystals and fiber optics. About The Optical Society Founded in 1916, The Optical Society (OSA) is the leading professional organization for scientists, engineers, students and business leaders who fuel discoveries, shape real-life applications and accelerate achievements in the science of light. Through world-renowned publications, meetings and membership initiatives, OSA provides quality research, inspired interactions and dedicated resources for its extensive global network of optics and photonics experts. For more information, visit osa.org. Media Contact: [email protected]
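As a rough, back-of-the-envelope illustration of the sweep-length encoding described above (our own reading of the press release, not a formula from the paper): if the laser spot is swept at the speed of sound and the sweep repeats over a path of length L, the repetition rate, and hence the pitch a listener in the beam's path would hear, is on the order of f = c_sound / L.

```java
// Back-of-the-envelope sketch only; assumes audible pitch ~ sweep repetition rate
// when the spot moves at the speed of sound. Numbers are illustrative, not from the paper.
public class SweepLengthEstimate {
    static final double SPEED_OF_SOUND_M_PER_S = 343.0; // dry air, ~20 °C

    // Sweep path length whose repetition at the speed of sound gives the desired pitch.
    static double sweepLengthForPitch(double frequencyHz) {
        return SPEED_OF_SOUND_M_PER_S / frequencyHz;
    }

    public static void main(String[] args) {
        // Under this crude model, a 440 Hz tone corresponds to a sweep of roughly 0.78 m.
        System.out.printf("440 Hz -> %.2f m per sweep%n", sweepLengthForPitch(440.0));
    }
}
```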
Graphics Files Included: JPG Image, Vector EPS; Layered: Yes; Minimum Adobe CS Version: CS. Fried English breakfast with egg, bacon, sausages, tomatoes, coffee and orange juice. Each item is grouped on a separate layer. Vector EPS (minimum Illustrator v10, vCS). Hi-res RGB 300dpi JPG included. Chocolate Oval Presentation Box with Various Chocolates; Valentine Heart Chocolate Box with Various Chocolates; Chocolate Selection Box; Fried Breakfast in Frying Pan; Bacon Sandwich Breakfast with Coffee and Juice; Chocolate Box Chocolates; Kitchen Tablecloth Pattern Various Colours.
https://www.dondrup.com/stock-vector/41049-graphicriver-fried-breakfast-on-table-with-coffee-and-orange-ju-4690743.html
Repressed memories are those thoughts that are blocked from conscious experience due to extreme stress or trauma. When one undergoes a significant degree of anxiety or distress, the nervous system becomes hyperactive and overwhelms the brain. The brain is flooded with complex emotions and a sympathetic nervous system response. This essay will discuss factors that cause repressed memories, possible mechanisms associated with the disorder, long-term effects, and several recovery methods. Causes of Repressed Memories The source of repressed memories is subject to significant individual differences between children. The common theme associated with this disorder is extreme stress or trauma. When this feature intensifies, neurological adjustments (resulting in repression) are thought to help maintain survival (Health, 2020). The above occurrence may be caused by a variety of circumstances, including abuse. Many who have endured trauma, whether physical, psychological, or sexual, are vulnerable to repressed memories. Abuse may be persistent, such as ongoing mistreatment by a parent or guardian, or it may be an isolated incident. The suffering often overwhelms a child's psychological coping capacity, and one of the only ways of dealing with it is to force the memory out of conscious perception. Physical abuse can obscure from memory all recollections of the incident. It also leaves psychological marks, and these psychological wounds can be permanent. Physical violence can be ongoing, or it can be an occasional incident that progresses to the repression of the abusive experiences. Additionally, psychological violence could include verbal abuse or persistent harassment, which poses a risk of mental illness (Saroyan, 2015). It is worth noting that one child may not be harmed to the same degree as another. To restrain reminders of these psychological attacks, some children actively "shut down" mentally and re-organize their brains, such that they sometimes forget what they had encountered until later, when they grow to be adults. Alternatively, sexual abuse, such as rape, also causes the victim to feel depressed and humiliated to the extent that all memories of the trauma are repressed. This repression occurs because they cannot handle their past and ultimately bury the traumatic experiences beneath their consciousness. With any loss of a loved one comes grief, a natural process that is the human way of emotional healing. But all too often, this natural occurrence is delayed, distracted, or pushed away. Repressed memories may be encountered as a result of extreme grief (Saroyan, 2015). For instance, losing a close relative or a partner can leave someone so traumatized that they cannot function normally. Subsequently, the memories accompanying the grief are hidden beneath conscious experience and are "repressed." For children, the experience caused by the loss is extremely overwhelming, to the extent of making the child withdraw from any social interaction. Many who have experienced significant pressure caused by stress may find that it accumulates, approaches a peak, and inevitably leads to a nervous breakdown. There may be a few traumatic incidents that cause a nervous breakdown, but in many other situations, it may be the product of poor self-care (Kunst, Saan, Bollen, & Kuijpers, 2017). Irrespective of the trigger of one's high stress and mental breakdown, one might find that memories may be lost due to a fight-or-flight reaction.
Long Term effects of Repressed Memories Children who have experienced violence are more likely to experience reduced educational achievement and are addicted to drugs and alcohol. They also develop long-term issues with physical and mental well-being, including depression. Childhood abuse also affects an individual’s social and emotional development, as it makes it difficult for the victim to connect and build social networks. As a result, the person tends to lead a lonely and secluded life. Consequently, they appear to trust no one and they create a pessimistic attitude towards society (Saroyan, 2015). Most of them resort to criminal life, as they ‘seek to revenge’ the evils that were committed to them in their childhood. Recovery from Repressed Memories It is always difficult to recover from repressed memories, its recovery should be pursued only when the victim can handle the memories and emotions that follow those experiences. Only under the guidance of a highly qualified psychotherapist or counselor (who can interact with the victim and understand their condition) should the procedure be attempted (Saroyan, 2015). While the individual may recover on their own, they may not be capable of managing emotional upheavals that might occur concurrently. In most instances, repressed memories can cause significant emotional reactions. Therefore, the survivor should take some time to develop a healthy relationship with their healthcare attendant before heading directly into the recovery process. Given that one is on the trajectory of finding repressed memories and recovering from a traumatic incident, one might need some social support. If the victim has some close friends, relatives, and family whom they can trust, they might need some additional social support networks during this period (Health, 2020). It helps if they can speak to them about the repressed memories they have had. Even though social support could never substitute a professional psychotherapist, it may act as a valuable supplement. Conclusion Several lessons have to be learned from the above-gathered information. It is only logical to conclude that children are more affected by these mental disorders because of their vulnerability. In addition, the core of the debate on repressed memories of any type of abuse during one’s childhood leads to the assumption that both real and genuine memories from a person’s past and fabricated memories exist. Therefore, a repressed memory condition is a severe disorder that can cause permanent damage to one’s well-being and should be handled with a professional and at the earliest time possible. References Health, M. (2020). Repressed memories: Causes, mechanisms, & coping strategies. Web. Kunst, M., Saan, M., Bollen, L., & Kuijpers, K. (2017). Secondary traumatic stress and secondary posttraumatic growth in a sample of Dutch police family liaison officers. Stress And Health, 33(5), 570–577. Web. Saroyan, J. (2015). Suppressed and repressed memories among armenian genocide survivors. Peace Review, 27(2), 237–243. Web.
https://psychologywriting.com/repressed-memory-in-childhood-experiences/
BACKGROUND SUMMARY DESCRIPTION OF EXEMPLARY EMBODIMENTS First Embodiment (FIGS. 1 to 4B) Second Embodiment (FIGS. 5A and 5B) Third Embodiment (FIGS. 6A and 6B) 1. Technical Field The present invention relates to a recording apparatus. 2. Related Art In the related art, various recording apparatuses have been used. Among these, a recording apparatus that is provided with a transport belt which transports a medium and that performs recording on a medium transported by the transport belt is disclosed. For example, in JP-A-2014-47442, and JP-A-2011-51165, a recording apparatus which performs recording by ejecting ink onto a medium which is transported by a transport belt is disclosed. There are various configurations of a recording apparatus provided with a transport belt which transports a medium, and for example, as disclosed in JP-A-2014-47442, there is an apparatus provided with a pressing roller (press roller) which presses a medium onto a transport belt, or an apparatus provided with a support unit (suctioning platen) which can support a transport belt as disclosed in JP-A-2011-51165. However, with a configuration in which the pressing roller which presses a medium onto a transport belt, and a support unit which can support the transport belt are provided, there has been a case in which friction increases between the transport belt and the support unit at a position where the medium faces the pressing roller, or the like, due to interference caused by a foreign substance, or the like. An advantage of some aspects of the invention is to suppress an increase in friction between a transport belt and a support unit which supports the transport belt. According to an aspect of the invention, there is provided a recording apparatus which includes a transport belt which transports a medium; a pressing roller which presses the medium to the transport belt; and a support unit which can support the transport belt, in which friction is relieved on at least a part of a portion of the support unit that faces the pressing roller with the transport belt disposed therebetween. According to the aspect, the transport belt, the pressing roller, and the support unit are provided, and at least a part of a portion of the support unit which faces the pressing roller through the transport belt is subjected to friction relief processing. For this reason, it is possible to suppress an increase in friction between the transport belt and the support unit. In the recording apparatus, in the aspect, friction relief may be provided by reducing the contact area of the support unit with respect to the transport belt. According to the aspect, the support unit is subjected to the contact area reducing processing with respect to the transport belt, as the friction relief processing. For this reason, it is possible to suppress an increase in friction between the transport belt and the support unit by reducing the contact area between the transport belt and the support unit. In the recording apparatus, in the aspect, the support unit may be provided with a protruding portion for reducing the contact area, and an apex portion of the protruding portion may be in contact with the transport belt. According to the aspect, the support unit is provided with the protruding portion for reducing the contact area, and the apex portion of the protruding portion is in contact with the transport belt.
For this reason, it is possible to suppress an increase in friction force between the transport belt and the support unit by reducing the contact area between the support unit and the transport belt by causing the transport belt and the apex portion of the protruding portion to be in contact with each other. In the recording apparatus, in the aspect, the support unit may be provided with a protruding portion including a ridge as the contact area reducing processing, and the ridge may be in contact with the transport belt. According to the aspect, the support unit is provided with the protruding portion including a ridge as the contact area reducing processing, and the ridge is in contact with the transport belt. For this reason, it is possible to suppress an increase in friction force between the transport belt and the support unit by reducing the contact area between the support unit and the transport belt, by causing the transport belt and the ridge of the protruding portion to be in contact with each other. In the recording apparatus, in the aspect, the ridge may be formed in a transport direction of the medium. According to the aspect, the ridge is formed along the transport direction of the medium. For this reason, it is particularly possible to effectively suppress an increase in friction force between the transport belt and the support unit. In the recording apparatus, in the aspect, a sliding property improving material with which sliding properties with respect to the transport belt can be improved may be disposed in the support unit, as the friction relief processing. According to the aspect, the sliding property improving material with which sliding properties with respect to the transport belt can be improved is disposed in the support unit to achieve friction relief. For this reason, it is possible to suppress an increase in friction force between the transport belt and the support unit by improving sliding properties between the support unit and the transport belt. In the recording apparatus, in the aspect, the support unit in the recording apparatus may be coated with the sliding property improving material to achieve friction relief. According to the aspect, the support unit is coated with the sliding property improving material to achieve friction relief. For this reason, it is possible to suppress an increase in friction force between the transport belt and the support unit by improving sliding properties between the support unit and the transport belt by causing a portion of the support unit which is coated with the sliding property improving material to be in contact with the transport belt. In the recording apparatus, in the aspect, the support unit in the recording apparatus may be provided with the sliding property improving material to achieve friction relief. According to the aspect, the support unit is provided with the sliding property improving material to achieve friction relief. For this reason, it is possible to suppress an increase in friction force between the transport belt and the support unit by improving sliding properties between the support unit and the transport belt by causing the sliding property improving material, which forms the support unit, to be in contact with the transport belt.
In the recording apparatus according to the aspect, the transport belt may be an adhesive belt in which an adhesive is applied to a support face for the medium, and the support unit may be subjected to the friction relief processing on a face which faces an end portion in a width direction of the transport belt. In a case in which the transport belt is an adhesive belt, the adhesive may migrate, as a foreign substance, from the end portion in the width direction of the support face to which it is applied onto the end portion in the width direction of the opposite face, which faces the support unit, and the friction force between the transport belt and the support unit may increase. With this aspect, an increase in friction force between the transport belt and the support unit can be suppressed even when the transport belt is an adhesive belt. Moreover, because friction relief on the face which faces the end portion in the width direction of the transport belt is sufficient to suppress the increase in friction force even when the friction relief processing is not performed on a face which faces a center portion in the width direction of the transport belt, the cost of the friction relief processing can be reduced by performing it only at the end portion in the width direction of the transport belt.

DESCRIPTION OF EXEMPLARY EMBODIMENTS

Hereinafter, a recording apparatus according to one embodiment of the invention will be described with reference to the accompanying drawings.

First Embodiment (FIGS. 1 to 4B)

First, an outline of the recording apparatus according to a first embodiment of the invention will be described. FIG. 1 is a schematic side view of the recording apparatus according to the embodiment.

The recording apparatus according to the embodiment is provided with a feeding unit which can reel out a roll R of a medium for recording (medium) P on which recording is performed. A transport mechanism is further provided which transports the medium for recording P in a transport direction A using an adhesive belt (a transport belt formed of an endless belt) which supports the medium for recording P on a support face F to which an adhesive is applied. A recording mechanism is further provided which performs recording on the medium for recording P by causing a carriage, including a recording head which ejects ink, to perform reciprocating scanning in a reciprocating direction B intersecting the transport direction A of the medium for recording P. A cleaning mechanism for the adhesive belt and a winding mechanism including a winding shaft which winds up the medium for recording P are also provided.

The feeding unit is provided with a rotating shaft which also serves as the position for setting the roll R of the medium for recording P, and is configured so that the medium for recording P can be reeled out from the roll R disposed on the rotating shaft toward the transport mechanism by using a driven roller.
When reeling out the medium for recording P toward the transport mechanism, the rotating shaft rotates in a rotation direction C.

The transport mechanism is provided with the adhesive belt, which transports the medium for recording P reeled out from the feeding unit while the medium is mounted thereon, and with a driving roller and a driven roller which move the adhesive belt in a direction E. The medium for recording P is pressed by the pressing roller so that it is attached to, and mounted on, the support face F of the adhesive belt. The driving roller rotates in the rotation direction C when the medium for recording P is transported. However, the endless belt serving as the transport belt is not limited to the adhesive belt; for example, an endless belt of an electrostatic adsorption type may be used.

A platen, as a support unit which can support the adhesive belt, is provided under the adhesive belt according to the embodiment. Since the platen supports the adhesive belt, vibration of the adhesive belt associated with its movement can be suppressed.

In addition, the pressing roller in the embodiment is configured to reciprocate (swing) in the transport direction A so that it does not remain in contact with the same portion of the medium for recording P for an extended time and leave a contact mark. However, the pressing roller is not limited to such a configuration.

In FIG. 1, and in FIGS. 3A to 4B which will be described later, the contact portion between the pressing roller and the medium for recording P, and the like, is illustrated in a simplified manner, and the contact angle between the pressing roller and the medium for recording P (the approach angle of the medium for recording P) is the same in FIG. 1 and in FIGS. 3A to 4B.

The recording mechanism includes a carriage motor (refer to FIG. 2) which causes the carriage including the recording head to reciprocate in the reciprocating direction B. In FIG. 1, the reciprocating direction B is a direction perpendicular to the paper surface.

Recording is performed by causing the carriage including the recording head to reciprocate; the transport mechanism stops transporting the medium for recording P during recording scanning (while the carriage is moving). In other words, reciprocating scanning of the carriage and transporting of the medium for recording P are performed alternately during recording. That is, when performing recording, the transport mechanism transports the medium for recording P intermittently (intermittent movement of the adhesive belt) in response to the reciprocating scanning of the carriage.

The recording apparatus according to the embodiment is provided with the recording head which ejects ink while reciprocating in the reciprocating direction B; however, the recording apparatus may instead be a printing apparatus provided with a so-called line head in which a plurality of nozzles which eject ink are provided in a direction intersecting the movement direction of the medium for recording P.
Here, the "line head" is a recording head in which a region of nozzles, formed in the direction intersecting the movement direction of the medium for recording P, is provided so as to cover the entire intersecting direction, and which is used in a recording apparatus that forms an image by moving the recording head and the medium for recording P relative to each other. The region of the nozzles of the line head in the intersecting direction need not cover the entire intersecting direction of every medium for recording P that the recording apparatus supports.

The cleaning mechanism for the adhesive belt includes a cleaning brush, formed of a plurality of cleaning rollers connected in the direction of a rotating shaft, and a tray in which detergent for cleaning the cleaning brush is accommodated.

The winding mechanism is a mechanism for winding up the medium for recording P on which recording has been performed and which is transported from the transport mechanism by using a driven roller. It can wind the medium for recording P into a roll R of the medium for recording P by disposing a paper tube, or the like, for winding on the winding shaft and winding the medium for recording P around the paper tube.

Subsequently, an electrical configuration of the recording apparatus according to the embodiment will be described. FIG. 2 is a block diagram of the recording apparatus according to the embodiment.

A CPU which manages control of the entire recording apparatus is provided in a control unit. The CPU is connected, via a system bus, to a ROM, which stores various control programs and the like executed by the CPU, and to a RAM, which can temporarily store data.

The CPU is also connected, via the system bus, to a head driving unit for driving the recording head.

The CPU is further connected, via the system bus, to a motor driving unit for driving a carriage motor, a transport motor, a feeding motor, and a winding motor. Here, the carriage motor is a motor for moving the carriage that includes the recording head. The transport motor is a motor for driving the driving roller. The feeding motor is a motor for driving the rotating shaft in order to send the medium for recording P to the transport mechanism. The winding motor is a driving motor for rotating the winding shaft.

The CPU is further connected, via the system bus, to an input-output unit, and the input-output unit is connected, via the system bus, to a PC for transmitting and receiving data, such as recording data, and signals.

With such a configuration, the control unit can control the entire recording apparatus.

Subsequently, the platen, which is a main portion of the recording apparatus in the embodiment, will be described. FIGS. 3A and 3B are schematic side views which illustrate the periphery of the portion of the platen of the recording apparatus according to the embodiment which faces the pressing roller.
FIG. 3A illustrates a state in which a foreign substance (for example, adhesive applied to the support face F of the adhesive belt that has migrated around to the opposite side of the belt) approaches the portion facing the pressing roller, and FIG. 3B illustrates a state in which the foreign substance has reached the portion facing the pressing roller.

FIGS. 4A and 4B are schematic views which illustrate the periphery of the portion of the platen of the recording apparatus according to the embodiment which faces the pressing roller. FIG. 4A is a side view and FIG. 4B is a plan view; in FIG. 4B, constituent members other than the platen are omitted in order to make the shape of the platen easy to grasp.

In a recording apparatus provided with a transport belt which transports a medium, as illustrated in FIG. 3A, there is a case in which a foreign substance adheres to the side of the transport belt (adhesive belt) opposite to the support face for the medium (medium for recording P). In particular, in a configuration including the adhesive belt as the transport belt, there is a case in which the adhesive applied to the adhesive belt becomes a foreign substance by coming around onto the side opposite to the support face for the medium. When the state illustrated in FIG. 3A proceeds to the state illustrated in FIG. 3B as the adhesive belt moves in the direction E, the foreign substance, such as adhesive, may come into contact with the portion of the platen which faces the pressing roller. The reason is that the adhesive belt is an endless belt and accordingly flexible, so when the pressing roller presses the medium for recording P onto the adhesive belt, that force presses the adhesive belt against the platen. The friction force between the adhesive belt and the portion of the platen which faces the pressing roller then increases due to the influence (interference) of the foreign substance such as the adhesive, and there are concerns of an abnormal sound being generated, the movement load of the adhesive belt increasing, the movement accuracy of the adhesive belt deteriorating, or the like.

For this reason, in the recording apparatus according to the embodiment, as illustrated in FIGS. 4A and 4B, friction relief processing is performed on the face which faces the end portion in the width direction of the adhesive belt (the end portion in the reciprocating direction B) within the portion of the platen which faces the pressing roller.

That is, the recording apparatus according to the embodiment is provided with the adhesive belt which transports the medium for recording P, the pressing roller which presses the medium for recording P onto the adhesive belt, and the platen which can support the adhesive belt, and the platen is subjected to the friction relief processing in at least a part of the portion which faces the pressing roller through the adhesive belt. This is therefore a configuration in which an increase in friction force between the adhesive belt and the platen can be suppressed.

As illustrated in FIG. 1, the adhesive belt according to the embodiment is an adhesive belt in which an adhesive is applied to the support face F for the medium for recording P.
In addition, the platen is subjected to the friction relief processing on the face which faces the end portion in the width direction of the adhesive belt, as illustrated in FIG. 4B.

In a case in which the transport belt is an adhesive belt, as in the recording apparatus according to the embodiment, the adhesive may migrate, as a foreign substance, from the end portion in the width direction of the support face F to which it is applied onto the end portion in the width direction of the opposite face, which faces the platen, and the friction force between the adhesive belt and the platen may increase. Here, the transport belt of the recording apparatus according to the embodiment is the adhesive belt in which the adhesive is applied to the support face F for the medium for recording P, and, as described above, the face of the platen which faces the end portion in the width direction of the adhesive belt is subjected to the friction relief processing. For this reason, although the transport belt of the recording apparatus according to the embodiment is the adhesive belt, the configuration is such that an increase in friction force between the adhesive belt and the platen can be suppressed.

When the friction relief processing is performed on the face which faces the end portion in the width direction of the adhesive belt, the platen can suppress an increase in friction force between the adhesive belt and the platen even when the friction relief processing is not performed on the face which faces the center portion in the width direction of the adhesive belt. For this reason, in the recording apparatus according to the embodiment, the increase in friction force between the adhesive belt and the platen is suppressed while the cost of the friction relief processing is reduced, by performing the friction relief processing only on the face of the platen which faces the end portion in the width direction of the adhesive belt.

The face which faces the end portion in the width direction of the adhesive belt may be any face which can face that end portion, and may include a region which faces a portion other than the end portion in the width direction of the adhesive belt, or a region which does not face the adhesive belt at all.

The friction relief processing may also be performed on the adhesive belt itself (in particular, on the face at the end portion in the width direction which faces the platen), and not only on the platen.

In the platen of the recording apparatus according to the embodiment, a sliding property improving material which can improve sliding properties with respect to the adhesive belt is disposed as the friction relief processing. In this manner, an increase in friction force between the adhesive belt and the platen can be suppressed by improving the sliding properties between the platen and the adhesive belt. As specific examples of such placement, various methods such as applying, attaching, coating, or deposition can be used.

Specifically, in the platen according to the embodiment, a fluororesin (polytetrafluoroethylene) is coated as the sliding property improving material, as the friction relief processing.
An increase in friction force between the adhesive belt and the platen is thereby suppressed: the sliding properties between the platen and the adhesive belt are improved by causing the adhesive belt to contact the portion of the platen which is coated with the sliding property improving material (the fluororesin), that is, the portion which faces the pressing roller.

In the embodiment, the placement of the sliding property improving material is a fluororesin coating; however, it is not limited to such a configuration. For example, a ceramic may be coated instead of the fluororesin. The coating method is not particularly limited either, and the sliding property improving material may, for example, be attached in the form of a sticker.

In addition, as the placement of the sliding property improving material (the friction relief processing), the platen itself may be formed of the sliding property improving material. Also in such a configuration, an increase in friction force between the adhesive belt and the platen can be suppressed by improving the sliding properties between the platen and the adhesive belt, by causing the adhesive belt to contact the sliding property improving material which forms the platen.

The sliding property improving material is not particularly limited, and polyacetal, polyimide, nylon (particularly nylon 11, 12, 4-6, or the like), or the like can preferably be used instead of the fluororesin or ceramic.

Second Embodiment (FIGS. 5A and 5B)

Subsequently, a recording apparatus according to a second embodiment will be described in detail with reference to the accompanying drawings.

FIGS. 5A and 5B are schematic views which illustrate the periphery of the portion of the platen which faces the pressing roller, as a main portion of the recording apparatus according to the second embodiment, and are diagrams corresponding to FIGS. 4A and 4B of the recording apparatus according to the first embodiment.

The recording apparatus according to this embodiment has the same configuration as that of the recording apparatus in the first embodiment, except for the friction relief processing of the platen.

In the recording apparatus according to the first embodiment, the sliding property improving material is disposed as the friction relief processing. In the recording apparatus according to this embodiment, on the other hand, the platen is subjected to contact area reducing processing, in which the contact area with respect to the adhesive belt is reduced, as the friction relief processing. In this manner, an increase in friction force between the adhesive belt and the platen can be suppressed by reducing the contact area between the platen and the adhesive belt.

Specifically, as illustrated in FIGS. 5A and 5B, a protruding portion is provided in the platen according to the embodiment as the contact area reducing processing, and the apparatus is configured so that an apex portion of the protruding portion is in contact with the adhesive belt. By causing the adhesive belt and the apex portion of the protruding portion to be in contact, the contact area between the platen and the adhesive belt is reduced, and an increase in friction force between the adhesive belt and the platen is suppressed.
The protruding portion in the embodiment is formed in a conical shape; however, it is not limited to such a configuration, and may have a polygonal pyramid shape such as a triangular pyramid or a square pyramid, or a configuration in which the tip of the apex portion is rounded (an R shape). In addition, the contact between the adhesive belt and the apex portion of the protruding portion is not limited to point contact, and the adhesive belt and the apex portion of the protruding portion may be in contact over a predetermined area.

Third Embodiment (FIGS. 6A and 6B)

Subsequently, a recording apparatus according to a third embodiment will be described in detail with reference to the accompanying drawings.

FIGS. 6A and 6B are schematic views which illustrate the periphery of the portion of the platen which faces the pressing roller, as a main portion of the recording apparatus in the third embodiment. FIG. 6A is a rear view, and FIG. 6B is a plan view corresponding to FIG. 4B of the recording apparatus in the first embodiment and FIG. 5B of the recording apparatus in the second embodiment.

The recording apparatus according to this embodiment has the same configuration as that of the recording apparatuses in the first and second embodiments, except for the friction relief processing of the platen.

The platen of the recording apparatus according to the second embodiment has a configuration including the plurality of protruding portions as the contact area reducing processing. Meanwhile, as illustrated in FIGS. 6A and 6B, the platen of the recording apparatus according to this embodiment has a configuration in which a plurality of columnar constituent members extend along the transport direction A, as the contact area reducing processing.

In other words, the platen according to this embodiment has a configuration in which a protruding portion with a ridge is provided as the contact area reducing processing, and the ridge is in contact with the adhesive belt. The contact area between the platen and the adhesive belt is reduced by causing the adhesive belt and the ridge of the protruding portion to be in contact, and an increase in friction force between the adhesive belt and the platen is suppressed.

The ridge of the platen according to this embodiment is formed along the transport direction A of the medium for recording P. For this reason, the configuration can suppress the increase in friction force between the adhesive belt and the platen particularly effectively.

The protruding portion in the embodiment is formed in a columnar shape, and the ridge is formed in an arc shape; however, it is not limited to such a configuration. The contact between the adhesive belt and the ridge of the protruding portion is not limited to line contact, and the adhesive belt and the ridge of the protruding portion may be in contact over a predetermined area (width). In addition, the ridge of the platen according to this embodiment is formed along the transport direction A of the medium for recording P; however, it is not limited to such a configuration, and the ridge may, for example, be formed along the reciprocating direction B.
As illustrated in FIGS. 4B and 5B, the platen of the recording apparatus according to the first and second embodiments is subjected to the friction relief processing only on the face which faces the end portion in the width direction of the adhesive belt (the end portion in the reciprocating direction B) within the portion which faces the pressing roller. Meanwhile, as illustrated in FIG. 6B, the platen of the recording apparatus according to the third embodiment is subjected to the friction relief processing across the entire width direction of the adhesive belt within the portion which faces the pressing roller. However, the configurations are not limited to these: the friction relief processing in the recording apparatuses according to the first and second embodiments may be performed across the entire width direction of the adhesive belt within the portion which faces the pressing roller, and the friction relief processing in the recording apparatus according to the third embodiment may be performed only on the face which faces the end portion in the width direction of the adhesive belt within the portion which faces the pressing roller.

The invention is not limited to the above-described embodiments; various modifications can be made within the scope of the invention described in the claims, and it is needless to say that those are also included in the scope of the invention.

This application claims priority under 35 U.S.C. §119 to Japanese Patent Application No. 2016-029615, filed Feb. 19, 2016. The entire disclosure of Japanese Patent Application No. 2016-029615 is hereby incorporated herein by reference.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described with reference to the accompanying drawings, wherein like numbers reference like elements.

FIG. 1 is a schematic side view which illustrates a recording apparatus according to a first embodiment of the invention.

FIG. 2 is a block diagram which illustrates the recording apparatus according to the first embodiment of the invention.

FIG. 3A and FIG. 3B are schematic side views which illustrate main portions of the recording apparatus according to the first embodiment of the invention.

FIG. 4A is a schematic side view and FIG. 4B is a schematic plan view which illustrate main portions of the recording apparatus according to the first embodiment of the invention.

FIG. 5A is a schematic side view and FIG. 5B is a schematic plan view which illustrate main portions of a recording apparatus according to a second embodiment of the invention.

FIG. 6A is a schematic rear view and FIG. 6B is a schematic plan view which illustrate main portions of a recording apparatus according to a third embodiment of the invention.
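As context for the intermittent transport described in the first embodiment above (the carriage scans in the reciprocating direction B while the belt is stopped, then the adhesive belt advances the medium before the next pass), the following is a minimal, purely illustrative Python sketch of that alternating control flow. It is editorial commentary, not text from the application, and every function name in it is a hypothetical placeholder rather than real printer firmware.

```python
# Illustrative sketch of the intermittent-transport recording sequence
# described in the first embodiment. All function names are hypothetical
# placeholders, not part of the patent or of any real printer firmware.

def scan_carriage_and_eject_ink(pass_index: int) -> None:
    """Placeholder for one reciprocating scan of the carriage (direction B)."""
    print(f"pass {pass_index}: carriage scanning, belt stopped")

def advance_belt(distance_mm: float) -> None:
    """Placeholder for one intermittent advance of the adhesive belt (direction E)."""
    print(f"belt advanced {distance_mm} mm")

def record_page(num_passes: int, advance_per_pass_mm: float) -> None:
    for pass_index in range(num_passes):
        # 1. The belt is held stationary while the carriage with the
        #    recording head scans and ejects ink.
        scan_carriage_and_eject_ink(pass_index)
        # 2. Only after the scan finishes does the driving roller move the
        #    belt, advancing the medium by one pass width (intermittent movement).
        advance_belt(advance_per_pass_mm)

if __name__ == "__main__":
    record_page(num_passes=3, advance_per_pass_mm=25.4)
```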
Hello My Name Is... Victor Medrano
About Me
Welcome to Mr. Medrano's Algebra Class!
Algebra Course Description
Part 1: Students will understand the use of manipulatives and symbols in order to simplify expressions, and solve equations and inequalities in problem situations. Students will translate among the various representations of functions and gain an understanding of slope and intercepts of linear functions (including the effects of change in parameters) in real-world and mathematical situations.
Part 2: Students will understand that the graphs of quadratic functions are affected by the parameters of the functions, describe those effects, and solve the quadratic functions using appropriate methods.
Franklin High School 9th Grade Center
825 E. Redd Rd, El Paso, TX 79912
915.236.2400, Room 205
E-mail: [email protected]
Conference Times
Purple - 4th period; 2:15 - 3:45
Silver - 8th period; 2:15 - 3:45
Franklin High School, 900 N Resler, El Paso TX 79912
Phone (915) 236-2200, Fax (915) 587-4094
http://franklin.episd.org/directory/mathematics/medrano__victor
With rising cases of cyber fraud and security incidents, RBI published a Master Direction, the "Reserve Bank of India (Digital Payment Security Controls) Directions, 2021", providing guidelines on security controls to strengthen the nation's digital payments architecture and improve security, control, and compliance among banks, gateways, wallets, and other non-banking entities. The directions regulate security in commercial banks, small finance banks, payments banks and credit card-issuing non-banking financial companies (NBFCs). The new set of norms also specifies the criteria under which regulated entities can form partnerships and interact with third-party apps and ecosystem players such as mobile applications, payment operators and gateways. The comprehensive guidelines aim to tackle the recent sharp rise in digital outages, cyber fraud and data breach incidents.
The control guidelines consolidate multiple vital aspects of the cybersecurity space, such as:
- Governance and Management of Security Risks
- Other Generic Security Controls
- Application Security Life Cycle (ASLC)
- Authentication Framework
- Fraud Risk Management
- Reconciliation Mechanism
- Customer Protection, Awareness, and Grievance Redressal Mechanism
Apart from the general guidelines, the directive contains separate sets of controls, namely:
- Internet Banking Security Controls
- Mobile Payments Application Security Controls
- Card Payments Security
With NPCI's plan to revamp the IT infrastructure across popular payment channels like UPI, IMPS, AePS etc., the latest RBI directives are considered a crucial update to improve both the security of digital payment channels and customer convenience. The Master Direction aids in setting up a robust governance structure and implementing common minimum standards of security controls for digital payment products and services. Adhering to the guideline, all regulated entities (REs) need to update their security processes and policies from time to time and put in place an online dispute resolution mechanism for resolving disputes and grievances of customers pertaining to digital payments. In the event of a security incident, financial institutions are expected to communicate information about threats and attacks against their digital payment products and ensure that precautionary safeguards are in place against incidents such as phishing, misuse of remote access, and compromise of PINs, credentials and card details. RBI has given all regulated entities six months to comply.
Full document here: Master Direction on Digital Payment Security Controls
About Us: As a CERT-In empanelled organization, QRC Assurance and Solutions has been a forerunner on the cybersecurity front, certified to provide PCI DSS QSA, PA QSA, PCI 3DS, PCI SSF, ISO 27001 and ISO 27701 certifications along with other security compliance services. QRC supports its customers in establishing, documenting, implementing and maintaining data security and privacy frameworks to protect their sensitive data from internal and external threats and to manage the confidentiality, integrity, availability, security and privacy of such information systematically.
24th February, 2021 | Compliance
Tags: HIPAA Compliance, Security Framework
https://qrcsolutionz.com/blog/rbi-issues-new-norms-for-digital-payments
WATLINGTON Environment Group will hold its first meeting of the year at the town hall on January 27 at 8pm. This will include the programme for the year and a review of the past year as well as a short talk by a guest. A year's membership of the group costs £7.50 for an adult, £14 for a couple and £2.50 for a child.
16 January 2017
https://www.henleystandard.co.uk/news/watlington/103456/green-start.html
The Department for Digital, Culture, Media & Sport (DCMS) confirmed on August 30, 2022, that it will push forward with tough new regulations and a code of practice to bolster the security and resilience of the United Kingdom's electronic communications networks and services against current and future cyberthreats.
TECHNOLOGY, OUTSOURCING, AND COMMERCIAL TRANSACTIONS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS
On July 18, 2022, the UK government published high-level proposals for its approach to regulating uses of artificial intelligence (AI), as part of its National AI Strategy and, more broadly, its UK Digital Strategy. The government is seeking public views on the approach, which is contained in a policy paper; a more detailed White Paper will be published in late 2022.
In June 2022, the UK government published its cross-government UK Digital Strategy for creating a world-leading environment in which to grow digital businesses. The Digital Strategy brings together various initiatives on digitalization and data-driven technologies, including the National AI Strategy. The government states that it is actively seeking to grow expertise in deep technologies of the future, such as artificial intelligence, next generation semiconductors, digital twins, autonomous systems, and quantum computing.
Join Daniel S. Savrin and Mark J. Fanelli in the next installment of the Morgan Lewis Automotive Hour Webinar series, focused on All Things Autonomous – Regulatory and Commercial Considerations for Delivery Robots (On and Off Campus), Escooters, and Drones.
As we all try to keep up with the Metaverse and as the healthcare system wilts under a data deluge, the convergence of realities in a shared online space is not merely a chance for practitioners and patients to find each other and interact in new ways, it's also a rare opportunity to help a new paradigm sprout. The answers to detangling some sticky wickets of Health 2.0, like ensuring efficient, secure communications and exchanges between participants, may share a common thread: clear out (not just debug) the cobwebs and flip the crypt.
On May 6, 2022, the UK government outlined its plans to boost competition and drive economic growth and innovation in a major regulatory reform aimed at big tech. The news comes in the wake of fears that a handful of tech giants disproportionately dominate the market, subjecting smaller businesses to predatory prices and ultimately harming consumers through higher prices as well as limited options and control over their online experiences.
Spotlight
As we start 2022, as part of our Spotlight series, we connect with Reece Hirsch, the co-head of Morgan Lewis's privacy and cybersecurity practice, to discuss the recent policy statement issued by the US Federal Trade Commission regarding the Health Breach Notification Rule and how it applies to health app developers that handle consumers' sensitive health information. Our Tech & Sourcing @ Morgan Lewis blog also published a summary of the policy statement.
Contract Corner
As 2021 comes to a close, we have once again compiled all the links to our Contract Corner blog posts, a regular feature of Tech & Sourcing @ Morgan Lewis. In these posts, members of our global technology, outsourcing, and commercial transactions practice highlight particular contract provisions, review the issues, and propose negotiating and drafting tips.
Contract Corner Over the last year, companies implemented new digital technology solutions at record levels, looking to implement emerging technologies, improve the user digital experience, leverage cloud solutions to store the massive amounts of data being generated, and test the waters on how to transact using digital assets. And we don’t see things slowing down. According to recent guidance from the US Federal Trade Commission (FTC), providers of health apps and connected devices that collect consumers’ health information must comply with the FTC’s Health Breach Notification Rule, 16 CFR Part 318, and therefore are required to notify consumers and others when their health data is breached.
https://www.morganlewis.com/blogs/sourcingatmorganlewis?tag=69c6e4f7-c4f6-4a4c-93a7-9a591f9177b9
The American Academy of Ophthalmology agrees with important patient safety guidelines recommended in a joint report issued by the three federal agencies that help guide the nation's health care system. The report, authored by the three cabinet secretaries of the U.S. Departments of Health and Human Services, Treasury, and Labor, focuses on reforms that deliver system-wide cost savings, with significant attention paid to the delivery of care at the state level. One recommendation is that states forgo scope-of-practice expansion when legitimate health and safety concerns exist. The Academy's community of 23,000 U.S. ophthalmologists supports this approach. In ophthalmology, scope-of-practice regulations protect patients from harm during surgery by ensuring that only those with the necessary medical education and clinical training are authorized to perform surgical eye procedures. The Academy views this recommendation as a clear win for patients and their safety, particularly since it has the endorsement of three cabinet secretaries. The report's authors recommend that states remove so-called "restrictive" scope-of-practice laws that allegedly "limit provider entry and ability to practice in ways that do not address demonstrable or substantial risks to consumer health and safety." The Academy, which is the nation's leading voice for the profession of ophthalmology on policy issues that affect how medical and surgical eye care is provided in the United States, supports the authors' stated standard of a justified safety regulation to prevent risk of serious harm. "Too often there is a rush to extend surgical privileges to those who lack the years of medical education and clinical training necessary for understanding and safely performing critical procedures," Keith D. Carter, MD, FACS, president of the American Academy of Ophthalmology, said. "It is critical, certainly in eye care, that should our states opt to expand scope of practice, they eschew any dangerous softening of surgical standards and heed the recommendations in this report by preserving regulations that protect patients seeking surgery and complex medical care of eye disease." The report's authors further conclude that states should allow all healthcare providers to practice to the top of their license, an approach of which the Academy is generally supportive. Additionally, the Academy urges Congress and the Trump administration to back federal truth-in-advertising legislation to ensure patients understand their providers' surgical and clinical qualifications. Such legislation can ensure that patients can "assess quality of care at the time of delivery," as recommended by the report's authors. This can help alleviate documented patient confusion on the qualifications of the myriad eye care providers in each town and state.
https://eyewire.news/articles/american-academy-of-ophthalmology-calls-on-states-to-maintain-high-safety-standards-for-surgical-eye-care/
Our group works on the development of magnetic resonance detection techniques for novel targeted contrast agents. Xenon biosensors have an outstanding potential to increase the significance of magnetic resonance imaging (MRI) in molecular imaging, combining the advantages of MRI with the high sensitivity of hyperpolarized 129Xe and the specificity of a functionalized contrast agent. To explore this potential, the European Research Council (ERC) is providing funding in the form of a Starting Grant (BiosensorImaging, GA No. 242710) over the next 5 years. Based on new detection schemes (the Hyper-CEST method) in Xe MRI, this novel concept in molecular diagnostics will be made available for biomedical applications. The work focuses on high-sensitivity in vitro diagnostics for localization of tumour cells in cell cultures and on first demonstrations in animal models. Such a sensor will enable detection of tumours at high sensitivity without any background signal. More detailed work on the different available Hyper-CEST contrast parameters focuses on an absolute quantification of new molecular markers that will improve non-invasive tumour diagnostics significantly. NMR detection of functionalized Xe biosensors has the potential to close the sensitivity gap between modalities of nuclear medicine like PET/SPECT and MRI without using ionizing radiation or making compromises in penetration depth such as in optical methods.
http://yacadeuro.org/members/Leif-Schroder/
Honor roll: Criticism books Each of these Criticism books has received at least one award nomination. They are ranked by honors received. Books to Die For: The World's Greatest Mystery Writers on the World's Greatest Mystery Novels The world’s greatest mystery writers on the world’s greatest mystery novels: - Michael Connelly on The Little Sister … - Kathy Reichs on The Silence of the Lambs… - Mark Billingham on The Maltese Falcon… - Ian Rankin on I Was Dora Suarez… With so many mystery novels to choose among, and so many new titles appearing each year, where should a reader start? What are the classics of the genre? Which are the hidden gems? In the most ambitious anthology of its kind yet attempted, the world’s leading mystery writers have come together to…[more] Talking About Detective Fiction In a perfect marriage of author and subject, P. D. James—one of the most widely admired writers of detective fiction at work today—gives us a personal, lively, illuminating exploration of the human appetite for mystery and mayhem, and of those writers who have satisfied it. P. D. James examines the genre from top to bottom, beginning with the mysteries at the hearts of such novels as Charles Dickens’s Bleak House and Wilkie Collins’s The Woman in White, and bringing us into the present with such writers as Colin Dexter and Henning Mankell. Along the way she writes about Arthur Conan Doyle, Dorothy L. Sayers, Agatha Christie (“arch-breaker of rules”), Josephine Tey, Dashiell Hammett, and Peter Lovesey, among many others. She traces their lives into and out of their fiction, clarifies their individual styles, and gives us indelible portraits of the characters they’ve created,…[more] Agatha Christie: Murder in the Making–More Stories and Secrets from Her Notebooks This follow-up to the Edgar-nominated Agatha Christie’s Secret Notebooks features Christie’s unpublished work, including an analysis of her last unfinished novel, and a foreword by the acclaimed actor David Suchet. In this invaluable work, the Agatha Christie expert and archivist John Curran examines the unpublished notebooks of the world’s bestselling author to explore the techniques she used to surprise and entertain generations of readers. Also drawing on Christie’s personal papers and letters, he reveals how more than twenty of her novels, as well as stage scripts, short stories, and some more personal items, evolved. As he addresses the last notebook, Curran uses his deep knowledge of Christie to offer an insightful, well-reasoned analysis of her final unfinished work, based on her notes. Agatha Christie: Murder in the Making features several wonderful gems, including Christie’s own essay on her…[more] On Conan Doyle: Or, The Whole Art of Storytelling A passionate lifelong fan of the Sherlock Holmes adventures, Pulitzer Prize-winning critic Michael Dirda is a member of The Baker Street Irregulars—the most famous and romantic of all Sherlockian groups. Combining memoir and appreciation, On Conan Doyle is a highly engaging personal introduction to Holmes’s creator, as well as a rare insider’s account of the curiously delightful activities and playful scholarship of The Baker Street Irregulars. Because Arthur Conan Doyle wrote far more than the mysteries involving Holmes, this book also introduces readers to the author’s lesser-known but fascinating writings in an astounding range of other genres. 
A prolific professional writer, Conan Doyle was among the most important Victorian masters of the supernatural short story, an early practitioner of science fiction, a major exponent of historical fiction, a charming essayist and memoirist, and an outspoken public figure who attacked racial injustice…[more] Dame Agatha's Shorts: An Agatha Christie Short Story Companion In Dame Agatha's Shorts, mystery author Elena Santangelo guides you through the short works of one of mystery’s most famous writers, Agatha Christie. Find out: - what was the most exciting event in Christie’s life outside of dining with the Queen. - who Miss Lemon, detective Hercule Poirot’s secretary, listed as a former employer on her resume. - which three short stories inspired the novel Evil Under The Sun. - what was Ariadne Oliver’s sideline before she became a famous novelist. - what inspired Christie’s first story, and possibly her whole writing career. - why you should read, or re-read, Agatha Christie’s short stories. And much more. The Lineup: The World's Greatest Crime Writers Tell the Inside Story of Their Greatest Detectives A great recurring character in a series you love becomes an old friend. You learn about their strange quirks and their haunted pasts and root for them every time they face danger. But where do some of the most fascinating sleuths in the mystery and thriller world really come from? What was the real-life location that inspired Michael Connelly to make Harry Bosch a Vietnam vet tunnel rat? Why is Jack Reacher a drifter? How did a brief encounter in Botswana inspire Alexander McCall Smith to create Precious Ramotswe? In The Lineup, some of the top mystery writers in the world tell about the genesis of their most beloved characters—or, in some cases, let their creations do the talking. In Pursuit of Spenser: Mystery Writers on Robert B. Parker and the Creation of an American Hero A Tribute to Robert B. Parker and His Greatest Creation: Spenser. Join award-winning mystery editor Otto Penzler and a first-rate lineup of mystery writers as they go in pursuit of Spenser and the man who created him, Robert B. Parker. These are the writers who knew Parker best professionally and personally, sharing memories of the man, reflections on his impact on the genre, and insights into what makes Spenser so beloved. Ace Atkins, the author chosen to take up Parker’s pen and continue the Spenser series, relates the formative impact Spenser had on him as a young man; gourmet cook Lyndsay Faye describes the pleasures of Spenser’s dinner table; Lawrence Block explains the irresistibility of Parker’s literary voice; and more. In Pursuit of Spenser pays tribute to Spenser, and Parker, with affection, humor, and a deep appreciation for what both have left behind. …[more] Portrait of a Novel: Henry James and the Making of an American Masterpiece A revelatory biography of the American master as told through the lens of his greatest novel. Henry James (1843–1916) has had many biographers, but Michael Gorra has taken an original approach to this great American progenitor of the modern novel, combining elements of biography, criticism, and travelogue in re-creating the dramatic backstory of James’s masterpiece, Portrait of a Lady (1881). Gorra, an eminent literary critic, shows how this novel—the scandalous story of the expatriate American heiress Isabel Archer—came to be written in the first place. 
Traveling to Florence, Rome, Paris, and England, Gorra sheds new light on James's family, the European literary circles—George Eliot, Flaubert, Turgenev—in which James made his name, and the psychological forces that enabled him to create this most memorable of female protagonists. Appealing to readers of Menand's The Metaphysical Club and McCullough's The Greater Journey, Portrait of a Novel provides a brilliant account of the greatest American novel of expatriate life ever written. It becomes a piercing detective story on its own.
Sherlock Holmes for Dummies
Get a comprehensive guide to this important literary figure and his author. A classic literary character, Sherlock Holmes has fascinated readers for decades—from his repartee with Dr. Watson and his unparalleled powers of deduction to the settings, themes, and villains of the stories. Now, this friendly guide offers a clear introduction to this beloved figure and his author, Sir Arthur Conan Doyle, presenting new insight into the detective stories and crime scene analysis that have made Sherlock Holmes famous. Inside you'll find easy-to-understand yet thorough information on the characters, recurring themes, locations, and social context of the Sherlock Holmes stories, the relationship of these stories to literature, and the forensics and detective work they feature. You'll also learn about the life of the author.
Thrillers: 100 Must Reads
The most riveting reads in history meet today's biggest thriller writers. Thrillers: 100 Must-Reads examines 100 seminal works of suspense through essays contributed by such esteemed modern thriller writers as: David Baldacci, Steve Berry, Sandra Brown, Lee Child, Jeffery Deaver, Tess Gerritsen, Heather Graham, John Lescroart, Gayle Lynds, Katherine Neville, Michael Palmer, James Rollins, R. L. Stine, and many more. Features 100 works—from Beowulf to The Bourne Identity, Dracula to Deliverance, Heart of Darkness to The Hunt for Red October—deemed must-reads by the International Thriller Writers organization. Much more than an anthology, Thrillers: 100 Must-Reads goes deep inside the most notable thrillers published over the centuries. Through lively, spirited, and thoughtful essays that examine each work's significance, impact, and influence, Thrillers: 100 Must-Reads provides both historical and personal perspective on those spellbinding works that have kept readers on the edge of their seats for centuries.
http://www.awardannals.com/vq/Honor_roll:Criticism_books/?yr=2010-2019
Academics from the School of Nursing and Midwifery call for an increased spiritual awareness
The School of Nursing and Midwifery at Trinity College Dublin is hosting an international conference today to raise awareness of the necessity to address patients' and families' spiritual care needs in the healthcare setting. The conference, The Spiritual Imperative in Healthcare: Securing Foundations, comes on foot of an increasing body of evidence around spiritual care, competence and assessment, and an awareness that a person's religious beliefs and spirituality should be sensitively explored and routinely considered as an essential component of care delivery.
Spirituality may be viewed as central in many people's lives; it deals with issues of hope, meaning and purpose and contributes to health and wellbeing. Spirituality is core to many people's identity and is often reported as vital in helping them to cope with their distress. The health benefits of spiritual well-being for quality of life, anxiety and depression are widely accepted internationally.
The issue of spirituality is pertinent in our current pandemic climate, where many patients have suffered while in care settings with multiple levels of restrictions, restricted visiting at acute hospitals or nursing homes, and minimal meetings with family members. Many patients have spent long periods alone and some were unable to have family members present as they approached death. Society depended on our healthcare workers to be present with our relatives at this time.
The conference notes the expanding volume and scope of international literature that confirms nursing's and the wider healthcare community's interest in spirituality as a dimension of caring and holistic person-centred care. Internationally, there is a growing belief that professional nurses and midwives need direction regarding spiritual care. Many international studies have found that nurses lack specific skills in spiritual assessment, awareness and referral opportunities. Speakers from Ireland, the UK, Malta, Norway, the Netherlands and the US are presenting international perspectives on this emerging field of research. The conference explores areas such as: equipping nurses and midwives for spiritual care, holistic communication in the context of spiritual care, and the foundations of spirituality and spiritual development in human consciousness.
Recommendations
Academics from the School of Nursing and Midwifery have recommended:
- An increased spiritual awareness and recognition of spirituality as a standard for good practice
- The integration of religious and spiritual aspects of care within nursing and midwifery practice, education and research.
- The development of human connections and humanistic, compassionate, person-centred approaches to care within health care. This is a multipronged approach where the individual is nurtured on all levels.
- Holistic approaches to communication when delivering patient care, encompassing all aspects of care while nurturing the spirit.
- Ongoing education, training and research, and espousing the value of mindful, compassionate communication to enable effective modes of preparation that benefit the care of patients.
- Ongoing support for staff in their caring role to support well-being, and recognition of the need for self-compassion among healthcare staff.
Professor Kathleen Neenan, Chair of the SRIG group, School of Nursing and Midwifery, said: Spiritual care that recognises and responds to the human spirit when faced with trauma, ill health, sadness or impending death can include the need for meaning, a safe space to express oneself, a place to allow rituals for prayer or sacraments, or a human presence, and is relevant in modern healthcare delivery. We are privileged as healthcare professionals to share in people's suffering and be present when they are experiencing deep hurt and when they are at their most vulnerable. This conference allows us the opportunity to place the true value of delivering spiritual care to our patients centre-stage. We have come together to develop a deeper understanding of the issues and challenges that we encounter and to help us move forward in translating the importance and relevance of spiritual care into clinical practice. We are indebted to healthcare workers all over the country for the care they have given so selflessly over their careers, but especially during the COVID-19 pandemic.
https://www.tcd.ie/news_events/articles/spirituality-and-spiritual-awareness-central-to-good-healthcare-practice/
The Enagol Math family consists of 2 weights plus true italics. It is a rounded slab-serif typeface of semi-condensed proportions. I have composed all the proportions of the characters based on a study of mathematical proportions related to the golden sequences of Perrin, Lucas and Fibonacci. From an initial matrix of golden proportions applied to the letter 'H' for capitals and 'n' for lowercase, calculated for the two extremes of the family, Light and Bold, I then derive the full set of proportions using my formula of three axes. For the italic versions I have drawn a complete set of lowercase letters that give these fonts an appearance close to italic handwriting. In these versions I have also applied many optical corrections to balance the distortions created in many curves by the mere inclination of the letters, which in the case of this type is 11°. The Commercial versions include:
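For readers unfamiliar with the golden sequences named above, here is a minimal Python sketch that generates the first terms of the Fibonacci, Lucas and Perrin sequences and linearly interpolates a proportion between two weight extremes. It is only an illustration: the stem-width numbers are invented placeholders, and the actual proportion matrix and "formula of three axes" used for Enagol Math are not reproduced here.

```python
# Illustrative sketch only: the sequences referenced above, plus a simple
# interpolation between assumed Light and Bold extremes. The concrete values
# are placeholders, not Enagol Math's real metrics.

def fibonacci(n):          # 0, 1, 1, 2, 3, 5, 8, ...
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

def lucas(n):              # 2, 1, 3, 4, 7, 11, 18, ...
    a, b = 2, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

def perrin(n):             # 3, 0, 2, 3, 2, 5, 5, 7, ...
    a, b, c = 3, 0, 2
    for _ in range(n):
        yield a
        a, b, c = b, c, a + b

def interpolate_stem(light: float, bold: float, t: float) -> float:
    """Linear interpolation between weight extremes (t in [0, 1])."""
    return light + t * (bold - light)

if __name__ == "__main__":
    print("Fibonacci:", list(fibonacci(8)))
    print("Lucas:    ", list(lucas(8)))
    print("Perrin:   ", list(perrin(8)))
    # Placeholder stem widths for an 'H': Light = 60 units, Bold = 120 units.
    print("Regular stem ~", interpolate_stem(60, 120, 0.5))
```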
https://creativemarket.com/deFharo/3102617-Enagol-Math-Rounded-4-fonts
Here are three most common culprits that cause eczema: It’s in the genes Eczema is an allergic condition passed down from generation to generation. Nevertheless, as soon as signs of eczema occur, scan your diet for the offender that triggers the allergy. People suffering from eczema might also have experienced bouts of asthma or hay fever. Common allergic foods These include eggs, soybeans, peanuts, wheat, oats, chocolates and dairy products such as milk and cheese. So, if you are allergic to any one of these, eliminate them. Other factors such as emotional stress, excessive sweating, contact with detergents and certain chemicals trigger an allergic reaction, resulting in eczema. The Omega-3 deficiency Besides heredity and food additives, a number of people are deficient in Omega-3 fatty acids. There are two families of EFAs, the Omega-6 fatty acids found in sunflower, safflower and corn oil and the Omega-3 fatty acids found in flaxseed (Alsi seeds), seafood, walnuts and rapeseeds. Your diet should provide a balance in the proportion of 4:1 of Omega 6: Omega 3. Unfortunately, our diets contain excessive amounts of Omega-6 fats and we deprive ourselves of the essential Omega-3 fats. Urban diets tuck in 10 to 15 times more Omega-6 than Omega-3 fats. Deficiency of Omega-3 leads to a host of degenerative diseases. So balance is the key, which can be easily achieved by cutting down on the Omega 6 from our diet and incorporating more Omega 3. Consume at least two tablespoons of flaxseeds (Alsi) daily. Grind them after roasting and sprinkle them on your salad to ensure your daily quota of Omega-3 fatty acids.
https://www.health-total.com/skin-improvement-articles/3-main-culprits-eczema/
Willemstad/Philipsburg – Inflation rose steeply across the monetary union, reflecting a sharp increase in international oil and non-oil commodity prices. The rise in non-oil commodity prices was driven by, among other things, supply chain disruptions and soaring transport costs amid the COVID-19 pandemic, Centrale Bank van Curaçao en Sint Maarten (CBCS) president Richard Doornbosch explained in the CBCS' second Quarterly Bulletin of 2021. Given the countries' high dependence on imports, inflation in Curaçao and Sint Maarten is to a great extent imported. Therefore, the hike in international commodity prices is passed through into local prices and, hence, reduces purchasing power. Average consumer prices rose sharply by an estimated 3.9% in Curaçao and 4.0% in Sint Maarten. Particularly low-income households, which spend a significant share of their income on food, are affected. "The question then arises whether under the current situation minimum wages should be at least adjusted for inflation", Doornbosch stated. "Supporters of a minimum wage system argue that it serves an important distributional purpose by providing a basic standard of living for workers. Also, minimum wages reduce poverty, and protect workers against underpayment. Opponents of minimum wages, on the other hand, argue that it increases unemployment and therefore poverty, and can be damaging to business", Doornbosch pointed out. Based on economic theory, one can provide arguments in favor of both supporters and opponents of a minimum wage system. So far, most literature on the impact of minimum wages has been focused on advanced economies, while little attention has been paid to the effects of minimum wage systems on small and developing economies such as Curaçao and Sint Maarten. "Therefore, CBCS has started a research project to quantify the economic effects of the minimum wage system in Curaçao [1]", Doornbosch said. "Research on the impact of minimum wage systems indicates, however, that minimum wages have not only an effect on the employment of workers, but also on their wages, on-the-job training, prices of goods and services, the distribution of income, and welfare", Doornbosch explained. "Hence, when considering increases in minimum wages, all these factors should be taken into account. In addition, evidence suggests that the negative effects of raising minimum wages tend to be stronger in times of recession and economic crises due to increased financial pressure on employers and employees", he continued. "Under the current circumstances where Curaçao and Sint Maarten are recovering from a deep economic contraction with high unemployment, an increase of minimum wages to compensate the current surge in consumer prices should therefore be approached with prudence given the challenging environment businesses and workers face", the CBCS-president concluded. The complete text of the Report of the President and the second Quarterly Bulletin of 2021 can be viewed on the CBCS website at www.centralbank.cw under the Publications section.
[1] Due to a lack of data regarding the economy of Sint Maarten, the research will start with Curaçao only.
https://www.721news.com/2021/12/cbcs-started-research-on-economic-impact-prudence-required-with-increase-minimum-wages/
Elderly patients with type 2 diabetes are at a greater risk for cognitive decline. The purpose of this study was to assess the relationship between the degree of hyperglycemia and cognitive status in nondemented, elderly diabetics. Between Jan 2013 and Dec 2014, 1174 geriatric patients with type 2 diabetes were enrolled in the study (579 males; age ≥ 60 years; from Fuzhou, Fujian, China). Cognitive function was measured with the Mini Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). A statistically significant, age-adjusted association was observed between the A1C levels and the scores on the two cognitive tests (MMSE and MoCA). Specifically, a 1% higher A1C value was associated with a 0.21-point lower MMSE score (95% CI, -0.28 to -0.11; P < 0.0001), as well as a 0.11-point lower MoCA score (95% CI, -0.38 to -0.10; P < 0.0001). Higher A1C levels were not significantly associated with lower MMSE and MoCA test scores after adjusting for all variables. No significant correlation was found between the two variables in patients older than 80 years of age (n = 215; OR = 1.019; 95% CI = 0.968 - 1.099; p = 0.251). Evidence strongly suggests that chronic hyperglycemia is associated with a decline in cognitive function in nondemented elderly patients with type 2 diabetes. When cognitive assessments are made, comprehensive factors such as advanced age, education level, duration of diabetes, hypertension and other vascular risks should be taken into account. For older geriatric patients (age ≥ 80 years), there is no significant correlation between A1c levels and cognitive function. Keywords: Cognitive function; Hemoglobin A1c; Mini mental state examination; Montreal cognitive assessment; Type 2 diabetes mellitus
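The abstract describes an age-adjusted association between A1C and cognitive scores. As a minimal sketch of what such an adjustment looks like (the numbers below are toy data invented for illustration, not the study's data), an ordinary least-squares regression of MMSE on A1C with age as a covariate can be set up like this:

import numpy as np

# Toy illustration of an age-adjusted association (NOT the study's data):
# regress MMSE on A1C while controlling for age, via ordinary least squares.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(60, 85, n)
a1c = rng.uniform(5.5, 10.0, n)
# Hypothetical generating process: each 1% higher A1C lowers MMSE by ~0.2 points.
mmse = 30 - 0.05 * (age - 60) - 0.2 * a1c + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), a1c, age])   # intercept, A1C, age
coef, *_ = np.linalg.lstsq(X, mmse, rcond=None)
print(f"age-adjusted A1C coefficient: {coef[1]:.2f} MMSE points per 1% A1C")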
https://www.ncbi.nlm.nih.gov/pubmed/26530222?dopt=Abstract
The AI that can help you spot bot attacks
By: Paul Kennedy
The machine learning algorithms used in data mining have improved in recent years, but one thing that hasn't improved is the accuracy of their predictions. This month, researchers published a paper detailing how an algorithm developed at IBM's Watson Research Labs could be used to identify the source of bot activity in large databases. The technique was named the "Gain Information on Bot Activity" algorithm, after the famous AI researcher David Allen Watson. Watson researchers originally used the algorithm in a study that sought to predict which keywords on a Google search would lead to an attack. Watson's algorithm was able to identify a large amount of malicious content on the search engine that they believe could be attributed to bot accounts. However, they eventually decided to focus on other bot accounts that could potentially be used in an attack that targeted specific keywords. For example, they believed they could identify a bot account that was behind a recent wave of distributed denial-of-service (DDoS) attacks that caused large websites to slow down. The researchers found that a large number of these DDoS attacks targeted certain keywords in a particular order. By finding these keywords in the order in which they were most likely to be searched for by the bot, they could then determine whether or not it was actually the bot. In a paper published in January 2017, the researchers explained how they used the algorithms they had developed to identify this ordering of bot content. For each of these domains, the algorithm found a set of "keywords that were commonly searched" in that domain. These keywords were then used to determine whether the bot was the source, or whether the bot had simply moved to another domain and re-used the same keywords. They found that the algorithm could correctly identify approximately 30 percent of these bot accounts in a given domain, or around 4,500 bot accounts across all of Google's search results. The problem was that they couldn't identify the bot account's identity, or the bot itself. For this reason, Watson researchers used a machine learning algorithm called "sigmoid regression" to try to identify which keywords had the highest chance of being associated with a bot attack. This was the approach that had been used to find the bot accounts responsible for the attacks against Google in 2017. To make this work, Watson needed to perform some advanced mathematics on the data and its patterns, and then use a variety of algorithms to classify the data into groups based on the results of the analysis. One of the techniques used in this research was the sigmoid regression algorithm, which is based on a mathematical model that attempts to predict the probability of a specific pattern of values appearing in a large set of data. The algorithm takes into account several factors, including how the data is organized and how the values appear in the data. For instance, when the data sets contain many different types of data, the probability of each data point appearing in one of the groups increases with the number of groups. Similarly, the number and frequency of the values that appear increase with each group. However, in the case of bot accounts, the data in a set is more likely to appear together, and so the probability that one of those values will appear in a group is higher than for another group.
In this way, the sigmoid algorithm is able to predict how likely a particular value is to appear in a specific group. The key takeaway here is that the sigmoid algorithm works better when the bot activity is more evenly distributed throughout the data, and when the number is less than 1 percent of the total. Watson also used a number of different algorithms to identify bot accounts based on how they were organized. For the analysis that was conducted in 2017, Watson analyzed the text of over 10,000 words that were stored in Google Search's API. In order to do this, Watson used a text search engine to query the API for these words, and analyzed them using different algorithms. The first algorithm they used was called "Gemma," which uses a deep neural network to learn a model of how words in text are grouped together. This model then predicts how the text will be organized in a text corpus. Another method they used to make their predictions was called the "Reverse Recursive Model," which is a form of recurrent neural network that can be used for classification, to determine how likely different types are to appear as distinct groups of values. The last algorithm that was used was known as "Larsen," which attempts to detect how often different types appeared in different text areas. These three algorithms are what Watson used to predict whether a particular bot account was the culprit behind the recent DDoS attack against Google. The most important takeaway here for the researchers was that, while the number-one strategy for detecting bots in large-scale data mining is to identify specific keywords, the second and third strategies they used are far more general and able to scale to large data sets.
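The article describes "sigmoid regression" only loosely. As a generic, hedged sketch (toy data and hypothetical keyword features, not anything from the Watson work), logistic regression applies the sigmoid function to a weighted sum of keyword frequencies to score how bot-like an account looks:

import numpy as np

# Generic sketch of sigmoid (logistic) classification over keyword-frequency
# features. Toy data and feature columns are hypothetical, not from the article.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # predicted bot probability
        grad_w = X.T @ (p - y) / len(y)   # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Columns: frequency of three hypothetical "attack" keywords per account.
X = np.array([[9, 7, 8], [8, 9, 6], [1, 0, 2], [0, 1, 1], [7, 8, 9], [2, 1, 0]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)   # 1 = known bot, 0 = human

w, b = train_logistic(X, y)
new_account = np.array([8.0, 6.0, 7.0])
print(f"bot probability: {sigmoid(new_account @ w + b):.2f}")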
https://ankara-escortt.com/2021/09/29/the-ai-that-can-help-you-spot-bot-attacks/
USU students interested in film now have an opportunity to watch acclaimed movies through the new USUSA Film Club. The club, which started during the Fall 2018 semester, was created by students Samuel Berry and Holden Regnier after they and their friends began watching movies together weekly. "We were hanging out one night back in October and we started talking about our favorite films," Regnier said. "We kind of veered from movies to films, a bit more serious. We were like, 'We should start a film club.'" While the club now meets in a spacious room in the Fine Arts-Visual building, Danny Boyer, a member of the club, said they started off watching movies in a basement. "There were no seats," he said. "We had six people and the sixth person had to lay on the floor." After the group met for a few weeks, Samuel Berry decided to seek out an official film club on campus. When he couldn't find one, he and Regnier decided to start a club themselves. Berry met with David Wall, a professor of film and visual studies, who agreed to be the club's advisor. Every week, members of the group vote on what movie to watch the following Monday. Rather than watching blockbusters, the club tends to view acclaimed, provocative films. Movies shown this semester have ranged anywhere from the 1979 war film "Apocalypse Now" to the latest Wes Anderson film, "Isle of Dogs." Brody Smith, a member of the club, said, "We try to get out of our comfort zones a little bit with the movies we watch. We try to watch things people usually haven't seen." Smith enjoys the variety of movie genres the club views. Regnier says that he enjoys watching films because of the thought and symbolism directors put into their films. "I think it's a really cool way to express your art," he said. Regnier's favorite movies include "Boyhood," "Call Me by Your Name" and "Lady Bird." Regnier hopes the club can help people who don't know where to start when getting into important films. "I used to find it kind of intimidating to get into the important and acclaimed films," he said. "It doesn't have to be a hard thing to get into." According to Berry, the club has remained small this year, with a maximum of about 10 people per week, but he hopes to continue to grow the club next year. The club plans on hosting a booth at USU's Day on the Quad event as well as other advertising efforts to increase club participation. Writer: Alek Nelson, Student Reporter, Utah Statesman
Additional Resources:
http://www05.usu.edu/today/index.cfm?id=58409
To lie is to misalign others from the truth. To lie is also to misalign oneself from the truth that there will be karmic repercussions for doing so. Herein someone avoids false speech and abstains from it. He speaks the truth, is devoted to truth, reliable, worthy of confidence, not a deceiver of people. Being at a meeting, or amongst people, or in the midst of his relatives, or in a society, or in the king’s court, and called upon and asked as witness to tell what he knows, he answers, if he knows nothing: “I know nothing,” and if he knows, he answers: “I know”; if he has seen nothing, he answers: “I have seen nothing,” and if he has seen, he answers: “I have seen.” Thus he never knowingly speaks a lie, either for the sake of his own advantage, or for the sake of another person’s advantage, or for the sake of any advantage whatsoever. (AN 10:176) This statement of the Buddha discloses both the negative and the positive sides to the precept. The negative side is abstaining from lying, the positive side speaking the truth. The determinative factor behind the transgression is the intention to deceive. If one speaks something false believing it to be true, there is no breach of the precept as the intention to deceive is absent. Though the deceptive intention is common to all cases of false speech, lies can appear in different guises depending on the motivating root, whether greed, hatred, or delusion. Greed as the chief motive results in the lie aimed at gaining some personal advantage for oneself or for those close to oneself — material wealth, position, respect, or admiration. With hatred as the motive, false speech takes the form of the malicious lie, the lie intended to hurt and damage others. When delusion is the principal motive, the result is a less pernicious type of falsehood: the irrational lie, the compulsive lie, the interesting exaggeration, lying for the sake of a joke. The Buddha’s stricture against lying rests upon several reasons. For one thing, lying is disruptive to social cohesion. People can live together in society only in an atmosphere of mutual trust, where they have reason to believe that others will speak the truth; by destroying the grounds for trust and inducing mass suspicion, widespread lying becomes the harbinger signaling the fall from social solidarity to chaos. But lying has other consequences of a deeply personal nature at least equally disastrous. By their very nature lies tend to proliferate. Lying once and finding our word suspect, we feel compelled to lie again to defend our credibility, to paint a consistent picture of events. So the process repeats itself: the lies stretch, multiply, and connect until they lock us into a cage of falsehoods from which it is difficult to escape. The lie is thus a miniature paradigm for the whole process of subjective illusion. In each case the self-assured creator, sucked in by his own deceptions, eventually winds up their victim. The Noble Eightfold Path Bhikkhu Bodhi Related Course:
https://thedailyenlightenment.com/2012/02/importance-of-truthfulness-as-right-speech/
As the Olympic Games continue to captivate the nation the debate is continuing on how to maximise the legacy of the Games once the Olympics are over. Today the Prime Minister has been speaking about the need for "a big cultural change" towards sport in schools in Britain. Commenting on the Prime Minister's statement on school sport, John Steele, Chief Executive of the Youth Sport Trust, said "I am delighted that the Prime Minister has recognised that the legacy from the Olympic and Paralympic Games should focus on school sport. "There is some fantastic work going on in schools to deliver sport from some very dedicated staff. What many of these passionate people lack is simply the time and resource to deliver PE and sport as they know it can be. This is what they crave as they know that much more can be done in schools to improve the delivery of sport. "Where the Youth Sport Trust feel a difference can be made, is upping the scale of this commitment so that a potential of a national legacy from the Games can be delivered. This will realise the promise to inspire a generation and create an active generation."
https://www.youthsporttrust.org/news/london-2012-legacy-debate-continues
A listing of Alzheimer's Disease medical research trials actively recruiting patient volunteers. Search for closest city to find more detailed information on a research study in your area. Found (13) clinical trials Impact of Anticoagulation Therapy on the Cognitive Decline and Dementia in Patients With Non-Valvular Atrial Fibrillation Patients will be screened at Intermountain Medical Center and at Intermountain-affiliated anticoagulation clinics in the Salt Lake City region. Patients with non-valvular atrial fibrillation will be considered for study. After written informed consent is obtained, subjects who meet eligibility criteria will be randomized 1:1 to 2 treatment arms: Group 1: ... Biomarker Predictors of Memantine Sensitivity in Patients With Alzheimer's Disease The effects of the medication, memantine, on brain functions and the symptoms of Alzheimer's Disease will be tested Prazosin and CSF Biomarkers in mTBI A majority of neurodegenerative dementing disorders, including Alzheimer's disease, (AD), dementia with Lewy bodies (DLB) and chronic traumatic encephalopathy (CTE), now appear to be caused by the accumulation and aggregation of proteins that cause progressive damage to the brain. Recent preclinical results suggest that clearance of such neurotoxic proteins from ...
https://www.centerwatch.com/clinical-trials/listings/condition/11/alzheimers-disease/?phase=4&page=2
Relational development 2.0. Conceptually, interpersonal scholars must negotiate whether new phenomena such as Tinder merit distinctions in relational processes, particularly since website and mobile application research has commonly borrowed from traditional dating research. The relationship development model, developed from face-to-face interactions, commonly involves five steps, beginning with initiating, the step where relational partners begin communication and make first impressions. This research adds pre-interaction processes, which include information seeking as central to people's everyday lives and motivations in relationship development, currently absent from traditional models. Consequently, the pre-interaction step identified in this research should be treated as the new first step in the escalation model, where emerging technologies, websites, and mobile apps are used to initiate relationships. This first pre-interaction process involves explicit and conscious selection criteria upon entering the application or website. These criteria immediately eliminate prospective partners without any interaction, through self-generated fixed constructions (e.g., age, sex, sexual orientation, proximity, etc.). After categorical choices are self-determined, users engage in the second step, where they craft individual identification in visual presentations and textual descriptions. While these descriptions aim to draw a particular audience, they blur lines between interpersonal and mass communication, since creating fixed yet optimal mediated representations of oneself requires self-reflection, awareness, and expertise. Pre-interaction encompasses (1) determining partner categorical options and (2) creating a mediated rendering of an offline reality, prior to any interaction and first-impression exchange. Tinder's swipe logic implies that instrumental habits discursively developed through this gesture bind users' decision-making to a binary yes or no (David & Cambre, 2016), whereby the information presented and examined was all generated prior to the interaction. On Tinder, users must navigate others' self-generated information to match, at which point they attempt an initiation to create a mediated intimacy that may be expedited offline (David & Cambre, 2016). Each party must show mutual interest before either party can begin discourse; equality exists through shared interest.
Table 3. Reasons individuals do not include bios in their Tinder profiles.
In traditional face-to-face models, the interaction commonly begins face-to-face with nonverbal communication. Nevertheless, Tinder yields novel pre-interaction mechanisms that position possible offline meeting initiation through photographs and bios. Premeditated actions individuals undertake prior to potential matches are strategic. Pre-interaction processes are driven by the app's interface and constrict the organic communication of face-to-face interaction. These strategic processes intentionally force individuals to select their preferences (age range, sex, and sexual orientation).
Tinder provides the space (setting, scene, and stage) for people to create representations that promote who they are (in their mind's eye) and who they want their prospective partners to be, based on appearance and interests. These representations are typically enacted through face-to-face interaction, but the preplanned process eliminates communicative spontaneity. The pre-initiation processes afforded through Tinder suggest that individuals employing mobile apps must (1) know, select, and narrow potential mate qualifications (i.e., choosing dating parameters); (2) create an individualized online impression through photos and bio, by knowing how to present themselves as a viable partner; and (3) filter through others' interpretations of themselves portrayed through photographs and written descriptions when determining potential partners' worth. The premeditated pre-interaction processes prove to be static, scripted intrapersonal tasks designed ideally to produce interpersonal communication and prospective relationships. As mobile apps become a supplementary and prominent dating venue, people must evaluate how to assess prepared representations and their impact on prospective interpersonal relationships. Upon reviewing user demographics and preferences, this app is limited to specific populations and has nominal representation of other populations (e.g., minority, rural, and same-sex individuals). People may be self-selecting into particular apps to find their desired mate. Until its latest update, Tinder (2016) did not require education or work information, which offers the opportunity for traversing and enriching status boundaries; however, as Tinder constantly updates its interfaces, future modifications may restrict or expand to transgender, economic status, class, race, and ethnic diversification. Future research should examine how self-selecting into particular apps constrains or expands potential partner choices.
Relationship initiation swiping strategies (RQ3)
When utilizing see-and-swipe features, participants indicated they were split on swiping left (M = 3.06, SD = 1.04) and right (M = 2.63, SD = 0.92). When swiping through ten individuals, participants indicated they would likely swipe right on 3.75 (SD = 2.78) prospective partners. Common connections, or connections through their social networks, were only occasionally used (M = 2.42, SD = 1.1). Participants rarely used super likes (M = 1.41, SD = 0.80). Participants indicated they matched a little less than half the time (M = 2.45, SD = 0.86) and initiated communication about half the time (self-initiated (46.8%) and other-initiated (53.2%)). When swiping right (first percentage) or left (second percentage), users (n = 365/364) identified three top themes: attraction (33.4%, 29.9%), selective swiper (21.4%, 28%), and interesting (15%, 16.8%). These themes were identical for both swipes, and many other themes overlapped, although they differed in order and frequency. Attraction relied on images and bios. As participants indicated, "Their face either took my breath away or they were somewhat attractive with great things in their bio"; otherwise, potential partners were disregarded if regarded as "real fatties or uggos." Traditional face-to-face and online dating have distinct differences, such as gating features that help users decide whether to approach or avoid potential partners; nonetheless, physical attractiveness is often the first and most important factor in the selection process (McKenna, 2008). These gating features limit access beyond an initial profile; nevertheless, a great number of processes happen prior to relationship initiation. Although online dating sites and mobile dating apps afford relationship opportunities, numerous users and scholars are critical of selection processes and relational success. Finkel and colleagues (2012) rendered online dating as a device that objectifies prospective partners, does not holistically evaluate potential partners, and undermines the ability to commit. Nonetheless, despite the skeptics, people continue to use virtual proximities to expand their potential meeting and dating venues via emergent technologies. Online dating and mobile apps facilitate relationship initiation by increasing potential dating and mating access, expanding available information (e.g., appearance, career, interests, other preferences, etc.), and delaying initial face-to-face interaction (Bredow, Cate, & Huston, 2008). Virtual proximity provides access to prospective partners beyond physical constraints, widening the field and increasing accessibility (Regan, 2017), even if attraction is king. After individuals create their premeditated self-idealizations, they pursue other idealizations much like in face-to-face relationship initiation. Participants articulated that minimal visible information (attraction) determined whether they swiped left or right. After attraction, users become selective; cardholders play the game with the interactive card deck of faces, discarding and keeping cards (i.e., potential partners) according to their needs and wants, in hopes of mutual matches and getting lucky. The next most common theme, selective swipers, meant that they had particular requirements or criteria, and if these were not present they quickly dismissed potential partners. Interesting meant the bio and/or profile sparked curiosity (in swiping right) or an unappealing element caused an adverse reaction (e.g., drugs, fitness, or no bio). Those not discarded based on attraction often received further scrutiny. When swiping right, numerous users used a shotgun approach (12.1%) where they swiped right on all potential partners and filtered out options after getting matches. As one participant noted, "I get more matches and then sift through them"; the ability to see who is interested was appealing. Those cardholders who go for broke frequently apply a shotgun approach, casting a wide net. Overall, individuals with an interest in men versus women had more similarities than differences in swiping strategies (for more information see Tables 4 and 5, along with notable differences between those interested in men and women).
Table 4. Reasons for swiping right.
Table 5. Reasons for swiping left.
When both partners swiped right, or matched, participants varied in response time: 5.3% immediately, 23.9% within minutes, 39.3% within hours, 22.8% within days, 4.8% within a week, and 3.9% never respond.
Many participants ventured to meet their matches: 76.9% met matches, while 23.1% never did. On average, participants reported having 4.58 offline meetings (SD = 6.78). Numerous participants (37%) indicated that upon meeting their Tinder-initiated date it resulted in an exclusive relationship. Traditional models do not fully account for modality switching, and there is limited discussion of online pre-interaction mechanisms that position possible offline meetings. Future research should examine individualistic platform processes, both as pre-interaction and strategic information-seeking techniques that set the stage for interpersonal communication, face-to-face expectations, and relationship norms.
https://anthuy.com/relational-development-2-0-conceptually-4/
He wrote more than 150 Clifford titles, and 129 million copies of Norman Bridwell's books are in print. Click the BOOK to access Clifford's Homepage: https://www.scholastic.com/clifford/ https://www.scholastic.com/clifford/games.asp
Springville School Library Rules
1. Follow oral and written directions in the Library and Computer Lab.
2. Show respect to each other and school property.
3. Go to your assigned seat and stay in your own space.
4. Use Library print and online materials responsibly.
5. You need permission to be out of your seat.
6. If you have an emergency situation, inform the Library staff or the Computer Lab staff. A hallway pass to the restroom, school office, classroom, nurse's office, or Guidance Counselor will be issued.
7. No candy or gum is to be consumed during the time in the Library or Computer Lab.
The motto of the Springville Library is: "Lower your voice and raise your mind"
All students' voices should be:
Level 1: very soft whispering while at the book shelves.
Level 2: a soft voice only a partner could hear, or others at your Library table.
https://springville.ops.org/Library/Ms-Langan/Information-Menu
Basic knowledge of history, geography or sociology.
This is a curricular unit based on TP (theoretical-practical) classes. Therefore, the evaluation combines the involvement of students in classes (oral participation and text presentations) with the writing of short essays and answers with consultation of materials. The collection of examples and phenomena drawn from everyday life is strongly encouraged and valued.
The main concern of this curricular unit is to familiarize students with the language, concepts and key issues that define the specificity of sociology as a scientific discourse. It is intended to lead students to distinguish the sociological perspective from common sense and everyday discourse, which requires special attention to the epistemological component, particularly with regard to the need to deconstruct and break with commonplaces and the false self-evidence of the social. A critical stance is adopted throughout the teaching sessions, while confronting some of the main theoretical perspectives of sociology and emphasizing the importance of plurality in the theoretical construction of sociological knowledge. Accordingly, students are encouraged to think sociologically about issues and problems of the present.
1. The sociological perspective: individual, society and social relations: sociological issues; the practical meaning of sociology; the scientific nature of sociological knowledge; sociological research; social structures and human action.
2. Culture and society: socialization processes and the structuring of social relationships: primary and secondary socialization; socialization and the plurality of contexts; the normativity of the "social": social roles, norms, values, institutions; social control, conformity and deviation.
3. Power structures and social inequalities: Marxist and Weberian theories of social inequalities; new axes of class differentiation; inequalities in contemporary societies: classes, sexual differentiation and generational differentiation; capitalist societies as political constellations: the basic modes of production of power; social hierarchies and cultural hierarchies.
Maria Madalena Santos Duarte
Continuous Assessment: Final Exam: 50.0%; Group work, presentations in the classroom: 50.0%
Final Assessment: Exam: 100.0%
BERGER, Peter L.; LUCKMANN, Thomas - A construção social da realidade: um livro sobre sociologia do conhecimento. 2ª ed. Lisboa: Dinalivro, 2004. [BP 165 BER]
BERGER, Peter - Perspetivas sociológicas. Petrópolis: Vozes, 1980. [316 BER]
COSTA, António Firmino - O que é Sociologia. Lisboa: Difusão Cultural, 1992.
GIDDENS, Anthony - "Sociologia: problemas e perspetivas", in A. Giddens, Sociologia. Lisboa: Fundação Calouste Gulbenkian, pp. 19-40, 1997.
LAHIRE, Bernard - "O actor plural", in B. Lahire, O homem plural. As molas da ação. Lisboa: Instituto Piaget, pp. 21-47, 2003.
MILLS, C. Wright - "A promessa", in C. W. Mills, A imaginação sociológica. Rio de Janeiro: Zahar Editores, pp. 9-32, 1980.
RODRIGUES, Carlos Farinha (coord.) - Desigualdades Sociais. Lisboa: Fundação Francisco Manuel dos Santos, 2011.
https://apps.uc.pt/courses/EN/unit/14106/12213/2019-2020?common_core=true&type=ram&id=872
Ecosourcing Native Plants Seed Collection Project Ecosourcing means using plants grown from local seed. This is now accepted good practice for ecological plantings and has several advantages including: - Protecting the genetic diversity within local populations - Protecting the character of local ecosystems from being swamped by imported varieties from other areas - Providing the best chance of planting success by using plants that have adapted to local conditions Ecosourcing is quite difficult in Marlborough, partly because some local species have all but disappeared, and also because it is costly and difficult for plant nurseries to collect seed from dispersed sources. Ideally, seed should be collected as close as possible to the original site and at least within the ecological district area. However, this is not always possible and two broad ecosourcing zones for Marlborough have been agreed to by Council ecologists, Department of Conservation and QEII staff to provide a practical minimum guide to sourcing of seed. Since 2006, Council has undertaken to collect some local seed with the cooperation of private landowners. Species and locations are shown on the map below. The seed has been provided to local nurseries specialising in native plants. Ask for ecosourced plants at local nurseries; demand will help to create supply. Keen landowners can collect their own locally sourced seed for propagating by a local nursery, using sites on their own or neighbours' properties as a source. This, of course, involves having to think ahead, as it will involve a couple of years' delay before planting can occur, but should ensure ecologically worthwhile results and high plant survival.
https://www.marlborough.govt.nz/environment/biodiversity/ecosourcing-native-plants-seed-collection-project
San Mateo County Community College District (SMCCCD) is committed to supporting sustainable practices in all stages of the supply chain. The purchasing power of all three campuses can be strategically leveraged to support the transition to greener products, such as recycled printer paper; recycled can liners, paper towels, and toilet paper; and cleaning products free of toxics. The District strives to streamline practices by working with General Services, Auxiliary Services, and other campus stakeholders. A green purchasing policy will be established in 2018 to provide guidance for specific purchasing practices. Aside from providing long-term environmental benefits, sustainable procurement will support operational efficiency, employee wellbeing, and positive student outcomes. Changes in procurement policies incentivize students and faculty to use the campus as a living laboratory and see the effects of District policies. Sustainable procurement is most effective when paired with an overall conscientious approach toward resource use. Reducing the materials used in the District, while still maintaining high-quality learning environments for students, would allow SMCCCD to set a positive example in higher education.
Progress and Steps Forward
A portion of SMCCCD bathroom and waste disposal supplies are made partially or completely from recycled materials. This includes products such as toilet paper, can liners, and paper towels. Hand roll towels in the restrooms are EcoLogo Certified and Green Seal Certified. The District-wide Green Office Program is one additional way SMCCCD is pursuing sustainable procurement. The Green Office Program will be used to guide responsible resource use within campus offices. For example, the program will be used to centralize printers and computers where possible to reduce exposure to chemicals and minimize energy use. Reduction of electronic use will diminish the need to purchase as many computers and printers, thus minimizing greenhouse gas generation. The Sustainability Team will partner with General Services, campus administrators, and facility staff to assist in the transition to more sustainable products and increased materials sharing within and between offices. Implementing conservation measures, such as systematizing double-sided printing on campus, will reduce the need to buy as many products. Sustainable procurement practices support SMCCCD's value of handling resources responsibly from beginning to end. Ultimately, having more environmentally friendly purchasing policies on a District-wide scale will encourage students to tap into their own capacity to effect positive change.
http://smccd.edu/sustainability/sustainableprocurement.php
The most correct way to charge lithium batteries is to charge in two stages. This method is used by Sony in all its chargers. Despite the more complex charge controller, it provides a more complete charge of li-ion batteries without reducing their service life. Here we are talking about a two-stage charge profile for lithium batteries, abbreviated as CC/CV (constant current, constant voltage). There are also options with pulse and step currents, but they are not considered in this article.
1. In the first stage, a constant charge current should be provided. The magnitude of the current is 0.2-0.5C. For accelerated charging, an increase in current up to 0.5-1.0C is allowed (where C is the battery capacity). For example, for a battery with a capacity of 3000 mAh, the nominal charge current in the first stage is 600-1500 mA, and the accelerated charge current may lie within 1.5-3 A. To ensure a constant charging current of a given value, the charger circuit must be able to raise the voltage at the battery terminals. In fact, in the first stage the charger works like a classic current stabilizer. At the moment the battery voltage rises to 4.2 volts, the battery will have gained approximately 70-80% of its capacity (the specific value depends on the charging current: with accelerated charging it will be slightly less, at nominal current a little more). This moment marks the end of the first stage of the charge and serves as the signal for the transition to the second (and last) stage.
2. The second stage is charging the battery at a constant voltage with a gradually decreasing (falling) current. At this stage, the charger maintains a voltage of 4.15-4.25 volts on the battery and monitors the current value. As the battery gains capacity, the charging current decreases. As soon as its value falls to 0.05-0.01C, the charging process is considered finished. During the second stage, the battery manages to gain about another 10-15% of its capacity. The total battery charge in this way reaches 90-95%, which is an excellent result.
We have considered the two main stages of charging. However, coverage of the issue of charging lithium batteries would be incomplete without mentioning one more charge stage: the so-called precharge. The preliminary charge stage (precharge) is used only for deeply discharged batteries (below 2.5 V) to bring them back to normal operating mode. At this stage, the charge is provided by a constant low current until the battery voltage reaches 2.8 V. The preliminary stage is necessary to prevent swelling and depressurization (or even an explosion with a fire) of damaged batteries, for example those having an internal short circuit between the electrodes. If a large charge current is immediately passed through such a battery, it will inevitably lead to heating, and after that it is a matter of luck. Another advantage of precharge is preheating of the battery, which is important when charging at low ambient temperatures (in an unheated room during the cold season). An intelligent charger should be able to monitor the battery voltage during the preliminary charge stage and, if the voltage does not rise for a long time, conclude that the battery is faulty.
All stages of the lithium-ion battery charge (including the pre-charge stage) are schematically depicted in a graph (not reproduced here).
Exceeding the rated charging voltage by 0.15 V can shorten the battery's service life by half.
Decreasing the charge voltage by 0.1 volts reduces the capacity of a charged battery by about 10%, but significantly extends its service life. The voltage of a fully charged battery after removing it from the charger is 4.1-4.15 volts.
1. What current should I use to charge a li-ion battery (for example, an 18650)? The current depends on how quickly you would like to charge it and can lie in the range from 0.2C to 1C. For example, for an 18650 battery with a capacity of 3400 mAh, the minimum charge current is 680 mA and the maximum charge current is 3400 mA.
2. How long does it take to charge, for example, the same 18650 rechargeable battery? The charge time depends directly on the charge current and is calculated by the formula T = C / I hours. For example, the charge time of our battery with a capacity of 3400 mAh at a current of 1 A will be about 3.5 hours.
3. How do I charge a lithium-polymer battery? All lithium batteries are charged the same way; it does not matter whether the battery is lithium-polymer or lithium-ion. For us as consumers, there is no difference.
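As a minimal sketch of the CC/CV logic and the T = C / I estimate described above (the thresholds follow the article, while the example cell values and the idea of picking a phase from a measured voltage are illustrative assumptions, not a real charger design):

# Minimal, illustrative CC/CV charging sketch. Real chargers measure the cell;
# the values below are examples only.

CAPACITY_MAH = 3400          # example 18650 cell from the article
CHARGE_CURRENT_MA = 1000     # 1 A, roughly 0.3C for this cell
CV_VOLTAGE = 4.2             # constant-voltage setpoint
TERMINATION_C = 0.05         # stop when current falls to 0.05C
PRECHARGE_THRESHOLD = 2.5    # below this, use a small precharge current

def charge_time_hours(capacity_mah, current_ma):
    """Rule-of-thumb estimate from the article: T = C / I."""
    return capacity_mah / current_ma

def select_phase(cell_voltage, charge_current_ma):
    """Pick the charging phase from the measured cell state."""
    if cell_voltage < PRECHARGE_THRESHOLD:
        return "precharge (low constant current until 2.8 V)"
    if cell_voltage < CV_VOLTAGE:
        return "stage 1: constant current (CC)"
    if charge_current_ma > TERMINATION_C * CAPACITY_MAH:
        return "stage 2: constant voltage (CV), current tapering"
    return "charge complete"

print(f"Estimated charge time: ~{charge_time_hours(CAPACITY_MAH, CHARGE_CURRENT_MA):.1f} h")
for v, i in [(2.3, 100), (3.7, 1000), (4.2, 600), (4.2, 150)]:
    print(f"{v:.1f} V, {i} mA -> {select_phase(v, i)}")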
https://simplemetaldetector.com/batteries-and-chargers/charging-li-ion-batteries/
It's about 2,000 times the number of Teeth in a Great White Shark's Mouth. In other words, 99,000 is 2,100 times the count of Teeth in a Great White Shark's Mouth, and the count of Teeth in a Great White Shark's Mouth is 0.00048 times that amount. A Great White Shark has 48 exposed teeth on their upper and lower jaws. Behind these exposed teeth are developing teeth — up to 300 in about 5 rows. The sharks will continuously lose and replace the exposed teeth throughout their lives. It's about 3,500 times the number of Countries in the European Union. In other words, the count of Countries in the European Union is 0.00029 times 99,000. It's about 6,500 times the number of Living Kings and Queens. In other words, the count of Living Kings and Queens is 0.00015 times 99,000. It's about 10,000 times the number of Symphonies Composed by Beethoven. In other words, 99,000 is 11,000 times the count of Symphonies Composed by Beethoven, and the count of Symphonies Composed by Beethoven is 0.00009090909090910 times that amount. It's about one-fifteen-thousandth the number of Users on Facebook. In other words, the count of Users on Facebook is 14,500 times 99,000. It's about 20,000 times the number of Living U.S. Presidents. In other words, the count of Living U.S. Presidents is 0.0000500 times 99,000. It's about 50,000 times the number of Escalators in the State of Wyoming. In other words, the count of Escalators in the State of Wyoming is 0.0000200 times 99,000.
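The quoted multiples are simple division. As a quick illustrative check (only the 48 shark teeth figure comes from the text above; the EU count of 28 and the 9 Beethoven symphonies are assumptions implied by the site's own ratios):

# Quick arithmetic check of the ratios quoted above. Reference counts other
# than the 48 shark teeth are assumptions for illustration.
TOTAL = 99_000
references = {
    "teeth in a great white shark's mouth": 48,
    "countries in the European Union": 28,    # assumed (pre-Brexit EU)
    "symphonies composed by Beethoven": 9,    # assumed
}
for name, count in references.items():
    print(f"{TOTAL:,} is about {TOTAL / count:,.0f} times the number of {name}; "
          f"the reciprocal is {count / TOTAL:.5f}")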
http://bluebulbprojects.com/CountOfThings/results.php?coeff=x&c=99000&sort=pr&p=7
Vida is a 28-year-old midwife from the Akha community who is proud of her role in ensuring women in Lao PDR realize positive maternal health outcomes. Based at Long District hospital in Luang Namtha province, this Akha midwife graduated from the Oudomxay UNFPA-MOH supported midwifery programme in 2017. In Lao PDR, supporting the training and deployment of midwives from ethnic groups is a key component of UNFPA's wide-ranging engagement in midwifery programmes, which recognizes the significant role of midwives in saving lives and changing harmful norms; their role is even more critical due to the impact of COVID-19 on health services. "In the past, Akha women gave birth without assistance and they did not come to the health centre. As most of the pregnancies and births were unattended among Akha women, there was a high risk of complications," said Vida. "But since we have Akha midwives, Akha clients are comfortable to come and seek support." This positive trend is a result of newly-trained ethnic midwives returning to their home villages to support their communities. Their cultural insights, combined with newly-learnt skills adhering to international standards, mean they are trusted and well placed to deliver essential maternal health care to ethnic mothers who traditionally would face health risks, especially during delivery. Such UNFPA- and MOH-backed interventions have played a significant role in reducing maternal mortality as well as ensuring safe pregnancies, childbirth and family planning in Lao PDR during the past decade. Together with district teams, Vida conducts health education outreach in villages surrounding Long District hospital to provide information and mobilize women to visit health facilities. Now, many women come to seek healthcare at such facilities and are less reluctant to seek guidance on birth spacing and contraceptives. "For example, every time I provide health education, I also encourage them [Akha ethnic women] to exclusively breastfeed for at least six months," said Vida, who reported that many ethnic mothers in the past were reluctant to breastfeed as they juggle motherhood with cultivating crops in the field, and didn't know about the health benefits of breastfeeding for the infant and the mother. Raising awareness of the importance of antenatal care to ensure the health and safety of mothers and babies is a key task for Vida and other ethnic midwives. "I encourage women to make at least four antenatal care visits [international standards are 8 ANC visits per pregnancy] and give birth at health facilities to keep mothers and babies safe. The proudest moment for me is to see women giving birth at a health facility assisted by qualified health personnel, so they don't suffer any complications," said Vida. To provide a full spectrum of care, counseling for youth and adolescents is delivered as part of family planning services. Vida encourages women to learn more about long-acting reversible contraceptives. More couples are seeking long-acting methods such as injectables and implants so that women do not need to travel to the health facility often. Midwives like Vida are being gradually deployed all over Lao PDR for community awareness raising. Their role is crucial in applying skills acquired through the midwifery courses adapted to international standards through MOH and UNFPA collaboration.
While development of midwifery capacity has benefited women and children in Lao PDR in the past decade, midwives such as Vida still face challenges in countering harmful cultural practices for childbirth and child care, such as dietary restrictions, roasting (lying on a bed above a fire) for 15 days after giving birth and the belief that giving birth to twins is bad luck. Following training, Vida and other midwives are connected to a midwifery helpline where they can access peer support, to discuss difficult cases and develop response strategies. UNFPA is supporting Lao PDR to realize its commitment to the 25th anniversary of the International Conference on Population and Development (ICPD25) to have at least one midwife per health facility. The support includes training, curricula development and equipping health facilities. Thanks to the Maternal Health Trust Fund, KOFIH and Luxembourg, the midwifery program is accelerating the role of midwives in saving lives and changing harmful norms and practices. According to the State of the World Midwifery Report 2021, investing in midwives is cost-effective as a fully educated and trained midwife can provide about 90 percent of essential reproductive maternal neonatal child adolescent health care. *************** UNFPA, the UN's sexual and reproductive health agency, works in over 150 countries including Lao PDR, to achieve zero maternal deaths, zero unmet need for family planning and zero gender-based violence. For more information please contact:
https://lao.unfpa.org/en/news/investing-ethnic-midwives-protect-maternal-health-special-cultural-contexts-lao-pdr
By Pierre de Boisséson, Economist, OECD Development Centre, and Alejandra Meneses, Policy Analyst, OECD Development Centre
Human development relies on three fundamental building blocks: health, education and income. A recent report from the OECD Development Centre shows that in Southeast Asia, women's human development remains severely constrained by discriminatory social institutions, in other words, formal and informal laws, practices and social norms. These socially and culturally embedded norms, attitudes and behaviour limit women's ability to control and make decisions on their own health, education and access to labour opportunities. Dewi's story is especially telling.
Dewi's teen pregnancy: putting her health at risk and her life on hold
Dewi is 16. She lives with her family and spends most of her time helping her mother with household chores, visiting her friends and doing her homework. Dewi does not know it yet but her life is about to change. She finds out she is pregnant. She never had proper access to sexual and reproductive health education and services, and now her parents and community want to marry her to the father of the child.
Dewi's story is all too familiar. In 2017, the adolescent birth rate was high in Southeast Asia, with an average of 43 births per 1,000 women aged 15 to 19 years. Adolescent pregnancy rates are closely correlated with the prevalence of girl child marriage, revealing the extent to which social norms can significantly impact women's health. Globally, almost 9 out of 10 adolescent births occur within the context of child marriage. These early pregnancies increase the likelihood of maternal mortality. In 2017, half of Southeast Asian countries had maternal mortality rates higher than 100 deaths per 100,000 live births. The negative consequences of adolescent pregnancy not only affect young mothers' wellbeing but also put the new-born babies' health at risk as they may suffer from inadequate physical development. Beyond health, adolescent pregnancies can hold girls back from accessing education and employment opportunities. Girls and women who are fully informed about their health and reproductive choices are more likely to stay in school longer, pursue a profession and seize economic and productive opportunities, all of which enhance their agency.
Dewi's career path: a forced choice
Dewi is 19. With the support of her family, she was able to overcome the health complications related to her pregnancy and graduated from high school. Despite all the challenges, Dewi wants to pursue a career in the technological sector but her parents and husband tell her that women are not good at math; she should study history or literature instead. Dewi's aunt thinks that all of this is just a waste of time and money and that a woman's place is at home, raising her children. But Dewi is strikingly perseverant and thanks to a scholarship targeted at "young moms", she enrols in an administrative assistant programme.
Dewi is not alone in having to take a forced career path. While Southeast Asia has reached gender parity in enrolment in primary and secondary education, girls and women are lagging behind in terms of enrolment rates in the Science, Technology, Engineering and Mathematics (STEM) fields.
For instance, in six [1] out of the nine Southeast Asian countries for which data are available, the gender gap in STEM enrolment is larger than ten percentage points. As we have seen in the case of Dewi, discriminatory social norms and attitudes play an important role in shaping educational choices. On one hand, social norms, stereotypes and unconscious biases lead people to perceive STEM fields as masculine and play a critical role in dictating the types of programmes that women enrol in compared to men. On the other hand, from as early as primary or secondary education, learning materials perpetuate gender stereotypes by assigning certain functions and skills to girls and boys. The lack of female teachers in STEM as the level of education increases, combined with the low labour force participation of women in STEM fields, results in fewer female role models in STEM. This plays a role in shaping young girls' expectations and further reduces girls' engagement in these fields.
Can Dewi make an income of her own?
Dewi is 30. She has three beautiful children who are growing up healthy and strong. From morning to evening, Dewi runs around the household, cleaning the bathroom, washing the laundry, picking up children at school, preparing meals for when her husband comes home after a long day of hard work. Dewi sometimes thinks she would have liked to work in an office and have an income of her own. She even got job offers after graduating from the administrative assistant programme. A few years ago, she had a great idea of creating a meal delivery service for local restaurants; but she did not have time to pursue it – who would have taken care of the household and children?
Southeast Asia is home to large imbalances in the labour market. In 2020, women's average labour force participation rate was 23 percentage points lower than for men. At the same time, women in Southeast Asia continue to assume the bulk of unpaid care and domestic work. In 2018, women in the region spent, on average, 3.8 times more than men did on unpaid care and domestic work. Not only does the impact on women's income limit their ability to invest, thrive as entrepreneurs or access credit, it also constrains some of their critical life choices. It affects women's ability to make meaningful and strategic decisions of their own, which further affects other dimensions of their empowerment and human development, such as investing in their health or education. The OECD has long documented the negative effect of discriminatory social institutions on the gender gap in labour force participation. In Southeast Asia, a significant part of the population opposes women and mothers' paid work: 45% of the population declare that children will suffer when a mother works for pay outside the home, while 22% of the population believes that it is not acceptable for a woman to have a paid job outside the home even if she wants one. Meanwhile, restrictive masculinities entail binary gender roles, including expectations that men should provide financially for their families and that women should take care of the home and family members through unpaid care and domestic work.
Social norms that create the expectation that men have to be the main breadwinner undermine women's access to work, promotions and equal remuneration for work of equal value. These forces, in turn, lead to very few women having the necessary resources to purchase assets or earn an income of their own, which further reinforces men's economic dominance.
How can Dewi take back control?
In the wake of the COVID-19 pandemic, women in Southeast Asia are facing additional challenges: violence against women is on the rise, their share of unpaid care and domestic work is increasing, access to maternal and reproductive health has been disrupted and the sectors in which women are primarily employed face the worst of the economic crisis. The consequences for women's human development are dramatic. As governments and policymakers from Southeast Asia ready themselves for the recovery phase and pledge to build back better, now is the time to seriously challenge the discriminatory social norms and practices that continue to hold women back. They owe it not only to Dewi (and, together with her, half of their population) but also to their societies as a whole, as the windfalls will benefit all.
[1] Indonesia, Lao PDR, Malaysia, the Philippines, Thailand and Viet Nam.
[2] The average is calculated for the seven countries for which data are available: Cambodia, Lao PDR, Malaysia, the Philippines, Thailand, Timor-Leste and Viet Nam.
https://oecd-development-matters.org/2021/03/30/dewis-story-discriminatory-social-institutions-hold-women-back-in-southeast-asia/?shared=email&msg=fail
A simple general relativity theory for objects moving in gravitational fields is developed based on studying the behavior of an atom in a gravitational field and maintaining the principle of relativity. The theory complies with all the known effects of gravity, such as gravitational time dilation and faster light speeds higher in the gravitational field. The field equations are applied to calculate the satellite time dilation in any orbit, the light deflection by the sun, and the anomalous advance of Mercury's perihelion. In all these calculations, the results matched observations with an error of less than 1%. The approach to the new theory introduced here is different from the geometric approach used by the general relativity theory. The theory is field based, where the potential energy of a system of masses can be easily calculated and the force can be found as the gradient of the potential field, in analogy to Newtonian mechanics. The resulting field equations become the traditional Newtonian equations when gravitational effects are weak. The special relativity theory of an object moving without experiencing gravitational fields can be derived directly from the gravitational field equations introduced here. The theory introduced here has several differences from the general relativity theory. For example, the event horizon of a black hole (where light cannot escape) has to be of zero radius, essentially meaning that light can escape any object unless the object has infinite density. Another primary consequence of this study is that the principle of equivalence of gravitational and inertial mass has limited validity, and a new definition of gravitational mass is given here. Besides its extreme simplicity as compared to general relativity, the new theory removes all the known infinities and puzzles that can result from the general relativity theory and Newtonian mechanics. In addition, the new theory is in full compliance with quantum mechanical concepts, and it is shown that the very essence of quantum mechanics is gravitational in nature and that electrons are gravitational black holes. Finally, a striking relation between the potential energy stored in the universe and the total mass energy of the universe flows naturally from the field equations introduced here, which explains an observation that Feynman referred to as the great mystery.
http://db.naturalphilosophy.org/member/?memberid=898&subpage=abstracts
Representative Duties: Providing Patient Triage: the safe, effective and appropriate disposition of health-related problems - Administers comprehensive evaluation of patient telephone callers and walk-in patients, develops care plan, consults with other disciplines as needed, and implements plan including referrals - Advocates for patient care as nurse member of multidisciplinary medical team - Care plan is communicated to patients in a timely manner - Documents triage encounters in medical record, documenting all appropriate activities and plan Case Management - Delivers personalized services to our patients to improve their care using careful assessment, planning, coordination and delivery of care, and outcome evaluation of the plan - Coordinates follow-up care for acutely ill patients including primary care and specialty visits, VNA, infusion companies, and prior authorization for services - Coordinates preventative services for defined population including education, vaccinations, monitoring and arranging for lab tests and referrals - Evaluates adherence to care plan and intervenes to promote patient's well-being - Documents services in medical record, documenting all appropriate activities and plan Providing direct patient care, using nursing procedures as appropriate for a Registered Nurse in an outpatient clinic - Includes both acute and chronic care as appropriate to an ambulatory care setting, especially HIV/AIDS (i.e., opportunistic infections, medication side effects) and behavioral issues (i.e., medication compliance, safe sex), as well as other medical conditions as appropriate for nursing case management - Assists with patient care under the direction of appropriate medical provider and/or under nursing scope of practice - Provides nursing support for LPNs and MAs - Places and maintains IV lines - Administers medications and vaccinations (PO, IM, SC, PR, ID, INH, IV) per medical provider order - Checks supplies for nursing patient encounters and notifies manager or supply ordering designee if additional stock is needed - Administers all medical tests (e.g. EKG, spirometry, TB-ppd) and procedures as per protocol or medical provider request - Works with MA and pharmacy to call in prescription refills or obtain prior authorizations - Assists with preventative medicine efforts (e.g., flu, STD, vital sign checks, occupational health) Maintains comprehensive knowledge of medical issues - Maintains comprehensive knowledge of HIV/AIDS medical issues (i.e. opportunistic infections, medication side effects) and behavioral issues (i.e.
medication compliance, safe sex) as well as other medical conditions appropriate to an outpatient population - Seeks out continuing education opportunities related to essential job functions - Completes continuing education requirements per the Massachusetts Board of Nursing regulations - Obtains ACRN or other certifications related to essential job functions within 2 years of hire - Understands and maintains universal precautions in all clinical activities Provides health education - Provides community health education - Provides patient education related to their medical and psychosocial needs - Maintains comprehensive knowledge of vaccines, medications, and other therapeutic interventions as appropriate to our outpatient clinic - Maintains comprehensive knowledge of FH policies, procedures and services including those of TFI - Maintains knowledge of community resources - Assists in training other employees in the medical department including orientation of new staff, volunteers, and students Effectively communicates with patients, co-workers on medical team, in medical department and across departments - Demonstrates competency in EMR, Microsoft Outlook, CareWeb and/or Care360 - Attends and participates in regularly scheduled general staff meetings as well as Nursing meetings (e.g. MDSM) - Is available to meet with team members during assigned work hours - Understands and abides by HIPAA regulations - Demonstrates a commitment to strong customer service - Ability to work as a team member Meets Agency Participatory Expectations Performs other related duties as required Requirements: - ASN/BSN/MSN and current licensure as Registered Nurse (RN) in the Commonwealth of Massachusetts - Familiarity with the LGBTQ community and people living with HIV/AIDS (e.g., ANAC certification) as well as a commitment to community health - Experience with electronic medical records strongly preferred - Current CPR Certification - BLS certification required - IV insertion/phlebotomy skills preferred - Minimum 2 years experience in an ambulatory care setting preferred - Requires being able to work a 7.5 hour day, the majority of it sitting (>70%) We offer competitive salaries, and for those who qualify, an excellent benefits package; including comprehensive medical and dental insurance plans, and a retirement plan with employer match. We also provide 11 paid holidays, paid vacation, and more. LGBTQ-identified persons, people of color, and others from historically underrepresented communities are encouraged to apply.
https://careerservices.upenn.edu/jobs/fenway-health-nurse-case-manager-rn-2/
Herbal Eze 90 Caps by Nutri-Dyn
Inflammation is often one of the leading causes of pain in the body. But most anti-inflammatory medications have adverse side effects. Herbal Eze, formerly Pain-Eze, by Nutri-Dyn is a dietary supplement designed to provide a safe and effective all-natural herbal remedy to reduce inflammatory response in the body. This anti-inflammatory supplement is not habit-forming and features an all-natural blend of herbal ingredients proven effective at calming inflammation. These include ginger root extract, Boswellia gum extract, turmeric root extract, and black pepper fruit extract. Minor aches and pains don't have to result in discomfort or a reduced schedule of activities when an individual takes Herbal Eze by Nutri-Dyn to receive the following benefits: - Aids in reducing inflammation using all-natural, herbal ingredients. - May help to reduce the pain and discomfort caused by inflammation due to stress, over-exertion or injury.
https://blueskyvitamin.com/products/herbal-eze-nutri-dyn
Humans and computers have serious communication problems: while machines can only understand specific programming languages, or at least need structured data to process, human beings speak and understand "natural" language, with all the imprecision and ambiguity that this entails. One of the fundamental aims of natural language processing (NLP) is therefore to simplify communication between humans on the one hand and machines on the other. Natural language processing, part of the wider field of artificial intelligence, provides technologies which enable computers to understand, interpret and generate unstructured human language.

The origins of natural language processing date back to the 1940s. After many decades of slow progress, today natural language processing is a highly dynamic field, thanks in large part to more powerful hardware and innovations such as machine learning. Though advances in the field are certainly far from over, a whole range of applications based on natural language processing, from the specialised to the everyday, are already in use.

No matter what sort of linguistic content it processes, a computer must be able to distinguish the individual components of that content and recognise their meaning before it can understand the whole. Therefore, the theoretical tools for natural language processing are drawn from the field of linguistics, particularly computational linguistics. The best way to understand how an NLP system works is to take a closer look at the individual phases of language and language processing. Depending on whether the system works with written or spoken language, one of the following aspects will be central.

An NLP system that works with spoken inputs records and analyses sound waves, encodes them into a digital signal and then interprets the data using various rules or by comparing it with an underlying language model. The theoretical foundations of speech recognition come from the linguistic disciplines of phonology and phonetics.

Regardless of whether an input is received as an audio file or as written text, a natural language processing system must parse the input into its individual components before it can discern the meaning of an utterance. At the sentence and phrase level – the syntactical level – natural language processing determines the grammatical structure of an utterance. Below the syntactical level, morphological processes identify individual words and their constituent parts. The goal here is to understand the meaning of each individual term at the lexical level and so create the conditions for understanding the utterance as a whole. The combination of information about the structure of a sentence and the meaning of its individual elements provides clues about the sentence's meaning. Finally, placing the individual elements into context and so ideally understanding multiple elements of a coherent utterance correctly is a matter for semantics. A natural language processing system may use various procedures falling within the domain of semantics. These include entity extraction (also called named entity recognition), sentiment analysis and disambiguation.

Because natural language processing is so multi-faceted, it has become common practice to categorise narrowly focussed applications into one of two recognised fields. Natural language understanding (NLU) and natural language generation (NLG) are both regarded as subdisciplines of natural language processing.
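To make these processing levels concrete, here is a minimal illustrative sketch (not part of the original article) using the open-source spaCy library; it assumes the small English model "en_core_web_sm" has been installed separately.

```python
# Illustrative sketch only: inspecting the lexical/morphological, syntactic
# and semantic levels described above with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Berlin in June.")

# Lexical/morphological level: tokens, their lemmas and parts of speech
for token in doc:
    print(token.text, token.lemma_, token.pos_)

# Syntactic level: each token's grammatical relation to its head
for token in doc:
    print(token.text, "->", token.dep_, "->", token.head.text)

# Semantic level: entity extraction (named entity recognition)
for ent in doc.ents:
    print(ent.text, ent.label_)
```

In this toy example, "Apple" would typically be tagged as an organisation and "Berlin" as a location, illustrating the semantic procedures mentioned above.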
Natural language understanding focusses primarily on enabling machines to understand written texts or the spoken word. An application that analyses a news item on a website and uses entity extraction to identify elements such as people, places, and events "only" uses natural language understanding. But if it responds to the content it has identified, as a chatbot does for instance, it is classed as a broader NLP application. Natural language generation, by contrast, refers to the production of text using an algorithm. To do this, an application needs structured data, as can be found in stock market information, sports results and weather data. Automatic text generation is then used to create any amount of content in real time. Because natural language generation turns data into language, it too is considered to be a sub-field of natural language processing. Natural language processing can be colloquially defined as "computers doing things with language". Information scientist Elizabeth D. Liddy provides a more formal, scientific definition.
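As a counterpart to the understanding example above, the following toy sketch (again illustrative, not from the article; the data fields are invented) shows the template-based flavour of natural language generation, turning structured weather data into a short text.

```python
# Illustrative sketch only: toy template-based natural language generation
# from structured data, in the spirit of the weather/sports/stock-market
# examples mentioned above. The field names are invented for this example.
weather = {"city": "Berlin", "condition": "partly cloudy", "high": 24, "low": 15}

def generate_report(data: dict) -> str:
    # Fill a fixed sentence template with the structured values.
    return (
        f"In {data['city']}, expect {data['condition']} skies with a high of "
        f"{data['high']} degrees and a low of {data['low']} degrees."
    )

print(generate_report(weather))
```

Production NLG systems go well beyond fixed templates, but the principle of mapping structured fields onto fluent text is the same.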
https://www.retresco.com/encyclopedia-article/what-is-natural-language-processing
By MATT VOLZ, Associated Press

Six gay couples who are suing Montana for the benefits that married couples receive asked the state Supreme Court on Friday to rule that denying them those benefits is an unconstitutional violation of their equal protection rights. The couples' attorney, James Goetz, said his clients are not asking for the right to marry, but that they are entitled to make the same decisions about their families' health care and finances as married couples under the Montana Constitution. The state's refusal to expressly provide those rights is discriminatory, he said.

The couples are appealing a Helena judge's dismissal of their case last year after state prosecutors argued that spousal benefits are limited by definition to married couples. A voter-approved amendment in 2004 defined marriage as between a man and a woman.

The Legislature can create a separate class for couples regardless of sexual orientation, assistant attorney general Mike Black told the seven justices. But lawmakers do not have a constitutional mandate to do so, he argued, and the couples' demands are overly sweeping and do not cite the specific laws that would have to be changed.

Oral arguments were held before hundreds of people in a packed theater at the University of Montana. The justices questioned both sides intently, but did not make an immediate ruling.

One of the couples, Kellie Gibson and Denise Boettcher, said they suspect this may just be one more step in a lengthy legal journey that began nearly two years ago, and which may end up in a federal appeals court. But, they said, they want to ensure that the next generation is protected. "It's hope for the future," Gibson said. "Montana requires equal protection. We're just citizens of Montana."

The couples appealed after District Judge Jeffrey Sherlock dismissed the lawsuit filed in July 2010. Sherlock based his ruling in part on the state's marriage amendment and also said that an order to force state lawmakers to write new laws would violate the separation of powers. Goetz told the justices on Friday that Sherlock focused on only one of several options the couples requested, and ignored the constitutional questions raised. Goetz said the court does not need to order the Legislature to do anything but can make a simple declaratory statement that failing to provide legal benefits to same-sex couples is a violation of their equal-rights protections.

Several justices questioned whether extending spousal benefits to others would gut Montana's marriage amendment and leave it without meaning. Goetz responded that the amendment would still be significant because the couples still would not be able to marry. Justice Patricia Cotter asked Goetz what would happen if the court made such a declaration and the Legislature did not act to change the laws. "We should not presume the Legislature will not do its duty," Goetz said.

Black argued that the couples' lawsuit does not name any specific laws that may be discriminatory and that the lawsuit would be helped if statutes were named, since different state laws would require different levels of review. "The scope of the relief being asked here is unprecedented," Black said. His argument drew questions from justices who asked whether Black believed each individual law should be litigated. It drew a sharp response from Justice James Nelson, who asked why the burden should be on the couples if they are entitled to equal protection under the Constitution.
Black said he believed a broad ruling such as the one requested by the couples would not reduce the number of lawsuits filed; it would increase it. Among the rights the lawsuit is seeking:
- Inheritance rights, and the ability to make burial decisions and receive workers' compensation death benefits.
- The right to file joint tax returns, claim spousal tax exemptions or take property tax benefits.
- The right to make health care decisions for a spouse when that person cannot.
- Legal protection in cases of separation and divorce, including children's custody and support.
http://www.outsmartmagazine.com/2012/04/same-sex-benefits-case-goes-to-state-supreme-court/
Most commonly, it is caused by the immune system attacking the glands as if they were harmful bacteria or viruses. However, it can be caused in other ways. The adrenal glands, which form part of the endocrine system, are situated just above each kidney. They produce hormones that affect every organ and tissue in our bodies. The adrenal glands consist of 2 layers, the medulla (interior) and cortex (outer layer). The medulla produces adrenaline-like hormones, while the cortex secretes corticosteroids. Here are some key points about Addison's disease. More detail and supporting information is in the main article. - Addison's disease is caused by disruptions to the adrenal glands, preventing normal secretions of corticosteroids. - Disruptions may be caused by immune system response, genetic defects, or other conditions, including cancer. - The most common cause is an immune system response. Adrenal gland disruption The adrenal glands are located on top of the kidneys. They produce hormones, but when this process is disrupted, it can cause Addison's disease. Disruptions to the hormone production of the adrenal glands cause Addison's disease. This disruption can be caused by a number of factors, including an autoimmune disorder, tuberculosis, or a genetic defect. However, approximately 80 percent of cases of Addison's disease in industrialized nations are caused by autoimmune conditions. The adrenal glands stop producing enough steroid hormones (cortisol and aldosterone) when 90 percent of the adrenal cortex is destroyed. As soon as levels of these hormones start to drop, Addison's disease signs and symptoms begin to emerge. Autoimmune conditions The immune system is the body's defense mechanism against disease, toxins, or infection. When a person is ill, the immune system produces antibodies, which attack whatever is causing them to be ill. Some people's immune systems may start attacking healthy tissue and organs - this is called an autoimmune disorder. In the case of Addison's disease, the immune system attacks cells of the adrenal glands, slowly reducing how well they can function. Addison's disease that is the result of an autoimmune condition is also known as autoimmune Addison's disease. Genetic causes of Autoimmune Addison's Disease Recent studies have demonstrated that some people with specific genes are more likely to have an autoimmune condition. Although the genetics of Addison's are not fully understood, the genes most commonly associated with the condition belong to a family of genes called the human leukocyte antigen (HLA) complex. This complex helps the immune system distinguish between the body's own proteins and those made by viruses and bacteria. Many patients with autoimmune Addison's disease have at least one other autoimmune disorder, such as hypothyroidism, type 1 diabetes, or vitiligo. Tuberculosis Tuberculosis (TB) is a bacterial infection that affects the lungs and can spread to other parts of the body. If the TB reaches the adrenal glands it can severely damage them, affecting their production of hormones. Patients with TB have a higher risk of damage to the adrenal glands, making them more likely to develop Addison's disease. In America, because TB is now less frequent, cases of Addison's disease caused by TB are uncommon. However, in countries where TB is a significant problem, there are higher rates. Other causes Having surgery to remove the adrenal glands may cause Addison's disease in some cases. 
Addison's disease may also be caused by other factors that affect the adrenal glands:
- a genetic defect in which the adrenal glands do not develop properly
- a hemorrhage
- adrenalectomy (the surgical removal of the adrenal glands)
- amyloidosis
- an infection, such as HIV or a disseminated fungal infection
- cancer that has metastasized to the adrenal glands

Secondary adrenal insufficiency

The adrenal glands can also be negatively affected if the pituitary gland becomes diseased. Normally, the pituitary produces adrenocorticotropic hormone (ACTH). This hormone stimulates the adrenal glands to produce hormones. If the pituitary is damaged or diseased, less ACTH is produced and, consequently, fewer hormones are produced by the adrenal glands, even though they are not diseased themselves. This is called secondary adrenal insufficiency.

Steroids

Some people taking anabolic steroids, such as bodybuilders, may increase their risk of Addison's disease. The hormonal changes caused by taking steroids, particularly over a long period of time, can disrupt the adrenal glands' ability to produce healthy levels of hormones, and this can increase the risk of developing the disease. Glucocorticoids, such as cortisone, hydrocortisone, prednisone, prednisolone, and dexamethasone, act like cortisol. In other words, the body believes there is an increase of cortisol and suppresses ACTH. As mentioned above, a reduction in ACTH causes the adrenal glands to produce fewer hormones. Also, individuals who take oral corticosteroids for conditions such as lupus or inflammatory bowel disease and stop taking them suddenly may experience secondary adrenal insufficiency.
https://www.medicalnewstoday.com/articles/186235.php
In most CMST classes, you will cite sources verbally in your presentations, in addition to creating a written bibliography or works cited page. Your audience likely won't have your bibliography in front of them when they are listening to you, so it's important to let them know where you found your information. This page offers tips to help you create effective oral citations. Oral citations help you demonstrate the reliability and accuracy of the information you share during your speech. They provide the audience with proof you've researched your topic and help you establish ethos, or credibility, with your audience. Oral citations should include the following information. Who did you get the information from? Also share the author's credentials, to help establish this person or organization as a credible source. Where did the information come from? This could be a book, magazine, academic journal article, website, interview, etc. In most cases, oral citations require only the journal or website name. However, if you have used multiple sources from the same journal, also cite the article title. When was the information published? For websites that don't identify a date, say the date the site was last updated or the date you accessed the site. Check out the links below for some resources to help you with your writing!
https://libguides.mnsu.edu/c.php?g=664671&p=4672518
Project management is the well-ordered practice of originating, planning, implementing, supervising, and closing a project or task within a specified period of time to achieve organizational goals. Where several people work on a project, task allocation is important. Some agencies rely on Agile methods, others handle it at the daily stand-up, and others organize almost everything via email.

Today we would like to present ten to-do applications, which may be missing from your agency. Take a little time. Be innovative. Have fun. Well, let's be honest: the fun factor also depends on the task at hand. But with the appropriate to-do management, an important foundation is laid. Here are our ten recommendations for you, some perhaps already known, others almost forgotten, some perhaps new.

To select specific project management software for a project, we need to consider the following criteria:
- Time duration of the project
- Number of employees working on the project
- Resources to be spent on the tool
- Storage capacity of the tool
- PM functionalities available with the tool, like email communications, file sharing, tracking, etc.
- Usability

Based on the above points, the list of highlighted project management tools is as follows:
- Basecamp
- Jira
- ActiveCollab
- Asana
- Redmine
- Zoho
- Trello
- Revolver
- Wunderlist
- Remember The Milk

1) The team in focus with Basecamp
Source: https://picksaas.com/project-management/basecamp
Today Basecamp has some exciting features and has undergone a hugely positive development in recent years. Key features of Basecamp are:
- To-do lists: Make daily agendas for all the work you have to do, assign tasks, and set due dates.
- Scheduling: Each project in Basecamp incorporates a schedule that shows dated to-dos and events for that project.
- Documents & file storage: Every project includes a space to share documents, files, and images so everyone on the project knows where to find the resources they need.
- Message boards: A nice feature that emphasizes the social factor of Basecamp is an automated group queries function. Depending on the desired frequency, questions can be asked daily, weekly or monthly.
- Check-in questions: For example, what has been done today, which topics are to be discussed at the next agency meeting, or what is still missing.
- Work with clients: Work with clients and your team in one organized place and get everything on the record.
- Email forwards: Import your emails into Basecamp and stay updated with notifications via email.
- Reports: Generate reports and statistics from the data and track your progress!

2) Jira
Source: http://www.stickpng.com/img/icons-logos-emojis/tech-companies/jira-logo
JIRA is the ultimate tool to manage all your projects and their resources and tasks in one spot. JIRA allows you to track any kind of unit of work (be it an issue, bug, story, project task, etc.) through a predefined workflow.
- Organize & track: JIRA allows you to plan, keep your team on track and monitor the progress of your projects. It is compatible with all platforms and every team member has secure online access.
- Managing and scheduling: Basic functionalities of JIRA are to track bugs, manage to-dos and assign an unlimited number of tasks and sub-tasks.
- Security: JIRA provides high-level granular security schemes and enterprise-level security management.
- Documents & file storage: Every task that one assigns or is assigned has a detailed description of the work to be accomplished, with any attachments (JIRA supports almost all kinds of file formats).
- Project structure: One may use pre-designed workflows or build their own customizable workflows.
- Report generation and email notification: JIRA offers time-tracking reports, roadmaps, dashboards with at-a-glance status updates, charts and reports on your personal dashboard, and email updates for efficient tracking of the project.
- Import & export: One may easily export report data to Microsoft Word and Excel and import data in XML and RSS feed formats.

3) ActiveCollab
Source: https://wistia.com/learn/showcase/explaining-online-payments
ActiveCollab is the perfect combination of tools for project management, task outlines and overviews, team creation and management, and billing, giving you a clear view of everything your team is working on.
- Task management with resources: It has various features such as task distribution, time management, and billing; it is also very simple to use and a well-designed application.
- Real-time chat: Keeps all your data in one place, where your team can communicate, get informed and see what they need to work on next.
- File sharing & security: It allows teams to share files, brainstorm, discuss important topics, etc. Clients can be included, with full protection of your sensitive data.
- Runs in the cloud: ActiveCollab runs on a cloud platform, which frees you from any administrative, hosting and maintenance work.
- Easy sharing & customization: On the other hand, if you need full control of your data, a custom URL and an unlimited number of team members, there is the self-hosted option. Always on the latest version and at maximum speed, this is a go-to option for most teams.
- Stay notified: ActiveCollab keeps you updated with email integration, so your team will be more aware of upcoming tasks, collaboration will increase, and you won't ever miss a notification.
- Platform independent: ActiveCollab is compatible with all platforms and provides excellent file management features.

4) Asana
Source: https://commons.wikimedia.org/wiki/File:Asana_logo_new.png
Asana is a web and mobile application designed to help teams organize, track, and manage their work. Have a look at its wide range of features:
- Projects: Organize your work into shared projects and manage it in pieces. Assign tasks to teammates with start and due dates.
- Attachments: Asana supports every file format, and you can share attachments with your teammates in just two clicks!
- Task comments: Comment on a task to clarify doubts and tag teammates or other work in Asana so everyone stays informed. Discuss a project's progress and share its current state.
- Stay updated and planned: Stay updated on projects, conversations, and tasks. Also, plan your day with a prioritized to-do list.
- Team management: Create teams to organize your projects, add teammates as followers, and limit access with permissions and admin controls.
- File sharing: Use Dropbox, Google Drive, and Box to attach files directly to tasks.
- Communication mediums: You can integrate with Slack and HipChat, send emails to Asana, or use the Asana for Gmail add-on.
- Apps: The Asana application is available on both Android and iOS.

5) Redmine
Source: https://commons.wikimedia.org/wiki/File:Redmine_logo.svg
Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database. Redmine is open source and released under the terms of the GPL.
- Multiple project support: Each user can have multiple projects with a different role on each project.
- Flexible issue tracking system: Define your own statuses and issue types.
- Gantt chart and calendar: Automatic Gantt chart and calendar based on issues' start and due dates.
- Time tracking functionality: Time can be entered at project or ticket level.
- File management: You can easily post messages and share files of various formats.
- Multi-language support: Supports more than 50 languages.
- Multiple database support: Redmine runs with MySQL, PostgreSQL or SQLite.
- Suitability: Most suitable for software development projects; least suitable for non-technical projects.

6) Zoho Invoice
Source: http://www.stickpng.com/img/icons-logos-emojis/tech-companies/zoho-logo
Zoho Invoice is an ideal accounting software for sole proprietorships and small to large businesses. It is an all-in-one tool for accounting. Its key features are:
- Sales and marketing: Zoho provides a wide variety of tools for sales and marketing, such as CRM, Forms, SalesIQ, Survey, Sales, etc.
- Finance: Zoho provides a vast variety of tools for finance, such as invoice generation, subscriptions, an expense calculator and checkout.
- Collaboration: Zoho empowers you to collaborate with Docs, Sheet, Projects, Sprints, BugTracker, Meeting, Notebook, Vault and ShowTime.
- IT tools: It also has IT-related tools such as a service desk, mobile device management and Site24x7.
- Human resources: It contains two services in the HR category, i.e., Recruit and People.

7) Remember The Milk
Source: https://www.marktastic.com/2015/07/task-management-challenge-remember-the-milk-vs-todoist/
It's good when developers do not just think about the big features, but also the little extras. Remember The Milk is a well-known to-do application whose developers work with attention to detail.
- To-do functionality: An important quality for to-do applications is simplicity, and Remember The Milk has some extras that support it.
- Smart Add and assign: With "Smart Add", tasks can be quickly assigned to a specific day in the week and equipped with subtasks.
- Easy sync: Remember The Milk is best used in combination with other services. The app integrates with Evernote, Twitter and Google Calendar, and synchronization with Microsoft Outlook is also possible.
- Personal and professional: It is important to understand that Remember The Milk is built around personal to-do management, unlike Basecamp, where the team is the focus.
- Agency friendly: Ideal for agencies where employees work quite independently. Remember The Milk contains all the functions that an agency needs to manage to-dos: simple operation, connections to well-known third-party applications, as well as the Smart Add function.

8) Revolver
Revolver is the perfect tool for managing complex and large projects efficiently. It comes with extensive features, making it a strong tool that can handle anything.
Its features are:
- All-in-one: Revolver is more of an all-round agency software, covering everything from stationery design to costing, time recording and reports, including project management and planning tools.
- Streamlined: Several employees can create projects, assign deadlines, assign tasks, and record the required hours. This means that Revolver goes a good step further than the other services presented here.
- Suitability: This planning and working software is particularly interesting for larger agencies that want to gain a better overview of their everyday work.
- Chat: Among other things, it offers real-time chat functions.
- Integration: Connections to over 300 third-party applications, and much more.
- Complexity: If you want to manage "only" tasks, you will find Revolver too complex.

9) Wunderlist
Source: https://businesskitbag.com/p/task-management/wunderlist/
Wunderlist is a cloud-based project management application. It enables users to manage their tasks from a cell phone, tablet, PC or smartwatch. Have a look at its features:
- Management & sharing: Wunderlist makes it possible to manage tasks in lists, and it is also possible to share these lists with employees.
- Grouping: Multiple lists can be grouped into a single folder, which allows you to manage larger projects.
- Enterprise or personal use: An advantage of Wunderlist is that the application is already used by many people privately. The environment is therefore familiar and the work process is easy.
- Apps: The developers of Wunderlist have brought their service to numerous platforms. In the browser, on Android or on iOS: Wunderlist is available everywhere.
- Task classification: In Wunderlist, tasks can be expanded with various information. This allows creating tasks, as well as leaving notes, working with attachments and writing comments.
- Restore: Restore the files or tasks that you deleted accidentally.

10) Trello
Source: https://trello.com/about/logo
We all know Trello. It is the best tool to track the progress of a project. Also have a look at its other features:
- Structure: Trello consists of cards and boards. Each card can be expanded with notes, deadlines, attachments, checklists and more.
- More features: Trello offers lots of interesting things for Premium or Gold customers: expanding cards with the voting function; connecting cards to applications (such as Evernote, Dropbox, GitHub, Google Drive, MailChimp, Slack, and many more); and managing the entire organization in multiple teams and related boards.
- Suitability: Trello is a must when it comes to small team management.

So how do you choose the right project management tool for your agency?
1. It must provide an introduction. The tool you choose should be self-explanatory. With other apps, you might feel overwhelmed and then have to go back to paperwork. Therefore some guidance is needed.
2. Suitability. The tool you choose must be suitable for your enterprise and should cover all the features that your enterprise needs.
3. It must integrate into everyday life. At the morning meeting, during discussions or while planning a new project.
No more "I'll just jot it down in my notebook" – only Wunderlist and nothing else.
4. Lead by example. If you don't use the app extensively, the others won't either. So do not write emails saying "I have a task for you here"; leave a comment in Remember The Milk and assign the task to the person.

Hope this article helps you with project management. We would love to know your favourite project management system via the comment box below.

Contribute to TYPO3 by becoming my patron
As I love TYPO3, I would like to provide TYPO3 people with informative content, tutorials, and experiences by writing regular TYPO3 blog posts, as a way to give back to the community.
https://www.nitsan.in/blog/top-10-project-management-systems-for-agencies/
The face of higher education is changing every day. Colleges and universities are faced with numerous challenges such as low retention, decline in degree completion, budget cuts, rising costs, and changes in teaching methods and curricula. As institutions are looking for ways to increase degree completion and student retention, they also focus on improving the student learning experience in the classroom. In order to accomplish this goal, it is important to understand the different generations that are occupying the classroom.

A generational cohort refers to a group of individuals who were born within the same time span and thus tend to share the same attitudes, beliefs, and values (Dimock, 2019). Today's classroom is now occupied by a new generation of students called Generation Z or iGen. This generation, born after 1996, is called Gen Z, while individuals born between 1981 and 1996 are part of the Millennial generation. However, they are not the only ones occupying today's classroom. Other generations include Generation X, born between 1965 and 1980, and the Baby Boomers, born between 1946 and 1964 (Dimock, 2019).

The aforementioned generations have had different experiences with technology in their lives. Generation X-ers were introduced to the personal computer when they became teenagers. In comparison, the Millennials, also known as the Net Generation, were brought up in the world of personal computers and electronic devices. They are comfortable using any form of technology and use the internet for research and social media to connect with others. They are considered the earliest adopters of social media and internet technology (Seemiller & Grace, 2016).

The experience of Gen Z with technology is different in terms of accessibility and connectivity. All the devices used by the preceding generations are now combined into one device that does not leave their sight. Their fluency with technology keeps them well informed both online and offline (Seemiller & Grace, 2016). They live simultaneously in a virtual and physical reality and are more technologically savvy than all previous generations. They are the true digital-native generation who believe that there is an app for everything. Twenge (2017) refers to this new generation as iGen, noting that the "i" in the word represents the internet, individualism, income inequality, in no hurry, in person no more, insecure, insulated but not intrinsic, income insecurity, indefinite, inclusive, and independent. Seemiller and Grace (2016) describe this generation as loyal, thoughtful, compassionate, open-minded, and responsible; characteristics that they bring to the classroom.

Gen Z students in today's classroom are different from the preceding generations in terms of learning, interaction with technology, and social relations. Traditional styles of teaching that have been successful in the past are becoming ineffective in a world where most students are accustomed to a fast-paced environment with easy access to information at their fingertips. They have little patience for any experience that takes a long time. Research on Gen Z in the classroom shows that these students learn differently from their predecessors. In addition to their cellphones, they are also bringing their values and their strong opinions (Seemiller & Grace, 2016). Having had access to digital technology from an early age, Gen Z students have a greater need for technology-based instruction than their preceding generations.
One way to keep this generation engaged in the classroom is to incorporate new technology into adaptive learning activities and to understand the generational divide that exists between the instructor and the students in the classroom (Roehl, Reddy, & Shannon, 2013). While this generation is considered more self-directed and made up of quicker learners than previous generations, they are not team players; thus teaching approaches that emphasize cooperative and social learning are important in creating a learning environment where the students are eager to learn and share what they learn. Seemiller and Grace (2016) argued that learning for Generation Z is more than access to the content; the emphasis should be put on the process through which these students learn and understand the content. The authors suggested providing a platform for the students to gain practical experience that they can use in their field. They also recommended that students be exposed to learning approaches that help develop creativity. While this generation prefers to learn independently at their own pace, working in group settings can help them engage in social learning.

To stay relevant and effective in education, teachers, faculty, and administrators should strive to understand the dynamics and cultural shift that is affecting the campus. There is no going back to the old paradigm; some drastic changes must be made in order to close the generation gap and improve the student learning experience. Educators should reflect on their teaching styles and outcomes and be able to embrace new active learning and technology-enabled strategies in the classroom to keep the new generation engaged.

References
Dimock, M. (2019, January 17). Defining generations: Where Millennials end and Generation Z begins. Retrieved from Pew Research Center: http://www.pewresearch.org/fact-tank/2019/01/17/where-millennials-end-and-generation-z-begins/
Jones, V., Jo, J., & Martin, P. (n.d.). Future schools and how technology can be used to support millennial and generation-Z students. School of Information and Communication Technology.
Roehl, A., Reddy, S. L., & Shannon, G. J. (2013). The flipped classroom: An opportunity to engage Millennial students through active learning strategies. Journal of Family & Consumer Sciences, 105(2), 44-49.
Seemiller, C., & Grace, M. (2016). Generation Z goes to college. San Francisco, CA: Jossey-Bass.
Twenge, J. M. (2017). iGen: Why today's super-connected kids are growing up less rebellious, more tolerant, less happy - and completely unprepared for adulthood. New York, NY: Simon & Schuster, Inc.

About the author: Dr. Claudia Bonilla has worked in higher education for more than 20 years. She holds a bachelor's degree in Computer Information Systems and a master's degree in Mathematics Education from Nova Southeastern University. In 2015, she completed her Ed.D. in higher education and organizational leadership at Nova Southeastern University. She is the Chairperson of the Mathematics Department at Miami Dade College in Florida and the Chair convener for the mathematics discipline. Her expertise includes Mathematics, Math Education, Curriculum Development, Higher Education and Organizational Leadership. She is also a member at-large of the Florida Mathematics Redesign group, contributing to supporting community colleges' efforts to develop student-centered pathways and increase student completion rates.
https://www.whataboutleadership.com/single-post/2019/07/17/Who-are-the-Generation-Z-students-in-your-classroom
In order to find potential tools, scaffolds, and differentiation strategies to be employed by English Language (EL) and content teachers alike, a small qualitative study was conducted that found that English learners (ELs) displayed better reading comprehension and increased memory retention of chapter events when reading the graphic novel version of a text in comparison to the traditional book format.

Key words: language learners, ELL, graphic novels, comics, reading, reading comprehension, memory recall

Using Graphic Novels to Increase Comprehension and Recall

As an English Language teacher, I work with English learners (ELs) not only by helping them with listening, speaking, reading, and writing, but also by providing tools and scaffolds to help them successfully navigate the English language in their mainstream classes. Sadly, ELs can face more challenges in the classroom than their native-speaking peers, including, but not limited to, skill transfer from the learner's first language (L1) to the target language (L2); the unique nature of the student's L1 (e.g., is there a different alphabet? Is the language a spoken language only?); interrupted schooling; and the possibility that school is the only place these learners are hearing and using their L2 (Ford, 2005). My personal belief in the power of visuals in the form of comic books and graphic novels led me to study graphic novels as a potential tool for EL reading success in elementary education.

The purpose of my study was to explore how the use of graphic novels in an EL classroom could increase reading comprehension and memory recall. I sought to find answers to the following questions:
- How can graphic novels affect the proficiency of reading comprehension, as shown by increased performance on the task of retelling, for middle school ELs in comparison with a text-only novel?
- In what ways can the memory recall of a chapter's events be affected by the use of a graphic novel adaptation in contrast with the traditional text format?

The purpose of this article is to provide a brief summary of the major findings that emerged from my research. The article will begin by discussing the usefulness of visual aids for reading comprehension and memory recall. Next, it will introduce graphic novels and their potential for serving as visual aids in the classroom. Finally, my study and findings, followed by their implications, will be discussed.

Terms

There is a lot of confusion regarding the true difference between the terms comic and graphic novel. Will Eisner originally defined comics as simply "sequential art" (1985). Scott McCloud later defined them as a "collection of pictures and words arranged side-by-side in a sequential story format" (McCloud, 1993). The term graphic novel maintains this sequential-art aspect but differs in that, instead of being serialized, graphic novels are often published as original trade paperbacks that tell a single story from beginning to end, more similar to a traditional novel (Arnold, 2003). A casual search will show there are graphic novels for just about every subject or literary genre, with themes just as thought-provoking as those present in traditional novels, but with the added scaffold of visuals. It is this visual scaffold aspect that gives both graphic novels and comics the potential to provide support for struggling readers while still working through the same difficult themes and complicated stories that exist in the text-only format.
It is when we look through the lens of the visual scaffold that is provided by both comic books and graphic novels in education that the difference between them becomes negligible. To avoid confusion in this article, the term "graphic novel" will be used as a catchall to describe both comic books and graphic novels, as it is their shared visual scaffolding aspect I am focusing on, and not whether they are serialized (comics) or whole works (graphic novels). The graphic novel used in this study was published as a whole work in sequential story format.

Visual Aids and Reading Comprehension

Visuals have long been hailed as useful aids in assisting students in their reading comprehension (e.g., Levie & Lentz, 1982; Levin, Anglin, & Carney, 1987). Luckily for EL educators, visuals are ever-present within the context of graphic novels, which may aid reading comprehension. Any student confusion that could arise regarding the comprehension of the plot, characters, or setting may find a more concrete representation in the accompanying visuals of the graphic novel format of the story. A graphic novel's ability to display the relationship between words and visual images simultaneously allows readers an easier path to imagine what they just read, a fundamental key to facilitating comprehension (Eisner, 1998). In fact, this path seems to naturally assist students with the use of a key reading strategy, visualization, or forming mental pictures in their minds, which helps students to "…find they are living the story as they read" and therefore increases their enjoyment and understanding (Roe & Smith, 2005, p. 333).

Additional research suggests that pictures presented alongside text facilitate comprehension by reducing the cognitive load of dense text or more sophisticated concepts (Burke, 2012; Mayer, 1994, 2014; Metros & Woolsey, 2006; Schnotz, 2002). A study of native English-speaking university students, for example, found that incorporating visuals into a lesson on how blood circulates through the heart resulted in better understanding than text alone (Butcher, 2006). In addition, visuals have been found effective for adult ESL learners (Liu, 2004). The results of Liu's study showed that low-level adult EL students given a high-level text with added visuals scored significantly higher on a series of comprehension recall protocols administered immediately afterward than those given the high-level text only.

Paivio's (1991) dual coding theory, which describes the process our brain undergoes during reading, sheds light on how graphic novels may facilitate reading comprehension. Paivio argues that all learners learn to read or write using two separate language systems of cognition. The first is the verbal system, which is the information garnered from words, sequence, speech, and writing. The second system is the imagery system, consisting of non-verbal information, such as images and visualizations. Paivio (1991) explains that students are making connections between these two different systems simultaneously while they read, and it is these connections between the two systems that allow for better understanding and recall. Essentially, information is stored both verbally and non-verbally, as words and images, and in this format one can recall information to a greater degree.
Graphic novels, it seems, have a unique format that includes both of the language systems of cognition in one reading experience; they are student reading materials with visual scaffolds already designed into them.

Visual Aids and Reading Recall

Paivio (1991) further argues that the visual portion of the dual coding system is even more important when it comes to memory recall. An early study that seems to support the connection between visuals and memory recall was conducted by Omaggio (1979), which measured comprehension among native English speakers reading several different texts with and without visuals in both the L1 of English and the L2 of French. Omaggio found that while the visuals had no effect on reading comprehension in the English L1, they did have a positive effect on reading comprehension and recall in the French L2 reading (Omaggio, 1979). In another study, two different groups of participants were given readings, with one group using a text-only excerpt and a second group using a text with added visuals (Waddill & McDaniel, 1992). Upon completion of the reading of the excerpts, participants were simply asked to write as much as they could recall on the subject. It was found that those from the group with the added visual support were able to recall a greater amount of information than those without (Waddill & McDaniel, 1992).

It is worth noting that visuals are not a panacea for comprehension; a study conducted by Daniel Bruski (2011) found that a group of beginning-level adult language learners, some literate and some non-literate in their first language, produced non-universal understandings and inaccurate descriptions of what was occurring in the visual when presented with images of speech bubbles, arrows, and symbolic signs. Context and cultural background were shown to play a major role in differing interpretations of images, suggesting that misinterpretations of meaning may occur with visuals in the same manner they may with text.

Graphic Novels: A Visual Scaffold

Visuals can sometimes better illustrate a concept. This is the reason that manuals contain images alongside written instructions, and why companies use logos to brand themselves: visuals are not affected as easily by language barriers (McCloud, 1993). Visuals are more concrete. So, when applying this to a classroom, the simplified visual nature of comic books may provide a scaffold by allowing readers to focus their attention on important text aspects as well as eliminating potentially confusing ones.

The Study

This qualitative classroom study used retells and memory recall assessments to determine how graphic novels versus traditional novels might support comprehension and memory of texts.

Setting and participants

The school setting of this qualitative action research study was a grade five through eight environmental, science, technology, engineering and math (E-STEM) public middle school. The fifth grade class in which this research was conducted consisted of seven students, five boys and two girls, all of whom participated in the study. All participants had been in the United States for a minimum of two years, were labeled as Limited English Proficient (LEP), and had received regular prior formal schooling.
Their language levels ranged from 2.8 to 3.9 according to the World-Class Instructional Design and Assessment (WIDA) scale used for assessing English language levels in Minnesota's K-12 schools, indicating they were low-intermediate students, and they received sheltered English instruction from an English as a Second Language (ESL) instructor, also the author of this study. The sheltered classes provide language support while simultaneously meeting mainstream content standards.

Data collection instruments

I used student-produced written retells to show comprehension of the text, and multiple choice questions to measure recall. The process of retelling requires students to consider the information they read and summarize what they understand; it also includes higher order thinking skills, including the processing of schema, the ability to process and filter textual information, the ability to sequence events, the ability to determine the relative importance of events, the ability to later recall this important information, the ability to organize this information in an understandable and meaningful way, and the ability to draw conclusions about the relationships that may exist within ideas in the text itself (Fisher & Frey, 2011; Klingner, 2004; Shaw, 2005). With the variety of required skills needed to produce a retell, it has been argued that retells are a powerful way to measure reading comprehension or to check for understanding (Shaw, 2005).

Retell preparation

To begin the process, I modeled how to complete the graphic organizers to help the students capture the essential information needed for a retell. After modeling, the graphic organizers were practiced with a traditional novel. A chapter of the young adult science fiction novel City of Ember was read aloud to the whole group at the end of each class period for the first eight chapters. This text was chosen not only because the Lexile levels for both the traditional and graphic novel fell within the reading level range of the participants, but also because the graphic novel uses a great deal of the same text and quotations from the traditional novel. Using these beginning chapters as practice for the graphic organizers and the written retells, gradual release was used through first modeling the act of completing the organizer, and then slowly shifting this responsibility onto the participants (Pearson & Gallagher, 1983). After this practice with the traditional novel, the graphic organizers were practiced with a graphic novel of the same City of Ember text. This was also done at the end of each period for three consecutive days. The same graphic organizers were used to help participants complete written retells for each of these graphic novel chapters daily. This gradual release assured participants could complete the organizers and retells independently for the study.

Data collection

Upon reaching Chapter 11, participant retells began to be used for data collection in the study. Participants began the process of using the end of the class period to independently read, fill out their graphic organizers, and compose their written retells from these organizers. They created a new retell for each chapter read on each consecutive day. The use of the traditional novel and the graphic novel alternated for the purpose of comparison. Only the data collected from the independently produced student written retells for Chapters 11-18 were formally scored, and their results were used as data in this study.
In addition to completing written retells of chapter events, participants were given a short quiz of three to four multiple-choice questions after a 24-hour period to gauge recall of the previous day's chapter. Each assessment contained information specific to the last chapter read and was similar in format regardless of the text medium (traditional or graphic novel) it was used to assess. Retells were collected during the final 15 minutes of each class period for approximately six school weeks.

Data analysis
The retell data were analyzed using participants' final written retells. These retells were assessed using a six-category, instructor-created retell-scoring rubric (Table 1) focused solely on students' delivery of information related to their comprehension of the story as a whole. The rubric emphasized construction of meaning rather than anything sentential or sub-sentential, such as construction of clauses, spelling, or punctuation, thereby eliminating possible loss of points due to language transfer errors. The rubric categories were used to assess retelling of the chapter's key events, sequence of events, problem, resolution, characters, and setting. Scores ranged from zero to three for each category, for a maximum of 18 points. Scores resulting from a traditional novel chapter and from a graphic novel chapter were compared.

Table 1. Retell Scoring Rubric
| Idea Unit | Verbal Prompts Used | 0 | 1 | 2 | 3 |
| Key idea of chapter's events | What important events took place during this chapter? | Wholly inaccurate or not included | Does not recall many key ideas or inaccurately expresses events | Accurately expresses some key, although incomplete, events | Accurately expresses all key events in the chapter to completeness |
| Sequence of events | How does this chapter begin? What was the order of the events? | Wholly inaccurate or not included | States some events in order, but with some inaccuracies | States many events in order, but with some inaccuracies | Accurately states events in correct order |
| Problem | What was one important problem in this chapter? | Wholly inaccurate or not included | Includes chapter non-specific, vague, or unrelated problem | Chapter's problem description is accurate but vague or with some inaccuracies | Accurately states chapter's problem |
| Resolution | How does the chapter end? Is a problem solved? | Wholly inaccurate or not included | States chapter non-specific or unrelated resolution | Chapter's resolution description is accurate but vague or with some inaccuracies | Accurately states chapter's resolution |
| Characters | Who were the important or main characters in this chapter? | Wholly inaccurate or not included | States chapter non-specific or unrelated character descriptions or includes unimportant characters | Chapter's character description is accurate but vague or with some inaccuracies | Accurately states chapter's main characters |
| Setting | Where and when does this chapter take place? | Wholly inaccurate or not included | States chapter non-specific or unrelated chapter setting | Chapter's setting is accurate but vague or with some inaccuracies | Accurately states chapter's setting |

The data from the multiple-choice memory recall assessments were scored on a standard percentage basis; for example, three of four correct was scored as 75%.
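To make the scoring procedure concrete, the short sketch below (a minimal illustration in Python, using invented scores rather than actual study data) shows how a chapter retell can be totaled against the six rubric categories out of 18 points and how a recall quiz can be converted to a percentage; the per-format averaging described in the next paragraph follows the same pattern.

```python
# Minimal sketch of the scoring described above; all numbers are invented
# for illustration and are not data from the study.

RUBRIC_CATEGORIES = [
    "key events", "sequence", "problem", "resolution", "characters", "setting",
]

def score_retell(category_scores):
    """Sum six 0-3 rubric scores into a retell total out of 18 points."""
    assert len(category_scores) == len(RUBRIC_CATEGORIES)
    assert all(0 <= s <= 3 for s in category_scores)
    return sum(category_scores)

def score_recall(correct, total_questions):
    """Convert a multiple-choice recall quiz to a percentage (e.g., 3 of 4 -> 75.0)."""
    return 100.0 * correct / total_questions

def format_average(scores):
    """Average a student's chapter scores within one text format."""
    return sum(scores) / len(scores)

# Hypothetical student: retell scores for four graphic novel chapters,
# plus one recall quiz with 3 of 4 questions answered correctly.
graphic_retells = [score_retell(s) for s in ([3, 2, 2, 3, 3, 2],
                                             [2, 2, 3, 2, 3, 3],
                                             [3, 3, 2, 2, 3, 2],
                                             [2, 3, 3, 3, 2, 2])]
print(format_average(graphic_retells))   # average retell score out of 18
print(score_recall(3, 4))                # 75.0
```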
The scores were totaled and converted to percentages for ease of comparison, and then averaged to create each student's personal average across the four traditional novel chapters and across the four graphic novel chapters.

Findings
This study found that when students read the graphic novel chapters, they displayed increased scores on both the reading comprehension and the memory recall assessments.

Reading comprehension
When the total rubric scores of the four written retells for the traditional novel were averaged and compared to the average of the total scores of the four written retells for the graphic novel, all seven student participants' averages increased with the use of the graphic novel version in place of the traditional novel. The average increase in score when using the graphic novel was 2.64 points out of the rubric's 18 total points. The maximum increase in average score was 4.25 points and the minimum was 0.5 points. Figure 1 displays this comparison.

Memory recall
The average percentage score on the memory recall assessments increased for all seven participants when they had read graphic novel chapters, with scores averaging 16.8 percentage points higher on the chapters read in the graphic novel format. Figure 2 shows this comparison. The largest increase for a student participant was 33.9 percentage points, from 46.1% on the traditional novel memory recall assessments to 80% on the graphic novel memory recall assessments.

Discussion
Research on visuals (Butcher, 2006; Omaggio, 1979; Waddill & McDaniel, 1992), as well as this study, suggests the importance of visuals for language learners as they progress towards the language levels of their native-speaking peers. ELs in this study likewise seemed to be better able to reproduce the main plot elements of a story while retaining comprehension after reading graphic novels. Furthermore, this study suggests ELs will even remember the previous day's reading better when using the graphic novel format, allowing them to better maintain their understanding as they progress through the story. These results are consistent with the findings of Liu, Butcher, Waddill and McDaniel, and Omaggio: participants consistently show increased comprehension and can recall texts more accurately when they are provided scaffolding in the form of visuals. While these findings apply only to the limited number of participants in this study, their promise warrants additional research on the topic on a larger scale and with ELs of other age groups and language levels. Additionally, one could explore the possible benefits of graphic novels with other student populations who benefit from reading scaffolding, including those with Individual Education Plans (IEPs) and learning disabilities.

Activities with Visual Literacy
All classroom teachers could begin integrating comic books and graphic novels into the classroom by identifying the goal for the lesson and determining whether graphic novels could be used as alternate or companion readers. If the goal is to have students identify character development, for example, or to identify the conflict and other plot elements of a story, it may be of no consequence whether students do so by reading the traditional or the graphic version of the novel. Another implementation for graphic novels is simply to use them in the same manner as leveled readers by creating reading groups based upon needs.
An EL classroom can contain very diverse needs when it comes to text studies, which at one point had me splitting our class into three groups: one group reading a Pearson ESL leveled Frankenstein reader, a small group reading a graphic novel adaptation of Frankenstein, and a final small group reading the traditional novel. The group reading the graphic novel adaptation was actually reading a more difficult text than the leveled reader group, but they had the built-in scaffolds of pictures to give them the visual support they still needed to access the higher-level text. All of the groups were then able to come back together and participate in the same discussions, activities, and check-ins despite the vast differences in their reading abilities. Graphic novels could be used as an initial at-level replacement for the traditional novel for below-level reading groups; they could be pre-taught or used for front-loading before attempting a traditional text that may be above a reader's level once visual scaffolds are removed; or they could be used as a supplement to the traditional novel. The graphic format can be used for student-produced work as well, as it is a perfect format for presenting concepts like sequencing, dialogue, predicting, and summarizing. If students are struggling to write a good summary, educators could allow students to create a visual summary of the events of a chapter using blank comic panels that students fill in. Students will still be required to show they understand the sequence of events through the use of sequential panels, and can show whether they grasped the setting and important chapter events through their visual summary. If teachers want to incorporate writing into the summary, they may require students to produce dialogue by requiring a set number of speech balloons, or scaffold their writing by adding cloze passages and sentence starters in the panels for students to complete.

Conclusion
ELs are often required to perform difficult tasks that require native-like levels of comprehension. Content teachers need a varied set of tools and strategies to help ELs develop their language levels and experience greater success. This initial, albeit limited, study suggests that the use of graphic novels as one of these tools can help language learners improve achievement in reading comprehension and memory retention. As ELs use these tools to reach the level of reading comprehension of their native-speaking peers, they may be more willing and confident to retell, participate in class readings, and take risks in their learning. While the use of visuals has always been a hallmark of good language teaching, this study shines a light on the potential of the often-overlooked format of graphic novels for increasing reading comprehension and memory retention among ELs.

Links to Resources
If you are interested in learning more about the medium of graphic novels, I highly recommend Scott McCloud's work on the topic, "Understanding Comics." The text delves into the graphic novel as an art form and helps dispel the stigma that comic books are just for children. http://scottmccloud.com/ The links below can provide you with additional information regarding using graphic novels in your classroom, including additional resources and lessons for teachers, comic book and graphic novel listings, and more.
The American Library Association's yearly graphic novel reading list: http://www.ala.org/alsc/publications-resources/book-lists/graphicnovels2018
Reading with Pictures includes research, content, and best practices for integrating comics into curriculum: http://www.readingwithpictures.org/
Getgraphic.org gives up-to-date news in the graphic novel world as well as ideas for how to use graphic novels as a tool for literacy: https://www.buffalolib.org/content/get-graphic
The Comic Book Legal Defense Fund has library and educator tools for using graphic novels: http://cbldf.org/using-graphic-novels/
Reading Rockets has classroom ideas and booklists for graphic novels in the classroom: http://www.readingrockets.org/article/graphic-novels-kids-classroom-ideas-booklists-and-more

References
Arnold, A.D. (2003). The graphic novel silver anniversary. Time Online Edition. Retrieved from http://content.time.com/time/arts/article/0,8599,547796,00.html
Bruski, D. (2012). Graphic device interpretation by low-literate adult ELLs: Do they get the picture? MinneTESOL/WITESOL Journal, 29, 7-29.
Burke, B.P. (2012). Using comic books and graphic novels to improve and facilitate community college students' literacy development. Retrieved from ProQuest Dissertations and Theses Global. (UMI No. 3546922)
Butcher, K.R. (2006). Learning from text with diagrams: Promoting mental model development and inference generation. Journal of Educational Psychology, 98(1), 182-197.
Eisner, W. (1985). Comics and sequential art. Paramus, NJ: Poorhouse Press.
Ford, K. (2005, July). Fostering literacy development in English language learners. Colorín Colorado. Retrieved from http://www.colorincolorado.org/article/fostering-literacy-development-english-language-learners
Frey, N., & Fisher, D. (2004). Using graphic novels, anime, and the internet in an urban high school. English Journal, 93, 19-25.
Frey, N., & Fisher, D. (2011). The formative assessment action plan. Alexandria, VA: ASCD.
Klingner, J.K. (2004). Assessing reading comprehension. Assessment for Effective Intervention, 29(4), 59-70.
Levie, W.H., & Lentz, R. (1982). Effects of text illustrations: A review of research. Educational Communication and Technology, 30(4), 195-233.
Levin, J.R., Anglin, G.J., & Carney, R.N. (1987). On empirically validating functions of pictures in prose. In D.M. Willows & H.A. Houghton (Eds.), The psychology of illustration, Volume 1 (pp. 51-85). New York: Springer-Verlag.
Liu, J. (2004). Effects of comic strips on L2 learners' reading comprehension. TESOL Quarterly, 38(2), 225-243.
Mayer, R.E. (1994). Visual aids to knowledge construction: Building mental representations from pictures and words. Advances in Psychology, 108, 125-138.
Mayer, R. (Ed.). (2014). The Cambridge handbook of multimedia learning (Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press.
McCloud, S. (1993). Understanding comics: The invisible art. New York: Paradox Press.
Metros, S.E., & Woolsey, K. (2006). Visual literacy: An institutional imperative. EDUCAUSE Review, 41(3), 80-81.
Omaggio, A.C. (1979). Pictures and second language comprehension: Do they help? Foreign Language Annals, 12, 107-116.
Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal of Psychology, 45(3), 255-287.
Pearson, P.D., & Gallagher, M.C. (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8, 317-344.
Roe, B.D., Smith, S.H., & Burns, P.C. (2005). Teaching reading in today's elementary schools (9th ed.). Boston: Houghton Mifflin.
Schnotz, W. (2002). Towards an integrated view of learning from text and visual displays. Educational Psychology Review, 14, 101-120.
Shaw, D. (2005). Retelling strategies to improve comprehension: Effective hands-on strategies for fiction and nonfiction that help students remember and understand what they read. New York: Scholastic.
Snowball, C. (2005). Teenage reluctant readers and graphic novels. Young Adult Library Services, 3(4), 43-45.
Waddill, P.J., & McDaniel, M.A. (1992). Pictorial enhancement of text memory: Limitations imposed by picture type and comprehension skill. Memory & Cognition, 20, 472-482.
http://minnetesoljournal.org/journal-archive/mtj-2018-2/reading-comprehension-through-graphic-novels-how-comic-books-and-graphic-novels-can-help-language-learners/
DENVER (Reuters) – Taylor Swift sat next to her lawyers at a federal courthouse in Denver on Monday as jury selection began for a trial pitting the pop star against a Colorado radio personality over allegations the former disc jockey fondled her four years ago during a photo shoot. Swift, 27, one of the top-selling U.S. singers, wore a black jacket and white top in court as she watched the proceedings. She is expected to take the stand during the trial in U.S. District Court to testify about the incident, which resulted in broadcaster David Mueller’s firing from Colorado music station KYGO-FM. The litigation centers on Swift’s allegations that Mueller slipped his hand under her dress and grabbed her bare bottom as they posed during a meet-and-greet session before her June 2, 2013, concert in Denver. “It was not an accident, it was completely intentional, and I have never been so sure of anything in my life,” Swift said of the incident during a deposition. Mueller, 55, sued first, claiming Swift falsely accused him of the groping and pressured station management to oust him from his $150,000-per-year job at the station, according to the lawsuit. Swift countersued for assault and battery and that became part of the same trial. A crowd of Swift supporters was expected to attend the proceedings. Maya Benia, 20, a fan from Albuquerque, New Mexico, had been waiting outside the courthouse since 5:30 a.m. MDT (1130 GMT). She could not stay for the trial because of a doctor’s appointment but had a letter she hoped someone would relay to the singer. “It is a thank-you to her for consistently being there for me over the years through all my hospitalizations and also a thank for survivors of sexual assault and her being able to use her voice when others couldn’t,” Benia said. Mueller denies anything inappropriate occurred during the brief backstage encounter in which he stood on one side of the pop star and his girlfriend on the other. His lawsuit said Swift’s accusation is “nonsense.” Mueller is suing under tort claims of interference with contractual obligations and prospective business relations. Jurors will determine what monetary damages, if any, he is entitled to if Swift is found liable. In court filings, Swift said her representatives informed KYGO management about the incident but she did not demand Mueller be fired. The radio station investigated and two days after the incident fired Mueller for violating the morality clause of his contract, court documents show. The judge has placed a gag order on all parties and attorneys for both sides did not respond to messages seeking comment. Swift, one of the most successful contemporary music artists, earned $170 million between June 2015 and June 2016 following a world tour and her best-selling “1989” album, according to Forbes Magazine.
https://www.thezimbabwemail.com/entertainment/taylor-swift-watches-jury-selection-trial-denver-dj/
The first species that comes to mind in the context of insects and genetics is Drosophila melanogaster, the most commonly used insect model. However, in many cases the fly is not such an ideal model after all – in fact, some biological phenomena cannot be investigated using it at all. For this reason, researchers at FAU and the universities of Göttingen and Cologne have turned their attention to the red flour beetle. They have now analysed more than 5300 of its genes, making their investigation of the role of DNA the largest of its kind for a single beetle. During their analysis the researchers also discovered some previously unknown genes that could provide important information for fields such as developmental biology, entomology and medicine. The researchers recently published their findings in the journal Nature Communications*. When geneticists want to know how genes control certain aspects of development, behaviour and other processes in insects, they examine Drosophila melanogaster. The fly is probably the best researched animal to date, and a large proportion of what is known about insect genes in general comes from this research. However, the fly is often not a typical example of genetic phenomena in insects. ‘Many processes that can’t be investigated using flies have been ignored in genetics for this reason,’ explains Prof. Dr. Gregor Bucher from the University of Göttingen, speaker of the DFG research unit iBeetle. ‘But in the insect world there are many fascinating processes that do not occur in the fly. With our project, iBeetle, we are laying the foundations for some of these to finally be examined genetically. In doing so we are creating a broader basis for genetic research.’ There are several reasons why the red flour beetle, Tribolium castaneum, is a suitable model organism that can be used in addition to the fly. Genetic mechanisms in the beetle can be transferred not only to other insects but also partially to vertebrates. During their analysis the researchers discovered, for example, that early development is apparently much more different in beetles and flies than previously thought. When they deactivated certain genes in the beetle, the front end of the beetle was replaced by a mirror image of the back end. In the fly the corresponding genes are responsible for entirely different things. ‘We have been looking for genes that are responsible for embryonic polarity in other insects for years. Now we have identified them in the beetle,’ says Prof. Dr. Martin Klingler, professor of developmental biology at FAU and deputy speaker for the project. ‘We never would have thought that evolution used genes in such a flexible way.’ The researchers also discovered several previously unstudied genes that could have applications in medicine, and they now intend to examine these further. For example, integrins are responsible for cell adhesion and are involved in a range of skin diseases in humans, including cancer. ‘We have discovered several additional genes that apparently work with the integrins in cell adhesion. Despite all the investigations using flies these genes had been overlooked until now, and they can now be studied in more detail,’ explains project co-ordinator Prof. Manfred Frasch, Division of Developmental Biology at FAU. They also identified new genes that are of significance for stem cell biology. Stem cells are involved in the formation and development of many tissues and special types of cells in organisms.
‘The findings from the beetle will improve our overall understanding of stem cells,’ summarises biologist Dr. Michael Schoppmeier, who is also a project co-ordinator for the Erlangen-based research group. The DFG research unit ‘iBeetle: functional genomics of insect embryogenesis and metamorphosis’ was set up in 2010 and has had its funding extended to 2016 by the German Research Foundation (DFG). Its researchers have investigated more than 5300 of the beetle’s 16,000 genes so far and are currently analysing a further 4000. The red flour beetle is a significant pest, as it eats flour and therefore causes considerable losses to harvests all over the world every year. The researchers hope that their findings will contribute to effective pest control for these beetles that can also be applied to other pests. The project is part of an important new development in genetics: the role of genes is no longer being investigated exclusively in classic model organisms such as Drosophila melanogaster but is now being studied in more and more animals. To deactivate genes, the researchers used a technique known as RNA interference, a discovery that was awarded the Nobel Prize in Physiology or Medicine in 2006. Further information about the project is available from the University of Göttingen. *Christian Schmitt-Engel et al. The iBeetle large-scale RNAi screen reveals novel gene functions for insect development and physiology. Nature Communications 2015. doi: 10.1038/ncomms8822
https://www.fau.eu/2015/07/29/news/research/fau-researchers-analyse-beetle-genes/
PROBLEM TO BE SOLVED: To provide a material composition which is applicable to parts requiring a high expansion and contraction ratio, has reduced toxicity, and has improved tear strength, etc., by compounding a multifunctional isocyanate with several kinds of compounds having prescribed numbers of functional groups that undergo polyaddition reactions with isocyanate groups, at prescribed molecular weights and prescribed ratios, thereby forming this composition. SOLUTION: The multifunctional isocyanate is defined as component (a). The compounds which have two functional groups capable of polyaddition reaction with isocyanate groups (hereafter simply called functional groups) and which have molecular weights of 60 to 3,500 and ≤500, respectively, are defined as components (b) and (c). The compound which has a molecular weight of ≤700 and has ≥3 such functional groups is defined as component (d). Per 100 mol of the isocyanate groups of component (a), the functional group quantities of the respective components are specified as 1 to 35 mol for the total functional groups of components (c) and (d), 0.1 to 18 mol for component (d), and 60 to 100 mol for component (b). The total quantity of the functional groups of components (b), (c), and (d) is likewise specified as 80 to 110 mol per 100 mol of isocyanate groups of component (a). COPYRIGHT: (C)1997,JPO
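The ratio limits above are easier to see with a small worked check. The sketch below (a minimal Python illustration, using purely hypothetical mole quantities that are not part of the patent) simply tests whether a candidate formulation falls inside the claimed functional-group ranges per 100 mol of isocyanate groups of component (a).

```python
# Illustrative check of the functional-group ratios stated in the abstract,
# expressed per 100 mol of isocyanate (NCO) groups of component (a).
# All numeric inputs in the example are hypothetical.

def within(value, lo, hi):
    """Return True if value falls inside the inclusive range [lo, hi]."""
    return lo <= value <= hi

def check_formulation(b_mol, c_mol, d_mol):
    """b_mol, c_mol, d_mol: mol of reactive functional groups contributed by
    components (b), (c), (d) per 100 mol of NCO groups of component (a)."""
    return {
        "(c)+(d) total 1-35 mol":       within(c_mol + d_mol, 1, 35),
        "(d) alone 0.1-18 mol":         within(d_mol, 0.1, 18),
        "(b) alone 60-100 mol":         within(b_mol, 60, 100),
        "(b)+(c)+(d) total 80-110 mol": within(b_mol + c_mol + d_mol, 80, 110),
    }

# Example: 85 mol from (b), 10 mol from (c), 5 mol from (d) per 100 mol NCO
# satisfies all four stated ranges.
if __name__ == "__main__":
    for rule, ok in check_formulation(85, 10, 5).items():
        print(f"{rule}: {'OK' if ok else 'out of range'}")
```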
Is History History? Identity Politics and Teleologies of the Present Author’s Note (Aug 19, 2022) My September Perspectives on History column has generated anger and dismay among many of our colleagues and members. I take full responsibility that it did not convey what I intended and for the harm that it has caused. I had hoped to open a conversation on how we “do” history in our current politically charged environment. Instead, I foreclosed this conversation for many members, causing harm to colleagues, the discipline, and the Association. A president’s monthly column, one of the privileges of the elected office, provides a megaphone to the membership and the discipline. The views and opinions expressed in that column are not those of the Association. If my ham-fisted attempt at provocation has proven anything, it is that the AHA membership is as vocal and robust as ever. If anyone has criticisms that they have been reluctant or unable to post publicly, please feel free to contact me directly. I sincerely regret the way I have alienated some of my Black colleagues and friends. I am deeply sorry. In my clumsy efforts to draw attention to methodological flaws in teleological presentism, I left the impression that questions posed from absence, grief, memory, and resilience somehow matter less than those posed from positions of power. This absolutely is not true. It wasn’t my intention to leave that impression, but my provocation completely missed the mark. Once again, I apologize for the damage I have caused to my fellow historians, the discipline, and the AHA. I hope to redeem myself in future conversations with you all. I’m listening and learning. Twenty years ago, in these pages, Lynn Hunt argued “against presentism.” She lamented historians’ declining interest in topics prior to the 20th century, as well as our increasing tendency to interpret the past through the lens of the present. Hunt warned that this rising presentism threatened to “put us out of business as historians.” If history was little more than “short-term . . . identity politics defined by present concerns,” wouldn’t students be better served by taking degrees in sociology, political science, or ethnic studies instead? The discipline did not heed Hunt’s warning. From 2003 to 2013, the number of PhDs awarded to students working on topics post-1800, across all fields, rose 18 percent. Meanwhile, those working on pre-1800 topics declined by 4 percent. During this time, the Wall Street meltdown was followed by plummeting undergraduate enrollments in history courses and increased professional interest in the history of contemporary socioeconomic topics. Then came Obama, and Twitter, and Trump. As the discipline has become more focused on the 20th and 21st centuries, historical analyses are contained within an increasingly constrained temporality. Our interpretations of the recent past collapse into the familiar terms of contemporary debates, leaving little room for the innovative, counterintuitive interpretations. This trend toward presentism is not confined to historians of the recent past; the entire discipline is lurching in this direction, including a shrinking minority working in premodern fields. If we don’t read the past through the prism of contemporary social justice issues—race, gender, sexuality, nationalism, capitalism—are we doing history that matters? 
This new history often ignores the values and mores of people in their own times, as well as change over time, neutralizing the expertise that separates historians from those in other disciplines. The allure of political relevance, facilitated by social and other media, encourages a predictable sameness of the present in the past. This sameness is ahistorical, a proposition that might be acceptable if it produced positive political results. But it doesn’t. In many places, history suffuses everyday life as presentism; America is no exception. We suffer from an overabundance of history, not as method or analysis, but as anachronistic data points for the articulation of competing politics. The consequences of this new history are everywhere. I traveled to Ghana for two months this summer to research and write, and my first assignment was a critical response to The 1619 Project: A New Origin Story for a forthcoming forum in the American Historical Review. Whether or not historians believe that there is anything new in the New York Times project created by Nikole Hannah-Jones, The 1619 Project is a best-selling book that sits at the center of current controversies over how to teach American history. As journalism, the project is powerful and effective, but is it history? This new history often ignores the values and mores of people in their own times. When I first read the newspaper series that preceded the book, I thought of it as a synthesis of a tradition of Black nationalist historiography dating to the 19th century with Ta-Nehisi Coates’s recent call for reparations. The project spoke to the political moment, but I never thought of it primarily as a work of history. Ironically, it was professional historians’ engagement with the work that seemed to lend it historical legitimacy. Then the Pulitzer Center, in partnership with the Times, developed a secondary school curriculum around the project. Local school boards protested characterizations of Washington, Jefferson, and Madison as unpatriotic owners of “forced labor camps.” Conservative lawmakers decided that if this was the history of slavery being taught in schools, the topic shouldn’t be taught at all. For them, challenging the Founders’ position as timeless tribunes of liberty was “racially divisive.” At each of these junctures, history was a zero-sum game of heroes and villains viewed through the prism of contemporary racial identity. It was not an analysis of people’s ideas in their own time, nor a process of change over time. In Ghana, I traveled to Elmina for a wedding. A small seaside fishing village, Elmina was home to one of the largest Atlantic slave-trading depots in West Africa. The morning after the wedding, a small group of us met for breakfast at the hotel. As we waited for several members of our party to show up, a group of African Americans began trickling into the breakfast bar. By the time they all gathered, more than a dozen members of the same family—three generations deep—pulled together the restaurant’s tables to dine. Sitting on the table in front of one of the elders was a dog-eared copy of The 1619 Project. Later that afternoon, my family and I toured Elmina Castle alongside several Ghanaians, a Dane, and a Jamaican family. Our guide gave a well-rehearsed tour geared toward African Americans. American influence was everywhere, from memorial plaques to wreaths and flowers left on the floors of the castle’s dungeons. 
Arguably, Elmina Castle is now as much an African American shrine as a Ghanaian archaeological or historical site. As I reflected on breakfast earlier that morning, I could only imagine the affirmation and bonding experienced by the large African American family—through the memorialization of ancestors lost to slavery at Elmina Castle, but also through the story of African American resilience, redemption, and the demand for reparations in The 1619 Project. Yet as a historian of Africa and the African diaspora, I am troubled by the historical erasures and narrow politics that these narratives convey. Less than one percent of the Africans passing through Elmina arrived in North America. The vast majority went to Brazil and the Caribbean. Should the guide’s story differ for a tour with no African Americans? Likewise, would The 1619 Project tell a different history if it took into consideration that the shipboard kin of Jamestown’s “20. and odd” Africans also went to Mexico, Jamaica, and Bermuda? These are questions of historical interpretation, but present-day political ones follow: Do efforts to claim a usable African American past reify elements of American hegemony and exceptionalism such narratives aim to dismantle? The Elmina tour guide claimed that “Ghanaians” sent their “servants” into chattel slavery unknowingly. The guide made no reference to warfare or Indigenous slavery, histories that interrupt assumptions of ancestral connection between modern-day Ghanaians and visitors from the diaspora. Similarly, the forthcoming film The Woman King seems to suggest that Dahomey’s female warriors and King Ghezo fought the European slave trade. In fact, they promoted it. Historically accurate renderings of Asante or Dahomean greed and enslavement apparently contradict modern-day political imperatives. Hollywood need not adhere to historians’ methods any more than journalists or tour guides, but bad history yields bad politics. The erasure of slave-trading African empires in the name of political unity is uncomfortably like right-wing conservative attempts to erase slavery from school curricula in the United States, also in the name of unity. These interpretations are two sides of the same coin. If history is only those stories from the past that confirm current political positions, all manner of political hacks can claim historical expertise. Too many Americans have become accustomed to the idea of history as an evidentiary grab bag to articulate their political positions, a trend that can be seen in recent US Supreme Court decisions. The word “history” appears 95 times in Clarence Thomas’s majority opinion overturning New York’s conceal-carry gun law. Likewise, Samuel Alito invokes “history” 67 times in his opinion overturning Roe v. Wade. Despite amicus briefs written by professional historians in both cases (including one co-authored by the AHA and the Organization of American Historians), the court’s majority deploys only those pieces of historical evidence that support their preconceived political biases. The majority decisions are ahistorical. In the conceal-carry case, Justice Thomas cherry-picks historical data, casting aside restrictions in English common law as well as historical examples of limitations on gun rights in the United States to illustrate America’s so-called “tradition” of individual gun ownership rights.
Then, Thomas uses this “historical” evidence to support his interpretation of the original meaning of the Second Amendment as it was written in 1791, including the right of individuals (not a “well regulated Militia”) to conceal and carry automatic pistols. In Dobbs v. Jackson, Justice Alito ignores legal precedents punishing abortion only after “quickening,” concluding: “An unbroken tradition of prohibiting abortion on pain of criminal punishment persisted from the earliest days of the common law until 1973.” This is not history; it is dilettantism. In his dissent to NYSRPA v. Bruen, Justice Stephen Breyer disparagingly labels the majority’s approach “law office history.” He recognizes that historians engage in research methods and interpretive approaches incompatible with solving modern-day legal, political, or economic questions. As such, he argues that history should not be the primary measure for adjudicating contemporary legal issues. Professional historians would do well to pay attention to Breyer’s admonition. The present has been creeping up on our discipline for a long time. Doing history with integrity requires us to interpret elements of the past not through the optics of the present but within the worlds of our historical actors. Historical questions often emanate out of present concerns, but the past interrupts, challenges, and contradicts the present in unpredictable ways. History is not a heuristic tool for the articulation of an ideal imagined future. Rather, it is a way to study the messy, uneven process of change over time. When we foreshorten or shape history to justify rather than inform contemporary political positions, we not only undermine the discipline but threaten its very integrity. James H. Sweet is president of the AHA.
https://www.historians.org/publications-and-directories/perspectives-on-history/september-2022/is-history-history-identity-politics-and-teleologies-of-the-present
Nanoscience studies describe natural phenomena at the submicron scale. Below a critical nanoscale limit, the physical, chemical, and biological properties of materials show a marked departure in their behavior compared to the bulk. At the nanoscale, energy conversion is dominated by phonons, whereas at larger scales, electrons determine the process. The surface-to-volume ratio at the nanoscale is immense, and interfacial interactions are markedly more important than at the macroscopic level, where the majority of the material is shielded from the surface. These properties render nanoparticles significantly different from their larger counterparts. Nano-enabled drug delivery systems have resulted from multidisciplinary cooperation aimed at improving drug delivery, and nanotechnology brings significant improvements in thermodynamic and delivery properties. Hybrid nanodelivery systems, i.e., membranes with nanopores that can gate stimuli-responsive drug release, could be a future development. Nanotechnology will improve current drug delivery and create novel future delivery systems. The fundamental properties and challenges of nanodelivery systems are discussed in this review.
https://www.degruyter.com/view/NANO/nano.0034.00039?rskey=riAc58&result=3
Dr. Jia joined the National Center for Computational Hydroscience and Engineering, the University of Mississippi, in 1990 as a Post-Doctoral Research Associate. He was promoted to Research Assistant Professor (1994), Associate Professor (2000) and Professor (2006). In the past twenty years at NCCHE, he has been the major developer of the two- and three-dimensional computational models CCHE2D and CCHE3D. These two free-surface turbulent flow models have been developed, verified, validated, and refined using many sets of analytical solutions and physical model data, and have been widely applied to research and engineering projects. With these models, detailed 3D turbulent flow structures and general flow patterns in channels, lakes, reservoirs, estuaries and around hydraulic structures can be modeled successfully. In addition to free-surface flow hydrodynamics, capabilities such as sediment transport, bank erosion, water quality and pollutant transport, etc., are developed in these models. To enhance the efficiency of applications, these capabilities are integrated and operated with a user-friendly Graphic User Interface. Dr. Jia has published over 100 technical papers in the areas of computational model development, verification, validation and applications, as well as numerous technical reports of engineering projects. Recently, his research group has extended the capabilities of the numerical models to simulate flood-associated phenomena, including dam-break flows, dam-break/levee-breaching processes, pollutant transport due to floods, etc. A levee closure simulator has been developed to simulate the engineering practice of closing a breached levee with sandbags and/or rocks.
Dr. Ozeren has numerous publications in journals and conference proceedings. He is a member of the American Society of Civil Engineers (ASCE), the International Association for Hydro-Environment Engineering and Research (IAHR), and the American Geophysical Union (AGU), and serves as an officer for the ASCE Hydraulic Measurements Committee.
Emeritus Faculty
Research Scientists
Research Associates: None.
Visiting Research Associates: None.
Post-Doctoral Research Associates: None.
Research Staff
Administrative Staff
Graduate Students
He is currently doing experimental work involving the study of dam-break flows of granular-liquid mixtures; his work involves image processing and data analysis, skills he also applies to various other projects.
Student Assistants: None.
Summer Interns: None.
https://www.ncche.olemiss.edu/people/
Two-thirds of all people eventually experience some significant loss of mental lucidity and independence as a result of aging. Many people 60 years and older experience significant cognitive decline, including declines in memory, concentration, clarity of thought, focus and judgment.
Physically, what happens to our brain as years go by...
Probable reasons for the change...
Surprisingly, a certain percentage of people continue to function quite normally even as they age. So the causes of memory loss, poor concentration and focus, and the inability to function independently as we age may not lie only in aging itself, as previously thought, but in a combination of other factors, such as brain-unhealthy behaviors and habits, insufficient mental stimulation, limited thought or response control strategies, brain-unhealthy or inadequate supplements, lack of novel experiences, and lack of sufficient social interaction and cooperation.
Ways to delay the effects of aging...
To stay mentally sharp, you need to work your mental muscles each and every day. Get involved in something that keeps your brain busy, such as taking a new class, exercise, martial arts, mind games, or woodworking. Any activity that involves concentration will help exercise the mind and keep it strong.
Mental stimulation
After 40, taking up a new language, course, or art class, whether by joining a formal class or learning on your own, is beneficial. As long as you learn something new, the nerve cells in your brain will grow and the connections between them will continue to strengthen. Woodworking in particular has been found to help the brain: the problem solving, planning, and visual and spatial work of rotating an object in your mind to figure out how parts will fit together strengthen the part of the brain that controls spatial relations, the ability to recognize how things piece together. So, KEEP ON WOODWORKING and don't forget to pay your dues when the time comes.
https://bayareawoodworkers.org/OldNewsletters/Old2009/jul09/jul09thisnthat.html
The New York City Council is expected to vote this Wednesday on four critical bills that together comprise one of the country’s most comprehensive efforts to reduce energy consumption in existing buildings. These four bills (Proposed Int. Nos. 476-A, 564-A, 967-A and 973-A) would not only lower energy costs for consumers and result in significant job creation, improved conditions in the buildings in which we live and work, and fewer emissions of harmful pollutants, but they also represent a major step forward in the City’s effort to reduce its carbon footprint. Energy efficiency is an important resource and is the cheapest, easiest and fastest way to meet New York City’s energy needs while reducing the harmful impacts of pollution. Buildings represent our largest source of efficiency that is just waiting to be tapped - particularly in New York City, where energy use in buildings is responsible for nearly 80% of the City’s greenhouse gas emissions. Unfortunately, much of the energy used in our buildings is wasted. The legislation included in New York City’s “Greener, Greater Buildings Plan”, announced in April by Mayor Bloomberg and City Council Speaker Quinn, would help to stop this wasteful spending, as it is estimated to save New Yorkers more than $700 million annually in energy costs through increased energy efficiency. The bills would also reduce greenhouse gas emissions by nearly 5%, thus going a long way towards achieving the City’s target of reducing such emissions 30% by 2030 (the centerpiece of PlaNYC, which was later codified in law by the City Council in Local Law 55 of 2007). In addition, the package, which is expected to create over 17,000 construction-related jobs in the coming years, can help the City become a center for green jobs and innovation. As demand for energy efficiency grows, here and throughout the country, New York City is poised to position itself as a national leader.
-- Int. 564-A would create, for the first time, a New York City Energy Conservation Code. This bill would close a significant loophole in the current New York State Energy Code by requiring that all renovations comply with the Code and meet greater efficiency requirements, not just those that impact at least 50% of a building subsystem. This issue is particularly important for buildings in New York City, where renovations don’t typically happen “building-wide” but rather on a piecemeal basis.
The three other bills apply to large buildings, specifically buildings greater than 50,000 square feet, or two or more buildings on the same tax lot that together exceed 100,000 square feet:
-- Int. 476-A would make building performance more transparent by requiring that buildings annually assess their energy and water consumption using the U.S. EPA’s free, online benchmarking tool (EPA Portfolio Manager). Doing so would allow building owners to establish baseline energy and water consumption data and to compare their buildings’ performance, over time, to themselves, as well as to other buildings of a similar size and type. As the saying goes, “you can’t manage what you don’t measure.”
-- Int. 967-A would require building owners to conduct energy audits and retro-commissioning once every ten years. These measures would identify ways for building owners to save money by highlighting opportunities that exist to make their buildings more energy-efficient and by “tuning up” building systems so that they’re operating as efficiently as possible.
The bill would also require that the City “lead by example” and implement in its buildings those energy efficiency measures that would pay for themselves through energy savings within 7 years.
-- Int. 973-A would require that buildings upgrade to more efficient lighting and that commercial tenant spaces be sub-metered by 2025. Lighting represents approximately 20% of energy consumption in New York City buildings. Sub-metering will ensure that tenants have the information and incentive to be more efficient in their energy usage.
In addition to the State and federal funding that currently exists to help pay for these measures, new and expanded financing options for energy efficiency are coming on line, and will make it even easier for building owners to act on the energy efficiency opportunities identified through the legislation. New York City’s green buildings legislation is a carefully crafted, sensible package that has been further refined since its introduction to take into consideration the concerns of a wide range of stakeholders. The result is an excellent, ground-breaking initiative that is a win-win proposition for New York City consumers and the environment: it will not only result in a multitude of benefits for New York City, but can also serve as a model for other cities around the country and the world. The Mayor and the City Council should be applauded for their leadership on this effort.
https://www.opposingviews.com/category/new-york-city-s-buildings-about-to-get-greener
The spine is made up of small bones, called vertebrae, which are stacked on top of one another and create the natural curves of the back. These bones connect to create a canal that protects the spinal cord. The spinal cord extends from the skull to the lower back and travels through the middle of the stacked vertebrae. Nerves branch out from the spinal cord through openings in the vertebrae and carry messages between the brain and muscles. Discs sit in between the vertebrae and work as shock absorbers for the spine. Discs have a jelly-like center (nucleus) and an outer ring (annulus). Between the back of the vertebrae are small facet joints that help the spine move. They have a cartilage surface, like a hip or knee joint.
NECK/SPINE CONDITIONS
Discs are soft, rubbery pads between the hard bones, or vertebrae, of the spinal column. They are composed of an outer shell of tough cartilage that surrounds a nucleus made of gel-like cartilage. Discs allow the back to flex or bend and act as shock absorbers. A bulging disc occurs when the outer cartilage of the disc bulges out around its circumference. A herniated or ruptured disc happens when the gel in the nucleus pushes through the outer edge of the disc and back toward the spinal canal, putting pressure on sensitive nerves.
Causes of Bulging or Herniated Discs
- Age-related wear and tear, disc degeneration
- Disc dehydration
- Back or neck strain due to repetitive physical activity or heavy lifting
- Poor posture
- A traumatic event causing injury
- Genetics
Symptoms of Bulging or Herniated Discs
- In the lumbar spine: pain, numbness, tingling, and weakness in the lower back that extends down the leg
- In the cervical spine: pain, numbness, tingling, and weakness in the neck, arms, hands, and/or head
- In the thoracic spine: pain in the upper back, radiating through the stomach or chest
Diagnosis
After discussing symptoms and medical history, the DOC orthopedic surgeon will perform a physical examination, which may include tests for muscle weakness, loss of sensation, gait, and reflexes, and may order X-rays and an MRI or CT scan to help confirm the diagnosis of a bulging or herniated disc.
The human spine is made of 32 separate vertebral segments that are separated by shock-absorbing intervertebral discs. Facet joints between every vertebral segment are covered with protective cartilage. After an injury, or when facet joint cartilage wears away and bone rubs against bone, the body may add bone to the damaged area in an effort to support the vertebral column. Bone spurs (osteophytes) form on the ends of bones, especially in the facet joints where bones meet.
Causes of Bone Spurs
- Age
- Congenital or hereditary factors
- Nutrition
- Lifestyle, including poor posture
- Traumatic forces, sports-related injuries and motor vehicle accidents
Symptoms of Bone Spurs
- Dull pain in the neck or lower back when standing or walking
- Pain radiating into the shoulders if spurs originate in the cervical spine (neck area)
- Pain radiating into the rear and thigh if spurs originate in the lumbar spine (lower back area)
- Pain that worsens with activity and improves with rest
- Stiffness
Diagnosis
If bone spurs contribute to nerve compression in the spine, the condition may cause neurological symptoms, such as pain, numbness, and/or weakness in one or both arms or legs. After a physical examination, the DOC orthopedic surgeon may order X-rays or an MRI to help locate the bone spur and the source of pain.
The coccyx is the terminal segment of the spine.
Coccydynia occurs when the coccyx or the surrounding tissue is damaged, causing pain and discomfort at the base of the spine, especially when seated.
Causes of Coccydynia
- Injury, direct trauma
- Childbirth
- Repetitive stress
- Pain from a herniated or degenerative disc
- Poor posture
- Being overweight or underweight
- Aging
- Infection
- Cancer
- No identifiable origin
Symptoms of Coccydynia
- Pain and tenderness in the tailbone region
- Minor bruising
- Difficulty standing after sitting
Diagnosis
DOC's healthcare team of orthopedic surgeons, PAs, physical therapists, and pain management specialists will evaluate the guest's pain and any mobility issues to determine the correct diagnosis. If the diagnosis is coccydynia, the vast majority of guests respond to conservative treatments. More aggressive treatments may be discussed if conservative treatments fail to provide relief.
Kyphosis is a spinal disorder in which an excessive outward curve of the thoracic spine results in an abnormal rounding of the upper back. In the case of a severe curve, the condition is called "hunchback." The thoracic spine should have a natural curve of between 20 and 45 degrees.
Causes of Kyphosis
- Postural kyphosis, associated with poor posture and slouching
- Scheuermann's kyphosis, caused by a structural abnormality
- Congenital kyphosis, which occurred in utero when the spine failed to develop
Symptoms of Kyphosis
- Rounded shoulders
- A visible hump on the back
- Mild back pain
- Fatigue
- Spine stiffness
- Tight hamstrings
- Weakness, numbness or tingling in the legs
- Loss of sensation
- Shortness of breath or other breathing difficulties
Diagnosis
The DOC orthopedic surgeon or PA will review the guest's medical history, general health and symptoms, and examine the back for areas of tenderness. X-rays provide images of dense structures, such as bone, and help to determine bony abnormalities and measure the degree of the kyphotic curve.
Myofascial pain syndrome (MPS) refers to pain and inflammation in the body's fascia, the connective tissue that covers the muscles. Myofascial pain syndrome can involve pain in a single muscle or a muscle group.
Causes of Myofascial Pain Syndrome
- Injury or excessive strain on a muscle, muscle group, ligament, or tendon
- Trauma to the musculoskeletal system or intervertebral discs
- Prolonged static postures, lack of activity
- High body mass index (BMI), obesity
- Fatigue, sleeplessness and emotional stress
- Nutritional deficiencies
- Inflammatory conditions
- Hormonal changes, post menopause
- Tobacco use
Symptoms of MPS
- Specific trigger or tender points that worsen with activity or stress
- Fatigue
- Depression
- Sleep disorders
- Headaches
- Behavioral disturbances
Diagnosis
Trigger points can be identified by the DOC orthopedic surgeon when pressure applied to an area of the body results in pain. Physical therapy methods are considered the best treatments for myofascial pain syndrome. In some chronic cases of myofascial pain, the DOC pain management specialist may prescribe a multidisciplinary combination of therapies and medications to treat coexisting conditions, such as insomnia and depression.
The spine is made of 24 bones called vertebrae. The spinal cord runs through the canal in the center of these bones. Nerve roots split from the cord and travel between the vertebrae into various areas of the body. When these nerve roots become pinched or damaged, the resulting symptoms are called radiculopathy.
Causes of Radiculopathy
- Wear and tear
- Arthritis
- Injury
- Stenosis
- Bone spurs
- Bulging or herniated disc
Symptoms of Radiculopathy
- Cervical radiculopathy: pain that radiates into the shoulder, muscle weakness and numbness that travels down the arm and into the hand
- Lumbar radiculopathy: pain, weakness, numbness, abnormal sensations, and/or loss of reflexes in the back and legs
- Thoracic radiculopathy: pain and numbness that wraps around to the front of the body
- Severe symptoms: poor coordination, difficulty walking and paralysis
Diagnosis
After discussing medical history and general health, the DOC orthopedic surgeon or PA will ask about symptoms and look for muscle weakness, loss of sensation, or any change in reflexes. X-rays provide images of dense structures, such as bone, and an MRI or CT scan will reveal narrowing of the spinal canal and damage to soft tissues and/or the spinal cord and nerve roots.
The sciatic nerve is the longest and largest nerve in the body, measuring three-quarters of an inch in diameter. It originates in the lower back, in the lumbar spine, and extends with nerve branches all the way to the feet. The sciatic nerve and its nerve branches enable movement and feeling in the thigh, knee, calf, ankle, foot, and toes. When the sciatic nerve becomes compressed or irritated, pain, numbness, and tingling radiate in the leg along the course of the sciatic nerve, from the buttocks down the back of the thigh into the calf and foot.
Causes of Sciatica
- Age-related wear and tear
- Back or neck strain from repetitive physical activity or heavy lifting
- Poor posture
- Lower back injury
- Genetics
Symptoms of Sciatica
- Sharp, shooting, constant or intermittent pain
- Pain exaggerated by physical activity or sitting in one position for a long time
- Pain in the lower half of the body: lower back, hips, buttocks, and legs
- Leg cramps
- Numbness, burning, or tingling down the leg
- Difficulty walking
Diagnosis
In order to diagnose sciatica, the DOC orthopedic surgeon or PA will discuss symptoms and family history and perform a thorough examination to help pinpoint the irritated nerve. X-rays and a CT scan or MRI help to confirm the diagnosis and determine which nerve roots are affected.
Thirty-three small bones, the vertebrae, are stacked on top of one another and create the natural curves of the back and the central canal that protects the spinal cord. Scoliosis causes the bones of the spine to twist or rotate sideways, so instead of a straight line down the middle of the back, the spine looks more like the letter "C" or "S." Scoliosis causes a side-to-side curvature of the spine.
Causes of Scoliosis
- Hereditary factors
- Spinal infection
- Spinal injury
- Birth defect
- Neuromuscular conditions like muscular dystrophy and cerebral palsy
- Generally unknown causes
Symptoms of Scoliosis
- Visible spinal curvature
- Uneven shoulders or hips
- Ribs that stick out farther on one side than the other
- Body leans to one side
Diagnosis
Spinal curvature is a complex disorder and must be diagnosed by a DOC orthopedic spine surgeon specialist. The curve is measured, and its severity diagnosed, by the number of degrees. Radiographic tests are required for an accurate and positive diagnosis of scoliosis. X-rays show the structure of the vertebrae and any deformities. A CT scan and/or MRI provide images of the spinal canal, its contents and structures.
Spinal arthritis is also referred to as osteoarthritis of the spine, degenerative joint disease, and arthritis of the facet joints.
Spinal arthritis is also referred to as osteoarthritis of the spine, degenerative joint disease, and arthritis of the facet joints. The facet joints, formed where adjoining vertebrae meet, are covered with healthy cartilage so the bones can move smoothly against one another. Spinal arthritis is characterized by the breakdown of the cartilage that cushions the ends of these bones.

Causes of Spinal Arthritis
- Aging
- Pressure overload on the joints
- Injury

Symptoms of Spinal Arthritis
- Back and/or neck stiffness and pain with the first movements of the morning
- Pain that subsides during the day
- Pain and stiffness that worsen in the evening
- Pain that disrupts sleep
- Steady or intermittent pain aggravated by motion
- Swelling and warmth in joints, particularly during weather changes
- Tenderness in the affected area
- Loss of flexibility
- Difficulty twisting or bending
- Crepitus, especially in the neck
- Pinching, tingling, or numbness sensations

Diagnosis
The DOC healthcare team will assess the guest's overall health, musculoskeletal status, nerve function, reflexes, and problematic spinal joints. The surgeon may order X-rays to identify cartilage loss, compression fractures, and bone spurs, and an MRI to show detailed images of the spinal cord, nerve roots, discs, ligaments, and surrounding tissues and spaces.

There are four major components of the spine: vertebrae, joints, discs, and nerves. The 24 movable bones of the vertebral column link together to form a tunnel that protects the nerves and spinal cord. Joints are located at each vertebra and provide flexibility and stability within the vertebral column. Discs sit between the vertebrae, acting as shock absorbers and providing flexibility. Spinal nerves exit between the vertebrae, near each disc, and pass into the arms and legs. Spinal cord compression can occur anywhere from the neck (cervical spine) down to the lower back (lumbar spine).

Causes of Spinal Nerve Compression
- Osteoarthritis
- Abnormal spine alignment (scoliosis)
- Injury to the spine
- Spinal tumor
- Bone diseases
- Rheumatoid arthritis
- Infection

Symptoms of Spinal Nerve Compression
- Pain and stiffness in the neck, back, or lower back
- Burning pain that spreads to the arms, buttocks, or legs
- Numbness, cramping, or weakness in the arms, hands, or legs
- Pressure on the nerves in the lumbar region
- Loss of bowel or bladder control

Diagnosis
The DOC orthopedic surgeon or PA will ask questions about symptoms and perform a complete physical exam, looking for signs of spinal compression such as loss of sensation, weakness, and abnormal reflexes. Tests to confirm the diagnosis may include X-rays, which provide images of the bones and the alignment of the spine, and CT or MRI scans, which show details of the spinal cord and surrounding structures.

The normal wear-and-tear effects of aging can lead to narrowing of the spinal canal, a condition called spinal stenosis. When the space around the spinal cord narrows, more pressure is put on the spinal cord and the spinal nerve roots. Spinal nerves relay sensation in specific parts of the body, and pressure on the nerves can cause pain in the areas those nerves supply.
Symptoms of Spinal Stenosis
Neck, cervical spine:
- Numbness, tingling, and weakness in the arm, hand, shoulders, or legs
- Changes in fine motor skills
- Problems with walking and balance
- Back and neck pain

Lower back, lumbar spine:
- Numbness, tingling, and weakness in the foot or leg that radiates from the lower back to the buttocks and legs
- Pain or cramping in one or both legs
- Problems with walking and balance
- Lower back pain

Diagnosis
After discussing symptoms and medical history, the DOC surgeon or PA will examine the back, including range of motion and pain limitations. To confirm the diagnosis of spinal stenosis, the physician may order X-rays, which show signs of aging and bone spurs, and an MRI, which provides images of soft tissues, muscles, discs, nerves, and the spinal cord.

Spondylosis is a medical term that describes any degenerative disease of the spine. Spine degeneration, or wear and tear, can affect the discs and facet joints between the vertebrae. The discs are the cartilage cushions, and the facet joints are the joints between vertebrae that make the back flexible and enable a person to bend and twist. As the discs in the spine age, they lose water content, dry out, weaken, collapse, and lose height. The smooth, slippery articular cartilage that covers and protects the facet joints may wear away, begin to degenerate, and develop arthritis. As the spine deteriorates, other spinal conditions can develop.

Causes of Spondylosis
Arthritis of the spine, herniated or bulging discs, an injury to the neck or back, poor posture, physically demanding work, or sports activities can cause spine degeneration.
https://www.directorthocare.com/our-services/back-doctor-neck-spine/