Let’s recreate the original Whomping Willow! It was added to the training grounds in COS. That location – and even the tree’s very presence in the miniature – is unique to that film. (In all the later films, the redesigned Whomping Willow was off the edge of the map, somewhere past Hagrid’s, and it was brought to life purely through CG and full-scale practical elements.)
The first step was to figure out the exact placement. I don’t have any floor plans that show precisely where it sat in relation to the other structures, but one of the behind-the-scenes features does give a decent shot of that area of the miniature. I lined up the camera angles and added a circle to mark the base of the tree. It seems to be just about dead center in that lawn! This further reinforces my belief that this area’s walls were redesigned for the express purpose of giving the Willow a more interesting setting. They almost create a kind of arena around it.
Next up: creating a base mesh. I knew I wasn’t going to be able to match the tree in the film perfectly, but I wanted to capture its essential rhythms, character, and scale. I ended up annotating film screenshots with color-coded numbers to help me keep track of the different major branches. It’s a strategy that served me well with the grand staircase model. I carried that color coding into the model itself as I started the base mesh.
As with the tree in the clock tower courtyard, I decided to use the Skin modifier in Blender. This lets you quickly extrude chains of vertices and apply a basic radius parameter to each, creating blocky forms that become tubular with the Subdivision Surface modifier. Here are a couple of work-in-progress shots:
As you can see, the base mesh is SUPER photorealistic. Can’t even tell it’s not a real tree! …right? (My girlfriend says it looks like a piece of corporate modern art, and I don’t disagree.) Don’t worry, things started to get a little better as I Boolean-ed all the branches into one object and began the sculpting process.
There appear to be some stony areas at the base. I experimented with sculpting them all as one mass. When that didn’t look right, I tried using physics simulation to “drop” all the stones on top of each other in a realistic pile. But sometimes simplest is best, and in the end, it was most expedient to separately sculpt each rock and lay them on top of each other manually.
Another pass of Skinned tubes allowed me to start blocking in the tree’s tangled roots: Obviously very rough; the idea is just to build some geometry that will take well to further sculpting. And while I admit I wasn’t SUPER excited about this switch from hard surface to organic modeling, I soon found myself enjoying the sculpting process once more. It’s gratifying to see these basic shapes start to look like something!
This was around when I realized my proportions were a little off. The main trunk needed to be a little bigger relative to all the branches. I made some adjustments and kept sculpting, working my way up into the knuckles and knobs where all the younger shoots will soon go: None of this is an absolutely perfect match with the film, but I’m certainly trying to stay close. That crevice on the left side of the trunk is based on one visible in the film; in my headcanon, that’s where the tunnel to the Shrieking Shack is. (We only see the entrance to the tunnel in the next film, and the Willow was redesigned for that one.)
By the way, in case it’s not obvious, I haven’t started texturing the tree or the rocks yet.
The flat coloration is just temporary while I continue refining the forms. I’ll save all that for the next post, as well as the addition of all the shoots on top that’ll really give the tree its characteristic look. But I think we’re off to a good start!
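Not from the original post, but for anyone curious how the Skin-modifier workflow described above looks in script form, here is a minimal Blender Python sketch. The branch path coordinates and radii are invented for illustration; in practice you would extrude the vertex chains interactively, as the post describes.

```python
import bpy
import bmesh

# Build a simple chain of vertices, then add Skin + Subdivision Surface modifiers
mesh = bpy.data.meshes.new("branch_chain")
obj = bpy.data.objects.new("BranchChain", mesh)
bpy.context.collection.objects.link(obj)

bm = bmesh.new()
points = [(0, 0, 0), (0.2, 0.1, 1.0), (0.5, 0.3, 1.8), (0.9, 0.2, 2.4)]  # rough branch path
verts = [bm.verts.new(p) for p in points]
for a, b in zip(verts, verts[1:]):
    bm.edges.new((a, b))          # connect the chain edge by edge
bm.to_mesh(mesh)
bm.free()

obj.modifiers.new("Skin", type='SKIN')          # blocky tubes around each edge
subsurf = obj.modifiers.new("Subsurf", type='SUBSURF')
subsurf.levels = 2                               # smooth the blocks into tubular forms

# Adding the Skin modifier creates a skin-vertex layer; set a per-vertex radius
for i, sv in enumerate(obj.data.skin_vertices[0].data):
    sv.radius = (0.25 - 0.05 * i, 0.25 - 0.05 * i)   # taper toward the branch tip
```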
https://hogwarts4d.home.blog/2021/04/04/starting-the-whomping-willow-cos-version/?like_comment=933&_wpnonce=f10eef5217
About steel: 1 cubic meter of steel weighs 7,900 kilograms [kg]; 1 cubic foot of steel weighs 493.18089 pounds [lbs]. Steel weighs 7.9 grams per cubic centimeter or 7,900 kilograms per cubic meter, i.e. the density of steel is 7,900 kg/m³. In the Imperial or US customary measurement system, the density is equal to 493.18 pounds per cubic foot [lb/ft³], or 4.57 ounces per cubic inch [oz/inch³].
Carbon steel pipe price per foot, steel pipe cost, API 5L (Jun 23, 2018). Carbon steel pipe price per meter, welded steel pipe:
- ERW: OD 1/8 inch-24 inch, WT max 26.5 mm
- SSAW: OD 16 inch-64 inch, WT max 65 mm
- LSAW: OD 219 mm-3120 mm, WT 3 mm-25 mm
Main standards: ASTM A 53 Grade A-D; API 5L A, B, X42-X70; BS 7, S185, S235, S275, S335, St12-St52.
How to calculate steel pipe weight per foot/meter by size: the steel pipe unit weight (kg/m or lb/ft) shall be calculated according to the formula below (the source page presents this as a large lookup table).
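The page itself does not reproduce the formula here, but the conventional approximation for plain-end steel pipe weight used in API/ASME-style weight tables is W ≈ (OD − WT) × WT × 0.02466, with OD and WT in millimetres and W in kg/m; the constant corresponds to a density of 7,850 kg/m³, slightly lower than the 7,900 kg/m³ quoted above. A minimal sketch, assuming that standard formula:

```python
import math

def pipe_weight_kg_per_m(od_mm: float, wt_mm: float, density_kg_m3: float = 7850.0) -> float:
    """Plain-end pipe mass per metre: cross-sectional steel area times density."""
    area_mm2 = math.pi * (od_mm - wt_mm) * wt_mm   # exact area of the circular ring
    return area_mm2 * 1e-6 * density_kg_m3         # mm^2 -> m^2, then * kg/m^3

# Example: 114.3 mm OD (4-inch NPS), 6.02 mm wall
print(round(pipe_weight_kg_per_m(114.3, 6.02), 2))  # ~16.07 kg/m, matching the 0.02466*(OD-WT)*WT rule
```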
http://www.penzionolympia.cz/AH32-plate/e49b8b4053748.html
The German health authorities reported 4369 new corona infections to the Robert Koch Institute (RKI) within one day. In addition, 62 more deaths were recorded within 24 hours. This is according to the RKI’s figures from Monday. Exactly one week ago, the RKI had recorded 4426 new infections and 116 new deaths within one day. On Mondays, the number of cases reported by the RKI is usually lower, partly because fewer tests are carried out on the weekend. The data reflect the status of the RKI dashboard as of 3:10 a.m.; subsequent changes or additions are possible.
Almost 2.4 million detected infections
The number of new infections reported within seven days per 100,000 inhabitants (seven-day incidence) was 61.0 nationwide on Monday morning according to the RKI – and thus higher than the previous day (60.2). Four weeks ago, on January 25, the incidence had been 111.2. Its previous high was reached on December 22nd at 197.6. The high of 1,244 newly reported deaths was reached on January 14th. Among the new infections registered within 24 hours, the highest value was reached on December 18 with 33,777 – but it contained 3500 late reports. Since the beginning of the pandemic, the RKI has counted 2,390,928 detected infections with Sars-CoV-2 in Germany (as of February 22nd, 3:10 am). The actual total number is likely to be significantly higher, since many infections are not detected. The RKI put the number of people who have recovered at around 2,198,000. The total number of people who died with or from a proven Sars-CoV-2 infection rose to 67,903. According to the RKI management report on Sunday afternoon, the nationwide seven-day R-value was 1.10 (previous day 1.07). This means that 100 infected people theoretically infect 110 more people. The value represents the course of infection 8 to 16 days ago. If it stays below 1 for a long time, the infection process subsides.
After about two months of closure and emergency care, daycare centers and primary schools will open again this Monday in another ten federal states. Federal Minister of Education Anja Karliczek demanded that “all available means to prevent virus transmission” be used in order to be able to keep schools running in the next few weeks. The latest development in the number of infections deserves the greatest attention, said Karliczek, who also referred to the feared spread of new virus variants. “That must also be taken into account in school operations. But I am sure that the federal states will consider this in their decisions on opening.” RKI President Lothar Wieler warned last week of more corona infections due to the new virus mutations. He expects more outbreaks in young adults, adolescents and children in the coming weeks, Wieler said.
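The seven-day incidence used throughout the article is simply the number of newly reported infections over the past seven days per 100,000 inhabitants. A minimal sketch of that arithmetic; the population figure of roughly 83.2 million is an assumption for illustration, not taken from the article:

```python
def seven_day_incidence(cases_last_7_days: int, population: int) -> float:
    """New infections over the last seven days per 100,000 inhabitants."""
    return cases_last_7_days / population * 100_000

# Assumed population of Germany (~83.2 million); an incidence of 61.0 then
# corresponds to roughly 50,000 newly reported cases in a week.
population_de = 83_200_000
print(round(seven_day_incidence(50_752, population_de), 1))  # ~61.0
```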
https://www.archyde.com/coronavirus-seven-day-incidence-continues-to-rise/
Topics 05. Prison Designation
Introduction
"If you are going through hell, keep going." Winston Churchill
There are 122 facility compounds across the country that have varying security levels, programs and healthcare levels to meet inmate needs. The BOP, and the BOP alone, has the authority to determine where a person will be incarcerated. While your lawyer may have given the judge a designation recommendation at sentencing, there are many other facets of you and your case that will influence the BOP’s final designation decision. There is probably nothing more important than knowing where your time will be served and how far that location will be from your family. Generally, the BOP does an excellent job of housing inmates within a 500-mile radius of their home address. Since the indictment, a substantial amount of information has been gathered on both you and your case. That information will now be summarized and evaluated to determine which prison best meets both your needs and the government’s requirements for incarceration. A federal judge has the authority to impose a term of imprisonment, levy a fine, establish restitution and order supervised release after prison, but the ultimate decision as to where a defendant will spend his/her prison time is at the sole discretion of the Bureau of Prisons. The following is a list of what a judge CANNOT order:
- Place of Incarceration
- Earlier Commencement of a Federal Sentence
- Credit Towards Sentence For Presentence Custody
- Referral Into RRC (Halfway House) or Home Detention
- Temporary Release On Furlough
- Participation In A Specific Program
- Participation In A Residential Drug Abuse Program
However, the judge can recommend any of the above, and the BOP will consider them unless the recommendation violates a statute. The judge does have jurisdiction to order participation in specific programs or placement in community custody as a special condition of either probation or supervised release. So if a federal judge has limits on where a defendant is sentenced, how do you get to the location that you believe best meets your needs (location and programs specifically)? The answer is that you try your best to get a recommendation from the judge and then back that up with information that best supports your position. The BOP adopts a judicial recommendation in over 70% of requests, and more than half of all cases that the BOP reviews for designation include some type of federal judge recommendation. It is important to know that some defendants are remanded into custody immediately and placed into Administrative facilities adjacent to the courthouse or sent to a county jail until the designation process is completed by the BOP. While the defendant gets credit for this time served, it is not a particularly pleasant experience. If you find yourself in this unlikely situation, hang in there. The BOP knows where you are and will get you to your final designation, but it might take a while (weeks or months). There are documented cases of defendants requesting that the judge take them into custody immediately so that they can begin their sentence. Their rationale is that the sooner they get into prison, the sooner they can get out. However, this is a huge mistake if self-surrendering was a possibility. Like those remanded into custody, defendants who request to be taken in immediately will find themselves in higher-security settings or county jails for weeks or months before landing at their final destination.
Going through prison transportation is a miserable experience, and the days you spend in this process feel like many days in a prison camp or Low. Take the few weeks to prepare for prison and do not be in a hurry to get the prison term started.
The figure depicts a prototype of a robotic-manipulator gripping device that includes two passive compliant fingers, suitable for picking up and manipulating objects that have irregular shapes and/or that are, themselves, compliant. The main advantage offered by this device over other robotic-manipulator gripping devices is simplicity: Because of the compliance of the fingers, force-feedback control of the fingers is not necessary for gripping objects of a variety of sizes, shapes, textures, and degrees of compliance. Examples of objects that can be manipulated include small stones, articles of clothing, and parts of plants. The device includes two base pieces that translate relative to each other to effect opening and closing of the compliant gripping fingers. Each finger is made of a piece of elastomeric tubing bent into a U shape and attached at both ends to one of the base pieces. This arrangement of the finger provides compliance, both in orientation and in translation along all three spatial dimensions. Because the specific application for which this device was designed involves picking up and cutting plant shoots for propagation of the plants, the device includes a cutting blade attached to one of the base pieces. By positioning the device to hold an object, then closing the fingers to grip the object, then driving the base pieces downward toward the object, one can cause the blade to cut the object into two pieces. Because, prior to cutting, the fingers are both holding the object and in contact with the surface on which the object is resting, it is possible to move the base pieces sideways simultaneously to center the blade while keeping the object immobile. The prototype gripper has been shown to be capable of picking up a small object. There is a need to refine the design of the gripper; in particular, there is a need to incorporate a sensor that would measure the position of an object relative to that of the gripper. Other aspects of the design expected to be refined in continuing development include the general problem of gripping, the method of actuation for closing the fingers, the shape of the fingers, fixturing, and cutting. This work was done by Raymond Cipra, NASA Summer Faculty Fellow from Purdue University, and Hari Das of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com/tsp under the Machinery/Automation category. NPO-21104.
https://www.techbriefs.com/component/content/article/tb/techbriefs/mechanics-and-machinery/959
Solution Idea: Control objectives tested by auditors.
- The auditor should inspect whether the inventory of ABC Chemicals is accurately, completely, and promptly recorded in the books and whether purchases are properly authorized.
- The auditor should inspect the relevant supporting documents for inventory purchased, and also check the inventory ledger cards.
- The auditor should observe whether a proper security system is used to protect the inventory, such as CCTV cameras and a locked warehouse.
- The auditor should review whether the principles of IAS 2 are properly followed, for example that inventory is valued at the lower of cost and net realisable value (NRV).
- The auditor should reconcile the physical inventory balance with the book balance of ABC Chemicals.
- The auditor should check that ABC Chemicals performs procedures to identify obsolete or slow-moving inventory, including an ageing analysis of inventory.
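As a quick illustration of the IAS 2 point above (valuing each line at the lower of cost and net realisable value), here is a minimal sketch; the item names and figures are invented for the example:

```python
# Value each inventory line at the lower of cost and net realisable value (IAS 2)
inventory = [
    {"item": "Solvent A", "qty": 100, "cost": 12.0, "nrv": 15.0},  # carried at cost
    {"item": "Resin B",   "qty": 40,  "cost": 30.0, "nrv": 22.0},  # written down to NRV
]

def carrying_value(lines):
    return sum(line["qty"] * min(line["cost"], line["nrv"]) for line in lines)

print(carrying_value(inventory))  # 100*12 + 40*22 = 2080
```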
https://www.whichquery.com/vu/forum/acc311-gdb/acc311-gdb-solution-discussion-fall-2019/
The Scholars for Social Responsibility Interest Group (SSRIG) invites abstracts for 5-minute lightning talks to be presented during our meeting at SMT 2021. Our meeting will be held online regardless of whether the conference is in person, in order to encourage more participation. https://societymusictheory.org/interest-groups/socialresponsibility
Description
The SSRIG aims to think about how activism and scholarship intersect within and across broad populations of music researchers, scholars, students, performers, educators and other kinds of musical thinkers. Activism looks and feels different depending on where we are in our career and what our career goals are. The purpose of this panel is to create a space for music theory scholars to share experiences, reflect on prior activism, meet with one another and collectively work to increase equity in music theory scholarship. Talks might address the following questions:
• How do we see the terms “scholar” and “activist” overlap and intersect in music theory? How do they diverge? Do our definitions of “scholar” include “activist”?
• When does teaching function as activism?
• How have institutions (broadly defined) advanced or inhibited our activism? How have scholars who have engaged in activism taken advantage of institutional resources?
We encourage personal narratives and direct accounts of activist work, answering such questions as:
• What are ways you work as an activist? What inspired you to join or start a particular cause or organization?
• How have you been able to leverage the privileges of various career stages toward your goals as an activist?
• How has your prior activist work utilized, or been hampered by, your career stage?
• How have you measured your successes and failures in prior activist work?
• What roles do affects like grief, pain, anger or frustration play in our aspirations toward change? What nourishes hope in all of our work?
We encourage scholars at every stage of their work to submit. We are looking for talks from individuals with a range of career backgrounds, stages and goals. We hope that participants will use this as an opportunity to share ideas and organize with others and that the session will encourage SMT members and friends to link with one another and find new projects.
Submit abstracts of around 250 words by July 1, 2021 to SSRIG2021 -at- gmail.com. Please include any access needs that you might have with your submission. Speakers will be notified of acceptance by the end of July.
https://listserv.unl.edu/cgi-bin/wa?A3=ind2105&L=AMS-ANNOUNCE&E=Quoted-printable&P=95027&B=--0000000000009985d205c198fb32&T=text%2Fhtml;%20charset=utf-8&XSS=3&header=1
Nuclear reactors use a type of nuclear reaction called nuclear fission. Another type of nuclear reaction - nuclear fusion - happens in the Sun and other stars. Nuclear power reactors use a reaction called nuclear fission. Two isotopes (atoms of an element with the same number of protons and electrons but different numbers of neutrons) in common use as nuclear fuels are uranium-235 and plutonium-239. However, uranium-235 is used in most nuclear reactors.
Splitting atoms
'Fission' is another word for splitting. The process of splitting a nucleus (the central part of an atom, which contains protons and neutrons and has most of the mass of the atom) is called nuclear fission. Uranium or plutonium isotopes are normally used as the fuel in nuclear reactors because their nuclei are relatively large and easy to split. For fission to happen, the uranium-235 or plutonium-239 nucleus must first absorb a neutron. When this happens:
- The nucleus splits into two smaller nuclei
- Two or three neutrons are released
- Some energy is released
The additional neutrons released may be absorbed by other uranium or plutonium nuclei, causing them to split. Even more neutrons are then released, which in turn can split more nuclei. This is called a chain reaction. The chain reaction in nuclear reactors is controlled to stop it going too fast.
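To make the bullet points above concrete, one commonly quoted example of a uranium-235 fission reaction is the one below; the specific fission products vary from event to event, and barium-141 with krypton-92 is just a frequently cited pair. Both mass numbers (1 + 235 = 141 + 92 + 3) and atomic numbers (92 = 56 + 36) balance.

```latex
{}^{1}_{0}\mathrm{n} + {}^{235}_{92}\mathrm{U} \;\longrightarrow\; {}^{141}_{56}\mathrm{Ba} + {}^{92}_{36}\mathrm{Kr} + 3\,{}^{1}_{0}\mathrm{n} + \text{energy}
```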
http://www.plasma-physics.com/NuclearFission/nuclear-fission-and-fusion-worksheet
Freelance writer Rebekah Denn and cookbook author Cathy Barrow recently joined The Washington Post Food staff to answer questions about all things edible. The following are edited excerpts. Recipes whose names are capitalized can be found in the Recipe Finder at washingtonpost.com/recipes. Q: I was a little slow to catch the whole grain train. Since switching to whole wheat (particularly bread), the more I read, I find marketing is the biggest challenge to choosing a low-carb whole wheat bread. I read that multi-grain doesn’t mean as much as whole wheat, and some whole wheat breads aren’t getting you the healthiest whole grain. Given my conundrum, I thought about baking whole wheat. When I looked for whole wheat flour, I got just as confused as when I looked for whole wheat bread. What should I look for in whole wheat flour (or brands) to get the benefits I’m looking for, without being tricked by marketing? A: Labels can be confusing here, as with most products marketed as “healthful.” I usually find the Whole Grains Council’s materials are a useful reference – its guide to labels is online. For baking, it’s usually safe to substitute about half of the all-purpose flour in a recipe with whole wheat flour, but if you want to go 100 percent whole-wheat, King Arthur Flour has some good guidelines on that. Rebekah Denn Q: I put a teaspoon of citric acid into my whole canned tomatoes rather than the 1/2 teaspoon called for. Will this be a problem? Second, I’ve been having problems with siphoning of my quart jars (whole tomatoes and home-fermented sauerkraut). I’m thinking I’m not leaving them in the canner long enough after turning off the heat before removing - I’ve been doing 10 minutes. Should it be longer? A: Too much citric acid will not harm your canned tomatoes (too little might), so no worrying. Siphoning is another thing altogether. There are several causes, but most commonly air pockets are the culprit, bubbles in between the tomatoes, that burble up as the contents reach 220 degrees. The lid lifts and contents siphon. Use a chopstick or a flat plastic knife - metal might ding the glass jar - and stir and press down on the contents to reduce the possibility of siphoning. Do this while filling the jars, first when half full, then when all the way full. Whole tomatoes are always prone to siphoning. Be sure to clean the rim of the jar before placing the lid, and let quart jars, particularly, cool in the canner for about 10 minutes or so. Cathy Barrow Q: My son is starting his senior year of high school and wants to pursue a career as a chef. He went to a technology high school last year for culinary arts, worked this summer in a country club kitchen, and will do an internship at the country club this upcoming school year along with finishing his required classes so he can graduate. My question – should he go to a culinary college or should he pursue his career by working his way up? We are in the Washington area, which is rich with amazing chefs and restaurants. The advice he’s received thus far reflects who he’s talking to – the teacher who went to CIA said college is essential while the chef he’s working for (who didn’t go to college) said it’s not needed. So, given his experience thus far, if it was your child/niece/nephew/young adult friend, what would you advise? A: This is a particularly thorny issue in the hospitality industry. 
Without question, a degree from a school like the Culinary Institute of America will ground a student in the fundamentals to prepare her for a career in the kitchen (or the front of the house). But it will also, quite likely, burden the student with a large amount of debt. This debt could potentially keep a chef locked in, say, a corporate cooking job that she loathes, just to pay off the loans. Or, worse, the chef will earn mid-grade wages as a sous in some kitchen and devote too much of her paycheck to the student loan. Or, even worse yet, the chef may decide she hates the industry and wants to move into a different profession altogether. Those student loans still have to be paid. If the child were mine? I’d recommend that the budding chef go to work in professional kitchens, lots of them, starting from the lowliest position and working his way up. Build knowledge and experience over a few years then decide for himself if the industry is for him. With a few years in the business, the budding chef will see the options more clearly: He will decide whether cooking school would hone and elevate his skills to the level he wants. Or he could decide to use his experience to work in better, more refined restaurants and accumulate knowledge that way. Bottom line: Get that wannabe chef into more and more kitchens and start building experience. Tim Carman Q: I switched up my cherry tomato variety that I’m growing this year, and for the first time am getting a large crop. It’s more than my family can use but not enough to where I can make sauce, diced tomatoes, etc. and can them before they go bad. I’ve been freezing the surplus with hopes of having enough to can sauce, etc. in fall. Will this actually work? Do you have any other ideas of how to store and use them in the future? A: If you’ve got a tray’s worth, I’m a huge fan of oven-drying cherry tomatoes and then freezing them for later use. They taste even better that way, if that’s even possible. Denn Q: I picked up a pack of fresh figs this weekend. Any ideas for recipes.
https://www.sacbee.com/food-drink/article170146277.html
Urbanisation and its effects on species: the ants of Lyon as a case in point.
Bernard Kaufmann from the Artificial and Natural Hydrosystems Ecology Lab at the University of Lyon presented his take on how cities and their attendant processes and structures affect wildlife, using his studies of ant ecology in Lyon and surrounds to make his case, the main ideas and findings of which we summarise here.
Defining urban areas and measuring urbanisation.
When studying the effects of urbanisation, one must first begin with a working definition of an urban area; of course, no definition is or ever will be universally applicable. Briefly, urban areas are often defined by administrative status, or by a threshold population density of inhabitants with differing threshold densities for a site to qualify as urban. In general, urban area landuse patterns present a gradient of density of buildings, and of densities and sizes of agglomerations of buildings, outwards from a central high density location. More simply, a city has fewer buildings and artificial structures towards its outskirts. On the other hand, by a definition from a landuse point of view, an urban area has 60% or greater of its land surface covered by artificial or impermeable material, such as asphalt on roads, or concrete for infrastructure. A novel way of quantifying the level of human presence is to examine urban lighting: night photos of the land can accurately measure city sizes. This approach is easier, but might not always be applicable, since some very low population density sites are very brightly lit (such as oil fields due to natural gas burning) and some high density cities don’t light facilities such as roads. This makes urbanisation a fairly easily measured process, especially when combined with historical data and modern satellite imagery. As with Lyon, many parts of the world have seen massive increases in urban areas in the second half of the 20th century. A gradient usually exists, both of the level of urbanisation as well as the size of structures: there’s usually a decreasing percentage of large buildings when starting from the centre and moving outwards. This pattern differs depending on the region, with the US model of city centre < suburbs < rural areas not particularly applicable to Europe or Asia. Cities, especially older ones, have been designed, and later grow, depending on geography, and might be modelled as concentric (also nuclear, or US model), polycentric (also multipolar, as with twin cities), or as constrained entities (also linear, often following natural features such as rivers or relief). Cities may have two or more models applicable to them at once, or none at all. Lyon, in the south-centre of France, was constrained both by low hills to the west and by the Rhône and Saône rivers to the east. Then again, modern Lyon has managed to cross the river and more closely resembles a nuclear settlement, with the old town at its centre. Scale is another factor that must be taken into account when studying urban ecology, since heterogeneity in the landscape only becomes evident when the area is studied at a finer resolution, so Lyon might appear at one scale to be a homogeneous urban block, while a finer examination might reveal considerable differences in the type of landuse.
The effects of urbanisation.
The presence of settled humans in high densities quite predictably disrupts local ecology, beginning with the fragmentation of existing landscapes, and continuing with their increased artificialisation.
Since cities typically draw on their hinterlands for food and water, logistical infrastructure such as roads, airports and surrounding agriculture can fragment or destroy pre-existing landscapes quite far from cities, where definitions of the urban area don’t usually apply. Within cities proper, the high concentrations of artificial ground surfaces and above-ground structures cause increased temperatures, something known as the urban heat-island effect, which is most pronounced in summer. Pollutants too are more heavily concentrated in cities and in the water tables of urban and peri-urban areas. Cities provide novel stressors for both humans and animals in the form of light, air and noise pollution. For wildlife, merely the constant presence of humans might easily constitute a major disturbance. Soil structure is also often different: cities tend to be built on impermeable ground up to 4-5m deep resulting from either the selection of a site with underlying bedrock or from the use of rubble as filling material. Water sources tend to be concentrated and natural aquifers often provide most of a city’s drinking water since nearby surface water-bodies are often too polluted by discharge from the city itself; keeping water sources clean is a major challenge for urban areas today. Underground water sources may also change in temperature which might result in novel pollutant dynamics; however, some areas in and around cities also clean water and return it to the water table.
Urbanisation and wildlife.
For wildlife, all resources in cities are different from those in natural areas, with either a lot of or very little food around. Environmental seasonality is highly reduced, since humans produce a lot of waste that’s edible. Pollinators, for example, have access to more, and more diverse resources: as an example, linden trees (Tilia sp.) impose non-natural constraints on bees, whose population dynamics in cities are driven by their flowering. The modified climates of cities allow tropical and subtropical species to thrive in cities because temperatures more closely match their native ones, and the urban heat island effect often allows more vulnerable species to survive in cities in winter. The excess of sounds so recognisable in cities particularly affects birds and amphibians which communicate primarily through vocalisation. To generalise then, urban landscapes impose restrictions or filters on which species can and cannot inhabit cities. If at the regional scale a certain species pool exists, urban areas might select for certain life-history traits. Forest or cliff birds, like pigeons and some falcons, manage city life quite well, while birds of open fields, like bustards or galliformes, cannot. Birds, especially, are the best studied taxa in cities, because many birdwatchers live in cities worldwide, creating a large and competent cadre of citizen-scientists who regularly and accurately gather data. However, in decreasing order, plants, beetles, and mammals are also well studied. With birds, the pattern is that synanthropic, or human commensal species like blackbirds and pigeons are more abundant in urban areas, and while species diversity decreases with urbanisation, the abundance of individuals of those species that do exist is much higher, since resources are more abundant too. Since natural predators are often filtered out, there is rarely any control on these city-adapted birds.
Human commensal mammals such as cats, dogs and rats also have some effect on species composition in cities, but not a great one, with disturbance by commensals disproportionately affecting ground nesters when compared to tree nesters. While non-volant mammals show uniform drops in diversity with increased urbanisation, bats are more like birds in how cities affect them.
Studying urbanisation and its effects on spatial distribution.
How then, can we better explain species occurrences in cities: which species tolerate, avoid and exploit cities, and which of these are invasive? By studying an assemblage of Lasius and Tetramorium ants, including the invasive L. neglectus, Kaufmann and others aimed to do just that. They chose ants to study the effects of urbanisation because ants are ubiquitous, especially in open areas, and respond quickly to environmental perturbations. Sampling in Lyon, at 1248 points, they searched for every formicary of the two genera. Sampling density was affected by accessibility, with each sampled site never more than 50m from a road, and always within 500m of another site in the outskirts, and within 200m in the city of Lyon itself. Data were gathered for various factors including land cover and urban heat-island existence, among others. The resolution of the data varied depending on the source used, from satellite images of 1 pixel = 1 m², to 30m pixels. Urban history was calculated as change in NDVI (Normalized Difference Vegetation Index) between 2013 and 1986. Buffer distances around each point varied: landscape fragmentation was calculated in a 500m buffer, for example. Human activity was measured as the distance to the nearest road, the density of secondary roads and the distance to the nearest embankment. Climate and altitude were also measured. A PCA (principal component analysis) resulted in fewer factors. In Lyon, the ant assemblage comprised 7 species of Lasius, with L. niger the most common, present in 74% of samples, and 4 Tetramorium species, with T. sp E the most common, present in 49% of samples. T. sp U2 was found to avoid urban, built up Lyon. T. sp E inhabits urban areas, but not U2. L. neglectus proved to be a generalist, inhabiting all areas. L. niger was also ubiquitous, as were some arboreal Lasius species which profited from the presence of trees. An Outlying Mean Index method PCA with the origin as the mean environmental condition and with Axis 1 representing fragmentation and first land cover and Axis 2 the urban history reinforced the cosmopolitan nature of L. niger, which was close to the origin, but T. sp E was found to prefer warm urban areas. Finally, urbanisation was found to be neither a uniform process, nor were invasives found to be favoured by all urban processes. L. niger and L. neglectus, for example, are capable of either competition or coexistence in different niches. Some interacting factors exist, like higher temperature and embankments, which favour L. neglectus, but cold embankments do not; L. paralienus also prefers cold banks. T. sp U2’s distribution follows the urban heat island effect, while land cover and landuse history are not as important. Invasives are not really favoured by cities, but classification based on the concept of urban filters is possible. T. sp E thrives in cities, while T. sp U2 doesn’t, which goes to show that disentangling the various factors that together make up the process of urbanisation is necessary to understand how ants and other taxa are affected.
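The urban-history variable above is just a change in NDVI between two dates. As a reference, here is a minimal sketch of how such a change map could be computed from red and near-infrared reflectance bands; the array names and random rasters are illustrative, not from the study:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard against division by zero

# Hypothetical reflectance rasters for the same area in 1986 and 2013
rng = np.random.default_rng(0)
nir_1986, red_1986 = rng.random((2, 100, 100))
nir_2013, red_2013 = rng.random((2, 100, 100))

ndvi_change = ndvi(nir_2013, red_2013) - ndvi(nir_1986, red_1986)
print(ndvi_change.mean())  # negative values would indicate vegetation loss (urbanisation)
```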
Studying urbanisation’s effects on community composition.
In order to understand in depth how ant communities were affected by cities, Kaufmann and others actively searched for formicaries, using survey methods that resulted in high detection probabilities and made density measures possible. Landuse was measured as the only abiotic factor, with the others being biological traits of each species. Four major types of areas – urban, peri-urban, rural and agricultural – were sampled. The type of site was always a permanently vegetated, mostly grassy meadow. The Shannon-Weaver diversity index (see the short sketch below) was calculated for each type of site, with urban and rural areas very different in diversity. Nest density was actually higher in urban, periurban and rural areas, with ants much less abundant in agrosystems. The Lyon ant community was biased towards L. myops, Solenopsis fugax and L. niger. This result was obtained because S. fugax and other highly abundant species are deep nesters and were not detected in the previous study, which managed to find mostly epigeous nests and species. In agricultural areas, though, the dominance of deep-nesters was broken by L. niger. A multivariate plot showed that there really is a species gradient which follows the level of urbanisation, with the hyperabundant S. fugax and L. myops preferring urban areas. Interestingly, species with smaller workers were found to be more abundant in forests, but queen body size was not important in determining distribution, though species preferring warm climates were more common in cities, while those preferring dry habitats were found farther from forests and wooded areas. Finally, it was shown that the suburban area doesn’t restrict ant diversity, but the agricultural area does not favour high colony densities. Since the type of agriculture is based on frequent deep-tillage of large fields, many subterranean ants are likely filtered out of the area by this.
Urbanisation and gene flow.
Kaufmann and others then asked whether Lyon’s urban centre was affecting gene flow in the landscape, to wit, whether Lyon’s highway was a barrier to gene flow, and whether hybridisation occurred between some of the more common species. They also asked whether, intraspecifically, a distance, barrier or filter/resistance effect could be seen on gene flow. Using the gene for Cytochrome oxidase 1, and comparing between T. sp E and T. sp U2 from 778 samples of which 453 were E and 325 U2, they attempted to answer their question using 17 microsatellite markers. They found that nearly 16% of the ants sampled were hybrids, and that in the peri-urban zones hybridisation is of the introgressive type, which means that hybrids are backcrossing with the two species, indicating some level of hybrid fertility. This might mean that hybridisation is advantageous, and hybrid colonies might spread. Since the queens of both species mate with multiple males, they could be mating with males from both species to produce a colony that’s adapted to nearly any condition in terms of urbanisation: a hybrid species could also produce males of either of the two original species, so it wouldn’t be selected against in any kind of environment. Intraspecifically, there was significant isolation by distance, but not by the highway. Resistance was shown to be important, with open areas oddly resistant, especially for T. sp E, which found it easier to move in urban areas, this species having been noted previously as an urban specialist. T.
sp E was found differentiated into a northern and a southern group, but it was not clear whether this was due to genetic isolation by distance or whether Lyon was on the boundary of two hitherto separate populations. T. sp U2 was found to treat the landscape as a fairly homogeneous one, and it appears that its nuptial flights are long and fast; while how long or how fast are not known, the dispersal ability and distance is fairly high. To conclude, Kaufmann and others determined that cities offer novel habitats that might promote the existence and persistence of hybrids, to the extent that speciation becomes possible. They now propose to test whether urban areas actually promote the existence of hybrids. The idea is to study 20 cities for the same effect, across a 400km gradient in France, and with a 4℃ temperature change between the most geographically distant points, and to subsequently survey nests to check the percentage of hybrids in each nest along this distance.
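For reference, the Shannon-Weaver diversity index used in the community-composition comparison above is computed from the relative abundance of each species. A minimal sketch with invented formicary counts:

```python
import math

def shannon_index(counts):
    """Shannon-Weaver diversity: H = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical formicary counts per species at two site types
urban_counts = [120, 80, 10, 5]      # dominated by a few species
rural_counts = [40, 35, 30, 25, 20]  # more even community

print(round(shannon_index(urban_counts), 2))  # ~0.92, lower H, less diverse
print(round(shannon_index(rural_counts), 2))  # ~1.58, higher H, more diverse
```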
http://www.emmc-imae.org/urbanisation-and-its-effects-on-species-the-ants-of-lyon-as-a-case-in-point/
Yale looks to extend streak vs Dartmouth
BOTTOM LINE: Yale looks for its 10th straight win in the head-to-head series over Dartmouth. Yale has won by an average of 13 points in its last nine wins over the Big Green. Dartmouth’s last win in the series came on March 7, 2015, a 59-58 win.
SUPER SENIORS: Dartmouth’s Chris Knight, James Foye and Ian Sistare have combined to score 48 percent of the team’s points this season, including 55 percent of all Big Green scoring over the last five games.
STEPPING IT UP: The Bulldogs have scored 77.4 points per game against Ivy League opponents so far. That’s an improvement from the 72.6 per game they recorded against non-conference competition.
OFFENSIVE THREAT: Knight has either made or assisted on 50 percent of all Dartmouth field goals over the last three games. Knight has accounted for 26 field goals and 12 assists in those games.
PASSING FOR POINTS: The Big Green have recently used assists to create baskets more often than the Bulldogs. Dartmouth has an assist on 45 of 76 field goals (59.2 percent) over its past three games while Yale has assists on 39 of 81 field goals (48.1 percent) during its past three games.
DID YOU KNOW: Yale is ranked first among Ivy League teams with an average of 76.7 points per game.
___
For more AP college basketball coverage: https://apnews.com/Collegebasketball and http://twitter.com/AP_Top25
___
This was generated by Automated Insights, http://www.automatedinsights.com/ap, using data from STATS LLC, https://www.stats.com
Copyright 2020 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
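The assist percentages quoted above are simply assisted field goals divided by total made field goals. A quick check of that arithmetic:

```python
def assist_rate(assisted: int, made_fg: int) -> float:
    """Share of made field goals that were assisted, as a percentage."""
    return assisted / made_fg * 100

print(round(assist_rate(45, 76), 1))  # Dartmouth, last three games: 59.2
print(round(assist_rate(39, 81), 1))  # Yale, last three games: 48.1
```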
The contrast between solidarity and individualism plays out in many public policy areas. For example, in their examination of vaccination policy in Israel, Boas, Rosenthal, and Davidovitch (2016, reference below) write: “Public health policies often stand at odds with our contemporary zeitgeist of individualism. Whereas individualistic conceptions place personal self-gain as both incentive for action and a desired result, public health policies address the personal self-gain as the end result of a collective benefit. Vaccinations are perhaps the paradigmatic example of this interplay. Individuals calculate whether or not to be vaccinated by considering their own self-interest in relation to the type and quantity of vaccines to which they are ready to be exposed. Public health policy-makers, in contrast, order vaccination programs by applying a set of considerations that extends the individual level and refers to the group, to the collective, as their main reference unit … In contrast to the personal balance of risks and benefits that individuals weigh when considering vaccinations, policy-makers think of vaccinations in terms of “herd immunity”, vaccination rates, and consider individual self-gain as a predictive outcome of the public good.
“In the various public health ethical codes, solidarity is one of the foundations of public health practice, in the context of understanding humans as interdependent within communities – both at the national and global levels. Solidarity is especially used in cases of emergencies, persuading communities to take collective action and to suspend self-gain in favor of promoting collective good. This could be the case in collective responses in cases such as pandemics, for instance. …
“Solidarity, a value mentioned in various public health ethical codes of major public associations such as the American Public Health Association, European Public Health Association as well as the Israeli Association of Public Health Physicians, is not merely an abstract concept – it has public health policy implications and it points to the need to be more aware of the interplay between individualism and social structures. In the US, scholars have been discussing the unfashionable place of solidarity in the American value system. In the context of the Affordable Care Act (Obamacare) debates, attempts were made to introduce solidarity in a way that reflects “American nature”, and interpreted to include within solidarity issues such as mutual assistance, patriotism and coordinated investment.”
Atlas topic, subject, and course
The Study of the Socioeconomic Context for Politics and Policy (core topic) in Socioeconomic and Political Context and Atlas105.
Sources
Cambridge Dictionary, solidarity, at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5178071/, accessed 12 December 2018.
Cambridge Dictionary, individualism, at https://dictionary.cambridge.org/dictionary/english/individualism, accessed 12 December 2018.
Hagai Boas, Anat Rosenthal, and Nadav Davidovitch (2016), Between individualism and social solidarity in vaccination policy: the case of the 2013 OPV campaign in Israel, Isr J Health Policy Res, 2016, 5:64, at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5178071/, accessed 12 December 2018.
Page created by: Ian Clark, last modified on 12 December 2018.
Image: Ben Peterson & Alison Pennington, Solidarity is still the issue, Green Left Weekly, 17 August 2012, at https://www.greenleft.org.au/content/solidarity-still-issue, accessed 12 December 2018.
http://www.atlas101.ca/pm/concepts/solidarity-vs-individualism/
Theology Tuesday- Covenant Theology #6 Last week, we analyzed the covenant with Noah. Today, we see how the covenant with Abraham continues to reveal the larger purposes of God in the covenant of grace. In Genesis 12, God speaks to the great-great-great-great (9 greats in all) grandson of Noah, a man named Abraham. God tells Abraham to go to a land that God will direct him to, and that He will make Abraham into a great nation, and that all the families of the earth will be blessed through him. It’s quite a calling, especially for a 75-year-old man whose wife is barren and therefore has no children. God will speak to Abraham two more times to fill in the details of His covenant. In Genesis 15, God explicitly tells Abraham that he will have a son and that his descendants will be as numerous as the stars in the sky. In an earlier post, we looked at Genesis 15:12-21 where God ‘cuts’ the covenant with Abraham by having Abraham cut up some animals and arrange a walkway between their carcasses. God then causes Abraham to fall asleep, and God passes through the pieces alone. By this, God was indicating that He would take the deadly consequences of humanity’s breaking of the covenant. This is another foreshadowing of the work of Jesus on the cross. In Genesis 17, after 24 years have passed with no son born to Abraham and Sarah, God repeats His promise to Abraham that He will make him the father of a multitude of nations, and that Abraham’s offspring will be given the land of Canaan as the land of promise. In Genesis 17:9-14, God gives a covenant sign to Abraham and to his offspring- circumcision. In order to be included in the covenant, all the males of a household had to be circumcised, starting with boys who were eight days old. Now, why did only men get the sign of the covenant? Again, we see the headship principle- that in the Old Testament covenant, men represent women. But, why include infant boys in the covenant sign? To include them in the covenant as soon as possible. Original sin starts at birth, so why should the covenant of grace not start at birth? But notice that the sign of the covenant is for both young and old- born into the faith and converting to the faith. Interestingly, Colossians 2:11-12 tells us that after Jesus came, circumcision gave way to baptism as the sign of the covenant: “In him also you were circumcised with a circumcision made without hands, by putting off the body of the flesh, by the circumcision of Christ, having been buried with him in baptism, in which you were also raised with him through faith in the powerful working of God, who raised him from the dead.” Circumcision was a bloody sign, showing (quite graphically) that sin needed to be cut off from us in order for us to be made holy and acceptable to God. But because Jesus shed His blood, we no longer need a bloody sign. Now, we are to baptize people to symbolize the cleansing from sin that we need. And now, happily, baptism is more inclusive, not less inclusive than circumcision. Women are baptized, and we Presbyterians (and Lutherans, Anglicans, Methodists, Catholics, and Orthodox) believe that infants should be baptized, too, to continue to receive the sign of the covenant. There are three stages of fulfillment to God’s promises to Abraham. The promise of descendants was first physically fulfilled when 90-year-old Sarah finally gave birth to Isaac (Genesis 22 tells the fascinating story of how God almost destroyed the child of promise, but we don’t have time to look at that story).
The promise of the land was fulfilled when Israel took the Promised Land starting in the book of Joshua. Joshua 21:43-45 says that, “The Lord gave Israel all the land which he had sworn to their fathers.” Now, this land, the land of Canaan, is not the most beautiful or most protected land in the world. But, what it was was the crossroads of the ancient world. As Michael Williams writes, “God’s choice of Canaan as a land for Abraham was intentional and central to the redemptive mission for which Abraham was chosen. What was important about this particular piece of real estate was its geographic relationship to other lands. It was a doorway to the world, on the way to everywhere else… When Israel finally entered the land under Joshua, it was beginning its mission in earnest” (Far As the Curse Is Found, 115). That mission was to evangelize the world. The greater second stage of fulfillment of those promises of land and offspring came in Jesus Christ. For, in Christ the great evangelist, the covenant was extended to all people groups, not just Israel, whether physically descended from Abraham or not. As Romans 4:11 says, Abraham “is the father of all who believe.” And so, we can see how, in the church, Abraham’s descendants really begin to be as numerous as the stars in the sky. And we are sent by Jesus to make more spiritual descendants of Abraham: “Go therefore and make disciples of all nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit, teaching them to observe all that I have commanded you” (Matthew 28:19-20).
https://www.riveroakstulsa.com/blog/post/theology-tuesday--covenant-theology--_2
Mission Statement: To provide your pet with quality care and compassion while providing our clients with the highest level of customer service and education. Hospital Philosophy: The care given to each animal is to be of the highest standards. The facilities and the staff are to be dedicated to each animal and their owner. This is a profession of great compassion and our goal is to be a source of healing, caring, relief of pain, and support for all of God’s creatures. This is not only a veterinary hospital but it is also a place where high standards and ethics are an everyday practice.
https://www.palmettovet.net/about-us/
What causes jet contrails?
Jet contrails (a contraction of 'condensation trails') are man-made clouds that form through condensation of water vapor in the exhaust of jet engines into ice crystals.
Water vapor is a natural by-product of the burning of petroleum-based fuels, and the amounts produced by jet engines are sometimes larger than the cold, thin air of the upper troposphere can hold in vapor form. As a result, some of the vapor condenses as water droplets, which then rapidly freeze into ice particles very similar to high-altitude cirrus clouds. If the air has low relative humidity, then a contrail will not form because the extra water vapor produced by the jet engines is not enough to produce condensation. But if the air has high relative humidity, then a contrail forms, using up the excess water vapor that the air cannot hold.
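As a rough illustration of the idea that contrail formation depends on how close the ambient air already is to saturation, here is a simplified sketch. It uses the Magnus approximation for saturation vapor pressure over water and a made-up amount of added exhaust vapor; a proper treatment would use the Schmidt-Appleman criterion and saturation over ice.

```python
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def forms_visible_trail(temp_c: float, relative_humidity: float, added_vapor_hpa: float) -> bool:
    """True if ambient vapor plus the exhaust's added vapor exceeds saturation."""
    e_sat = saturation_vapor_pressure_hpa(temp_c)
    ambient_vapor = relative_humidity * e_sat
    return ambient_vapor + added_vapor_hpa > e_sat

# At -50 C the air can hold very little vapor, so even a tiny addition saturates moist air
print(saturation_vapor_pressure_hpa(-50.0))   # ~0.064 hPa
print(forms_visible_trail(-50.0, 0.9, 0.02))  # True: humid air, a trail forms
print(forms_visible_trail(-50.0, 0.3, 0.02))  # False: dry air, no trail
```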
http://weatherstreet.com/weatherquestions/What_are_jet_contrails.htm
In Sports, Results Matter, But to Get Them, Ignore Them
Focus on the process and the results will come.
There are a lot of misconceptions about the role of results in achieving your athletic goals. Of course, you need good results to be successful, but the question is how to go about getting those results and, ironically, the answer is not what coaches, athletes, and parents often think. First, I want to define ‘outcome’ and ‘process.’ An outcome focus involves focusing on results, rankings, and beating others. Notice that this focus is on things outside of you. A process focus involves focusing on what you need to do to perform your best, such as preparation, technique, or tactics. In contrast to an outcome focus, a process focus is entirely on you. Now it’s time to discuss the paradox of outcome focus. Most people think that, to get the results you want, you need to focus on those results. But, and here’s the paradox, having an outcome focus actually reduces the chances of your achieving the results you want. Here’s why. First, when does the outcome of a competition occur? At the end, of course. If you’re focused on the outcome, you aren’t focused on the process, namely, what you need to do to perform your best from the start to the finish of the competition. Second, what makes you nervous before a competition, the process or the outcome? The chances are it’s the outcome, more specifically, a bad outcome such as not winning or achieving your goals. The bottom line is that when you focus on the outcome, you are far less likely to get the outcome you want. In contrast, when you focus on the process, you increase your chances of getting the results you want. If you focus on the process, that is, what you need to do to perform your best, how are you likely going to perform? Pretty well, you can assume. And if you perform well, you’re more likely to achieve the result you wanted in the first place. Here is my wish for you: never think about results. In an ideal world, I would like you to be entirely process focused and basically never have results cross your mind. Here’s another wish. In that ideal world I mentioned above, I would have parents and coaches never talk about results either. The fact is there is no point. You know when you’ve had a good competition and you definitely know when you’ve had a bad one. If you’re like most athletes, when your parents and coaches talk about results, you hear their chatter as expectations, pressure, or disappointment. Parents, good or bad competition, give your children a hug, tell them you love them, and ask them if they’re hungry. If you’re too excited about a good performance or too disappointed in a bad one, stay the heck away from your children because they will sense your emotions no matter how hard you try to mask them. Coaches, if your athletes had a good day, don’t say “good job.” Instead, help them understand why they performed well. If they had a bad day, pat them on the back, tell them you still believe in them, and help them figure out how to perform better in the next competition. Here’s where the real world collides with the ideal world that I wish existed. We don’t live in an ideal world and until someone invents a “process pill”, it’s not likely that you can expunge results from your mind. In the real world, results do matter. As an athlete, you are competitive and you probably do have some big outcome goals. I don’t expect you to not think about results. In fact, I’m going to assume that you are going to think about results a lot.
So, knowing that an outcome focus actually hurts your cause, your challenge is what to do when your mind does fixate on results. First, become aware that you are focusing on the outcome. There’s no magic to this; you just have to monitor your thinking and notice your outcome focus. Once you see that you are thinking about results, you can take steps to get your mind off of them. Recognize that you can only focus on one thing at a time, so if you can replace your outcome focus with a focus on something else, you have stopped yourself from thinking about results. Ideally, you want to refocus on the process, specifically, something that will enable you to perform your best, but sometimes, focusing on anything other than results (e.g., music, school, food) will do the trick. Go through your routine (in practice or competitions). The purpose of a routine is to get yourself totally prepared to perform your best and, if well ingrained, to trigger thoughts, emotions, and physiology that will help you perform well. So, by going through your routine, you are reminded of the process and it takes your mind off of results. Do mental imagery. If you are focused on the thoughts, feelings, and images of performing well, you’re not focused on results. Plus, the imagery will increase your motivation and confidence, help you reach your ideal intensity, and get your body primed to train or compete. If you just can’t shift your mind from outcome to process, the best thing you can do is get out of your mind completely. In other words, distract yourself by talking to others, listening to music, goofing around, anything that will prevent you from thinking about results. Finally, remind yourself why you compete, for example, for the love of competition, being with your teammates, or just plain having fun. This change gets you out of thinking mode and into feeling mode, generating powerful emotions, such as excitement, inspiration, and pride, that will get you fired up about getting out there and performing the very best you can. Mr. Taylor wrote "Recognize that you can only focus on one thing at a time". Yes, science has proven that we do not dual-task, we only focus on one thing at a time. A tennis athlete has to keep the score in working memory at all times. The score tells him where to stand and when to sit down. An athlete keeping the score in working memory will do that at the cost of athletic planning. My izzers tennis game and tennis tie-break scoring system weighs only 3 grams and can be easily marked and easily read. Once the tennis player has marked the score and moved to the correct place to stand, he can clear his mind of the score and focus on the process. Elite tennis players need a scoreboard IMHO to play their best.
Introduction {#Sec1}
============

A brain--computer interface (BCI) is a device that measures signals from the brain and translates them into control commands for an application such as a wheelchair, an orthosis, or a spelling device \[[@CR43]\]. By definition, a BCI does not use signals from muscles or peripheral nerves. Furthermore, a BCI operates in real-time, presents feedback, and requires goal-directed behavior from the user \[[@CR27]\]. Most non-invasive BCIs record the electroencephalogram (EEG) from the surface of the scalp \[[@CR19]\]. In general, several components process the raw EEG signals before an actual output of the system is available. Typically, signals are first preprocessed with temporal or spatial filters. Examples of preprocessing techniques include bandpass filters, bipolar filters, or more advanced approaches such as common spatial patterns (CSP) \[[@CR4]\]. The next stage extracts suitable features from the preprocessed signals, that is, relevant (discriminative) signal characteristics are isolated. Popular features for BCIs include logarithmic band power (logBP) \[[@CR25], [@CR26]\], autoregressive (AR) parameters \[[@CR35]\], time-domain parameters \[[@CR42]\], and wavelet-based methods \[[@CR11]\]. Finally, a classification or regression algorithm translates the features into an output signal for a specific application. Examples of widely used classifiers in BCI research are linear discriminant analysis (LDA), support vector machines, neural networks, and nearest neighbor classifiers \[[@CR18], [@CR19], [@CR40]\]. Optionally, and depending on the application, the output of the classification stage can be post-processed, for example by averaging over time or by introducing additional constraints such as a dwell time and refractory period \[[@CR37]\].

Selecting suitable features is crucial to obtain good overall BCI performance \[[@CR7], [@CR8]\]. In this study, we focus on BCIs based on event-related desynchronization \[[@CR28]\], explore extensions of the simple AR model, and compare the resulting features with logBP features. More specifically, we compare the performance of a standard univariate AR (UAR) model, a vector AR (VAR) model, and a bilinear AR (BAR) model on BCI data. We also study the influence of adding the error variance as a feature for all three AR model types. Similar to logBP, AR parameters can be used to estimate the power spectral density \[[@CR20]\], but they can also serve directly as features for BCIs \[[@CR35]\]. Many groups have used AR parameters as features for BCIs in either way; some groups used short segments of time and fitted an AR model to each data segment \[[@CR9], [@CR30]\], whereas others adapted the model coefficients continuously \[[@CR35], [@CR39]\] (for example with a Kalman filter approach). Most studies used UAR models, which means that each EEG channel is described with a separate AR model. This means that information about the relationships between signals is completely neglected. In contrast, a VAR model describes all channels at once and therefore includes information about the correlation between individual signals. Only a few studies have described VAR parameters applied to BCI data, but they reported promising results \[[@CR2], [@CR24]\]. Furthermore, the additional information inherent in VAR models can be used to compute explicit coupling measures such as the partial directed coherence and the directed transfer function \[[@CR34]\]. Another extension of the AR model is the BAR model.
In contrast to the classical linear AR model, a BAR model can describe certain non-linear signal properties \[[@CR29]\] such as non-Gaussian signals. Many real-world time series exhibit such behavior, for example the arc-shaped sensorimotor mu rhythm \[[@CR10]\] in the case of EEG signals. Consequently, a bilinear model (which is a special case of general non-linear models) should be better suited to model such data.

The objective of this study is to assess the influence of different feature types based on AR models on the performance of a BCI (for example as measured by the classification accuracy). More specifically, we compared standard UAR models with VAR and BAR models, and variants including the prediction error variance as an additional feature. We also used logBP features as state-of-the-art features for comparison. We hypothesized that both VAR and BAR models could yield higher BCI performance than UAR parameters, because they contain more information on the underlying signals and/or describe the signals more accurately. Moreover, adding the error variance as a feature could add discriminative information and thus increase BCI performance.

Methods {#Sec2}
=======

Data {#Sec3}
----

We used data set 2a from the BCI Competition IV [1](#Fn1){ref-type="fn"}, which comprises data from nine users over two sessions each (recorded on separate days). The data was recorded with prior consent of all participants, and the study conformed to guidelines established by the local ethics commission. In each trial, participants performed one out of four different motor imagery tasks: movement imagination of left hand, right hand, both feet, and tongue. In total, each of the two sessions consists of 288 trials (72 trials per class) in random order. Subjects were sitting in front of a computer monitor. At the beginning of a trial, a cross appeared on the black screen. In addition, subjects heard a tone indicating trial onset. After 2 s, subjects viewed an arrow that pointed either to the left, right, top or bottom of the screen. They performed the corresponding motor imagery task until the cross disappeared after 6 s. A short break between 1.5 and 2.5 s followed before the next trial.

The data set consists of 22 EEG signals recorded monopolarly (referenced to the left mastoid and grounded to the right mastoid). Signals were sampled at 250 Hz and bandpass-filtered between 0.5 and 100 Hz. An additional 50 Hz notch filter removed line noise. In this study, we used only three bipolar channels, calculated by subtracting channels anterior to C3, Cz, and C4 from sites posterior to these locations (the inter-electrode distance was 3.5 cm).

Features {#Sec4}
--------

We compared three different AR variants, namely (1) a UAR model, (2) a VAR model, and (3) a BAR model. In all three cases, we used the corresponding AR coefficients as features. In addition, we enhanced each AR method by adding the prediction error variance to the feature space. In summary, we analyzed six different AR-based feature types, described in more detail in the following paragraphs.
### UAR model {#Sec5}

A UAR(*p*) model is defined as

$$x_{k} = \sum_{i=1}^{p} a_{i} x_{k-i} + e_{k},$$

where *x*~*k*~ is the value of the time series *x* at time point *k*. The current value *x*~*k*~ can be predicted by the weighted sum of *p* past values *x*~*k*-*i*~ plus an additional error term *e*~*k*~. The weights *a*~*i*~ are called the AR parameters. In a typical BCI, *x*~*k*~ corresponds to the amplitude of an EEG channel at time *k*.

### VAR model {#Sec6}

A VAR(*p*) model is an extension of the UAR case described above, because it simultaneously describes several time series. Thus, it is defined as

$$\mathbf{x}_{k} = \sum_{i=1}^{p} \mathbf{A}_{i} \mathbf{x}_{k-i} + \mathbf{e}_{k},$$

where $\mathbf{x}_{k}$ is a vector of time series at time *k*. The *p* AR parameters from the UAR model generalize to *p* matrices $\mathbf{A}_{i}$, and the error term $\mathbf{e}_{k}$ becomes a vector. In contrast to a UAR model, a VAR model explicitly models the correlation between the different time series. Applied to EEG data, VAR models can describe the relationships between different EEG channels, which might contain discriminable information for BCIs \[[@CR5]\].

### BAR model {#Sec7}

In contrast to UAR and VAR models (which are linear time series models), non-linear models can describe non-linear characteristics such as large bursts or extremely rapid and large fluctuations \[[@CR29]\]. A BAR(*p*, *q*~1~, *q*~2~) model is an extension of a linear UAR(*p*) model and a special case of general non-linear models with finite parameters. It is defined as

$$x_{k} = \sum_{i=1}^{p} a_{i} x_{k-i} + e_{k} + \sum_{i=1}^{q_{1}} \sum_{j=1}^{q_{2}} b_{ij} x_{k-i} e_{k-j},$$

where the first part is a UAR(*p*) model and the last part describes the bilinear contribution with the $q_{1} \cdot q_{2}$ bilinear coefficients *b*~*ij*~. BAR models might be more suitable to describe EEG signals, because EEG signals may contain non-linear features such as the arc-shaped mu rhythm \[[@CR10]\]. Such characteristics cannot be captured by linear time series models \[[@CR29]\].
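To make the three model definitions concrete, here is a small NumPy sketch (not from the paper; all function and variable names are our own) that computes one-step predictions and the resulting prediction error for given coefficients:

```python
import numpy as np

def uar_predict(x, k, a):
    """One-step UAR(p) prediction of x[k] from the p previous samples."""
    p = len(a)
    past = x[k - p:k][::-1]               # x[k-1], x[k-2], ..., x[k-p]
    return float(np.dot(a, past))

def var_predict(X, k, A):
    """One-step VAR(p) prediction; X is (n_samples, n_channels), A is (p, n_ch, n_ch)."""
    return sum(A[i - 1] @ X[k - i] for i in range(1, A.shape[0] + 1))

def bar_predict(x, e, k, a, B):
    """One-step BAR(p, q1, q2) prediction; B holds the q1 x q2 bilinear coefficients b_ij."""
    pred = uar_predict(x, k, a)
    q1, q2 = B.shape
    for i in range(1, q1 + 1):
        for j in range(1, q2 + 1):
            pred += B[i - 1, j - 1] * x[k - i] * e[k - j]
    return pred

# Toy example: prediction error e_k of an assumed UAR(2) model on a random signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
a = np.array([0.5, -0.2])                 # illustrative AR parameters, not fitted
k = 10
e_k = x[k] - uar_predict(x, k, a)
```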
### Parameter estimation {#Sec8}

We estimated AR parameters adaptively for all AR-based methods (UAR, VAR, and BAR) using a Kalman filter \[[@CR14]\]. A Kalman filter operates in the state space, which is defined by the following two equations:

$$\mathbf{z}_{k} = \mathbf{G} \cdot \mathbf{z}_{k-1} + \mathbf{w}_{k-1}$$

$$\mathbf{y}_{k} = \mathbf{H} \cdot \mathbf{z}_{k} + \mathbf{v}_{k}$$

Here, $\mathbf{z}_{k}$ is the state at time *k*, $\mathbf{G}$ is the state transition matrix, and $\mathbf{w}_{k-1}$ is the process noise with $\mathbf{w}_{k-1} \sim \mathcal{N}(0,\mathbf{W})$. Furthermore, $\mathbf{y}_{k}$ is the measurement vector, $\mathbf{H}$ is the measurement sensitivity matrix, and $\mathbf{v}_{k}$ is the measurement noise with $\mathbf{v}_{k} \sim \mathcal{N}(0,\mathbf{V})$. For the univariate models UAR and BAR, $\mathbf{y}_{k}$ and $\mathbf{v}_{k}$ reduce to scalars *y*~*k*~ and *v*~*k*~ (with $v_{k} \sim \mathcal{N}(0,V)$), respectively.

We used these equations to estimate AR parameters by assigning $\mathbf{z}_{k} = \mathbf{a}_{k}$ (where $\mathbf{a}_{k} = \left[ a_{1}, a_{2}, \ldots, a_{p} \right]^{T}$ is a vector containing all AR coefficients), $y_{k} = x_{k}$, $\mathbf{G} = \mathbf{I}$ (the identity matrix), and $\mathbf{H} = \left[ x_{k-1}, x_{k-2}, \ldots, x_{k-p} \right]$. These assignments hold for the UAR model only, but they can be easily generalized for the VAR case by using matrix equivalents of the corresponding variables, and for the BAR model by extending $\mathbf{z}_{k}$ and $\mathbf{H}$.
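For illustration, the following is a minimal Kalman-filter sketch for tracking UAR coefficients sample by sample with the assignments above ($\mathbf{z}_{k} = \mathbf{a}_{k}$, $\mathbf{G} = \mathbf{I}$, $\mathbf{H} = \left[ x_{k-1}, \ldots, x_{k-p} \right]$). It is not the BioSig tvaar.m implementation; in particular, the way the update coefficient enters the noise covariances is simplified here, and all names and default values are illustrative only.

```python
import numpy as np

def kalman_uar(x, p=6, uc=1e-4, v=1.0):
    """Track UAR(p) coefficients of signal x with a random-walk Kalman filter.

    Here uc loosely plays the role of the update coefficient: it scales the
    process-noise covariance W and therefore how quickly the coefficients adapt.
    Returns the coefficient trajectory and the one-step prediction errors.
    """
    n = len(x)
    a = np.zeros(p)                        # state estimate z_k = AR coefficients
    P = np.eye(p)                          # state covariance
    W = uc * np.eye(p)                     # process-noise covariance (simplified)
    coeffs = np.zeros((n, p))
    err = np.zeros(n)
    for k in range(p, n):
        H = x[k - p:k][::-1]               # measurement row: x[k-1], ..., x[k-p]
        P = P + W                          # prediction step (G = I)
        e = x[k] - H @ a                   # innovation = one-step prediction error
        s = H @ P @ H + v                  # innovation variance
        K = P @ H / s                      # Kalman gain
        a = a + K * e                      # correction step
        P = P - np.outer(K, H) @ P
        coeffs[k] = a
        err[k] = e
    return coeffs, err

# Toy example: the estimates should approach the coefficients of a synthetic AR(2) process.
rng = np.random.default_rng(1)
x = np.zeros(2000)
for k in range(2, 2000):
    x[k] = 1.2 * x[k - 1] - 0.6 * x[k - 2] + rng.standard_normal()
coeffs, err = kalman_uar(x, p=2, uc=1e-4)  # coeffs[-1] should end up close to [1.2, -0.6]
```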
We adopted an estimation approach based on results presented in \[[@CR36]\], as recommended and implemented in the BioSig[2](#Fn2){ref-type="fn"} toolbox \[[@CR33]\] function tvaar.m. We implemented this function in C and added a MATLAB[3](#Fn3){ref-type="fn"} interface, which sped up computation significantly. In the first step, we tried to find suitable initial values for parameters such as the AR coefficients, the process noise covariance, and the measurement noise covariance. We updated all parameters in this first run over the complete first data session. Once we had found initial values with this procedure, we estimated AR parameters in a second run over the session using another update mode, which essentially keeps the process noise and measurement noise covariances constant at the previously found values. In the final evaluation step on the unseen second session, we used only the latter mode, but initialized parameters with the values found in the optimization step using the first session (see Sects. [2.3](#Sec11){ref-type="sec"}, [2.4](#Sec14){ref-type="sec"} for more details).

### Features based on AR models {#Sec9}

The prediction error $\mathbf{e}_{k}$ at time *k* can be estimated by subtracting the prediction $\left( \mathbf{H} \cdot \mathbf{z}_{k} \right)$ from the measurement $\mathbf{y}_{k}$:

$$\mathbf{e}_{k} = \mathbf{y}_{k} - \mathbf{H} \cdot \mathbf{z}_{k}$$

We used the logarithm of the estimated covariance of the prediction error, $\log \left( E\left< \mathbf{e}_{k} \mathbf{e}_{k}^{T} \right> \right)$, to augment the feature space of UAR, VAR, and BAR models, thus yielding three additional AR feature types termed xUAR, xVAR, and xBAR. Note that we adapted the covariance estimation in each step directly with UC. In summary, we compared the following six AR-based feature extraction methods: (1) UAR, (2) xUAR, (3) VAR, (4) xVAR, (5) BAR, and (6) xBAR.

### LogBP {#Sec10}

We compared our AR features with results obtained from logBP, which is commonly used in many BCI systems \[[@CR19]\]. The calculation procedure is as follows:

1. Bandpass-filter the raw EEG signal in a specific frequency band (we used a fifth-order Butterworth filter)
2. Square the samples
3. Smooth over a one-second time window (we used a moving average filter)
4. Compute the logarithm
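A minimal sketch of these four steps might look as follows (the filter order, window length, and example band follow the description above and the default bands mentioned later; everything else, including the function name, is illustrative and not the authors' code):

```python
import numpy as np
from scipy.signal import butter, lfilter

def log_bandpower(x, fs=250.0, band=(8.0, 12.0), win_s=1.0, order=5):
    """logBP feature: bandpass -> square -> 1 s moving average -> log."""
    # 1) bandpass-filter the raw signal (fifth-order Butterworth)
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    xf = lfilter(b, a, x)
    # 2) square the samples (instantaneous power)
    power = xf ** 2
    # 3) smooth with a one-second moving-average window
    win = np.ones(int(win_s * fs)) / (win_s * fs)
    smoothed = lfilter(win, [1.0], power)
    # 4) take the logarithm (small constant avoids log(0) during filter warm-up)
    return np.log(smoothed + 1e-12)

# Toy example: logBP of a noisy 10 Hz oscillation in the alpha band.
rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / 250)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
feat = log_bandpower(eeg, fs=250.0, band=(8.0, 12.0))
```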
Parameter optimization {#Sec11}
----------------------

We conducted two independent parameter optimization procedures. In the first analysis (individual optimization), we optimized parameters for each subject individually. In the second analysis (global optimization), we searched for parameters that were optimal for all subjects in the data set. Importantly, we used only data from the first session in both procedures; we never used data from the second session during parameter optimization.

### Individual optimization {#Sec12}

For each AR method, we optimized the model order(s) and the update coefficient UC (a parameter which determines the update speed in each iteration of the Kalman filter algorithm) for each subject individually. We used a grid search to find the optimal parameter combination. Table [1](#Tab1){ref-type="table"} lists the search spaces for the different methods. In summary, we searched $41 \cdot 20 = 820$ (UAR, xUAR), $41 \cdot 15 = 615$ (VAR, xVAR), and $41 \cdot 15 \cdot 3 \cdot 3 = 5535$ (BAR, xBAR) parameter combinations, respectively.

Table 1: Search spaces for the AR-based feature extraction methods

| Methods    | log(UC)  | *p*      | *q*~1~  | *q*~2~  |
|------------|----------|----------|---------|---------|
| UAR, xUAR  | −8 ... 0 | 1 ... 20 | --      | --      |
| VAR, xVAR  | −8 ... 0 | 1 ... 15 | --      | --      |
| BAR, xBAR  | −8 ... 0 | 1 ... 15 | 1 ... 3 | 1 ... 3 |

We varied the linear and bilinear model orders *p*, *q*~1~, and *q*~2~ in steps of 1, and the logarithmic update coefficient log(UC) in steps of 0.2.

For each parameter combination and method, we performed the following steps:

1. Extract features (see Sects. [2.2.4](#Sec8){ref-type="sec"}, [2.2.5](#Sec9){ref-type="sec"})
2. Find the best segment for classifier setup using a running classifier \[[@CR22]\] (we divided a trial into 1 s segments with 0.5 s overlap and used all samples within a segment for the running classifier procedure; see Sect. [2.4](#Sec14){ref-type="sec"} for more details on the classifier)
3. Perform a leave-8-trials-out cross-validation (train a classifier on the best segment found in the previous step, test on the whole trial)
4. Use the 0.9 quantile of the classification accuracy *p*~0~ as the performance measure

Finally, we selected the parameter combination associated with the highest performance measure.

In contrast to the grid search optimization for AR methods, we used a method based on neurophysiological principles instead of classification results to optimize logBP features; we refer to this method as band power difference maps \[[@CR3]\], which is similar to the approach described in \[[@CR4]\]. The procedure is as follows (applied to each EEG channel separately):

1. Compute time--frequency maps of signal power for each motor imagery task and the three remaining tasks combined (using only data from within trials)
2. Calculate difference maps by subtracting the map of one task from the map of the three remaining tasks combined
3. Iteratively find and remove connected patches in the maps (corresponding to the largest differences)
4. Combine adjacent or overlapping bands

We calculated time--frequency maps with high time and frequency resolution (we varied time from 0 to 8 s in steps of 0.04 s and frequency from 5 to 40 Hz with 1 Hz bands in steps of 0.1 Hz). We also calculated confidence intervals for each time--frequency point by first applying a Box-Cox transformation and then computing confidence intervals from the normal distribution. In summary, we calculated eight time--frequency maps for the following motor imagery tasks and combinations of tasks: 1, 2, 3, 4, 234, 134, 124, and 123 (the numbers 1, 2, 3, and 4 correspond to left hand, right hand, feet, and tongue motor imagery, respectively; the numbers 234, 134, 124, and 123 are combinations of these tasks). Next, we calculated four difference maps, namely 1--234, 2--134, 3--124, and 4--123.
Within each difference map, we iteratively searched for connected significant patches (inspired by a four-way flood fill algorithm), starting with the pixel with the largest difference. If the area of such a patch was over a predefined threshold of 1 s·Hz, we used its upper and lower frequency borders to define a band for the logBP feature extraction method. We then removed this patch from the map and repeated the search procedure, searching again for the pixel with the largest difference. We continued this procedure until the algorithm had removed all patches from the map. Finally, we combined all frequency bands found in the four difference maps and merged adjacent or overlapping frequency bands.

### Global optimization {#Sec13}

In addition to the individual optimization, we also tried to find parameters that are optimal for all subjects. For each AR method, we averaged the performance measures (calculated for all parameter combinations) over all nine subjects. From these averaged results, we selected the combination of linear model order(s) and update coefficient with the highest performance measure. For logBP, we simply selected the standard frequency bands 8--12 and 16--24 Hz (containing the alpha and beta bands) for all channels.

Evaluation {#Sec14}
----------

We evaluated all feature extraction methods in two different ways. First, we calculated the cross-validated (XV) classification accuracy *p*~0~ on the second session. Second, we estimated the session transfer (ST) by calculating classifier weights on the first session and computing the classification accuracy *p*~0~ on the second session. We carried out this evaluation for both individually and globally optimized features (see Sect. [2.2.4](#Sec8){ref-type="sec"}).

### Cross-validation (XV) {#Sec15}

With the optimized parameter values found in the optimization step (using data from the first session only), we calculated the cross-validated classification accuracy *p*~0~ on the second session. To this end, we used a classification procedure similar to the one described in Sect. [2.2.4](#Sec8){ref-type="sec"}. First, we extracted features from the second session. Next, we determined the best segment for classifier setup using a running classifier \[[@CR22]\]. As before, we divided each trial into 1 s segments with 0.5 s overlap. We used a combination of LDA classifiers in a one-versus-rest scheme; this classifier assigned each trial to the class with the highest discriminant value. We performed a leave-8-trials-out cross-validation, which means that we used segments of 280 trials to train and eight trials to test a classifier. We repeated this procedure until all segments had been used as a test set once. Finally, we averaged over all folds, and we calculated the 0.9 quantile of the cross-validated classification accuracy. That is, instead of reporting the maximum of the classification accuracy within a trial, we chose the 0.9 quantile as a more robust measure of performance, because it effectively removes outliers.

### Session transfer {#Sec16}

The ST estimates the performance of a real-world BCI system more realistically, but it requires a sufficiently high number of unseen test trials. In this analysis, we determined optimal parameters and classifier weights from the first session. After that, we extracted features from the second session and applied the classifier from the previous step. We used the same one-versus-rest classifier scheme as in the cross-validation analysis.
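For orientation, here is a minimal scikit-learn sketch of such a one-versus-rest LDA evaluation with a leave-8-trials-out split (36 folds of 288 trials). The feature matrix is a random placeholder, and the running-classifier segment selection and the 0.9-quantile computation described above are omitted, so this is only a skeleton of the procedure, not the authors' code:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import KFold
from sklearn.multiclass import OneVsRestClassifier

# Placeholder features: 288 trials x 12 features, labels 0..3 (four motor imagery tasks).
rng = np.random.default_rng(3)
X = rng.standard_normal((288, 12))
y = rng.integers(0, 4, size=288)

# One-vs-rest LDA: the predicted class is the one with the highest discriminant value.
clf = OneVsRestClassifier(LinearDiscriminantAnalysis())

# Leave-8-trials-out: 288 / 8 = 36 folds; train on 280 trials, test on 8.
accs = []
for train_idx, test_idx in KFold(n_splits=36, shuffle=False).split(X):
    clf.fit(X[train_idx], y[train_idx])
    accs.append(np.mean(clf.predict(X[test_idx]) == y[test_idx]))

print(f"mean accuracy over folds: {np.mean(accs):.3f}")
```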
### Statistical analysis {#Sec17}

We used repeated measures analysis of variance (ANOVA) to statistically analyze the classification results. First, we checked the sphericity assumption with Mauchly's sphericity test. Then, we performed the ANOVA and corrected the degrees of freedom if necessary. If we found significant effects, we used Newman--Keuls post-hoc tests to determine significant differences. Basically, we performed ANOVAs for XV and ST results separately. First, we wanted to assess differences over all seven feature extraction methods (factor "method"; 7 levels; UAR, xUAR, VAR, xVAR, BAR, xBAR, and logBP) and optimization strategies (factor "optimization"; 2 levels; individual and global). Second, we were also interested in differences between the three AR-based methods only (factor "method"; 3 levels; U, V, and B), the influence of the prediction error variance feature (factor "x"; 2 levels; yes or no), and the optimization strategies (factor "optimization"; 2 levels; individual or global). We repeated these analyses with both XV and ST results combined into a factor "ST/XV" (2 levels; ST and XV). In summary, we performed six repeated measures ANOVAs.

Results {#Sec18}
=======

Parameter optimization {#Sec19}
----------------------

Tables [2](#Tab2){ref-type="table"} and [3](#Tab3){ref-type="table"} show the results of the optimization procedure for the individual and global optimization, respectively. On average, univariate methods (UAR, BAR, xUAR, and xBAR) require a higher model order *p* than vector models (VAR and xVAR). The optimized values of the update coefficient UC are similar for all methods, except in the case of BAR for subjects A01 and A02, where the UC is significantly lower (see Fig. [1](#Fig1){ref-type="fig"}). This might be due to our optimization procedure, where we selected the parameter combination with the highest fitness function. However, only a slightly lower classification accuracy is associated with a log(UC) around −2.5, a value found for all other subjects.

Table 2: Results of parameter optimization for the AR-based methods UAR, VAR, and BAR without the prediction error variance feature

|        | UAR *p*~0~ | *p* | log(UC) | VAR *p*~0~ | *p* | log(UC) | BAR *p*~0~ | *p* | *q*  | log(UC) |
|--------|------------|-----|---------|------------|-----|---------|------------|-----|------|---------|
| A01    | 0.582      | 13  | −2.8    | 0.612      | 4   | −2.6    | 0.601      | 8   | 2, 2 | −0.8    |
| A02    | 0.446      | 6   | −3.0    | 0.461      | 6   | −2.8    | 0.461      | 14  | 1, 1 | −0.6    |
| A03    | 0.573      | 12  | −2.6    | 0.625      | 2   | −2.8    | 0.578      | 12  | 1, 3 | −2.6    |
| A04    | 0.418      | 10  | −2.2    | 0.395      | 4   | −2.2    | 0.421      | 12  | 2, 2 | −2.6    |
| A05    | 0.406      | 4   | −2.6    | 0.410      | 2   | −2.4    | 0.418      | 5   | 1, 2 | −2.2    |
| A06    | 0.429      | 15  | −2.2    | 0.434      | 12  | −2.2    | 0.457      | 15  | 1, 1 | −2.6    |
| A07    | 0.544      | 14  | −2.6    | 0.533      | 13  | −2.4    | 0.559      | 14  | 1, 3 | −2.6    |
| A08    | 0.635      | 15  | −2.4    | 0.673      | 4   | −2.4    | 0.639      | 5   | 1, 2 | −2.4    |
| A09    | 0.614      | 3   | −2.2    | 0.640      | 3   | −2.0    | 0.623      | 7   | 1, 2 | −2.2    |
| Global | 0.494      | 13  | −2.6    | 0.507      | 4   | −2.6    | 0.499      | 13  | 1, 1 | −2.4    |

All nine subjects (A01, A02, ...) are shown. Columns show the 0.9 quantile of the classification accuracy *p*~0~, the linear model order *p*, the bilinear model order *q*, and the update coefficient log(UC). The last row shows the results of the global optimization.
Table 3: Results of parameter optimization for the AR-based methods xUAR, xVAR, and xBAR (including the prediction error variance feature)

|        | xUAR *p*~0~ | *p* | log(UC) | xVAR *p*~0~ | *p* | log(UC) | xBAR *p*~0~ | *p* | *q*  | log(UC) |
|--------|-------------|-----|---------|-------------|-----|---------|-------------|-----|------|---------|
| A01    | 0.619       | 12  | −2.6    | 0.626       | 4   | −2.6    | 0.619       | 13  | 1, 1 | −2.6    |
| A02    | 0.509       | 8   | −2.8    | 0.506       | 4   | −3.0    | 0.509       | 8   | 1, 1 | −2.8    |
| A03    | 0.654       | 5   | −2.6    | 0.651       | 2   | −2.8    | 0.651       | 5   | 1, 1 | −2.6    |
| A04    | 0.410       | 18  | −2.0    | 0.400       | 3   | −2.0    | 0.425       | 15  | 2, 2 | −2.2    |
| A05    | 0.418       | 2   | −2.8    | 0.410       | 6   | −2.4    | 0.414       | 5   | 1, 2 | −2.2    |
| A06    | 0.436       | 2   | −2.2    | 0.434       | 13  | −2.2    | 0.457       | 15  | 1, 2 | −2.6    |
| A07    | 0.556       | 14  | −2.6    | 0.541       | 13  | −2.4    | 0.563       | 15  | 2, 3 | −2.0    |
| A08    | 0.654       | 16  | −2.4    | 0.677       | 4   | −2.6    | 0.639       | 4   | 1, 1 | −2.6    |
| A09    | 0.629       | 3   | −2.2    | 0.653       | 3   | −2.0    | 0.640       | 7   | 1, 2 | −2.2    |
| Global | 0.511       | 13  | −2.6    | 0.518       | 4   | −2.4    | 0.513       | 12  | 1, 1 | −2.4    |

The notation is the same as in Table [2](#Tab2){ref-type="table"}.

Fig. 1: Optimization results for subjects A01 (left) and A02 (right) for BAR with the best bilinear model order *q*. Maps show the 0.9 quantile of the classification accuracy for all parameter combinations of log(UC) (x-axis) and model order *p* (y-axis). The white cross marks the location of the maximum.

Finally, note that we used the achieved classification accuracies only within our optimization procedure. We report them here only for the sake of completeness, and stress that we did not use these accuracies for the evaluation of the methods. The evaluation results are described in the next section.

Evaluation {#Sec20}
----------

Using the optimal parameter combinations found in the optimization step, we evaluated the methods on the second session. Table [4](#Tab4){ref-type="table"} shows the results for the ST analysis, whereas Table [5](#Tab5){ref-type="table"} shows the cross-validated (XV) results. As expected, classification accuracies are generally higher in the cross-validated case than in the ST analysis. In both cases, there is no obvious difference in the means for the individual and global optimization.
The following paragraphs describe the outcomes of the statistical analyses.

Table 4: ST evaluation results (0.9 quantile of the classification accuracy) for each feature extraction method and optimization strategy on the second session. The last two rows show the mean and standard deviation (SD).

Individual optimization:

|      | UAR   | xUAR  | VAR   | xVAR  | BAR   | xBAR  | logBP |
|------|-------|-------|-------|-------|-------|-------|-------|
| A01  | 0.471 | 0.571 | 0.521 | 0.550 | 0.275 | 0.600 | 0.650 |
| A02  | 0.340 | 0.376 | 0.351 | 0.372 | 0.294 | 0.351 | 0.340 |
| A03  | 0.357 | 0.555 | 0.452 | 0.529 | 0.379 | 0.548 | 0.645 |
| A04  | 0.273 | 0.282 | 0.291 | 0.379 | 0.282 | 0.273 | 0.410 |
| A05  | 0.258 | 0.298 | 0.273 | 0.258 | 0.273 | 0.265 | 0.287 |
| A06  | 0.374 | 0.308 | 0.350 | 0.336 | 0.369 | 0.369 | 0.369 |
| A07  | 0.239 | 0.239 | 0.239 | 0.326 | 0.239 | 0.239 | 0.395 |
| A08  | 0.467 | 0.407 | 0.581 | 0.567 | 0.563 | 0.533 | 0.641 |
| A09  | 0.498 | 0.498 | 0.487 | 0.597 | 0.327 | 0.498 | 0.608 |
| Mean | 0.364 | 0.393 | 0.394 | 0.435 | 0.333 | 0.408 | 0.483 |
| SD   | 0.10  | 0.12  | 0.12  | 0.13  | 0.10  | 0.14  | 0.15  |

Global optimization:

|      | UAR   | xUAR  | VAR   | xVAR  | BAR   | xBAR  | logBP |
|------|-------|-------|-------|-------|-------|-------|-------|
| A01  | 0.554 | 0.611 | 0.521 | 0.575 | 0.511 | 0.593 | 0.596 |
| A02  | 0.312 | 0.379 | 0.390 | 0.411 | 0.326 | 0.390 | 0.351 |
| A03  | 0.360 | 0.467 | 0.563 | 0.599 | 0.419 | 0.511 | 0.601 |
| A04  | 0.260 | 0.282 | 0.317 | 0.292 | 0.269 | 0.300 | 0.441 |
| A05  | 0.291 | 0.291 | 0.258 | 0.276 | 0.273 | 0.284 | 0.305 |
| A06  | 0.360 | 0.355 | 0.294 | 0.318 | 0.369 | 0.355 | 0.369 |
| A07  | 0.239 | 0.290 | 0.239 | 0.250 | 0.239 | 0.264 | 0.471 |
| A08  | 0.481 | 0.481 | 0.548 | 0.585 | 0.552 | 0.504 | 0.641 |
| A09  | 0.259 | 0.304 | 0.380 | 0.517 | 0.270 | 0.430 | 0.601 |
| Mean | 0.346 | 0.384 | 0.390 | 0.425 | 0.359 | 0.403 | 0.486 |
| SD   | 0.11  | 0.11  | 0.13  | 0.15  | 0.11  | 0.11  | 0.13  |

Table 5: Cross-validated evaluation results (0.9 quantile of the classification accuracy) for each feature extraction method and optimization strategy on the second session. The last two rows show the mean and standard deviation (SD).

Individual optimization:

|      | UAR   | xUAR  | VAR   | xVAR  | BAR   | xBAR  | logBP |
|------|-------|-------|-------|-------|-------|-------|-------|
| A01  | 0.621 | 0.664 | 0.611 | 0.629 | 0.318 | 0.657 | 0.650 |
| A02  | 0.382 | 0.420 | 0.427 | 0.444 | 0.299 | 0.392 | 0.375 |
| A03  | 0.502 | 0.603 | 0.610 | 0.658 | 0.518 | 0.603 | 0.680 |
| A04  | 0.425 | 0.463 | 0.408 | 0.421 | 0.379 | 0.454 | 0.458 |
| A05  | 0.411 | 0.377 | 0.406 | 0.400 | 0.364 | 0.363 | 0.293 |
| A06  | 0.332 | 0.355 | 0.309 | 0.356 | 0.326 | 0.333 | 0.424 |
| A07  | 0.489 | 0.504 | 0.482 | 0.518 | 0.496 | 0.411 | 0.418 |
| A08  | 0.620 | 0.612 | 0.642 | 0.645 | 0.599 | 0.609 | 0.647 |
| A09  | 0.599 | 0.613 | 0.619 | 0.646 | 0.608 | 0.615 | 0.615 |
| Mean | 0.487 | 0.512 | 0.502 | 0.524 | 0.434 | 0.493 | 0.507 |
| SD   | 0.11  | 0.11  | 0.12  | 0.12  | 0.12  | 0.13  | 0.14  |

Global optimization:

|      | UAR   | xUAR  | VAR   | xVAR  | BAR   | xBAR  | logBP |
|------|-------|-------|-------|-------|-------|-------|-------|
| A01  | 0.639 | 0.664 | 0.611 | 0.646 | 0.618 | 0.664 | 0.614 |
| A02  | 0.406 | 0.410 | 0.417 | 0.392 | 0.385 | 0.406 | 0.377 |
| A03  | 0.496 | 0.599 | 0.627 | 0.647 | 0.518 | 0.610 | 0.603 |
| A04  | 0.430 | 0.451 | 0.391 | 0.405 | 0.421 | 0.434 | 0.524 |
| A05  | 0.375 | 0.367 | 0.404 | 0.407 | 0.393 | 0.389 | 0.329 |
| A06  | 0.343 | 0.350 | 0.358 | 0.378 | 0.356 | 0.343 | 0.424 |
| A07  | 0.486 | 0.507 | 0.532 | 0.543 | 0.493 | 0.500 | 0.489 |
| A08  | 0.600 | 0.598 | 0.645 | 0.632 | 0.602 | 0.602 | 0.643 |
| A09  | 0.581 | 0.577 | 0.672 | 0.683 | 0.581 | 0.570 | 0.624 |
| Mean | 0.484 | 0.503 | 0.517 | 0.526 | 0.485 | 0.502 | 0.514 |
| SD   | 0.10  | 0.11  | 0.13  | 0.13  | 0.10  | 0.11  | 0.12  |

### Overall comparison {#Sec21}

A two-way repeated measures ANOVA for the ST case (factors "method" and "optimization") found a significant main effect of "method" (*F*~6,48~ = 8.104, Greenhouse-Geisser-adjusted *P* \< 0.01). A Newman--Keuls post-hoc test found that logBP is significantly better than all six AR-based methods (mean classification accuracies of 0.355, 0.389, 0.392, 0.430, 0.346, 0.406, and 0.485 for UAR, xUAR, VAR, xVAR, BAR, xBAR, and logBP, respectively). Furthermore, xVAR is significantly better than both UAR and BAR. The factor "optimization" was not significant (*F*~1,8~ = 0.030, *P* = 0.87). In the XV case, an ANOVA with the same factors as in the ST analysis also found a significant main effect of "method" (*F*~6,48~ = 3.247, *P* \< 0.01). A Newman--Keuls post-hoc test revealed that BAR (mean accuracy of 0.460) is significantly worse than xUAR, VAR, xVAR, and logBP (mean accuracies of 0.507, 0.509, 0.525, and 0.510, respectively). Again, the factor "optimization" was not significant (*F*~1,8~ = 2.901, *P* = 0.13). We also conducted a repeated measures ANOVA as described above for the combined evaluation results (that is, we combined the ST and XV results and introduced a new factor "ST/XV").
This analysis yielded significant main effects of "ST/XV" (*F*~1,8~ = 22.797, *P* \< 0.01) and "method" (*F*~6,48~ = 6.700, *P* \< 0.01), as well as a significant interaction between "ST/XV" and "method" (*F*~6,48~ = 5.746, Greenhouse-Geisser-adjusted *P* \< 0.01). Post-hoc tests showed that XV results (mean accuracy 0.499) are significantly higher than ST results (0.400). Furthermore, logBP yielded significantly higher results than UAR, VAR, BAR, and xBAR. BAR was significantly worse than xUAR, VAR, xVAR, and logBP. Finally, xVAR was significantly better than UAR. The mean accuracies for UAR, xUAR, VAR, xVAR, BAR, xBAR, and logBP were 0.420, 0.448, 0.451, 0.477, 0.403, 0.452, and 0.497, respectively.

### Comparison of AR-based methods {#Sec22}

We also analyzed the six AR-based methods in more detail and performed three-way repeated measures ANOVAs (factors "method", "x", and "optimization"). In the ST case, we found significant main effects of "method" (*F*~2,16~ = 3.939, *P* \< 0.05) and "x" (*F*~1,8~ = 6.324, *P* \< 0.05). Post-hoc tests revealed that vector methods (mean accuracy of 0.411) are significantly better than bilinear methods (mean accuracy of 0.376). Furthermore, methods including the prediction error variance are significantly better (mean accuracy 0.408) than their counterparts without this additional feature (mean accuracy 0.364). In the XV case, we found a significant main effect of "method" (*F*~2,16~ = 6.753, *P* \< 0.01). Post-hoc tests showed that vector models (mean accuracy of 0.517) are significantly better than bilinear models (mean accuracy of 0.479). Finally, we analyzed the six AR methods for the combined ST and XV results (by introducing the factor "ST/XV"). We found significant main effects of "ST/XV" (*F*~1,8~ = 20.604, *P* \< 0.01), "method" (*F*~2,16~ = 5.597, *P* \< 0.05), and "x" (*F*~1,8~ = 6.778, *P* \< 0.05). Post-hoc tests showed that cross-validated results (mean accuracy 0.497) were significantly higher than ST results (mean accuracy 0.386). Furthermore, vector models (mean accuracy of 0.464) were significantly better than both univariate and bilinear models (mean accuracies of 0.434 and 0.427, respectively). Finally, results were significantly higher for methods using the prediction error variance feature (mean accuracy of 0.459) compared to methods that did not use this feature (mean accuracy of 0.425).

Discussion {#Sec23}
==========

In summary, logBP features yielded the highest classification results in this study. In the ST analysis, where features and classifiers are determined on the first session and then applied to the second (completely unseen) session, logBP was significantly better than all AR-based methods. When assessing this result in more detail, we found that it might be due to our optimization and evaluation procedure, which resembles a practical BCI setup. In such a setup, users control the BCI in different sessions on different days, and only data from previous sessions can be used to tune parameters. However, this only works if the features are stable over sessions, that is, if the bias of the classifiers does not change significantly. In fact, it turned out that all AR methods led to a much higher bias in the second session compared to logBP features, where the bias was about as small as in the first session. A statistical analysis comparing all feature extraction methods after adapting the bias in the second session resulted in no significant differences in the ST analysis.
Therefore, adapting the bias of the classifier \[[@CR15]\] or using adaptive classifiers \[[@CR12], [@CR38], [@CR41]\] to improve ST is necessary for AR features. Due to the high dimensionality of the feature space in our globally optimized features (see Tables [2](#Tab2){ref-type="table"}, [3](#Tab3){ref-type="table"}), and because similarly high classification accuracies could be obtained for lower model orders in the optimization step, we assessed the performance of univariate models with a lower model order of *p* = 5 for all subjects. It turned out that classification accuracies improved slightly, but statistical analyses showed that the overall results did not change. That is, all results described above are also valid for univariate models with lower model orders. Therefore, we can safely rule out overfitting effects that might have explained the inferior performance of (univariate) AR models, especially in the ST analysis. Other studies such as \[[@CR20]\] have also found similarly high or higher model orders (although they did not use AR coefficients directly for classification, but calculated the power spectrum). Furthermore, we have shown that optimizing parameters for individual subjects does not result in better classification rates. Indeed, there was no significant difference between globally and individually optimized parameters. This implies that using logBP with default bands (8--12 and 16--24 Hz) works as well as with subject-specific bands. Note that we used bipolar channels in this study, which is very common in BCI research \[[@CR1], [@CR6], [@CR16], [@CR17], [@CR23], [@CR32], [@CR31], [@CR40]\]. Had we used subject-specific spatial filters such as CSP, subject-specific bands might have yielded better results than default bands \[[@CR4]\]. The comparison of all analyzed AR methods showed that vector models yielded higher classification results than both univariate and bilinear models. On the one hand, this is not surprising, because vector models consider more information, namely the relationships between individual signals. On the other hand, the potentially more accurate signal description with bilinear models could not be translated into improved classification results. This could be due to two reasons: first, the EEG might not contain signal characteristics that cannot be described by linear models; or second, although bilinear signal properties might improve the model fit, they do not contribute discriminative information for BCIs. Clearly, all AR methods benefited from the inclusion of the prediction error variance as an additional feature. This feature makes initialization of parameters even more important, because the prediction error variance is updated directly with the update coefficient UC. Without initialization to suitable values, it would take a long time until this feature was in its operating range. This underscores the importance of estimating good initial values, for example with a first run over the optimization data set as implemented in our study. In conclusion, logBP is superior to AR-based methods, at least with the procedure and implementation used in this study. However, as described above, the performance of AR features can be improved when adapting the bias of the classifiers in new sessions \[[@CR21], [@CR41]\]. We also found that low model orders generalized better, and the high model orders determined in our optimization step on the first session resulted in significantly lower classification accuracies on the unseen second session. 
Moreover, for the settings used in this study (which are very common in BCI experiments), it is not necessary to optimize features for each user individually; globally optimized parameters for all users yield equally high classification rates. In particular, we recommend using low model orders (such as a model order of 5) for univariate models to ensure generalization of the features. Finally, vector models should be preferred over univariate models, and the prediction error variance improved the classification performance of all AR models. Future studies should apply these findings to online BCIs, where users receive feedback based on their brain patterns, for example to control a prosthesis \[[@CR13]\]. Although we are confident that our results will generalize to online sessions with feedback, we are currently working on an online study to verify our findings. Another follow-up study could explore the combination of AR and logBP features to assess whether they contain complementary information about the data.

This study was supported by the Austrian Science Fund FWF, P20848-N15 (Coupling Measures in BCIs), PASCAL-2 ICT-216886, and MU 987/3-2. This publication only reflects the authors' views. Funding agencies are not liable for any use that may be made of the information contained herein. We would also like to thank Alois Schlögl for helpful discussions and Vera Kaiser for support with statistical questions.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

<http://www.bbci.de/competition/iv/>. <http://biosig.sourceforge.net/>. <http://www.mathworks.com/>.
Terms and Conditions

Any service or product provided by CRN Consulting Pty Ltd shall be subject to the following disclaimer, and to the "CRN Consulting Pty Ltd - General Terms and Conditions of Sale_Rev2" that can be supplied in pdf format on request, and which are provided in text form below the disclaimer.

Disclaimer: "Any and all services and advice, whether written, in electronic form or verbal, provided by CRN Consulting shall not be considered as, or take the form of, legal advice. CRN Consulting recommends that customers should, at their own expense, seek legal advice to the extent they deem necessary. CRN Consulting provides no promise, warranty or claim that any service or advice provided by CRN Consulting shall prevent the customer from suffering claims for damages arising from any cause or arising under any theory of law. CRN Consulting means CRN Consulting Pty Ltd and all directors, officers and employees of CRN Consulting Pty Ltd."

___________________________________________________________________________________________

CRN Consulting Pty Ltd - General Terms and Conditions for Sale_Rev2

Sale of any Services is expressly limited to acceptance of these Terms and Conditions by Purchaser. Any order to perform work and Supplier's performance of work shall constitute Purchaser's acceptance of these Terms and Conditions.

1. Definitions

Confidential Information means the fact that Supplier is providing Services to the Purchaser and all information disclosed in writing or otherwise to the Purchaser by the Supplier, or to the Supplier by the Purchaser, but shall exclude any information that: (a) is or becomes generally available to the public other than as a result of disclosure by a Receiving Party; (b) is or becomes available to Receiving Party on a non-confidential basis from a source other than Disclosing Party; (c) is independently developed by Receiving Party, or (d) is required to be disclosed by law or valid legal process.

Contract means these Terms and Conditions together with the Price and Services detailed in any quotation or invoice issued by the Supplier.

Force Majeure Event means any event as a direct or indirect result of which a Party is prevented from performing any of its obligations under the Contract, that is beyond the reasonable control of that Party and is not the direct or indirect result of the failure of that Party to perform any of its obligations under the Contract, and includes act of war (whether declared or not) or terrorism, civil commotion or riot, act of God, natural disaster, epidemic, industrial action or labour disturbance, and action or inaction by a government agency.

Hazardous Materials means any toxic or hazardous substance that is regulated, listed or controlled pursuant to any national, state, provincial, or local law, statute, ordinance, directive, regulation or other legal requirement of Australia.

Hazardous Conditions means any conditions prevalent at a Site that the Supplier reasonably determines pose a threat to the safety or security of the Supplier's personnel, including the presence of Hazardous Materials.

Insolvent/Bankrupt means that a party is insolvent, makes an assignment for the benefit of its creditors, has a receiver or trustee appointed for it or any of its assets, or files or has filed against it a proceeding under any bankruptcy, insolvency, dissolution or liquidation laws.
Legal Advice means advice or recommendations, whether written, oral or in electronic form, that has been provided by a qualified and registered legal professional.

Price means the agreed price stated in the Contract for the provision of Services.

Purchaser means the individual or entity to which Supplier is providing Services under the Contract.

Sanctioned Country means any country that appears on the prevailing list of countries subject to United Nations Security Council Sanctions or Australian Autonomous Sanctions as published by the Australian Department of Foreign Affairs and Trade.

Services means the services Supplier has agreed to perform for Purchaser under the Contract.

Site means any premises or location that the Supplier attends at the Purchaser's request.

Supplier means CRN Consulting Pty Ltd, 3 Prase Place, Carine, WA 6020, Australia; ABN 61 167 658 791.

Terms and Conditions means these "General Terms and Conditions for Sale", and any amendments agreed in writing by both parties prior to commencement of Services.

2. Payment

Purchaser shall pay Supplier for the Services by paying all invoiced amounts, in one of the forms stated on the invoice, in Australian dollars within fifteen (15) days from the invoice date. For each calendar month, or fraction thereof, that payment is late, Purchaser shall pay interest at the then prevailing Reserve Bank of Australia prime lending rate plus 1.5%.

3. GST

The Price shall include GST, and Purchaser shall pay to the Supplier any such GST, as required by the then prevailing legislation.

4. Supplier's Obligations and Warranty

4.1 Supplier warrants that Services shall be performed in a competent, diligent manner and delivered in accordance with the agreed schedule.

4.2 Purchaser acknowledges that there is no actual or implied Legal Advice provided or contained in a) Supplier's obligations, b) the Services provided by the Supplier, or c) any verbal or written advice or recommendations provided by the Supplier.

4.3 If the Supplier is notified within one (1) year of delivery that the Services have not met this warranty standard, Supplier shall re-perform the defective Services or portion thereof at Supplier's expense. No renewal of the warranty period shall apply to re-performed Services.

4.4 If delivery of the Services is delayed due to the fault of the Supplier, the Price shall be reduced by 0.5% per day for every day of delay to an aggregate limit of 10% of the Price.

4.5 This Clause 4 states the Supplier's exclusive liability for all claims based on failure of or defect in or delay in delivery of Services, regardless of when the defect arises, howsoever a claim is described and on what theory of law a claim is based. The Supplier shall bear no warranty obligation or any liability whatsoever for Services that have been altered without the prior written consent of the Supplier.

5. Purchaser's Obligations

5.1 Purchaser shall not seek, or claim to have sought, Legal Advice from the Supplier.

5.2 Should the Purchaser wish to do so, the Purchaser may seek independent Legal Advice pertaining to the subject for which the Services are provided at the Purchaser's own expense.

6. Confidentiality

6.1 Supplier and Purchaser (the "Disclosing Party") may each provide the other party (the "Receiving Party") with Confidential Information.
6.2 Receiving Party agrees: (a) to use the Confidential Information only in connection with the Contract and use of Services, (b) to take reasonable measures to prevent disclosure of the Confidential Information to third parties, and (c) not to disclose the Confidential Information to a competitor, customer or supplier of Disclosing Party, noting however that a Party may disclose Confidential Information in defence of any claim for damages by any party, and a Receiving Party may disclose Confidential Information to any party with the prior written permission of Disclosing Party.

6.3 The restrictions under this Clause 6 shall expire three (3) years after the date of disclosure.

7. Force Majeure Event

Should a Force Majeure Event occur, the obligations of the affected Party shall be suspended, and the affected Party shall have no liability to the other Party, until such time as the event has been overcome. If acts or omissions of the Purchaser or its customers or suppliers cause the delay, Supplier shall be entitled to an equitable extension of time for delivery of Services.

8. Termination and Suspension

Either party may terminate the Contract for cause if the other party becomes Insolvent/Bankrupt or commits a material breach of the Contract, provided that the party in default is given an opportunity to remedy the default within fifteen (15) days, or if suspension due to a Force Majeure Event has continued for longer than forty-five (45) days.

9. Compliance with Laws, Codes and Standards

9.1 Supplier shall comply with laws applicable to the performance of Services. Purchaser shall comply with laws applicable to the use of the Services.

9.2 Supplier shall be relieved of all obligations and Purchaser shall indemnify Supplier from all liability in the event that the Purchaser or the Purchaser's customer or supplier a) conducts business in or with an entity based in a Sanctioned Country, or b) commits any act or omission that results in (i) money laundering, (ii) fraud, or (iii) a breach of any Australian or applicable international law or regulation in respect of human rights, environment, safety or corporate governance. Any of the above events shall be considered a material breach of contract, and the Purchaser shall indemnify the Supplier from all claims, losses, damages and expenses incurred by the Supplier as a consequence of the event.

10. Environmental, Health and Safety Matters

10.1 At any Site the Purchaser shall ensure, at Purchaser's expense, a) that Supplier receives all appropriate induction and safety training, b) that Supplier is provided with any applicable personal protection equipment, and c) that Supplier is escorted by Purchaser's representative at all reasonable times.

10.2 If Supplier encounters Hazardous Conditions at a Site, Supplier shall be escorted from the Site until Purchaser eliminates the hazardous conditions. Supplier shall be entitled to a Price and schedule adjustment commensurate with any productive time lost due to such an event.

11. Changes

Either party may at any time propose changes in the schedule or scope of Services. Supplier is not obligated to proceed with any change until both parties agree in writing upon adjustments to Price and schedule pertaining to the change.

12. Liability

12.1 The total liability of Supplier for all claims, under any theory of law and of any kind arising from or related to the performance or breach of this Contract, or any Services, shall not exceed the Price.
12.2 Neither party shall be liable for loss of profit or revenues, interruption of business, increased operating costs, any special, consequential, incidental, indirect, or punitive damages, or claims from a party's customers or supplier for any such damages.

12.3 Purchaser shall indemnify Supplier from any claims made by Purchaser's customer or supplier based on the provision or use of the Services.

12.4 To the extent permitted by law, the liability of a Party shall end upon expiration of the warranty period.

13. Governing Law and Dispute Resolution

13.1 This Contract shall be governed by and construed in accordance with the laws of the State of Western Australia, Australia (the "Governing Law").

13.2 All disputes arising in connection with this Contract shall be resolved by negotiations between the parties or, failing such resolution within 30 days, the matter shall be referred to the Alternative Dispute Resolution Service of the SBDC in Western Australia.

14. General Clauses

14.1 If any Contract provision is found to be void or unenforceable, the remainder of the Contract shall not be affected. The parties will endeavour to replace any such void or unenforceable provision with a new provision that achieves substantially the same practical and economic effect and is valid and enforceable.

14.2 The following Clauses shall survive termination or cancellation of the Contract: 2, 3, 4, 5, 6, 9, 12, 13, and 14.

14.3 The Contract represents the entire agreement between the parties. No modification, amendment, rescission or waiver shall be binding on either party unless agreed in writing.
http://crnconsulting.com.au/?q=node/7
VanSandt's main focus in teaching is to help students develop their critical thinking skills, to get them to question conventional wisdom, and to help them become engaged, productive citizens. He chose to teach in the business school because economic activity has become, by far, the dominant social institution in our society. His content focus is on business ethics, organizational culture and strategy.

Research Interests

The role of business in society, social entrepreneurship and the ability of business to help alleviate poverty.

Responsibilities

As the David W. Wilson Chair in Business Ethics, VanSandt is charged with fostering awareness, discussion, and debate about ethical practices in business; educating students and the community about the social and ethical issues facing business; establishing UNIBusiness as Iowa's best recognized platform for business ethics and the Wilson Chair as the most prominent authority of business ethics; acting as a catalyst for high quality research and debate regarding business ethics and the role of business in society; and enabling business to embrace an expanded role in promoting the common good. He also serves on the leadership team of the Center for Academic Ethics.

Professional Accomplishments

VanSandt has had numerous papers published in Journal of Business Ethics, Journal of Management Education, Business & Society, Law & Policy, and Journal of Organization Theory and Behavior; presented dozens of papers at national academic conferences; twice voted by graduating seniors to present the "Last Lecture"; Fellow of the Washington Internship Institute, working with the Institute for Health Policy Solutions; visiting professor at the U. S. Army War College; finalist for the Best Dissertation Award by the Social Issues in Management division of the Academy of Management.
https://business.uni.edu/faculty-staff/craig-vansandt
A research team, led by astronomers from the National Astronomical Observatories of China (NAOC), Chinese Academy of Sciences, has discovered the most lithium-rich giant known to date, with a lithium abundance 3,000 times higher than that of normal giants. The star lies in the direction of Ophiuchus, on the north side of the Galactic disk, at a distance of 4,500 light years from Earth.

The findings were made with the help of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), a special quasi-meridian reflecting Schmidt telescope located at the Xinglong Observatory of NAOC in northern China. The telescope can observe about 4,000 celestial bodies at one time and has made a massive contribution to the study of the structure of the Galaxy. The results of the study were published online in Nature Astronomy on August 6th, 2018.

Lithium, atomic number 3, is considered one of the three elements synthesized in the Big Bang, together with hydrogen and helium. The abundance of these three elements was regarded as the strongest evidence of the Big Bang. The evolution of lithium has been widely studied in modern astrophysics; however, only a few giants have been found to be lithium-rich in the past three decades. This makes the study of lithium remarkably challenging. "The discovery of this star has largely increased the upper limit of the observed lithium abundance, and provides a potential explanation to the extremely lithium-rich case," said Prof. ZHAO Gang.

Detailed information on the star was obtained through a follow-up observation with the Automated Planet Finder (APF) telescope at Lick Observatory. Besides measuring the anomalously high lithium abundance, the research team also proposed a possible explanation for the lithium-rich phenomenon through a nuclear network simulation with up-to-date atomic data as input.

The research team was led by Dr. YAN Hongliang, Prof. SHI Jianrong and Prof. ZHAO Gang from NAOC. Scientists from five other institutions, including the China Institute of Atomic Energy and Beijing Normal University, also joined the team.

Completed in 2008, LAMOST began its regular survey mission in 2012. After its six-year regular survey, LAMOST has provided Chinese scientists with a final catalogue of about 10 million spectra, establishing the world's largest databank of stellar spectra this June.
https://www.eurekalert.org/news-releases/906971
It stands to reason that the most difficult cases to investigate are those in which the individual’s death was sudden, unexpected, and unexplained. Very few deaths occur where the deceased has no significant medical history, no trauma, no significant autopsy findings, and very little social history in which to investigate. Infant deaths almost always fit this category, making them consistently the most complicated and challenging deaths to investigate. Investigating Infant Deaths draws on the expertise of a forensic nurse and member of the CDC core team for the Sudden Unexpected Infant Death Investigation Reporting Form to provide medicolegal death investigators and law enforcement personnel with investigative techniques applicable to sudden unexpected infant deaths. Beginning with a general state-of-the-field, the author defines the role of the investigator and explains the benefits of “double-teaming” an investigation. The book emphasizes the importance of timing and gives crucial tips for examining the incident scene and performing an initial post-mortem external exam. Specific instruction regarding the “art” of interviewing the grieving parents and how to follow up with families gives investigators an important edge when autopsy findings are slim. Additional chapters cover how to use a doll re-enactment and how to review medical records, social service records, and criminal histories. It also illustrates how to set up task forces including State Child Fatality Teams and an Investigative Child Death Review at the local level. Case studies are used throughout the book to give investigators real-life examples of the techniques at work. Presenting a workable approach that may facilitate a re-evaluation of current protocols, Investigating Infant Deaths provides the tools for continued improvement that will ensure all infant deaths are investigated thoroughly and thereby help prevent future premature infant deaths from occurring. “In this book, B. O’Neal, a registered nurse and board certified medicolegal death investigator with the American Board of Medicolegal Death Investigators, gives an extensive overview on the investigation of infant deaths including epidemiological background, the significance of the emergency helpers, the initial post-mortem external assessment, the death scene investigation, the parental interview, the forensic autopsy, the analysis of the medical history, the case conference and the family or caregiver follow-up. “The book shows that the author has an extended knowledge in this field and, moreover, a long-standing experience in doing such investigations. The book integrates different resources, refers the situation in the US, and reflects the relevant literature in this field. It is an excellent guideline for performing death investigations in infants. Besides the general overview, a number of special problems/tasks are discussed in great detail and always from a practical point of view. However, good experiences from outside the US, for example from projects in Australia or Norway, are not referred. “In summary, the book gives an excellent overview on all the relevant questions and problems in relation to the investigation of infant deaths and can be used as guideline for practical casework. It can be recommended without any limitation for specialists in legal medicine and specialists from other fields involved in the investigation of these deaths. “ — T. 
Bajanowski, Institute of Legal Medicine, University of Duisburg-Essen, in the International Journal of Legal Medicine (2008) 122:267

"Anyone who participates in the investigation of infant death, or who may find themselves involved in infant medical resuscitation attempts, will find conceptual and practical guidance for what is often a daunting task." —Jennifer R. Schindell, Deputy Medical Examiner / Forensic Nurse, Linn and Benton Counties, Corvallis, Oregon, writing in Journal of the International Association of Forensic Nurses

Table of Contents
- Investigating Infant Deaths: Why Is It Important? (The Process; The Impact; Why Are They Difficult to Investigate?; What Is Being Done about It?; Current Statistics; Goal and Text Organization)
- Maternal and Infant Health: What Investigators Should Know (Maternal Health; Infant Health; Infant Growth and Development)
- The Investigation Begins: Timing Is Everything (A Common Beginning; Think outside the Box; Advanced Notification; Following Death Notification; Double-Team Approach)
- First Responders: Their Observations Are Important (9-1-1 Emergency Calls; First Responders)
- Initial Postmortem External Assessment (Logistics; Photodocumentation; Performing the External Assessment; Equipment Needed; Assessing the Infant’s General Appearance; Skin Assessment; Body Diagram; Physical Evidence; Transporting the Body)
- Infant Death Scene Investigation: It Tells a Story (Death Scene Investigation: When Does It Begin?; Macro vs. Micro: What Is the Difference?; Documenting the Macroenvironment; Documenting the Microenvironment; Evidence or Chain of Custody; Day-Care Centers)
- The Art of Interviewing (Interview Basics; Types of Interviews; Initial Interview; Clarification Interview; Follow-Up Interview; Interrogation)
- Doll Reenactment with Scene Walk-Through (The Opposition; Benefits of Doll Reenactments; Preplanning: Doll Reenactment and Scene Walk-Through Equipment; Preplanning the Doll Reenactment: What to Consider; Explain Procedure to Law Enforcement; Explain Procedure to Witnesses; Initiating the Scene Walk-Through and Doll Reenactment; Investigative Concerns)
- The Forensic Autopsy: The Investigator’s Role (Why Should MDIs Be Informed?; Autopsy Basics; Communication; The Infant Forensic Autopsy: Before, During, and After)
- Records Review: Let the Records Speak (HIPAA; Infant’s Medical Record Review; Pharmacy Records; State Health and Home Health Agencies; Family Medical Records Review; Social Service Records Review; Birth Certificates; School Records; Criminal Records Review; Previous Infant Deaths)
- Child Death Review: It Brings It All Together (History; Types of CDRs; ICDR; Retrospective CDRs)
- Family or Caregiver Follow-Up and Referral (Family Follow-Up; Family Referrals; Where Do We Go from Here?)
https://www.routledge.com/Investigating-Infant-Deaths/ONeal/p/book/9780849382048
Geology, Geophysics and Resources of the Caribbean
IDOE Workshop on the Geology and Marine Geophysics of the Caribbean Region and its Resources (1975 : Kingston, Jamaica)
Published 1977 in [ ] : Intergovernmental Oceanographic Commission, UNESCO. Written in English.

Edition Notes
- Statement: editor, John D. Weaver.
- LC Classifications: MLCM 83/4193 (Q)
- Pagination: 150 p., leaves of plates (4 fold.) : ill. ; 23 cm.
- Number of Pages: 150
- Open Library: OL4442088M
- LC Control Number: 79103623

Geology, geophysics and resources of the Caribbean: report of the IDOE Workshop on the Geology and Marine Geophysics of the Caribbean Region and Its Resources. Chapters on marine geology and geophysics are new syntheses for the entire Caribbean region. Highlights of the volume include extensive bibliographies and new syntheses of stratigraphic-lithologic columnar sections, seismicity, gravity and magnetic anomalies, neotectonic features, resource data, and crustal properties.
https://tupikuvogamaq.catholicyoungadultsofsc.com/geology-geophysics-and-resources-of-the-caribbean-book-35396db.php
KFTC’s statewide Annual Membership Meeting will be held August 22 through August 24 this year at General Butler State Park in Carrollton. The annual meeting is a time for all KFTC members to gather together to celebrate great work over the past year, learn new skills and hold the yearly business meeting. During the business portion of the meeting, members will consider the proposed changes to our platform, elect statewide officers and accept new or renewing chapters for the coming year. Our meeting's theme this year is "From the Grassroots to the Mountaintop: Empowering Grassroots Leaders," and the weekend will focus on ways to build our grassroots power and leadership capacity.

Leadership development is KFTC’s highest priority as we work to build grassroots power and win important issues. Across Kentucky, in statewide and local campaigns, hundreds of KFTC leaders are deeply engaged and actively leading others. These leaders grow through skills training, mentoring, exchange with other groups and on-the-job practice. KFTC’s success depends on the involvement and commitment of thousands of people, including you! Join us this weekend to learn what leadership means to KFTC and gain skills to become a leader in KFTC and your own community!

As we come back together, we’ll turn our attention to the important work of building grassroots leaders here in Kentucky. After some lively table discussions, we’ll hear from a panel of leaders from different organizations in Kentucky. Together they will explore questions of what leadership is, the value and importance of grassroots leadership, and lessons and best practices from grassroots leadership development.

- We are our best hope for change: Effective Grassroots Leadership – This workshop will explore what it means to be a grassroots leader in KFTC.
- Building the Muscles of Democracy! Recruitment, Canvassing, and Get Out The Vote – In this workshop we will discuss ways to begin building a New Power voting bloc in your community, including developing the necessary skills and strategies aimed at getting more people involved in the electoral process.
- Listen and be heard: Resolving conflict skillfully – How do you respond when conflicts arise? How effective are you in difficult conversations? Conflict resolution skills are important in life and leadership. Join us for an interactive session about ways to improve our ability to listen, respond and provide leadership in difficult situations.
- Making Waves: The power of youth organizing – Young people have played important roles in justice movements throughout history. Join us at this workshop to learn how young people are making change today and tools for engaging and supporting youth in your community.
- Building New Power: Organizing across lines of difference – Our identities and life experiences are an important part of who we are and how we organize. Join us at this workshop to learn how to build power and organize across lines of difference in your community.
- Making It Sing! Using art and culture in community organizing – Art and culture are an important part of our lives, and they can be important and powerful tools for change. Join us at this workshop to learn tools and strategies for integrating art and culture in community organizing.

Take a break and relax at General Butler among friends. Swimming, hiking, tennis, and much more are available for some fun.
Self-care is Radical, too: A Self-Care Workshop for Activists – This optional workshop will create a space for participants to learn skills to enhance their self-care practice and recognize that wellness is a vital part of activism. Basic yoga and meditation practice will be included.

Join us for the final set of breakout workshops (choose one). These morning workshops will all share an emphasis on bringing others into our work through stories, art and culture. Each breakout will focus on sharing and having conversations with others through a particular issue-focused lens. Pick the breakout issue that you are most interested in learning how to talk about with your neighbors and people in your community.

Join us for this annual opportunity to vote on KFTC’s proposed platform of issues, elect officers, accept new and returning chapters and more. The business meeting is the heart of KFTC’s own democratic structure.
http://www.kftc.org/events/kftcs-2014-annual-membership-meeting
---
abstract: 'Let ${X}$ be a proper, geodesically complete Hadamard space, and $\ \Gamma<{\mbox{Is}}({X})$ a discrete subgroup of isometries of ${X}$ with the fixed point of a rank one isometry of ${X}$ in its infinite limit set. In this paper we prove that if $\Gamma$ has non-arithmetic length spectrum, then the Ricks’ Bowen-Margulis measure – which generalizes the well-known Bowen-Margulis measure in the CAT$(-1)$ setting – is mixing. If in addition the Ricks’ Bowen-Margulis measure is finite, then we also have equidistribution of $\Gamma$-orbit points in ${X}$, which in particular yields an asymptotic estimate for the orbit counting function of $\Gamma$. This generalizes well-known facts for non-elementary discrete isometry groups of Hadamard manifolds with pinched negative curvature and proper CAT$(-1)$-spaces.'
---

<span style="font-variant:small-caps;">Gabriele Link$^*$</span>

Introduction
============

Let $({X},d)$ be a proper Hadamard space, $x$, $y\in{X}$ and $\Gamma<{\mbox{Is}}({X})$ a discrete group. The [[**]{}Poincaré series]{} of $\Gamma$ with respect to $x$ and $y$ is defined by $$P(s;x,y):=\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-sd(x,\gamma y)};$$ its exponent of convergence $$\label{critexpdef} \delta_\Gamma :=\inf\{s>0\colon \sum_{\gamma\in\Gamma} {\mathrm{e}}^{-sd(x,\gamma y)}\ \text{converges}\}$$ is called the [[**]{}critical exponent]{} of $\Gamma$. By the triangle inequality the critical exponent is independent of $x,y\in{X}$. We will require that the critical exponent $\delta_\Gamma$ is [[**]{}finite]{}, which is not a severe restriction as this is always the case when ${X}$ admits a compact quotient or when $\Gamma$ is finitely generated. Obviously $P(s;x,y)$ converges for $s>\delta_\Gamma $ and diverges for $s<\delta_\Gamma $. The group $\Gamma$ is said to be [[**]{}divergent]{} if $P(\delta_\Gamma;x,y)\, $ diverges, and [[**]{}convergent]{} otherwise.
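To fix ideas with a standard example (which plays no role in the sequel): let $\Gamma$ be the free group of rank two acting on its Cayley tree ${X}$ with unit edge lengths, a proper, geodesically complete Hadamard space. For a vertex $x=y$ there are exactly $4\cdot 3^{n-1}$ orbit points at distance $n\ge 1$ from $x$, so the number of orbit points at distance at most $R$ equals $2\cdot 3^{\lfloor R\rfloor}-1$ and $$P(s;x,x)=1+\sum_{n\ge 1} 4\cdot 3^{n-1}\,{\mathrm{e}}^{-sn}.$$ This series converges if and only if $s>\ln 3$, hence $\delta_\Gamma=\ln 3$; at $s=\delta_\Gamma$ every summand equals $4/3$, so $P(\delta_\Gamma;x,x)$ diverges and $\Gamma$ is divergent.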
Since ${X}$ is proper, the [[**]{}orbit counting function]{} with respect to $x$ and $y$ $$\label{orbitcountdef} N_\Gamma:[0,\infty)\to [0,\infty),\quad R\mapsto \#\{\gamma\in\Gamma\colon d(x,\gamma y)\leq R\}$$ satisfies $N_\Gamma(R)<\infty$ for all $R>0$; moreover, it is related to the critical exponent via the formula $$\delta_\Gamma =\limsup_{R\to +\infty}\frac{\ln\bigl(N_\Gamma(R)\bigr)}{R}.$$ One goal of this article is to give a precise asymptotic estimate for the orbit counting function for a discrete [[**]{}rank one group]{} $\Gamma$ as in [@LinkHTS] (that is a group with the fixed point of a so-called rank one isometry of ${X}$ in its infinite limit set); for precise definitions we refer the reader to Section \[rankonegroups\]. Such a rank one group always contains a non-abelian free subgroup generated by two independent rank one elements, hence its critical exponent $\delta_\Gamma$ is strictly positive. Notice that our assumption on $\Gamma$ obviously imposes severe restrictions on the Hadamard space ${X}$ itself: It can neither be a higher rank symmetric space, a higher rank Euclidean building nor a product of Hadamard spaces. Using the Poincar[é]{} series from above, a remarkable $\Gamma$-equivariant family of measures $(\mu_x)_{x\in X}$ supported on the geometric boundary ${\partial{X}}$ of ${X}$ – a so-called conformal density – can be constructed in our very general setting (see [@MR0450547] and [@MR556586] for the original constructions in hyperbolic $n$-space). Let ${{\mathcal G}}$ denote the set of parametrized geodesic lines in ${X}$ endowed with the compact-open topology (which can be identified with the unit tangent bundle $S{X}$ if ${X}$ is a Riemannian manifold) and consider the action of ${\mathbb{R}}$ on ${{\mathcal G}}$ by reparametrization. This action induces a flow $g_\Gamma$ on the quotient space $\quotient{\Gamma}{{{\mathcal G}}}$. If ${X}$ is [[**]{}geodesically complete]{}, then thanks to the construction due to R. Ricks ([@Ricks Section 7]) – which uses the conformal density $(\mu_x)_{x\in{X}}$ described above – we obtain a $g_\Gamma$-invariant Radon measure $m_\Gamma$ on $\quotient{\Gamma}{{{\mathcal G}}}$. This possibly infinite measure will be called the [[**]{}Ricks’ Bowen-Margulis]{} measure, since it generalizes the classical Bowen-Margulis measure in the CAT$(-1)$-setting. If $\Gamma$ is divergent, then according to Theorem 10.2 in [@LinkHTS] the dynamical system $( \quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is conservative and ergodic. We also want to mention here that if ${X}$ is a Hadamard [[**]{}manifold]{}, then Ricks’ Bowen-Margulis measure $m_\Gamma$ is equal to Knieper’s measure first introduced in Section 2 of [@MR1652924] for cocompact groups $\Gamma$ (and which was used in [@LinkPicaud] for arbitrary rank one groups). In the cocompact case Knieper’s work further implies that the Ricks’ Bowen-Margulis measure is the unique measure of maximal entropy on the unit tangent bundle of the compact quotient $\quotient{\Gamma}{{X}}$ (see again Section 2 in [@MR1652924]). In this article we will first address the question under which hypotheses the dynamical system $( \quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is mixing. We remark that in our very general setting we cannot hope to get mixing without further restrictions on the group $\Gamma$: F. 
Dal’Bo ([@MR1779902 Theorem A]) showed that even in the special case of a CAT$(-1)$-Hadamard [[**]{}manifold]{} ${X}$, the dynamical system $( \quotient{\Gamma}{S{X}}, g_\Gamma, m_\Gamma)$ with the classical Bowen-Margulis measure $m_\Gamma$ is [[**]{}not]{} mixing if the length spectrum of $\Gamma$ is arithmetic (that is if the set of lengths of closed geodesics in $\quotient{\Gamma}{{X}}$ is a discrete subgroup of ${\mathbb{R}}$). However, we obtain the best possible result:

**Theorem A.** Let $X$ be a proper, geodesically complete Hadamard space and $\Gamma<{\mbox{Is}}(X)$ a discrete, divergent rank one group. Then with respect to Ricks’ Bowen-Margulis measure the geodesic flow on $\quotient{\Gamma}{{{\mathcal G}}}$ is mixing or the length spectrum of $\Gamma$ is arithmetic.

Notice that in the CAT$(0)$-setting Theorem A was already proved by M. Babillot ([@MR1910932 Theorem 2]) in the special case when ${X}$ is a [[**]{}manifold]{} and $\Gamma<{\mbox{Is}}({X})$ is cocompact; moreover, in this case the second alternative cannot occur, that is the length spectrum of $\Gamma$ [[**]{}cannot]{} be arithmetic. It was then generalized by R. Ricks ([@Ricks Theorem 4]) to non-Riemannian proper Hadamard spaces ${X}$ and discrete rank one groups $\Gamma<{\mbox{Is}}({X})$ with [[**]{}finite]{} Ricks’ Bowen-Margulis measure. Under the additional hypothesis that the limit set of $\Gamma$ is equal to the whole geometric boundary ${\partial{X}}$ of ${X}$, Ricks also proved that the length spectrum of $\Gamma$ can only be arithmetic if ${X}$ is isometric to a tree with all edge lengths in $c{\mathbb{N}}$ for some $c>0$. Here we allow both infinite Ricks’ Bowen-Margulis measure and limit sets that are proper subsets of ${\partial{X}}$. Let us mention that the restriction to divergent groups is quite reasonable: If the measure $m_\Gamma$ is infinite, then the mixing property of $( \quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma)$ only states that for all Borel sets $A$, $B\subset \quotient{\Gamma}{{{\mathcal G}}}$ with $m_\Gamma(A)$, $m_\Gamma(B)$ finite we have $$\lim_{t\to\pm\infty} m_\Gamma (A\cap g_\Gamma^{t}B) =0.$$ This condition is very weak and obviously implies neither conservativity nor ergodicity. Actually it is easily seen to hold true when $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is dissipative, which – according to Theorem 10.2 in [@LinkHTS] – is equivalent to the fact that $\Gamma$ is convergent. In the second part of the article we use the mixing property in the case of finite Bowen-Margulis measure to deduce an equidistribution result for $\Gamma$-orbit points in the vein of Roblin’s results for CAT$(-1)$-spaces ([@MR2057305 Théorème 4.1.1]):

**Theorem B.** Let $X$ be a proper, geodesically complete Hadamard space and $\Gamma<{\mbox{Is}}(X)$ a discrete rank one group with non-arithmetic length spectrum and finite Ricks’ Bowen-Margulis measure $m_\Gamma$. Let $f$ be a continuous function from ${\overline{{X}}}\times {\overline{{X}}}$ to ${\mathbb{R}}$, and $x$, $y\in{X}$.
Then $$\lim_{T\to\infty} \Bigl( \delta_\Gamma\cdot {\mathrm{e}}^{-\delta_\Gamma T} \sum_{\begin{smallmatrix}{\scriptstyle\gamma\in\Gamma}\\{\scriptstyle d(x,\gamma y )\le T}\end{smallmatrix}} f(\gamma y,\gamma^{-1} x)\Bigr)=\frac1{\Vert m_\Gamma \Vert} \int_{{\partial{X}}\times{\partial{X}}} f(\xi,\eta){\mathrm{d}}\mu_x(\xi) {\mathrm{d}}\mu_y(\eta).$$

Finally, from the equidistribution result Theorem B and its proof we get the following asymptotic estimates for the orbit counting function introduced in (\[orbitcountdef\]):

**Theorem C.** Let $X$ be a proper, geodesically complete Hadamard space, $x$, $y\in{X}$ and $\Gamma<{\mbox{Is}}(X)$ a discrete rank one group.

- If $\Gamma$ is divergent with non-arithmetic length spectrum and finite Ricks’ Bowen-Margulis measure $m_\Gamma$, then $$\lim_{R\to\infty} \delta_\Gamma\cdot {\mathrm{e}}^{-\delta_\Gamma R} \#\{\gamma\in\Gamma\colon d(x,\gamma y)\leq R\} = \mu_x({\partial{X}})\mu_y({\partial{X}})/ \Vert m_\Gamma\Vert .$$

- If $\Gamma$ is divergent with non-arithmetic length spectrum and infinite Ricks’ Bowen-Margulis measure, then $$\displaystyle \lim_{R\to\infty} {\mathrm{e}}^{-\delta_\Gamma R} \#\{\gamma\in\Gamma\colon d(x,\gamma y)\leq R\} =0.$$

- If $\Gamma$ is convergent, then $\quad \displaystyle \lim_{R\to\infty} {\mathrm{e}}^{-\delta_\Gamma R} \#\{\gamma\in\Gamma\colon d(x,\gamma y)\leq R\}=0$.

Notice that in work in progress with Jean-Claude Picaud we apply the equidistribution result Theorem B above to get asymptotic estimates for the number of closed geodesics modulo free homotopy in $\quotient{\Gamma}{{X}}$ which are much more general and much more precise than the ones given in [@MR2290453]. The paper is organized as follows: Section \[prelim\] fixes some notation and recalls basic facts concerning Hadamard spaces and rank one geodesics. In Section \[rankonegroups\] we introduce the notions of rank one isometry and ${\mbox{Is}}({X})$-recurrence and state some important facts. We also recall the definition of a rank one group and give the weakest condition which ensures that a discrete group $\Gamma<{\mbox{Is}}({X})$ is rank one. In Section \[geodcurrentmeasures\] we introduce the notion of geodesic current and describe Ricks’ construction of a geodesic flow invariant measure associated to such a geodesic current first on the quotient $\quotient{\Gamma}{[{{\mathcal G}}]}$ of parallel classes of parametrized geodesic lines and finally on the quotient $\quotient{\Gamma}{{{\mathcal G}}}$ of parametrized geodesic lines. Moreover, we recall from [@LinkHTS] a few results about the corresponding dynamical systems. Section \[Mixing\] is devoted to the proof of Theorem A, which follows M. Babillot’s strategy ([@MR1910932 Section 2.2]) and uses cross-ratios of quadrilaterals similar to the ones introduced by R. Ricks ([@Ricks Section 10]). In Section \[shadowconecorridor\] we introduce the notions of shadows, cones and corridors and state some important properties that are needed in the proof of Theorem B. Section \[RicksBMestimates\] gives estimates for the so-called Ricks’ Bowen-Margulis measure, which is the Ricks’ measure associated to the quasi-product geodesic current coming from a conformal density. In Section \[equidistribution\] we prove Theorem B, and Section \[orbitcounting\] finally deals with the orbit counting function and the proof of Theorem C.

Preliminaries on Hadamard spaces {#prelim}
================================

The purpose of this section is to introduce terminology and notation and to summarize basic results about Hadamard spaces.
Most of the material can be found in [@MR1377265] and [@MR1744486] (see also [@MR656659] and [@MR823981] in the special case of Hadamard manifolds and [@Ricks] for more recent results). Let $({X},d)$ be a metric space. For $y\in {X}$ and $r>0$ we will denote $B_y(r)\subset{X}$ the open ball of radius $r$ centered at $y\in{X}$. A [[**]{}geodesic]{} is an isometric map $\sigma$ from a closed interval $I\subset{\mathbb{R}}$ or $I={\mathbb{R}}$ to ${X}$. For more precision we use the term [[**]{}geodesic ray]{} if $I=[0,\infty)$ and [[**]{}geodesic line]{} if $I={\mathbb{R}}$. We will deal here with [[**]{}Hadamard spaces]{} $({X},d)$, that is complete metric spaces in which for any two points $x,y\in{X}$ there exists a geodesic $\sigma_{x,y}$ joining $x$ to $y$ (that is a geodesic $\sigma=\sigma_{x,y}:[0,d(x,y)]\to {X}$ with $\sigma(0)=x$ and $\sigma(d(x,y))=y$) and in which all geodesic triangles satisfy the CAT$(0)$-inequality. This implies in particular that ${X}$ is simply connected and that the geodesic joining an arbitrary pair of points in ${X}$ is unique. Notice however that in the non-Riemannian setting completeness of ${X}$ does not imply that every geodesic can be extended to a geodesic line, so ${X}$ need not be geodesically complete. The geometric boundary ${\partial{X}}$ of ${X}$ is the set of equivalence classes of asymptotic geodesic rays endowed with the cone topology (see for example Chapter II in [@MR1377265]). We remark that for all $x\in{X}$ and all $ \xi\in{\partial{X}}$ there exists a unique geodesic ray $\sigma_{x,\xi}$ with origin $x=\sigma_{x,\xi}(0)$ representing $\xi$. Given two geodesics $\sigma_1:[0,T_1]\to{X}$, $\sigma_2:[0,T_2]\to{X}$ with $\sigma_1(0)=\sigma_2(0)=:x$ the [[**]{}Alexandrov angle]{} $\angle(\sigma_1,\sigma_2)$ is defined by $$\angle (\sigma_1,\sigma_2):=\lim_{t_1,t_2\to 0} \angle_{\overline{x}}\bigl(\overline{\sigma_1(t_1)},\overline{\sigma_2(t_2)}\bigr),$$ where the angle on the right-hand side denotes the angle of a comparison triangle in the Euclidean plane of the triangle with vertices $\sigma_1(t_1)$, $x$ and $\sigma_2(t_2)$ (compare [@MR1744486 Proposition II.3.1]). By definition, every Alexandrov angle has values in $[0,\pi]$. For $x\in{X}$, $y,z\in{\overline{{X}}}\setminus\{x\}$ the angle $\angle_x(y,z)$ is then defined by $$\label{Alexandrovangle} \angle_x(y,z):=\angle (\sigma_{x,y},\sigma_{x,z}).$$ From here on we will require that ${X}$ is proper; in this case the geometric boundary ${\partial{X}}$ is compact and the space ${X}$ is a dense and open subset of the compact space ${\overline{{X}}}:={X}\cup{\partial{X}}$. Moreover, the action of the isometry group ${\mbox{Is}}({X})$ on ${X}$ naturally extends to an action by homeomorphisms on the geometric boundary. If $x, y\in {X}$, $\xi\in{\partial{X}}$ and $\sigma$ is a geodesic ray in the class of $\xi$, we set $$\label{buseman} {{\mathcal B}}_{\xi}(x, y)\,:= \lim_{s\to\infty}\big( d(x,\sigma(s))-d(y,\sigma(s))\big).$$ This number exists, is independent of the chosen ray $\sigma$, and the function $${{\mathcal B}}_{\xi}(\cdot , y): {X}\to {\mathbb{R}},\quad x \mapsto {{\mathcal B}}_{\xi}(x, y)$$ is called the [[**]{}Busemann function]{} centered at $\xi$ based at $y$ (see also Chapter II in [@MR1377265]). 
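As an elementary illustration of the last definition (a standard example, not needed in the sequel), consider the flat model ${X}={\mathbb{R}}^n$: if $\xi\in{\partial{X}}$ is represented by the ray $\sigma(s)=z+su$ with $u$ a unit vector, then $d(x,\sigma(s))=s-\langle x-z,u\rangle+O(1/s)$ as $s\to\infty$, and therefore $${{\mathcal B}}_{\xi}(x, y)=\langle y-x,u\rangle\quad\text{for all }\ x,y\in{\mathbb{R}}^n.$$ In this model both the invariance under isometries and the cocycle identity stated below are immediate from the linearity of this expression.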
Obviously we have $${{\mathcal B}}_{g\cdot\xi}(g{\!\cdot\!}x,g{\!\cdot\!}y) = {{\mathcal B}}_{\xi}(x, y)\quad\text{for all }\ x,y\in{X}\quad\text{and }\ g\in{\mbox{Is}}({X}),$$ and the cocycle identity $$\label{cocycle} {{\mathcal B}}_{\xi}(x, z)={{\mathcal B}}_{\xi}(x, y)+{{\mathcal B}}_{\xi}(y,z)$$ holds for all $x,y,z\in{X}$. Since ${X}$ is non-Riemannian in general, we consider (as a substitute of the unit tangent bundle $S{X}$) the set of parametrized geodesic lines in ${X}$ which we will denote ${{\mathcal G}}$. We endow this set with the metric $d_1$ given by $$\label{metriconSX} d_1(u,v):=\sup \{ {\mathrm{e}}^{-|t|} d\bigl(v(t), u(t)\bigr) \colon t\in{\mathbb{R}}\}\ \mbox{ for} \ u,v\in {{\mathcal G}};$$ this metric induces the compact-open topology, and every isometry of ${X}$ naturally extends to an isometry of the metric space $({{\mathcal G}},d_1)$. Moreover, there is a natural map $p:{{\mathcal G}}\to{X}$ defined as follows: To a geodesic line $v:{\mathbb{R}}\to {X}$ in ${{\mathcal G}}$ we assign its origin $pv:=v(0)\in{X}$. Notice that $p$ is proper, $1$-Lipschitz and ${\mbox{Is}}({X})$-equivariant; if ${X}$ is geodesically complete, then $p$ is surjective. For a geodesic line $v\in {{\mathcal G}}$ we denote its extremities $v^-:=v(-\infty)\in{\partial{X}}$ and $v^+:=v(+\infty)\in{\partial{X}}$ the negative and positive end point of $v$; in particular, we can define the end point map $${\partial_{\infty}}:{{\mathcal G}}\to {\partial{X}}\times{\partial{X}},\quad v\mapsto (v^-,v^+).$$ For $v\in{{\mathcal G}}$ we define the parametrized geodesic $-v\in{{\mathcal G}}$ by $$(-v)(t):=v(-t)\quad\text{for all}\quad t\in{\mathbb{R}}.$$ We say that a point $\xi\in{\partial{X}}$ can be joined to $\eta\in{\partial{X}}$ by a geodesic $v\in {{\mathcal G}}$ if $v^-=\xi$ and $v^+=\eta$. Obviously the set of pairs $(\xi,\eta)\in{\partial{X}}\times{\partial{X}}$ such that $\xi$ and $\eta$ can be joined by a geodesic coincides with $ {\partial_{\infty}}{{\mathcal G}}$, the image of ${{\mathcal G}}$ under the end point map ${\partial_{\infty}}$. It is well-known that if ${X}$ is CAT$(-1)$, then any pair of distinct boundary points $(\xi,\eta)$ belongs to ${\partial_{\infty}}{{\mathcal G}}$, and the geodesic joining $\xi$ to $\eta$ is unique up to reparametrization. In general however, the set ${\partial_{\infty}}{{\mathcal G}}$ is much smaller compared to ${\partial{X}}\times{\partial{X}}$ minus the diagonal due to the possible existence of flat subspaces in ${X}$. For $(\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}$ we denote by $$\label{joiningflat} (\xi\eta):=p\bigl(\{ v\in {{\mathcal G}}\colon v^-=\xi,\ v^+=\eta\}\bigr)=p\circ {\partial_{\infty}}^{-1}(\xi,\eta)$$ the subset of points in ${X}$ which lie on a geodesic line joining $\xi$ to $\eta$. It is well-known that $(\xi\eta)=(\eta\xi)\subset {X}$ is a closed and convex subset of ${X}$ which is isometric to a product $C_{(\xi\eta)}\times{\mathbb{R}}$, where $C_{(\xi\eta)}=C_{(\eta\xi)}$ is again a closed and convex set. For $x\in{X}$ and $(\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}$ we denote $$\label{orthogonalproj} v= v(x;\xi,\eta)\in {{\mathcal G}}$$ the unique parametrized geodesic line satisfying the conditions $v\in{\partial_{\infty}}^{-1}(\xi,\eta)$ and $d\bigl(x, v(0)\bigr)=d\bigl(x,(\xi\eta)\bigr)$. Notice that its origin $pv=v(0)$ is precisely the orthogonal projection of $x$ onto the closed and convex subset $C_{(\xi\eta)}$.
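To make the remark about the size of ${\partial_{\infty}}{{\mathcal G}}$ concrete (again a standard example, not used later): for the Euclidean plane ${X}={\mathbb{R}}^2$ two boundary points can be joined by a geodesic line if and only if they are antipodal directions, so ${\partial_{\infty}}{{\mathcal G}}$ is only a one-parameter family, whereas ${\partial{X}}\times{\partial{X}}$ minus the diagonal is two-dimensional. Moreover, for an antipodal pair $(\xi,\eta)$ every line parallel to a given one joining $\xi$ to $\eta$ again joins $\xi$ to $\eta$, so $(\xi\eta)={\mathbb{R}}^2$ and $C_{(\xi\eta)}$ is isometric to ${\mathbb{R}}$.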
Obviously we have $$v(x;\eta,\xi)=-v(x;\xi,\eta)\quad\text{and}\quad \gamma v( x;\xi,\eta)=v(\gamma x;\gamma \xi,\gamma \eta)\quad\text{for all}\ \ \gamma\in{\mbox{Is}}({X}).$$ In order to describe the sets $(\xi\eta)$ and $C_{(\xi\eta)}$ more precisely and for later use we introduce as in [@Ricks Definition 5.4] for $x\in{X}$ the so-called [[**]{}Hopf parametrization]{} map $$\label{HopfPar} {H}_x: {{\mathcal G}}\to {\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}},\quad v\mapsto \bigl(v^-,v^+,{{\mathcal B}}_{v^-}(v(0),x)\bigr)$$ of ${{\mathcal G}}$ with respect to $x$. We remark that compared to [@Ricks Definition 5.4] and (5) in [@LinkHTS] we changed the sign in the last coordinate in order to make (\[geodflowHopf\]) below consistent. It is immediate that for a CAT$(-1)$-space ${X}$ this map is a homeomorphism; in general it is only continuous and surjective. Moreover, it depends on the point $x\in{X}$ as follows: If $y\in {X}$, $v\in {{\mathcal G}}$ and ${H}_x(v)=(\xi,\eta,s)$, then $${H}_y(v)=\bigl(\xi,\eta,s+{{\mathcal B}}_{\xi}(x,y)\bigr)$$ by the cocycle identity (\[cocycle\]) for the Busemann function (compare also [@MR1207579 Section 3]). The Hopf parametrization map allows to define an equivalence relation $\sim$ on ${{\mathcal G}}$ as follows: If $u,v\in {{\mathcal G}}$, then $u\sim v$ if and only if ${H}_x(u)={H}_x(v)$. Notice that this definition does not depend on the choice of $x\in{X}$ and that every point $(\xi,\eta,s)\in{\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}}$ uniquely determines an equivalence class $[v]$ with $v\in{{\mathcal G}}$. The [[**]{}width]{} of $v\in{{\mathcal G}}$ is defined by $$\label{widthdef} {\mathrm{width}}(v):= \sup\{ d\bigl(u(0),w(0)\bigr)\colon u,w\in [v]\}={\mathrm{diam}}\left(C_{(v^-v^+)}\right).$$ Notice that if ${X}$ is CAT$(-1)$ then for all $v\in{{\mathcal G}}$ we have $[v]=\{v\}$ and hence ${\mathrm{width}}(v)=0$; in general, if $v({\mathbb{R}})$ is contained in an isometric image of a Euclidean plane, then the width of $v$ is infinite. This motivates the following definitions: A geodesic line $v\in {{\mathcal G}}$ is called [[**]{}rank one]{} if its width is finite; it is said to have zero width if ${\mathrm{width}}(v)=0$. In the sequel we will use as in [@Ricks] the notation $$\begin{aligned} \mathcal{R}&:=\{v\in {{\mathcal G}}\colon v\ \text{is rank one}\}\quad\text{respectively}\\ \mathcal{Z}&:=\{v\in {{\mathcal G}}\colon v\ \text{is rank one of zero width}\}.\end{aligned}$$ We remark that the existence of a rank one geodesic imposes severe restrictions on the Hadamard space ${X}$. For example, ${X}$ can neither be a symmetric space or Euclidean building of higher rank nor a product of Hadamard spaces. The following important lemma states that even though we cannot join any two distinct points in the geometric boundary ${\partial{X}}$ of the Hadamard space ${X}$ by a geodesic in ${X}$, given a rank one geodesic we can at least join all points in a neighborhood of its end points by a geodesic in ${X}$. More precisely, we have the following result which is a reformulation of Lemma III.3.1 in [@MR1377265]: \[joinrankone\] Let $v\in\mathcal{R}$ be a rank one geodesic and $c>{\mathrm{width}}(v)$. Then there exist open disjoint neighborhoods $U^-$ of $\,v^-$ and $U^+$ of $\,v^+$ in ${\overline{{X}}}$ with the following properties: If $\xi\in U^-$ and $\eta \in U^+$ then there exists a rank one geodesic joining $\xi$ and $\eta$. 
For any such geodesic $w\in\mathcal{R}$ we have $d(w(t), v(0))< c$ for some $t\in{\mathbb{R}}$ and ${\mathrm{width}}(w)\le 2c$. This lemma implies that the set ${{\mathcal R}}$ is open in ${{\mathcal G}}$; we emphasize that ${{\mathcal Z}}$ in general need not be an open subset of ${{\mathcal G}}$: In every open neighborhood of a [[**]{}zero width]{} rank one geodesic there may exist a rank one geodesic of arbitrarily small but strictly positive width. Let us now get back to the Hopf parametrization map defined in (\[HopfPar\]): As stated in [@Ricks Proposition 5.10] the ${\mbox{Is}}({X})$-action on ${{\mathcal G}}$ descends to an action on ${\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}}={H}_x({{\mathcal G}})$ by homeomorphisms via $$\gamma (\xi,\eta, s):=\bigl(\gamma \xi,\gamma \eta, s+{{\mathcal B}}_{\gamma\xi}(\gamma x,x)\bigr)\quad\text{for}\quad \gamma\in{\mbox{Is}}({X}).$$ Moreover, the action of ${\mbox{Is}}({X})$ is well-defined on the set of equivalence classes $[{{\mathcal G}}]$ of elements in ${{\mathcal G}}$, and the (well-defined) map $$\label{equivHopf} [{{\mathcal G}}]\to {\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}},\quad [v]\mapsto {H}_x(v)$$ is an ${\mbox{Is}}({X})$-equivariant homeomorphism. For convenience we will frequently identify $ {\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}}$ with $[{{\mathcal G}}]$. We also remark that the end point map ${\partial_{\infty}}:{{\mathcal G}}\to {\partial{X}}\times {\partial{X}}$ induces a well-defined map $[{{\mathcal G}}]\to{\partial{X}}\times{\partial{X}}$ which we will also denote ${\partial_{\infty}}$. As in Definition 5.4 of [@Ricks] we will say that a sequence $(v_n)\subset{{\mathcal G}}$ [[**]{}converges weakly]{} to $v\in {{\mathcal G}}$ if and only if $$\label{defweakconvergence} v_n^-\to v^-,\quad v_n^+\to v^+\quad\text{and }\ {{\mathcal B}}_{v_n^-}\bigl(v_n(0),x\bigr)\to {{\mathcal B}}_{v^-}\bigl(v(0),x\bigr);$$ notice that this definition is independent of the choice of $x\in{X}$. Obviously, weak convergence $v_n\to v$ is equivalent to the convergence $[v_n]\to [v]$ in $[{{\mathcal G}}]$, and $v_n\to v$ in ${{\mathcal G}}$ always implies $[v_n]\to [v]$ in $[{{\mathcal G}}]$. We will also need the following result due to R. Ricks, which implies that the restriction of the Hopf parametrization map (\[HopfPar\]) to the subset $\mathcal{R}$ is proper: \[weakimpliesstrong\] If a sequence $(v_n)\subset{{\mathcal G}}$ converges weakly to $v\in{{\mathcal R}}$, then some subsequence of $(v_n)$ converges to some $u\in{{\mathcal G}}$ with $u\sim v$. The topological space ${{\mathcal G}}$ can be endowed with the [[**]{}geodesic flow]{} $(g^t)_{t\in{\mathbb{R}}}$ which is naturally defined by reparametrization of $v\in {{\mathcal G}}$.
In particular we have $$(g^t v)(0)=v(t) \quad\text{for all } \ v\in {{\mathcal G}}\quad\text{and all }\ t\in{\mathbb{R}}.$$ The geodesic flow induces a flow on the set of equivalence classes $[{{\mathcal G}}]$ which we will also denote $(g^t)_{t\in{\mathbb{R}}}$; via the ${\mbox{Is}}({X})$-equivariant homeomorphism $[{{\mathcal G}}]\to{\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}}\,$ the action of the geodesic flow $(g^t)_{t\in{\mathbb{R}}}$ on $[{{\mathcal G}}]$ is equivalent to the translation action on the last factor of ${\partial_{\infty}}{{\mathcal G}}\times {\mathbb{R}}$ given by $$\label{geodflowHopf} g^t (\xi,\eta,s):=(\xi,\eta, s+t).$$

Rank one isometries and rank one groups {#rankonegroups}
=======================================

As in the previous section we let $({X},d)$ be a proper Hadamard space and denote ${\mbox{Is}}({X})$ the isometry group of ${X}$. \[hypaxiso\] An isometry $\gamma\in{\mbox{Is}}({X})$ is called [[**]{}axial]{} if there exist a constant $\ell=\ell(\gamma)>0$ and a geodesic $v\in {{\mathcal G}}$ such that $\gamma v=g^{\ell} v$. We call $\ell(\gamma)$ the [[**]{}translation length]{} of $\gamma$, and $v$ an [[**]{}invariant geodesic]{} of $\gamma$. The boundary point $\gamma^+:=v^+$ (which is independent of the chosen invariant geodesic $v$) is called the [[**]{}attractive fixed point]{}, and $\gamma^-:=v^-$ the [[**]{}repulsive fixed point]{} of $\gamma$. An axial isometry $h$ is called [[**]{}rank one]{} if one (and hence any) invariant geodesic of $h$ belongs to ${{\mathcal R}}$; the [[**]{}width]{} of $h$ is then defined as the width of an arbitrary invariant geodesic of $h$. Notice that if $\gamma\in{\mbox{Is}}({X})$ is axial, then ${\partial_{\infty}}^{-1}(\gamma^-,\gamma^+)\subset{{\mathcal G}}$ is the set of parametrized invariant geodesics of $\gamma$, and every axial isometry $\widetilde\gamma$ commuting with $\gamma$ satisfies $p\circ {\partial_{\infty}}^{-1}(\widetilde\gamma^-,\widetilde\gamma^+)=p\circ {\partial_{\infty}}^{-1}(\gamma^-,\gamma^+)$. If $h$ is rank one, then the fixed point set of $h$ equals $\{h^-, h^+\}$, and every axial isometry $g$ commuting with $h$ either satisfies $g^k=h$ or $g=h^k$ for some $k\in{\mathbb{Z}}\setminus\{0\}$. The following important lemma describes the north-south dynamics of rank one isometries: \[dynrankone\]  Let $h$ be a rank one isometry. Then

1. every point $\xi\in{\partial{X}}\setminus\{h^+\}$ can be joined to $h^+$ by a geodesic, and all these geodesics are rank one,

2. given neighborhoods $U^-$ of $h^-$ and $U^+$ of $h^+$ in ${\overline{{X}}}$ there exists $N\in{\mathbb{N}}$ such that $\ h^{-n}({\overline{{X}}}\setminus U^+)\subset U^-$ and $h^{n}({\overline{{X}}}\setminus U^-)\subset U^+$ for all $n\ge N$.

We next prepare for an extension of part (1) of the lemma above, which replaces the fixed point $h^+$ of the rank one isometry $h$ by the end point of a certain geodesic: \[weakstrongrecurrencedef\] Let $G<{\mbox{Is}}({X})$ be any subgroup. An element $v\in{{\mathcal G}}$ is said to [[**]{}(weakly) $G$-accumulate]{} on $u\in{{\mathcal G}}$ if there exist sequences $(g_n)\subset G$ and $(t_n)\nearrow \infty$ such that $g_n g^{t_n} v$ converges (weakly) to $u$ as $n\to\infty$; $v$ is said to be [[**]{}(weakly) $G$-recurrent]{} if $v$ (weakly) $G$-accumulates on $v$. Notice that if $v$ is an invariant geodesic of an axial isometry $\gamma\in{\mbox{Is}}({X})$, then $v$ is $\langle \gamma\rangle$-recurrent and hence in particular ${\mbox{Is}}({X})$-recurrent.
Moreover, if $v\in{{\mathcal G}}$ weakly $G$-accumulates on $u\in{{\mathcal R}}$, then by Lemma \[weakimpliesstrong\] $v$ $G$-accumulates on some element $w\sim u$. In particular, if $v\in{{\mathcal Z}}$ is weakly $G$-recurrent, then it is already $G$-recurrent. The following statements show the relevance of the previous notions. \[jointoweakrecurrent\] If $v\in{{\mathcal R}}$ is weakly ${\mbox{Is}}({X})$-recurrent then for every $\,\xi\in{\partial{X}}\setminus\{v^+\}$ there exists $w\in{{\mathcal R}}$ satisfying $${\mathrm{width}}(w)\le {\mathrm{width}}(v)\quad\text{and }\quad (w^-,w^+)=(\xi, v^+).$$ We will also need the following generalization of a statement originally due to G. Knieper in the manifold setting; recall the definition of the distance function $d_1$ from (\[metriconSX\]). \[KniepersProp\]  Let $u\in {\mathcal Z}$ be an ${\mbox{Is}}({X})$-recurrent rank one geodesic of zero width. Then for all $v\in {{\mathcal G}}$ with $v^+=u^+$ and ${{\mathcal B}}_{v^+}(v(0),u(0))=0$ we have $$\lim_{t\to\infty} d_1(g^t v, g^tu)=0.$$ We will now deal with discrete subgroups $\Gamma$ of the isometry group ${\mbox{Is}}({X})$ of ${X}$. The [[**]{}geometric limit set]{} ${L_\Gamma}$ of $\Gamma$ is defined by ${L_\Gamma}:=\overline{\Gamma\cdot x}\cap{\partial{X}},$ where $x\in{X}$ is an arbitrary point. If ${X}$ is a CAT$(-1)$-space, then a discrete group $\Gamma<{\mbox{Is}}({X})$ is called [[**]{}non-elementary]{} if its limit set $L_\Gamma$ is infinite. It is well-known that this implies that $\Gamma$ contains two axial isometries with disjoint fixed point sets (which are actually rank one of zero width as ${{\mathcal G}}={{\mathcal Z}}$ for any CAT$(-1)$-space). In the general setting this motivates the following definition: We say that two rank one isometries $g,h\in{\mbox{Is}}({X})$ are [[**]{}independent]{} if and only if $\{g^+,g^-\}\cap \{h^+,h^-\}=\emptyset$ (see for example Section 2 of [@MR2629900] and Section 2 in [@MR2585575]). Moreover, a group $\Gamma< {\mbox{Is}}({X})$ is called [[**]{}rank one]{} if $\Gamma$ contains a pair of independent rank one elements. Obviously, if ${X}$ is CAT$(-1)$ then every non-elementary discrete isometry group is rank one. In general however, the notion of rank one group seems very restrictive at first sight. Nevertheless we have the following weak hypothesis which ensures that a discrete group $\Gamma<{\mbox{Is}}({X})$ is rank one: \[inflimset\] If $\Gamma<{\mbox{Is}}({X})$ is a discrete subgroup with infinite limit set ${L_\Gamma}$ containing the positive end point $v^+$ of a weakly ${\mbox{Is}}({X})$-recurrent element $v\in{{\mathcal R}}$, then $\Gamma$ is a rank one group. Notice that the conclusion is obviously true when $v^+$ is a fixed point of a rank one isometry of ${X}$. We will now define an important subset of the limit set ${L_\Gamma}$ of $\Gamma$. For that we let $x$, $y\in{X}$ be arbitrary. A point $\xi\in{\partial{X}}$ is called a [[**]{}radial limit point]{} if there exist $c>0$ and sequences $(\gamma_n)\subset\Gamma$ and $(t_n)\nearrow\infty$ such that $$\label{radlimpoint} d\bigl(\gamma_n y, \sigma_{x,\xi}(t_n)\bigr)\le c\quad\text{for all }\ n\in{\mathbb{N}}.$$ Notice that by the triangle inequality this condition is independent of the choice of $x$, $y\in{X}$. The [[**]{}radial limit set]{} ${L_\Gamma^{\small{\mathrm{rad}}}}\subset{L_\Gamma}$ of $\Gamma$ is defined as the set of radial limit points.
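For instance (a standard observation, recorded here only for illustration), the attractive fixed point $\gamma^+$ of any axial element $\gamma\in\Gamma$ is a radial limit point: if $v$ is an invariant geodesic of $\gamma$ with translation length $\ell$ and we choose $x=y=v(0)$, then $\gamma^n y=(\gamma^n v)(0)=(g^{n\ell}v)(0)=v(n\ell)=\sigma_{x,\gamma^+}(n\ell)$, so (\[radlimpoint\]) holds with $t_n=n\ell$ and any $c>0$.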
We will further denote $$\label{definezerorec} {{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}:=\{v\in{{\mathcal Z}}\colon v\ \text{and } -v\ \text{are } \Gamma\text{-recurrent}\}$$ the set of zero width parametrized geodesics which are $\Gamma$-recurrent in both directions. Notice that if $v\in{{\mathcal Z}}$ is weakly $\Gamma$-recurrent, then it is already $\Gamma$-recurrent according to the remark below Definition \[weakstrongrecurrencedef\]. We will also need the following definition: \[posnegrec\] An element $v\in\quotient{\Gamma}{{{\mathcal G}}}$ is called [[**]{}positively and negatively recurrent]{} if it possesses a lift $\widetilde v\in{{\mathcal G}}$ such that both $\widetilde v$ and $-\widetilde v$ are $\Gamma$-recurrent.

Geodesic currents and Ricks’ measure {#geodcurrentmeasures}
====================================

In this section we want to describe the construction of Ricks’ measure from an arbitrary geodesic current on ${\partial_{\infty}}{{\mathcal R}}$. We will also recall the properties of the Ricks’ measure which are relevant for our purposes. Our main references here are [@Ricks Section 7] and [@LinkHTS Section 5]. From here on ${X}$ will always be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with $${\mathcal Z}_\Gamma:=\{v\in{{\mathcal Z}}\colon v^-,v^+\in{L_\Gamma}\}\ne \emptyset.$$ Notice that according to Proposition 1 in [@LinkHTS] the latter hypothesis is always satisfied when ${X}$ is geodesically complete. For later use we further fix a point ${{o}}\in{X}$. Recall that the support of a Borel measure $\nu$ on a topological space $Y$ is defined as the set $$\label{supportdef} {\mbox{supp}}(\nu)=\{y\in Y \colon \nu(U)>0\ \text{ for every open neighborhood}\ U \ \text{of}\ y\}.$$ We also recall that a set $A\subset Y$ is said to have full $\nu$-measure if $\nu(Y\setminus A)=0$. We start with two finite Borel measures $\mu_-$, $\mu_+$ on ${\partial{X}}$ with ${\mbox{supp}}(\mu_\pm)={L_\Gamma}$, and let $\overline\mu$ be a $\Gamma$-invariant Radon measure on ${\partial_{\infty}}{{\mathcal R}}$ which is absolutely continuous with respect to the product measure $(\mu_-\otimes \mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{\lower 0.4ex\hbox{$\scriptstyle{\partial_{\infty}}{{\mathcal R}}$}}$. Such $\overline\mu$ is called a [[**]{}quasi-product geodesic current]{} on ${\partial_{\infty}}{{\mathcal R}}$ (see for example Definition 5.2 of [@LinkHTS]). Following Ricks’ approach we can define a Radon measure $\overline m=\overline\mu\otimes \lambda $ on $[{{\mathcal R}}]\cong {\partial_{\infty}}{{\mathcal R}}\times {\mathbb{R}}$, where $\lambda$ denotes Lebesgue measure on ${\mathbb{R}}$. Now according to Lemma \[joinrankone\] $\Gamma$ acts properly on $[{{\mathcal R}}]\cong{\partial_{\infty}}{{\mathcal R}}\times {\mathbb{R}}$ which admits a proper metric.
Since the action is by homeomorphisms and preserves the Borel measure $\overline m=\overline\mu\otimes\lambda$, there is (see for instance [@RicksThesis Appendix A]) a unique Borel quotient measure $\overline m_\Gamma$ on $\quotient{\Gamma}{[ {{\mathcal R}}] }$ satisfying the characterizing property $$\label{chardesmeasure} \int_{\overline A} \widetilde h{\mathrm{d}}\overline m=\int_{\tiny\quotient{\Gamma}{[{{\mathcal R}}]}} \bigl( h\cdot f_{\overline A}\bigr) {\mathrm{d}}\overline m_\Gamma$$ for all Borel sets $ \overline A\subset [{{\mathcal R}}]$ and $\Gamma$-invariant Borel maps $ \widetilde h:[{{\mathcal R}}]\to [0,\infty]$ and $\widetilde f_{\overline A}:[{{\mathcal R}}]\to [0,\infty]$ defined by $\widetilde f_{\overline A}([v]):= \#\{\gamma\in\Gamma\colon \gamma [v]\in \overline A\}$ for $[v]\in[{{\mathcal R}}]$, and with $h$ and $f_{\overline A}$ the maps on $\quotient{\Gamma}{[{{\mathcal R}}]}$ induced from $\widetilde h$ and $\widetilde f_{\overline A}$. Our final goal is to construct from a weak Ricks’ measure $\overline m_\Gamma$ a geodesic flow invariant measure on $\quotient{\Gamma}{{{\mathcal G}}}$. So let us first remark that ${{\mathcal Z}}\subset{{\mathcal R}}$ is a Borel subset by semicontinuity of the width function (\[widthdef\]) (see Lemma 8.4 in [@Ricks]), and that ${H}_{{o}}{\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{\lower 0.4ex\hbox{$\scriptstyle{{\mathcal Z}}$}}:{{\mathcal Z}}\to{\partial_{\infty}}{{\mathcal Z}}\times{\mathbb{R}}\cong [{{\mathcal Z}}]$ is a homeomorphism onto its image; hence $ [{{\mathcal Z}}]\subset[{{\mathcal R}}]$ is also a Borel subset. So if $\quotient{\Gamma}{[{{\mathcal Z}}]}$ has positive mass with respect to the weak Ricks’ measure $\overline m_\Gamma$ we may define (as in [@Ricks Definition 8.12]) a geodesic flow and $\Gamma$-invariant measure $m^0$ on ${{\mathcal G}}$ by setting $$\label{defstrongRicks} m^0(E):= \overline m \bigl({H}_{{o}}(E\cap {{\mathcal Z}})\bigr)\quad\text{for any Borel set }\ E\subset{{\mathcal G}};$$ this measure $m^0$ then induces the [[**]{}Ricks’ measure]{} $m^0_\Gamma$ on $\quotient{\Gamma}{ {{\mathcal G}}}$. Notice that in general $\overline m_\Gamma (\quotient{\Gamma}{[{{\mathcal Z}}]})= 0\ $ is possible; obviously this is always the case when ${{\mathcal Z}}=\emptyset$. However, Theorem 6.7 and Corollary 2 in [@LinkHTS] immediately imply the following: \[currentisproduct\] Let ${X}$ be a proper Hadamard space, and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne \emptyset$ (which is always the case if ${X}$ is geodesically complete). Let $\mu_-$, $\mu_+$ be non-atomic, finite Borel measures on ${\partial{X}}$ with $\mu_\pm ({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_\pm({\partial{X}})$, and $\overline\mu\sim\bigl( \mu_-\otimes \mu_+\bigr){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{\lower 0.4ex\hbox{$\scriptstyle{\partial_{\infty}}{{\mathcal R}}$}}$ a quasi-product geodesic current. Then for the set ${{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$ defined in (\[definezerorec\]) we have $$(\mu_-\otimes\mu_+)({\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}})=(\mu_-\otimes\mu_+)({\partial_{\infty}}{{\mathcal R}}) =\mu_-({\partial{X}})\cdot\mu_+({\partial{X}}),$$ and in particular $\overline\mu\sim \mu_-\otimes\mu_+$. So in this case the Ricks’ measure $m^0_\Gamma$ is actually equal to the weak Ricks’ measure $\overline m_\Gamma$ used for its construction.
Moreover, for the measure $m$ on ${{\mathcal G}}$, from which the Ricks’ measure descends, we have the formula $$\label{measureformula} m(E)=\int_{{\partial_{\infty}}{{\mathcal Z}}} \lambda\bigl(p(E)\cap (\xi\eta)\bigr) {\mathrm{d}}\overline\mu(\xi,\eta),$$ where $\lambda$ again denotes Lebesgue measure, and $E\subset{{\mathcal G}}$ is an arbitrary Borel set. We further remark that if ${X}$ is a manifold, then the Ricks’ measure is also equal to Knieper’s measure $m^{\text{\scriptsize Kn}}_\Gamma$ associated to $\overline\mu$, which descends from $$m^{\text{\scriptsize Kn}}(E):= \int_{{\partial_{\infty}}{{\mathcal G}}} {\mbox{\rm vol}}_{(\xi\eta)}\bigl(p(E)\cap(\xi\eta)\bigr)\mathrm{d}\overline\mu(\xi,\eta)\quad\text{for any Borel set }\ E\subset {{\mathcal G}},$$ where ${\mbox{\rm vol}}_{(\xi\eta)}$ denotes the induced Riemannian volume element on the submanifold $(\xi\eta)\subset{X}$. From here on we will therefore denote the Ricks’ measure by $m_\Gamma$ instead of $m_\Gamma^0$. For later reference we want to summarize what we know from Theorem 7.4 and Lemma 7.5 in [@LinkHTS]. Before we can state the result we denote by ${\mathcal B}(R)\subset{{\mathcal G}}$ the set of all parametrized geodesics $v\in{{\mathcal G}}$ with origin $pv=v(0)\in B_R({{o}})$ and define $$\label{Deltadef} \Delta:=\sup \Big\{ \frac{\ln \overline\mu\bigl({\partial_{\infty}}{\mathcal B}(R)\bigr)}{R}\colon R>0\Big\}.$$

\[propertiesofRicks\] Let ${X}$, $\Gamma<{\mbox{Is}}({X})$, $\mu_-$, $\mu_+$ and $\overline\mu\,$ be as in Theorem \[currentisproduct\]. We further assume that the constant $\Delta$ defined via (\[Deltadef\]) is finite. Then the dynamical systems $({\partial{X}}\times{\partial{X}}, \Gamma,\mu_-\otimes\mu_+)$, $\bigl({\partial_{\infty}}{{\mathcal G}},\Gamma, \overline\mu\bigr)\,$ and $\bigl(\quotient{\Gamma}{{{\mathcal G}}},g_\Gamma, m_\Gamma\bigr)$ are ergodic.

We will also make use of the following argument, which is immediate by Fubini’s theorem:

\[useFubini\] Let ${X}$, $\Gamma<{\mbox{Is}}({X})$, $\mu_-$, $\mu_+$, $\overline\mu$ and $\,\Delta<\infty$ be as in Theorem \[propertiesofRicks\]. Then if $\Omega\subset\quotient{\Gamma}{{{\mathcal Z}}}$ is a subset of full $m_\Gamma$-measure and $\widetilde\Omega\subset{{\mathcal Z}}$ denotes the preimage of $\Omega$ under the projection map ${{\mathcal Z}}\to \quotient{\Gamma}{{{\mathcal Z}}}$, the sets $$\begin{aligned} E^-&:= \{\xi\in{\partial{X}}\colon (\xi,\eta')\in {\partial_{\infty}}\widetilde\Omega\ \text{ for }\ \mu_+\text{-almost every }\ \eta'\in{\partial{X}}\}\quad\text{and}\\ E^+&:= \{\eta\in{\partial{X}}\colon (\xi',\eta)\in {\partial_{\infty}}\widetilde\Omega\ \text{ for }\ \mu_-\text{-almost every }\ \xi'\in{\partial{X}}\}\end{aligned}$$ satisfy $\,\mu_-(E^-)=\mu_-({\partial{X}})\ $ and $\,\mu_+(E^+)=\mu_+({\partial{X}})$.

Mixing of the Ricks’ measure {#Mixing}
============================

Let ${X}$ be a proper Hadamard space as before, and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne \emptyset$. Notice that if ${X}$ is geodesically complete, then according to Proposition 1 in [@LinkHTS] the latter condition is automatically satisfied. We further fix a point ${{o}}\in{X}$. From here on we will assume that $\mu_-$, $\mu_+$ are non-atomic, finite Borel measures on ${\partial{X}}$ with $\mu_\pm ({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_\pm({\partial{X}})$.
We will further require that for the quasi-product geodesic current $\overline\mu\sim \bigl(\mu_-\otimes \mu_+\bigr)\big|_{{\partial_{\infty}}{{\mathcal R}}}$ on ${\partial_{\infty}}{{\mathcal R}}$ the constant $\Delta$ defined in (\[Deltadef\]) is finite. From Theorem \[currentisproduct\] and Definition \[posnegrec\] we immediately get that the set $$ \{ u\in\quotient{\Gamma}{{{\mathcal G}}} \colon u\ \text{is positively and negatively recurrent}\}$$ has full $m_\Gamma$-measure (which is equivalent to conservativity of the dynamical system $\bigl({\partial_{\infty}}{{\mathcal G}},g_\Gamma, \overline\mu\bigr)$). Moreover, according to Theorem \[propertiesofRicks\] the dynamical system $\bigl({\partial_{\infty}}{{\mathcal G}},\Gamma, \overline\mu\bigr)$ is ergodic and we may apply Corollary \[useFubini\]. Our proof of mixing will closely follow M. Babillot’s idea from [@MR1910932]. However, as she only gives the proof for cocompact rank one isometry groups of Hadamard **manifolds**, for the convenience of the reader we want to give a detailed proof in our more general setting, which includes arbitrary discrete rank one isometry groups of non-Riemannian Hadamard spaces. We also emphasize that her set ${{\mathcal R}}$ in [@MR1910932] is defined as the set of unit tangent vectors $v\in S{X}\cong{{\mathcal G}}$ which do not admit a parallel perpendicular Jacobi field; this is in general a proper open subset of our set ${{\mathcal R}}$ (which was defined as the set of parametrized geodesic lines with finite width), and it is contained in ${{\mathcal Z}}$. In particular, her Proposition-Definition below [@MR1910932 Lemma 2] is not true when considering our set ${{\mathcal R}}$ instead of hers. We therefore have to work on the set ${{\mathcal Z}}$ (which is not open in ${{\mathcal R}}$) and use – up to a constant factor – the cross-ratio introduced by R. Ricks in [@Ricks Definition 10.2] instead of Babillot’s. From the Busemann function introduced in (\[buseman\]) we first define for $(\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}$ the **Gromov product** of $(\xi,\eta)$ with respect to $y\in {X}$ via $$\label{GromovProd} {{Gr}}_y(\xi,\eta)=\frac12\bigl({{\mathcal B}}_\xi(y,z)+{{\mathcal B}}_\eta(y,z)\bigr), $$ where $z\in (\xi\eta)$ is an arbitrary point on a geodesic line joining $\xi$ and $\eta$. It is related to R. Ricks’ definition following [@Ricks Lemma 5.1] via the formula $ {{Gr}}_y(\xi,\eta)=-2 \beta_y(\xi,\eta)$ for all $(\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}$.
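Let us record two elementary consequences of the definition (\[GromovProd\]) which may help the reader's intuition; both facts will be used in the estimates (\[measureK+\]) below. Since Busemann functions are $1$-Lipschitz and the value of the Gromov product does not depend on the choice of $z\in(\xi\eta)$, we get $$0\le {{Gr}}_y(\xi,\eta)\le d\bigl(y,(\xi\eta)\bigr)\quad\text{for all }\ (\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}},\ y\in{X}:$$ for the upper bound one chooses $z$ as a point of $(\xi\eta)$ closest to $y$, while the lower bound follows from $d(y,\xi_t)+d(y,\eta_s)\ge d(\xi_t,\eta_s)=d(\xi_t,z)+d(z,\eta_s)$ for points $\xi_t$, $\eta_s$ on the line $(\xi\eta)$ converging to $\xi$ and $\eta$, respectively. In a simplicial tree both inequalities become the equality ${{Gr}}_y(\xi,\eta)=d\bigl(y,(\xi\eta)\bigr)$, because the rays from $y$ to $\xi$ and to $\eta$ both pass through the point of $(\xi\eta)$ closest to $y$.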
We then make the following

\[Quadrilateraldef\] A quadruple of points $(\xi_1,\xi_2,\xi_3,\xi_4)\in\bigl({\partial{X}}\bigr)^4$ is called a **quadrilateral** if there exist $v_{13}$, $v_{14}$, $v_{23}$, $v_{24} \in{{\mathcal R}}$ such that $${\partial_{\infty}}v_{ij} =(\xi_i,\xi_j)\quad\text{for all }\ (i,j)\in\{(1,3), (1,4), (2,3), (2,4)\}.$$ The set of all quadrilaterals is denoted by $\mathcal{Q}$, and we define $$\mathcal{Q}_\Gamma= \mathcal{Q}\cap \bigl({L_\Gamma}\bigr)^4.$$

\[Crossratiodef\] For a quadrilateral $(\xi,\xi',\eta,\eta')\in\mathcal{Q}$ we define its **cross-ratio** by $${{CR}}(\xi,\xi',\eta,\eta')={{Gr}}_{{o}}(\xi,\eta)+{{Gr}}_{{o}}(\xi',\eta')- {{Gr}}_{{o}}(\xi,\eta')-{{Gr}}_{{o}}(\xi',\eta).$$

Notice that our definition corresponds to Ricks’ via $${{CR}}(\xi,\xi',\eta,\eta')=- 2\mathrm{B}(\xi,\xi',\eta,\eta').$$ The properties of a cross-ratio listed in Proposition 10.5 of [@Ricks] are therefore satisfied for our cross-ratio ${{CR}}$. We further have the following: If $g\in{\mbox{Is}}({X})$ is axial, then its translation length $\ell(g)$ is given by $$\ell(g)={{CR}}(g^-,g^+,\xi,g\xi).$$ From this we immediately get the following

\[lengthsubsetcr\] The length spectrum $ \{\ell(\gamma)\colon \gamma\in\Gamma\}$ of $\Gamma$ is a subset of the cross-ratio spectrum ${{CR}}(\mathcal{Q}_\Gamma)$.

\[mixthm\] Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with non-arithmetic length spectrum and ${{\mathcal Z}}_\Gamma\ne\emptyset$. Let $\mu_-$, $\mu_+$ be non-atomic finite Borel measures on ${\partial{X}}$ with $\mu_\pm({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_\pm({\partial{X}})$, and $$\overline\mu\sim (\mu_-\otimes\mu_+)\big|_{{\partial_{\infty}}{{\mathcal R}}}$$ a quasi-product geodesic current defined on ${\partial_{\infty}}{{\mathcal R}}$ for which the constant $\Delta$ defined by (\[Deltadef\]) is finite. Let $m_\Gamma$ be the associated Ricks’ measure on $\quotient{\Gamma}{ {{\mathcal G}}}$. Then the dynamical system $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is mixing, that is, for all Borel sets $A,B\subset\quotient{\Gamma}{ {{\mathcal G}}}$ with $m_\Gamma(A)$ and $m_\Gamma(B)$ finite we have (with the abbreviation $\Vert m_\Gamma\Vert =m_\Gamma\bigl(\quotient{\Gamma}{ {{\mathcal G}}}\bigr)$) $$\lim_{t\to\pm \infty} m_\Gamma(A\cap g_\Gamma^{-t} B)=\left\{\begin{array} {cl}\displaystyle \frac{m_\Gamma(A)\cdot m_\Gamma(B)}{\Vert m_\Gamma\Vert} & \text{ if } \ m_\Gamma \text{ is finite},\\[3mm] 0 & \text{ if } \ m_\Gamma \text{ is infinite}.\end{array}\right.$$

We first remark that mixing is equivalent to the fact that for every square integrable function $\varphi\in{\mbox{\rm L}}^2(m_\Gamma)$ on $\quotient{\Gamma}{{{\mathcal G}}}$ the functions $\varphi\circ g_\Gamma^t$ converge weakly in ${\mbox{\rm L}}^2(m_\Gamma)$ to the constant $$\frac{1}{\Vert m_\Gamma\Vert }\int \varphi {\mathrm{d}}m_\Gamma$$ as $t\to\pm\infty$. Moreover, since the continuous functions with compact support are dense in ${\mbox{\rm L}}^2(m_\Gamma)$, it suffices to show that for every $f\in {\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$ $$f \circ g_\Gamma^t \to \frac{1}{\Vert m_\Gamma\Vert }\int f {\mathrm{d}}m_\Gamma$$ weakly in ${\mbox{\rm L}}^2(m_\Gamma)$ as $t\to\pm\infty$. We argue by contradiction and assume that $m_\Gamma $ is not mixing.
Then there exist a function $f\in {\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$ (without loss of generality we may assume $\int f{\mathrm{d}}m_\Gamma=0\,$ if $m_\Gamma$ is finite) and a sequence $(t_n)\nearrow \infty$ such that $f \circ g_\Gamma^{t_n}$ does not converge to $0$ weakly in ${\mbox{\rm L}}^2(m_\Gamma)$ as $n\to\infty$. By [@MR1910932 Lemma 1] there exist a sequence $(s_n)\nearrow\infty$ and a **non-constant** function $\Psi\in {\mbox{\rm L}}^2(m_\Gamma)$ such that $$f \circ g_\Gamma^{s_n} \to \Psi\quad\text{and }\ f \circ g_\Gamma^{-s_n} \to \Psi$$ weakly in ${\mbox{\rm L}}^2(m_\Gamma)$ as $n\to\infty$. Without loss of generality we may assume that $\Psi$ is defined on all of $\quotient{\Gamma}{{{\mathcal G}}}$. Let $\widetilde\Psi:{{\mathcal G}}\to{\mathbb{R}}$ denote the lift of $\Psi$ to ${{\mathcal G}}$ and smooth it along the flow by considering for $\tau>0$ the function $$\widetilde\Psi_\tau:{{\mathcal G}} \to{\mathbb{R}},\quad v\mapsto \int_0^\tau \widetilde \Psi(g^s v){\mathrm{d}}s.$$ For fixed, sufficiently small $\varepsilon>0$ the function $\widetilde\Psi_\varepsilon$ is still non-constant, and now there exists a set $E''\subset {\partial_{\infty}}{{\mathcal G}}$ of full $\overline\mu$-measure such that for all $ v\in{\partial_{\infty}}^{-1}E''$ the function $$h_{ v}:{\mathbb{R}}\to {\mathbb{R}},\quad t\mapsto \widetilde\Psi_\varepsilon(g^t v)$$ is continuous. Notice that according to Theorem \[currentisproduct\] we can assume $E''\subset {\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$, since ${\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}} $ has full $\overline\mu$-measure in ${\partial_{\infty}}{{\mathcal G}}$. To any such function we associate the set of its periods, which is a closed subgroup of ${\mathbb{R}}$; it only depends on $( v^-, v^+)\in E''$. This gives a map from $E''$ into the set of closed subgroups of ${\mathbb{R}}$ which is $\Gamma$-invariant, as $\widetilde\Psi_\varepsilon$ is. By ergodicity of $\overline\mu\,$ (Theorem \[propertiesofRicks\]) this map is constant $\overline\mu$-almost everywhere. Assume first that this constant image is the group ${\mathbb{R}}$. Then for $\overline\mu$-almost every $( v^-, v^+)\in E''$ every real number is a period of $h_v$ for some $v\in{\partial_{\infty}}^{-1}(v^-, v^+)$, which is only possible if $h_{ v}$ is independent of $t$. In this case $\widetilde\Psi_\varepsilon$ induces a $\Gamma$-invariant function on a subset $E'\subset E''\subset {\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$ of full $\overline\mu$-measure. Again by ergodicity of $\overline\mu\,$ this function is constant, which finally contradicts the fact that $\widetilde\Psi_\varepsilon$ is non-constant. So we conclude that there exist a subset $E'\subset {\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$ of full $\overline\mu$-measure and $a\ge 0$ such that the constant image of the map above restricted to $E'$ is the closed subgroup $2 a{\mathbb{Z}}$. In order to get the desired contradiction, we will next show that the cross-ratio spectrum ${{CR}}(\mathcal{Q}_\Gamma)$ is contained in the closed subgroup $a{\mathbb{Z}}$.
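Before doing so, let us recall the elementary fact behind the above case distinction, stated here only for the convenience of the reader: $$H\subset({\mathbb{R}},+)\ \text{a closed subgroup}\quad\Longrightarrow\quad H=\{0\},\quad H={\mathbb{R}},\quad\text{or}\quad H=c\,{\mathbb{Z}}\ \text{ for a unique }\ c>0.$$ Indeed, for $H\ne\{0\}$ set $c:=\inf\{h\in H\colon h>0\}$; if $c=0$, then $H$ is dense and hence, being closed, equal to ${\mathbb{R}}$, while for $c>0$ closedness gives $c\in H$ and then $H=c{\mathbb{Z}}$. Applied to the group of periods of $h_v$ this yields exactly the two cases considered above: either the image is all of ${\mathbb{R}}$ (the case just excluded), or it is of the form $2a{\mathbb{Z}}$ with $a\ge 0$.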
We denote by $\widetilde f:{{\mathcal G}}\to{\mathbb{R}}$ the lift of $f$ to ${{\mathcal G}}$, and define $$\widetilde f_\varepsilon:{{\mathcal G}}\to{\mathbb{R}}, \quad v\mapsto \int_0^\varepsilon \widetilde f(g^s v){\mathrm{d}}s.$$ Since $\widetilde f$ is $\Gamma$-invariant, $\widetilde f_\varepsilon$ is also $\Gamma$-invariant and therefore descends to a function $f_\varepsilon$ on $\quotient{\Gamma}{{{\mathcal G}}}$. Moreover, $$f_\varepsilon\circ g_\Gamma^{s_n}\to \Psi_\varepsilon\quad\text{and}\quad f_\varepsilon\circ g_\Gamma^{-s_n}\to \Psi_\varepsilon$$ weakly in $ {\mbox{\rm L}}^2(m_\Gamma)$ as $n\to\infty$, where $\Psi_\varepsilon\in {\mbox{\rm L}}^2(m_\Gamma)$ is the function induced from the $\Gamma$-invariant function $\widetilde\Psi_\varepsilon$ above. According to the classical fact stated and proved in [@MR1910932 Section 1] there exists a sequence $(n_k)\subset{\mathbb{N}}$ such that $\Psi_\varepsilon$ is the almost sure limit of the Cesaro averages for positive and negative times $$\frac{1}{K^2}\sum_{k=1}^{K^2} f_\varepsilon \circ g_\Gamma^{s_{n_k}}\quad\text{and }\quad \frac{1}{K^2}\sum_{k=1}^{K^2} f_\varepsilon \circ g_\Gamma^{-s_{n_k}}.$$ We denote by $\widetilde \Psi_\varepsilon^+$, $\widetilde \Psi_\varepsilon^-$ the lifts of the almost sure limits of the Cesaro averages above and consider the set $$\begin{aligned} \widetilde \Omega &:=\{ u\in{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}} \colon \widetilde\Psi_\varepsilon^+(u),\ \widetilde\Psi_\varepsilon^-(u) \ \text{ exist and }\ \widetilde\Psi_\varepsilon^+(u)=\widetilde\Psi_\varepsilon^-(u)=\widetilde\Psi_\varepsilon(u)\} ;\end{aligned}$$ from the previous paragraph and the fact that ${\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$ has full $\overline\mu$-measure we know that ${\partial_{\infty}}\widetilde\Omega$ has full $\overline\mu$-measure. The same is true for the set $E:=E'\cap {\partial_{\infty}}\widetilde\Omega$, where $E'\subset {\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$ is the set of full $\overline\mu$-measure from the first part of the proof. So in particular $v\in{\partial_{\infty}}^{-1}E$ implies that the periods of the continuous function $h_v\in{\mbox{\rm C}}({\mathbb{R}})$ are contained in the closed subgroup $2a{\mathbb{Z}}$. Since $\widetilde f$ is the lift of a function $f\in {\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$, both $\widetilde f$ and $\widetilde f_\varepsilon$ are uniformly continuous. So if $ u,v\in\widetilde \Omega\subset{\partial_{\infty}}^{-1}E$ are arbitrary, then according to Lemma \[KniepersProp\] we have the following statements:

- If $ u^+= v^+$ and ${{\mathcal B}}_{ v^+}( u(0), v(0))=0$, then $\ \widetilde \Psi_\varepsilon^+( u)= \widetilde \Psi_\varepsilon^+( v)$.

- If $ u^-= v^-$ and ${{\mathcal B}}_{ v^-}( u(0), v(0))=0$, then $\ \widetilde \Psi_\varepsilon^-( u)= \widetilde \Psi_\varepsilon^-( v)$.

Now according to Corollary \[useFubini\] the sets $$\begin{aligned} E^-&:= \{\xi\in{\partial{X}}\colon (\xi,\eta')\in E\ \text{ for }\ \mu_+\text{-almost every }\ \eta'\in{\partial{X}}\}\quad\text{and}\\ E^+&:= \{\eta\in{\partial{X}}\colon (\xi',\eta)\in E\ \text{ for }\ \mu_-\text{-almost every }\ \xi'\in{\partial{X}}\}\end{aligned}$$ satisfy $\mu_-(E^-)=\mu_-({\partial{X}})$, $\mu_+(E^+)=\mu_+({\partial{X}})$, hence $E^-\times E^+$ has full $\overline\mu$-measure.
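For the convenience of the reader we briefly indicate the Fubini argument behind Corollary \[useFubini\] in the present situation. By Theorem \[currentisproduct\] the measures $\overline\mu$ and $\mu_-\otimes\mu_+$ have the same null sets, so the complement of $E$ in ${\partial{X}}\times{\partial{X}}$ is a $(\mu_-\otimes\mu_+)$-null set, and Fubini's theorem gives $$0=(\mu_-\otimes\mu_+)\bigl(({\partial{X}}\times{\partial{X}})\setminus E\bigr)=\int_{{\partial{X}}} \mu_+\bigl(\{\eta'\in{\partial{X}}\colon (\xi,\eta')\notin E\}\bigr)\,{\mathrm{d}}\mu_-(\xi).$$ Hence the integrand vanishes for $\mu_-$-almost every $\xi\in{\partial{X}}$, which is precisely the statement $\mu_-(E^-)=\mu_-({\partial{X}})$; integrating in the other order yields $\mu_+(E^+)=\mu_+({\partial{X}})$ in the same way.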
We first consider the set of special quadrilaterals $$\begin{aligned} \mathcal{S} &=\{(\xi,\xi',\eta,\eta')\colon (\xi,\eta)\in E\cap (E^-\times E^+),\ (\xi',\eta'), (\xi,\eta'), (\xi',\eta)\in E\}\subset \mathcal{Q}_\Gamma. \end{aligned}$$ So let $(\xi,\eta)\in E\cap (E^-\times E^+)$ and choose $(\xi',\eta')\in E\,$ such that $(\xi',\eta)$ and $(\xi,\eta')$ also belong to $E$. In order to show that the cross-ratio $ {{CR}}(\xi,\xi',\eta,\eta')$ belongs to $a{\mathbb{Z}}$ we start with a geodesic $v\in{\partial_{\infty}}^{-1}(\xi,\eta)$. Let $v_1\in {\partial_{\infty}}^{-1}(\xi',\eta)$ such that ${{\mathcal B}}_{\eta}\bigl(v(0), v_1(0)\bigr)=0$, $v_2\in {\partial_{\infty}}^{-1}(\xi',\eta')$ such that ${{\mathcal B}}_{\xi'}\bigl(v_1(0), v_2(0)\bigr)=0$, $v_3\in {\partial_{\infty}}^{-1}(\xi,\eta')$ such that ${{\mathcal B}}_{\eta'}\bigl(v_2(0), v_3(0)\bigr)=0$ and finally $v_4\in {\partial_{\infty}}^{-1}(\xi,\eta)$ such that ${{\mathcal B}}_{\xi}\bigl(v_3(0), v_4(0)\bigr)=0$. Then according to (a) $$\widetilde \Psi_\varepsilon^+(v)=\widetilde \Psi_\varepsilon^+(v_1)=\widetilde \Psi_\varepsilon^-(v_1)$$ by choice of $\widetilde\Omega$. Moreover (b) gives $$\widetilde \Psi_\varepsilon^-(v_1)=\widetilde \Psi_\varepsilon^-(v_2)=\widetilde \Psi_\varepsilon^+(v_2).$$ Again by (a) we get $$\widetilde \Psi_\varepsilon^+(v_2)=\widetilde \Psi_\varepsilon^+(v_3)=\widetilde \Psi_\varepsilon^-(v_3)$$ and by (b) $$\widetilde \Psi_\varepsilon^-(v_3)=\widetilde \Psi_\varepsilon^-(v_4)=\widetilde \Psi_\varepsilon^+(v_4).$$ Altogether this shows $\widetilde \Psi_\varepsilon(v_4)=\widetilde \Psi_\varepsilon(v)$, and since ${\partial_{\infty}}v_4={\partial_{\infty}}v$ we know that there exists $t\in{\mathbb{R}}$ such that $v=g^t v_4$. Hence $t$ is a period of the function $h_v$ and therefore $t\in 2 a{\mathbb{Z}}$ (as ${\partial_{\infty}}v\in E'$). On the other hand, we have $$\begin{aligned} 2{{CR}}(\xi,\xi',\eta,\eta') &= 2 \bigl({{Gr}}_{{o}}(\xi,\eta)+{{Gr}}_{{o}}(\xi',\eta')- {{Gr}}_{{o}}(\xi,\eta')-{{Gr}}_{{o}}(\xi',\eta)\bigr)\\ &= {{\mathcal B}}_{\xi}({{o}}, v(0))+{{\mathcal B}}_{\eta}({{o}}, v(0)) + {{\mathcal B}}_{\xi'}({{o}}, v_2(0))+{{\mathcal B}}_{\eta'}({{o}}, v_2(0)) \\ &\quad - {{\mathcal B}}_{\xi}({{o}}, v_3(0))-{{\mathcal B}}_{\eta'}({{o}}, v_3(0)) - {{\mathcal B}}_{\xi'}({{o}}, v_1(0))-{{\mathcal B}}_{\eta}({{o}}, v_1(0))\\ &= \underbrace{{{\mathcal B}}_{\eta}(v_1(0), v(0))}_{=0} + \underbrace{{{\mathcal B}}_{\xi'}(v_1(0), v_2(0))}_{=0}+\underbrace{{{\mathcal B}}_{\eta'}(v_3(0), v_2(0))}_{=0}\\ &\quad + \underbrace{{{\mathcal B}}_\xi(v_4(0), v_3(0))}_{=0}+ {{\mathcal B}}_{\xi}(v_3(0), v(0))\\ & = {{\mathcal B}}_{\xi}(v_4(0), v(0)) ={{\mathcal B}}_\xi(v_4(0),v_4(t))=t\in 2a{\mathbb{Z}},\end{aligned}$$ hence ${{CR}}(\xi,\xi',\eta,\eta') \in a{\mathbb{Z}}$. This proves that ${{CR}}(\mathcal{S})\subset a{\mathbb{Z}}$. Finally, since the cross-ratio is continuous and the set of special quadrilaterals $ \mathcal{S}$ is dense in $\mathcal{Q}_\Gamma$, the cross-ratio spectrum ${{CR}}(\mathcal{Q}_\Gamma)$ is included in the discrete subgroup $a{\mathbb{Z}}$ of ${\mathbb{R}}$. So according to Proposition \[lengthsubsetcr\] the length spectrum is arithmetic in contradiction to the hypothesis of the theorem.

We will often work in the universal cover ${X}$ of $\quotient{\Gamma}{{X}}$ and therefore need the following

\[mixcor\] Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with non-arithmetic length spectrum and ${{\mathcal Z}}_\Gamma\ne\emptyset$.
Let $\mu_-$, $\mu_+$ be non-atomic finite Borel measures on ${\partial{X}}$ with $\mu_\pm({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_\pm({\partial{X}})$, and $$\overline\mu\sim (\mu_-\otimes\mu_+)\big|_{{\partial_{\infty}}{{\mathcal R}}}$$ a quasi-product geodesic current defined on ${\partial_{\infty}}{{\mathcal R}}$ for which the constant $\Delta$ defined in (\[Deltadef\]) is finite. Let $m_\Gamma$ be the associated Ricks’ measure on $\quotient{\Gamma}{ {{\mathcal G}}}$, $A$, $B\subset\quotient{\Gamma}{{{\mathcal G}}}$ Borel sets with $m_\Gamma(A)$ and $m_\Gamma(B)$ finite, and $\widetilde A$, $\widetilde B\subset{{\mathcal G}}$ lifts of $A$ and $B$. Then $$\lim_{t\to\pm \infty} \Bigl( \sum_{\gamma\in\Gamma} m(\widetilde A\cap g^{-t}\gamma \widetilde B)\Bigr)=\left\{\begin{array} {cl}\displaystyle \frac{m(\widetilde A)\cdot m(\widetilde B)}{\Vert m_\Gamma\Vert} & \text{ if } \ m_\Gamma \text{ is finite},\\[3mm] 0 & \text{ if } \ m_\Gamma \text{ is infinite}.\end{array}\right.$$

We denote by $$\mathcal{F}:= \{v\in{{\mathcal G}}\colon \gamma v=v \ \text{ for some}\ \gamma\in\Gamma\setminus\{e\}\}$$ the set of parametrized geodesics in ${{\mathcal G}}$ which are fixed by a non-trivial element in $\Gamma$. Notice that this set is non-empty only if $\Gamma$ contains elliptic elements. Obviously $\mathcal{F}$ is closed, $\Gamma$-invariant and invariant by the geodesic flow. Moreover, $\mathcal{F}\cap {{\mathcal Z}}$ is a proper subset of the support of $m$. Since according to Theorem \[propertiesofRicks\] the dynamical system $\bigl(\quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma\bigr)$ is ergodic, we conclude that $m(\mathcal{F})=0$. Choose a point $x\in{X}$ with trivial stabilizer in $\Gamma$. Let ${\mathcal D}_\Gamma\subset{{\mathcal G}}$ denote the open **Dirichlet domain** for $\Gamma$ with center $x$, that is, the set of all parametrized geodesic lines with origin in $$\{z\in{X}\colon d(z,x)< d(z,\gamma x)\ \text{ for all}\ \gamma\in\Gamma\setminus\{e\}\}.$$ Then by choice of $x$ we have $$\gamma {\mathcal D}_\Gamma\cap {\mathcal D}_\Gamma=\emptyset\ \text{ for all}\ \gamma\in\Gamma\setminus\{e\}$$ and $${{\mathcal G}}=\bigcup_{\gamma\in\Gamma} \gamma \overline{{\mathcal D}_\Gamma},$$ hence ${\mathcal D}_0:={\mathcal D}_\Gamma $ is a fundamental domain for the action of $\Gamma$ on ${{\mathcal G}}$. As ${X}$ is proper and $\Gamma $ is discrete, the set ${\mathcal D}_0\subset{{\mathcal G}}$ is **locally finite** in ${{\mathcal G}}$, that is, for any compact subset $K\subset{{\mathcal G}}$ the number $$\#\{\gamma\in\Gamma \colon K\cap \gamma \overline{{\mathcal D}_0}\ne\emptyset\}$$ is finite. However, the problem is that in general the boundary $\partial {\mathcal D}_0$ of the Dirichlet domain is very complicated, and in particular $m(\partial {\mathcal D}_0)>0$ is possible. Notice that $$\partial {\mathcal D}_0=\{v\in{{\mathcal G}}\colon d(v(0),x)=d(v(0),\gamma x) \ \text{ for some}\ \gamma\in\Gamma\};$$ obviously we have ${\mathcal F}\subset \partial {\mathcal D}_0$. For our purposes we will need a locally finite fundamental domain whose boundary has zero $m$-measure. In order to achieve this condition we will therefore modify the Dirichlet domain ${\mathcal D}_0$ in a neighborhood of its boundary $\partial {\mathcal D}_0$ as proposed by T. Roblin ([@MR2057305 p.
13]): We first choose a covering of $ \partial {\mathcal D}_0 \setminus \mathcal{F} $ by a locally finite family of open sets $(V_n)_{n\in{\mathbb{N}}}\subset {{\mathcal G}}\setminus \mathcal{F}$ such that for all $n\in{\mathbb{N}}$ we have $m(\partial V_n)=0$ and $\overline{V_n}\cap \gamma\overline{V_n}\ne\emptyset$ only for $\gamma=e$. Recall that local finiteness of the family $(V_n)_{n\in{\mathbb{N}}}$ means that for all $n\in{\mathbb{N}}$ the set $\{k\in{\mathbb{N}}\colon V_n\cap V_k\ne \emptyset\}$ is finite. Since ${\mathcal D}_0$ is locally finite in ${{\mathcal G}}$, the family of sets $(\Gamma V_n)_{n\in{\mathbb{N}}}\subset {{\mathcal G}}\setminus \mathcal{F}$ is still locally finite. We set ${\mathcal D}_1:=\bigl({\mathcal D}_{0}\setminus \Gamma \overline{V_1}\bigr)\cup V_1\subset{{\mathcal G}}\setminus \mathcal{F}$. This set is open as a union of two open sets, and it is still a locally finite fundamental domain for the action of $\Gamma$ on ${{\mathcal G}}$. Hence defining ${\mathcal D}_n:=\bigl({\mathcal D}_{n-1}\setminus \Gamma \overline{V_n}\bigr)\cup V_n$ for $n\in {\mathbb{N}}$, we get a sequence of locally finite fundamental domains in ${{\mathcal G}}\setminus \mathcal{F}$; its limit ${\mathcal D}$ is still a locally finite fundamental domain, but now with boundary of $m$-measure zero, since the boundary is contained in $$\bigcup_{n\in{\mathbb{N}}} \Gamma\cdot \partial V_n \cup \mathcal{F}.$$ Notice that for any measurable function $h\in {\mbox{\rm L}}^1(m_\Gamma)$ with lift $\widetilde h:{{\mathcal G}}\to{\mathbb{R}}$ the integral $\int_{\mathcal D} \widetilde h {\mathrm{d}}m$ is independent of the chosen fundamental domain ${\mathcal D}\subset{{\mathcal G}}$ of $\quotient{\Gamma}{{{\mathcal G}}}$ with $m(\partial{\mathcal D})=0$. Moreover, we obviously get from (\[chardesmeasure\]) and (\[defstrongRicks\]) $$\int_{{\mathcal D}} \widetilde h {\mathrm{d}}m =\int_{\scriptsize\quotient{\Gamma}{{{\mathcal G}}}} h{\mathrm{d}}m_\Gamma.$$ Now let $A$, $B\subset\quotient{\Gamma}{{{\mathcal G}}}$ be Borel sets with $m_\Gamma(A)$ and $m_\Gamma(B)$ finite, and $\widetilde A$, $\widetilde B\subset{{\mathcal G}}$ lifts of $A$ and $B$. Without loss of generality we may assume that $\widetilde A$, $\widetilde B\subset \overline{\mathcal D}$. For $t\in{\mathbb{R}}$ consider the function $h_t \in {\mbox{\rm L}}^1(m_\Gamma)$ defined by $$h_t=\mathbbm{1}_{A\cap g_\Gamma^{-t} B}.$$ For its lift $\widetilde h_t$ and $v\in{{\mathcal G}}$ we have $$\widetilde h_t(v)=1\quad\text{if }\ \gamma' v\in \widetilde A\cap g^{-t}\gamma\widetilde B\ \text{ for some } \gamma',\gamma\in\Gamma,$$ and $\widetilde h_t(v)=0$ otherwise. So $$\begin{aligned} m_\Gamma (A\cap g_\Gamma^{-t}B)&= \int_{\scriptsize\quotient{\Gamma}{{{\mathcal G}}}} h_t {\mathrm{d}}m_\Gamma =\int_{\mathcal D} \widetilde h_t {\mathrm{d}}m =\sum_{\gamma\in\Gamma} m(\widetilde A\cap g^{-t}\gamma\widetilde B). \end{aligned}$$ The claim now follows from Theorem \[mixthm\], since $$m_\Gamma(A)= \int_{\scriptsize\quotient{\Gamma}{{{\mathcal G}}}} \mathbbm{1}_A {\mathrm{d}}m_\Gamma =\int_{\mathcal D} \mathbbm{1}_{\Gamma \widetilde A} {\mathrm{d}}m =m(\widetilde A)\quad\text{and}\quad m_\Gamma (B)=m(\widetilde B).$$ Notice that in general it is not so easy to determine whether a discrete rank one group has arithmetic length spectrum or not.
As mentioned before, if $\Gamma<{\mbox{Is}}({X})$ has finite Ricks’ Bowen-Margulis measure and satisfies ${L_\Gamma}={\partial{X}}$, then according to Theorem 4 in [@Ricks] the length spectrum of $\Gamma$ is arithmetic if and only if ${X}$ is a tree with all edge lengths in $c{\mathbb{N}}$ for some $c>0$. This includes Babillot’s observation that for cocompact discrete rank one groups of a Hadamard **manifold** the length spectrum is non-arithmetic. Moreover, we recall a few further results: Let ${X}$ be a proper CAT$(-1)$ Hadamard space. A discrete rank one group $\Gamma<{\mbox{Is}}({X})$ has non-arithmetic length spectrum if

- $\Gamma$ contains a parabolic isometry ([@MR1617430]),

- the limit set ${L_\Gamma}$ possesses a connected component which is not reduced to a point ([@MR1341941]),

- ${X}$ is a manifold with constant sectional curvature ([@MR841080 Proposition 3]),

- ${X}$ is a Riemannian surface ([@MR1703039]).

Shadows, cones and corridors {#shadowconecorridor}
============================

We keep the notation and conditions from the previous section. So in particular ${X}$ is a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group. For our proof of the equidistribution theorem we will need a few definitions and preliminary statements. Recall that for $y\in {X}$ and $r>0$, $B_r(y)\subset{X}$ denotes the open ball of radius $r$ centered at $y\in{X}$. The **shadow** of $B_r(y)\subset{X}$ viewed from the source $x\in {X}$ is defined by $${\mathcal O}_{r}(x,y):=\{\eta \in{\partial{X}}\colon \sigma_{x,\eta}({\mathbb{R}}_+)\cap B_r(y)\neq\emptyset\};$$ this is an open subset of the geometric boundary ${\partial{X}}$. If $\xi\in{\partial{X}}$ we define $$\begin{aligned} {\mathcal O}_{r}(\xi,y)&:=\{\eta \in{\partial{X}}\colon \exists \ v\in{\partial_{\infty}}^{-1}(\xi,\eta) \ \text{with }\ v(0)\in B_r(y)\}\\ &= \{\eta\in{\partial{X}}\colon (\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}\ \text{ and }\ d\bigl(y,(\xi\eta)\bigr)<r\}.\end{aligned}$$ Notice that due to the possible existence of flat subspaces in ${X}$ a shadow ${\mathcal O}_{r}(\xi,y)$ with source $\xi\in{\partial{X}}$ need not be open: In a Euclidean plane such a shadow always consists of a single point in the boundary, no matter how large $r$ is. In our context, the shadows with source $\xi$ in the boundary ${\partial{X}}$ will be larger, but still not necessarily open.

\[shadowsfrominfinity\] If $\xi$ is the positive end point $v^+$ of a weakly ${\mbox{Is}}({X})$-recurrent geodesic $v\in{{\mathcal R}}$, then Lemma \[jointoweakrecurrent\] implies that ${\mathcal O}_{r}(\xi,y)$ is open for any $y\in {X}$. More generally, if there exists a geodesic $u\in {{\mathcal R}}$ with $u^+=\xi$ and $u(0)\in B_r(y)$, then according to Lemma \[joinrankone\] the shadow ${\mathcal O}_{r}(\xi,y)$ contains an open neighborhood of $u^-$ in ${\partial{X}}$, but need not be open: If $u$ is not ${\mbox{Is}}({X})$-recurrent, then this open neighborhood of $u^-$ can be much smaller than ${\mathcal O}_{r}(\xi,y)$, and there might exist a point $\eta\in {\mathcal O}_{r}(\xi,y)$ such that $(\xi\eta)$ is isometric to a Euclidean plane. But $\xi$ cannot be joined to any point in the boundary of $(\xi\eta)$ different from $\eta$, no matter how close it is to $\eta$.
In this case, every open neighborhood of $\eta$ intersects the complement of the shadow ${\mathcal O}_{r}(\xi,y)$ in ${\partial{X}}$ non-trivially (as this complement includes all the boundary points which cannot be joined to $\xi$ by a geodesic), hence $\eta\in \partial {\mathcal O}_{r}(\xi,y)$. We will now prove that this cannot happen if $\eta$ is the end point of an ${\mbox{Is}}({X})$-recurrent geodesic $v\in{{\mathcal Z}}$, that is, if $\eta$ belongs to the set $$\label{endpointsofzerowidthrecurrent} {\partial{X}}^{\small{\mathrm{rec}}}:=\{\eta\in{\partial{X}}\colon \exists\, v\in{{\mathcal Z}}\ \, {\mbox{Is}}({X})\text{-recurrent with}\ \eta=v^+\}.$$

\[lem:boundaryofshadow\] Let $\xi\in{\partial{X}}$, $x\in{X}$ and $r>0$ arbitrary. Then for the closure $\overline{{\mathcal O}_{r}(\xi,x)}$ and the boundary $\partial{\mathcal O}_{r}(\xi,x)$ of the shadow ${\mathcal O}_{r}(\xi,x)\subset{\partial{X}}$ we have

1. $\quad \displaystyle \overline{{\mathcal O}_{r}(\xi,x)} \subset \{\zeta\in{\partial{X}}\colon (\xi,\zeta)\in{\partial_{\infty}}{{\mathcal G}}\ \text{ and }\ d\bigl(x, (\xi\zeta)\bigr)\le r\}$,

2. $\quad \displaystyle \partial {\mathcal O}_{r}(\xi,x)\cap {\partial{X}}^{\small{\mathrm{rec}}} \subset \{\zeta \in{\partial{X}}^{\small{\mathrm{rec}}}\setminus\{\xi\}\colon d\bigl(x,(\xi\zeta)\bigr)=r\}.$

In order to prove (a) we let $\zeta\in \overline{ {\mathcal O}_{r}(\xi,x)}$ be arbitrary. Then there exists a sequence $(\zeta_n)\subset \mathcal O_{r}(\xi,x)$ with $\zeta_n\to\zeta$ as $n\to\infty$. For $n\in{\mathbb{N}}$ we let $v_n=v(x;\xi,\zeta_n)\in {{\mathcal G}}$ as defined in (\[orthogonalproj\]), hence in particular $v_n^-=\xi$, $v_n^+=\zeta_n$ and $v_n(0)\in B_r(x)$. Passing to a subsequence if necessary we may assume that $v_n(0)$ converges to a point $z\in \overline{B_r(x)}$ (as $\overline{B_r(x)}$ is compact). Recall the definition of the Alexandrov angle from (\[Alexandrovangle\]). According to Proposition II.9.2 in [@MR1744486] we have $$\angle_z(\xi,\zeta)\ge \limsup_{n\to\infty}\angle_{v_n(0)}(\xi,\zeta_n)=\pi,$$ since $v_n(0)$ is a point on the geodesic $v_n$ joining $\xi$ to $\zeta_n$. From $\angle_z(\xi,\zeta)\in [0,\pi]$ we therefore get $\angle_z(\xi,\zeta)=\pi$, hence $z\in \overline{B_r(x)}$ is a point on a geodesic joining $\xi$ to $\zeta$, and in particular $(\xi,\zeta)\in{\partial_{\infty}}{{\mathcal G}}$. This proves (a). For the proof of (b) we let $\zeta\in \partial {\mathcal O}_{r}(\xi,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}$ be arbitrary. By definition of the boundary we know that $\zeta\in\overline{ {\mathcal O}_{r}(\xi,x)}$ and that there exists a sequence $(\eta_n)\subset {\partial{X}}\setminus {\mathcal O}_{r}(\xi,x)$ with $\eta_n\to\zeta$ as $n\to\infty$. From (a) we know that $(\xi,\zeta)\in{\partial_{\infty}}{{\mathcal G}}$, hence in particular $\zeta\ne \xi$, and that $d\bigl(x,(\xi\zeta)\bigr)\le r$. So it only remains to prove that $ d\bigl(x,(\xi\zeta)\bigr)\ge r$. We will prove that every point $\eta\in \bigl({\partial{X}}^{\small{\mathrm{rec}}}\setminus\{\xi\}\bigr)\cap {\mathcal O}_{r}(\xi,x)$ is an interior point of $ {\mathcal O}_{r}(\xi,x)$: Then $d\bigl(x,(\xi\zeta)\bigr)< r$ would imply that $\zeta$ is an interior point of ${\mathcal O}_{r}(\xi,x)$ and therefore cannot be the limit of a sequence $(\eta_n)\subset{\partial{X}}\setminus {\mathcal O}_{r}(\xi,x)$, in contradiction to $\zeta\in \partial {\mathcal O}_{r}(\xi,x)$.
So let $\eta\in \bigl({\partial{X}}^{\small{\mathrm{rec}}}\setminus\{\xi\}\bigr)\cap {\mathcal O}_{r}(\xi,x)$ be arbitrary. From Lemma \[jointoweakrecurrent\] we get that $(\xi,\eta)\in{\partial_{\infty}}{{\mathcal Z}}$, and with $v:=v(x;\xi,\eta)\in{{\mathcal Z}}$ we have $d\bigl(x,v(0)\bigr)=d\bigl(x, (\xi\eta)\bigr)< r$. Fix $\varepsilon=\frac12\left(r-d\bigl(x, (\xi\eta)\bigr)\right)>0$. According to Lemma \[joinrankone\] there exists an open neighborhood $U\subset{\partial{X}}$ of $\eta$ such that any $u\in{{\mathcal G}}$ with $u^-=\xi$ and $u^+\in U$ satisfies $u\in{{\mathcal R}}$ and $d\bigl(v(0),u({\mathbb{R}})\bigr)<\varepsilon$. Let $\eta'\in U$ be arbitrary and let $u\in{\partial_{\infty}}^{-1}(\xi,\eta')$ be parametrized such that $d\bigl(v(0), u(0)\bigr)<\varepsilon$. Then $$\begin{aligned} d\bigl(x,(\xi\eta')\bigr)&\le d\bigl(x,u(0)\bigr)\le d\bigl(x,v(0)\bigr)+d\bigl(v(0),u(0)\bigr)<d\bigl(x,(\xi\eta)\bigr) +\varepsilon\\ &<d\bigl(x,(\xi\eta)\bigr)+\frac12\left(r-d\bigl(x,(\xi\eta)\bigr)\right)<r.\end{aligned}$$ Hence $\eta'\in{\mathcal O}_{r}(\xi,x)$ for every $\eta'\in U$, so $U\subset {\mathcal O}_{r}(\xi,x)$ and $\eta$ is indeed an interior point of ${\mathcal O}_{r}(\xi,x)$.

Instead of using the boundary $\partial {\mathcal O}_{r}(\xi,x)$ we will work in the sequel with the set $$\label{otherboundaryofshadow} \widetilde\partial {\mathcal O}_{r}(\xi,x):=\{\eta\in{\partial{X}}\colon (\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}\ \text{ and }\ d\bigl(x, (\xi\eta)\bigr)=r\}$$ whose intersection with ${\partial{X}}^{\small{\mathrm{rec}}}$ may be strictly larger than $\partial {\mathcal O}_{r}(\xi,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}$. Notice that every point $\eta\in\bigl({\partial{X}}^{\small{\mathrm{rec}}}\setminus\{\xi\} \bigr)\cap\bigl({\partial{X}}\setminus\widetilde\partial {\mathcal O}_{r}(\xi,x)\bigr)$ is an interior point of the complement ${\partial{X}}\setminus \widetilde\partial {\mathcal O}_{r}(\xi,x)$ of $\widetilde\partial {\mathcal O}_{r}(\xi,x)$ in ${\partial{X}}$. The converse inclusions “$\supset$” in the above Lemma \[lem:boundaryofshadow\] are wrong in general: If ${X}$ is a $4$-regular tree with all edge lengths equal to $1$, then $$\overline{ {\mathcal O}_{r}(\xi,x)}= {\mathcal O}_{r}(\xi,x) = \{\eta\in{\partial{X}}\setminus\{\xi\}\colon d\bigl(x,(\xi\eta)\bigr)\le \lceil{r}\rceil-1\},$$ where $\lceil{r}\rceil\in{\mathbb{N}}$ is the smallest integer bigger than or equal to $r$. So for $n\in{\mathbb{N}}$ we have $$\overline{ {\mathcal O}_{n}(\xi,x)}\subsetneq \{\eta\in{\partial{X}}\setminus\{\xi\}\colon d\bigl(x,(\xi\eta)\bigr)\le n\}.$$ Moreover, $$\begin{aligned} \widetilde\partial {\mathcal O}_{n}(\xi,x)&= \{\eta\in{\partial{X}}\setminus\{\xi\} \colon d\bigl(x, (\xi\eta)\bigr)=n\}\\ &= \{\eta\in{\partial{X}}\setminus\{\xi\} \colon n\le d\bigl(x, (\xi\eta)\bigr)<n+1\} \\ &={\mathcal O}_{n+1}(\xi,x)\setminus {\mathcal O}_{n}(\xi,x)\ne\emptyset,\end{aligned}$$ whereas the boundary $\partial {\mathcal O}_{r}(\xi,x)$ is always empty. Since all points in ${\partial{X}}$ are ${\mbox{Is}}({X})$-recurrent, this shows that for all $n\in{\mathbb{N}}$ $$\emptyset = \partial {\mathcal O}_{n}(\xi,x)\cap {\partial{X}}^{\small{\mathrm{rec}}} \subsetneq \widetilde\partial {\mathcal O}_{n}(\xi,x) = \{\zeta \in{\partial{X}}^{\small{\mathrm{rec}}}\setminus\{\xi\}\colon d\bigl(x,(\xi\zeta)\bigr)=n\}.$$ We will further need the following refined versions of the shadows above which were first introduced by T.
Roblin ([@MR2057305]): For $r>0$ and $x,y\in{X}$ we set $$\begin{aligned} {\mathcal O}^-_{r}(x,y) &:= \{\eta\in{\partial{X}}\colon \forall\, z\in B_r(x)\ \mbox{we have}\,\ \sigma_{z,\eta}({\mathbb{R}}_+)\cap B_r(y)\neq\emptyset\},\nonumber \\ {\mathcal O}^+_{r}(x,y) &:= \{\eta\in{\partial{X}}\colon \exists\, z\in B_r(x)\ \mbox{such that}\,\ \sigma_{z,\eta}({\mathbb{R}}_+)\cap B_r(y)\neq\emptyset\}.\end{aligned}$$ It is obvious from the definitions that for any $\rho>0$ and for all $x',y'\in{X}$ we have $$\label{inclusionoflargeshadows} d(x,x')<\rho\ \text{ and }\ d(y,y')<\rho\quad\Longrightarrow\quad {\mathcal O}^+_{r}(x,y)\subset {\mathcal O}^+_{r+\rho}(x',y').$$ Notice also that ${\mathcal O}^-_{r}(x,y)$ need not be open as it is an uncountable intersection of open sets $ {\mathcal O}_{r}(z,y)$ with $z\in B_r(x)$ (for a concrete example see Remark \[Pittetcounterex\] below). If $\xi\in{\partial{X}}$, we set $${\mathcal O}^-_{r}(\xi,y)={\mathcal O}^+_{r}(\xi,y)={\mathcal O}_{r}(\xi,y).$$

\[noconvergenceofshadows\] In the middle of page 58 of [@MR2057305] it is stated that in a CAT$(-1)$-space ${X}$ every sequence $(z_n)\subset{\overline{{X}}}$ converging to a point $\xi\in{\partial{X}}$ satisfies $$\lim_{n\to\infty} {\mathcal O}_{r}^{\pm}(z_n,x) = {\mathcal O}_{r}(\xi,x).$$ This is not true in a CAT$(0)$-space as the following example shows: Let ${X}$ be the Euclidean plane, $x\in{X}$ the origin $(0,0)$, and identify ${\partial{X}}$ with $\mathbb{S}^1\cong [0,2\pi)$. Let $\xi=\pi$ and $r>0$. Then obviously ${\mathcal O}_{r}(\xi,x)=\{0\}$. For $n\in{\mathbb{N}}$ we define $\varphi_n:=1/n$ and $z_n:=\bigl(-rn\cos(\varphi_n),-rn\sin(\varphi_n)\bigr)$, hence $$\sigma_{x,z_n}(-\infty)= \sigma_{z_n,x}(\infty)=\varphi_n \quad\text{and}\quad (z_n)\to\xi=\pi.$$ By elementary Euclidean geometry we further have $\, {\mathcal O}^-_{r}(z_n,x)=\{\varphi_n\}$, and thus $$\lim_{n\to\infty} {\mathcal O}^-_{r}(z_n,x)=\emptyset\ne \{0\}= {\mathcal O}_{r}(\xi,x).$$ However, the following statement will be sufficient for our purposes.

\[liminfsupofshadows\] Let $\xi\in{\partial{X}}$, $x\in{X}$, $r>0$ and recall the definitions of $\widetilde\partial{\mathcal O}_{r}(\xi,x)$ from (\[otherboundaryofshadow\]) and of ${\partial{X}}^{\small{\mathrm{rec}}}$ from (\[endpointsofzerowidthrecurrent\]). Then for every sequence $(z_n)\subset{\overline{{X}}}$ converging to $\xi$ the following inclusions hold:

1. $\ \displaystyle \limsup_{n\to\infty} ({\mathcal O}^{\pm}_r(z_n,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}) \subset \bigl({\mathcal O}_r(\xi,x)\cup \widetilde\partial {\mathcal O}_{r}(\xi,x)\bigr)\cap {\partial{X}}^{\small{\mathrm{rec}}}$,

2. $\ \displaystyle \liminf_{n\to\infty} ({\mathcal O}^{\pm}_r(z_n,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}) \supset {\mathcal O}_r(\xi,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}$.

Let us first prove (a), which will follow from $$\limsup_{n\to\infty} ({\mathcal O}^+_r(z_n,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}) \subset \bigl({\mathcal O}_r(\xi,x)\cup \widetilde\partial {\mathcal O}_{r}(\xi,x)\bigr)\cap{\partial{X}}^{\small{\mathrm{rec}}}.$$ If $\zeta\in \limsup_{n\to\infty} ({\mathcal O}^{+}_r(z_n,x)\cap{\partial{X}}^{\small{\mathrm{rec}}})$, then for all $n\in{\mathbb{N}}$ there exists $k_n\ge n$ such that $\zeta\in {\mathcal O}^{+}_r(z_{k_n},x)\cap{\partial{X}}^{\small{\mathrm{rec}}}$. Moreover, by definition of ${\partial{X}}^{\small{\mathrm{rec}}}$ and Lemma \[jointoweakrecurrent\] there exists $w\in{{\mathcal Z}}$ with $w^-=\zeta$ and $w^+=\xi$.
Reparametrizing $w$ if necessary we may assume that its origin $w(0)$ satisfies ${{\mathcal B}}_{\zeta}(x,w(0))=0$. For $n\in{\mathbb{N}}$ we let $\sigma_n$ be a geodesic ray in the class of $\zeta$ with $\sigma_n(0)\in B_r(z_{k_n})$ and $\sigma_n({\mathbb{R}}_+)\cap B_r(x)\ne \emptyset$. Let $u_n\in{{\mathcal G}}$ be a geodesic line with $u_n^-=\zeta$ whose image in ${X}$ contains $\sigma_n({\mathbb{R}}_+)$ (that is, $-u_n\in{{\mathcal G}}$ extends the ray $\sigma_n$). From $\zeta\in {\partial{X}}^{\small{\mathrm{rec}}}$ and Lemma \[jointoweakrecurrent\] we know that $u_n\in{{\mathcal Z}}$; up to reparametrization we can further assume that ${{\mathcal B}}_\zeta(x, u_n(0))=0$. Let $t_n\in{\mathbb{R}}$ such that $u_n(t_n)=\sigma_n(0)\in B_r(z_{k_n})$. From the convergence $z_{k_n}\to\xi$ we get $u_n(t_n)\to\xi$, hence $(t_n)\nearrow \infty$. By choice of $u_n$ we further know that $d(x,u_n({\mathbb{R}}))<r$; we fix $s_n\in{\mathbb{R}}$ such that $d(x, u_n(s_n))=d(x,u_n({\mathbb{R}}))$ (which is equivalent to $g^{s_n} u_n= v(x;\zeta,\xi)$). Then $$\begin{aligned} |s_n|&= \big|{{\mathcal B}}_\zeta\bigl(u_n(0),u_n(s_n)\bigr)\big|= \big|{{\mathcal B}}_\zeta\bigl(x,u_n(s_n)\bigr)\big|\le d\bigl(x,u_n(s_n)\bigr)<r,\end{aligned}$$ which shows that $d\bigl(x,u_n(0)\bigr)<2r$. Hence $u_n(t_n)\to\xi$ also implies $u_n^+\to\xi$, so $(u_n)$ converges weakly to $w\in{{\mathcal Z}}$. Passing to a subsequence if necessary we may assume that $(s_n)$ converges to $s\in [-r,r]$ and that $(u_n)$ converges to $w$ in ${{\mathcal G}}$ (by Lemma \[weakimpliesstrong\]). This finally gives $$\begin{aligned} d(x,w({\mathbb{R}})) &\le d\bigl(x,w(s)\bigr)\\ &\le \lim_{n\to\infty} \bigl( \underbrace{d\bigl(x,u_n(s_n)\bigr)}_{<r}+ \underbrace{d\bigl(u_n(s_n),w(s_n)\bigr)}_{\to 0}+\underbrace{d\bigl(w(s_n),w(s)\bigr)}_{=|s_n-s|\to 0}\bigr)<r. \end{aligned}$$ For the proof of (b) we let $\zeta\in {\mathcal O}_r(\xi,x)\cap{\partial{X}}^{\small{\mathrm{rec}}}$ be arbitrary. By definition of ${\partial{X}}^{\small{\mathrm{rec}}}$ and Lemma \[jointoweakrecurrent\] there exists $w\in{{\mathcal Z}}$ with $w^-=\xi$, $w^+=\zeta$. Reparametrizing $w$ if necessary we may assume that $w=v(x;\xi,\zeta)$, hence $d(x,w(0))<r$. Since $B_r(x)$ is open, there exists $\epsilon>0$ such that $B_{\epsilon}\bigl(w(0)\bigr)\subset B_r(x)$. According to Lemma \[joinrankone\] there exist neighborhoods $U$, $V\subset{\overline{{X}}}$ of $w^-$, $w^+$ such that any two points in $U$, $V$ can be joined by a rank one geodesic $u\in{{\mathcal R}}$ with $d(u(0),w(0))<\epsilon$ and ${\mathrm{width}}(u)<2\epsilon$. As $z_n\to\xi=w^-$ there exists $n\in{\mathbb{N}}$ such that for all $k\ge n$ we have $B_r(z_k)\subset U$; for these $k$ we immediately get $\zeta=w^+\in {\mathcal O}^-_r(z_k,x)\subset {\mathcal O}^+_r(z_k,x)$ (since $B_{\epsilon}\bigl(w(0)\bigr)\subset B_r(x)$).

We now fix non-atomic finite Borel measures $\mu_-$, $\mu_+$ on ${\partial{X}}$ with $\mu_\pm({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_\pm({\partial{X}})$ and such that $\overline\mu\sim (\mu_-\otimes\mu_+)\big|_{{\partial_{\infty}}{{\mathcal R}}}$ is a quasi-product geodesic current on ${\partial_{\infty}}{{\mathcal R}}$ for which the constant $\Delta$ defined by (\[Deltadef\]) is finite. We will need the following

\[boundaryhasmeasurezero\] Let $\xi\in{\partial{X}}$, $x\in{X}$ and recall definition (\[otherboundaryofshadow\]). Then the set $$\{r>0\colon \mu_+\bigl(\widetilde\partial {\mathcal O}_r(\xi,x)\bigr)>0\}$$ is at most countable.
We first notice that the sets $\widetilde\partial {\mathcal O}_r(\xi,x)$ are disjoint for different values of $r$. Hence by finiteness of $\mu_+$ we know that for $n\in{\mathbb{N}}$ arbitrary the set $$A_n:=\{r>0\colon \mu_+(\widetilde\partial {\mathcal O}_r(\xi,x))>1/n\}$$ is finite. Therefore the set $\displaystyle \ \{r>0\colon \mu_+(\widetilde\partial {\mathcal O}_r(\xi,x))>0\}=\bigcup_{n\in{\mathbb{N}}} A_n\ $ is at most countable. From Proposition \[liminfsupofshadows\] we get the following estimate on the $\mu_{+}$-measure of the small and large shadows with source in the neighborhood of a given boundary point.

\[measureofshadowsisclose\] Let $\xi\in{\partial{X}}$, $x\in{X}$ and $r>0$ such that $$\mu_+\bigl({\mathcal O}_r(\xi,x)\bigr)>0\quad\text{and}\quad \mu_+\bigl(\widetilde\partial{\mathcal O}_r(\xi,x)\bigr)=0.$$ Then for all $\varepsilon>0$ there exists a neighborhood $U\subset{\overline{{X}}}$ of $\xi$ such that for all $z\in U$ $${\mathrm{e}}^{-\varepsilon} \mu_+\bigl({\mathcal O}_r(\xi,x)\bigr)<\mu_+\bigl({\mathcal O}_r^{\pm}(z,x)\bigr) <{\mathrm{e}}^{\varepsilon} \mu_+\bigl({\mathcal O}_r(\xi,x)\bigr).$$

We first recall the definition of ${{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}$ from (\[definezerorec\]) and notice that $\quotient{\Gamma}{{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}}$ has full $m_\Gamma$-measure by Theorem \[currentisproduct\]. So according to Corollary \[useFubini\] we have $$\mu_+\bigl(\{\zeta\in{\partial{X}}\colon (\eta,\zeta)\in{\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}\ \text{ for } \ \mu_-\text{-almost every }\ \eta\in {\partial{X}}\}\bigr)=\mu_+({\partial{X}}).$$ Hence from the obvious inclusion $$\{\zeta\in{\partial{X}}\colon (\eta,\zeta)\in{\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}}\ \text{ for } \ \mu_-\text{-almost every }\ \eta\in {\partial{X}}\}\subset {\partial{X}}^{\small{\mathrm{rec}}}$$ we obtain $\mu_+({\partial{X}}^{\small{\mathrm{rec}}})=\mu_+({\partial{X}})$. Since $\mu_+$ is a finite Borel measure, Proposition \[liminfsupofshadows\] implies $$\begin{aligned} \mu_+\bigl({\mathcal O}_r(\xi,x)\bigr)& =\mu_+\bigl({\mathcal O}_r(\xi,x)\cap {\partial{X}}^{\small{\mathrm{rec}}}\bigr)\stackrel{\text{\scriptsize{(b)}}}{\le} \mu_+\bigl(\liminf_{n\to\infty} ({\mathcal O}^{\pm}_r(z_n,x)\cap{\partial{X}}^{\small{\mathrm{rec}}})\bigr)\\ &\le \liminf_{n\to\infty} \mu_+\bigl({\mathcal O}^{\pm}_r(z_n,x)\cap{\partial{X}}^{\small{\mathrm{rec}}}\bigr)\le \limsup_{n\to\infty} \mu_+\bigl({\mathcal O}^{\pm}_r(z_n,x)\cap{\partial{X}}^{\small{\mathrm{rec}}}\bigr)\\ &\le \mu_+\bigl(\limsup_{n\to\infty} ({\mathcal O}^{\pm}_r(z_n,x)\cap{\partial{X}}^{\small{\mathrm{rec}}})\bigr) \\ &\stackrel{\text{\scriptsize{(a)}}}{\le} \mu_+\bigl(({\mathcal O}_r(\xi,x)\cup \widetilde\partial {\mathcal O}_r(\xi,x))\cap{\partial{X}}^{\small{\mathrm{rec}}}\bigr)= \mu_+\bigl({\mathcal O}_r(\xi,x)\bigr),\end{aligned}$$ because $\mu_+\bigl(\widetilde\partial{\mathcal O}_r(\xi,x)\bigr)=0$. So we conclude $$\lim_{n\to\infty} \mu_+\bigl({\mathcal O}^{\pm}_r(z_n,x)\bigr)=\lim_{n\to\infty} \mu_+\bigl({\mathcal O}^{\pm}_r(z_n,x)\cap {\partial{X}}^{\small{\mathrm{rec}}} \bigr)=\mu_+\bigl({\mathcal O}_r(\xi,x)\bigr),$$ hence the claim.
For a subset $A\subset {\partial{X}}$ we next define the small and large cones $$\begin{aligned} \label{slcones} {\mathcal C}^-_{r}(x,A) &:= \{z\in{X}\colon {\mathcal O}^+_{r}(x,z)\subset A\},\\ {\mathcal C}^+_{r}(x,A) &:= \{z\in{X}\colon {\mathcal O}^+_{r}(x,z)\cap A\ne \emptyset\}.\nonumber\end{aligned}$$ Notice that our definition of the small cones ${\mathcal C}^-_{r}$ differs slightly from Roblin’s in order to get Lemma \[smallcones\]. Moreover, they are related to our large cones via $${\mathcal C}^-_{r}(x,A)\subset {\mathcal C}^+_{r}(x,A)\quad\text{and}\quad {\mathcal C}^-_{r}(x,A)= {\overline{{X}}}\setminus{\mathcal C}^+_{r}(x,{\partial{X}}\setminus A).$$ From the latter equality and (\[inclusionoflargeshadows\]) we immediately get

\[changepoint\] Let $\rho>0$, $x_0\in B_\rho(x)$ and $y_0\in B_\rho(y)$. Then

1. $\quad y\in {\mathcal C}^+_{r}(x,A)\quad \Longrightarrow \quad y_0 \in {\mathcal C}^+_{r+\rho}(x_0,A)$,

2. $\quad y\in {\mathcal C}^-_{r+\rho}(x,A)\quad \Longrightarrow \quad y_0 \in {\mathcal C}^-_{r}(x_0,A)$.

This shows in particular that for $r<r'$ we have $$\label{coneesti} {\mathcal C}^+_{r}(x,A)\subset {\mathcal C}^+_{r'}(x,A)\quad\text{and }\quad {\mathcal C}^-_{r}(x,A)\supset {\mathcal C}^-_{r'}(x,A).$$ In Sections \[equidistribution\] and \[orbitcounting\] we will frequently need the following

\[orbitpointsincones\] Let $x,y\in{X}$, $r>0$, and $\widehat V\subset{\overline{{X}}}$, $V\subset{\partial{X}}$ be arbitrary open sets.

1. For $A\subset {\partial{X}}$ with $\overline{A}\subset \widehat V\cap{\partial{X}}$ only finitely many $\gamma\in\Gamma$ satisfy $$\gamma y \in {\mathcal C}_r^{\pm} (x,A) \setminus \widehat V.$$

2. For $\widehat A\subset {\overline{{X}}}$ with $\overline{\widehat A}\cap{\partial{X}}\subset V$ only finitely many $\gamma\in\Gamma$ satisfy $$\gamma y \in \widehat A\setminus {\mathcal C}_r^{\pm} (x,V).$$

We begin with the proof of (a) by contradiction. Assume that there exists a sequence $(\gamma_n)\subset\Gamma$ such that $ \gamma_n y \in {\mathcal C}_r^+ (x,A) \setminus \widehat V$ for all $n\in{\mathbb{N}}$. As $\Gamma$ is discrete, every accumulation point of $(\gamma_n y)\subset{X}$ belongs to ${\partial{X}}$. Passing to a subsequence if necessary we will assume that $\gamma_n y\to \zeta\in{L_\Gamma}\cap{\partial{X}}$ as $n\to\infty$. From $\gamma_n y\in {\mathcal C}_r^+ (x,A)$ we know that $ {\mathcal O}^+_{r}(x,\gamma_n y)\cap A\ne \emptyset$. We choose a geodesic line $v_n\in {{\mathcal G}}$ with $v_n^+\in A$ whose image intersects first $B_r(x)$ and then $B_r(\gamma_n y)$. Up to reparametrization we can assume that ${{\mathcal B}}_{v_n^+}(x,v_n(0))=0$ and ${{\mathcal B}}_{v_n^+}(\gamma_n y, v_n(t_n))=0$ for some $t_n>0$. Then by an easy geometric estimate analogous to the one in the proof of Proposition \[liminfsupofshadows\] (a) we have $d(x,v_n(0))<2r$ and $d\bigl(\gamma_n y, v_n(t_n)\bigr)<2r$. By convexity of the distance function and $\sigma_{x,v_n^+}(\infty)=v_n(\infty)=v_n^+$ we get $$d\bigl(\sigma_{x,v_n^+}(T), v_n(T)\bigr)<2r\quad\text{for all}\quad T>0.$$ Hence $$d\bigl(\gamma_n y, \sigma_{x,v_n^+}(t_n)\bigr)\le d\bigl(\gamma_n y,v_n(t_n)\bigr)+d\bigl(v_n(t_n), \sigma_{x,v_n^+}(t_n)\bigr)<4r$$ which implies $v_n^+\to \zeta$ and therefore $\zeta\in\overline{A}\subset \widehat V\cap{\partial{X}}$. On the other hand, as $\widehat V$ is open and $\gamma_n y \notin \widehat V$ for all $n\in{\mathbb{N}}$, we obviously have $\zeta\notin \widehat V\cap {\partial{X}}$, hence a contradiction.
The claim for ${\mathcal C}_r^- (x,A) \setminus \widehat V$ follows from the obvious inclusion ${\mathcal C}_r^- (x,A) \subset {\mathcal C}_r^+ (x,A) $. For the proof of (b) we assume that there exists a sequence $(\gamma_n)\subset\Gamma$ such that $ \gamma_n y \in \widehat A\setminus {\mathcal C}_r^- (x,V)$ for all $n\in{\mathbb{N}}$. Passing to a subsequence if necessary we will assume as above that $\gamma_n y\to \zeta\in{L_\Gamma}\cap{\partial{X}}$ as $n\to\infty$. Here $\gamma_n y\in \widehat A$ for all $n\in{\mathbb{N}}$ obviously implies $\zeta\in\overline{\widehat A}\cap {\partial{X}}\subset V $. From $\gamma_n y\notin {\mathcal C}_r^- (x,V)$ we know that $ {\mathcal O}^+_{r}(x,\gamma_n y)\not\subset V$. We choose a geodesic line $v_n\in {{\mathcal G}}$ with $v_n^+\not\in V$ whose image intersects first $B_r(x)$ and then $B_r(\gamma_n y)$. As in the proof of (a) we get $v_n^+\to \zeta$, and therefore $\zeta\in\overline{{\partial{X}}\setminus V}={\partial{X}}\setminus V$ since $V$ is open; this is an obvious contradiction to $\zeta\in V$. Again, the claim for $\widehat A\setminus {\mathcal C}_r^+ (x,V)$ follows from the obvious inclusion ${\mathcal C}_r^+ (x,V) \supset {\mathcal C}_r^- (x,V) $.

Before we proceed we will state some results concerning the following corridors first introduced by T. Roblin ([@MR2057305]): For $r>0$ and $x,y\in{X}$ we set $$\begin{aligned} {\mathcal L}_{r}(x,y) &= \{(\xi,\eta)\in{\partial_{\infty}}{{\mathcal G}}\colon \exists\, v\in{\partial_{\infty}}^{-1}(\xi,\eta)\ \exists\, t>0\ {\mbox{such}\ \mbox{that}\ }\label{Lrc} \\ &\hspace*{4.8cm}v(0)\in B_r(x),\ v( t)\in B_r(y) \}.\nonumber\end{aligned}$$ Notice that if $(\xi,\eta)\notin{\partial_{\infty}}{{\mathcal Z}}$, then the element $v\in{\partial_{\infty}}^{-1}(\xi,\eta)$ satisfying the condition on the right-hand side is in general different from $v(x;\xi,\eta)$ (and from $g^{-t}v(y;\xi,\eta)$).

\[Pittetcounterex\] The inclusion ${\mathcal O}^-_{r}(y,x)\times {\mathcal O}^-_{r}(x,y)\subset {\mathcal L}_{r}(x,y) $ claimed in the middle of page 58 of [@MR2057305] is wrong even in the hyperbolic plane $\mathbb{H}^2$ as the following counterexample provided by C. Pittet shows: Let $x=1+\mathrm{i}$, $y={\mathrm{e}}^4+\mathrm{i} {\mathrm{e}}^4 $ and $r=d(x,\sqrt2\mathrm{i})=d(y,\sqrt2{\mathrm{e}}^4\mathrm{i})$ (which is equal to the hyperbolic distance of $x$ respectively $y$ to the imaginary axis). Then elementary hyperbolic geometry shows that the geodesic line $$\sigma:{\mathbb{R}}\to \mathbb{H}^2,\quad t\mapsto {\mathrm{e}}^{t}\mathrm{i}$$ satisfies $\sigma(-\infty)\in \mathcal{O}_r^-(y,x)$, $\sigma(\infty)\in \mathcal{O}_r^-(x,y)$, but $\bigl(\sigma(-\infty),\sigma(\infty)\bigr)\notin \mathcal{L}_r(x,y)$ (since $\sigma({\mathbb{R}})$ is tangent to the open balls $B_r(x)$ and $B_r(y)$). Notice in particular that none of the sets $\mathcal{O}_r^-(y,x)$, $\mathcal{O}_r^-(x,y)$ is open. As a replacement for the above inclusion we will prove Lemma \[smallcones\] below.

From here on we fix $r>0$, $\gamma\in{\mbox{Is}}({X})$, points $x$, $y\in{X}$ and subsets $ A,B\subset {\partial{X}}$. The following results relate corridors to cones and large shadows. The proof of the first one is straightforward.
\[largecones\] If $\ (\zeta,\xi)\in {\mathcal L}_{r}(x,\gamma y) \cap (\gamma B\times A ),$ then $$\begin{aligned} (\gamma y, \gamma^{-1}x) & \in {\mathcal C}^+_{r}(x,A)\times {\mathcal C}^+_{r}(y,B)\ \text{ and}\quad ( \zeta,\xi )\in {\mathcal O}^+_{r}(\gamma y,x)\times {\mathcal O}^+_{r}(x,\gamma y).\end{aligned}$$

\[smallcones\] If $(\gamma y, \gamma^{-1}x) \in {\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)$, then $$\begin{aligned} {\mathcal L}_{r}(x,\gamma y) \cap (\gamma B\times A)\supset \{(\zeta, \xi)\in{\partial{X}}\times{\partial{X}}\colon \xi\in {\mathcal O}^-_{r}(x,\gamma y),\ \zeta\in {\mathcal O}_{r}(\xi,x)\}.\end{aligned}$$

From $\ \zeta\in {\mathcal O}_{r}(\xi,x)\ $ we know that the geodesic line $w=v(x;\xi,\zeta)\in{{\mathcal G}}$ defined by (\[orthogonalproj\]) has origin $w(0)\in B_r(x)$. Then $v:=-w\in{\partial_{\infty}}^{-1}(\zeta,\xi)$ satisfies $v(0)\in B_r(x)$, so $v^+=\xi\in {\mathcal O}^-_{r}(x,\gamma y)\ $ implies $v(t)=\sigma_{v(0),\xi}(t)\in B_{r}(\gamma y)$ for some $t>0$. We conclude $\ (\zeta,\xi)\in {\mathcal L}_{r}(x,\gamma y)$. It remains to prove that $\zeta\in \gamma B$ and $\xi\in A$. By definition (\[slcones\]) $\gamma y\in {\mathcal C}^-_{r}(x,A)$ immediately gives ${\mathcal O}^-_{r}(x,\gamma y)\subset {\mathcal O}^+_{r}(x,\gamma y)\subset A$, hence $\xi\in A$. Moreover, from $(\zeta,\xi)\in {\mathcal L}_{r}(x,\gamma y)$ we directly get $\zeta\in {\mathcal O}^+_{r}(\gamma y,x)$. So $\gamma^{-1} \zeta\in {\mathcal O}^+_{r}( y,\gamma^{-1} x)$, and from $\gamma^{-1}x\in {\mathcal C}^-_{r}(y,B)$ we know that ${\mathcal O}^+_{r}( y,\gamma^{-1} x)\subset B$ according to definition (\[slcones\]). Hence $\gamma^{-1}\zeta\in B$ which is equivalent to $\zeta\in\gamma B$.

We will further need the following Borel subsets of ${{\mathcal G}}$ which up to small details were already introduced by T. Roblin in [@MR2057305]: $$\begin{aligned} \label{Kpm} && K_r(x) = \{ g^s v(x;\xi,\eta) \colon (\xi,\eta)\in {\partial_{\infty}}{{\mathcal Z}}\ \text{with}\ d(x,(\xi\eta))<r,\ s\in (-r/2,r/2)\},\nonumber\\ && K_r^+(x,A) = \{ v\in K_r(x)\colon v^+\in A\}=:K^+ ,\\ && K_r^-(y,B)= \{ w\in K_r( y) \colon w^-\in B\}=:K^-.\nonumber \end{aligned}$$ Notice that by definition the orbit of an element $v\in{{\mathcal Z}}$ under the geodesic flow either never enters one of the sets above or spends precisely time $r$ in it. Moreover, we have the following relation to the corridors ${\mathcal L}_{r}(x,\gamma y)$ introduced in (\[Lrc\]):

\[K+-corridor\] For all $\gamma\in{\mbox{Is}}({X})$ with $d(x,\gamma y)\ge 3r$ we have $${\partial_{\infty}}\Big(\bigcup_{t>0}\big( K^+\cap g^{-t}\gamma K^-\big)\Big)= {\mathcal L}_{r}(x,\gamma y)\cap {\partial_{\infty}}{{\mathcal Z}}\cap (\gamma B\times A).$$

For the inclusion “$\subset$” we let $v\in K^+\cap g^{-t}\gamma K^-$ for some $t>0$. Then obviously $(\zeta,\xi):= (v^-,v^+)\in{\partial_{\infty}}{{\mathcal Z}}$, $\xi=v^+\in A$ and $\zeta=v^-\in\gamma B$. Now consider $u:= v(x;\zeta,\xi)\in{{\mathcal Z}}$ and let $\tau\in{\mathbb{R}}$ such that $$v(\gamma y;\zeta,\xi) = g^{\tau} u;$$ such $\tau$ exists because $(\zeta,\xi)\in{\partial_{\infty}}{{\mathcal Z}}$. From the definition of $K_r(x)$ and $K_r(\gamma y)$ we further get $|d(x,\gamma y)-\tau|<2r$; since $d(x,\gamma y)\ge 3r$ this implies $\tau>r>0$. Hence $(\zeta,\xi)=(u^-, u^+)\in {\mathcal L}_{r}(x,\gamma y)$. For the converse inclusion “$\supset$” we let $(\zeta,\xi)\in {\mathcal L}_{r}(x,\gamma y) \cap {\partial_{\infty}}{{\mathcal Z}}\cap (\gamma B\times A) $ be arbitrary.
Then by definition (\[Lrc\]) there exist $v\in {{\mathcal Z}}$ and $t'>0$ with $$(v^-,v^+)=(\zeta,\xi),\quad d(x, v(0))<r\quad\text{and }\ d\bigl(\gamma y, v(t')\bigr)<r.$$ As above we set $u:= v(x;\zeta,\xi)$ and let $\tau\in{\mathbb{R}}$ such that $$v(\gamma y;\zeta,\xi) = g^{\tau} u.$$ Since $d(x,u(0))\le d(x,v(0))<r$ and $u^+=\xi\in A$ we have $u\in K^+$, and from $d(\gamma y, u(\tau))\le d(\gamma y,v(t'))<r$ and $u^-=\zeta\in\gamma B$ we further get $ g^{\tau} u\in\gamma K^-$. Moreover we have $\tau>r>0$ as above, so the claim is proved.

Ricks’ Bowen-Margulis measure and some useful estimates {#RicksBMestimates}
=======================================================

As before ${X}$ will denote a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne\emptyset$. In order to get the equidistribution result Theorem B from the introduction we will have to work with the so-called Ricks’ Bowen-Margulis measure: This is the Ricks’ measure from Section \[Mixing\] associated to a particular quasi-product geodesic current $\overline\mu$. We are going to describe this geodesic current now.

\[confdensity\] A $\delta$-dimensional $\Gamma$-invariant **conformal density** is a continuous map $\mu\,$ of ${X}$ into the cone of positive finite Borel measures on ${\partial{X}}$ such that for all $x$, $y\in{X}$ and every $\gamma\in\Gamma$ we have $$\begin{aligned} \label{conformality} \nonumber &{\mbox{supp}}(\mu_x)\subset {L_\Gamma},\\ \nonumber &\gamma_*\mu_x=\mu_{\gamma x},\quad\text{where } \ \gamma_*\mu_x(E):=\mu_x(\gamma^{-1}E)\quad\text{for all Borel sets }\ E\subset{\partial{X}},\\ &\frac{{\mathrm{d}}\mu_x}{{\mathrm{d}}\mu_y}(\eta)={\mathrm{e}}^{\delta {{\mathcal B}}_{\eta}(y,x)} \quad\text{for any }\ \eta\in{\mbox{supp}}(\mu_x).\end{aligned}$$

Recall the definition of the critical exponent $ \delta_\Gamma$ from (\[critexpdef\]) and notice that in our setting it is strictly positive, since $\Gamma$ contains a non-abelian free subgroup generated by two independent rank one elements. For $\delta=\delta_\Gamma$ a conformal density as above can be explicitly constructed following the idea of S. J. Patterson originally used for Fuchsian groups (see for example [@MR1465601 Lemma 2.2]). From here on we will therefore fix a $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density $\mu=(\mu_x)_{x\in{X}}$. With the Gromov product from (\[GromovProd\]) we will now consider, as in Section 7 of [@Ricks] and in Section 8 of [@LinkHTS], for $x\in{X}$ the geodesic current $\overline{\mu}_{x}\,$ on ${\partial_{\infty}}{{\mathcal G}}\subset{\partial{X}}\times{\partial{X}}$ defined by $$d\overline{\mu}_x(\xi,\eta)={\mathrm{e}}^{2\delta_\Gamma{{Gr}}_x(\xi,\eta)} {\mathbbm 1}_{{\partial_{\infty}}\mathcal{R}}(\xi,\eta){\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta).$$ As $\overline\mu_x$ does not depend on the choice of $x\in{X}$ we will write $\overline\mu$ in the sequel. Since we want to apply Theorem \[mixthm\] we will assume that $\mu_x({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_x({\partial{X}})$; in view of the Hopf-Tsuji-Sullivan dichotomy (Theorem 10.2 in [@LinkHTS]) this is equivalent to the fact that $\Gamma$ is divergent. Moreover, it is well-known that in this case the conformal density $\mu$ from above is non-atomic and unique up to scaling.
So Theorem \[currentisproduct\] implies that for all $x$, $y\in{X}$ we have $$\label{overlinemudef} d\overline{\mu}(\xi,\eta)={\mathrm{e}}^{2\delta_\Gamma{{Gr}}_x(\xi,\eta)}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta)={\mathrm{e}}^{2\delta_\Gamma{{Gr}}_y(\xi,\eta)}{\mathrm{d}}\mu_y(\xi){\mathrm{d}}\mu_y(\eta)$$ and $$(\mu_x\otimes\mu_x)({\partial_{\infty}}{{\mathcal Z}}_\Gamma^{\small{\mathrm{rec}}})=(\mu_x\otimes\mu_x)({\partial_{\infty}}{{\mathcal Z}})=\mu_x({\partial{X}})^2.$$ The Ricks’ measure $m_\Gamma$ on $\quotient{\Gamma}{{{\mathcal G}}}$ associated to the geodesic current $\overline\mu$ from (\[overlinemudef\]) is called the [[**]{}Ricks’ Bowen-Margulis measure]{}. It generalizes the well-known Bowen-Margulis measure in the CAT$(-1)$-setting. Moreover, for the measure $m$ from which it descends we have the formula  (\[measureformula\]). Notice also that if ${X}$ is a manifold and $\Gamma$ is cocompact, then Ricks’ Bowen-Margulis measure is equal to the measure of maximal entropy $m^{\text{\scriptsize Kn}}_\Gamma$ described in [@MR1652924] (this is Knieper’s measure associated to $\overline\mu$ from (\[overlinemudef\])). We further remark that the constant $\Delta$ defined in (\[Deltadef\]) is equal to $2 \delta_\Gamma$ in this case (compare the last paragraph in Section 8 of [@LinkHTS]), hence in particular finite. Fix $r>0$, points $x$, $y\in{X}$ and subsets $A$, $B\subset{\partial{X}}$. We will first compute the measure of the sets introduced in (\[Kpm\]). From (\[measureformula\]), (\[overlinemudef\]) and the remark below (\[Kpm\]) we get $$\begin{aligned} m(K^+)&=\int_{{\partial_{\infty}}{{\mathcal Z}}}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta){\mathrm{e}}^{2\delta_\Gamma {{Gr}}_x(\xi,\eta)} \int {\mathbbm 1}_{K^+}\bigl(g^s v(x;\xi,\eta)\bigr){\mathrm{d}}s\\ & =r \int_A {\mathrm{d}}\mu_x(\xi)\int_{\mathcal{O}_r(\xi,x)} {\mathrm{d}}\mu_x(\eta){\mathrm{e}}^{2\delta_\Gamma {{Gr}}_x(\xi,\eta)}, \nonumber\end{aligned}$$ and similarly $$m(K^-)=r \int_B {\mathrm{d}}\mu_y(\eta)\int_{\mathcal{O}_r(\eta,y)} {\mathrm{d}}\mu_y(\xi){\mathrm{e}}^{2\delta_\Gamma {{Gr}}_y(\xi,\eta)} .$$ From the non-negativity of the Gromov-product (\[GromovProd\]) and the fact that $${{Gr}}_x(\xi,\eta)\le r\quad\text{if}\quad \eta\in\mathcal{O}_r(\xi,x)$$ we further get the useful estimates $$\begin{aligned} \label{measureK+} r \int_A {\mathrm{d}}\mu_x(\xi)\mu_x\bigl({\mathcal{O}_r(\xi,x)}\bigr)&\le m(K^+)\le r {\mathrm{e}}^{2\delta_\Gamma r} \int_A {\mathrm{d}}\mu_x(\xi)\mu_x\bigl({\mathcal{O}_r(\xi,x)}\bigr) , \\ \nonumber r \int_B {\mathrm{d}}\mu_y(\eta)\mu_y\bigl({\mathcal{O}_r(\eta,y)}\bigr)&\le m(K^-)\le r {\mathrm{e}}^{2\delta_\Gamma r} \int_B {\mathrm{d}}\mu_y(\eta)\mu_x\bigl({\mathcal{O}_r(\eta,y)}\bigr).\end{aligned}$$ We continue with the important \[flowintegrals\] Let $T_0>6r$, $T>T_0+3r$, $\gamma\in\Gamma$, $(\xi,\eta)\in {\mathcal L}_r(x,\gamma y)\cap {\partial_{\infty}}{{\mathcal Z}}$ and $s\in (-r/2,r/2)$. Then 1. $\ \displaystyle \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \mathbbm{1}_{K_r(\gamma y)}\bigl(g^{t+s} v(x;\xi,\eta)\bigr){\mathrm{d}}t \ge r\cdot {\mathrm{e}}^{-3\delta_\Gamma r} {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)}\ \ $\ if $\ \ T_0+3r<d(x,\gamma y)\le T$, 2. 
$\ \displaystyle \int_{T_0}^{T-3r} {\mathrm{e}}^{\delta_\Gamma t} \mathbbm{1}_{K_r(\gamma y)}\bigl(g^{t+s} v(x;\xi,\eta)\bigr){\mathrm{d}}t \le r\cdot {\mathrm{e}}^{3\delta_\Gamma r} {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)} $,\ and $\ \ \displaystyle \int_{T_0}^{T-3r} {\mathrm{e}}^{\delta_\Gamma t} \mathbbm{1}_{K_r(\gamma y)}\bigl(g^{t+s} v(x;\xi,\eta)\bigr){\mathrm{d}}t =0\ \ $\ if $\ \ d(x,\gamma y)\le T_0-3r\ \ $ or $\ \ d(x,\gamma y)>T$. Denote $v= v(x;\xi,\eta)\in{{\mathcal Z}}$ and let $\tau>0$ [  ]{}$g^{\tau}v=v(\gamma y;\xi,\eta)$. Since $(\xi,\eta)\in {\mathcal L}_r(x,\gamma y)$, the triangle inequality yields $$| d(x,\gamma y)-\tau|< 2r.$$ By definition of $K_r(\gamma y)$ we have $g^{t+s}v \in K_r(\gamma y)$ if and only if $|t+s-\tau|<r/2$. Hence if $\,\tau -s -r/2\ge T_0\, $ and $\,\tau-s+r/2\le T+ 3r$, then $$\begin{aligned} \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \mathbbm{1}_{K_r(\gamma y)}\bigl(g^{t+s} v(x;\xi,\eta)\bigr){\mathrm{d}}t &=\int_{\tau -s -r/2}^{\tau-s+r/2} {\mathrm{e}}^{\delta_\Gamma t}{\mathrm{d}}t\\ &\hspace*{-2cm} \ge r\cdot {\mathrm{e}}^{\delta_\Gamma (\tau-s-r/2)}\ge r\cdot e^{-3\delta_\Gamma r}{\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)} . \end{aligned}$$ Now $d(x,\gamma y)\in (T_0+3r, T]$ and $s\in (-r/2,r/2)$ imply $$\,\tau -s -r/2\ge d(x,\gamma y)-2r -r/2- r/2\ge T_0\quad\text{and}$$ $$\tau -s +r/2\le d(x,\gamma y)+2r+r/2+r/2\le T+3r,$$ so (a) holds. In order to prove (b) we first notice that $$\begin{aligned} \int_{T_0}^{T- 3r} {\mathrm{e}}^{\delta_\Gamma t} \mathbbm{1}_{K_r(\gamma y)}\bigl(g^{t+s} v(x;\xi,\eta)\bigr){\mathrm{d}}t &\le \int_{\tau -s -r/2}^{\tau-s+r/2} {\mathrm{e}}^{\delta_\Gamma t}{\mathrm{d}}t\\ & \hspace*{-2cm} \le r\cdot {\mathrm{e}}^{\delta_\Gamma (\tau-s+r/2)} \le r\cdot e^{3\delta_\Gamma r}{\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)}; \end{aligned}$$ this proves the first assertion in (b). Now if $\, d(x,\gamma y)\le T_0-3r$, then $$\,\tau-s+r/2\le d(x,\gamma y)+2r+r\le T_0,$$ and if $\, d(x,\gamma y)\ge T$, then $$\,\tau-s-r/2\ge d(x,\gamma y)-2r-r\ge T-3r,$$ hence the integral in (b) equals zero in both cases. Moreover, from Lemma \[K+-corridor\] we immediately get the following \[K+-measure\] For all $\gamma\in{\mbox{Is}}({X})$ with $d(x,\gamma y)>3r$ and all $\,t>0 $ we have $$\begin{aligned} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr)&=\int_{{\mathcal L}_{r}(x,\gamma y)\cap (\gamma B\times A )}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta){\mathrm{e}}^{2\delta_\Gamma {{Gr}}_x(\xi,\eta)} \\ &\hspace*{3cm} \cdot \int_{-r/2}^{r/2} {\mathbbm 1}_{K_r(\gamma y)} \bigl(g^{t+s} v(x;\xi,\eta)\bigr){\mathrm{d}}s. \end{aligned}$$ Equidistribution ================ We keep the notation and the setting from the previous section and will now address the question of equidistribution of $\Gamma$-orbit points in ${X}$. In order to get a reasonable statement we will have to require that the Ricks’ Bowen-Margulis measure $m_\Gamma$ is [[**]{}finite]{}: \[equithm\] Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with non-arithmetic length spectrum, ${{\mathcal Z}}_\Gamma\ne\emptyset$ and finite Ricks’ Bowen-Margulis measure $m_\Gamma$. Let $f$ be a continuous function from ${\overline{{X}}}\times {\overline{{X}}}$ to ${\mathbb{R}}$, and $x$, $y\in{X}$. 
Then $$\lim_{T\to\infty} \delta_\Gamma {\mathrm{e}}^{-\delta_\Gamma T} \sum_{\begin{smallmatrix}{\scriptstyle\gamma\in\Gamma}\\{\scriptstyle d(x,\gamma y )\le T}\end{smallmatrix}} f(\gamma y,\gamma^{-1} x)=\frac1{\Vert m_\Gamma \Vert} \int_{{\partial{X}}\times{\partial{X}}} f(\xi,\eta){\mathrm{d}}\mu_x(\xi) {\mathrm{d}}\mu_y(\eta).$$ Our proof will closely follow Roblin’s strategy for his Théorème 4.1.1 in [@MR2057305]: Using mixing of the geodesic flow one proves that for all sufficiently small Borel sets $A,B\subset {\partial{X}}$ the limit inferior and the limit superior of the measures $$\label{measuredef} \nu_{x,y}^T:= \delta_\Gamma\cdot {\mathrm{e}}^{-\delta_\Gamma T}\sum_{\begin{smallmatrix}{\scriptstyle\gamma\in\Gamma}\\{\scriptstyle d(x,\gamma y )\le T}\end{smallmatrix}} \mathcal{D}_{\gamma y}\otimes \mathcal{D}_{\gamma^{-1} x}$$ as $T$ tends to infinity, evaluated on products of “cones” of opening $A$, $B$, are approximately $\mu_x(A)\cdot \mu_y(B)/\Vert m_\Gamma\Vert$. In the first step we only deal with sufficiently small open neighborhoods of pairs of boundary points which are in a “nice” position with respect to $x$ and $y$; then one shows that the estimates hold for all pairs of sufficiently small Borel sets $A$ and $B$. The final step consists in the full proof by globalisation with respect to $A$ and $B$. The following Proposition provides the first step in the proof of Theorem \[equithm\]: \[firststep\] Let $\varepsilon>0$, $(\xi_0,\eta_0)\in{\partial{X}}\times{\partial{X}}$ and $x$, $y\in{X}$ with trivial stabilizer in $\Gamma$ and such that $x\in (\xi_0 v^+)$, $y\in (\eta_0 w^+)$ for some $\Gamma$-recurrent elements $v$, $w\in{{\mathcal Z}}$. Then there exist open neighborhoods $V$, $W\subset{\partial{X}}$ of $\xi_0$, $\eta_0$ such that for all Borel sets $A\subset V$, $B\subset W$ $$\begin{aligned} \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr) &\le {\mathrm{e}}^\varepsilon \mu_x(A)\mu_y (B)/\Vert m_\Gamma\Vert,\\ \liminf_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\bigr) &\ge {\mathrm{e}}^{-\varepsilon} \mu_x(A)\mu_y (B)/\Vert m_\Gamma\Vert.\end{aligned}$$ Set $\rho:=\min\{ d(x,\gamma x),\ d(y,\gamma y) \colon \gamma\in \Gamma\setminus\{e\}\}$. Let $\varepsilon>0$ be arbitrary. We first fix $r\in (0,\min\{1,\rho/3, \varepsilon/(30 \delta_\Gamma)\})$ such that $$\mu_x\bigl(\widetilde\partial\mathcal{O}_r(\xi_0, x)\bigr)=0=\mu_y\bigl(\widetilde\partial\mathcal{O}_r(\eta_0, y)\bigr).$$ Since $v^+\in {L_\Gamma}\cap \mathcal{O}_r(\xi_0, x)$ and $w^+\in{L_\Gamma}\cap \mathcal{O}_r(\eta_0, y)$, both shadows $\mathcal{O}_r(\xi_0, x)$ and $\mathcal{O}_r(\eta_0, y)$ contain an open neighborhood of a limit point of $\Gamma$ by Lemma \[joinrankone\].
So from ${\mbox{supp}}(\mu_x)={\mbox{supp}}(\mu_y)={L_\Gamma}$ and the definition (\[supportdef\]) of the support of a measure we have $$\mu_x\bigl(\mathcal{O}_r(\xi_0, x)\bigr)\cdot\mu_y\bigl(\mathcal{O}_r(\eta_0, y)\bigr)>0.$$ Moreover, according to Lemma \[joinrankone\] and Corollary \[measureofshadowsisclose\] there exist open neighborhoods $\widehat V$, $\widehat W\subset{\overline{{X}}}$ of $\xi_0$, $\eta_0$ [  ]{}if $(a, b)\in \widehat V\times \widehat W$, then $a$ can be joined to $v^+$, $b$ can be joined to $w^+$ by a rank one geodesic, and $$\begin{aligned} \label{approxmeasures} {\mathrm{e}}^{-\varepsilon/30} \mu_x\bigl(\mathcal{O}_r(\xi_0, x) \bigr)&\le \mu_x\bigl(\mathcal{O}^\pm_{r}(a, x) \bigr)\le{\mathrm{e}}^{\varepsilon/30} \mu_x\bigl(\mathcal{O}_r(\xi_0, x) \bigr),\nonumber \\ {\mathrm{e}}^{-\varepsilon/30}\mu_y\bigl(\mathcal{O}_r(\eta_0, y) \bigr) & \le \mu_y\bigl(\mathcal{O}^\pm_{r}(b, y) \bigr) \le{\mathrm{e}}^{\varepsilon/30}\mu_y\bigl(\mathcal{O}_r(\eta_0, y) \bigr).\end{aligned}$$ Let $V$, $W\subset{\partial{X}}$ be open neighborhoods of $\xi_0$, $\eta_0$ [  ]{}$\overline{V} \subset \widehat V\cap{\partial{X}}$ and $\overline{W}\subset \widehat W\cap{\partial{X}}$. Let $A\subset V$, $B\subset W$ be arbitrary Borel sets. Roblin’s method consists in giving upper and lower bounds for the asymptotics of the integrals $$\int_{T_0}^{T\pm 3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t$$ as $T$ tends to infinity: On the one hand one uses mixing to relate the integrals to $\mu_x(A)\cdot \mu_y(B)$; on the other hand one computes direct estimates for the integrals to get a relation to the measures $\nu_{x,y}^T\bigl({\mathcal C}^\pm_{1}(x,A)\times {\mathcal C}^\pm_{1}(y,B)\bigr)$. Let us start by exploiting the mixing property. Notice that by choice of $r<\rho/3$ and the definition of $\rho$ we have $$K_r(x) \cap \gamma K_r(x) =\emptyset\ \text{ and}\quad K_r(y) \cap \gamma K_r(y)=\emptyset\ \text{ for all}\quad \gamma\in\Gamma\setminus\{e\},$$ hence the projection map ${{\mathcal G}}\to \quotient{\Gamma}{{{\mathcal G}}}$ restricted to $K^\pm$ is injective. 
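For the reader’s convenience we sketch the disjointness claim behind this injectivity. Assuming, as suggested by (\[orthogonalproj\]), that the origin of $v(x;\xi,\eta)$ is the point of the geodesic $(\xi\eta)$ closest to $x$, and that all geodesics are parametrized by arc length, every $v=g^s v(x;\xi,\eta)\in K_r(x)$ satisfies $$d\bigl(x,v(0)\bigr)=d\bigl(x,v(x;\xi,\eta)(s)\bigr)\le d\bigl(x,v(x;\xi,\eta)(0)\bigr)+|s|<r+\frac{r}{2}=\frac{3r}{2}.$$ So if $v\in K_r(x)\cap\gamma K_r(x)$ for some $\gamma\in\Gamma\setminus\{e\}$, then applying the same estimate to $\gamma^{-1}v\in K_r(x)$ gives $d(\gamma x,v(0))<\frac{3r}{2}$, hence $d(x,\gamma x)<3r<\rho$, which contradicts the definition of $\rho$; the same argument applies to $K_r(y)$.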
So we can apply Corollary \[mixcor\] to get $$\lim_{t\to \infty} \sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-)= \frac{m(K^+)\cdot m(K^-)}{\Vert m_\Gamma\Vert }.$$Hence there exists $T_0> 6r$ [  ]{}for $t\ge T_0 $ we have $$\begin{aligned} \label{summixing} {\mathrm{e}}^{-\varepsilon/3} m(K^+)\cdot m(K^-) &\le & \Vert m_\Gamma\Vert\cdot \sum_{\gamma\in\Gamma} m(K^+\cap g^{-t} \gamma K^-)\nonumber\\ &\le & {\mathrm{e}}^{\varepsilon/3} m(K^+)\cdot m(K^-).\end{aligned}$$ Combining (\[measureK+\]) and the estimates (\[approxmeasures\]) we obtain from $A\subset \widehat V$ and $B\subset \widehat W$ $$\begin{aligned} r {\mathrm{e}}^{-\varepsilon/30} \mu_x\bigl({\mathcal O}_r(\xi_0,x)\bigr)\mu_x(A) \le m(K^+)&\le r{\mathrm{e}}^{2\delta_\Gamma r}{\mathrm{e}}^{\varepsilon/30} \mu_x\bigl({\mathcal O}_r(\xi_0,x)\bigr)\mu_x(A),\\ r {\mathrm{e}}^{-\varepsilon/30} \mu_y\bigl({\mathcal O}_r(\eta_0,y)\bigr)\mu_y(B) \le m(K^-)&\le r{\mathrm{e}}^{2\delta_\Gamma r}{\mathrm{e}}^{\varepsilon/30} \mu_y\bigl({\mathcal O}_r(\eta_0,y)\bigr)\mu_y(B);\end{aligned}$$ using the abbreviation $M=r^2 \mu_x\bigl({\mathcal O}_r(\xi_0,x)\bigr) \mu_y\bigl({\mathcal O}_r(\eta_0,y)\bigr)>0$ and $\delta_\Gamma r\le \varepsilon/30$ we get $$\label{measureproductKpKm} {\mathrm{e}}^{-\varepsilon/15}M\mu_x(A)\mu_y(B)\le m(K^+)m(K^-)\le {\mathrm{e}}^{\varepsilon/5}M\mu_x(A)\mu_y(B).$$ Hence according to (\[summixing\]) we have for $t\ge T_0$ $$\begin{aligned} M \mu_x(A)\mu_y(B)&\le {\mathrm{e}}^{\varepsilon/15}{\mathrm{e}}^{\varepsilon/3} \Vert m_\Gamma\Vert\cdot \sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-),\\ M \mu_x(A)\mu_y(B)&\ge {\mathrm{e}}^{-\varepsilon/5}{\mathrm{e}}^{-\varepsilon/3} \Vert m_\Gamma\Vert\cdot \sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-).\end{aligned}$$ We now integrate the inequalities to get $$\begin{aligned} \nonumber \bigl({\mathrm{e}}^{\delta_\Gamma (T-3r)}-{\mathrm{e}}^{\delta_\Gamma T_0}\bigr) &M \mu_x(A)\mu_y(B) = \delta_\Gamma \int_{T_0}^{T-3r} {\mathrm{e}}^{\delta_\Gamma t} M \mu_x(A)\mu_y(B){\mathrm{d}}t\\ \label{Tminus3r} &\le e^{2\varepsilon/5} \Vert m_\Gamma\Vert\cdot \delta_\Gamma \int_{T_0}^{T-3r} {\mathrm{e}}^{\delta_\Gamma t}\sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-),\\ \nonumber \bigl({\mathrm{e}}^{\delta_\Gamma (T+3r)}-{\mathrm{e}}^{\delta_\Gamma T_0}\bigr) & M \mu_x(A)\mu_y(B) = \delta_\Gamma \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} M \mu_x(A)\mu_y(B){\mathrm{d}}t \\ \label{Tplus3r} &\ge e^{-8\varepsilon/15} \Vert m_\Gamma\Vert\cdot \delta_\Gamma \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t}\sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-).\end{aligned}$$ We will next give upper and lower bounds for the integrals on the right-hand side: For the upper bound we first remark that $(\xi,\eta)\in {\mathcal L}_{r}(x,\gamma y)\cap{\partial_{\infty}}{{\mathcal Z}}$ implies ${{Gr}}_x(\xi,\eta)<r$. Moreover, our choice of $T_0 > 6r$ guarantees that $K^+ \cap g^{-t} \gamma K^-\ne\emptyset$ for some $t \geq T_0$ implies $d(x, \gamma y) > 3r$. 
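Let us briefly indicate the latter implication (a sketch, using only the triangle inequality and the arc length parametrization of geodesics): if $v\in K^+\cap g^{-t}\gamma K^-$ with $t\ge T_0$, then $d(x,v(0))<\frac{3r}{2}$ as in the preceding remark, and applying the same estimate to $\gamma^{-1}g^t v\in K_r(y)$ gives $d(\gamma y, v(t))<\frac{3r}{2}$. Hence $$d(x,\gamma y)\ge d\bigl(v(0),v(t)\bigr)-d\bigl(x,v(0)\bigr)-d\bigl(\gamma y,v(t)\bigr)>t-3r\ge T_0-3r>3r.$$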
Applying Corollary \[K+-measure\] we therefore get $$\begin{aligned} & \hspace*{-0.7cm} \int_{T_0}^{T- 3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\\ &\le \sum_{\gamma\in\Gamma} \int_{{\mathcal L}_{r}(x,\gamma y)\cap (\gamma B\times A)}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta){\mathrm{e}}^{2\delta_\Gamma r}\\ &\hspace*{3cm} \cdot \int_{-r/2}^{r/2} \bigl(\int_{T_0}^{T-3r} \mathbbm{1}_{K_r(\gamma y)} \bigl(g^{t+s} v(x;\xi,\eta)\bigr) {\mathrm{e}}^{\delta_\Gamma t} {\mathrm{d}}t\bigr) {\mathrm{d}}s \\ &\le {\mathrm{e}}^{2\delta_\Gamma r}\cdot r^2\cdot {\mathrm{e}}^{3\delta_\Gamma r} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle d(x,\gamma y)\le T}\end{smallmatrix}}\int_{{\mathcal L}_{r}(x,\gamma y)\cap (\gamma B\times A)}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta)\cdot {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)};\end{aligned}$$ here we used Lemma \[flowintegrals\] (b) in the last step. Lemma \[largecones\], $r\le 1$ and the first estimate in (\[coneesti\]) further imply $$\begin{aligned} & \hspace*{-0.1cm} \int_{T_0}^{T- 3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\\ &\le r^2 {\mathrm{e}}^{5\delta_\Gamma r}\hspace*{-4mm} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle d(x,\gamma y)\le T}\\ {\scriptstyle (\gamma y,\gamma^{-1}x)\in {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)}\end{smallmatrix}} \hspace*{-4mm} \int_{{\mathcal O}^+_{r}(\gamma y,x)} {\mathrm{d}}\mu_x(\xi) \int_{{\mathcal O}^+_{r}(x,\gamma y)}{\mathrm{d}}\mu_x(\eta) {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)}.\end{aligned}$$ Using the fact that for all $\eta\in{\mathcal O}^+_{r}(x,\gamma y)$ we have $\ {{\mathcal B}}_\eta(x, \gamma y)\ge d(x,\gamma y)-4r$,$\Gamma$-equivariance and conformality (\[conformality\]) of $\mu$ imply $$\int_{{\mathcal O}^+_{r}(x,\gamma y)}{\mathrm{d}}\mu_x(\eta) {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)}\le {\mathrm{e}}^{4\delta_\Gamma r} \mu_y\bigl({\mathcal O}^+_{r}(\gamma^{-1}x, y)\bigr).$$ Moreover, since by Lemma \[orbitpointsincones\] (a) there are only finitely many $\gamma\in\Gamma$ [  ]{} $$(\gamma y,\gamma^{-1}x)\in \left( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\right) \setminus (\widehat V\times \widehat W),$$ restricting the summation to $\gamma\in\Gamma$ with $$(\gamma y,\gamma^{-1}x)\in \left( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\right) \cap (\widehat V\times \widehat W)$$ only contributes a constant $C$ to the upper bound. 
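The exponents in the next display arise from the following routine bookkeeping (using the standing choice $\delta_\Gamma r\le \varepsilon/30$): the prefactor ${\mathrm{e}}^{2\delta_\Gamma r}\cdot{\mathrm{e}}^{3\delta_\Gamma r}$ collected above combines with the factor ${\mathrm{e}}^{4\delta_\Gamma r}$ from the conformality estimate to give $${\mathrm{e}}^{2\delta_\Gamma r}\cdot{\mathrm{e}}^{3\delta_\Gamma r}\cdot{\mathrm{e}}^{4\delta_\Gamma r}={\mathrm{e}}^{9\delta_\Gamma r}\le {\mathrm{e}}^{9\varepsilon/30},$$ while the two applications of (\[approxmeasures\]) contribute at most a factor ${\mathrm{e}}^{\varepsilon/30}$ each, which accounts for ${\mathrm{e}}^{11\varepsilon/30}={\mathrm{e}}^{9\varepsilon/30}\cdot{\mathrm{e}}^{2\varepsilon/30}$.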
So with our choice of $r\le 1$ and $r\le \varepsilon/(30 \delta_\Gamma)$ we conclude $$\begin{aligned} & \hspace*{-7mm} \int_{T_0}^{T- 3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\\ &\le r^2 {\mathrm{e}}^{9\varepsilon/30} \hspace*{-14mm} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle d(x,\gamma y)\le T}\\ {\scriptstyle (\gamma y,\gamma^{-1}x)\in ( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B))\cap (\widehat V\times \widehat W)}\end{smallmatrix}} \hspace*{-14mm} \mu_x\bigl({\mathcal O}^+_{r}(\gamma y,x)\bigr) \mu_y\bigl({\mathcal O}^+_{r}(\gamma^{-1}x, y)\bigr) +C\\ &\stackrel{(\ref{approxmeasures})}{\le} r^2 {\mathrm{e}}^{11\varepsilon/30}\hspace*{-14mm} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle d(x,\gamma y)\le T}\\ {\scriptstyle (\gamma y,\gamma^{-1}x)\in ( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B))\cap (\widehat V\times \widehat W)}\end{smallmatrix}} \hspace*{-14mm} \mu_x\bigl({\mathcal O}_{r}(\xi_0,x)\bigr) \mu_y\bigl({\mathcal O}_{r}(\eta_0, y)\bigr) +C,\\ &\le {\mathrm{e}}^{11\varepsilon/30} M \frac{{\mathrm{e}}^{\delta_\Gamma T}}{\delta_\Gamma}\nu_{x,y}^T\bigl( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\bigr)+ C.\end{aligned}$$ Plugging this in the inequality (\[Tminus3r\]) divided by $M {\mathrm{e}}^{\delta_\Gamma (T-3r)}\cdot\Vert m_\Gamma\Vert$ we get (with a constant $C'$ independent of $T$) $$\begin{aligned} \frac{1- {\mathrm{e}}^{\delta_\Gamma (-T+3r+T_0)}}{\Vert m_\Gamma\Vert} \mu_x(A)\mu_y(B) & \le {\mathrm{e}}^{2\varepsilon/5} {\mathrm{e}}^{11\varepsilon/30}{\mathrm{e}}^{3\delta_\Gamma r}\nu_{x,y}^T\bigl( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\bigr)\\ +\ C'{\mathrm{e}}^{-\delta_\Gamma T} &\le {\mathrm{e}}^{13\varepsilon/15} \nu_{x,y}^T\bigl( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\bigr)+ C'{\mathrm{e}}^{-\delta_\Gamma T} ,\end{aligned}$$ which proves $$\liminf_{T\to\infty} \nu_{x,y}^T\bigl( {\mathcal C}^+_{1}(x,A)\times {\mathcal C}^+_{1}(y,B)\bigr)\ge {\mathrm{e}}^{-\varepsilon} \mu_x(A)\mu_y(B)/\Vert m_\Gamma\Vert.$$ We finally turn to the lower bound. Using again Corollary \[K+-measure\] and the non-negativity of the Gromov product (\[GromovProd\]) we estimate $$\begin{aligned} & \hspace*{-0.7cm} \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\\ &\ge \sum_{\gamma\in\Gamma} \int_{{\mathcal L}_{r}(x,\gamma y)\cap (\gamma B\times A)}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta){\mathrm{e}}^{2\delta_\Gamma\cdot 0}\\ &\hspace*{3cm} \cdot \int_{-r/2}^{r/2} \Bigl(\int_{T_0}^{T+3r} \mathbbm{1}_{K_r(\gamma y)} \bigl(g^{t+s} v(x;\xi,\eta)\bigr) {\mathrm{e}}^{\delta_\Gamma t} {\mathrm{d}}t\Bigr) {\mathrm{d}}s \\ &\ge r^2{\mathrm{e}}^{-3\delta_\Gamma r} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle T_0+3r< d(x,\gamma y)\le T}\end{smallmatrix}}\int_{{\mathcal L}_{r}(x,\gamma y)\cap (\gamma B\times A)}{\mathrm{d}}\mu_x(\xi){\mathrm{d}}\mu_x(\eta)\cdot {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)},\end{aligned}$$ where we used Lemma \[flowintegrals\] (a) in the last step. 
By Lemma \[smallcones\], $r\le 1$ and the second estimate in (\[coneesti\]) we have for all $\gamma\in\Gamma$ with $(\gamma y, \gamma^{-1}x) \in {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\subset {\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)$ $$\begin{aligned} {\mathcal L}_{r}(x,\gamma y) \cap (\gamma B\times A)\supset \{(\zeta,\xi)\in{\partial{X}}\times{\partial{X}}\colon \xi\in {\mathcal O}^-_{r}(x,\gamma y),\ \zeta\in {\mathcal O}_{r}(\xi,x)\},\end{aligned}$$ hence $$\begin{aligned} & \hspace*{-0.7cm} \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\\ &\ge r^2 \cdot {\mathrm{e}}^{-\varepsilon/10} \hspace*{-14mm} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle T_0+3r<d(x,\gamma y)\le T}\\ {\scriptstyle(\gamma y, \gamma^{-1}x) \in ({\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B))\cap (\widehat V\times \widehat W)}\end{smallmatrix}} \hspace*{-14mm} \int_{{\mathcal O}^-_{r}(x, \gamma y)}{\mathrm{d}}\mu_x(\xi) {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)}\cdot \mu_x\bigl({\mathcal O}_{r}(\xi,x)\bigr).\end{aligned}$$ Notice that $\gamma y\in {\mathcal C}^-_{1}(x,A)\subset {\mathcal C}^-_{r}(x,A)$ implies ${\mathcal O}^-_{r}(x, \gamma y)\subset {\mathcal O}^+_{r}(x, \gamma y)\subset A\subset\widehat V$ by definition of the small cones. Hence (\[approxmeasures\]) shows that for all $\xi\in {\mathcal O}^-_{r}(x, \gamma y)$ we have $$\begin{aligned} \mu_x\bigl({\mathcal O}_{r}(\xi,x)\bigr) &\ge {\mathrm{e}}^{-\varepsilon/30}\mu_x\bigl({\mathcal O}_{r}(\xi_0,x)\bigr).\end{aligned}$$ By $\Gamma$-equivariance and conformality of $\mu$ we further have $$\int_{{\mathcal O}^-_{r}(x, \gamma y)}{\mathrm{d}}\mu_x(\xi) {\mathrm{e}}^{\delta_\Gamma d(x,\gamma y)} \ge \mu_y\bigl( {\mathcal O}^-_{r}(\gamma^{-1} x, y)\bigr)\ge {\mathrm{e}}^{-\varepsilon/30}\mu_y\bigl( {\mathcal O}_{r}(\eta_0, y)\bigr),$$ where the last inequality follows from $\gamma^{-1}x\in \widehat W$ and (\[approxmeasures\]). 
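Collecting the constants (again with $\delta_\Gamma r\le\varepsilon/30$, so that ${\mathrm{e}}^{-3\delta_\Gamma r}\ge{\mathrm{e}}^{-\varepsilon/10}$), the factor ${\mathrm{e}}^{-\varepsilon/10}$ above combines with the two factors ${\mathrm{e}}^{-\varepsilon/30}$ coming from (\[approxmeasures\]) to give $${\mathrm{e}}^{-\varepsilon/10}\cdot{\mathrm{e}}^{-\varepsilon/30}\cdot{\mathrm{e}}^{-\varepsilon/30}={\mathrm{e}}^{-5\varepsilon/30}={\mathrm{e}}^{-\varepsilon/6},$$ the constant appearing in the next display.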
Altogether this proves $$\begin{aligned} & \hspace*{-0.7cm} \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\nonumber\\ &\ge r^2\cdot {\mathrm{e}}^{-\varepsilon/6} \hspace*{-10mm} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle T_0+3r<d(x,\gamma y)\le T}\\ {\scriptstyle(\gamma y, \gamma^{-1}x) \in {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)}\cap(\widehat V\times \widehat W)\end{smallmatrix}}\mu_x\bigl({\mathcal O}_{r}(\xi_0,x)\bigr) \mu_y\bigl( {\mathcal O}_{r}(\eta_0, y)\bigr).\end{aligned}$$ Since the number of elements $\gamma\in\Gamma$ with $d(x,\gamma y)\le T_0+3r$ or with $$(\gamma y,\gamma^{-1}x)\in \left( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\right) \setminus (\widehat V\times \widehat W)$$ is finite thanks to Lemma \[orbitpointsincones\] (a), there exists a constant $C>0$ [  ]{} $$\begin{aligned} \label{lowerboundused} & \hspace*{-0.7cm} \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\nonumber\\ &\ge r^2 \cdot {\mathrm{e}}^{-\varepsilon/6} \hspace*{-10mm} \sum_{\begin{smallmatrix} {\scriptstyle \gamma\in\Gamma}\\ {\scriptstyle d(x,\gamma y)\le T}\\ {\scriptstyle(\gamma y, \gamma^{-1}x) \in {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)}\end{smallmatrix}}\mu_x\bigl({\mathcal O}_{r}(\xi_0,x)\bigr) \mu_y\bigl( {\mathcal O}_{r}(\eta_0, y)\bigr) -C\\ &\ge {\mathrm{e}}^{-\varepsilon/6} M \frac{{\mathrm{e}}^{\delta_\Gamma T}}{ \delta_\Gamma}\nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)- C.\nonumber\end{aligned}$$ Plugging this in the inequality (\[Tplus3r\]) divided by $M {\mathrm{e}}^{\delta_\Gamma (T+3r)}\cdot \Vert m_\Gamma\Vert$ we get (with a constant $C'$ independent of $T$) $$\begin{aligned} \frac{1- {\mathrm{e}}^{\delta_\Gamma (-T-3r+T_0)}}{\Vert m_\Gamma\Vert} \mu_x(A)\mu_y(B) & \ge {\mathrm{e}}^{-8\varepsilon/15} {\mathrm{e}}^{-\varepsilon/6}{\mathrm{e}}^{-3\delta_\Gamma r}\nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)\\ -\ C'{\mathrm{e}}^{-\delta_\Gamma T} &= {\mathrm{e}}^{-12\varepsilon/15} \nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)+ C'{\mathrm{e}}^{-\delta_\Gamma T} ,\end{aligned}$$ which proves $$\limsup_{T\to\infty} \nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)\le {\mathrm{e}}^{\varepsilon} \mu_x(A)\mu_y(B)/\Vert m_\Gamma\Vert.$$ The next Proposition is the second step in the proof of Theorem \[equithm\]: \[secondstep\] Let $\varepsilon>0$ and $x$, $y\in{X}$ arbitrary. Then for all $(\xi_0,\eta_0)\in{\partial{X}}\times{\partial{X}}$ there exist $r>0$ and open neighborhoods $V\subset{\partial{X}}$ of $\xi_0$, $W\subset{\partial{X}}$ of $\eta_0$ [  ]{}for all Borel sets $A\subset V$, $B\subset W$ $$\begin{aligned} \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr) &\le {\mathrm{e}}^\varepsilon \mu_x(A)\mu_y (B)/\Vert m_\Gamma\Vert,\\ \liminf_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^+_{r}(x,A)\times {\mathcal C}^+_{r}(y,B)\bigr) &\ge {\mathrm{e}}^{-\varepsilon} \mu_x(A)\mu_y (B)/\Vert m_\Gamma\Vert.\end{aligned}$$ Let $(\xi_0,\eta_0)\in{\partial{X}}\times{\partial{X}}$ be arbitrary. Choose $\Gamma$-recurrent geodesics $v$, $w\in{{\mathcal Z}}$ and $x_0\in (\xi_0 v^+)$, $y_0\in (\eta_0 w^+)$ with trivial stabilizers in $\Gamma$. 
Let $V_0$, $W_0\subset{\partial{X}}$ be open neighborhoods of $\xi_0$ and $\eta_0$ [  ]{}the statement of Proposition \[firststep\] is true for $x_0$, $y_0$ instead of $x$, $y$, $V_0$, $W_0$ instead of $V$, $W$ and $\varepsilon/3$ instead of $\varepsilon$. Choose open neighborhoods $\widehat V_0$, $\widehat W_0$ of $\xi_0$, $\eta_0$ [  ]{}$\widehat V_0\cap{\partial{X}}\subset V_0$, $\widehat W_0\cap{\partial{X}}\subset W_0$ and $$\label{estimatebusinnbhd} |d(x_0,a)-d(x,a)-{{\mathcal B}}_{\xi_0}(x_0,x)|<\frac{\varepsilon}{6\delta_\Gamma},\quad |d(y_0,b)-d(y,b)-{{\mathcal B}}_{\eta_0}(y_0,y)|<\frac{\varepsilon}{6\delta_\Gamma}$$ for all $(a,b)\in \widehat V_0\times \widehat W_0$. Notice that if $a=\xi \in{\partial{X}}$ we use the convention that $d(x_0,a)-d(x,a)={{\mathcal B}}_{a}(x_0,x)$ and similar for $b=\eta\in{\partial{X}}$. Now let $V$, $W\subset{\partial{X}}$ be neighborhoods of $\xi_0$, $\eta_0$ [  ]{}for the closures we have $\overline V\subset \widehat V_0\cap {\partial{X}}$ and $\overline W\subset \widehat W_0\cap {\partial{X}}$. We further set $$r=1+\max\{ d(x,x_0), d(y,y_0)\},$$ and let $A\subset V$, $B\subset W$ be arbitrary Borel sets. From the choice of $r$ above and Lemma \[changepoint\] we immediately deduce that $(\gamma y,\gamma^{-1}x)\in {\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)$ implies $$(\gamma y_0,\gamma^{-1}x_0)\in {\mathcal C}^-_{1}(x_0,A)\times {\mathcal C}^-_{1}(y_0,B),$$ and that $(\gamma y_0,\gamma^{-1}x_0)\in {\mathcal C}^+_{1}(x_0,A)\times {\mathcal C}^+_{1}(y_0,B)$ implies $$(\gamma y,\gamma^{-1}x)\in {\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{1}(y,B).$$ Let $T\gg 1$ and set $$\widehat V_{-r}:=\{z\in X\colon \overline{B_r(z)}\subset \widehat V_0\}\cup \bigl(\widehat V_0 \cap {\partial{X}}\bigr)\subset \widehat V_0.$$ If $d(x,\gamma y)\le T$ and $(\gamma y,\gamma^{-1}x)\in \widehat V_{-r}\times \widehat W_{0}$, then $(\gamma y_0,\gamma^{-1}x)\in \widehat V_{0}\times \widehat W_{0}$ and hence $$\begin{aligned} d(x_0,\gamma y_0)&\le d(x,\gamma y_0)+{{\mathcal B}}_{\xi_0}(x_0,x)+\frac{\varepsilon}{6\delta_\Gamma}=d(y_0,\gamma^{-1}x)+{{\mathcal B}}_{\xi_0}(x_0,x)+\frac{\varepsilon}{6\delta_\Gamma}\\ &\le d(y,\gamma^{-1}x)+{{\mathcal B}}_{\eta_0}(y_0,y)+ {{\mathcal B}}_{\xi_0}(x_0,x)+\frac{\varepsilon}{3\delta_\Gamma}\\ &\le T+ {{\mathcal B}}_{\eta_0}(y_0,y)+ {{\mathcal B}}_{\xi_0}(x_0,x)+\frac{\varepsilon}{3\delta_\Gamma},\end{aligned}$$ which shows that $$\begin{aligned} {\mathrm{e}}^{-\delta_\Gamma T}& \#\{\gamma\in\Gamma\colon d(x,\gamma y)\le T,\ (\gamma y,\gamma^{-1} x) \in \bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr)\cap ( \widehat V_{-r}\times \widehat W_0)\}\\ & \le {\mathrm{e}}^{\varepsilon/3}\cdot {\mathrm{e}}^{\delta_\Gamma\left( {{\mathcal B}}_{\eta_0}(y_0,y)+ {{\mathcal B}}_{\xi_0}(x_0,x)\right)}\cdot {\mathrm{e}}^{-\delta_\Gamma(T+ {{\mathcal B}}_{\eta_0}(y_0,y)+ {{\mathcal B}}_{\xi_0}(x_0,x)+\varepsilon/3\delta_\Gamma)}\cdot\\ &\hspace*{15mm}\#\{\gamma\in\Gamma\colon d(x_0,\gamma y_0)\le T+ {{\mathcal B}}_{\eta_0}(y_0,y)+ {{\mathcal B}}_{\xi_0}(x_0,x)+\varepsilon/3\delta_\Gamma,\\ &\hspace*{29mm} \ (\gamma y_0,\gamma^{-1} x_0) \in \bigl({\mathcal C}^-_{1}(x_0,A)\times {\mathcal C}^-_{1}(y_0,B)\bigr)\cap ( \widehat V_{0}\times \widehat W_0)\}.\end{aligned}$$ Similarly, if $\ \displaystyle d(x_0,\gamma y_0)\le T-{{\mathcal B}}_{\eta_0}(y,y_0)-{{\mathcal B}}_{\xi_0}(x,x_0)-\frac{\varepsilon}{3\delta_\Gamma}\ $ and\ $(\gamma y_0,\gamma^{-1}x_0)\in \widehat V_{-r}\times \widehat W_{0}$, then $(\gamma 
y,\gamma^{-1}x_0)\in \widehat V_{0}\times \widehat W_{0}$, then $$\begin{aligned} d(x,\gamma y)&\le d(x_0,\gamma y)+{{\mathcal B}}_{\xi_0}(x,x_0)+\frac{\varepsilon}{6\delta_\Gamma}\\&\le d(y_0,\gamma^{-1}x_0)+{{\mathcal B}}_{\eta_0}(y,y_0)+ {{\mathcal B}}_{\xi_0}(x,x_0)+\frac{\varepsilon}{3\delta_\Gamma}\le T\end{aligned}$$ and finally $$\begin{aligned} {\mathrm{e}}^{-\delta_\Gamma T}& \#\{\gamma\in\Gamma\colon d(x,\gamma y)\le T,\ (\gamma y,\gamma^{-1} x) \in {\mathcal C}^+_{r}(x,A)\times {\mathcal C}^+_{r}(y,B)\}\\ & \ge {\mathrm{e}}^{-\varepsilon/3}\cdot {\mathrm{e}}^{-\delta_\Gamma\left( {{\mathcal B}}_{\eta_0}(y,y_0)+ {{\mathcal B}}_{\xi_0}(x,x_0)\right)}\cdot {\mathrm{e}}^{-\delta_\Gamma(T- {{\mathcal B}}_{\eta_0}(y,y_0)- {{\mathcal B}}_{\xi_0}(x,x_0)-\varepsilon/3\delta_\Gamma)}\cdot\\ &\hspace*{15mm}\#\{\gamma\in\Gamma\colon d(x_0,\gamma y_0)\le T- {{\mathcal B}}_{\eta_0}(y,y_0)+ {{\mathcal B}}_{\xi_0}(x,x_0)-\varepsilon/3\delta_\Gamma,\\ &\hspace*{29mm} \ (\gamma y_0,\gamma^{-1} x_0) \in \bigl({\mathcal C}^+_{1}(x_0,A)\times {\mathcal C}^+_{1}(y_0,B)\bigr)\cap ( \widehat V_{-r}\times \widehat W_0)\}.\end{aligned}$$ From $\overline{A}\subset\overline{V}\subset \widehat V_{0}\cap{\partial{X}}=\widehat V_{-r}\cap{\partial{X}}$ and Lemma \[orbitpointsincones\] (a) we know that the number of elements $\gamma\in\Gamma$ with $(\gamma y,\gamma^{-1}x)\in \bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr)\setminus (\widehat V_{-r}\times \widehat W_{0})$ or with $(\gamma y_0,\gamma^{-1}x_0)\in \bigl({\mathcal C}^{\pm}_{1}(x_0,A)\times {\mathcal C}^{\pm}_{1}(y_0,B)\bigr)\setminus (\widehat V_{-r}\times \widehat W_{0})$ is finite; hence by Proposition \[firststep\] $$\begin{aligned} & \hspace*{-1cm} \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr)\\ &\le {\mathrm{e}}^{\varepsilon/3}{\mathrm{e}}^{\delta_\Gamma \left({{\mathcal B}}_{\xi_0}(x_0,x)+{{\mathcal B}}_{\eta_0}(y_0,y)\right)} \\ &\qquad \cdot \limsup_{T\to\infty} \nu_{x_0,y_0}^{T+{{\mathcal B}}_{\xi_0}(x_0,x)+{{\mathcal B}}_{\eta_0}(y_0,y)+\varepsilon/3\delta_\Gamma}\bigl({\mathcal C}^-_{1}(x_0,A)\times {\mathcal C}^-_{1}(y_0,B)\bigr)\\ &\le {\mathrm{e}}^{2\varepsilon/3}{\mathrm{e}}^{\delta_\Gamma \left({{\mathcal B}}_{\xi_0}(x_0,x)+{{\mathcal B}}_{\eta_0}(y_0,y)\right)} \mu_{x_0}(A)\mu_{y_0}(B)/\Vert m_\Gamma\Vert,\end{aligned}$$ $$\begin{aligned} & \hspace*{-1cm} \liminf_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^+_{r}(x,A)\times {\mathcal C}^+_{r}(y,B)\bigr)\\ &\ge {\mathrm{e}}^{-\varepsilon/3}{\mathrm{e}}^{-\delta_\Gamma \left({{\mathcal B}}_{\xi_0}(x,x_0)+{{\mathcal B}}_{\eta_0}(y,y_0)\right)} \\ &\qquad \cdot \liminf_{T\to\infty} \nu_{x_0,y_0}^{T-{{\mathcal B}}_{\xi_0}(x,x_0)-{{\mathcal B}}_{\eta_0}(y,y_0)-\varepsilon/3\delta_\Gamma}\bigl({\mathcal C}^+_{1}(x_0,A)\times {\mathcal C}^+_{1}(y_0,B)\bigr)\\ &\ge {\mathrm{e}}^{-2\varepsilon/3}{\mathrm{e}}^{\delta_\Gamma \left({{\mathcal B}}_{\xi_0}(x_0,x)+{{\mathcal B}}_{\eta_0}(y_0,y)\right)} \mu_{x_0}(A)\mu_{y_0}(B)/\Vert m_\Gamma\Vert.\end{aligned}$$ Now for $\xi\in A\subset \widehat V_0\cap{\partial{X}}$ and $\eta \in B\subset \widehat W_0\cap {\partial{X}}$ we get from (\[estimatebusinnbhd\]) $$\begin{aligned} {{\mathcal B}}_{\xi}(x_0,x) -\frac{\varepsilon}{6\delta_\Gamma} &< {{\mathcal B}}_{\xi_0}(x_0,x) < {{\mathcal B}}_{\xi}(x_0,x) +\frac{\varepsilon}{6\delta_\Gamma},\\ {{\mathcal B}}_{\eta}(y_0,y) -\frac{\varepsilon}{6\delta_\Gamma} &< {{\mathcal B}}_{\eta_0}(y_0,y) < {{\mathcal B}}_{\eta}(y_0,y) 
+\frac{\varepsilon}{6\delta_\Gamma},\end{aligned}$$ hence $$\begin{aligned} {\mathrm{e}}^{\delta_\Gamma {{\mathcal B}}_{\xi_0}(x_0,x)}\mu_{x_0}(A) &=\int_A {\mathrm{e}}^{\delta_\Gamma {{\mathcal B}}_{\xi_0}(x_0,x)}{\mathrm{d}}\mu_{x_0}(\xi)\\ &< {\mathrm{e}}^{\varepsilon/6} \int_A {\mathrm{e}}^{\delta_\Gamma {{\mathcal B}}_{\xi}(x_0,x)}\frac{{\mathrm{d}}\mu_{x_0}}{{\mathrm{d}}\mu_x}(\xi){\mathrm{d}}\mu_x(\xi)\stackrel{(\ref{conformality})}{=} {\mathrm{e}}^{\varepsilon/6}\mu_x(A),\\ {\mathrm{e}}^{\delta_\Gamma {{\mathcal B}}_{\xi_0}(x_0,x)}\mu_{x_0}(A) &>{\mathrm{e}}^{-\varepsilon/6}\mu_x(A),\end{aligned}$$ and similarly $${\mathrm{e}}^{-\varepsilon/6}\mu_y(B) <{\mathrm{e}}^{\delta_\Gamma {{\mathcal B}}_{\eta_0}(y_0,y)}\mu_{y_0}(B) < {\mathrm{e}}^{\varepsilon/6}\mu_y(B).$$ This finally proves $$\begin{aligned} &\limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr) \le {\mathrm{e}}^{\varepsilon}\mu_x(A)\mu_y(B)/\Vert m_\Gamma\Vert\quad\text{ and} \\ & \liminf_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^+_{r}(x,A)\times {\mathcal C}^+_{r}(y,B)\bigr) \ge {\mathrm{e}}^{-\varepsilon}\mu_x(A)\mu_y(B)/\Vert m_\Gamma\Vert.\end{aligned}$$ [*Proof of Theorem \[equithm\].*]{}  Let $x$, $y\in{X}$ and $\varepsilon>0$ arbitrary. For $(\xi_0,\eta_0)\in{\partial{X}}\times {\partial{X}}$ we fix $r>0$ and open neighborhoods $V$, $W\subset{\partial{X}}$ of $\xi_0$, $\eta_0$ [  ]{}the conclusion of Proposition \[secondstep\] holds. Choose open sets $\widehat V$, $\widehat W\subset{\overline{{X}}}$ with $\widehat V\cap{\partial{X}}=V$ and $\widehat W\cap{\partial{X}}=W$, and let $\widehat A$, $\widehat B\subset{\overline{{X}}}$ be Borel sets with $ \overline{\widehat A}\subset\widehat V$ and $$\label{zeromeasureboundary} (\mu_x\otimes \mu_y)\bigl(\partial(\widehat A\times \widehat B)\bigr)=0.$$ Let $\alpha>0$ be arbitrary, and choose open sets $A^+, B^+\subset {\partial{X}}$ and compact sets $A^-, B^-\subset {\partial{X}}$ with the properties $$\begin{aligned} A^-&\subset \widehat A^\circ\cap{\partial{X}}\subset \overline{\widehat A}\cap{\partial{X}}\subset A^+\subset V,\\ B^-&\subset \widehat B^\circ\cap{\partial{X}}\subset \overline{\widehat B}\cap{\partial{X}}\subset B^+\subset W,\\ &\mu_x(\widehat A^\circ\setminus A^-)<\alpha,\quad \mu_x(A^+\setminus \overline{\widehat A})<\alpha,\\ &\mu_y(\widehat B^\circ\setminus B^-)<\alpha,\quad \mu_y(B^+\setminus \overline{\widehat B})<\alpha .\end{aligned}$$ Notice that according to Lemma \[orbitpointsincones\] (b) the number of $\gamma\in\Gamma$ with $$(\gamma y,\gamma^{-1}x)\in (\overline{\widehat A}\times\overline{\widehat B})\setminus \left( {\mathcal C}^-_{r}(x,A^+)\times {\mathcal C}^-_{r}(y,B^+)\right)$$ is finite; the same is true for the number of $\gamma\in\Gamma$ with $$(\gamma y,\gamma^{-1}x)\in \left( {\mathcal C}^+_{r}(x,A^-)\times {\mathcal C}^+_{r}(y,B^-)\right) \setminus (\widehat A^\circ\times \widehat B^\circ)$$ by Lemma \[orbitpointsincones\] (a). 
Hence $$\begin{aligned} \Vert m_\Gamma\Vert \cdot \limsup_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\le \Vert m_\Gamma\Vert \cdot \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A^+)\times {\mathcal C}^-_{r}(y,B^+)\bigr),\\ \Vert m_\Gamma\Vert \cdot \liminf_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\ge \Vert m_\Gamma\Vert \cdot \liminf_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^+_{r}(x,A^-)\times {\mathcal C}^+_{r}(y,B^-)\bigr).\end{aligned}$$ Proposition \[secondstep\] further implies $$\begin{aligned} \Vert m_\Gamma\Vert \cdot \limsup_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\le {\mathrm{e}}^{\varepsilon} \mu_{x}(A^+)\mu_{y}(B^+)\\ &\le {\mathrm{e}}^{\varepsilon} \mu_{x}(\overline{\widehat A})\mu_{y}(\overline{\widehat B})+\alpha {\mathrm{e}}^\varepsilon \bigl(\mu_x({\partial{X}})+\mu_y({\partial{X}})\bigr)\\ &\stackrel{(\ref{zeromeasureboundary})}{\le} {\mathrm{e}}^{\varepsilon} \mu_{x}(\widehat A)\mu_{y}(\widehat B)+\alpha {\mathrm{e}}^\varepsilon \bigl(\mu_x({\partial{X}})+\mu_y({\partial{X}})\bigr)\end{aligned}$$ and $$\begin{aligned} \Vert m_\Gamma\Vert \cdot \liminf_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\ge {\mathrm{e}}^{-\varepsilon} \mu_{x}(A^-)\mu_{y}(B^-)\\ &\ge {\mathrm{e}}^{-\varepsilon} \mu_{x}(\widehat A^\circ)\mu_{y}(\widehat B^\circ)-\alpha {\mathrm{e}}^{-\varepsilon} \bigl(\mu_x({\partial{X}})+\mu_y({\partial{X}})\bigr)\\ &\stackrel{(\ref{zeromeasureboundary})}{\ge} {\mathrm{e}}^{-\varepsilon} \mu_{x}(\widehat A)\mu_{y}(\widehat B)-\alpha {\mathrm{e}}^{-\varepsilon }\bigl(\mu_x({\partial{X}})+\mu_y({\partial{X}})\bigr)\end{aligned}$$ As $\alpha$ was arbitrarily small we get in the limit as $\alpha$ tends to zero $$\begin{aligned} \limsup_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\le {\mathrm{e}}^{\varepsilon} \mu_{x}(\widehat A)\mu_{y}(\widehat B)/\Vert m_\Gamma\Vert \quad\text{and}\\ \liminf_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\ge {\mathrm{e}}^{-\varepsilon} \mu_{x}(\widehat A)\mu_{y}(\widehat B)\Vert m_\Gamma\Vert .\end{aligned}$$ So for every continuous and positive function $h$ with support in $\widehat V\times \widehat W$ we have $$\begin{aligned} \frac{{\mathrm{e}}^{-\varepsilon}}{\Vert m_\Gamma\Vert} \int h ({\mathrm{d}}\mu_x\otimes {\mathrm{d}}\mu_y) & \le \liminf_{T\to\infty} \int h {\mathrm{d}}\nu_{x,y}^T\\ &\le \limsup_{T\to\infty} \int h {\mathrm{d}}\nu_{x,y}^T\le \frac{{\mathrm{e}}^\varepsilon}{\Vert m_\Gamma\Vert} \int h ({\mathrm{d}}\mu_x\otimes {\mathrm{d}}\mu_y).\end{aligned}$$ Now the compact set ${\partial{X}}\times{\partial{X}}$ can be covered by a finite number of open sets of type $V\times W$ with $V$, $W\subset{\partial{X}}$ as above, and similarly ${\overline{{X}}}\times{\overline{{X}}}$ by finitely many open sets $\widehat V\times \widehat W$ with $\widehat V$, $\widehat W\subset{\overline{{X}}}$ as above. Using a partition of unity subordinate to such a finite cover we see that the inequalities above remain true for every continuous and positive function on ${\overline{{X}}}\times{\overline{{X}}}$. The claim now follows by taking the limit $\varepsilon\to 0$, and passing from positive continuous functions to arbitrary continuous functions via a standard argument. 
$\hfill\square$ We conclude this section with the following Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with non-arithmetic length spectrum, ${{\mathcal Z}}_\Gamma\ne\emptyset$ and finite Ricks’ Bowen-Margulis measure $m_\Gamma$. Let $f:{\overline{{X}}}\to{\mathbb{R}}$ be a continuous function, and $x$, $y\in{X}$. Then $$\lim_{T\to\infty} \delta_\Gamma {\mathrm{e}}^{-\delta_\Gamma T} \sum_{\begin{smallmatrix}{\scriptstyle\gamma\in\Gamma}\\{\scriptstyle d(x,\gamma y )\le T}\end{smallmatrix}} f(\gamma y)=\frac{ \mu_y({\partial{X}})}{\Vert m_\Gamma \Vert} \int_{{\partial{X}}} f(\xi){\mathrm{d}}\mu_x(\xi).$$ Asymptotic estimates for the orbit counting function {#orbitcounting} ==================================================== In this section we let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne\emptyset$. We fix a point ${{o}}\in{X}$ with trivial stabilizer in $\Gamma$. Recall that the orbit counting function with respect to $x$, $y\in {X}$ is defined by $$N_\Gamma:[0,\infty)\to{\mathbb{N}},\quad R\mapsto \#\{\gamma\in\Gamma\colon d(x,\gamma y)\leq R\}.$$ We first state a direct corollary of Theorem \[equithm\] (using $f=\mathbbm{1}_{{\overline{{X}}}\times{\overline{{X}}}}$): Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with non-arithmetic length spectrum, ${{\mathcal Z}}_\Gamma\ne\emptyset$ and finite Ricks’ Bowen-Margulis measure $m_\Gamma$. Then for any $x$, $y\in{X}$ we have $$\lim_{R\to\infty} \delta_\Gamma{\mathrm{e}}^{-\delta_\Gamma R} N_\Gamma(R) =\frac{\mu_x({\partial{X}})\mu_y({\partial{X}})}{ \Vert m_\Gamma \Vert}.$$ We next deal with the case that the Ricks’ Bowen-Margulis measure is not finite: \[lowgrowth\] Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne\emptyset$ and infinite Ricks’ Bowen-Margulis measure $m_\Gamma$. If $\Gamma$ is divergent, we further require that $\Gamma$ has non-arithmetic length spectrum. Then for the orbit counting function with respect to arbitrary points $x$, $y\in{X}$ we have $$\lim_{t\to\infty} N_\Gamma(t){\mathrm{e}}^{-\delta_\Gamma t}=0.$$ As in the proof of Theorem \[equithm\] we define the measure $$\nu_{x,y}^T:= \delta_\Gamma {\mathrm{e}}^{-\delta_\Gamma T}\sum_{\begin{smallmatrix}{\scriptstyle\gamma\in\Gamma}\\{\scriptstyle d(x,\gamma y )\le T}\end{smallmatrix}} \mathcal{D}_{\gamma y}\otimes \mathcal{D}_{\gamma^{-1} x};$$ here we only have to show that $$\displaystyle \limsup_{T\to\infty} \nu_{x,y}^T({\overline{{X}}}\times {\overline{{X}}})=0.$$ Again, the first step of the proof is provided by the following \[firststeplem\] Let $(\xi_0,\eta_0)\in{\partial{X}}\times{\partial{X}}$ and $x$, $y\in{X}$ with trivial stabilizer in $\Gamma$ and such that $x\in (\xi_0 v^+)$, $y\in (\eta_0 w^+)$ for some $\Gamma$-recurrent elements $v$, $w\in{{\mathcal Z}}$. Then there exist open neighborhoods $V$, $W\subset{\partial{X}}$ of $\xi_0$, $\eta_0$ such that for all Borel sets $A\subset V$, $B\subset W$ $$\begin{aligned} \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr) &= 0.\end{aligned}$$ Let $\varepsilon>0$ be arbitrary and set $\rho:= \min\{ d(x,\gamma x), d(y,\gamma y)\colon\gamma\in\Gamma\setminus\{e\}\}$.
As in the proof of Proposition \[firststep\] we fix $r\in (0, \min\{1, \rho/3, \varepsilon/(30\delta_\Gamma)\})$ such that $$\mu_x\bigl(\widetilde\partial\mathcal{O}_r(\xi_0, x)\bigr)=0=\mu_y\bigl(\widetilde\partial\mathcal{O}_r(\eta_0, y)\bigr)$$ and choose open neighborhoods $\widehat V$, $\widehat W\subset{\overline{{X}}}$ of $\xi_0$, $\eta_0$ such that if $(a, b)\in \widehat V\times \widehat W$, then $a$ can be joined to $v^+$, $b$ can be joined to $w^+$ by a rank one geodesic and (\[approxmeasures\]) holds. Let $V\subset \widehat V\cap{\partial{X}}$, $W\subset \widehat W\cap{\partial{X}}$ be open neighborhoods of $\xi_0$, $\eta_0$, and $A\subset V$, $B\subset W$ arbitrary Borel sets; denote $K^+=K_r^+(x,A)$, $K^-=K_r^-(y,B)$, and $M=r^2 \mu_x\bigl({\mathcal O}_r(\xi_0,x)\bigr) \mu_y\bigl({\mathcal O}_r(\eta_0,y)\bigr)>0$. Then by mixing (or dissipativity in the case of a convergent group $\Gamma$) there exists $T_0\gg 1$ such that $$\sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-)<M \varepsilon\cdot {\mathrm{e}}^{-\varepsilon/3}$$ for all $t\ge T_0$, which implies $$\bigl( {\mathrm{e}}^{\delta_\Gamma (T+3r)}- {\mathrm{e}}^{\delta_\Gamma T_0}\bigr)M \varepsilon\cdot {\mathrm{e}}^{-\varepsilon/3} > \delta_\Gamma \int_{T_0}^{T+ 3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m(K^+\cap g^{-t}\gamma K^-){\mathrm{d}}t.$$ We now use (\[lowerboundused\]) to get $$\begin{aligned} & \hspace*{-0.7cm} \delta_\Gamma \int_{T_0}^{T+3r} {\mathrm{e}}^{\delta_\Gamma t} \sum_{\gamma\in\Gamma} m \bigl( K^+\cap g^{-t}\gamma K^-\bigr) {\mathrm{d}}t\\ &\ge {\mathrm{e}}^{-\varepsilon/6} M {\mathrm{e}}^{\delta_\Gamma T} \nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)- C\end{aligned}$$ with a constant $C$ independent of $T$. Dividing by $M {\mathrm{e}}^{\delta_\Gamma (T+3r)}$ then yields $$\begin{aligned} \bigl(1- {\mathrm{e}}^{\delta_\Gamma (-T-3r+T_0)}\bigr) \varepsilon\cdot {\mathrm{e}}^{-\varepsilon/3} & > {\mathrm{e}}^{-\varepsilon/6}{\mathrm{e}}^{-3\delta_\Gamma r}\nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr) - C'{\mathrm{e}}^{-\delta_\Gamma T}\\ &\ge {\mathrm{e}}^{-4\varepsilon/15} \nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)- C'{\mathrm{e}}^{-\delta_\Gamma T} ,\end{aligned}$$ where $C'$ is again a constant independent of $T$. We conclude $$\limsup_{T\to\infty} \nu_{x,y}^T\bigl( {\mathcal C}^-_{1}(x,A)\times {\mathcal C}^-_{1}(y,B)\bigr)<\varepsilon ,$$ and the claim follows from the fact that $\varepsilon>0$ was chosen arbitrarily small. The next statement shows that in fact we can omit the conditions on $x$ and $y$ in Lemma \[firststeplem\]. \[secondsteplem\] Let $x$, $y\in{X}$ be arbitrary. Then for all $(\xi_0,\eta_0)\in{\partial{X}}\times{\partial{X}}$ there exist $r>0$ and open neighborhoods $V\subset{\partial{X}}$ of $\xi_0$, $W\subset{\partial{X}}$ of $\eta_0$ such that for all Borel sets $A\subset V$, $B\subset W$ $$\begin{aligned} \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr) &=0. \end{aligned}$$ Let $(\xi_0,\eta_0)\in{\partial{X}}\times{\partial{X}}$ be arbitrary. Choose $\Gamma$-recurrent geodesics $v$, $w\in{{\mathcal Z}}$ and $x_0\in (\xi_0 v^+)$, $y_0\in (\eta_0 w^+)$ with trivial stabilizers in $\Gamma$. Let $V$, $W\subset{\partial{X}}$ be open neighborhoods of $\xi_0$ and $\eta_0$ such that the statement of Lemma \[firststeplem\] holds for $x_0$, $y_0$ instead of $x$, $y$.
Set $$r=1+\max\{ d(x,x_0), d(y,y_0)\}$$ and let $A\subset V$, $B\subset W$ be arbitrary Borel sets. From the choice of $r$ above and Lemma \[changepoint\] (b) we know that $(\gamma y,\gamma^{-1}x)\in {\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)$ implies $$(\gamma y_0,\gamma^{-1}x_0)\in {\mathcal C}^-_{1}(x_0,A)\times {\mathcal C}^-_{1}(y_0,B).$$ If $d(x,\gamma y)\le T$, then obviously $$\begin{aligned} d(x_0,\gamma y_0)&\le d(x_0, x)+ d(x,\gamma y)+d(y, y_0)\le T+d(x_0, x)+d(y, y_0),\end{aligned}$$ hence for $T\gg 1$ $$\begin{aligned} {\mathrm{e}}^{-\delta_\Gamma T}& \#\{\gamma\in\Gamma\colon d(x,\gamma y)\le T,\ (\gamma y,\gamma^{-1} x) \in \bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr) \}\\ & \le {\mathrm{e}}^{\delta_\Gamma\left(d(x_0, x)+d(y, y_0)\right) }\cdot {\mathrm{e}}^{-\delta_\Gamma\left(T+ d(x_0, x)+d(y, y_0)\right)}\\ &\hspace*{15mm}\cdot\#\{\gamma\in\Gamma\colon d(x_0,\gamma y_0)\le T+d(x_0, x)+d(y, y_0),\\ &\hspace*{29mm} \ (\gamma y_0,\gamma^{-1} x_0) \in \bigl({\mathcal C}^-_{1}(x_0,A)\times {\mathcal C}^-_{1}(y_0,B)\bigr)\}.\end{aligned}$$ We conclude that $$\begin{aligned} & \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A)\times {\mathcal C}^-_{r}(y,B)\bigr)\\ &\hspace*{1cm} \le {\mathrm{e}}^{\delta_\Gamma \left(d(x_0, x)+d(y, y_0) \right) } \limsup_{T\to\infty} \nu_{x_0,y_0}^{T+d(x_0, x)+d(y, y_0)}\bigl({\mathcal C}^-_{1}(x_0,A)\times {\mathcal C}^-_{1}(y_0,B)\bigr)=0,\end{aligned}$$ where we used Lemma \[firststeplem\] in the last estimate. [*Proof of Theorem \[lowgrowth\].*]{}  Let $x$, $y\in{X}$ and $\varepsilon>0$ arbitrary. For $(\xi_0,\eta_0)\in{\partial{X}}\times {\partial{X}}$ we fix $r>0$ and open neighborhoods $V$, $W\subset{\partial{X}}$ of $\xi_0$, $\eta_0$ [  ]{}the conclusion of Lemma \[secondsteplem\] holds. Choose open sets $\widehat V$, $\widehat W\subset{\overline{{X}}}$ with $\widehat V\cap{\partial{X}}=V$ and $\widehat W\cap{\partial{X}}=W$, and let $\widehat A$, $\widehat B\subset{\overline{{X}}}$ be Borel sets with $$\overline{\widehat A}\subset\widehat V \quad\text{and }\quad \overline{\widehat B}\subset\widehat W.$$Choose open sets $A $, $B \subset {\partial{X}}$ with the properties $$\begin{aligned} \overline{\widehat A}\cap{\partial{X}}\subset A &\subset V\quad\text{and }\quad \overline{\widehat B}\cap{\partial{X}}\subset B \subset W;\end{aligned}$$ from Lemma \[orbitpointsincones\] (b) we know that the number of $\gamma\in\Gamma$ with $$(\gamma y,\gamma^{-1}x)\in (\overline{\widehat A}\times\overline{\widehat B})\setminus \left( {\mathcal C}^-_{r}(x,A )\times {\mathcal C}^-_{r}(y,B )\right)$$ is finite. Hence $$\begin{aligned} \limsup_{T\to\infty} \nu_{x,y}^T\bigl(\widehat A\times\widehat B\bigr) &\le \limsup_{T\to\infty} \nu_{x,y}^T\bigl({\mathcal C}^-_{r}(x,A )\times {\mathcal C}^-_{r}(y,B )\bigr)=0,\end{aligned}$$ which implies that for every continuous and positive function with support in $\widehat V\times \widehat W$ we have $$\limsup_{T\to\infty} \int h {\mathrm{d}}\nu_{x,y}^T=0.$$ Now the compact set ${\partial{X}}\times{\partial{X}}$ can be covered by a finite number of open sets of type $V\times W$ with $V$, $W\subset{\partial{X}}$ as above, and similarly ${\overline{{X}}}\times{\overline{{X}}}$ by finitely many open sets $\widehat V\times \widehat W$ with $\widehat V$, $\widehat W\subset{\overline{{X}}}$ as above. 
Using a partition of unity subordinate to such a finite cover we see that the statement above remains true for every continuous and positive function on ${\overline{{X}}}\times{\overline{{X}}}$. $\hfill\square$ Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank the anonymous referee for pointing out several inaccuracies and mistakes in the first version of the paper. She is also very grateful for his many valuable suggestions to improve the exposition. References {#references .unnumbered} ================
(MR1910932) \[10.1007/BF02773153\] Martine Babillot, , *Israel J. Math.*, **129**, (2002), 61–76.
(MR656659) \[10.1007/BF01456836\] W. Ballmann, , *Math. Ann.*, **259** (1982), 131–144.
(MR1377265) \[10.1007/978-3-0348-9240-7\] W. Ballmann, *Lectures on Spaces of Nonpositive Curvature*, vol. 25 of DMV Seminar, Birkhäuser Verlag, Basel, 1995, With an appendix by Misha Brin.
(MR823981) \[10.1007/978-1-4684-9159-3\] W. Ballmann, M. Gromov and V. Schroeder, *Manifolds of Nonpositive Curvature*, vol. 61 of Progress in Mathematics, Birkhäuser Boston Inc., Boston, MA, 1985.
(MR1341941) M. Bourdon, Structure conforme au bord et flot géodésique d’un ${\rm CAT}(-1)$-espace, *Enseign. Math. (2)*, **41** (1995), 63–102.
(MR1744486) Martin R. Bridson and André Haefliger, , Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], vol. 319, Springer-Verlag, Berlin, 1999.
(MR2585575) \[10.1007/s00039-009-0042-2\] P-E. Caprace and K. Fujiwara, , *Geom. Funct. Anal.*, **19** (2010), no. 5, 1296–1319.
(MR1207579) \[10.2307/2154747\] M. Coornaert and A. Papadopoulos, , *Trans. Amer. Math. Soc.*, **343** (1994), 883–898.
(MR1703039) \[10.1007/BF01235869\] F. Dal’bo, , *Bol. Soc. Brasil. Mat. (N.S.)*, no. 2, **30** (1999), 199–221.
(MR1779902) \[10.5802/aif.1781\] F. Dal’bo, , *Ann. Inst. Fourier (Grenoble)*, no. 3, **50** (2000), 981–993.
(MR1617430) \[10.1515/crll.1998.037\] F. Dal’bo, M. Peigné, , *J. Reine Angew. Math.*, **497** (1998), 141–169.
(MR841080) \[10.1090/conm/050/841080\] Y. Guivarc’h, A. Raugi, , *Random matrices and their applications (Contemporary Mathematics, 50)*, *American Mathematical Society*, Providence, RI (1986), 31–54.
(MR1465601) \[10.1007/s000390050025\] G. Knieper, , *Geom. Funct. Anal.*, **7** (1997), 755–782.
(MR1652924) \[10.2307/120995\] G. Knieper, , *Ann. of Math. (2)*, **148** (1998), 291–314.
(MR2290453) \[10.1007/s10455-006-9016-x\] G. Link, , *Ann. Global Anal. Geom.*, **31** (2007), 37–57.
(MR2629900) \[10.1007/s10455-006-9016-x\] G. Link, , *Geometry and Topology*, no. 2, **14** (2010), 1063–1094.
\[10.3934/dcds.2018245\] G. Link, , *Discrete and Continuous Dyn. Syst.*, no. 11, **38** (2018), 5577–5613.
(MR3543588) \[10.3934/dcds.2016072\] G. Link and J.-C. Picaud, Ergodic geometry for non-elementary rank one manifolds, *Discrete and Continuous Dyn. Syst. A*, no. 11, **36** (2016), 6257–6284.
(MR0450547) \[10.1007/BF02392046\] S. J. Patterson, , *Acta Math.*, **136** (1976), 241–273.
R. Ricks, , *PhD Thesis*, University of Michigan, 2015.
(MR3628926) \[10.1017/etds.2015.78\] R. Ricks, , *Ergodic Theory Dynam. Systems,* no. 3, **37** (2017), 939–970.
(MR2057305) T. Roblin, Ergodicité et équidistribution en courbure négative, *Mém. Soc. Math. Fr. (N.S.)*, vi+96.
(MR556586) D. Sullivan, The density at infinity of a discrete group of hyperbolic motions, *Inst. Hautes Études Sci. Publ. Math.*, 171–202.
Book Review of The Garden of Eden and the Hope of Immortality: The Read-Tuckwell Lectures for 1990 by James Barr

Introduction: James Barr is a professor of Hebrew Bible at Vanderbilt Divinity School. This book is an expanded version of a lecture series that was given at Bristol University in May 1990. The purpose of these lectures was to facilitate the rethinking of the Genesis narrative in light of the topic of immortality. Barr states that he thinks his study has something to say about philosophy, anthropology, and the general history of religions. Barr’s book concludes with the summary that the idea of immortality is found in the very beginning of the Hebrew scriptures. Barr ultimately attempts to show that the concepts of immortality and resurrection can be in harmony.

Adam and Eve, the Chance of Immortality: For Barr, the creation story found in Genesis is primarily to be understood as the “story of how human immortality was almost gained, but in fact, was lost.” He states that the story of Adam and Eve is not a narrative meant to explain sin and evil in the world. Barr points to the fact that the common conception that sin separates humanity from God does not seem to fit the narrative framework. While there may be a breakdown in relationship, God does not separate Himself from Adam and Eve as a result of their sin. Rather, the separation that takes place is between Adam and Eve, and the Tree of Life. In addition, Barr suggests that there is no concept of a loss of the reflection of the image and likeness of God. The story then is not about the idea of original sin or a fall, but rather about knowledge and immortality. Adam and Eve gain knowledge but lose the chance for eternal life. Eternal life then is understood to be controlled by the power and will of God and is not something inherent or ontological within the created order itself. In this sense, Barr sees death as a natural occurrence, and says that Adam and Eve were “mortal from the beginning.” Creation is thus potentially immortal, and the actualization of this potential is always understood to be contingent upon God as symbolized by the Tree of Life. In fact, if it is not understood this way, the Tree of Life, and in essence the entire narrative itself, loses its primary significance. Barr summarizes this chapter by stating: “In particular, our story does not speak of ‘life after death’, nor about the ‘immortality of the soul’. The ‘living for ever’ which Adam and Eve would have acquired had they stayed in the Garden of Eden is a permanent continuance of human life.” Two key points that Barr seems to make here are, first, that God’s creation is intended to be physically embodied, and that God has deemed his creation good. Second, God’s initial vision of ‘eternal life’ is to be lived out on the earth that he has created, not in a disembodied state in heaven. Barr says “immortality here, or eternal life, in the first place does not mean life after death, but the continuance of life without death.” That continuance is meant to take place in the physical bodies Adam and Eve were created with on earth. Here, Barr strongly affirms the goodness of God’s physical creation of both humanity and the rest of the created order.

The Naturalness of Death, and the Path to Immortality

Throughout the book, Barr interacts with Oscar Cullmann’s book, which draws a contrast between Greek and Hebrew thought by contrasting the deaths of Socrates and Jesus.
Barr reveals that there are two sides to the argument concerning the relationship between death and God. On one hand, there are texts that seem to support the idea that death is an enemy that separates humanity from God. These are revealed in Biblical statements about God being the God of the living, and God’s desire for all to live and none to die. On the other hand, there also seem to be texts that show God’s control over life, revealing that God can both give and take life from his creation. Barr thinks that for the Hebrews, the primary question concerning life after death was not the modern problem of the continuance of existence, “but whether they continued to be in touch with Yahweh.” Understood this way, the Hebrews could believe in the continuity of existence as shades in Sheol and still maintain that death was an enemy that separated a deceased person from Yahweh. In investigating the language of creation and the soul, Barr shows that humans don’t ‘have’ a soul; they ‘are’ a soul. Here he draws attention to the Hebrew parallels found in Job 14:22, Psalm 63:2 and Psalm 84:3, where the Hebrew term nephesh (soul) is placed in parallel with the word for flesh. Barr’s critique of Cullmann seems to be that “Hebrew thinking, then is not itself a monolithic block.” The point is that there was a variety of belief in anthropological ideas and the afterlife within the Hebrew tradition, seen most poignantly in the contrast between the Pharisees and Sadducees. The contrast can also be seen between the canonical Hebrew scriptures and the Second Temple literature. Barr says that the intent of the second chapter is not necessarily to prove or disprove the idea of the immortality of the soul. Instead, he is seeking to show that death is a natural occurrence within creation. In this sense, death is not seen so much as a punishment but as a natural limitation of the created order that God has made. Death can be the fulfilment of a long and fulfilled life, or it can be an enemy that cuts short one’s potential through unjust violence or other circumstances which might end a person’s life prematurely.

Knowledge, Sexuality and Immortality

In the third chapter of the book, Barr points out that the sole “motivation for the expulsion from the garden” is to limit access to the Tree of Life, which in turn denies both Adam and Eve immortality. If humanity possessed an immortal soul, this limitation would seem to be only a partial punishment, in that it would eventually bar humanity from embodied life but not from life in general. In order to maintain a dualist or trichotomist anthropology, one must then redefine the common biological understanding of death as outlined by God in Genesis 3:19.

Noah’s Ark: Time, Chronology and the Fall

One of the unique statements that Barr seems to make is “if we are to talk of a ‘Fall of Man’ and indeed of a ‘fallen world’, we come much closer to it here in the Flood story than in that of the Garden of Eden.” This is a very interesting observation, considering that historically the idea of the Fall has been associated with the first sin. While the banishment from the Garden seems to be a limitation of ability, which is represented by the Tree of Life, the Flood narrative seems to be a direct judgement and punishment from God. Read this way, Barr suggests that the real problem or ‘Fall of Man’ is understood to be violence.
Barr draws his readers' attention to Genesis 6:3, which states "My spirit will not remain in a human being forever; because he is mortal flesh, he will live only for a hundred and twenty years." This text coincides nicely with Romans 8:11, which states "And if the Spirit of him who raised Jesus from the dead is living in you, he who raised Christ from the dead will also give life to your mortal bodies because of his Spirit who lives in you." Here we see that life is contingent upon God's life-giving breath or spirit. Genesis 2:7 reveals this truth in the initial creation of man, God sets a limit on that life in Genesis 6:3, and then we see the hope found in the resurrection and the restoration of the spirit in Romans 8:11. Barr then moves to what he says is the true meaning of Jesus' crucifixion, which is to reveal the violence of man through an unjust execution. Barr says "the point is not the general, universal fact that he died, but the way in which he died, the victim of a complicated set of injustices." Two things are concluded in this chapter. First, concerning immortality, Adam and Eve "never [had] it, but they had the chance of it, and lost the chance." Second, humanity did not lose the image of God. The image of God is said to correspond to a vocation: to be God's rulers over the created order. Immortality and Resurrection: Conflict or Complementarity? In the final chapter, Barr suggests that humanity in a general sense can be divided into two classes of people: those who hold to the immortality of the soul and those who believe in the resurrection. The advantage of the physicalist, resurrectionist view is that it seems to align more accurately with the natural sciences. Barr says that "traditional Christianity [has] invested far more heavily in [the] idealist, immaterialist side of Greek philosophy than people now want to admit." He suggests a tangible example of this can be found in the Westminster Confession. Barr suggests the theology of such a faith statement has at least four obvious problems in relation to the Biblical text. First, in the Westminster Confession, judgement must take place both individually and immediately after death, while the Bible seems to indicate a corporate judgement at the return of Jesus. Second, in the Westminster Confession man is attributed an immortal soul, which does not seem to coincide with the Genesis narrative or scripture as a whole. Third, the Westminster Confession proclaims that both Heaven and Hell are eternal locations, which becomes problematic because it requires conceiving of God as an eternal torturer. Finally, the Westminster Confession suggests that the resurrection will be a reunion of body and soul, which does not fit with the Biblical language of sleep and of the whole person being reconstituted from the dust at the resurrection. Barr suggests that a commonly overlooked problem is that the Platonic idea of the immortality of the soul also involves the ideas of reincarnation and the transmigration of souls. Here Christianity has chosen to ignore these elements of Plato's argument but keep the ones that support its truth claim. In part, it seems, Christianity struggled with the concept of continuity of personhood the farther it got from Christ's bodily resurrection. Barr finishes the book with an examination of several Pauline texts that have to do with life, death and the afterlife.
Paul seems to indicate that immortality is something gained through the process of resurrection and not an innate, ontological feature of humanity. In this way, immortality is brought to light through Jesus' resurrection. It is important to note that while Paul says that flesh and blood cannot enter the kingdom of God, he does believe in the resurrection of the body. Here Paul sees 'flesh and blood' as that which is corruptible, perishable and mortal. This is why the resurrected body must be clothed, or overclothed, with immortality. Summary: I found this book refreshing and honest. Barr does a great job of shedding new light on a very well-known Biblical narrative found in Genesis. With such a topic, it is hard to set aside preconceived notions and learned opinions and approach the text with a fresh perspective. I personally found the connection between Genesis 6:3, in which God limits the breath of man, and the promise of the return of the spirit of humanity in the resurrection to be insightful. Barr does well to point out that the creation account culminates in the banishment from the Garden of Eden. It is here that we see the potential for immortality become limited and understand the dependence of the created on the Creator. I would highly recommend this book as a fresh way to approach the topic of Conditional Immortality as seen through the lens of the creation account in Genesis.
https://www.afterlife.co.nz/2020/06/the-garden-of-eden-and-the-hope-of-immortality/
In an independent, retrospective cohort analysis, data were collected and analyzed to assess the efficacy of high frequency chest wall oscillation (HFCWO) on Non-Cystic Fibrosis Bronchiectasis (NCFB) patients—before and after usage. Insurance claims data assessed key measures such as hospitalizations, treated pneumonia, and medication usage. Following the introduction of HFCWO vest therapy, the study revealed that patients experienced: - A decrease in hospitalizations, pneumonia, cough, and medication usage. - A reduction in Healthcare Resource Utilization (HCRU), including fewer chest x-rays, computerized tomography (CT) scans, and lung function tests. Background of Study The insurance claims data analysis was conducted using IQVIA's PharMetrics Plus® for MedTech, one of the largest US health plan databases of adjudicated integrated medical and pharmacy claims. Claims for patients who had received HFCWO within a one-year period were compared to their insurance claims prior to using the therapy. The study aimed to compare HCRU and outcomes (both before and after receipt of HFCWO) among patients with NCFB. KEY FINDINGS Reduction in Bronchiectasis Symptoms The study data show a consistent improvement in key health outcomes, as patients experienced a reduction in bronchiectasis-related symptoms, such as cough, pneumonia and hospitalizations. This suggests an overall decrease in healthcare costs for patients living with bronchiectasis.* Significant Decrease in HCRU = Lower Outpatient Costs Following HFCWO therapy, patients required fewer chest x-rays, CT scans and lung function tests, and medication use decreased. This decrease equates to a reduction in healthcare costs.* *Hospitalizations among NCFB patients who develop an infection cost up to $36,665, compared to $20,421 for a hospitalization prior to infection. Why SmartVest for HFCWO Vest Therapy? Electromed is pleased to further the body of evidence in support of HFCWO therapy and SmartVest, continuously showing improved patient outcomes. Consistent with these results, a previous 2019 first-of-its-kind, peer-reviewed study published in BMC Pulmonary Medicine found that when the SmartVest Airway Clearance System, an HFCWO therapy device, was utilized in an algorithm-of-care treatment plan, it helped reduce bronchiectasis-related exacerbations requiring hospitalization, decrease antibiotic usage, and improve lung stability. This recent study further demonstrates the benefits of utilizing HFCWO vest therapy among NCFB patients, allowing them to actively manage their bronchiectasis-related symptoms, improve their health outcomes over time, and reduce the need for medications and healthcare resources. If you'd like to learn more about the benefits of using SmartVest to help you manage your chronic lung condition, we encourage you to contact our patient care advocates today! Schedule a time to chat or call them directly at 1.855.528.5690. You can also request a patient packet to share with your clinician and read success stories to learn how SmartVest is changing lives—one breath at a time! HFCWO: High frequency chest wall oscillation (HFCWO) is an airway clearance technique that's typically performed using an inflatable vest and air pulse generator.
HCRU: Healthcare Resource Utilization describes the services a person uses for the purpose of preventing and curing health problems, promoting maintenance of health and well-being, or obtaining information about health status and prognosis. NCFB: Non-Cystic Fibrosis Bronchiectasis is an irreversible chronic condition that occurs when a patient's airways become abnormally widened and damaged from repeated infections and inflammation. MD, KM, ND, JC and AR are all employees of IQVIA, which received funding for this study from Electromed, Inc. "IQVIA" and "PharMetrics Plus" are registered trademarks of IQVIA, Inc. Resources DeKoven M, Mandia K, DeFabis N, Chen J, Ruscio A. Patient characteristics, healthcare resource utilization and outcomes among non-cystic fibrosis bronchiectasis patients with high frequency chest wall oscillation (HFCWO) therapy. IQVIA PharMetrics Plus for MedTech. 2018-2019. Powner J, Nesmith A, Kirkpatrick DP, Nichols JK, Bermingham B, Solomon GM. Employment of an algorithm of care including chest physiotherapy results in reduced hospitalizations and stability of lung function in bronchiectasis. BMC Pulm Med. 2019;19(1):82. Published 2019 Apr 25.
https://smartvest.com/blog/study-shows-hfcwo-vest-therapy-reduces-healthcare-resource-utilization/
A set of six Morris & Co. Sussex chairs designed by Philip Webb, comprising two carvers and four single chairs, all in good, original condition and purchased from a private collection. Each chair has four turned spindles on the back rail leading down to a rush seat (all original and in good condition). The arm supports pass through the seat rail, an unusual feature found only on this design of chair. The chairs are supported by tapered legs with circular stretchers. The Sussex chair was introduced by Morris & Co. (William Morris) around 1864 and remained popular for many years; it was still being produced and featured in the firm's catalogue of 1911. William Morris used the Sussex chairs in his own houses, as did his great friend, the artist Edward Burne-Jones. This chair design brought a new style to furniture: it was of simple structure, made from beech stained black or 'ebonised', with a rush seat, and was ideal for the smaller property. The Sussex chair also influenced other leading firms of the time, such as Liberty & Co. and Heal's, who produced their own versions, such as the 'Argyll' range at Liberty & Co. This set of chairs is of stable construction and can go straight into a home. Size:
https://www.lvsdecorativearts.co.uk/en-GB/sold-archive/set-of-six-morris-co-sussex-chairs/prod_11902
Arizona Legislature approves delay in tax filing deadline, but other concerns remain Arizona now appears likely to extend its income tax filing due date to match the temporary May 17 federal deadline, but the state hasn't made much progress over the past week to adopt or conform to various other rule changes. A bill to extend the state's normal April 15 due date to May 17 recently cleared both chambers of the Legislature and awaits action from the governor. But movement on a related issue of conforming to recent federal tax-law changes has proven more elusive. Arizona and most states that levy income taxes routinely adjust their codes to match annual changes at the federal level, even though it can mean several hundred million dollars in revenue losses, as would be the case this year in Arizona. The purpose of making those changes is to give taxpayers clarity and consistency, as federal adjusted gross income is the starting point for calculating Arizona taxes. Most states already have conformed, but Arizona still hasn't. For example, one notable divergence involves unemployment income. Normally, that compensation is taxable, but Congress waived that requirement for 2020, making the first $10,200 in jobless benefits received by an individual tax free at the federal level (for people with income below $150,000). At the moment, jobless benefits remain taxable in Arizona. More than 2 million Arizonans received jobless benefits in 2020, so conforming on this issue, or failing to do so, affects a lot of people here. Other conforming issues include the tax treatment of retirement-plan withdrawals, as Congress eased these rules last year, and the tax treatment of forgivable Paycheck Protection Program loans. Tax-return preparers are split in their advice, with some recommending that taxpayers file as soon as possible and others suggesting that people delay. If the state extends the filing date to May 17 as expected, that will provide more breathing room. IRS to make calculations The IRS provided guidance in late March on how taxpayers should treat unemployment compensation. Normally, as noted, jobless benefits are taxable. But the American Rescue Plan enacted March 11 decreed that up to $10,200 in benefits paid in 2020 per person (up to $20,400 for married couples) could escape taxation for those with modified adjusted gross income below $150,000. By then, however, millions of people already had filed their 2020 returns. The new IRS guidance means those individuals won't need to file amended returns, much to the relief of taxpayers, return preparers and probably even the IRS. The agency says it will refigure the tax obligations for those people, adjust their bills and send any refunds due starting in spring and continuing into summer. For taxpayers who received unemployment benefits last year and haven't yet filed, there are several steps to take. The IRS provides more information at irs.gov under an unemployment exclusion section. The revised tax balances apparently will qualify some people for a stimulus payment or a larger one. Such individuals might be "eligible for a new or larger payment based on their recently processed 2020 tax returns," the IRS said in a statement April 2, in response to a question on unemployment. The latest stimulus or economic impact payments phased out for singles with income between $75,000 and $80,000 and married couples between $150,000 and $160,000.
Free tax help available With the expanded federal tax-filing deadline of May 17, some no-cost tax centers are expected to remain open until then, including several sites operated under VITA, the Volunteer Income Tax Assistance program. The VITA program, which is affiliated with the IRS, is generally open to people earning less than $57,000 annually, plus those with disabilities or limited English skills. Sites expected to remain open into May include those run by Phoenix (phoenix.gov/eitc), Masters of Coin (mastersofcoin.org), Mesa United Way (mesaunitedway.org/volunteer-income-tax-assistance/), A New Leaf West Valley (turnanewleaf.org/services/financial-empowerment/vita-program.html) and Rehoboth CDC (rcdcphx.org/vita-tax-preparation). Refunds slightly lower for Arizonans As is typically the case, most taxpayers are receiving refunds so far this filing season, both nationally and in Arizona. Nationally, 76% of taxpayers have received or are due a refund, averaging $3,660, according to a LendingTree analysis of IRS data released March 29. In Arizona, the numbers are a bit lower — 73% received or are slated for a refund, with an average amount of $3,395. Scam targets students, college staff The IRS is warning of a scam that appears to target university students and staff who have .edu email addresses. Suspicious emails display the IRS logo and use subject lines with phrases such as "tax refund payment" or "recalculation of your tax-refund payment" in an effort to entice victims into disclosing sensitive personal information. The IRS doesn't send out unsolicited emails to taxpayers. Reach the reporter at [email protected].
https://www.azcentral.com/story/money/business/consumers/2021/04/03/arizona-state-income-tax-filing-deadline-delay-approved-legislature/4850498001/
As the planet’s oceans and rivers warm, increased heat could pose a grave threat to the fish populations the world depends on by the end of this century. That’s the alarming conclusion of a new study published Thursday in the journal Science. Among the species the authors said are at risk are some of the most commercially important species on Earth — including grocery store staples like Atlantic cod, Alaska pollock and sockeye salmon, and sport fishing favorites like swordfish, barracuda and brown trout. In fact, 60% of the fish species examined could struggle to reproduce in their current habitat ranges by the year 2100 if the climate crisis continues unchecked, according to the researchers. If governments recommit themselves to holding global warming to 1.5 degrees Celsius, however, the scientists found the number of species threatened could be far less — just 10%. “More than half of the species potentially at risk is quite astonishing, so we really emphasize that it’s important to take action and follow the political commitments to reduce climate change and protect marine habitats,” said Dr. Flemming Dahlke, a marine biologist at Germany’s Alfred Wegener Institute and one of the authors of the study. That worst-case scenario of global warming’s impact on fish species is yet more evidence showing that without sweeping efforts to cut emissions of heat-trapping gases, human activity threatens to permanently disrupt the ecosystems that feed billions of people. The study was conducted by a team of researchers based in Germany, who analyzed temperature tolerance data for nearly 700 species of marine and freshwater fish from climate zones around the world. Heat effects on key life stages Past studies of fish sensitivity to changing water temperatures have focused mostly on adult fish. But to really understand how warmer waters will impact species, the researchers argued that it’s essential to look at the most vulnerable stages in a fish’s life cycle — those that are critical to reproduction and species survival. In nearly all cases, fish embryos and spawners — female and male fish that are preparing to produce eggs and sperm — are far more vulnerable to abnormally warm water temperatures than adults and larvae, the researchers found. “There’s a difference in the tolerable temperature range of almost 20 degrees on average between embryos and adults,” Dahlke said. “This of course makes a big difference when you want to look at the sensitivity of species to global warming.” Global warming outpaces species adaptation The consequences for humans could be enormous. Today, an estimated 3 billion people rely on fish and seafood as their primary source of protein. To survive, many species would be forced to evolve to cope with warmer temperatures or to move in search of cooler waters, the authors said. But given the speed with which human activity is warming the planet, many species may not be able to adapt fast enough to survive. “If climate change continues unchecked, we will probably see big changes in the species composition of our ecosystems,” Dahlke said. “When species can no longer reproduce in their traditional habitats, they have to either go into deeper water or further North if possible, or become locally extinct if they’re unable to do that.” As the planet heats up, several other recent studies have shown how oceans in particular have borne the brunt of the warming. 
It's "virtually certain" that the world's oceans have warmed nonstop since 1970 and have absorbed 90 percent of the planet's excess heat, according to a landmark report last fall from the United Nations' Intergovernmental Panel on Climate Change. And the world's oceans are now heating at the same rate as if five Hiroshima atomic bombs were dropped into the water every second, according to another study from January. There is also evidence that heat is already reshaping aquatic ecosystems. An April study published in the scientific journal Nature found that recent marine heat waves — characterized by extreme water temperatures that persist over huge areas of the ocean — have led to fish die-offs and dramatically shifted the ranges of many species. These recent marine heat waves have offered a glimpse into how species may respond if water temperatures continue to climb, Dahlke said. "The effects of these heat waves kind of forecasts what could happen at the end of this century." This latest study strengthens the evidence and sounds an alarm for government leaders to take meaningful action to protect the fish species that people depend on.
https://nbcpalmsprings.com/2020/07/05/warming-temperatures-threaten-hundreds-of-fish-species/
For the best feng shui, you want to have the air and energy flow easily around and under you while you're in bed. … If you have to have storage under the bed, stick to soft sleep-related items like linens, pillows, and towels. Are storage beds bad feng shui? A common place for storage in the bedroom is usually under the bed, but Cerrano believes it's not good practice: "From a feng shui perspective, storage under the bed can obstruct your sleeping pattern because the movement of energy cannot flow evenly around the energy fields of your bed." For example, Cerrano suggests … Are beds with storage good? Efficient Use of Space – With storage beds, that empty space underneath your mattress is put to good use. A functional storage bed can free up dozens of cubic feet of extra space in a bedroom where every inch is valuable. What is the best bed position feng shui? In feng shui, we place the bed using the principle of the commanding position. You want your bed located so that when you're lying in bed, you can see the door to the bedroom. However, you don't want to be directly in line with the door either. A good rule of thumb is to place the bed diagonally from the door. What do you put in a storage bed? What to Store Under Your Bed - Luggage: You can also store things inside your luggage and just set them aside when you're not traveling. - Out-of-season clothing: Because your bed is close to your clothes closet, this seems like a no-brainer. … - Linens: Keep your extra set of sheets and pillowcases right under your bed. Nov. 13, 2020 What is bad feng shui for bedroom? Why is a bed close to the bedroom door considered bad feng shui? A bed close to the bedroom door is considered bad feng shui because doors usually have a strong flow, or rush, of incoming energy. This energy can be very unsettling and too active compared to the energy you want close to your bed. Is it bad to put stuff under your bed? Storing items under your bed can block the chi (energy) from flowing around it. Since everything is energy, whatever is near the bed can affect your health, sleep or other areas of your life. If you're not sleeping well, take a close look at what you have tucked under your bed. What is the best storage bed? If you're ready to start shopping, here are the best storage beds on the market today. - Best Overall: Winston Porter Houchins Upholstered Storage Platform Bed. … - Best Queen: Greyleigh Kerens Tufted Upholstered Storage Standard Bed. … - Best King: Breakwater Bay Jaimes Storage Platform Bed. Jun. 10, 2020 Does bed height affect sleep? Overall, the height of the bed will not affect the quality of someone's sleep. That being said, the thickness of the mattress will determine the amount of comfort or support someone receives. Why shouldn't you sleep with your feet facing the door? Some cultures believe that spirits might drag you away if you sleep with your feet facing the door. … As in many cultures, it's vital that your feet do not face the bedroom door as you sleep. It's considered bad for your health because dead bodies traditionally are removed from a bedroom feet first. How should I arrange my bedroom feng shui? 13 Tips for Feng Shui in Your Bedroom - Invest in a king or super-king size bed and mattress. - Position your bed centrally and out of line from the bedroom door. - Maximise air-flow with adjustable blinds. - Limit electronic devices and screens. - Ensure your bed has a bedside table on either side. Mar. 4, 2019
Which direction should a bed face for peaceful sleeping? Ideally, when sleeping, your head, or the head of the bed, should be facing North. North represents quieting the mind and allowing self-introspection, and it promotes the warm, restorative, safe feeling that comes with a time of deep sleep or hibernation. How do you maximize under-bed storage? 10 Ways to Maximize Under-the-Bed Storage - Wheeled Boxes. A specially designed under-bed wheeled box keeps belongings out of sight but is easy to roll out when you need something. … - Drawers. … - Locked Boxes. … - Storage Bags. … - Archival Garment Boxes. … - Photo Storage Cases. … - Shoe Organizers. … - Gift Wrap Organizers. Feb. 2, 2021 How do you hide things under your bed? The space under your bed is a prime spot for storing things in your bedroom! Use storage baskets, rolling storage bins, or pull-out drawers underneath the bed frame to hide shoes, clothes, suitcases, and more.
https://livewithyoga.net/meditation/is-storage-bed-good-for-feng-shui.html
Why project management is key to operational business intelligence The best infrastructure in the world will not deliver the desired results from an operational business intelligence application unless the very best planning and management of the project are in place and are practiced. This article originally appeared on the BeyeNETWORK. This is the second part of a two-part series discussing project planning and project management. Part 1 of this series discussed management expectations, project scope, risks and change control procedures. This article provides suggested approaches for project planning and project management. Project Planning Project planning is not a one-time activity. Since a project plan is based on estimates, which are frequently no more than best guesses, project plans must be adjusted constantly. The number one telltale sign that a project is not being managed is a static project plan where estimates and milestones have never changed from the day they were developed. Activities and Tasks Operational business intelligence (OBI) projects are composed of many activities, with a long checklist of tasks. Regardless of the project manager's experience, it is impossible for any person to remember all the tasks that need to be performed on an OBI project. At a minimum, the project manager must rely on some existing comprehensive list of the most necessary activities. Naturally, not all activities have to be performed on every project. Not even every step has to be performed on every project. The project manager selects the minimum number of steps and activities needed to produce an acceptable deliverable under the imposed constraints. The development approach for an operational business intelligence application is not linear. It is a much more dynamic approach to application development. It often looks like and feels like a prototype – but it is not a prototype. The same rigor and discipline applied under a traditional methodology must be applied to OBI projects in terms of controlling scope, mitigating risks and time-boxing weekly activities. However, constant rework must be expected during the development cycle and should be built into the project plan. For example, analysis activities can show up as early as planning and as late as testing, and so can design or development activities. These activities, arriving in any order, keep any project manager on his or her toes! The project plan must reflect this dynamic nature of application development. Since changes and setbacks are to be expected, certain "completed activities" will have to be revisited and reworked. The easiest way to plan for these internal iterations is to use the concept of "looping" or "refactoring" by dividing the project into multiple small sub-projects, each with a deliverable, albeit not a completed one. Then, revisit and revise each deliverable, adding more data and more functionality, until the entire operational business intelligence application is completed with the desired deliverable. This iterative refinement approach is what gives the project development effort the feeling of prototyping.
Estimating Once the activities and tasks have been selected for the project and the project has been organized into subprojects, the base estimates are derived using one of three methods: - Historical, based on learned patterns (e.g., how long it took on the last project) - Intuitive, based on one's intuition and experience ("gut" estimating) - Formula-based, based on the average of possibilities, as shown in Figure 1 Figure 1: Formula-Based Estimating Assigning Resources Effort estimates cannot be completed until the activities and tasks are assigned to available resources because their skills and subject-matter expertise, as well as environmental factors affecting each team member, have to be taken into consideration. - Skills – The ability to perform specific tasks: Have they done this type of work before? - Subject-Matter Expertise – The possession of facts or concepts about a specific subject matter: Are they experts in this business area? - Environmental Factors – Administrative and non-work-related activities. Some examples are listed in the table shown in Figure 2. Figure 2: Environmental Factors Task Dependencies Not all activities and tasks have to be performed serially. Many activities and tasks can be performed in parallel as long as there is sufficient staff. The first step in determining what tasks can be performed in parallel is to identify task dependencies and develop the critical path. Most project planning tools support the four types of task dependencies, as shown in Figure 3. Figure 3: Task Dependencies - Finish to Start – indicates that Task 2 cannot start until Task 1 finishes. - Start to Start – indicates that Task 2 can start at the same time as Task 1. - Finish to Finish – indicates that Task 2 cannot finish until Task 1 finishes. - Start to Finish – indicates that Task 2 cannot finish until Task 1 starts. Resource Dependencies A shortage of staff can quickly reverse the benefits of having few task dependencies. For example, tasks that could have been performed in parallel, but cannot be assigned to multiple staff members because of a staff shortage, must revert to being executed serially. Figure 4 shows how four tasks could be accomplished in 10 days with adequate staffing; Figure 5 shows that it will take 14 days to complete them because only one person could be assigned to do all tasks. Note that in Figure 5, the task of Compile Findings could be reduced by one day because there is no longer a need for two analysts to collaborate. Figure 4: Elapsed Days with Task Dependencies Figure 5: Elapsed Days with Resource Dependencies Critical Path Method (CPM) Once the task dependencies are identified and the resources are leveled, the critical path will show task duration, indicating any lag time for tasks not on the critical path, as illustrated in Figure 6. This will give the project manager the visibility he or she needs to reassign resources or to renegotiate the constraints of the project. Figure 6: Critical Path Method (CPM) Creating a useful project plan requires some effort, but maintaining the plan (adjusting it) is not as labor-intensive as it was prior to the availability of project management tools. Becoming proficient on a sophisticated project management tool will take some time, and a solid understanding of project management principles is required.
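To make the estimating and critical-path ideas above concrete, here is a minimal sketch in Python. Since Figure 1 is not reproduced here, the sketch assumes the common three-point "average of possibilities" formula, (optimistic + 4 x most likely + pessimistic) / 6, and the task names, durations and finish-to-start dependencies are invented purely for illustration.

from functools import lru_cache

def three_point(optimistic, most_likely, pessimistic):
    # Weighted average of possibilities; an assumed PERT-style formula, not necessarily Figure 1's.
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical tasks: name -> (estimated days, finish-to-start predecessors)
tasks = {
    "interview users":    (three_point(2, 3, 6), []),
    "review source data": (three_point(3, 4, 8), []),
    "compile findings":   (three_point(1, 2, 3), ["interview users", "review source data"]),
    "prepare charter":    (three_point(2, 3, 5), ["compile findings"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # A task's earliest finish is its own duration plus the latest earliest finish of its predecessors.
    duration, predecessors = tasks[task]
    return duration + max((earliest_finish(p) for p in predecessors), default=0.0)

# The critical-path length is the largest earliest finish across all tasks.
print("Critical path length (elapsed days):", round(max(earliest_finish(t) for t in tasks), 1))

As the Resource Dependencies discussion above points out, this elapsed-day figure only holds if enough staff are available to run parallel tasks concurrently; leveling the same work onto one person would stretch the schedule well beyond the critical-path length.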
Once the components (e.g., tasks, estimates, resources, dependencies) have been keyed into the tool, any adjustments subsequently made to the components automatically cascade through the entire project plan, updating all charts and reports. Although the results must still be reviewed and validated, an experienced project manager who is skilled on the project management tool does not need to become a slave to the tool or to the project planning activities. Project Planning Activities The project planning activities do not need to be performed linearly. Figure 7 indicates which activities can be performed concurrently. Figure 7: Project Planning Activities Determine project delivery requirements: The objectives for the project and some high-level requirements for the proposed scope may have already been prepared during the business case assessment step. However, most likely these are not of sufficient detail to start the planning process. As part of the scope definition, the following requirements will have to be reviewed and revised: data, functionality (reports and queries) and infrastructure (technical and non-technical). Determine condition of source files and databases: The project schedule cannot be completed and a delivery date cannot be committed to without a good understanding of the condition of the source files and databases. Some time must be taken to review the data content of these operational files and databases. Although detailed source data analysis will be performed during the data analysis step, enough information must be gleaned during planning to make an educated guess about the necessary effort for data cleansing. Determine or revise cost estimates: Detailed cost estimates must be prepared to include hardware and network costs as well as purchase price and annual maintenance fees for tools. In addition, the costs for contractors, consultants and training have to be ascertained. A more indirect cost is associated with the learning curve for the business staff and IT. It should be factored into the cost estimates as well as the time estimates. Revise the risk assessment: A risk assessment must be performed, or reviewed and revised if one was performed during the business case assessment step. Risks should be ranked on a scale of 1 to 5 according to the severity of their impact on the OBI project, with 1 being low impact and 5 being high impact. The likelihood of the risks materializing should also be indicated on a scale of 1 to 5, with 1 being "will probably not happen" and 5 being "you can almost count on it." Identify critical success factors: A critical success factor is a condition that must exist for the project to have a high chance for success. Some common critical success factors are: a proactive and very supportive business sponsor, full-time involvement of a business representative, realistic budgets and schedules, realistic expectations and a core team with the right skill set. Prepare the project charter: The project charter is similar to a scope agreement, a document of understanding, or a statement of work. However, the project charter is much more detailed than the usual 3- to 4-page general overview of the project that contains only a brief description of resources, cost and schedule. The project charter is a 20- to 30-page document developed by the core team, which includes the business representative. The project charter and the project plan are presented to the business sponsor for approval.
Create high-level project plan: Project plans are usually presented in the form of a Gantt chart showing activities, tasks, resources, dependencies, and effort. Some project managers also create Pert charts, which show the graphic representation of the critical path method on the calendar. Kick off the project: Once the project is planned, the resources are assigned and the training is scheduled, the project is ready to be kicked off. This is usually accomplished with an orientation meeting for the entire team – the core team members as well as the extended team members. Project kick-off should also include setting up communication channels (e.g., newsletters, e-mails, Web pages) to the rest of the organization to keep stakeholders and interested parties up to date on the progress of the project. Deliverables The following deliverables result from these activities: Project Charter: This document represents the agreement between IT and the business about the definition, scope, constraints and schedule of the OBI project. It also serves as the baseline for all change requests. A project charter contains the following sections: - Goals and objectives (both strategic and OBI project specific) - Statement of the business problem - Proposed OBI solution - Results from cost-benefit analysis - Results from infrastructure gap analysis (technical and non-technical) - Functional project deliverables (reports, queries, portal) - Historical requirements (how many years of history to store) - Subject area to be delivered - Entities (objects), significant attributes, relationships (high-level, or conceptual, logical data model) - Items not in scope (originally requested but subsequently excluded from scope) - Condition of source files and databases - Availability and security requirements - Access tool requirements - Roles and responsibilities - Team structure for core team and extended team members - Communication plan - Assumptions - Constraints - Risk assessment - Critical success factors Project Plan: A project plan may contain multiple graphs, such as a critical path method (CPM) chart, Pert chart, or Gantt chart, detailing task estimates, task dependencies, and resource dependencies. Most project planning tools can also produce additional tabular reports on resources and schedule. The chance of success is directly proportional to the effort expended in project planning and project management. (Source: Moss, Larissa and Atre, Shaku. Business Intelligence Roadmap – The Complete Project Lifecycle for Decision-Support Applications, Addison Wesley Professional, 2003.) Author's Note: The Business Intelligence Roadmap includes a set of all major activities and tasks that are appropriate for BI projects. Not every BI project will have to perform every single activity in every step. To receive a complimentary copy of the Business Intelligence Navigator, designed to help chart the business intelligence journey, please visit http://www.atre.com/bi_navigator/navigator.html.
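The "Revise the risk assessment" activity above ranks each risk by impact (1 to 5) and likelihood (1 to 5). The article stops at the two rankings, so the short Python sketch below adds one common convention as an assumption: multiplying the two scores into a single exposure number so that the riskiest items surface first. The risk names and scores are invented for illustration.

# Hypothetical risk register for an OBI project; names and scores are examples only.
risks = [
    {"risk": "dirty source data",                 "impact": 5, "likelihood": 4},
    {"risk": "part-time business representative", "impact": 4, "likelihood": 3},
    {"risk": "learning curve on new tools",       "impact": 3, "likelihood": 5},
]

for r in risks:
    # Assumed convention (not from the article): exposure = impact x likelihood.
    r["exposure"] = r["impact"] * r["likelihood"]

# Review the highest-exposure risks first when planning mitigation.
for r in sorted(risks, key=lambda item: item["exposure"], reverse=True):
    print(f'{r["risk"]:<35} impact={r["impact"]} likelihood={r["likelihood"]} exposure={r["exposure"]}')

However the two scores are combined, the point of the activity is the same: the rankings feed the risk assessment section of the project charter and show the project manager where mitigation effort belongs.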
https://searchbusinessanalytics.techtarget.com/news/2240111248/Why-project-management-is-key-to-operational-business-intelligence
This Information Security Addendum (“ISA”) applies whenever it is incorporated by reference into the Master Services Agreement between you and Code42 (“Agreement”). Capitalized terms used but not defined in this ISA have the meanings ascribed in the Agreement. 1. Purpose 1.1 This ISA describes the minimum information security standards that Code42 maintains to protect your Customer Data. Requirements in this ISA are in addition to any requirements in the Agreement. 1.2 Code42 follows AICPA guidelines and regularly reviews controls as described in Code42’s SOC2 independent auditor report ("SOC2 Report"). For your convenience, Code42 references some of the applicable SOC2 controls in this ISA. See the SOC2 Report for exact language. Code42 will provide you with a copy of the SOC2 Report upon request. 1.3 The CrashPlan for Small Business Offering is not SOC2 certified, and the references to specific SOC2 controls (e.g. SOC: A-4) are not applicable. 2. Encryption and key management 2.1 Code42 uses industry-standard encryption techniques to encrypt Customer Data at rest and in transit (SOC: C-10). 2.2 The Code42 system is configured by default to encrypt your files at the source using AES 256-bit encryption and decryption is controlled by you (SOC: C-8). Encryption keys are stored and transferred securely during the sign-in process using industry standard encryption technology (SOC: C-11). You manage Code42 access via the administration console. If you are on a legacy implementation that enables you to disable encryption and you have done so, this section 2.2 does not apply. Encryption keys are generated using a cryptographically strong random number that complies with the statistical random number generator tests specified in FIPS 140-2, Security Requirements for Cryptographic Modules (SOC: C-12). 2.3 Transmitted Customer file data is MD5 check-summed at multiple points during the backup process, including after encryption at the source to provide destinations the ability to detect tampering or corruption without having encryption keys for the original data. (SOC: C-9) 3. Support and maintenance Code42 deploys changes to the Cloud Services during scheduled maintenance windows, details of which are posted to the Code42 website prior to the scheduled period. In the event of a service interruption, Code42 posts a notification to the website describing the affected services. Code42 provides status updates, high level information regarding upgrades, new release availability, and minimum release version requirements via the Code42 website (SOC: CM-11). 4. Incident response and notification 4.1 “Incident” means a security event that compromises the confidentiality, integrity or availability of a Code42 information asset. "Breach" means an Incident that results in the confirmed disclosure, not just potential exposure, of Customer Data to an unauthorized party. 4.2 Code42 has an incident response plan, including a breach notification process, to assess, escalate, and respond to identified physical and cyber security Incidents that impact the organization, Customers, or result in data loss. Discovered intrusions and vulnerabilities are resolved in accordance with established procedures. The incident response plan is reviewed and updated annually and more frequently as needed (SOC: OPS-4). 
4.3 If there is a Breach involving your Customer Data, Code42 will (A) notify you within 24 hours of discovery of the Breach, (B) reasonably cooperate with you with respect to such Breach, and (C) take appropriate corrective action to mitigate any risks or damages involved with the Breach to protect your Customer Data from further compromise. Code42 will take any other actions that may be required by applicable law as a result of the Breach. 5. Code42 security program 5.1 Scope and Contents. Code42 maintains a written security program that (A) complies with applicable global industry recognized information security frameworks, (B) includes administrative, technical and physical safeguards reasonably designed to protect the confidentiality, integrity and availability of Customer Data and (C) is appropriate to the nature, size and complexity of Code42’s business operations. 5.2 Security Program Changes. Code42 policies (including the Code42 Code of Conduct), standards, and operating procedures related to security, confidentiality, integrity and availability ("Security Policies") are made available to all Code42 personnel via the corporate intranet. Security Policies are reviewed, updated (as needed), and approved at least annually to maintain their continuing relevance and accuracy. Code42 personnel are required to review and acknowledge Security Policies during on-boarding and annually thereafter (SOC: ORG-2). 5.3 Security Officer. The Code42 Chief Information Security Officer and security governance group develop, maintain, review, and approve Code42 Security Policies. 5.4 Security Training & Awareness. All Code42 personnel are required to complete security awareness training at least annually (SOC: ORG-8). Code42 conducts periodic security awareness education to give personnel direction for creating and maintaining a secure workplace. (SOC: COM-11). 6. Risk management 6.1 Code42 has a security risk assessment and management process to identify and remediate potential threats to Code42. Risk ratings are assigned to all identified risks, and remediation is managed by security personnel (SOC: RM-1). Executive management is kept apprised of the risk posture of the organization. 6.2 Code42 has an established insider threat risk management program to monitor, alert and investigate threats posed by both non-malicious and malicious actors inside the organization on an on-going basis. Identified issues are reviewed and investigated as appropriate (SOC: RM-2). 7. Access control program 7.1 Code42 assigns application and data rights based on Active Directory security groups and roles, which are created based on the principle of least privilege. Security access requests are approved by the designated individual prior to provisioning access (SOC: LA-1). 7.2 Code42 classifies informational assets in accordance with the Code42 data classification guideline (SOC: C-5). 8. User access management 8.1 Code42 promptly disables a terminated user's application, platform and network access upon notice of termination (SOC: LA-7). 8.2 Code42 reviews administrator access to confidential and restricted systems, including corporate and cloud networks, on a semiannual basis. Code42 reviews administrator access to the cloud production environment and to select corporate systems that provide broad privileged access on a quarterly basis. Any inappropriate access is removed promptly (SOC: LA-8). 
8.3 Code42 uses separate administrative accounts to perform privileged functions, and accounts are restricted to authorized personnel (SOC: LA-9). 9. Password management and authentication controls Authentication mechanisms require users to identify and authenticate to the corporate network with their unique user ID and password. Code42 requires minimum password parameters for the corporate network via the Active Directory system (SOC: LA-2). 10. Remote access and cloud access Remote access to the corporate network is secured through a virtual private network (VPN) solution with two-factor authentication (SOC: LA-3). Access to the cloud network requires two authentication steps; authorized users must log on to the corporate network and then authenticate using separate credentials through a secure shell (SSH) jump box server (SOC: LA-4). 11. Asset configuration and security Endpoint detection and response (EDR) technology is installed and activated on all Code42 workstations to monitor for virus and malware infections. Endpoint devices are scanned in real-time. Monitoring is in place to indicate when an anti-virus agent does not check in for prolonged periods of time. Issues are investigated and remediated as appropriate. Virus definition updates are automatically pushed out to endpoint devices from the EDR technology as they become available. (SOC: LA-11). Code42 uses full-disk encryption on endpoint devices. Endpoint devices are monitored and encrypted using industry recognized tools. Code42 has tools to identify and alert IT administrators of discrepancies between Code42 Security Policies and a user's endpoint settings (SOC: LA-12). Code42 maintains and regularly updates an inventory of corporate and cloud infrastructure assets, and systematically reconciles the asset inventory annually (SOC: OPS-5). 12. Threat and vulnerability management and security testing Code42's Threat and Vulnerability Management (TVM) program monitors for vulnerabilities on an on-going basis (SOC: RM-3). Code42 conducts monthly internal and external vulnerability scans using industry-recognized vulnerability scanning tools. Identified vulnerabilities are evaluated, documented and remediated to address the associated risk. (SOC: RM-6). External penetration tests are conducted annually by an independent third party. Critical findings from these tests are evaluated, documented and remediated (SOC: RM-7). 13. Logging and monitoring Code42 continuously monitors application, infrastructure, network, data storage space and system performance (SOC: OPS-1). Code42 utilizes a security information event monitoring (SIEM) system. The SIEM pulls real-time security log information from servers, firewalls, routers, intrusion detection system (IDS) devices, end users and administrator activity. The SIEM is configured for alerts and is monitored on an ongoing basis. Logs contain details on the date, time, source, and type of events. Code42 reviews this information and works events worthy of real-time review (SOC: OPS-2). 14. Change management Code42 has change management policies and procedures for requesting, testing, and approving application, infrastructure, and product related changes. All changes receive a risk score based on risk and impact criteria. Low risk changes generate automated change tickets and have various levels of approval based on risk score. High risk changes require manual change tickets to be created and are reviewed by approvers based on change type. 
Planned changes to the corporate or cloud production environments are reviewed regularly. Change documentation and approvals are maintained in a ticketing system (SOC: CM-1). Product development changes undergo various levels of review and testing based on change type, including security and code reviews, regression, and user acceptance testing prior to approval for deployment (SOC: CM-2). Following the successful completion of testing, changes are reviewed and approved by appropriate managers prior to implementation to production (SOC: CM-3). Code42 uses dedicated environments separate from production for development and testing activities. Access to move code into production is limited and restricted to authorized personnel (SOC: CM-9). 15. Secure development Code42 has a software development life cycle (SDLC) process, consistent with Code42 Security Policies, that governs the acquisition, development, implementation, configuration, maintenance, modification, and management of Code42 infrastructure and software components (SOC: CM-4). Prior to the final release of a new Code42 system version to the production cloud environment, code is pushed through lower tier environments for testing and certification (SOC: CM-6). Code42 follows secure coding guidelines based on leading industry standards. These guidelines are updated as needed and available to personnel via the corporate intranet. Code42 developers receive annual secure coding training (SOC: CM-7). Code42 utilizes a code versioning control system to maintain the integrity and security of the application source code (SOC: CM-8). 16. Network security Code42 uses network perimeter defense solutions, including an IDS and firewalls, to monitor, detect, and prevent malicious network activity. Security personnel monitor items detected and take appropriate action (SOC: LA-15). Firewall rule changes (that meet the criteria for the corporate change management criteria) follow the change management process and require approval by the appropriate approvers (SOC: LA-16). Code42’s corporate and cloud networks are logically segmented by virtual local area networks (VLANs) and firewalls monitor traffic to restrict access to authorized users, systems, and services (SOC: LA-17). 17. Third party security Code42 assesses and manages the risks associated with existing and new third party vendors. Code42 employs a risk-based scoring model for each third party (SOC: MON-2). Code42 requires third parties to enter into contractual commitments that contain security, availability, processing integrity and confidentiality requirements and operational responsibilities, as necessary (SOC: COM-9). Code42 evaluates the physical security controls and assurance reports for data centers on an annual basis. Code42 assesses the impact of any issues identified and tracks any remediation (SOC: MON-3). 18. Physical security Code42 grants access to data centers and Code42 offices by job responsibility, and access is removed as part of the Code42 separation or internal job transfer process when access is no longer required (SOC: LA-21; SOC: LA-22). Access to Code42 offices is managed by a badging system that logs access, and any unauthorized attempts are logged and denied. Code42 personnel and visitors are required to display identity badges at all times within Code42 offices. Code42 maintains visitor logs and requires visitors to be escorted by Code42 personnel (SOC: LA-23). 19. 
Oversight and audit Internal audits are aligned to Code42’s information security program and compliance requirements. Code42 conducts internal control assessments to validate that controls are operating effectively. Issues identified from assessments are documented, tracked and remediated (SOC: MON-1). Internal controls related to security, availability, processing integrity and confidentiality are audited by an external independent auditor at least annually and in accordance with applicable regulatory and industry standards. 20. Business continuity plan Code42 maintains a Business Continuity Plan (BCP) and a Disaster Recovery Plan to manage significant disruptions to Code42 operations and infrastructure. These plans are reviewed and updated periodically and approved annually by the Chief Information Security Officer (SOC: A-5). Code42 conducts business continuity exercises to evaluate Code42 tools, processes and subject matter expertise in response to specific incidents. Results of these exercises are documented and issues identified are tracked to remediation (SOC: A-6). 21. Human resources security Code42 has procedures in place to guide the hiring process. Background verification checks are completed for Code42 personnel in accordance with relevant laws and regulations (SOC: ORG-5). Code42 requires personnel to sign a confidentiality agreement as a condition of employment (SOC: C-2). Code42 maintains a disciplinary process to take action against personnel that do not comply with company policies, including Code42 Security Policies (SOC: ORG-3).
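Purely as an illustration of the encrypt-at-source pattern that section 2 of this ISA describes (AES-256 encryption controlled by the data owner, with checksums that let a destination detect tampering or corruption without holding the keys), here is a minimal Python sketch. It is not Code42's implementation; the library choice, key handling and sample data are assumptions.

# Illustrative only: sketches the section 2 pattern, not Code42's actual system.
# Requires the third-party "cryptography" package.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key stays with the data owner
nonce = os.urandom(12)                      # unique nonce per encryption

plaintext = b"customer file contents"
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

# Checksum computed over the ciphertext: a destination can verify integrity
# of what it received without ever seeing the encryption key.
checksum = hashlib.md5(ciphertext).hexdigest()

def destination_verify(received_bytes, expected_md5):
    return hashlib.md5(received_bytes).hexdigest() == expected_md5

assert destination_verify(ciphertext, checksum)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext  # only the key holder can decrypt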
https://support.code42.com/Terms_and_conditions/Legal_terms_and_conditions/Information_security_addendum
I'm grateful for the progress I've made thus far and am looking forward to continuing my personal growth journey in the months and years to come. The first step in any journey is always the hardest. Whether it's starting a new job, moving to a new city, or embarking on a new exercise routine, getting started is always the most difficult part. But once we take that first step, things usually get easier and more enjoyable from there. Finally, make sure to read through your essay again to check for any typos or other errors. To start with, read through your essay carefully, correcting any grammar or spelling errors. However, if appropriate, a writer can call readers to action or ask questions. Make sure that the conclusion is powerful enough for readers to remember it. In most cases, the introduction and the conclusion are the only things your audience will remember. In this guide, we explore in detail how to write a good reflective essay, including what makes a good structure and some advice on the writing process. The purpose of a reflective essay is not only to describe personal experiences, but also to explore the deeper meaning and significance of those experiences. As such, reflective essays can be both introspective and thought-provoking. For example, learners should discuss a single idea in each section when covering how to write a reflective essay. In this case, the first sentence contains a claim that supports the thesis statement. Also, you examine how those events have changed, evolved, or developed you. This is because it will vary according to the intended audience. Let's begin by defining what we mean by reflective writing. You can either use any of the suggested topics for a reflective essay or invent a new one that is closely connected to your life and the experiences you have lived. The title page, abstract, main body, and references should all be included in your reflective essay. If you're writing a reflection on a certain text, annotate your initial feelings. Due to the specifics of our business, our clients often provide us with their personal information and details from their lives. This is where we rely on the best encryption methods to protect the data. Needless to say, we keep the data that our customers provide on the order form confidential. The only thing a student should keep in mind when looking for a custom writer online is the trustworthiness of the writing company that offers the services. A strong writing platform should offer guarantees to its clients and carry them out properly. While you can find out about the promises of the company on the main page of its website, the only way to verify the fulfillment of those promises is through customer reviews. I can convey a simple action a million different ways, and I mastered how to explore each to find perfection in my written words. I also picked up new flexibility in my writing by opening my mind to different scopes of expression. A natural thinker and writer at heart, I thought I understood creative expression and wordplay… Upon looking at that "D" on the paper, I realized I must push myself harder and explore the depths my writing could reach.
Not only did I learn to sharpen my technical writing chops, but I have also discovered how to dig into my creative soul and view my emotions and experiences in a whole new way. In a reflective essay, you should follow a five-paragraph format. However, you can add extra paragraphs depending on your chosen topic. When you write a reflective essay for the first time, you should read the examples. It is a piece of writing where individuals record their thoughts, experiences, and ideas. The key to writing a personal journal is being able to pay attention to what is happening around you. It is reflective because the author reflects on what they have written. Reflecting on a personal experience may seem like a straightforward essay to write. However, to ace your reflection paper, dive deeply into your emotions and select a topic that triggers a strong emotional response.
https://www.rocksolid.ae/the-method-to-write-a-reflective-essay-with-pattern-essay-instance/
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::6eda858dc7a41ff49add00e6e0869075
Advaita Philosophy
Advaita philosophy was advocated by Sri Sankara in the 8th century AD. The six systems of Indian philosophy are Nyaya, Vaisheshika, Sankhya, Yoga, Purva Mimamsa and Uttara Mimamsa or Vedanta. Great sages of the past advocated these systems of philosophy: Gautama founded the Nyaya school, Kanada the Vaisheshika school, Kapila the Sankhya system, Patanjali the Yoga system and Jaimini the Purva Mimamsa system. The system of Uttara Mimamsa, otherwise called the Vedanta system of philosophy, has several founders. It is primarily divided into three schools, namely Advaita, Visishta Advaita and Dvaita, and each of these schools has its own founder: Sri Sankara advocated the Advaita system, Sri Ramanuja the Visishta Advaita and Sri Madhva the Dvaita system of Vedanta. The term 'Vedanta' in Sanskrit literally means 'the end of the Vedas'. According to the Advaita system of philosophy, the universe is the manifestation of the supreme soul called 'Brahman'. Every living being in this universe is a part of this supreme soul. It is the body that gets destroyed, since it is perishable; the soul is never destroyed, since it is a part of the supreme Brahman. The state of immortality is the highest goal of a human being according to Advaita. One can become immortal only if one ceases to be born again and again, and one is not born again if one attains liberation. Hence liberation, or 'moksha', is the supreme goal of human life. The entire universe is nothing but illusion, and hence its appearance is illusory in nature, like the vision of a serpent in a rope in insufficient light. When there is enough light to perceive the real nature of the rope, the truth becomes evident. In the same way, the truth about Brahman is established only when ignorance is removed. Once ignorance goes, Brahman becomes clear to the mind, the real nature of the universe becomes known, and the truth about eternity is finally established.
https://guides.wikinut.com/Advaita-Philosophy/3j63jclu/
Business professional with over 4 years of technical experience in management support, CRM/ERP software, and growing consumer product portfolios. I excel best in opportunities that allow me to utilize my creative skills and analytical thinking. My previous work experience has been involved in the following industries: Financial Market, Technology, and E-Commerce.
Data Entry & Admin: Data Analytics, Data Entry, Data Mining, Excel, Qualitative Research – $55

Caity Brady (BASIC) – United States, Santa Ana
Music Therapist – Data Entry & Admin
Please see Resume for further details. To meet character requirement: experience includes several positions in the field of music therapy, education, healthcare, food service, call center, stage manager for concerts, administrative assistant.
Data Entry & Admin: Data Entry, Microsoft Outlook, Qualitative Research, Call Center, Microsoft Office – $25

Corinne Pembroke (BASIC) – United States, Los Angeles
Recent graduate – Data Entry & Admin
I am a recent graduate from the University of California, Los Angeles with a B.A. in Economics. My coursework consisted of a variety of classes including econometrics, macro and micro economics, and several interdisciplinary courses that incorporated Microsoft Excel, Microsoft PowerPoint, and STATA. Topics covered in the economics major gave me a strong background in finance and presented me with the economic framework in which to analyze human behavior and address broader policy issues. I have found this particularly useful in today's context, where the effects of a global pandemic require a reevaluation of the sustainability of business operations nationwide. While attending university, I gained work and leadership experience that advanced my professional and technical skills. From October 2018 to December 2019 I held a wealth management internship position at UBS Financial Services. My day-to-day tasks included constructing cash flow projections, assessing and analyzing the performance of client portfolios and individual investment accounts, creating content for marketing efforts, and managing the calendars and email correspondence for two financial advisors. This position exposed me to the demands of a fast-paced professional environment: time management, attention to detail, and interpersonal communication were essential to the success of myself and the team. It also aided in the development of my skills in Microsoft Excel (VLOOKUPs, formulas, and pivot tables), Word, and PowerPoint, as they were incorporated in my daily job functions. From January 2018 to October 2018 I held the position of research analyst at the UCLA Economics Department. My primary job function was to support three UCLA economics professors on several long-term studies by performing statistical, qualitative, and quantitative analyses to gain insights and feedback using STATA. Additionally, I was tasked with developing macros, special formulas, and STATA programs to merge, scrub, and analyze data sets with more than 27 million data points.
https://wono.io/hire-freelancers/qualitative-research-skills-in-united-states
Forbes reports, a federal judge in Florida threw out the federal government’s mask mandate for airports, airplanes and other public transportation Monday, ruling the Centers for Disease Control and Prevention exceeded its authority by imposing the mask requirement days after the agency extended it another two weeks. U.S. District Judge Kathryn Kimball Mizelle issued a ruling that declared the mask mandate unlawful and blocked it by vacating the order and sending it back to the CDC “for further proceedings.” Additionally, it appears the Biden administration has conceded the judge’s ruling that the mask mandate was unlawful: “TSA (Transportation Security Administration) will not enforce its Security Directives and Emergency Amendment requiring mask use on public transportation and transportation hubs at this time,” the official said. United Airlines just lifted its mask mandate for Domestic flights: Effective immediately, masks are no longer required at United on domestic flights, select international flights (dependent upon the arrival country’s mask requirements) or at U.S. airports. While this means that our employees are no longer required to wear a mask – and no longer have to enforce a mask requirement for most of the flying public – they will be able to wear masks if they choose to do so, as the CDC continues to strongly recommend wearing a mask on public transit. One day after taking office, Biden issued Executive Order (EO) 13998, ordering the Centers for Disease Control and Prevention (CDC) to develop the Mask Mandate, and for various other federal agencies like the Federal Aviation Administration (FAA) and Transportation Security Administration (TSA) to implement the CDC’s decisions. EO 13998 also covered airports and other public transportation, such as buses. CDC issued its initial order on January 29, 2021, pursuant to Biden’s EO. Among other things, it requires almost everyone ages two and up to wear masks in airports and on airplanes, with certain narrow exceptions such as between bites when they are eating. Various legal challenges have been filed against the mandate. Most recently, almost two dozen states filed a lawsuit raising several statutory and constitutional objections to the Mask Mandate. Monday’s ruling was on one of the earliest cases, filed in July 2021. The lawsuit alleged that the Mask Mandate violated the Administrative Procedure Act (APA) on three separate grounds: First, that issuing the order is beyond CDC’s statutory authority granted by Congress in 42 U.S.C. § 264a. Second, that the CDC’s action was really a rule (i.e., a regulation) rather than an order, and therefore had to go through a process of public notice and opportunity for public comment before taking effect. And third, that it violates the APA because it is “arbitrary and capricious,” meaning that it was not the result of reasoned decisionmaking. “The government purports to discover this unheralded power to regulate how individuals appear and behave in public in a long-extant statute—one over seventy years old,” Judge Kathryn Mizelle wrote. “This history suggests that the power the government sees in § 264(a) is a mirage.” “The only reason the Mandate cites is the public health emergency caused by COVID-19,” Mizelle continued in her 59-page opinion, rejecting the Biden administration’s argument that the Mask Mandate could be imposed by an agency order. 
Instead, she decided that CDC’s restriction was a regulation that had to go through months of public comments, because “good cause to suspend notice and comment must be supported by more than the bare need for the regulations.” Mizelle also held that the CDC did not account for various objections, issues, and evidence opposing the Mask Mandate. This made the Mandate arbitrary and capricious, because “irrespective of whether the CDC made a good or accurate decision, it needed to explain why it acted as it did,” she said. “Because our system does not permit agencies to act unlawfully even in pursuit of desirable ends, the Court declares unlawful and vacates the Mask Mandate,” Mizelle concluded. Just this morning, Delta CEO Ed Bastian told WaPo that:
https://thejewishvoice.com/2022/04/federal-judge-declares-biden-admin-mask-mandate-for-planes-trains-unlawful/
We believe that as various policies such as credit easing ("wide credit") are expected to enter the implementation stage, corporate funding pressures and downward pressure on the economy will ease, and the fundamentals may turn out better than investors' expectations. MSCI is increasing the inclusion weight of A-shares, policy expectations around the "two sessions" are building, and risk appetite and stock market liquidity continue to improve. Overall, the current "Wangchun market" will continue to exceed market expectations. Our January monthly report was "Layout the Wangchun market"; on the 6th, ""Wangchun market" is booming"; on the 13th, "Policy warm wind boosts the "Wangchun market""; on January 20, "The Wangchun market is on schedule"; and on January 27, "The Wangchun market is booming". Since January 2019, the Shanghai Composite Index has successfully climbed from 2,455 points to 2,600 points. In our 2019 annual industry strategy we were among the first to propose being optimistic on A-shares and seizing the "Wangchun market". After the Lunar New Year, how will the market play out? We continue to maintain an optimistic and positive overall judgment, enjoy the "Wangchun market", and recommend the "four big diamonds" (large property developers / large brokers / big innovation / big infrastructure), plus a strategic focus on the home appliance and automotive sectors.
1. Global: During the Spring Festival, the US and Hong Kong markets rose slightly, while the European market was weakened by concerns over economic growth. On the policy level, the monetary policies of the world's major central banks show signs of marginal easing: (1) India cut interest rates; (2) major economies such as the United States, Britain and Australia have sent signals of more flexible and accommodative monetary policy. From the perspective of the global market, the external environment is broadly stable, the probability of A-share sentiment being suppressed is small, and risk appetite in the A-share market is expected to rise. This further supports the "global restructuring" perspective of our annual strategy report "Reconstruction of the Innovation Age": it is not simply foreign capital inflows or MSCI inclusion, but a re-allocation of global assets, the choice of global funds among countries and between developed and emerging markets, from which the Chinese market benefits most.
2. Domestic: The wave of goodwill-related "landmines" in earnings reports has subsided, the expected decline in economic earnings is gradually being priced in, and reform continues on four fronts (government and enterprises, state-owned enterprises and private enterprises, regulators and the market, market micro-entities, etc.); "China restructuring" is happening, and risk appetite and valuations continue to improve. Nearly 20 supporting documents for the science and technology innovation board have been drafted, taking capital market reform a step further. As a whole, the rules are closer to those of developed markets: information disclosure is fuller, pricing is more market-oriented, and the delisting system is more stringent. For example: (1) In the shareholding structure, shares with differentiated voting rights are allowed, but the arrangement cannot be changed after listing, and the listing conditions for such shares are stricter. (2) For price limits, no daily limit is set for the first five trading days, and a 20% limit applies thereafter. (3) For the listing price, direct pricing is abolished and market-based inquiry pricing is adopted across the board; all shares are issued through inquiry, and only institutions participate.
(5) The most stringent delisting system: no longer a single continuous-loss indicator, but two major categories of indicators, namely net profit after deductions being negative and net assets being negative. The delisting period is shortened to two years, with ST treatment applied in the first year the standard is not met. For falsified financial data, the company is delisted immediately once the evidence is confirmed.
Can the "Wangchun market" exceed market expectations? The next month is crucial. In recent exchanges with the market, the question most people raise is how long the "Wangchun market" can continue. Looking ahead to the next month, several pending events are critical to its sustainability: 1) in mid-February, January financial data will be released; 2) on February 28, MSCI will decide on raising the inclusion weight of A-shares for 2019; 3) on March 5, the national "two sessions" will set out economic growth targets and reform measures; 4) in mid-March, economic and financial data for the first two months will be released. We believe that as various policies such as credit easing ("wide credit") are expected to enter the implementation stage, corporate funding pressures and downward pressure on the economy will ease, and the fundamentals may turn out better than investors' expectations. MSCI is increasing the inclusion weight of A-shares, policy expectations around the "two sessions" are building, and risk appetite and stock market liquidity continue to improve. Overall, the current "Wangchun market" will continue to exceed market expectations.
http://westdollar.com/sbdm/finance/a/201902101040660538.html
Wingecarribee Family Support Service (WFSS) needs to collect personal (identifying) and sensitive (e.g. health, mental health, etc.) information from people in order to deliver quality and appropriate service to the children and families we serve. WFSS respects the dignity and confidentiality of all: personal and sensitive information will be treated with the highest respect and confidentiality (Privacy Amendment (Private Sector) Act 2000). Information about the family is confidential within WFSS, which means it may be accessed by the Team Leader and others within the team when necessary for quality and effective service provision. Information is only shared with outside services if written permission is granted by the family, or when we become aware of any circumstance where a child is at serious risk of harm (Mandatory Reporting), at which time we must notify Family and Community Services (Section 248 of the Children & Young Persons (Care & Protection) Act 1998). Under Section 16A of this act, we may share information with another agency, but only when there is concern about the safety, welfare and/or well-being of a child. When there is a requirement to break confidentiality, the Team Leader must be consulted, except when an urgent report is needed for a child's imminent safety. We will inform you when we need to make a notification or share information (unless doing so places someone in the home at risk) and support you to make the changes that will help your child/children live safely and reach their full potential. If you have any concerns about how WFSS manages your privacy and confidentiality, please contact us: phone us and ask to speak to a family worker. Wingecarribee Family Support Service has been working with families in the Southern Highlands for over 25 years. Our mission is to strengthen and support families and children to help build a community where children reach their full potential.
http://wfss.org.au/about-us/privacy-and-confidentiality/
OTTAWA—Billions of dollars are flying out of federal coffers as part of the government’s COVID-19 wage support program. But which companies are getting the money? Ottawa won’t say — at least not yet. According to Jeremy Bellefeuille, press secretary to National Revenue Minister Diane Lebouthillier, it’s only a matter of time until it does. Bellefeuille pointed out that the law passed in April to create the Canada Emergency Wage Subsidy allows the government to publish the names of employers receiving it. He said the government will release a list of them at some unspecified point in time, along with the amount of public money each has received. “The process for making this information available is still under consideration,” Bellefeuille said by email Monday. He referred further questions to the CRA’s media relations department. Agency spokesperson Christopher Doody told the Star by email that the CRA plans to “provide an update” on the publication of this information by the end of August. Doody did not answer questions about why it hasn’t been released yet, or what factors are being considered before it is published, but said the CRA “is focusing” on recent changes to extend and expand eligibility for the wage subsidy. Federal statistics show the government paid out more than $25 billion in wage subsidies to almost 800,000 applicants as of Aug. 2. That included 190 entities that received more than $5 million. James Cohen, executive director of the anti-corruption group Transparency International Canada, said the government should be as open as possible about how it is spending huge amounts of public money during the pandemic crisis. For the wage subsidy, that transparency would be increased if Canada created a registry that makes it easier to see who owns companies in this country. “Additional transparency is key during this time,” Cohen said. “Canadians are hurting and need help, but they also need to trust the process, that the help is being distributed correctly, is being overseen correctly.” Others questioned the need for the government to release which companies are receiving how much from the wage subsidy. Kevin Page, the former Parliamentary Budget Officer and president of the University of Ottawa’s Institute of Fiscal Studies and Democracy, said Monday that “a case can be made that we do not need company-specific information” about the wage subsidy. He said general information about company sizes and industries could be enough to evaluate the program’s performance, and regular government audits “should be sufficient” to monitor how the money is spent. “To qualify, firms need to demonstrate significant year over year declines in revenues. While this should not be surprising in an economy that is shrinking by record amounts (post Depression) in 2020, it may be sensitive to business and employees,” Page wrote by email. Billed as a key plank in the federal government’s response to the COVID-19 pandemic, the wage subsidy is designed to help businesses, non-profits, political parties and other entities keep workers on their payrolls as revenues are battered amid this year’s economic turbulence. In recent months, applications for the wage subsidy trailed those of the more-popular Canada Emergency Response Benefit for people who lost work during the pandemic. Last month, Parliament extended the life of the subsidy program and broadened its eligibility criteria, so that companies that have lost less than 30 per cent revenue during the pandemic can now qualify for the subsidy. 
Allan Lanthier, a former government adviser and retired partner of an international accounting firm, argues the expanded wage subsidy — and the loosening of the requirement to have experienced a 30-per-cent revenue decline — lets corporations that don’t need Ottawa’s help receive public money anyway during the crisis. But Lanthier questioned whether the government could release how much companies get from the wage subsidy without breaching confidentiality rules in the Income Tax Act. He said he would rather see a better designed program that doesn’t “open the door to major corporations” that don’t need government help. “Rather than publishing the names, I think they should have written the rules in a sensible way,” he said. NDP MP Gord Johns said Monday that his party’s main priority is to pressure the government to ensure more entities can get the wage subsidy to protect jobs during the pandemic. At the same time, however, “we should know where the funds are going when it comes to all of the spending for the pandemic,” he said.
https://www.thestar.com/politics/federal/2020/08/10/which-companies-are-benefiting-from-ottawas-multi-billion-dollar-wage-subsidy-program-the-government-wont-say-yet.html
We’re looking for a motivated commercial real estate administrative assistant to support firm agents with their daily administrative needs. Responsibilities include but are not limited to administrative duties such as keeping track of all transaction documents in the customer relationship management system (CRM), monitoring and notifying clients of important deadlines, and planning appointments. You will also assist with marketing needs, including email campaigns, scheduling LinkedIn posts, and creating newsletters. The ideal candidate should be a great communicator who is driven, highly detail-oriented and organized.
Responsibilities
- Manage the agent's CRM to ensure client data is accurate and up to date
- Communicate with clients regarding transactions, contracts, LOIs, RFPs, leases, etc.
https://www.bullrealty.com/jobs/view/administrative-assistant
By Veiko Lember, Rainer Kattel and Tarmo Kalvet
This book maps the latest developments in public procurement of innovation policy in a variety of contexts and analyzes the evolution and development of some of the policy options in broader institutional settings. In doing so, it addresses major theoretical and practical gaps: on the one hand, there is a rising interest in public procurement as a policy tool for spurring innovation; on the other hand, existing theory, with a few notable exceptions, is guided and often limited by old applications, primarily in the defence industries. By carefully examining the cases of eleven countries, the book points to a much more nuanced public procurement presence on the innovation policy landscape than has been acknowledged in the academic and policy debates to date.
Similar production & operations books
Innovating in a Learning Community: Emergence of an Open Information Infrastructure in China's Pharmaceutical Distribution Industry – How do businesses jointly develop open information infrastructures? To answer this question, this book draws on the results of a longitudinal research project covering the development of pharmaceutical distribution in China from 2004 to 2012, focusing on the emergence and subsequent evolution of industry-wide information infrastructures.
This book begins with the fundamental premise that a service is built from the 3Ps – products, processes, and people. These entities and their sub-entities interlink to support the services that end users require to run and support a business. This widens the scope of any availability design far beyond hardware and software.
Modeling Approaches to Natural Convection in Porous Media – This book offers an overview of the field of flow and heat transfer in porous media and focuses on presenting a generalized approach to predicting drag and convective heat transfer within porous media of arbitrary microscopic geometry, including reticulated foams and packed beds. Practical numerical methods for solving natural convection problems in porous media are presented, with illustrative applications for filtration, thermal storage and solar receivers.
The Essentials of Supply Chain Management: New Business Concepts and Applications – This is today's essential introduction to supply chain management for today's students and tomorrow's managers – not yesterday's! Prof. Hokey Min focuses on modern business concepts and applications, transcending outdated logistics- and purchasing-driven approaches still found in many competing books.
Extra info for Public Procurement, Innovation and Policy: International Perspectives
Example text: I refer to it as MIISR 2009. [9: Including spending on technology infrastructure such as the national broadband network, direct spending on science and R&D, education and green technologies.] This points to both a lack of appreciation of the potential power of public purchasing of innovation (PPI) as a techno-industrial upgrading device and the weakness of elite cohesion around the idea of using PPI to this end. [10: This figure represents the combined spending of Australia's Commonwealth and state governments.]
This catalogue is a kind of wish list of high-priority technologies yet to be developed for China, based on domestic demand and needs. Here the government looks for unsolicited proposals from enterprises in the listed areas and provides those enterprises with a mix of supporting measures, from R&D subsidies to tax reductions and pre-commercial procurement (Li 2011). (…), which is basically a measure mixing innovation-oriented and buy-national procurement policies. Thus, taken from the above, public R&D procurement policy usually takes place via specific programmes created to articulate demand and tackle structural hindrances to socially and economically relevant radical innovations. The Japanese government has large in-house competence in its various agencies and ministries. This makes it possible for the agencies not only to define the strategy, but also to be involved in its implementation. A procurement policy based on the most effective use of the limited resources is designed for every technology and product. Depending on the technology, the number of products to be constructed and the anticipated costs, a policy of competition, co-ordination or a mix of the two is chosen. (1978, 75) Nevertheless, there are considerable lacunae in today's knowledge of how countries, in the current policy space, develop and nurture innovation-oriented public procurement policies and related policy expertise.
http://folar.org/lib/public-procurement-innovation-and-policy-international-perspectives
The study, published in the journal Geospatial Health, shows that healthcare professionals may be able to track dangerous ticks by using data provided by veterinarians. "We don't screen ourselves for exposure," Jenna Gettings, one of the lead authors of the study and a wildlife disease researcher at the University of Georgia, told New Scientist. "The only time people are tested for tick-borne disease is when they have symptoms. Whereas with dogs, we screen healthy animals." Most dog owners bring their pet to the veterinarian once a year for a check-up, and professionals enter the data generated from the visit into a central database. Unlike the personal health records we use, pets' health records can be shared universally by veterinarians, Gettings said. Her team used data from more than 16.5 million dog check-ups conducted between 2012 and 2016 to see if animals produced antibodies against Borrelia burgdorferi, the tick-borne bacterium that can cause Lyme disease. Because dogs often don't go anywhere without their owners, the data helps scientists piece together where humans are most at risk of contracting Lyme disease. The team also found an association between canine data and the rates of Lyme disease in humans in the same analysis. In places where at least 10 percent of dogs tested positive for exposure to the tick-borne bacteria, there were increases in recorded diagnoses of Lyme disease. Interestingly enough, as the number of pups exposed to the bacteria increased further, the amount of Lyme disease found in humans decreased. But not many places in the United States have such high levels of canine exposure to this strain of bacteria in particular. "We don't fully understand why the association drops off," Gettings said. "It may be that we don't have a ton of data at that level." Researchers are planning to use data from annual canine check-ups to understand how Lyme disease is changing over time, but the authors of the study report that their findings also present some challenges. This method of screening doesn't account for when a dog is traveling and contracts Lyme disease outside of its usual surroundings, for example. For those in the Northeast and the upper Midwest, where Lyme disease has become endemic, healthcare professionals are constantly educating their patients about tick safety. But for communities that may be seeing an influx of tick activity for the first time, a visit to the veterinarian could serve as a warning for health experts that their residents need to be on high alert in the future.
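The analysis the article sketches, aggregating routine canine test results by area and relating them to human case rates, can be pictured with a toy calculation. The snippet below is only a hedged sketch with invented numbers and made-up column names; it is not the study's data or its statistical model, and the authors used far richer spatio-temporal methods than a simple correlation.

# Hypothetical sketch: relate county-level canine seroprevalence to human
# Lyme incidence, in the spirit of the study described above.
# All values and column names here are illustrative assumptions.
import pandas as pd

# One row per canine test: county, and whether B. burgdorferi antibodies were found
tests = pd.DataFrame({
    "county":   ["A", "A", "A", "B", "B", "C", "C", "C"],
    "positive": [1,   0,   1,   0,   0,   1,   1,   0],
})

# Human cases per 100,000 residents, by county (made-up numbers)
human = pd.DataFrame({
    "county":    ["A", "B", "C"],
    "incidence": [40.0, 2.0, 55.0],
})

# Share of dogs testing positive in each county
sero = (
    tests.groupby("county")["positive"]
    .mean()
    .rename("canine_seroprevalence")
    .reset_index()
)

# Join the two tables and report a simple correlation as a stand-in
# for the study's more sophisticated spatial models
merged = sero.merge(human, on="county")
print(merged)
print("correlation:", merged["canine_seroprevalence"].corr(merged["incidence"]))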
A Cinema of Poetry brings Italian film studies into dialogue with fields outside its ordinary purview by showing how films can contribute to our understanding of aesthetic questions that stretch back to Homer. Joseph Luzzi considers the relation between film and literature, especially the cinematic adaptation of literary sources and, more generally, the fields of rhetoric, media studies, and modern Italian culture. The book balances theoretical inquiry with close readings of films by the masters of Italian cinema: Roberto Rossellini, Vittorio De Sica, Luchino Visconti, Michelangelo Antonioni, Federico Fellini, Pier Paolo Pasolini, Bernardo Bertolucci, and others. Luzzi's study is the first to show how Italian filmmakers treat such crucial aesthetic issues as the nature of the chorus, the relation between symbol and allegory, the literary prehistory of montage, and the place of poetry in cinematic expression, what Pasolini called the "cinema of poetry." While Luzzi establishes how certain features of film, its link with technological processes, capacity for mass distribution, and synthetic virtues (and vices) as the so-called total art, have reshaped centuries-long debates, A Cinema of Poetry also explores what is specific to the Italian art film and, more broadly, to Italian cinematic history. In other words, what makes this version of the art film recognizably "Italian"?
In the cool, ancient sanctuary of Nemi rests the spirit of Diana, the Benevolent-Malign Goddess whose priests once stalked the sacred grove. Now Hubert Mallindaine, self-styled descendant of the Italian huntress, has claimed religious rights to a villa at Nemi – a villa he will kill for.
Covering the period from the French Revolution to the end of the nineteenth century, this volume sets the events leading to Italian Unification and the creation of an independent Italian state in the broader context of nineteenth-century European history. Challenging the view that the political failings of the Risorgimento and Italy's economic and social backwardness paved the way for fascism in the twentieth century, it emphasizes how similar Italy's social and political development was to that of other modernizing European states in the same period, while explaining why Italy's experience of modernization in the nineteenth century also proved particularly difficult.
Contents: Table of Contents. Introduction to the third edition: Edward Chaney. Preface to the first edition. 1. The Beginnings of Interest: mid-Sixteenth to mid-Seventeenth Centuries. 2. From Description to Speculation: mid-Seventeenth to late Eighteenth Centuries. 3. Taste for Italian Painting: I, Sixteenth to late Eighteenth Centuries.
Figure 2: Claudia reenters the frame in the opposite direction from Figure 1, thereby jumping the line of the traditional 180-degree film axis (L'avventura). Figure 3: Antonioni places Claudia and Sandro in disjointed spatial relation to call attention to the frame's abstract compositional arrangement (L'avventura).
…in the 1940s. Indeed, Antonioni follows Rossellini in using this final scene to suggest the distance he had traveled from the neorealist themes of his early aesthetic development and its documentary-style cinematography. Figuratively speaking, their grief relates to the film's larger message about the loss of the ideals associated with neorealism.
The ending of L’avventura may have been “announced,” to quote Bertolucci’s epigraph, nearly a decade earlier in the enigmatic conclusion to Rossellini’s Voyage to Italy. After that film’s main characters, Alex and Katherine Joyce, encounter the lovers ensconced in the lava of Vesuvius and decide to divorce, they drive into the middle of a Neapolitan religious procession, where they will have an unexpected reconciliation at film’s end. Verga eschewed background description of any kind in reporting his characters and their relations with one another and the natural world, thereby achieving what Mikhail M. Bakhtin called the “heteroglossia” of a detached fictional world. Like Bakhtin’s archetypal prose author, Verga “does not express himself in [his characters]. . Rather, he exhibits them as a unique speech-thing, [and] they function for him as something completely reified” (Bakhtin, “Discourse in the Novel” 299). Verga’s novel conveys this process through the dialect speech patterns and proverbializing of the inhabitants of Aci Trezza, orchestrating what Bakhtin labels the “stratification of language” to express the worldviews of his characters and, more implicitly, the author’s own “intentional theme” (“Discourse in the Novel” 299).
http://www.s-pay.com.ua/index.php/kindle/a-cinema-of-poetry-aesthetics-of-the-italian-art-film
A federal judge has denied the city of Evanston’s motion to strike the amended complaint Skokie filed in federal court as part of the two towns’ ongoing battle over water rates. Federal court Judge Charles Kocoras made the ruling Aug. 21, and further outlined a schedule for how the case may proceed in the coming months. “In denying Evanston's motion, the court determined that the June 2018 filing of Skokie’s amended complaint did not violate procedure,” village of Skokie spokeswoman Ann Tennes said in a written statement issued hours after the ruling. “As a result, Skokie's action seeking equal protection for its residents and businesses against Evanston's discriminatory water rates will proceed.” Evanston has sold water to Skokie for more than 70 years, and the latest contract between the two towns expired Dec. 31, 2016. In the statement Tuesday, Skokie officials called the water rates Evanston now wants to charge “disparate.” But in previous contract negotiations, Evanston officials argued that Skokie’s rate was low and a proposal was made to raise it from $1.07 per 1,000 gallons to $2.06 per 1,000 gallons. Skokie officials disagreed with the proposed increase and the towns could not come to an agreement on a new rate. That led to Evanston suing Skokie in Cook County Circuit Court in September 2017, seeking a “declaratory judgment” to resolve the conflict. Then Skokie filed its own suit in U.S. District Court for the Northern District of Illinois Eastern Division, alleging that Evanston’s rate increase ordinance adversely affects the rights of the plaintiffs named in the complaint in a discriminatory way. Further, according to the lawsuit, the rate violates Skokie citizens’ rights. In its complaint, Skokie made the argument that Evanston’s higher rate would violate due process and equal protection rights guaranteed in the Fifth and Fourteenth amendments of the U.S. Constitution, hence the federal lawsuit. Three Skokie residents and one company are named as plaintiffs in that suit. Kocoras on Aug. 21 outlined a schedule for the next few months regarding Skokie’s federal lawsuit, which Evanston has asked the court to dismiss. Evanston has until Sept. 4 to file a new, “oversized brief” in support of the city’s motion to dismiss the case, according to court documents. Then, Skokie has until Oct. 2 to respond to Evanston’s motion to dismiss, court documents indicate. The legal documents state that after that, Evanston’s reply to Skokie’s response is due by Oct. 16. Finally, the “court will rule by mail” on the motion to dismiss, the court documents indicate, though a deadline for that ruling is not listed. Kocoras also allowed Skokie to proceed with an amended complaint to the lawsuit the village originally filed. In their Aug. 14 motion to dismiss the federal lawsuit, Evanston attorneys argued that Skokie “has no fundamental right to Lake Michigan water – and certainly no such right to water from Evanston.” Tennes explained that Kocoras did not rule on that and instead instructed Evanston to refile its motion to dismiss. In July, a Cook County judge put Evanston’s state court case on hold while the federal case continues. Evanston City Manager Wally Bobkiewicz said after this latest ruling Tuesday that he expects the federal case will ultimately be dismissed and the state case resume. “Absolutely” the case will return to the state courts, Bobkiewicz said. “It’s a contract issue,” and that’s where those types of cases belong.
https://www.chicagotribune.com/suburbs/evanston/ct-skr-federal-water-lawsuit-ruling-tl-0830-story.html
Stakeholder engagement continues to be an integral role for the department to understand and best meet customers’ needs and deliver liveable regions and active cities. Therefore, engagement with customers, community and business stakeholders is a key priority for TMR. Engagement activities facilitate a gathering of diverse perspectives that can contribute to developing innovative and collaborative solutions. During August 2018, TMR hosted the annual statewide and regional industry briefings in Brisbane and Mackay, focusing on the Queensland Transport and Roads Investment Program 2018–19 to 2021–22 (QTRIP). Each event provided industry partners with a detailed insight into the projects to be released to market during the 2018–19 financial year, as well as an overview of QTRIP, regional priorities, contract types, procurement processes and the Transport System Planning Program. In addition, targeted sessions were also held covering topics including asphalt, bitumen and quarry materials, Indigenous and small business, and heavy vehicles. These events offered a unique opportunity for the department to present its statewide program directly to key stakeholders, strengthening the ongoing partnerships with industry and local businesses. Survey results from attendees showed a positive response rate to the events, with 53 per cent of participants extremely satisfied with the events, and 86 per cent of attendees being extremely or somewhat likely to attend in the future. ‘Loved hearing from all the Regional Directors and putting names to faces. Loved the address from Neil Scales - he shared some valuable information. Was great to get a better understanding of what is planned and what TMR is looking for from suppliers.’ ‘Always no substitute in hearing first-hand about TMR’s program of works, key objectives and priorities.’ Feedback given by attendees at the QTRIP Industry briefings Throughout the 2018–19 financial year, TMR continued to grow and engage with the department’s online customer community through our online research platform, Transport Talk. With over 2640 members from across the state, the Transport Talk community enables our customers to share their thoughts, ideas and insights on transport-related topics. Customers can engage with TMR through online surveys and discussion groups that help us to shape the direction of Queensland’s transport future. In 2018–19, Transport Talk sought customer feedback on a range of TMR products, services and initiatives, some of which included: Transport Talk is a whole-of-TMR initiative that supports the department to better understand and connect with Queenslanders. It allows us to improve our products and services by integrating the voice of our customers into everything we do. For more information on Transport Talk As part of TMR’s ongoing activities to reduce permit burden on heavy vehicle operators and increase freight productivity and efficiency, TMR and the National Heavy Vehicle Regulator worked collaboratively during 2018 to develop a National Notice that allows 30 Performance Based Standards approved A-Doubles to operate between Toowoomba and the Port of Brisbane via the Warrego Highway, Logan and Gateway Motorways up to Concessional Mass Limits masses. The new National Notice came into effect on 9 October 2018 and provides significant administrative and permit cost savings as well as providing business certainty through operation under a five-year National Notice, rather than a 12-month permit. 
In a commitment to helping the heavy vehicle industry to meet their road safety obligations, TMR regularly hosts educational Truckie Toolbox Talk sessions around the state. These events provide the opportunity for truck drivers and operators to speak openly with transport inspectors on a wide range of industry topics from regulations and accreditations to permits, mass limits, fatigue management and more. Eight truckie toolbox talks were held last year at locations including Port of Brisbane, Maryborough, Gladfield, Millmerran, Coomera, Greenacres, Townsville and Cairns. This was the first year a truckie toolbox talk was held on the Gold Coast and was well received by industry. Since the first talk in 2015, the program has generated interest across Queensland with increased attendance numbers and positive feedback from the heavy vehicle industry continuing to rise. TMR’s Road Safety Officers also regularly attend these events to help other road users who stop-in with questions about vehicle safety, general load restraints and towing with caravans. Through major events, such as the delivery of TMR’s 21st Engineering Technology Forum, TMR is keeping ahead of emerging technology and looking for opportunities to partner with industry and learn from each other, while exploring and developing innovative ideas. TMR experts took this opportunity to share their knowledge by presenting alongside industry. Over 400 TMR and industry specialists attended the 2018 Engineering Technology Forum held from 18–20 September 2018 which explored emerging and existing technologies, innovation and their application to transport infrastructure. This year’s theme, ‘Transport for the future’, covered a range of topics, including transformative technologies, global trends and future opportunities. It brought together departmental specialists, engineering researchers, practitioners and industry from a range of disciplines. The forum saw delegates attending 84 presentations across 31 sessions offering a program of innovative and interactive presentations designed to ignite discussion and build networks. Amanda Yeates, Deputy Director-General (IMD) speaking at the 2018 Engineer Technology Forum in Brisbane. Since 2007 (except for 2012), we have been sponsoring Engineers Australia (EA) as a Principal Partner. Partnering with Engineers Australia (the peak representation body of engineers in Queensland) provides the department with the opportunity to influence the engineering profession. As part of our benefits TMR has access to professional engineers throughout Queensland, allowing us to drive greater collaboration and problem-solving opportunities with industry. The sponsorship relationship has seen TMR and EA collaborate on innovation challenges, women in engineering initiatives, industry panel discussion sessions and regional engagement programs. For the second year, we have partnered with the Bus Industry Confederation (BIC) to sponsor the Australasia Bus and Coach Conference. The initiative was the first joint (Australia and New Zealand) bus and coach industry conference with the theme ‘Moving People – Century 21’. Sponsoring the conference increased the department’s visibility with BIC members, demonstrating the department is supportive of the Australian bus industry and improved relationships between government and industry partners and stakeholders.
https://annualreport2019.tmr.qld.gov.au/Accessible-to-everyone/Engaging-with-industry
TTCS is a new bi-annual conference series, intended to serve as a forum for novel and high-quality research in all areas of Theoretical Computer Science. The conference is held in cooperation with the European Association for Theoretical Computer Science. There will be a number of satellite events at TTCS; these will feature presentations of early research results and position papers. Topics of interest include but are not limited to:
Track A: Algorithms and Complexity – algorithms and data structures, algorithmic coding theory, algorithmic graph theory and combinatorics, approximation algorithms, computational complexity, computational geometry, computational learning theory, economics and algorithmic game theory, fixed parameter algorithms, machine learning optimization, parallel and distributed algorithms, quantum computing, randomness in computing, theoretical cryptography.
Track B: Logic, Semantics, and Programming Theory – algebra and co-algebra in computer science, concurrency theory, coordination languages, formal verification and model-based testing, logic in computer science, methods, models of computation and reasoning for embedded, hybrid, and cyber-physical systems, stochastic and probabilistic specification and reasoning, theoretical aspects of other CS-related research areas (e.g., computational science, databases, information retrieval, and networking), theory of programming languages, and type theory and its application in program verification.
Submission
For the main conference, we solicit research papers in all areas of theoretical computer science. All papers will undergo a rigorous review process and will be judged based on their originality, soundness, significance of the results, and relevance to the theme of the conference. Papers should be written in English. Research papers should not exceed 15 pages in the LNCS style format. Multiple and/or concurrent submission to other scientific venues is not allowed and will result in rejection as well as notification to the other venue. Any case of plagiarism (including self-plagiarism from earlier publications) will result in rejection as well as notification to the authors' institutions. The proceedings of TTCS 2020 will be published in the Lecture Notes in Computer Science (LNCS) series, in accordance with the contract between Springer Nature Switzerland AG and the International Federation for Information Processing. Papers should be submitted through our EasyChair submission website: https://easychair.org/conferences/?conf=ttcs2020. The web site is open for submissions.
http://wikicfp.com/cfp/servlet/event.showcfp?eventid=98873&copyownerid=161397
In the strange land of Q'ai Du'un, Skylar meets a Zhinn who guides him over treacherous mountains to the land of the River People-and to the desperate woman, Naissa, queen of the Orai. Naissa must leave Q'ai Du'un to reunite with her other self on Earth, and only the ancient mystic, Djagg, can help her return there. To reach him, Skylar and Naissa must traverse perilous terrain, beset by the brutal Shadow Winds, as they follow the portents of seers they encounter along the way. Can the seekers reach Earth, where Naissa's other self lies comatose in a Nevada hospital, before her time runs out? About the Author Chas Fleischman studied art at the San Francisco Art Institute and later became an educational illustrator at the College of Marin. He also created and produced “Dag’s Bag,” a weekly irreverent comic strip in Marin County’s Pacific Sun. Later, he studied printmaking with the Fort Mason Printmakers. He now lives in Fort Bragg with his wife Jenny. Praise For… The Shadow Winds of Q'ai Du'un is a fun, beautifully illustrated romp, reminiscent of the classic hippy comics of the 1970s. Filled with history and mystique, Q'ai Du'un is built on a mythos that captures readers and pulls them in head first.
https://www.gallerybookshop.com/book/9781735471907
As part of the Comprehensive Addiction and Recovery Act (CARA), the first major piece of federal substance use disorder treatment and recovery legislation in over 40 years, the Department of Justice is awarding a total of $58.9 million nationwide to address the epidemic, of which $1.1 million will fund three New Jersey-based opioid abuse programs, including Camden County. The Camden County Opioid Abuse Diversion Program (CCOAD) aims to reduce opioid misuse and overdose fatalities for those involved in the justice system in Camden County. Initially, the County will conduct a comprehensive system assessment to understand this population, and using this information, provide case management and wrap-around services for those with an opioid addiction involved in the justice system. A $400,000 grant will be used to implement the program and WRI will serve as the project’s evaluator. The Camden County Correctional Facility will implement a system-level diversion and alternatives to incarceration project with its pre-trial intervention population. In this program, WRI will: 1) conduct a comprehensive system assessment to understand the target population, 2) assess issues and underlying causes to develop theory of change models and strategies to address them, and 3) facilitate program process and outcome assessment and evaluation. In New Jersey, the recidivism rate is nearly 30 percent in the first three years, according to a study completed in 2018, and substance abuse is among the top reasons offenders return. It costs Camden County more than $50,000 to incarcerate one individual per year. According to Freeholder Jonathan Young, liaison to the Department of Public Safety, “the opioid epidemic is a public health crisis in our community and across the country. This funding allows us to explore new ways to help those who are already struggling by providing services that will cut down on recidivism and save taxpayers money. We know many inmates come into our facility fighting against a dependency to opioids and now we can start taking a closer look on reentry and getting them the tools they need to be successful upon release.” Learn more here.
https://rand.camden.rutgers.edu/2019/05/14/camden-county-opioid-abuse-diversion-program/
Funerary monuments and grave markers are an important part of our heritage and deserve to be treated as such. They reflect the lives, beliefs, and attitudes to death of past communities and individuals. In essence, they provide a window into our ancestors' thoughts and deeds. From the dawning of the Reformation to the emergence of lawn cemeteries in the twentieth century, this talk endeavours to explore the rich social history funerary monuments can impart, weaving, at times, a long forgotten narrative.
About the Speaker
Lorraine Evans is a Mortuary Archaeologist and Death Historian specialising in nonconformist burial practices, funerary construction, and the social histories of mortality. A best-selling author, her latest book, Burying the Dead, explores the archaeological history of burying grounds. She is presently completing her PhD at the IISPG.
This workshop/course is presented with funding support from Learn Harrow. You will be required to register with Learn Harrow to take part in this workshop – instructions will follow in a separate email. It is really important that you register, as future funding for similar events depends on it.
Tuesday 8 February, 2–3pm
£2.50 per Zoom screen
Tuesday Talks are designed for adults but all ages are welcome!
https://headstonemanor.org/events/tuesday-talk-messages-from-the-dead-exploring-funerary-architecture-of-north-west-london/
Hydrophilic Fumed Silica Our hydrophilic (untreated) fumed silica powder and dispersion products provide exceptional performance benefits for a wide variety of applications and industries. Our CAB-O-SIL® and CAB-O-SPERSE® hydrophilic fumed silica products are used around the world to meet challenging performance requirements. Due to their unique particle characteristics, our hydrophilic fumed silicas provide superior performance benefits for many types of applications, including: - Adhesives and sealants - Chemical mechanical planarization (CMP) - Coatings - Composites - Food additives - Greases - Pharmaceuticals - Printing and packaging - Silicone sealants - Skin and beauty care products Performance benefits The versatility of our fumed silica is related to its wide range of surface areas and aggregate structure, both of which lead to unique performance benefits, including:
https://tempuatcn.cabotcorp.com/solutions/products-plus/fumed-metal-oxides/hydrophilic
3 of the 2532 sweeping interview questions in this book, revealed: Communication question: What are the most challenging documents you have worked on? What kinds of Apprentice tile setter proposals have you written? - Getting Started question: How do you know if you have the wrong Apprentice tile setter questions? - Behavior question: Tell me about a time you had to handle multiple responsibilities. How did you organize the work you needed to do? Land your next Apprentice tile setter role with ease and use the 2532 REAL Interview Questions in this time-tested book to demystify the entire job-search process. If you only want to use one long-trusted guide, this is it. Assess and test yourself, then tackle and ace the interview and the Apprentice tile setter role with 2532 REAL interview questions covering 70 interview topics, including Organizational, Variety, Responsibility, Scheduling, Self Assessment, More questions about you, Setting Priorities, Setting Goals, Flexibility, and Strengths and Weaknesses...PLUS 60 MORE TOPICS... Pick up this book today to rock the interview and get your dream Apprentice tile setter job.
https://www.barnesandnoble.com/w/apprentice-tile-setter-red-hot-career-guide-2532-real-interview-questions-red-hot-careers/1128752738
Born in England, Clive Wynne studied at University College London and the University of Edinburgh. He served on the faculties of the University of Western Australia and the University of Florida before arriving at Arizona State University in 2013. His broad interest is in comparative psychology, understood to include the evolution, development and progress of the behavior of individuals and groups of nonhuman animals. The behaviors Wynne is interested in range from simple conditioning to complex cognition. His approach is behaviorist in its emphasis on parsimonious explanations, cognitive in its interest in rich behaviors, and ethological in its concern to see behavior as a tool in animals’ adaptation to their environments. The specific focus of ongoing research is the behavior of dogs and their wild relatives. In this domain, Wynne's group studies the ability of pet dogs to react adaptively to the behaviors of the people they live with; the deployment of applied behavior analytic techniques in the treatment of problem behaviors in dogs; the behaviors of shelter dogs that influence their chances of adoption into human homes, as well as the welfare of shelter dogs; improved methods for training sniffer dogs; the development of test banks for studying cognitive aging in pet dogs; and humans as social enrichment for captive canids, among other topics.
https://isearch.asu.edu/profile/2218677
Joel DeGrands is the Chief Operations Officer and co-founder of DLC Solutions. He was formerly the Director of Distance Learning at Medical Consumer Media, where he pioneered the development of the first generation of rich media tools for medical associations and the pharmaceutical industry. These technologies have become the cornerstone of many organizations’ e-learning strategies and have exponentially extended their programs' reach to their target audiences. Joel is an expert in multiple industry-leading digital and social media technologies.

As chief technology officer and product manager at DLC Solutions/EthosCE, Ezra interacts with CE professionals working in the field every day. Working on our flagship product, EthosCE, Ezra actively solicits input from our customers on how to make our application solve problems. That input goes directly into our product planning meetings, where Ezra works alongside some of the smartest and most dedicated software engineers and quality assurance professionals in the CE industry. Hand in hand with our marketing and customer support teams, Ezra’s team delivers monthly updates and new features to the leading LMS in the CE industry. With more than 20 years of experience in online design and development in the healthcare education vertical, Ezra is uniquely qualified to lead our technology team. Ezra started in the newspaper industry, where he focused on the visual display of information, both online and in print. That experience was excellent preparation for making the administration and delivery of learning simple and effective. Both jobs involve simplifying complex information and making it easier to create and consume. Ezra has also worked in consumer healthcare marketing and education and has traveled the world implementing technology systems for leading pharmaceutical and medical device manufacturers. Mr. Wolfe holds a master’s degree from the University of Kansas and a bachelor’s from Syracuse University.
https://www.dlc-solutions.com/company/leadership
To write an effective clinical project leader job description, begin by listing detailed duties, responsibilities and expectations. We have included clinical project leader job description templates that you can modify and use. Sample responsibilities for this position include:

Clinical Project Leader Qualifications
Qualifications for a job description may include education, certification, and experience.

Licensing or Certifications for Clinical Project Leader
List any licenses or certifications required by the position: CCRA, SOCRA, ACRP, PMP, CCRP, CCRC, M.D.

Education for Clinical Project Leader
Typically a job would require a certain level of education. Employers hiring for the clinical project leader job most commonly would prefer for their future employee to have a relevant degree, such as a Bachelor's or University Degree in Healthcare, Public Health, Allied Health, Radiologic Technology, Respiratory Therapy, Health Administration, Associates, Medical, Education, or Science.

Skills for Clinical Project Leader
Desired skills for clinical project leader include:

Desired experience for clinical project leader includes:
https://www.velvetjobs.com/job-descriptions/clinical-project-leader
3 Reasons Why You Cannot Afford to NOT Buy Gold Right Now As major proponents of physical precious metals ownership, we often discuss reasons why we feel it is in your best interest to acquire and hold physical precious metals such as gold and silver. While we can list numerous reasons for our opinion on the matter, we also felt it might make sense to address why we believe you simply cannot afford not to own physical gold, silver or ideally both. The three reasons listed below are in our view very legitimate concerns. Ignore them at your own peril: - The Dollar may continue to lose value: The idea of fiat, or paper currencies losing value over time is nothing new and nothing earth shattering. Throughout history, paper money has lost value with the passage of time-and all of the money printing seen in recent years is not going to help. As your paper money loses value, your purchasing power declines. In the simplest of terms, this means that goods and services get relatively more expensive. - Investment returns may yield little compared to the past: Some have referred to it as the “new normal.” Others might simply refer to it as stagflation. Whatever you want to call it: The current outlook for global economic growth appears to be stagnant and an era of below average economic growth appears to be upon us. What might this mean? Well, for starters, investors may have to look elsewhere for yield. Stocks will, in our view, eventually run out of gas. Interest rates are likely to remain very subdued for some time to come. Investors will have to look elsewhere…We feel that a logical choice for many investors will be hard, physical assets like gold, silver and other commodities. - There could be significant changes seen in the global balance of power: Although the dollar still enjoys its status as the world’s reserve currency of choice, that status is becoming weaker with the passage of time. With the recent admission of the Chinese Yuan by the IMF to its Special Drawing Rights (SDR), China could potentially be in a position for its currency to gain more favor-and eventually challenge the dollar as the global reserve currency of choice. This could potentially have very far-reaching effects on the U.S. dollar as well as global trade as we know it today. Physical precious metals like gold and silver have been considered a reliable store of wealth and protector of value for centuries. In fact, we feel these assets are the only true form of real money in existence. Isn’t it time you got some? There are a lot of potential economic and geopolitical issues that could change the way global trade is done and as such some major changes could potentially be seen in global currency and interest rate markets. Isn’t it time you began acquiring and holding hard, physical assets that may potentially gain in value during such a period while also possibly providing an effective hedge against a loss of purchasing power? Isn’t it time you began acquiring and holding physical gold and silver? Fortunately, doing so is probably a lot easier than you think. All you need to do is pick up the phone. Speak with an Advantage Gold account executive today about your options. Our professionals are here to answer your questions about physical precious metals ownership, and can even show you how to begin buying and holding these key assets using your IRA account.
https://www.advantagegold.com/2016/10/18/3-reasons-cannot-afford-not-buy-gold-right-now/
Matoke with spaghetti. The most delicious matoke curry you will have. How to make matoke in peanut sauce: matoke in peanut sauce served with creamed spinach is a traditional Kenyan meal. Use this recipe for spicy matoke made in a different way. Your family and guests will enjoy this. You can make Matoke with spaghetti using 11 ingredients and 3 steps. Here is how you achieve it.

Ingredients of Matoke with spaghetti
- 10 matoke fingers (peeled)
- Carrot (chopped)
- Onions (chopped)
- Green pepper (chopped)
- Tomato paste
- Coriander (chopped)
- Mixed spices (Tropical Heat)
- Salt
- Cooking oil (uto)
- 250 g spaghetti
- Blue Band margarine (original)

East Africans have long made this staple food, matoke. See great recipes for matoke and fried bananas (matoke) too! Learn how to prepare matoke, a variety of banana indigenous to southwest Uganda. Cooked and mashed matoke is the nation's official food dish.

Matoke with spaghetti instructions
- Matoke – place the peeled matoke in a pan with some water, then place it on a stove over low heat and let it boil until soft. Once soft, don't pour out the hot water; let it stay until you are ready to cook.
- In another pan put the onions, green pepper, coriander and carrots. Add some oil and place it on a stove over low heat. Let them cook for 10 minutes, then add two big spoonfuls of tomato paste and stir until it mixes properly. Add the matoke and continue stirring, add the mixed spices and some salt, then cover. Let it cook for 10 minutes on low heat. Ready to serve.
- Spaghetti – boil water in a pan, add the spaghetti (broken into three), add a big spoonful of Blue Band margarine and some salt, and stir until the margarine has melted. Then cover and let it cook on low heat for 10 minutes. Continue stirring until all the water disappears. Ready to serve.

Most of the time when cooking spaghetti, it's tempting to just eyeball a serving size rather than measure it out carefully. Matoke is a banana variety that is indigenous to Uganda.

Consuming 14 Superfoods Is A Good Way To Go Green For Better Health
One good thing about going green is deciding to take life easier and enjoy yourself along the way. In spite of the fast pace of our modern-day world, you can accomplish this. We have to go back to a lifestyle that prevents disease before we have to get it treated. Unfortunately, the majority of people don’t concern themselves with their health because they believe they can take a pill to fix the problem later on. It’s impossible to turn around without hearing about the latest pill to treat your health problems. There are a few pills that help, but only if you make some necessary changes in your life. As soon as your body wears out, you can’t exchange it for a new one, like your car. You have to take care of your health while you can. Your body cannot work correctly if it doesn’t receive the right nutrition. When you eat, do you eat out of convenience or taste, without checking whether what goes into your mouth is beneficial for you? How often do you consume mini-mart junk food, or oily fried foods from the local fast food joints? With all of the sugar-laden, starchy and fatty food that virtually all people eat, it’s not surprising that new diseases are regularly occurring.
A growing number of individuals are developing diabetes, hypertension, and other diseases as a result of the foods they ingest. People are choosing to eat better now that they are aware of how important food choices are to their health. Now it is much easier to find quality foods by going to a local farmer’s market or health food store. Almost all grocery stores now sell organic foods. There you will be able to get what science has termed superfoods. “Superfoods” refers to 14 foods that have been found to delay or reverse certain diseases. By ingesting these superfoods, your body will become more fit. You will begin to feel a lot better when you choose to eat the superfoods instead of junk food. Your body will start to run as it was meant to when you give it the proper nutrition. As a result, your immune system will easily ward off diseases. You should include a number of superfoods in your diet every day. To start with, beans are very good, and berries, in particular blueberries. Include some green tea or spinach or broccoli. Include whole cereals and nuts. Also, you may wish to eat salmon, turkey, yogurt, soya beans, tomatoes, oranges, and pumpkins. By eating these superfoods daily, you should get rid of any weight problems. Observing a green living meal plan will provide you with exactly what you need to become healthy and fit. You will see that your immune system becomes stronger and your body will be able to fend off disease. Ensure your future health by developing healthy eating habits now.
http://myspecialrecipe.com/41-easiest-way-to-make-perfect-matoke-with-spaghetti/
Abstract and Keywords
This chapter juxtaposes brief case studies of African American vernacular dancers from the first half of the twentieth century in order to reexamine the relationship between the ideology of intellectual property law and the traditions of jazz and tap dance, which rely heavily on improvisation. The examples of the blackface performer Johnny Hudgins, who claimed a copyright in his pantomime routine in the 1920s, and of Fred and Sledge, the class-act dance duo featured in the hit 1948 musical Kiss Me, Kate, whose choreography was copyrighted by the white modern dancer Hanya Holm, prompt a rethinking of the assumed opposition between the originality and fixity requirements of copyright law and the improvisatory ethos of jazz and tap dance. Ultimately, the chapter argues that whether claiming or disavowing uniqueness, embracing or resisting documentation, African American vernacular dancers were both advantaged and hampered by copyright law.
Keywords: improvisation, copyright, documentation, fixity, African American vernacular dance, Johnny Hudgins, Kiss Me, Kate, Fred and Sledge
Anthea Kraut, University of California Riverside
https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780195370935.001.0001/oxfordhb-9780195370935-e-015
Reflection of a Transient Acoustic Pulse by a Wall

Consider the reflection of a two-dimensional acoustic pulse by a plane wall as shown in Figure 6.13. The fluid is inviscid and is at rest at time t = 0. An acoustic pulse is generated by an initial pressure disturbance with a Gaussian spatial distribution centered at (0, 20). The wall is located at y = 0. The initial conditions are as follows:

p = exp{-ln 2 [x² + (y - 20)²]},  u = v = 0.

In the numerical simulation, the 7-point DRP scheme is used, and the time step Δt is set equal to 0.07677. This value of Δt satisfies the numerical stability requirement and ensures that the amount of numerical damping due to time discretization is insignificant. Figure 6.13 shows the calculated pressure contour patterns associated with the acoustic pulse at 100, 300, and 500 time steps. The corresponding contours of the exact solution are also plotted in this figure. To the accuracy given by the thickness of the contour lines, the two sets of contours are almost indistinguishable. At 100 time steps, the pulse has not reached the wall, so the pressure contours are circular. At 300 time steps, the front part of the pulse reaches the wall. It is immediately reflected back. At 500 time steps, the entire pulse has effectively been reflected off the wall, creating a double-pulse pattern: one from the original source, and the other from the image source below the wall. Figure 6.14 shows the computed pressure waveforms along the line x = y. The distance measured along this line from the origin is denoted by s. The computed waveforms at 400, 700, and 1000 time steps are shown together with the exact solution. As can be seen, there is excellent agreement between the exact and computed results. At 400 time steps, the pulse has just been reflected off the wall. At 700 time steps, the double-pulse characteristic waveform is fully formed. Both pulses propagate away from the wall with essentially the same waveform. The amplitude, however, decreases at a rate inversely proportional to the square root of the distance.
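Below is a minimal sketch, in Python, of the image-source construction that explains the double-pulse pattern described above. It is not the 7-point DRP scheme used in the text; the grid extents and resolution are arbitrary choices, and the Gaussian follows the initial condition as printed above.

```python
import numpy as np

# Sketch: Gaussian initial pressure pulse centered at (0, 20) plus its mirror
# image below the wall at y = 0. The superposition of source and image
# satisfies the rigid-wall condition dp/dy = 0 at y = 0, which is why the
# reflected field develops the double-pulse pattern seen at later time steps.

x = np.linspace(-100.0, 100.0, 401)
y = np.linspace(0.0, 200.0, 401)
X, Y = np.meshgrid(x, y)  # arrays of shape (len(y), len(x))

def gaussian_pulse(xg, yg, x0, y0):
    """Initial pressure disturbance, p = exp(-ln 2 [(x - x0)^2 + (y - y0)^2])."""
    return np.exp(-np.log(2.0) * ((xg - x0) ** 2 + (yg - y0) ** 2))

p_source = gaussian_pulse(X, Y, 0.0, 20.0)   # physical pulse above the wall
p_image = gaussian_pulse(X, Y, 0.0, -20.0)   # image pulse below the wall
p_total = p_source + p_image                 # field satisfying the wall condition
```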
http://heli-air.net/2016/02/25/reflection-of-a-transient-acoustic-pulse-by-a-wall/
On behalf of Rubinstein, Holz & King, P.A. posted in Alimony on Friday, September 7, 2018. It's not romantic trust that you have to worry about, but instead, a person's ability not to lie about assets or belongings. Mediators specifically work to gain both parties' trust to make sure they'll be as honest as possible without fearing repercussions, but if one party doesn't believe the other, then there is almost nothing that can be gained from mediation. What happens when people build trust? Even if you can't trust your spouse, there has to be trust of the mediator at the very least. Without some form of trust in the process, mediation is a wasted effort. What can you do to build better trust in mediation? Be as transparent as possible. Be honest about your feelings and intentions. Don't try to manipulate the mediator or your spouse with outbursts, crying or demands. It's best to approach this situation calmly and with the documents you're asked to bring the first time. If you come prepared, it helps to show that you're trustworthy and begins to build trust among everyone involved in the case.
https://www.rubinstein-holz.com/blog/2018/09/trust-a-vital-part-of-a-mediation-session.shtml
Since it was first established in 1985, Khans Solicitors has been committed to delivering professional legal advice to the highest of standards. A multi-disciplinary practice, the firm has grown in both size and reputation from its humble beginnings as a sole-partner firm. Today, Khans is not only one of the longest established firms in Essex, it also boasts some of the most experienced solicitors. The philosophy at Khans is simple – to provide honest and accurate advice whilst always protecting the best interests of our client. We are aware at Khans that any kind of litigation can be a daunting process and that costs are always a major concern for any client. Our fee structures are therefore affordable, transparent and easy to understand, with a risk-to-benefit assessment carried out at each stage. We pride ourselves on taking a friendly yet professional approach with our clients and, with over thirty years’ experience, Khans Solicitors is a name you can trust.
https://khans-solicitors.co.uk/about-us
And sitting in my rocking chair, I’ll be reading Harry Potter. And my family will say to me, “After all this time?” And I will say… “Always.” ~ Alan Rickman

Here, ladies and gentlemen, are some of the finest actors of the Hollywood fraternity. They are not just actors; all of them are of British origin. How exciting is that! This article covers the best of the lot, so not reading it to the end would be a major loss on your part. Keep scrolling to know more about these actors.

Some Of The Outstanding Actors Of Recent Times
“Nothing is permanent in this wicked world, not even our troubles.” ~ Charlie Chaplin
Here’s a list of outstanding British actors of recent times. *jaw dropped*

Tom Hardy
Born on the 15th of September, 1977, in London, England, United Kingdom. This British man also happens to be a producer, besides being an actor. Tom Hardy has been a relevant name in the entertainment industry for a little more than two decades. He is remembered for his impeccable performances in masterpieces like Inception, Venom, and The Revenant. This ‘Virgo’ man started his career as a model, making his movie debut in the 2000s. It did not take him much time to make his mark in the world of acting, and it was not long before he became a hit whose work was loved by all. Did you know? Hardy is a dedicated theater actor as well.

Benedict Cumberbatch
Born on the 19th of July, 1976, in Hammersmith, London, England. This British man happens to be one of the most popular English actors of the present era. Cumberbatch is not just popular for his remarkable acting skills but also for his empathy, which he channels into charity work. This ‘Cancer’ man was included in Time magazine’s list of the 100 most influential people in the world in 2014, and in 2015 he was appointed CBE for services to the performing arts and to charity. Instead of looking for his movies on ‘fry movies net’, why don’t you look for them on MovieFry.net?

Daniel Craig
Born on the 2nd of March, 1968, in Chester, Cheshire, England. This British man gained immense popularity and fame when he started playing the protagonist in the James Bond movie series, and that at an international level. Like any other Pisces, Craig is known for his robust good looks juxtaposed with a rugged sex appeal. This ‘Pisces’ man began his career as a theater actor in the 1990s and gradually sailed into the world of cinema. He has also been nominated for many awards.

Charlie Chaplin
Born on the 16th of April, 1889, in Walworth, London, England. Don’t you dare forget this British man. Cinema buffs around the globe have placed him on a pedestal. Chaplin was one of the most popular actors of the silent film era and has been regarded as an icon of comedy. This ‘Aries’ man is remembered for his performances in classics like Modern Times and The Gold Rush. This iconic man served as a writer, director and producer, alongside being an actor. His death on the 25th of December, 1977, made Christmas a little less merry for his fans, forever.

Rowan Atkinson
Born on the 6th of January, 1955, in the County of Durham, England, United Kingdom. This British comedian is known to have fans at a global level. He gained popularity with his outstanding performance as the legendary character Mr. Bean in the TV series Mr. Bean. Mr. Bean became so popular that the makers had to release an animated version as well.
To date, Mr. Bean – both the legend and the TV series – has not lost its charm. This ‘Capricorn’ man also made his mark in the comedy genre with the series Blackadder. A few other notable works include movies like Never Say Never Again, The Witches, and Four Weddings and a Funeral.

Alan Rickman
Born on the 21st of February, 1946, in Hammersmith, London, England. This British man was one of the most admired actors among his fans, enjoying global recognition for his on-screen as well as stage presence. The biggest trait of this ‘Pisces’ man was nailing villainous characters on screen. Some of his notable works include masterpieces like Die Hard, Robin Hood: Prince of Thieves, Rasputin: Dark Servant of Destiny, and Truly, Madly, Deeply. Even after his death on the 14th of January, 2016, Rickman rules the hearts of his beloved Potterheads as Professor Severus Snape.

Daniel Radcliffe
Born on the 23rd of July, 1989, in London, England. This British actor requires no introduction, literally. Radcliffe gained international fame for his impeccable performance in the Harry Potter movie series while still a child. More than an adaptation, Harry Potter was Radcliffe’s personal bildungsroman. This ‘Leo’ man is not just an actor but a producer as well, and he has made his mark in the entertainment industry for almost 20 years now. He is also one of the highest-paid actors, the credit for which goes to Harry Potter, obviously. Radcliffe is an ardent supporter of the LGBTQA+ community and works towards preserving their rights, while contributing to various charities.

Conclusion
Talking about these beautiful British men who happen to be fine actors got me all giddy. Now, don’t get me started on how difficult it was to talk about just a few and leave out the rest. How unfair! Still, in the above article I have tried my best to feature at least the major ones. I bet you have a lot on your minds. What are you waiting for? Share the list of your favorite British men (oops, actors!) with us in the comments section below. We’d love to hear from you.

Author Bio:
https://glitternglue.com/outstanding-uk-actors-of-recent-times-2022/
Of the states that have applied, 93 percent have sought grants at the $300 level, which the federal government pays, rather than kick in an extra $100, which would have cost New Jersey approximately $80 million a week. As of Tuesday, FEMA had approved 28 states’ applications for $300 payments and 2 states’ applications for $400 payments. One state, South Dakota, has declined to participate. This new program must be created from scratch, and run separately from New Jersey’s existing state and federal unemployment programs. This is not something New Jersey or any state will be able to do quickly or easily. “This is not the answer for unemployed workers around the country who have been hit hard and fast by COVID-19. It is our hope that Congress extends the $600 supplement so that workers, especially those in high cost-of-living states like ours, can keep food on the table and a roof over their heads until we can get beyond COVID and they can get back on their feet,” Asaro-Angelo said. The FEMA grant has a spending cap of $44 billion. Once the allocation is exhausted, benefits will end. Thus, states must reapply after the first three weeks, giving FEMA the opportunity to calculate the program’s remaining balance after the first round of grants. The program also would be halted if FEMA’s Disaster Relief Fund balance drops below $25 billion.
https://njbia.org/nj-applies-for-lost-wages-assistance-funds-for-unemployed/
Abstract: Climate-conscious action initiated at the present has the potential to impact welfare consequences into the distant future. Market discount/interest rates help determine time-preference patterns for individuals; yet, such rates do not help compare the dis/utility derived by different contemporaneous individuals, or of economic agents living generations apart. Researchers and activists have advocated low to zero discounting of future environmental benefits from present-day investments. Contrary to most implicit assumptions, and despite projected technological advances and higher money incomes, once depleted, natural capital is not substitutable. Subsidising relatively 'sustainable' production often generates perverse incentives and sets arbitrage opportunities up. This paper argues that unsustainable production should be discounted at higher rates to internalise the externality, to eliminate the present-bias, and to achieve the desired indifference between short-run and medium-term returns. The model parameter verdurous β so developed is applied to demonstrate the incentive structures that might promote the switch to 100% organic coffee production in Rwanda. Keywords: discount rates; sustainable production; present bias; climate-conscious investment; time-rate-of-preference; certainty equivalent.
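The abstract's core claim, that applying a higher discount rate to unsustainable production can reverse which activity looks preferable, is easy to illustrate with a toy present-value comparison. The sketch below is illustrative only: the cash flows, rates, and the size of the penalty applied to unsustainable output are invented for this example and are not taken from the paper.

```python
# Toy net-present-value comparison in the spirit of the abstract's argument.
# All numbers here are invented for illustration.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (years 1, 2, ...) at a flat rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

conventional = [120.0] * 10   # higher short-run returns, 'unsustainable'
organic = [100.0] * 10        # steadier returns, 'sustainable'

base_rate = 0.05
penalty = 0.04                # extra discounting applied to unsustainable output

pv_conventional = present_value(conventional, base_rate + penalty)  # ~770
pv_organic = present_value(organic, base_rate)                      # ~772

# Without the penalty the conventional stream wins (~926 vs ~772); with it,
# the ranking flips in favour of the 'sustainable' stream.
print(pv_conventional, pv_organic)
```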
https://www.inderscience.com/info/inarticle.php?artid=89016
Kinematic Self-Replicating Machines © 2004 Robert A. Freitas Jr. and Ralph C. Merkle. All Rights Reserved. Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown, TX, 2004. 4.10.1 Merkle Generic Assembler (1992-1994) In 1992, Merkle published a set of broad specifications for a generic assembler design which was to include positional mechanochemistry using “robot arms” similar to Drexler’s telescoping manipulator (Figure 4.29), plus a physical barrier to keep the internal workings of the machine (in vacuum) separate from the external (liquid) environment. The list of required specifications was to include the following items: (1) the type and construction of the computer; (2) the type and construction of the positional device; (3) the set of chemical reactions that take place at the tip; (4) how compounds are transported to and from the tip, and how the compounds are modified (if at all) before reaching the tip; (5) the class of structures that the assembler can build; (6) the nature of the internal environment in which the assembler mechanisms operate; (7) the method of providing power; (8) the type of barrier used to prevent unwanted changes in the internal environment in the face of changes in the external environment; (9) the nature of the environment external to the device; (10) the transport mechanism that moves material across the barrier; (11) the transport mechanism used in the external environment; and (12) a receiver that allows the assembler to receive broadcast instructions. The generic assembler was to be able to manufacture a range of diamondoid products, where “diamondoid” was construed to include “atoms other than carbon such as hydrogen, oxygen, nitrogen, sulfur, and other elements that form relatively strong covalent bonds with carbon.” This would require, in turn, a significant number of different types of input molecules to be presented to the device as feedstock, in order to minimize the need for chemical processing: “To simplify the design, it might be desirable to provide all compounds in the feedstock in essentially the form in which they would actually be used. This would place the burden of synthesizing the specific compounds on whoever made the feedstock. If, on the other hand, we wished to provide only a relatively modest number of simple compounds in the feedstock, then there would have to be a corresponding increase in the complexity of the internal processing in the assembler in order to synthesize the required complex compounds from the simple compounds that were provided. Early designs will likely have limited synthetic capabilities and will be almost completely dependent on the feedstock for the needed compounds. As time goes by, some of the compounds provided in the feedstock will instead be synthesized directly within the assembler from simpler precursors. Ultimately, those compounds that can be synthesized more economically in bulk will not be synthesized internally by the assembler but will simply be provided in the feedstock. Those compounds which are more easily and economically made internally by the assembler will be eliminated from the feedstock.” A multi-stage cascade approach (Figure 4.36) would be used to transport and purify input compounds, both because feedstock might contain impurities and because the many different feedstock compounds must be segregated before use.
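The twelve specification items listed above amount to a design checklist for any such assembler proposal. The small data structure below restates that checklist; it is an illustrative paraphrase only, with field names and example values summarizing the passage rather than quoting Merkle's own terminology.

```python
from dataclasses import dataclass

@dataclass
class GenericAssemblerSpec:
    """One field per specification item (1)-(12) named in the text."""
    computer: str                 # (1) type and construction of the computer
    positional_device: str        # (2) type and construction of the positional device
    tip_chemistry: str            # (3) set of chemical reactions at the tip
    tip_transport: str            # (4) how compounds reach the tip and are modified
    buildable_structures: str     # (5) class of structures the assembler can build
    internal_environment: str     # (6) environment in which the mechanisms operate
    power_supply: str             # (7) method of providing power
    barrier: str                  # (8) barrier protecting the internal environment
    external_environment: str     # (9) environment external to the device
    barrier_transport: str        # (10) transport of material across the barrier
    external_transport: str       # (11) transport mechanism in the external environment
    instruction_receiver: str     # (12) receiver for broadcast instructions

# Example values drawn loosely from the description above.
merkle_1992_sketch = GenericAssemblerSpec(
    computer="unspecified in this sketch",
    positional_device="telescoping manipulator ('robot arm')",
    tip_chemistry="positional mechanochemistry on diamondoid structures",
    tip_transport="feedstock compounds supplied largely pre-synthesized",
    buildable_structures="diamondoid, including H, O, N, S and similar elements",
    internal_environment="vacuum",
    power_supply="unspecified in this sketch",
    barrier="physical barrier separating internal vacuum from external liquid",
    external_environment="liquid",
    barrier_transport="multi-stage cascade for transport and purification",
    external_transport="unspecified in this sketch",
    instruction_receiver="broadcast-instruction receiver",
)
```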
http://www.molecularassembler.com/KSRM/4.10.1.htm
Guitarist and lead vocalist Matt Heafy was around 13 or 14 when Trivium formed in Orlando in 2000, and only 17 when Trivium released their debut album, Ember To Inferno. Over the past decade, Heafy and the other members of Trivium have come a long way, from constant experimentation (each album has its own very different sound) to the evolution of their songwriting and musical prowess. Rounded out by lead guitarist Corey Beaulieu (who came on board right after Ember to Inferno) bassist Paolo Gregoletto (2004-present) and drummer Nick August (2010-present), the band is still in their 20s, but they've achieved what most musicians their age only dream of. The name "trivium" even suits their accomplishments, translating to a three-way intersection that combines metalcore, melodic death metal and thrash. Currently touring North America with DevilDriver, Trivium's Heafy talked with Up On The Sun about working with David Draiman, what makes his metal generation unique, and how he would describe Trivium in three words.
https://www.phoenixnewtimes.com/music/triviums-matt-heafy-metals-not-a-genre-its-a-lifestyle-6590711
Alfa Romeo Automobiles S.p.A. is an Italian luxury car manufacturer, founded by the Frenchman Alexandre Darracq as A.L.F.A. (“[Società] Anonima Lombarda Fabbrica Automobili”, the “Lombard Automobile Factory Company”) on June 24, 1910, in Milan. The brand is known for sports vehicles and has been involved in motor racing since 1911. The company was owned by the Italian state holding company, the Institute for Industrial Reconstruction, between 1932 and 1986, when it joined the Fiat Group. In February 2007, the Alfa Romeo brand became Alfa Romeo Automobiles S.p.A., a subsidiary of Fiat Group Automobiles, now Fiat Chrysler Automobiles Italia. The company that became Alfa Romeo was founded as Società Anonima Italiana Darracq (SAID) in 1906 by the French automaker Alexandre Darracq, with Italian investors. At the end of 1909, Italian Darracq cars were selling slowly and the company’s Italian partners hired Giuseppe Merosi to design new cars. On June 24, 1910, a new company named A.L.F.A. was founded, initially still in partnership with Darracq. The first non-Darracq car produced by the company was the 24 HP of 1910, designed by Merosi. A.L.F.A. ventured into motor racing, with drivers Franchini and Ronzoni competing in the 1911 Targa Florio with two 24 HP models. In August 1915 the company passed under the direction of the Neapolitan entrepreneur Nicola Romeo, who converted the factory to produce military hardware for the Italian and Allied war efforts. In 1920, the company name was changed to Alfa Romeo, with the Torpedo 20-30 HP the first car to be so badged. In 1921, the Italian Discount Bank, which backed Eng. Nicola Romeo & Co, collapsed, and the government had to support the industrial companies involved, including Alfa Romeo, through the “Consortium for Subsidies on Industrial Values”. In 1925, the railway activities were separated from the Romeo company and, in 1928, Nicola Romeo left. In 1933, state ownership was reorganized under the banner of the Institute for Industrial Reconstruction (IRI) by the government of Benito Mussolini, which then had effective control. The company struggled to return to profitability after the Second World War and turned to mass production of small vehicles rather than manual construction of luxury models. In 1954, the company developed the Alfa Romeo Twin Cam engine, which remained in production until 1994. During the 60s and 70s, Alfa Romeo produced a series of sports cars but struggled to make a profit, so in 1986 the Institute for Industrial Reconstruction (IRI), the state conglomerate that controlled Finmeccanica, sold the brand to the Fiat Group. Alfa Romeo has successfully competed in Grand Prix racing, Formula 1, sportscar racing, touring car racing and rallies. It has competed both as a constructor and as an engine supplier, through works entries (usually under the name of Alfa Corse or Autodelta) and private entries. The first racing car was built in 1913, three years after the company was founded, and Alfa Romeo won the inaugural world championship for Grand Prix cars in 1925. The racing victories gave the brand a sporty image, and Enzo Ferrari founded the Scuderia Ferrari racing team in 1929 as an Alfa Romeo racing team, before becoming independent in 1939 and going on to take the most wins of any brand in the world.

Gargash Motors & General Trading LLC
Near the Gold and Diamond Park
Sheikh Rashid Rd, Dubai
United Arab Emirates
Tel.
+971 4 340 3333
Web-Site: www.alfaromeo-me.com

STELVIO
Iconic Alfa Romeo design has given shape to an unprecedented SUV, distinguished by unique and essential traits from the brand’s legacy and capturing a stylistic consistency with the New Giulia. Premium and heritage-inspired, the design approach is both outstanding and purposeful in equal measure. Everything is sculpted on the mechanics, like a suit tailored to an athlete’s body. Every detail is conceived to enhance aerodynamics and underscore the performance of Stelvio, without drama. At Alfa Romeo, we call it “The Meaningful Beauty.”

GIULIA
Since its foundation, Alfa Romeo has taken a unique and original approach to creating cars, striving for a point of convergence between style and passion, and blending iconic Italian design, cutting-edge technology and a bold dynamic spirit to continually inspire authentic emotions. Now there is a new chapter in the brand’s history. The driver and his emotions are at the center of the universe, innovation leads to new sensations, and safety is outstanding: in addition to the 5-star Euro NCAP rating, Giulia has earned the extraordinary score of 98% for adult occupant protection.

On sale since 2010, the Alfa Romeo Giulietta is a mid-size five-door sports car with a distinctive and still-current appearance; in 2016 there was a slight update, and the front became more similar to that of the Giulia. In addition, the Veloce package was introduced, with details (such as the red profile in the front bumper) that accentuate the sportiness of the car. The engines, all turbocharged, are modern and responsive, and in addition to the manual gearbox you can have (if you choose the diesel) the comfortable TCT robotized dual-clutch transmission. The passenger compartment is quite spacious, as is, considering the type of car, the trunk. On the road, the Alfa Romeo Giulietta is safe, and thanks to its direct steering and well-tuned suspension it also knows how to entertain. Where the car most shows its relative age is in the equipment: the most recent electronic driving aids, such as automatic emergency braking, are missing.
https://iicuae.com/en/alfa-romeo/
Is your workers comp provider network culturally competent? If not, you may be fostering needless disability. Georgetown University’s Center on an Aging Society has an excellent article on the issue of cultural competence in healthcare, and defines the concept as “the ability of providers and organizations to effectively deliver health care services that meet the social, cultural, and linguistic needs of patients.” The article addresses the specialized medical needs that changing demographics demand, both for reasons of language and for other cultural and socio-economic factors. It makes the case that positive outcomes require that physicians and other providers develop cultural competence in service delivery. Barring this competence, minorities are more likely to be dissatisfied with care. If providers, organizations, and systems are not working together to provide culturally competent care, patients are at higher risk of having negative health consequences, receiving poor quality care, or being dissatisfied with their care. African Americans and other ethnic minorities report less partnership with physicians, less participation in medical decisions, and lower levels of satisfaction with care. The quality of patient-physician interactions is lower among non-White patients, particularly Latinos and Asian Americans. Lower quality patient-physician interactions are associated with lower overall satisfaction with health care. In workers comp, poor quality care and dissatisfaction hinder recovery and may well prolong disability. Dissatisfaction often also turns into lawsuits that might have been prevented. The issue of cultural competence has relevance to workers compensation in terms of health-care services delivered by workers compensation provider networks, but also in other aspects of prevention and claims management. We’ve previously discussed some of the challenges posed by an increasingly multilingual workforce, as well as the fact that some immigrant workers are at high risk of injuries or death. The article suggests the following strategies for improving the patient-provider interaction and institutionalizing changes in the health care system:
1. Provide interpreter services
2. Recruit and retain minority staff
3. Provide training to increase cultural awareness, knowledge, and skills
4. Coordinate with traditional healers
5. Use community health workers
6. Incorporate culture-specific attitudes and values into health promotion tools
7. Include family and community members in health care decision making
8. Locate clinics in geographic areas that are easily accessible for certain populations
9. Expand hours of operation
10. Provide linguistic competency that extends beyond the clinical encounter to the appointment desk, advice lines, medical billing, and other written materials
This list might be a useful adjunct to an employer’s current gating issues when screening medical providers for a workers comp program. It also provides a checklist of considerations for loss control, risk management, and claims staff.
http://workerscompinsider.com/tag/demographics/
issues, to encourage increased confidence and self-esteem. Casualty art, face painting and monotype printing. Exclusivity being one of the key aims. The festival is free, to increase the accessibility of high quality, varied art and to encourage group work, discussion and interaction in a relaxed, informal way. Worked with youths to design and create a stage set. My personal art is often, but not always, inspired by the human condition. Nursing and art have run parallel to each other in my life for years; one practice has often informed the other. As a nurse (RGN, 1989) and ex foster carer I have a wealth of experience working with people from all walks of life and across all ages. As a freelance artist (Fine Art BA Hons, 2008) I am skilled at putting people at ease and providing creative, non-threatening art classes (Artworks), where people enjoy the experience of experimental creativity, harnessing their individual ideas to realise work that brings about personal satisfaction. Through encouraging group work and communication I promote a supportive, positive environment, which helps to increase self-confidence. Reinforcing cognitive behavioural therapy and the AA 12-step programme. Managed safe drug and alcohol detox. Immunisations, diabetic and asthma care, well-woman clinics, dressings and general health promotion. I fostered 6 teenagers in total over a period of 4 years. The experience was invaluable and rewarding, but not without its trials and tribulations. Prior to the year 2000 I worked as: a locum practice nurse; a senior nurse in a women's drug and alcohol rehab; and in elderly care and various hospitals. I left nursing in Dec 2011 to pursue a career in art.
http://www.joanneweaver-artworks.com/cv/4537855294
On Saturday, Aug 13, 2016, our birdwatching group from Olympic Peninsula Audubon Society traveled through the northern waters of the Olympic Coast National Marine Sanctuary on the M/V Windsong out of Neah Bay. Our route traveled through the western entrance of the Strait of Juan de Fuca to Swiftsure Bank, then south over the Juan de Fuca Canyon and back inshore by Tatoosh Island. Marine organisms from tiny fish to enormous whales rely on sound and hearing for their survival, but increasing human activity within our ocean over the last century has also meant increasing levels of noise. Gray's Reef National Marine Sanctuary superintendent Sarah Fangman and Olympic Coast National Marine Sanctuary superintendent Carol Bernthal discuss how understanding the soundscapes within their protected areas can help sanctuary managers better safeguard species that rely on sound. Understanding Ocean Noise Reddit "Ask Us Anything" Join NOAA scientists Dr. Leila Hatch, Dr. Jason Gedamke and Dr. Jenni Stanley on December 8th for a Reddit "Ask Us Anything" about understanding ocean noise. Former Nancy Foster Scholar Dr. Nyssa Silbiger and her colleagues Piper Wallingford and Savannah Todd have spent the summer researching intertidal organisms in West Coast national marine sanctuaries -- all out of a camper van. Sport divers on the M/V Fling, diving in the Gulf of Mexico 100 miles offshore of Texas and Louisiana, were stunned to find green, hazy water, huge patches of ugly white mats coating corals and sponges, and dead animals littering the bottom on the East Flower Garden Bank, a reef normally filled with color and marine life. The reef, which is part of Flower Garden Banks National Marine Sanctuary, is considered one of the healthiest anywhere in the region. For 13 years, Greater Farallones National Marine Sanctuary (GFNMS) and Cordell Bank National Marine Sanctuary (CBNMS) scientists have collaborated with Point Blue Conservation Science to collect data on the ocean ecosystem. The project, Applied California Current Ecosystem Studies (ACCESS), examines how ocean conditions influence the tiny plants and animals at the base of the ocean food chain which in turn drive the abundance and distribution of top level predators like seabirds and marine mammals. ACCESS data from 13 years of sampling reveal a dynamic ocean influenced by both large scale phenomenon like El Niño, and local conditions such as strong winds that bring cold, nutrient rich water to the surface through “upwelling”. This summer, NOAA's Office of National Marine Sanctuaries is teaming up with the Ocean Exploration Trust to explore the marine ecosystems of the West Coast. Working aboard the E/V Nautilus and utilizing remotely operated vehicles (ROVs), scientists will map and explore targets throughout the U.S. West Coast, from Canada to southern California, including five national marine sanctuaries. If you're a marine animal, odds are good that you need sound. From whales to small invertebrates like shrimp, many marine organisms rely on sound and hearing for their survival. Sound is the most efficient means of communications over distance underwater, and is the primary way that many marine species gather and understand information about their environment. The government alone cannot sustain healthy oceans. It takes stewardship and community involvement in order to combat today’s pressures on our marine ecosystems. 
The Greater Farallones National Marine Sanctuary uses seabirds as sentinel indicators of the health of our coastal ecosystem, through long-term monitoring of live and dead seabirds.
https://sanctuaries.noaa.gov/notes/
Yesterday’s Town Hall meeting at Facebook produced one big surprise – the tepid questioning fired at President Obama from America’s tech elite. Tapscott’s take on the current state of the economy? Tapscott’s view of the economy is pretty comprehensive – autos, energy, pharmaceuticals were just the start of our conversation. A lot of his commentary is a negative take on how we do things. And his answers are a little one-sided – we need to move to a new paradigm, mass collaboration. But there’s a huge amount of sense in what I see as his central message. It’s time to own up on how inefficient many of our systems and processes are. And how interconnected the failings. Phi Beta Iota: Inefficiency is a form of corruption. Integrity eliminates inefficiency because integrity embraces clarity, diversity, and constant innovation in the face of constant recognition of truthful feedback. Hence, the West generally and the USA specifically is doubly-corrupt: corrupt in government, which has become a bloated patronage system for special interests, borrowing one third of its annual budget “in our name;” and corrupt in industry, which has been allowed to commit virtual suicide “in our name” by being allowed to waste resources, over-charge for unneeded products and services, and so on. As this web site has emphasized, deep information is paradigmatic. Obama is still working on multiple failed paradigms across the twelve policy domains in the Strategic Analytic Model.
https://phibetaiota.net/2011/04/obama-and-facebook-clueless-on-innovation/
The 50 metre motor yacht FB215 has been listed for sale by Jan Jaap Minnema at Fraser. Built in steel and aluminium by Italian yard Benetti to a design by Stefano Natucci, FB215 was delivered in 1996 as the first in the yard’s Golden Bay series and had a full refit in 2016. Her interior styling was originally by Francois Zuretti and features the use of exceptional materials. The master suite is still in the original style, while during her rebuild in 2016, Adriel Design gave the rest of her interior a completely new and fresh styling. Accommodation is for up to 12 guests in six cabins consisting of a master suite and VIP suite on the main deck, while below deck are two doubles and two twins. All guest cabins have entertainment centres and en suite bathroom facilities, and six further cabins can sleep up to 11 crewmembers aboard this yacht for sale. Double glass sliding doors lead into the main saloon, boasting a white oak floor with a silk carpet, a U-shaped leather seating area and an entertainment centre featuring a large 75-inch Samsung television screen. Upstairs, the skylounge has two large sofas with two coffee tables, a card table, a high top breakfast table with four chairs, and a bar with a wine cooler and two sinks. To starboard forward is an entertainment centre including a Samsung 54-inch Smart television screen. Twin 2,226 MTU diesel engines power her to a maximum speed of 14 knots and a range of 3,500 nautical miles at 12 knots. Lying in Dubai, FB215 is asking €12,500,000.
https://www.boatinternational.com/yacht-market-intelligence/brokerage-sales-news/50m-benetti-motor-yacht-fb215-for-sale--36233
Risk factors for pulmonary tuberculosis: what are they?

Pulmonary tuberculosis is an infectious disease caused by mycobacteria, mainly affecting the lungs. Tuberculosis is transmitted by airborne droplets, and the pathogens themselves can remain active for a long time outside the human body. This disease is called a social problem because its spread is mainly due to lack of hygiene, contamination, lack of timely treatment and failure to disinfect the premises where the patient has been. For tuberculosis to develop, two groups of factors must coincide: infection, and the development of the disease in the lung tissue. Not every inhabitant of the planet is at risk of contracting this disease; it spreads extensively, but not universally. Nor does everyone whose body the tuberculosis pathogen has entered fall ill; for example, those vaccinated with BCG have a very small risk of getting sick. The main risk factors for developing tuberculosis are listed below.

Risk of infection
The main danger for tuberculosis infection is the rooms where patients stay for a long time, especially if a potentially infected person stays in them for a long time and hygienic measures are not properly observed. In addition, the risk of getting infected is higher in those people whose susceptibility to infection is, for whatever reason, higher than average. The risk groups here are relatives or roommates of a patient with tuberculosis, sometimes without knowing it, such as residents of a communal apartment, dormitory, nursing home, etc. Also in potential danger are prisoners and workers of correctional facilities, and medical workers (mainly employees of TB dispensaries). No less vulnerable are the most disadvantaged segments of the population: the homeless, migrants, and people addicted to drugs and alcohol. All these people become vulnerable to illness due to poor living conditions, namely the absence of:
- normal housing (lodging in doss houses or abandoned buildings);
- good nutrition;
- hygiene products;
- free space (i.e. high density of people in the living space).
In addition, the people listed often share dishes (and sometimes simply have no choice) and lack normal heating. People below the poverty line often fall victim to tuberculosis because previous (and neglected) diseases of the upper respiratory tract and systematic smoking are factors that reduce the resistance of the lungs to the causative agent of tuberculosis. Marginalised populations often lack the opportunity, or voluntarily refuse, to vaccinate their children with BCG, which in turn increases the risk of infection among children. People at potential risk of infection (those experiencing physiological stress or hypothermia, smokers, and those with hypovitaminosis) can get sick without even having direct contact with a patient. Mycobacteria kept away from sunlight (for example, in the soil or in cool, damp areas) are able to remain viable for a year and a half, so even using such a room without proper hygiene measures is potentially dangerous. Library employees are also in the risk group, since mycobacteria on the pages of books also retain long-term viability, and the microclimate of library facilities contributes to this.
In addition to the immediate threat of inhaling the causative agent of tuberculosis, library workers are also potentially vulnerable because of the action of book dust on their bronchi, which irritates the mucous membranes in much the same way as smoking.

Risk factors for the development of the disease, and how to reduce their impact
The main risk factors for tuberculosis are:
- primary infection;
- reduced immunity.
Immunity may be decreased for various reasons:
- HIV infection;
- long-term hormonal, chemo- and radiation therapy;
- prolonged uncontrolled intake of antibiotics;
- recent illnesses, etc.
In addition, a significant role is played by factors such as:
- malnutrition;
- lack of vitamins in the diet;
- long-term smoking;
- use of drugs and alcohol.
The risk of getting sick again is also present in people who have had tuberculosis in the past and undergone treatment. The development of the disease is more likely in patients whose immediate relatives were ill with tuberculosis (not only because of genetic factors, but also because of the social factors mentioned earlier). In addition, some factors of infection are also factors of the disease. For example, the continued presence of the pathogen in a room after infection aggravates the patient's condition and leaves the immune system no chance to fight it off; the patient will simply keep picking up mycobacteria from the environment. Cold rooms, lack of sun and overcrowding also do not contribute to recovery. That is why the risk of getting sick and dying from tuberculosis is so high in prisons: a large number of people are simultaneously exposed to unfavorable conditions, and the hygiene of the premises is sometimes not at the proper level. In order to reduce the impact of negative factors, it is necessary to adhere to certain rules regarding one's way of life:
- timely vaccination and revaccination of children;
- regular Mantoux tests;
- being informed about how tuberculosis is transmitted.
Teachers at schools, health workers and business leaders should hold regular staff meetings at which employees are informed about various socially dangerous diseases. However trite it may sound, sticking to a healthy lifestyle is extremely important in order to avoid tuberculosis. Diagnosing diseases in time, treating them promptly and properly, eating right and giving up smoking: these are the steps everyone who does not want to fall into the risk group should take. The activity of state services is also important. For example, access to medical care, trust in medical workers, and the quality of their services play an important role in the timely detection of the disease. Medical personnel must observe hygiene measures on the premises, correct dosing of disinfectant solutions and timely cleaning. The buildings of tuberculosis dispensaries and other health facilities must comply with sanitary standards: mold, dampness and other provoking factors are unacceptable. The state needs to provide support for the most vulnerable segments of the population:
- ensure decent accommodation for people without a fixed place of residence;
- establish medical assistance in prisons and drug treatment facilities;
- provide social supervision for disadvantaged families;
- take measures to curb the littering of shared living quarters.
In turn, homeowners should be responsible for hygiene on their premises.
The fight against tuberculosis should include a set of social and medical measures, and responsibility for this problem should equally be borne by civil servants and civilians.
https://medhelpsis.com/en/posts/20153
- Strategic career development – A focus on training and continuing education will help HR departments retain top talent, while simultaneously making them more adept at completing their daily tasks.
- Automation – Tedious and error-ridden tasks can be delegated to computer automation to streamline efficiencies and ensure that HR professionals remain focused on helping their employees flourish.
- Leveraging the latest technology – HR can and should be a driving factor for organizations that want to partake in ROI-enhancing digital transformations.
- Creating a better user experience – Through the right automated HR software, communications can be streamlined, mobile-first capabilities can be implemented, and heightened levels of productivity can be achieved.
- Transparency as an incentive – The most innovative organizations realize that transparency is key for employees who need to achieve a common goal.
- Awareness in the face of new regulations – The federal government will keep HR teams on their toes with new regulations that are designed to protect the rights of everyone and safeguard sensitive data.
https://wonderlic.eoi.digital/blog/human-resources/human-resource-management-wonderlic/
The epic dust bunnies lurking within our galaxy are so dark and dense that they cast the darkest shadows ever observed. Dust marks out cycles of galactic creation and destruction in our cosmos. Grains of carbon and silicon condense in the swollen outer layers of dying stars, or perhaps in the violence of supernova explosions. The grit is blasted out into the surrounding galaxy, where gravity encourages gas to condense around it, forming clouds that eventually collapse and ignite into new generations of stars. Dust cannot be seen with conventional optical telescopes because it blocks out visible light. In this composite image from NASA's infrared Spitzer Space Telescope, a particularly dark and dense dust cloud in the Milky Way's centre is visible only by the infrared light it blocks from stars behind it. The black shadow it casts – the darkest that NASA has ever recorded – is enough to estimate the cloud's mass at 70,000 times the mass of our sun, spread across a region 50 light years in diameter. That suggests a briefly glittering future for this galactic cranny. Such a dense cloud would be expected to erupt into a vast clump of giant white-blue stars that will live fast and die young. In their death throes, they will start the cosmic dust-to-dust cycle once again.
https://www.newscientist.com/article/dn25622-cosmic-dust-bunnies-cast-the-darkest-known-shadows/
What is the Second Palestine International Water Forum 2020?
The Second Palestine International Water Forum will be held under the theme "Water Security in Dynamic Environments, Together...We Can". It is the first time this intricate linkage between water security and dynamic environments has been made, considering that water security encapsulates complex and interconnected challenges and highlights water's centrality to achieving a sense of security, sustainability, development, and human well-being, from the local to the international level. Many factors contribute to water security, ranging from natural to institutional, economic, political, social, financial, and of course technological factors; many of these lie outside the water realm, and all are changing at an accelerating pace. Water security therefore lies at the center of many security areas, each of which needs integrated and proactive action. Addressing water security thus requires interdisciplinary collaboration across sectors, communities, and political borders, and this is where the "Together We Can" part of the theme comes from.
The Second Palestine International Water Forum 2020 is intended as a high-profile international water event that supports finding applicable solutions for water security and reliable responses to dynamic changes at the local, national, and international levels. For the discussion to operate effectively and for collaboration to grow, participation will be diverse and will include government representatives, policy makers, businesses, international organizations, local authorities, donors, civil society organizations, academic institutions, experts, researchers, and the media.
Considering the fact that positive impacts require continuity, the Second Palestine International Water Forum will capitalize on the outputs of the First Palestine International Water Forum. The first forum was held under the theme "Integrated Water Resource Management: Best Practices and Technology Transfer"; IWRM is a main keystone of achieving water security, and both forums aim at achieving SDG 6 in the context of the 2030 Agenda.
The Second Palestine International Water Forum will be integrated and coordinated with major regional and international water events, mainly the Ninth World Water Forum (Senegal, March 2021) and the Fourth Arab Water Forum. It will be an opportunity to encourage dialogue in the region and among stakeholders, to examine and establish new, effective participation in these events, and to take the region's specific water security issues to the world.
https://www.piwf.pwa.ps/en/article/31/What-is-the-Second-Palestine-International-Water-Forum-2020
Now, as Covid hits the two-year mark next month and restrictions are starting to lift, there has been a collective sigh of relief. However, we are beset by another global catastrophe, Russia's invasion of Ukraine. Coverage of the war can be especially difficult, given that so many Americans originally fled war-torn countries to live safely in the United States. Many will be re-traumatized as they watch the citizens of Ukraine abruptly leave their country to seek solace in other parts of the world. Given the slow-down of the last two years, people were forced to rethink their lives, create more work-life balance and set new goals. Now that we are finally coming out of the woods with respect to the pandemic, we have to cope with another major event. We worry knowing that it is easier to start and sustain a war after a period of destabilization, as historians have noted. For instance, World War II started soon after the Great Depression. Even though the crisis is on the other side of the world, there are many immigrants from Russia and Ukraine in our cities, thus making us feel closer to that part of the world. The recent invasion has shaken our sense of security just as we were starting to feel secure again. Where do we go from here and what do we do now? Some ideas are as follows:
- We can start by focusing on gathering information from credible news sources.
- We may want to set some boundaries, such as keeping the TV off and closing news briefs and apps on our phones.
- It can be helpful to spend time with friends and family and focus on other matters such as your activities, work and plans this summer. You may want to make an agreement with friends and family that you will limit your discussion of the war.
- Focus on a consistent routine, as this may help you to feel that you have control over your life rather than worrying about matters you don't have control over. Once you return to reading about the pandemic or the war, you may feel calmer and more able to manage.
- It's important to ensure that you are getting 3 meals a day, some snacks, plenty of water, rest, and exercise if that feels helpful for you.
- Try to avoid maladaptive coping strategies like drugs, alcohol, overeating, and overspending.
- Postpone any big decisions until you feel you are in the right "frame of mind" to make them.
- If you feel sad or angry, try to speak to a trusted friend or professional. It is better to allow yourself to feel any emotions that come up. If you don't have a counselor, Talkspace is a good place to start.
- If you have children or are in a caregiving role to children, make sure they know that you are available and open to answer any questions that they may have.
One of the most healing things that you can do is to reach out to organizations that are assisting people in Ukraine or refugees who are seeking asylum elsewhere. If you want to help out closer to home, many people within our own community have needs and can use support during these challenging times. Here are some organizations that you can support:
- Ukraine Humanitarian Fund
- Ukrainian Red Cross
- Save the Children
- Unicef Ukraine Crisis Appeal
These are tough times and I hope my tips provide some help as we all navigate what's to come in the future. Kay Gimmestad, LCSW-C, is a business coach and clinician in New York City with 20 years of experience working in the for-profit and not-for-profit sectors of Human Resources, Health and Human Services.
She has built a reputation for being highly skilled in facilitating behavior change while working with employees, both individually and in groups, on matters relating to performance management, substance abuse, crisis intervention, and stress/wellness.
http://www.kaygimmestad.com/blog/war-has-begun-more-trauma-and-uncertainty
Which is pound weight?
Pound, unit of avoirdupois weight, equal to 16 ounces, 7,000 grains, or 0.45359237 kg.
What is the pound symbol in weight?
The international standard symbol for the avoirdupois pound is lb; alternative symbols are lbm (for most pound definitions), # (chiefly in the U.S.), and ℔ (specifically for the apothecaries' pound). The unit is descended from the Roman libra (hence the abbreviation "lb").
How many kg is 1 pound?
One pound is equal to about 0.4536 kg.
Where did the pound weight come from?
The standard measure of weight was the pound (lb). The abbreviation 'lb' comes from the Latin word for pound, 'libra', which was also used for the monetary pound (£). Pounds were divided into ounces (oz).
Is it 1 lb or 1 lbs?
"Pound" and "lbs." are essentially the same thing. The pound is the actual unit of measurement, while "lbs.", which stands for libra, is the common abbreviation used in expressing pounds. The correct way of abbreviating singular or plural pounds is "lb."
How do you spell lbs?
Short question: why is the weight unit "pounds" spelled "lbs"? "lb" is short for "libra", which is Latin for "pound". The modern plural is "lbs". By the way, "lbs" isn't the spelling of the word, it's the abbreviation.
How do you abbreviate pounds in a sentence?
- lb. (singular)
- lbs (plural)
Is lb a weight or mass?
The international standard symbol for the pound as a unit of mass is lb. In the "engineering" systems, the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound of mass exerts roughly one pound of force due to gravity.
Why does America use pounds?
Why does the US use the imperial system? Because of the British, of course. When the British Empire colonized North America hundreds of years ago, it brought with it the British Imperial System, which was itself a tangled mess of sub-standardized medieval weights and measurements.
How much is a Roman pound?
The libra was the basic Roman unit of weight; after 268 BC it was about 5,076 English grains, equal to 0.722 pounds avoirdupois (0.329 kg). This pound was brought to Britain and other provinces, where it became the standard for weighing gold and silver and for use in all commercial transactions.
Which is more, 1 kg or 1 lb?
Both the pound and the kilogram are units of mass or weight. A pound is an imperial unit of mass or weight. A kilogram (kg) is about 2.2 times heavier than a pound (represented as lb); thus, one kilogram of mass is equal to about 2.2046 lb.
How can I lose 1 pound per day?
You need a deficit of roughly 3,500 calories to lose one pound, and routine daily activity burns anywhere between 2,000 and 2,500 calories. Losing a pound in a single day would therefore mean eating almost nothing and exercising enough to burn the remaining calories.
How can I calculate weight?
Weight is a measure of the force of gravity pulling down on an object. It depends on the object's mass and the acceleration due to gravity, which is 9.8 m/s² on Earth. The formula for calculating weight is F = m × 9.8 m/s², where F is the object's weight in newtons (N) and m is the object's mass in kilograms.
What is the pound symbol called?
The pound sign £ is the symbol for the pound sterling – the currency of the United Kingdom and previously of Great Britain and of the Kingdom of England. The same symbol is used for other currencies called pound, such as the Gibraltar, Egyptian, Manx and Syrian pounds.
Why is it called the pound key?
In the US it's often called the pound key, because it has long been used to mark numbers related to weight, or for similar reasons the number sign, which is one of its internationally agreed names. Elsewhere it is commonly called hash, a term dating from the 1970s that may have been a popular misunderstanding of hatch.
What is oz weight?
The ounce is a unit of weight in the avoirdupois system equal to 1/16 pound (437 1/2 grains), and in the troy and apothecaries' systems equal to 480 grains, or 1/12 pound. The avoirdupois ounce is equal to 28.35 grams and the troy and apothecaries' ounce to 31.103 grams.
Will the US ever go metric?
The United States has official legislation for metrication; however, conversion was not mandatory and many industries chose not to convert, and unlike other countries, there is no governmental or major social desire to implement further metrication.
Why do Americans use Fahrenheit?
Fahrenheit is a scale used to measure temperature based on the freezing and boiling points of water. Water freezes at 32 degrees and boils at 212 degrees Fahrenheit. It is used as a measure of hotness and coldness.
Which president stopped the metric system?
The Metric Board was abolished in 1982 by President Ronald Reagan, largely on the suggestion of Frank Mankiewicz and Lyn Nofziger.
What is a pound of thrust?
A "pound of thrust" is equal to a force able to accelerate 1 pound of material at 32 feet per second per second (which happens to be the acceleration provided by gravity).
Which is the unit of weight?
The SI unit of weight is the same as that of force: the newton (N) – a derived unit which can also be expressed in SI base units as kg⋅m/s² (kilograms times metres per second squared).
What is the full form of lbs?
LBS stands for pound-mass or pound. It is derived from the Latin word libra and is represented by 'lb' or 'lbs'. It is an international term used to define the weight or mass of an object. "Pound" comes from the Latin phrase libra pondo, meaning 'a pound by weight'. The United States and Commonwealth countries have agreed on common definitions of the pound and the yard.
How is weight written?
There are two common abbreviations for weight: wt. and wgt.
How do you write 40 pounds?
40 (forty) is the number that follows 39 and precedes 41. Though it's related to the number "four" (4), the modern spelling of 40 is "forty." The older form, "fourty," is treated as a misspelling today. The modern spelling could reflect a historical pronunciation change.
What is the difference between pound and pound-force?
"Lb" is a unit of mass, while "lbf" (pound-force) is a unit of force: the gravitational force exerted on one pound of mass at the Earth's surface. A pound-force equals the product of one pound of mass and the standard gravitational acceleration. The two are numerically similar on Earth, which is why they are often treated as interchangeable in everyday use.
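The conversions and the weight formula quoted above can be pulled together in a short sketch. This is only an illustration: the function names are my own, the factor 0.45359237 kg per avoirdupois pound is the standard definition, and g = 9.8 m/s² is the rounded value used in the answers above.

```python
# Unit conversions and the weight formula quoted above, as a small sketch.
# 1 lb = 0.45359237 kg is exact by definition; g = 9.8 m/s^2 as in the text.

LB_TO_KG = 0.45359237   # international avoirdupois pound, in kilograms
G = 9.8                 # gravitational acceleration used in the article, m/s^2

def lb_to_kg(pounds: float) -> float:
    """Convert avoirdupois pounds to kilograms."""
    return pounds * LB_TO_KG

def kg_to_lb(kilograms: float) -> float:
    """Convert kilograms to avoirdupois pounds (~2.2046 lb per kg)."""
    return kilograms / LB_TO_KG

def weight_newtons(mass_kg: float) -> float:
    """Weight F = m * g, in newtons, for a mass given in kilograms."""
    return mass_kg * G

if __name__ == "__main__":
    print(f"1 lb = {lb_to_kg(1):.4f} kg")        # ~0.4536 kg
    print(f"1 kg = {kg_to_lb(1):.4f} lb")        # ~2.2046 lb
    print(f"70 kg weighs {weight_newtons(70):.1f} N on Earth")
```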
https://miniexperience.com.au/which-is-pound-weight
What is the Moral argument for the existence of God?
The moral argument begins with the fact that all people recognize some moral code (that some things are right, and some things are wrong). Every time we argue over right and wrong, we appeal to a higher law that we… Read More ›
PHILOSOPHY & RELIGION
Both philosophy and religion are widely used concepts, yet defining them accurately can be challenging due to the wide range of meanings that each has acquired over time. Because religion is a derivative of an ancient Latin phrase that referred… Read More ›
Life Worth Living
Finding meaning in our journey through life calls for establishing a foundation that provides meaning and a framework that is equal and fair to all.
BEHOLD THE MAN SERIES: TRUTH
What is Truth? This is the question that has plagued humanity since our very existence. We seem disoriented and lost as we struggle through life, seeking answers to our purpose and the meaning of life. Is this all there is to… Read More ›
Distinguishing Atheism, Theism, and Agnosticism
What if it can be shown that there is a reasonable alternative to traditional Christian theism (on the one hand) and atheism (on the other)? Among other things, this would suggest that the defeat of orthodox Christian theism does not necessarily and in itself spell doom for theism.
The Argument Against Atheism
Free will is the notion that we independently undertake our actions and choices, and therefore are responsible for them. But at some point during our stay on the planet, we come upon the question of whether we are the true authors of our own actions. Is free will only an illusion? Are we unwitting pawns in a world of "determinism" – a world where we take action not because of our own volition but because of all of the circumstances that led to that moment?
https://morallife.foundation/tag/atheist/
KUSUMA, ADNAN (2010) LIU XING'S AMBITION TO BE A SCIENTIST IN CHEN SHI-ZHENG'S DARK MATTER (2008) MOVIE: A PSYCHOANALYTIC APPROACH. Skripsi thesis, Universitas Muhammadiyah Surakarta.
Abstract
The major problem of this study is how ambition is reflected in the major character's personality. The aim of this study is to analyze Chen Shi-Zheng's Dark Matter based on its structural elements and on the development of the major character's personality using a psychoanalytic approach. This study focuses on the major character, Liu Xing. The benefit of this study is to give additional information which can be used by other literature researchers who are interested in analyzing this movie. This study uses two data sources, namely a primary and a secondary data source. The primary data source is the movie Dark Matter, while the secondary data are taken from other sources of information as needed. The method of data collection is qualitative. The analysis starts by examining Liu Xing's id, ego, and superego. Finally, the writer analyzes Liu Xing's character using defense mechanisms. Based on the analysis, the writer draws the following conclusions. Firstly, the structural elements of the movie present a unity. Chen Shi-Zheng gives the character a particular characterization. The plot of the story can be categorized as a traditional plot consisting of the rise of complication, climax, and resolution. Dark Matter begins with exposition, followed by conflict and climax, and ends with resolution. Secondly, approached psychoanalytically, Dark Matter is a very interesting movie to study, which is manifested in the psychoanalytic analysis of its character. The major themes of the movie are human ambition and human frustration. The movie shows the conflict among the id, ego, and superego: how the superego gives way to the id, and how the ego pursues the ambition of becoming a scientist.
http://eprints.ums.ac.id/9956/
At LDSM, we recognise the contribution of Physical education to a child's physical, social, emotional and mental health and wellbeing; we aim to engage all pupils in physical activity to promote development in all areas. Pupils are encouraged to take part in team games which inspire team spirit and good sportsmanship whilst developing a sense of fairness. Lessons provide opportunities to enhance social development through team work as well as cognitive development through decision making within game scenarios. Through lesson time, specialist coaching and our broad range of sports clubs, activities are planned to build upon prior learning; opportunities are provided for all abilities to develop their skills, knowledge and understanding. There is planned progress throughout each unit which increases the challenge as children move up through the school. When inside or outside, all activities offer appropriate physical challenge using a wide range of resources to support appropriately.
Children will:
- participate in competitive sports and apply attacking and defending techniques;
- develop flexibility, strength, control and balance;
- use a range of movements in sequences;
- evaluate their performances in order to improve and achieve their best.
Cross curricular links are made throughout lessons to various other subjects.
- Literacy can be included through instructions, reporting or making recount of an activity or skill.
- Science is included through learning about body parts, diet, keeping healthy and pulse rates.
- Maths is incorporated through the use of shape, position and direction as well as counting and measuring.
At LDSM, safe practice is our priority. In order to minimise the risk of injury, children should wear shorts/tracksuit bottoms and t-shirts. Plimsolls or trainers are to be worn for outdoor games, together with a jumper and tracksuit bottoms when cold. No jewellery is allowed, including earrings and watches. Hair has to be tied up. Whilst any child who does not have the correct attire or cannot abide by the above cannot partake in PE that week, this is rare, as we are very proud that children at LDSM love their time in PE and Sports lessons! For more information on uniform and other guidelines relating to PE, see 'School Life.'
PE Curriculum Overview
| | Autumn 1 | Autumn 2 | Spring 1 | Spring 2 | Summer 1 | Summer 2 |
| Year 3 | Dodgeball, Health & Fitness, Hockey | Basketball, Tag Rugby, Hockey | Swimming, Indoor Athletics | Swimming, Gymnastics | Striking & Fielding, Football | Athletics (Track/Field), Dance |
| Year 4 | Hockey, Health & Fitness | Hockey, Tag Rugby | Indoor Athletics, Football | Gymnastics, Cricket | Netball, Dance | Athletics (Track), Hockey |
| Year 5 | Health & Fitness, Hockey | Football, Hockey | Indoor Athletics, Tag Rugby | Gymnastics, Tennis | Cricket, Netball | Athletics (Track/Field), Dance |
| Year 6 | Health & Fitness, Basketball, Hockey | Tag Rugby, Indoor Athletics, Hockey | Hockey, Gymnastics | Football, Cricket | Rounders, Netball | |
https://www.longdittonstmarysschool.co.uk/physical-education/
Planning considerations for your Xsan SAN design
The following considerations might help improve your SAN design decisions.
How much storage?
Because it's easy to add storage for user data to an Xsan SAN, you only need an adequate starting point. You can add storage later as needed. However, you can't add storage for journal data, so try to allocate enough space for journal data right from the start. You can add an entire storage pool for metadata and another storage pool for journal data.
Workflow considerations
How much file sharing is required by your users' workflow? For example, if different users or groups work on the same files, simultaneously or in sequence, store those files on a single volume to avoid needing to maintain or hand off copies. Xsan uses file locking to manage shared access to a single copy of the files.
Performance considerations
If your SAN supports an app (such as high resolution video capture and playback) that requires the fastest possible sustained data transfers, design your SAN with these performance considerations in mind:
- Set up the LUNs (RAID arrays) using a RAID scheme that offers high performance.
- To increase parallelism, spread LUNs across RAID controllers. Xsan then stripes data across the LUNs and benefits from simultaneous transfers through two RAID controllers.
- To increase throughput, connect both ports on client Fibre Channel cards to the fabric.
- For clients using Xsan 5 and DLC, real-time operations should be done over a Fibre connection.
- Store file system metadata on a separate storage pool from user data and make sure the metadata LUNs aren't on the same RAID controller as user data LUNs.
- You can use a separate storage pool for journal data when you create a new volume. This significantly improves performance for some operations, such as creating and deleting files.
- Use a second Ethernet network (including a second Ethernet port for each SAN computer) for SAN metadata.
- If all computers on your SAN are Mac computers, enable Extended Attributes for your volumes to eliminate the overhead of file information being stored in multiple hidden files.
Availability considerations
If high availability is important for your data, set up multiple metadata controllers to accommodate metadata controller failover. Also, consider setting up dual Fibre Channel connections between each client, metadata controller, and storage device using redundant Fibre Channel switches.
Security considerations
If your SAN supports projects that must be secure and isolated from each other, create separate volumes for each project and set appropriate ACLs on the volume to eliminate any possibility of the wrong client or user accessing files stored on a volume.
As the SAN administrator, you control which computers are SAN clients. Users whose computers aren't SAN clients or controllers can't browse for or mount SAN volumes. However, you can't control which Xsan computers can use a volume. Users whose SAN computers have macOS or macOS Server can mount all SAN volumes themselves. You can also set up access control lists (ACLs) in the Server app or assign user and group permissions to folders using standard file access permissions in the Finder.
Choose RAID schemes for LUNs
Much of the reliability and recoverability of data on a SAN is provided not by Xsan, but by the RAID arrays you combine to create storage pools and volumes. Before you set up a SAN, use the RAID system configuration or administration software to prepare LUNs based on specific RAID schemes.
WARNING: Losing a metadata controller without a standby can result in the loss of all data on a volume. A standby controller is strongly recommended.
WARNING: If a LUN in the metadata storage pool fails and can't be recovered, all data on the volume is lost. It's strongly recommended that you use only redundant LUNs (LUNs based on RAID schemes other than RAID 0) to create Xsan volumes.
LUNs configured as RAID 0 arrays (striping only) or LUNs based on single drives are difficult or impossible to recover if they fail. Unprotected LUNs such as these should be used only in storage pools that store scratch files or other data that you can afford to lose.
Most RAID systems support all popular RAID levels. Each RAID scheme offers a different balance of performance, data protection, and storage efficiency, as summarized in the following table.
| RAID level | Storage efficiency | Read performance | Write performance | Data protection |
| RAID 0 | Highest | Very High | Highest | No |
| RAID 1 | Low | High | Medium | Yes |
| RAID 3 | High to very high | Medium | Medium | Yes |
| RAID 5 | High to very high | High | High | Yes |
| RAID 0+1 | Low | High | High | Yes |
Decide on the number of volumes
A volume is the largest unit of shared storage on the SAN. If users need shared access to files, store those files on the same volume. This makes it unnecessary for them to pass copies of the files among themselves. However, if security is critical, remember you can't control client access by unmounting volumes on Xsan clients. Users whose computers have macOS or macOS Server can mount SAN volumes themselves. For a typical balance of security and shared access, create one volume and control access with folder access privileges or ACLs in the Server app.
Decide how to organize a volume
You can help users organize data on a volume or restrict users to specific areas of the volume by creating predefined folders. You can control access to these folders by assigning access permissions using the Server app.
Choose metadata controllers
You must choose at least one computer to be the SAN metadata controller, the computer which is responsible for managing file system metadata.
Note: File system metadata and journal data are stored on the SAN volume, not on the metadata controller itself. See "Store user data with metadata and journal data" below.
If high availability is important for your data, set up multiple metadata controllers to accommodate metadata controller failover. If performance is critical, don't run other server services on the metadata controller and don't use the controller to reshare a SAN volume using AFP or NFS.
Estimate metadata and journal data storage needs
The metadata and journal data that describe a volume are stored not on the volume's metadata controller, but on the volume. Metadata is stored on the first storage pool in the volume. Journal data can be stored on any storage pool in the volume. You must have only one storage pool with journal data.
To estimate the amount of space required for Xsan volume metadata, assume that 10 million files on a volume require approximately 10 GB of metadata on the volume's metadata storage pool. The journal requires between 64 KB and 512 MB. Xsan configures a fixed size when you create a volume. Due to the small size, you can use a single RAID 1 LUN for the journal storage pool. To maximize the performance benefit of a separate journal storage pool, dedicate entire physical disks to the RAID 1 LUN.
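As a rough illustration of the sizing rule above (about 10 GB of metadata per 10 million files, plus a journal that Xsan fixes between 64 KB and 512 MB), here is a minimal sketch. The function name and the 20% headroom factor are my own assumptions, not an Apple-provided formula.

```python
# Rough Xsan metadata sizing sketch based on the rule of thumb quoted above:
# ~10 GB of metadata per 10 million files. The headroom factor and function
# name are illustrative assumptions only.

GB_PER_10M_FILES = 10     # rule of thumb from the text
JOURNAL_MAX_MB = 512      # the journal is fixed between 64 KB and 512 MB

def estimate_metadata_gb(expected_files: int, headroom: float = 1.2) -> float:
    """Estimate metadata storage (GB) for a volume, with optional headroom."""
    base = expected_files / 10_000_000 * GB_PER_10M_FILES
    return base * headroom

if __name__ == "__main__":
    for files in (1_000_000, 25_000_000, 100_000_000):
        print(f"{files:>11,} files -> ~{estimate_metadata_gb(files):.1f} GB metadata "
              f"(journal: at most {JOURNAL_MAX_MB} MB)")
```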
Store user data with metadata and journal data
Although it's possible to create a volume with only one storage pool (containing metadata, journal and user data), it isn't recommended if performance is of concern.
https://support.apple.com/en-mt/guide/server/apde32ad52b2/mac
Writing a blog post is easy; writing a book is very, very difficult by comparison. My first book, An Englishman in Nepal, took a long time despite the fact that I had a lot of notes, diaries and previous blog posts from the last 10 years to use as content. I suppose it all goes to show that the well worn adage "content is king" only kicks in AFTER your book is written AND published! Now that I'm writing my second book, about wine, I'm finding it a lot easier, which means that another old adage, "without changing there has been no learning", is perfectly true. I used to drum this into our team of education consultants in Kathmandu week after week, initially badgering them to hold weekly team learning reviews every Friday afternoon to get them each to commit to implementing the following week what they said they had learned in the previous week. It's too easy to say that accumulating knowledge is learning. It isn't! Changing behaviour IS learning. Writing my first book I learned how to use some new tools, particularly KDP (Kindle Direct Publishing), Amazon's self-publishing platform, and Scrivener, an app or programme for "writing" that goes way beyond using Microsoft Word or Apple Pages. Then, connecting your manuscript in Scrivener with the Amazon platform is the Create Space tool that converts the manuscript into the required format and gives you lots of options for adapting chapter headings, adding images, changing fonts, and a lot more. What Create Space is doing is showing you what your book will actually look like ……. in whatever book format and size you choose. Reflecting on the time it took me to write that first book, despite having most of the content, I was horrified to have to admit that my weakness was organisation and planning! Here I am, a retired Organisation Psychologist, highly trained in a technique known as "systems thinking", who'd just written a book on the hoof, so to speak, wondering why it had taken so long. I had no clear vision of an end point, no structure or skeleton of chapters or sections, no plan for what would go best in each chapter or how to link them together. All I had done was create a list of 10 topics, then write them as ideas came into my head, occasionally using previous blog posts or articles I had written elsewhere. Ridiculous! To make matters worse, my reflection showed me that I had grossly underused Scrivener, completely missing its massive range of features because I was using it as a simple word processor. Wrong! My only excuse was that I was using the free version of Scrivener on my iPad which was "feature lite" compared to the Mac version for laptops. So, I bought the "paid" version of Scrivener for my MacBook, not cheap at £47, but considering I use it every day that works out at 15p a day spread over a year. Having followed various tutorials and played about with features and sections, I created a project just called Wine Book and started to create an Outline Plan which has 27 sub-sections which are potential chapters, sections, or standalone pages of the book. Each one has a brief sentence or two saying what that chapter is about. Next I created a Research Folder which has sections roughly corresponding to the chapters of the book. In each of these sections I paste web links, articles, book notes and highlights, ideas, and pieces of prior written content. Everything is placed in there either randomly as I think of things, or in a focused way because I'm deliberately researching a topic.
Finally I have a draft manuscript folder in which I place chapters as I write them, using items from the relevant Outline Plan and Research folders. None of the chapters are numbered yet, and each is a separate document in the manuscript. This may seem strange at first, but it gives you the option to move chapters around into any sequence you wish, though that clearly works only for non-fiction books! Lastly, the best feature is called the "compile" instruction, in which you identify when and how you want the final manuscript to be put together into the final format and structure. It's then ready for interfacing with the Create Space tool of Amazon, whether you want a Kindle digital version of your book or a print/paperback version. Brilliant! I do realise that there are some very experienced authors who follow my blog and who might have a bit of a laugh at my post here. I'm still no expert, but I do know how to learn and change ……. Mostly anyway. As for the wine book, I've now completed 16 chapters so not many to go, with any luck published in time for Christmas when plenty of wine needs to be drunk. Should make a nice present!
https://buddhawalksintoawinebar.blog/2020/09/23/the-learning-of-a-novice-author/?replytocom=6835
In the BBC's Bodyguard, Richard Madden plays a police protection officer and veteran soldier who is exhibiting signs of PTSD. In episode three he tries to strangle the woman he is supposed to be safeguarding. Later, a friend suggests he seek counselling. This image of the suffering veteran dominates modern views of the soldier experience, but was this the case in ancient and medieval warfare? Achilles, hero of the Trojan war, is commonly held to be an ancient sufferer of PTSD, thanks largely to Jonathan Shay's Achilles in Vietnam about the psychological damage caused by war, while Epizelus' spontaneous blindness at the Battle of Marathon (490 BC) is often cited as another example. So popular is the contemporary idea that PTSD was common in the ancient world that ancient plays are now being used to help modern veterans. In May 2017, more than 100 servicemen and veterans watched extracts from Sophocles' plays which portray what many see as ancient examples of PTSD, as a way of getting them to talk about their own experiences. More recently, researchers at Anglia Ruskin University in Cambridge claimed that the earliest examples of PTSD can be found in ancient Mesopotamia (present-day Iraq) 3,000 years ago. The sources they looked at described how the King of Elam's "mind changed" after years of fighting. Soldiers there had to go on campaign every three years, after which they had flashbacks and dreams about their dead comrades, symptoms now commonly ascribed to PTSD. But there is another school of thought that says the experience of the ancient soldier was not universal. His experience was a product of his culture, and therefore he was more able to deal with the traumas of war because he was conditioned to fight. Killing enemies was a glorious thing – and rather than going against what society expected, ancient warriors were fulfilling a clearly defined role.
Medieval warfare
The same has been said of knights in the Middle Ages who were trained to fight from a young age, a factor which arguably made them more resilient to the psychological impact of warfare, as did the fact that medieval society was more used to death and brutality. In 15th-century France, people certainly believed that warfare caused a kind of madness, but they differentiated between the good and the bad. Soldiers traumatised by war who went "berserk" were celebrated – while noncombatants traumatised by war were pitied or ridiculed. But killing was a sinful act, and in the Christian Middle Ages, writers wrestled with the issue of exactly when and how it was morally acceptable. They also thought about how homecoming fighters could make amends. A few years after the Battle of Hastings (1066) William the Conqueror faced rebellion from some of his own men, in part perhaps because they found the violence of his conquest had gone too far. Certainly, contemporary descriptions of the Harrying of the North, when William's troops ravaged northern England, suggest that it was particularly brutal. Indeed, the effort medieval churchmen put into thinking about ways to atone for killing in warfare suggests that there was a real awareness of the need to ritualise the return to normal life. Penances were imposed on the men who fought at Hastings in 1066. The philosopher Thomas Aquinas warned that warfare had the potential to be sinful, as in battle soldiers could get carried away and engage in savage murder which needed to be forgiven.
Even during the crusades – wars believed to be sanctioned by God and fought on his behalf – some knights came home changed by their experiences. When one chronicler described the crusaders coming home from the Third Crusade (1189-92), he told his readers that though these men “survived unharmed … their hearts were pierced by swords of sorrows from different sorts of suffering”. Some knights warned about the dangers of warfare and the toll it could take on those who fought. Geoffrey de Charny, a French knight who fought in the Hundred Years War, warned other knights in his Book of Chivalry that they would face lack of food and water, have to fight through the night and suffer many dangers. He cautioned that “when they would be secure from danger, they will be beset by great terrors”, suggesting that though they were trained for war they could be terrified by it. Fighters in the past were clearly affected by their experiences, expressed feelings of fear, shame, or anger, or otherwise suffered as a result of the psychological traumas of war, whatever those traumas might be. This does not mean that their experiences or responses were universal, or that we can judge the trauma of a 14th-century knight by the same standards as a 21st-century soldier. But it does show that trauma and distress have followed as long as humans have waged war on one another. Kathryn Hurlock, Senior Lecturer in Medieval History, Manchester Metropolitan University This article is republished from The Conversation under a Creative Commons license. Read the original article.
https://www.zmescience.com/other/we-have-accounts-of-ptsd-in-warfare-from-homer-to-the-middle-ages/
Title: The twin-method enables the assessment of real-world effectiveness and may trigger a paradigm shift in health care and health professions.
Abstract: The available standard tools for outcome assessment can measure the "proof of principle" of a new intervention under experimental study conditions. The new method of the Pragmatic Controlled Trial (PCT) is an observational TWIN-METHOD (1) that allows, first, the application of health care as usual under the unstructured conditions of the "natural chaos", where almost all patients and all doctors' strategies are different, and second, the assessment of outcomes under well-structured conditions. This "squaring of the circle" becomes possible by replacing the Randomized Controlled Trial (RCT) with Bayes' principle. All patients will be treated according to the best of the health care team's knowledge. However, each patient needs to be classified at study entry. The classification describes each patient's individual risk profile (high, intermediate, low) for each of the assessed study endpoints (e.g., survival, major side effect, monetary costs). This endpoint-specific risk classification will provide more detailed and more reliable information on the effects of treatments than RCTs. The new method will change several of our current standards in health care and health legislation. The TWIN-METHOD will be a first, specific tool of Health Services Research (HSR). HSR is a field of medicine that combines care under real-world conditions with outcomes research under well-structured but non-experimental (observational) conditions. We will get precise data not only on efficacy but also on effectiveness and efficiency. Legal decisions will no longer be based on experimental data but on real-world outcomes. To achieve these aims we need experts who can retrieve these data in out-patient clinics and hospitals. The best-suited profession to complete this challenging task may be nurses with a feeling for patient needs, medical practice, and information technology.
Reference
- Porzsolt F, Weiß C, Weiss M. Covid-19: Twinmethode zum Nachweis der Real-World Effectiveness (RWE) unter Alltagsbedingungen [Covid-19: Twin Method for Verifying Real-World Effectiveness Under Everyday Conditions]. Gesundheitswesen. 2022 Jun 23. German. doi: 10.1055/a-1819-6237. Epub ahead of print. PMID: 35738304.
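The abstract does not give an implementation, but the entry classification it describes (an endpoint-specific risk stratum per patient) and the idea of comparing real-world outcomes within those strata can be sketched roughly as follows. The cut points, field names, and the simple averaging are illustrative assumptions of mine, not part of the published TWIN-METHOD.

```python
# Illustrative sketch only: endpoint-specific risk classification at study entry,
# followed by within-stratum outcome comparison. Thresholds, field names, and the
# aggregation are assumptions for illustration, not the published TWIN-METHOD.
from collections import defaultdict
from statistics import mean

ENDPOINTS = ("survival", "major_side_effect", "monetary_costs")

def classify(risk_score: float) -> str:
    """Map a pre-treatment risk score to a stratum (illustrative cut points)."""
    if risk_score >= 0.66:
        return "high"
    if risk_score >= 0.33:
        return "intermediate"
    return "low"

def stratified_outcomes(patients):
    """Average observed outcomes per (endpoint, risk stratum, treatment).

    Each patient record is assumed to look like:
      {"treatment": "A",
       "risk": {"survival": 0.7, ...},      # entry risk score per endpoint
       "outcome": {"survival": 1.0, ...}}   # observed outcome per endpoint
    """
    groups = defaultdict(list)
    for p in patients:
        for ep in ENDPOINTS:
            stratum = classify(p["risk"][ep])
            groups[(ep, stratum, p["treatment"])].append(p["outcome"][ep])
    return {key: mean(values) for key, values in groups.items()}
```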
https://nursingresearchconference.com/program/scientific-program/2023/the-twin-method-enables-the-assessment-of-real-world-effectiveness-and-may-trigger-a-paradigm-shift-in-health-care-and-health-professions
A teacher’s work is never done, and the more we can multitask with our class activities, the better off both we and our students will be. This simple activity is perfect for the teacher who wants to cover several areas of language instruction in just one activity. In short, students choose one of their interests and create a book about it using pictures and vocabulary. It works for younger students as well as adults, and you might even have classroom resources for future use when you are done. Here’s how to create this engaging and educational experience with your students. Cover All Language Skills in an Enjoyable Way - 1 Picture This The activity starts with students looking through magazines for pictures that interest them. I hope you have a collection of classroom magazines on hand for your students to use in various activities. They are great for learning vocabulary, creating collages, and using for writing prompts, among many other activities. In this activity, students will look for pictures that appeal to them that they will later assemble into a book. It helps if you have magazines that cover topics of interest to your students. If you’re not sure that you do, invite students to bring in magazines that they like, even if those magazines are not in English. They will only be taking the pictures from the magazine and not the words. Make sure you warn them, however, that they will be cutting pictures from their magazine, so they should not bring something that they do not want to cut up. Students can work individually or in groups to choose their pictures. Have students choose between five and eight pictures that interest them and paste each one on to a separate piece of paper. Eventually they will compile these pages into a book. - 2 Identify Important Vocabulary Once your students have chosen their pictures and pasted them onto their book pages, it’s time to talk about what is in the pictures. If individual students chose their pictures, it’s best if you can talk with each person individually as well. If you had groups work together, you can have groups discuss the pictures among themselves as you move from group to group giving input. During the discussion, students should point out any important objects or elements in the pictures. You or their classmates will supply the correct vocabulary for these items, and you may want students to make a list on a separate piece of paper of these important vocabulary words. - 3 Write It Out This step is when the book really comes together. You will have to be involved in writing each book because it’s important that the spelling and grammar be without error in these class made books. Thus, this is a good step to do while your class is doing other, independent work. You will not, however, come up with the text that will be written. That is your students’ job. Sitting with each individual or group, have your student tell you what text should go on each page. It is up to your students whether they want to write a fictional piece or an informational piece (or even a combination of the two). Reading picture books similar to the one they are creating is a good way to prepare them for this activity. When it is time to write, students should have their list of vocabulary handy when they give you the sentence(s) for each page. You then write down what your students said correcting the grammar as you write (no need to point out to your students their grammatical mistakes at this point) and using correct spelling. 
Once all the pages have text on them that relates to the picture on that page, your students can assemble the book. Staples are a great go to, but you can use any binding option that works for you and your budget including three ring notebooks or a spiral binding. - 4 Read and Read Again Once your books are bound, your students have a reading reference that will interest them and be at their reading level. Because they chose the pictures for the book, they will be interested in the subject and, most likely, have a fair amount of previous knowledge about it. Because they gave you the sentences to write down, they will understand the grammar of what they are reading (even if you made corrections for them). Because the book is based on pictures, they will have a good picture reference for the vocabulary they learned in the book writing process. All these things combine to make an interesting and level appropriate book for your students to read. If you like, set aside a place in your classroom for these student-made books so their classmates can read them during their free time. If the exercise is particularly successful with your students, think about also setting aside space for a book making learning center. Students can choose pictures and assemble their books on their own and then come to you for help with vocabulary and the text for each page. Not every class activity has to be complicated. In this straightforward activity, students choose pictures that appeal to them and then take the steps to transform those pictures into the text of a book – a book that interests them and that they and their classmates can read independently. It’s fun and covers discussion, vocabulary, writing, and reading in one simple and fun activity. Have your students created their own picture books? What was the experience like for you and your class?
https://m.busyteacher.org/20757-cover-it-all-reading-writing-listening-speaking.html
- February 28, 2022 15 Sandwiches to Try Around D.C. Right Now Where to grab one of these handheld masterpieces
- November 18, 2020 17 Essential Fried Chicken Sandwiches Around D.C. Crave-worthy sandwiches come in classic styles or with Asian, Indian, and Ethiopian twists
- August 10, 2020 12 Luscious Lobster Rolls to Sample This Summer Get these rolls to-go
- April 10, 2019 18 Places for a Hot Pastrami Sandwich in D.C. From a traditional style on rye to sandwiches with more unconventional toppings
- January 2, 2019 Where to Go for Grilled Cheese and Tomato Soup in D.C. The winning pairing is perfect for winter weather
- December 4, 2018 Where to Find French Dip Sandwiches in D.C. Roast beef at its best
- July 30, 2018 Po'boys Are Hot — Here's Where to Find Them The savory Louisiana treat is alive and well in D.C.
- November 16, 2016 Shake Up the Sandwich Rotation with These Banh Mi Sandwiches Many are in Northern Virginia
- August 16, 2016 Tracking Down Tortas In D.C. Here's a map for finding the filling Mexican treat around town
- August 10, 2016 Celebrate the Summer Tomato with These Standout BLTs Most people love bacon, but the tomato is the important part
- February 17, 2016 Where To Score Monster Breakfast Sandwiches in D.C. More than just egg and cheese on toast.
- January 21, 2016 11 Places That Put it on a Pretzel Bun Pretzel buns put a new twist on sandwiches, burgers, and sausages.
- February 5, 2015 Ten Non-Traditional Takes on Banh Mi in D.C. From vegan takes to banh mi burritos, it seems every restaurant has its own take on the banh mi.
https://dc.eater.com/2016/9/19/12969542/best-sandwiches-dc-maps
Triple threat brownies
A classic dessert combining Oreos, chocolate chip cookies, and brownies to make a gluttonous trifecta.
Ingredients:
1 box brownie mix
1 roll cookie dough
1 box Oreos
2 eggs
¼ cup water
¾ cup oil
1 handful chocolate chips
Directions:
1. Preheat oven to 375°F.
2. Grease a baking dish and line the bottom with cookie dough.
3. Press a layer of Oreos on top of the cookie dough.
4. In a bowl, mix together brownie mix, eggs, water, chocolate chips, and oil.
5. Pour mixture on top of cookies.
6. Bake in oven for 45 minutes or until cooked through.
Cookies and cream cake
It's like cookies and cream ice cream, but in a super moist cake.
Ingredients:
1 box vanilla cake mix
3 eggs
1 pint melted vanilla ice cream
2 12 oz tubs vanilla frosting
1 box Oreos, crushed
Directions:
1. Preheat oven to 350°F, and blend together cake mix, eggs, and ice cream until thoroughly mixed.
2. Split cake batter between three greased 8 x 8 inch baking dishes.
3. Bake for 25 minutes or until cooked through, then place in refrigerator until cakes are cooled.
4. Remove cakes from pans.
5. Frost top of one cake and sprinkle with Oreo crumbs.
6. Stack second cake on top and repeat previous step.
7. Repeat with third cake.
8. Frost sides of cake. Cover with remaining Oreo crumbs.
Oreo fudge
Because no one has time for stoves and candy thermometers.
Ingredients:
1 cup vanilla cake mix
1 cup icing sugar
¼ cup butter
¼ cup milk
½ box Oreos, crushed
Directions:
1. Mix together cake mix and sugar in a microwave-safe bowl.
2. Add butter and milk, but do not stir.
3. Microwave for 2 minutes on high.
4. Immediately add crushed Oreos and stir.
5. Spread mixture into pan.
6. Refrigerate for 1 hour.
https://www.mcgilltribune.com/student-life/mind-blowing-baking-with-betty-duncan-and-mr-christie/
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 62/526,786, filed on Jun. 29, 2017, and entitled "DYNAMIC GATING FRAUD CONTROL SYSTEM," which application is incorporated herein by reference in its entirety.
BACKGROUND
Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been, and are being, developed in all shapes and sizes with varying capabilities. As such, many individuals and families alike have begun using multiple computer systems throughout a given day.
For instance, computer systems are now used in ecommerce and the like as individuals increasingly perform financial transactions such as making a purchase from various vendors over the Internet. In order to perform the financial transactions, the individuals are typically required to provide a payment instrument such as a credit card or bank account information such as a checking account to the vendor over the Internet. The vendor then uses the payment instrument to complete the transaction.
The process of providing the payment instrument over the Internet leaves the various merchants subject to loss from fraudulent transactions. For example, when a fraudulent payment instrument is used to purchase a product, the merchants often lose the costs associated with the product. This is often because the bank or financial institution that issues the payment instrument holds the merchants responsible for the loss, since it was the merchants who approved the transaction at a point of sale where the payment instrument is not present.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
BRIEF SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Embodiments herein are related to systems, methods, and computer-readable media for reducing the amount of data transactions that are subjected to further review when determining if the data transactions should be approved or rejected. In the embodiments, a risk score is determined for each of various data transactions. The risk scores define a first cutoff between data transactions that are approved and those subject to further review, and a second cutoff between data transactions that are rejected and those subject to further review. During a first time period, the first and second cutoffs are extended to increase the data transactions subject to further review and to create an opportunity group of data transactions, not previously approved, that are now approved. In addition, the rejection rate of the data transactions subject to further review is compared to a threshold.
When the rejection rate is no more than the threshold, during a second time period the volume of the data transactions subject to further review is minimized and a second opportunity group of data transactions, not previously approved, that are now approved is created. The rejection rate of the data transactions subject to further review is again compared to the threshold.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
DETAILED DESCRIPTION
Fraud attackers typically keep testing a computing system that is used to approve or reject data transactions to identify one or more patterns where data transactions can bypass risk checks (most of the time not knowing the exact reasons why). Once the fraud attackers have identified such patterns, they will place many orders using data transactions that mimic an approved data transaction to maximize their profit until the fraud pattern is identified and blocked.
There are two types of constant fraud attacks that it is desirable to prevent. The first pattern is a single account with multiple trials. The second pattern is where multiple accounts attack simultaneously. Fraud attackers will use the first pattern less frequently than the second, since this approach is less expensive but easily detected by risk systems.
Some risk systems may provide a risk score for various data transactions, rejecting the data transactions with the highest scores, providing further review for medium scores, and approving the lowest scores. The embodiments disclosed herein only reject the data transactions with the highest scores and randomly approve a subset of data transactions from medium-to-high score bands (which would be rejected by some risk systems), medium score bands (which would be reviewed by some risk systems), and low-to-medium score bands (which would be approved by some risk systems). This random approval may be used to help detect fraudulent attacks on the remaining data transactions. In some embodiments disclosed herein the approved subset may be referred to as an "opportunity group," as its purpose is to provide the opportunity of reducing false positives.
There are multiple advantages and technical effects for the embodiments disclosed herein, including:
Extending the review spectrum: Data transactions that are slightly riskier than the reviewed ones often have the highest false positive (FP) rate in some risk systems; on the other hand, the data transactions slightly less risky than the reviewed ones often have the highest false negative (FN) rate. By reviewing more of those transactions to get immediate labeling, the embodiments disclosed herein help decision accuracy and increase the chance of identifying current fraudulent attacks.
Equal or lower cost: different from some risk systems which review 100% of data transactions scored in the review band, the embodiments disclosed herein select a random sample; thus the cost can be set to be equal to or less than that of the other risk systems.
Higher profit with minimal exposure: the embodiments disclosed herein utilize the fact that the overall fraud rate is less than 1%. An ML model which can identify a subset of transactions with a 20% bad rate is already extremely hard to build; even so, rejecting that subset results in an 80% FP rate.
Lower false alarms by using real-time transactions as the baseline: many anomaly detection alarms are built on comparing historical and current distributions. This often results in many false alarms, since attributes shift a lot in various data transaction environments for various reasons such as sale events, software updates, etc. The embodiments disclosed herein do not compare distributions; instead, they use close-to-real-time review results or anomaly detection to trigger alarms, which are more accurate.
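The two-cutoff gating and the random "opportunity group" described above can be sketched roughly as follows. This is a simplified illustration, not the patented implementation: the cutoff values, the sampling rate, the single borderline band, and all function names are assumptions of mine.

```python
# Simplified two-cutoff gating with a random "opportunity group".
# Cutoffs, sampling rate, and names are illustrative assumptions only.
import random

APPROVE_CUTOFF = 0.40    # scores below this are approved outright
REJECT_CUTOFF = 0.90     # scores at or above this are rejected outright
OPPORTUNITY_RATE = 0.05  # fraction of borderline transactions randomly approved

def gate(transactions, rng=random.Random(0)):
    """Split scored transactions into approved / rejected / review / opportunity sets.

    `transactions` is an iterable of (tx_id, risk_score) pairs. A small random
    sample of transactions that would otherwise go to review is approved instead
    (the "opportunity group") so their true labels can be observed later.
    """
    approved, rejected, review, opportunity = [], [], [], []
    for tx_id, score in transactions:
        if score < APPROVE_CUTOFF:
            approved.append(tx_id)
        elif score >= REJECT_CUTOFF:
            rejected.append(tx_id)
        elif rng.random() < OPPORTUNITY_RATE:
            opportunity.append(tx_id)   # approved despite a borderline score
        else:
            review.append(tx_id)
    return approved, rejected, review, opportunity

def can_shrink_review_band(review_rejection_rate, threshold=0.5):
    """Per the summary above: if the rejection rate among reviewed transactions
    stays at or below the threshold, the review volume can be minimized in the
    next time period."""
    return review_rejection_rate <= threshold
```

In this simplified form only transactions between the two cutoffs are eligible for random approval; the summary above describes sampling across several score bands and adjusting the cutoffs between time periods based on the observed rejection rate.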
Higher profit with minimal exposure: The embodiments disclosed herein make use of the fact that the overall fraud rate is less than 1%. Building an ML model that can identify a subset of transactions with a 20% bad rate is already extremely hard; even then, rejecting that subset results in an 80% FP rate.

Lower false alarms by using real-time transactions as the baseline: Many anomaly detection alarms are built on comparing historical and current distributions. This often results in many false alarms, since attributes shift a lot in various data transaction environments for various reasons such as sale events, software updates, etc. The embodiments disclosed herein do not compare distributions; instead, they use close-to-real-time review results or anomaly detection to trigger alarms, which is more accurate.

Some introductory discussion of a computing system will be described with respect to FIG. 1. Computing systems are now increasingly taking a wide variety of forms. Computing systems may, for example, be handheld devices, appliances, laptop computers, desktop computers, mainframes, distributed computing systems, datacenters, or even devices that have not conventionally been considered a computing system, such as wearables (e.g., glasses). In this description and in the claims, the term “computing system” is defined broadly as including any device or system (or combination thereof) that includes at least one physical and tangible processor, and a physical and tangible memory capable of having thereon computer-executable instructions that may be executed by a processor. The memory may take any form and may depend on the nature and form of the computing system. A computing system may be distributed over a network environment and may include multiple constituent computing systems.

As illustrated in FIG. 1, in its most basic configuration, a computing system 100 typically includes at least one hardware processing unit 102 and memory 104. The memory 104 may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term “memory” may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computing system is distributed, the processing, memory and/or storage capability may be distributed as well.

The computing system 100 also has thereon multiple structures often referred to as an “executable component”. For instance, the memory 104 of the computing system 100 is illustrated as including executable component 106. The term “executable component” is the name for a structure that is well understood to one of ordinary skill in the art in the field of computing as being a structure that can be software, hardware, or a combination thereof. For instance, when implemented in software, one of ordinary skill in the art would understand that the structure of an executable component may include software objects, routines, methods, and so forth, that may be executed on the computing system, whether such an executable component exists in the heap of a computing system, or whether the executable component exists on computer-readable storage media. In such a case, one of ordinary skill in the art will recognize that the structure of the executable component exists on a computer-readable medium such that, when interpreted by one or more processors of a computing system (e.g., by a processor thread), the computing system is caused to perform a function.
Such structure may be computer-readable directly by the processors (as is the case if the executable component were binary). Alternatively, the structure may be structured to be interpretable and/or compiled (whether in a single stage or in multiple stages) so as to generate such binary that is directly interpretable by the processors. Such an understanding of example structures of an executable component is well within the understanding of one of ordinary skill in the art of computing when using the term “executable component”. The term “executable component” is also well understood by one of ordinary skill as including structures that are implemented exclusively or near-exclusively in hardware, such as within a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or any other specialized circuit. Accordingly, the term “executable component” is a term for a structure that is well understood by those of ordinary skill in the art of computing, whether implemented in software, hardware, or a combination. In this description, the terms “component”, “agent”, “manager”, “service”, “engine”, “module”, “virtual machine” or the like may also be used. As used in this description and in the case, these terms (whether expressed with or without a modifying clause) are also intended to be synonymous with the term “executable component”, and thus also have a structure that is well understood by those of ordinary skill in the art of computing. In the description that follows, embodiments are described with reference to acts that are performed by one or more computing systems. If such acts are implemented in software, one or more processors (of the associated computing system that performs the act) direct the operation of the computing system in response to having executed computer-executable instructions that constitute an executable component. For example, such computer-executable instructions may be embodied on one or more computer-readable media that form a computer program product. An example of such an operation involves the manipulation of data. 104 100 100 108 100 110 The computer-executable instructions (and the manipulated data) may be stored in the memory of the computing system . Computing system may also contain communication channels that allow the computing system to communicate with other computing systems over, for example, network . 100 112 112 112 112 112 112 112 112 While not all computing systems require a user interface, in some embodiments, the computing system includes a user interface system for use in interfacing with a user. The user interface system may include output mechanisms A as well as input mechanisms B. The principles described herein are not limited to the precise output mechanisms A or input mechanisms B as such will depend on the nature of the device. However, output mechanisms A might include, for instance, speakers, displays, tactile output, holograms and so forth. Examples of input mechanisms B might include, for instance, microphones, touchscreens, holograms, cameras, keyboards, mouse of other pointer input, sensors of any type, and so forth. Embodiments described herein may comprise or utilize a special purpose or general-purpose computing system including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. 
Embodiments described herein also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computing system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: storage media and transmission media. Computer-readable storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other physical and tangible storage medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computing system. A “network” is defined as one or more data links that enable the transport of electronic data between computing systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computing system, the computing system properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computing system. Combinations of the above should also be included within the scope of computer-readable media. Further, upon reaching various computing system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computing system RAM and/or to less volatile storage media at a computing system. Thus, it should be understood that storage media can be included in computing system components that also (or even primarily) utilize transmission media. Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computing system, special purpose computing system, or special purpose processing device to perform a certain function or group of functions. Alternatively or in addition, the computer-executable instructions may configure the computing system to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries or even instructions that undergo some translation (such as compilation) before direct execution by the processors, such as intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. 
Rather, the described features and acts are disclosed as example forms of implementing the claims. Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computing system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, datacenters, wearables (such as glasses) and the like. The invention may also be practiced in distributed system environments where local and remote computing systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Those skilled in the art will also appreciate that the invention may be practiced in a cloud computing environment. Cloud computing environments may be distributed, although this is not required. When distributed, cloud computing environments may be distributed internationally within an organization and/or have components possessed across multiple organizations. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). The definition of “cloud computing” is not limited to any of the other numerous advantages that can be obtained from such a model when properly deployed. FIG. 2 FIG. 2 200 100 200 200 200 200 200 102 104 Attention is now given to , which illustrates an embodiment of a computing system , which may correspond to the computing system previously described. The computing system includes various components or functional blocks that may implement the various embodiments disclosed herein as will be explained. The various components or functional blocks of computing system may be implemented on a local computing system or may be implemented on a distributed computing system that includes elements resident in the cloud or that implement aspects of cloud computing. The various components or functional blocks of the computing system may be implemented as software, hardware, or a combination of software and hardware. The computing system may include more or less than the components illustrated in and some of the components may be combined as circumstances warrant. Although not necessarily illustrated, the various components of the computing system may access and/or utilize a processor and memory, such as processor and memory , as needed to perform their various functions. FIG. 2 200 210 210 201 202 203 204 205 200 201 211 202 212 203 213 214 204 215 205 211 215 As shown in , the computing system may include a transaction entry module . In operation, the transaction module may receive input from multiple users , , , , and any number of additional users as illustrated by the ellipses to initiate a data transaction that is performed by the computing system . For example, the user may initiate a data transaction , the user may initiate a data transaction , the user may initiate a data transaction , and the user may initiate a data transaction . 
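For concreteness, here is a minimal sketch of the kind of record that such a data transaction might be represented by when the transaction entry module hands it to later components. The field names are assumptions; the specification only says that identifying and payment-related (or research-related) details are gathered.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class DataTransaction:
    """One incoming data transaction (e.g. a purchase or a research record).

    `attributes` holds whatever the entry module gathered: amount, payment
    instrument details, location, participant information, and so on."""
    transaction_id: str
    user_id: str
    attributes: Dict[str, Any] = field(default_factory=dict)


# Example: a purchase initiated by one user and handed to the entry module.
txn = DataTransaction("txn-211", "user-201", {"amount": 199.0, "payment": "card"})
```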
The ellipses represent any number of additional data transactions that can be initiated by one or more of the users . Of course, it will be noted that in some embodiments a single user or a number of users less than is illustrated may initiate more than one of the transactions -. 211 215 211 215 211 215 211 215 The data transactions - may represent various data transactions. For example, as will be explained in more detail to follow, the data transactions - may be purchase or other financial transactions. In another embodiments, the transactions - may be transactions related to clinical or scientific research results. In still, other embodiments, the transactions - may be any type of transaction that is subject to fraud and is thus able to be characterized as being properly accepted, improperly accepted, properly rejected, or improperly rejected as a result of the fraud. Accordingly, the embodiments disclosed herein are not related to any type of data transactions. Thus, the embodiments disclosed herein relate to more than purchase or financial transactions and should not be limited or analyzed as only being related to purchase or financial transactions. 210 211 215 211 215 210 210 210 The transaction entry module may receive or determine information about each of the data transactions -. For example, if the data transactions - are purchase or other financial transactions, then the transaction entry module may determine personal information about the user, payment information such as a credit or debit card number, and perhaps the product that is being purchased. If the data transactions are clinical or scientific research data transactions, then the data transaction entry module may determine identifying information about the research such as participant information and result information. The transaction entry module may receive or determine other information about other types of data transactions as circumstances warrant. 200 220 220 211 215 221 211 222 212 223 213 224 214 215 221 224 211 215 The computing system also includes a risk score module . In operation, the risk score module may determine a risk score for each of the data transactions -. For example, the score module may determine a risk score for the data transaction , a risk score for the data transaction , a risk score for the data transaction , and a risk score for the data transaction . The risk score module may also determine a risk score for each of the additional data transactions . As will be explained in more detail to follow, the risk scores - may specify if each of the data transactions - is to be approved (i.e., the data transactions are performed or completed), if the transactions are to be rejected (i.e., the data transactions are not completed or performed) or if further review is needed to determine if the data transaction should be approved or rejected. FIG. 2 230 230 200 In some embodiments, the decision analysis may be based at least in part on one or more risk parameters that are related to the data transactions. For example, as illustrated in , the computing system may include a risk parameter store . Although shown as being an independent, the risk parameter store may be part of another element of the computing system . 230 235 235 235 235 235 a b c d As shown, the risk store may include a first risk parameter , a second risk parameter , a third risk parameter , and any number of additional risk parameters as illustrated by the ellipses . 
The risk parameters may be also be referred to hereinafter as risk parameters . 235 235 235 235 235 a b c d In the embodiment related to the purchase or other financial transaction, the risk parameters may be related to the product or service being purchased and to the purchaser of the product and service. For example, the first risk parameter may specify a purchase price for the product or service, the second risk parameter may specify the past payment history of the purchaser of the product and service, and a third risk parameter may specify a profit margin for each transaction for the seller of the product or service. Other risk parameters such as location of the data transaction may also be used. As will be appreciated, the various risk parameters may be those parameters that would likely indicate how trustworthy the purchaser of the product and service is and how much harm the seller of the product or service would suffer is the transaction were fraudulent as these types of parameters are relevant to risk. 235 220 235 220 In the embodiment related to the to the clinical or scientific research results, the risk parameters may specify the amount of error that is acceptable, the research goals, and other relevant factors. These may be used by the risk score module as needed. In other embodiments, various other risk parameters may be used as needed by the risk score module . 220 220 210 235 235 210 220 In some embodiments, the risk score module may perform a decision analysis on each of the data transactions when assigning the risk score to the data transaction. This decision analysis may be based on various factors that are indicative of whether a data transaction should be approved or rejected. For example, if data transaction is the purchase or other financial transaction, the factors may be related to risk analysis. For instance, the risk score module may determine based on the information determined by the data transaction entry module or by one or more of the risk parameters that a purchase or other financial transaction is likely to be a fraudulent transaction and so a risk score that indicates the data transaction should be rejected may be assigned. Alternatively, the one or more of the risk parameters or the information determined by the data transaction entry module may cause the risk score module to determine that the purchase or other financial transaction is likely to be a good transaction and so a risk score that indicates that the data transaction should be approved may be assigned. 235 210 220 235 210 Alternatively, the one or more of the risk parameters or the information determined by the data transaction entry module may cause the risk score module to determine that the purchase or other financial transaction requires further review to determine if it should be approved or rejected and so a risk score that indicates that the data transaction should be subjected to further review may be assigned. It will be noted that such a risk score indicates that the one or more of the risk parameters or the information determined by the data transaction entry module is not conclusive as to whether the transaction should be accepted or rejected. For example, the past payment history of buyer may be suspect, the buyer may be located in a location that is well known for including fraudulent activity, or the purchase price of an item related to the data transaction may be high, thus subjecting the seller to greater risk. There may be other reasons why a review score may be assigned. 
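The specification leaves the actual scoring method open, so the following is only a toy heuristic showing how risk parameters such as purchase price, payment history, and location could be combined into a 1-100 score and mapped to the approve / review / reject bands. Every weight is invented for illustration, and the 60/80 cutoffs are simply the example values used in the FIG. 3A discussion that follows.

```python
def risk_score(attributes, high_value=1000.0):
    """Toy 1-100 risk score built from a few illustrative risk parameters."""
    score = 50.0
    # Higher purchase price -> bigger loss if the transaction is fraudulent.
    score += 20.0 * min(attributes.get("amount", 0.0) / high_value, 1.0)
    # A good past payment history lowers the score.
    score -= 25.0 if attributes.get("good_payment_history") else 0.0
    # Transactions from locations associated with fraud raise the score.
    score += 15.0 if attributes.get("risky_location") else 0.0
    return max(1.0, min(100.0, score))


def classify(score, approve_cutoff=60, reject_cutoff=80):
    """Map a score to the approve / review / reject bands."""
    if score <= approve_cutoff:
        return "approve"
    if score > reject_cutoff:
        return "reject"
    return "review"


# A modest purchase with a good payment history lands in the approve band.
print(classify(risk_score({"amount": 50.0, "good_payment_history": True})))
```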
As will be described in more detail to follow, the review risk score will cause that further review of the data transaction be performed before the final decision is made. In this way, there are further protections from data transactions that have a higher chance of being fraudulent, but do not rise to the level where they should be outright rejected. 220 220 If the data transaction is related to the clinical or scientific research results, the factors may be related to what type of errors have occurred. For example, in many research embodiments, there are Type I errors and Type II errors. The risk score may accept a certain percentage of Type I errors and reject the rest and may also accept a certain percentage of Type II errors and reject the rest based on the determined risk score. In embodiments related to other types of data transactions, the risk may use other factors as circumstances warrant. 220 211 214 220 220 211 214 221 224 310 330 320 60 80 FIG. 3A FIG. 3A Once the risk score module has determined a risk score for each of the data transactions -, the risk score module may determine if a data transaction should be approved, rejected or subjected to further review based on the risk score. That is, the risk score module may set or otherwise determine a risk score cutoff or boundary for risk scores that will be approved, risk scores that will rejected, and risk scores that will be subjected to further review. For example, shows risk scores from 1 to 100. As illustrated, those data transactions (i.e., data transactions -) that are assigned a risk score (i.e., risk scores -) between 1 and 60 are included in an approve portion and those data transactions that are assigned a risk score from 80 to 100 are included in a rejected portion . However, those data transactions having a risk score between 60 and 80 are included in a review portion that are to be subjected to further review. Thus, in the embodiment the risk score is a first cutoff or boundary and the risk score is a second cutoff or boundary. It will be noted the is only one example of possible risk scores and should not be used to limit the embodiments disclosed herein. It FIG. 2 FIG. 3A FIG. 3A 225 211 212 221 222 221 222 200 213 226 223 223 200 As illustrated in , it is shown at that the data transactions and have been approved because the risk scores and were in the risk score band that should be approved. For instance, in relation to the embodiment of the risk scores and may be between 1 and 60. Accordingly, the data transactions are able to be completed by the computing system . The data transaction , on the other hand, is shown at as being rejected because the risk score was in the risk score band that should be rejected. For instance, in relation to the embodiment of the risk score was between 80 and 100. Accordingly, the data transaction is not completed by the computing system . FIG. 2 FIG. 3A 227 214 224 224 200 240 240 240 214 245 245 245 245 245 245 a b c d As further shown in at , the data transaction has been marked for further review because the risk score was in the risk score band that should be subjected to further review. For instance, in relation to the embodiment of the risk score was between 60 and 80. Accordingly, the computing system may also include a review module , which may be a computing entity or a human entity that utilizes the review module . 
In operation, the review module receives the data transaction and performs further review on the data transaction to determine if the data transaction should be accepted or rejected. For example, the review module may apply one or more additional review criteria , , , and any number of additional review criteria as illustrated by ellipses (hereinafter also referred to “additional review criteria ”). In some embodiments the additional review criteria may be to review of social media accounts of the initiator of the data transaction, review and/or contact of third parties associated with the initiator of the data transaction, contact with a credit card company that issues a credit card associated with the initiator of the data transaction, or direct contact with the initiator of the data transaction through a phone call, SMS, email, or other real time (or near real time) forms of communication. It will be appreciated that there may be other types of additional review criteria. 245 214 214 214 214 220 214 225 226 Based on the results of the additional review criteria , the review module may determine if the data transaction should be accepted or rejected. For example, if the additional review criteria indicate that it is likely that the data transition is a valid, non-fraudulent transaction, then the data transaction may be approved. On the other hand, if the additional review criteria indicate that it is likely that the data transition is a fraudulent transaction, the data transaction may be rejected. The determination of the review module may be provided to the risk score module and the data transaction may be added to the approved data transactions and allowed to be completed or added to the rejected data transactions and not allowed to be completed. 240 246 320 246 246 220 The review module may keep track of a rejection rate that specifies how many of the data transactions in the review portion are rejected. For example, in one embodiment around 80% of the data transactions in the review portion during a given time period are ultimately approved and so the rejection rate may be 20%. As will be explained in more detail to follow, the rejection rate may increase during the given period during fraudulent attacks on the computing system and such increase may be used as an indication of the fraudulent attack. 240 245 320 320 As may be appreciated after reading this specification, the further review performed by the review module may be time consuming and costly in terms of system resources and actual costs, especially when the review module includes humans to perform one or more of the additional review criteria . This is especially true in many embodiments where the large majority of the data transactions in the review portion are ultimately approved. For example, as explained above around 80% of the data transactions in the review portion may ultimately be approved. 220 321 320 310 321 240 330 331 320 FIG. 3A It will also be appreciated after reading this specification that the risk score module may make mistakes when assigning a risk score. For example, as shown in , a portion of the review portion closest to the cutoff with the approved portion is likely to include a large number of data transactions that will ultimately be approved. However, by assigning them a risk score over 60, the data transactions in the portion are subjected to the review process. Of those reviewed, some may be improperly rejected by the review module , which is called a false positive. 
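A small sketch of the rejection-rate bookkeeping just described may help; the class name and the running-counter approach are assumptions, and the 20% figure is simply the example number from the text.

```python
class ReviewStats:
    """Running tally of how many reviewed transactions end up rejected."""

    def __init__(self):
        self.reviewed = 0
        self.rejected = 0

    def record(self, was_rejected):
        self.reviewed += 1
        if was_rejected:
            self.rejected += 1

    @property
    def rejection_rate(self):
        return self.rejected / self.reviewed if self.reviewed else 0.0


stats = ReviewStats()
for outcome in [False] * 80 + [True] * 20:   # roughly 80% ultimately approved
    stats.record(outcome)
print(stats.rejection_rate)   # 0.2 -- the 20% rejection rate used in the text
```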
In addition, the reject portion may include a portion closest to the cutoff with the review portion that may include a large number of false positives since these transactions were assigned a risk score above 80, which resulted in their automatic rejection. As may be appreciated, false positives may be viewed as a lost benefit as any value they may have provided is lost when they are rejected. 240 Accordingly, it may be desirable to reduce the number of data transactions that are subjected to review by the review module and to increase the number of data transactions that are approved so as to reduce the number of false positives. However, care should still be given to ensure that by reducing the number of data transactions that are subjected to further review, the number of false negatives (e.g., those data transactions that should have been rejected, but were instead allowed) is not increased. In addition, care should be given to guard against fraudulent activity that may take advantage of any reduction in review and increase in approval rate. Advantageously, the embodiments disclosed herein provide for a dynamic gate that causes such reduction in number of data transactions being reviewed, increase the number of data transactions being allowed, and guard against increased fraud as a result as will now be explained. FIG. 2 200 250 250 310 320 320 330 250 240 250 As shown in , the computing system includes a dynamic gating module . In operation, the dynamic gating module is able to open a dynamic gate for a given period of time that extends the cutoff between the approved portion and the review portion and the cutoff between the review portion and the rejected portion to thereby potentially increase the number of data transactions that are allowed. In addition, while the dynamic gate is open, the dynamic gating module is further able to reduce the volume of the data transactions that are subjected to the further review by the review module thereby potentially reducing the further review costs. Advantageously, the dynamic gating module is able detect or measure any potential increase in fraudulent activity that may occur as a result of opening the dynamic gate and close the gate in response. 250 251 251 220 310 320 320 330 211 215 250 252 220 320 As shown, the dynamic gating module includes a score extender module . In operation, the score extender module instructs the risk score module to extend the cutoff between the approved portion and the review portion and the cutoff between the review portion and the rejected portion in one or more iterations while the dynamic gate is open so that a larger number of the data transactions - may be approved without any further review. The dynamic gating module also includes a review minimize module that in operation instructs the risk score module to minimize or lower the volume or amount of the data transactions in the review portion that are reviewed in one or more iterations while the dynamic gate is open. 250 253 251 252 320 320 310 The dynamic gating module may also include an opportunity group module . As may be appreciated, when the score extender module and/or the review minimize module are in operation, the review portion may be modified so that some of the data transactions in the review portion that were previously subjected to further review will now be approved and allowed to be completed like the data transactions in the approved portion . 
In addition, some of the data transactions that were previously rejected will now be subject to further review, which may lead to at least some of these transactions being approved after the further review, which may decrease the number of false positives. Further, some of the data transactions that were previously approved will now be subject to the further review. However, it may assumed that most of these will ultimately be allowed by the further review while those that are not should not have been approved in the first place, thus lowering the number of false negatives without affecting the approval rate. 253 320 253 253 253 253 254 253 253 253 a b c d a d In order to help prevent fraudulent attacks or activity from taking advantage of the dynamic gate being opened, the opportunity group module may randomly divide the data transactions in the modified review portion into various opportunity groups , , , and any number of additional opportunity groups as illustrated by the ellipses . When additional data transactions from the review portion are to be approved, a randomizer of the opportunity group module may randomly select one of the opportunity groups -to be the allowed data transactions. In this way, any fraudulent attackers may have a more difficult time determining that the dynamic gate is opened based on the identity of the data transactions that are now being allowed. 250 255 250 246 240 246 255 250 256 250 256 The dynamic gating module may also include a threshold . In operation, the dynamic gating module may monitor the rejection module rate of the review module . When the rejection rate exceeds the threshold , the dynamic gating module may infer that a fraudulent attack in underway and may close the dynamic gate in response. A time module may also be included that allows the dynamic gating module to set a time period or window in which to operate the dynamic gate. The time period may be any reasonable length. In some embodiments, the time period may be a few minutes, 1 or 2 hours, a longer number of hours, a day, or even a number of days. As may be appreciated, the length of the time period set by the time module may be determined by how likely a fraudulent attack is to occur. For example, if it is likely that perpetrators of the fraudulent attack will discover the dynamic window in a short period of time, then the time period should be set to be set to be smaller while if there is less chance of the dynamic window being discovered, then the time period may be set for a longer period of time. It will also be appreciated that for those embodiments where there are multiple time periods, each time period may be the same or they may be different. 250 301 310 320 330 211 212 310 214 320 213 330 FIG. 3B FIG. 3B FIG. 3A The operation of the dynamic gating module will now be explained in relation to . shows that at a first time period , the approved portion , the review portion , and the reject portion are the same as that described in relation to . That is, those data transactions having a risk score between 1 and 60 (i.e. data transactions and ) are included in the approved portion , those data transactions having a risk score between 60 and 80 (i.e., data transaction ) are included in the review portion , and those data transactions having a risk score between 80 and 100 (i.e. data transaction ) are included in the reject portion ). 
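To illustrate the random split of the modified review portion into opportunity groups and the threshold check that guards the gate, here is a minimal sketch. All names are assumptions; nine groups echoes the FIG. 4 example, and the 20% threshold reuses the earlier rejection-rate example rather than any claimed value.

```python
import random

def split_into_opportunity_groups(review_portion, n_groups=9, seed=0):
    """Randomly partition the (modified) review portion into n_groups groups."""
    rng = random.Random(seed)
    shuffled = list(review_portion)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]


def gate_may_stay_open(rejection_rate, threshold=0.20):
    """Keep the dynamic gate open only while the rejection rate of the
    reviewed transactions stays at or below the threshold."""
    return rejection_rate <= threshold


groups = split_into_opportunity_groups(range(30))
chosen = random.Random(1).choice(groups)     # the randomizer picks one group to approve
print(len(groups), len(chosen), gate_may_stay_open(0.18))
```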
301 250 250 246 255 246 255 250 200 255 During the first time period , which may be a time period of normal operation prior to the opening of the dynamic gate, the dynamic gate module may determine to begin operation of a dynamic gate. This determination may be based on a predetermined schedule or it may be based on operational factors that are determined in real time. The dynamic gate module may monitor the rejection rate to determine if it is no more than the threshold . If the rejection rate is no more than the threshold , for example no more than 20% as described above, than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may open a dynamic gate. On the other hand, if the rejection rate is more than the threshold , the dynamic gate module will not open the dynamic gate as there is a chance that a fraudulent attack is currently occurring. 302 250 302 256 302 251 220 310 320 320 330 320 240 253 340 253 253 320 320 340 310 320 340 302 341 a d If there is no fraudulent attack currently occurring, during a time period , which may be considered a first iteration of a dynamic gate, the dynamic gate may open the dynamic gate. As mentioned above, the time period may be determined by the time module . During the time period , the score extender module may direct the risk score module to extend the cutoff between the approved portion and the review portion to be 55 and the cutoff between the review portion and the rejected portion to be 85. The result of this will be that the review portion will now include some data transaction with a score of 55 to 60 and 80 to 85 that were previously not reviewed. Because the review module may have a set amount resources for performing the further review, the opportunity group module may create an opportunity group , which may correspond to one or more of the opportunity groups -, that equals the amount of data transactions that have been added to the review portion so that the total amount or volume of data transactions in the review portion remains constant. The data transaction in the opportunity group may then become part of the approved portion and may be allowed to complete. In this way, additional data transactions that may have been rejected or subjected to review will be allowed while the overall review load stays constant. It will be appreciated that the extension of the review portion and the approval of the opportunity group during the time period may be considered an example of the dynamic gate as shown at . 250 246 302 255 246 255 250 200 341 302 255 250 250 341 301 The dynamic gate module may monitor the rejection rate during the time period to determine if it is no more than the threshold . If the rejection rate is no more than the threshold than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may keep the dynamic gate open during the entire time period . On the other hand, if the rejection rate is more than the threshold , the dynamic gate module may infer a fraudulent attack. In such case, the dynamic gate module may close the dynamic gate by returning the system to the state of the initial time period . 303 250 303 256 303 251 220 310 320 320 330 320 253 350 253 253 340 320 303 320 350 340 340 350 340 350 310 320 350 303 351 a d FIG. 3B If there is no fraudulent attack currently occurring, during a time period , which may be considered a second iteration of a dynamic gate, the dynamic gate may extend the dynamic gate. 
As mentioned above, the time period may be determined by the time module . During the time period , the score extender module may direct the risk score module to extend the cutoff between the approved portion and the review portion to be 50 and the cutoff between the review portion and the rejected portion to be 90. The result of this will be that the review portion will now include some data transaction with a score of 50 to 60 and 80-90 that were previously not reviewed. In order to keep the review load constant, the opportunity group module may create an opportunity group , which may correspond to one or more of the opportunity groups -, that, in addition to the opportunity group already being approved, equals the amount of data transactions that have been added to the review portion during the time period so that the amount or volume of the data transactions in the review portion remains constant. As shown in , the size of the opportunity group is larger than the opportunity group since it includes the opportunity group . In one embodiment, the opportunity group may be 10% larger than the opportunity group . The data transactions in the opportunity group may then become part of the approved portion and may be allowed to complete. It will be appreciated that the extension of the review portion and the approval of the opportunity group during the time period may be considered an example of the dynamic gate as shown at . 250 246 303 255 246 255 250 200 351 303 255 250 250 351 301 The dynamic gate module may monitor the rejection rate during the time period to determine if it is no more than the threshold . If the rejection rate is no more than the threshold than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may keep the dynamic gate open during the entire time period . On the other hand, if the rejection rate is more than the threshold , the dynamic gate module may infer a fraudulent attack. In such case, the dynamic gate module may close the dynamic gate by returning the system to the state of the initial time period . 304 250 304 256 304 320 303 304 246 If there is no fraudulent attack currently occurring, during a time period , which may be considered a third iteration of a dynamic gate, the dynamic gate may again implement the dynamic gate. As mentioned above, the time period may be determined by the time module . During the time period , the lower cutoff and the higher cutoff of the review portion are not extended, but remain the same as during the time period . As may be appreciated, it may be undesirable to extend these cutoffs more during the time period as this would result in a higher number of data transaction that should be approved being subject to the further review on the low end and on the high end it would result in many data transactions being reviewed that should be rejected. Since it may be assumed that these transactions would be rejected by the further review, this may result in the rejection rate being artificially raised. 304 320 252 302 303 304 230 230 304 250 Accordingly, during the time period the amount or volume of the data transactions in the review portion is lowered by the review minimize module to be less than the amount or volume during the time periods and . In other words, during the time period the review load of the review module is not kept constant, but is lowered, thus potentially reducing costs associated with the further review done by the review module . 
This may be possible because by the time period occurs, the dynamic gate module may be satisfied that there is no fraudulent attack occurring and so there is less need for further review of those data transactions in the extended cutoff regions. 253 360 253 253 340 350 320 360 320 320 360 304 361 a d At the same time, the opportunity group module may create an opportunity group , which may correspond to one or more of the opportunity groups -, that also includes the opportunity groups and and is thus larger than those opportunity groups. Since the volume of the review portion has been lowered, the opportunity group may include a larger number of data transactions than those that are part of the review portion . It will be appreciated that the lowering of the volume of the review portion and the approval of the opportunity group during the time period may be considered an example of the dynamic gate as shown at . 250 246 304 255 246 255 250 200 361 304 255 250 250 361 301 The dynamic gate module may continue to monitor the rejection rate during the time period to determine if it is no more than the threshold . If the rejection rate is no more than the threshold than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may keep the dynamic gate open during the entire time period . On the other hand, if the rejection rate is more than the threshold , the dynamic gate module may infer a fraudulent attack. In such case, the dynamic gate module may close the dynamic gate by returning the system to the state of the initial time period . 305 250 305 256 305 320 252 304 320 305 301 If there is no fraudulent attack currently occurring, during a time period , which may be considered a fourth iteration of a dynamic gate, the dynamic gate may again implement the dynamic gate. As mentioned above, the time period may be determined by the time module . During the time period , the volume or amount of the data transactions in the review portion is again lowered by the review minimize module to be less than the volume during the time period . In one embodiment, the volume of the review portion during the time period is about 10% of the volume during the initial state of time period . 253 370 253 253 340 350 360 320 370 320 320 370 305 371 a d The opportunity group module may create an opportunity group , which may correspond to one or more of the opportunity groups -, that also includes the opportunity groups , and and is thus larger than those opportunity groups. Since the volume of the review portion has been lowered, the opportunity group may include a larger number of data transactions than those that are part of the review portion . It will be appreciated that the lowering of the volume of the review portion and the approval of the opportunity group during the time period may be considered an example of the dynamic gate as shown at . 250 246 305 255 246 255 250 200 371 305 255 250 250 371 301 250 301 305 FIG. 3B The dynamic gate module may continue to monitor the rejection rate during the time period to determine if it is no more than the threshold . If the rejection rate is no more than the threshold than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may keep the dynamic gate open during the entire time period . On the other hand, if the rejection rate is more than the threshold , the dynamic gate module may infer a fraudulent attack. 
In such case, the dynamic gate module may close the dynamic gate by returning the system to the state of the initial time period . In addition, to further prevent fraudulent attacks, the dynamic gate module may return to the state of the initial time period upon completion of the time period even in the absence of a fraudulent attack. The process of may then be repeated at different time periods as needed to reduce the review volume over time and to expose more data transactions to approval that may otherwise be rejected while preventing fraudulent attackers from exploiting the dynamic gate. 253 254 254 253 253 253 253 254 1 9 a d a d FIG. 4 As mentioned above, the opportunity group module may include the randomizer . In one embodiment, the randomizer may be used to further randomize the selection of the opportunity groups -. For example, illustrates an embodiment where the opportunity groups -include nine opportunity groups labeled as groups 1-9. In the embodiment, the randomizer has randomly assigned each of the opportunity groups 1-9 to a random time period D-D over a three day window where the opportunity group may be used during the dynamic gate. In this way, the opportunity groups are only used during the randomly assigned time periods, which may make it harder for fraudulent attackers to learn when a given opportunity group will be utilized in the dynamic window The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed. FIG. 5 FIGS. 2-4 500 500 illustrates a flow chart of an example method for reducing an amount of data transactions that are subjected to further review when determining if the data transactions should be approved or rejected. The method will be described with respect to one or more of discussed previously. 500 510 220 221 224 211 214 225 310 227 320 227 320 226 330 FIG. 3B The method includes determining risk scores for a plurality of data transactions (act ). The risk scores define a first cutoff between a first portion of the plurality of data transactions that are to be approved and a second portion of the plurality of data transactions that are to be subjected to further review and a second cutoff between the second portion and a third portion of the plurality of data transactions that are to be that are to be rejected. For example, as previously described the risk score module may determine the risk scores - for the data transactions -. The risk scores may define a first cutoff between an approved portion , of data transactions that are to be approved and a review portion , of data transactions that are to be subjected to further review. A second cutoff between the review portion , and reject portion , of data transactions that are to be rejected may also be defined. In the embodiment of , the first cutoff is 60 and the second cutoff is 80. 500 520 251 220 310 320 320 330 211 215 The method includes, during a first time period, extending the first and second cutoffs such that the second portion includes some of the data transactions of the first portion and third portion that were not previously included in the second portion (act ). 
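As a sketch of the FIG. 4 idea of assigning each opportunity group to a randomly chosen time slot, the following assumes nine groups and nine slots spread over a three-day window, per the example just given; the slot labels and granularity are assumptions.

```python
import random

def schedule_opportunity_groups(n_groups=9, slots=None, seed=0):
    """Assign each opportunity group to one randomly chosen time slot
    (e.g. nine slots spread over a three-day window), so attackers cannot
    predict when a given group will be in use."""
    rng = random.Random(seed)
    slots = list(slots) if slots is not None else [f"D{i}" for i in range(1, 10)]
    rng.shuffle(slots)
    return {f"group-{g + 1}": slot for g, slot in enumerate(slots[:n_groups])}


print(schedule_opportunity_groups())
# e.g. {'group-1': 'D3', 'group-2': 'D7', ...} -- one random slot per group
```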
For example, as previously described the score extender module may cause the risk score module to extend the cutoff between the approved portion and the review portion and the cutoff between the review portion and the rejected portion in one or more iterations while the dynamic gate is open so that a larger number of the data transactions - may be approved without any further review. 500 530 253 302 253 340 320 330 The method includes during the first time period, creating a first opportunity group of data transactions that are to be approved (act ). The first opportunity group may include at least some data transactions that were part of the second and third portions. For example, as previously described the opportunity group module may during a first time period create an opportunity group such as the opportunity group A or . As discussed, the opportunity group may include some of the data transactions that were previously part of the review portion and the reject portion . 500 540 250 246 302 255 246 255 250 200 341 302 255 250 250 341 301 The method may include, during the first time period, comparing a rejection rate of the data transactions included in the second portion to a threshold to determine if the rejection rate is no more than the threshold (act ). As previously described the dynamic gate module may monitor the rejection rate during the time period to determine if it is no more than the threshold . If the rejection rate is no more than the threshold than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may keep the dynamic gate open during the entire time period . On the other hand, if the rejection rate is more than the threshold , the dynamic gate module may infer a fraudulent attack. In such case, the dynamic gate module may close the dynamic gate by returning the system to the state of the initial time period . 500 550 252 304 320 The method includes, in response to determining that the rejection rate is no more than the threshold, during a second time period, minimizing a volume of the second portion by removing some of data transactions from the second portion that were included during the first time period (act ). For example, as previously described the review minimize module may, during the second time period , lower the volume or amount of data transactions in the review portion . 500 560 253 304 253 360 320 330 302 340 The method includes, during the second time period, creating a second opportunity group of data transactions that are to be approved (act ). The second opportunity group includes at least some data transactions that were part of the second and third portions during the first time period. The second opportunity group is larger than the first opportunity group. For example, as previously described the opportunity group module may during the second time period create an opportunity group such as the opportunity group B or . As discussed, the opportunity group may include some of the data transactions that were previously part of the review portion and the reject portion during the time period and may be larger than the opportunity group . 500 570 250 246 304 255 246 255 250 200 361 304 255 250 250 361 301 The method includes during the second time period, comparing the rejection rate of the data transactions included in the second portion to the threshold to determine if the rejection rate is no more than the threshold (act ). 
For example, as previously described the dynamic gate module may continue to monitor the rejection rate during the time period to determine if it is no more than the threshold . If the rejection rate is no more than the threshold than the dynamic gate module may infer that the computing system is not currently under a fraudulent attack and may keep the dynamic gate open during the entire time period . On the other hand, if the rejection rate is more than the threshold , the dynamic gate module may infer a fraudulent attack. In such case, the dynamic gate module may close the dynamic gate by returning the system to the state of the initial time period . For the processes and methods disclosed herein, the operations performed in the processes and methods may be implemented in differing order. Furthermore, the outlined operations are only provided as examples, and some of the operations may be optional, combined into fewer steps and operations, supplemented with further operations, or expanded into additional operations without detracting from the essence of the disclosed embodiments. The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. BRIEF DESCRIPTION OF THE DRAWINGS In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which: FIG. 1 illustrates an example computing system in which the principles described herein may be employed; FIG. 2 illustrates a computing system that may implement the embodiments disclosed herein; FIG. 3A illustrates an embodiment of a risk score band; FIG. 3B illustrates an embodiment of an operation of a dynamic gate; FIG. 4 illustrates an embodiment of randomizing opportunity groups; and FIG. 5 illustrates a flow chart of an example method for reducing an amount of data transactions that are subjected to further review when determining if the data transactions should be approved or rejected.
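Putting the pieces of the FIG. 5 method (acts 510-570) together, here is a minimal end-to-end sketch of one pass of the dynamic gate: score, extend the cutoffs, approve an opportunity group, watch the rejection rate, then shrink the review volume. Everything here, including the data shapes, the cutoff steps, and the 20% threshold, is illustrative rather than taken from the claims.

```python
import random

def dynamic_gate_pass(scored_txns, threshold=0.20, seed=0,
                      base=(60, 80), extended=(55, 85), second_review_fraction=0.5):
    """One simplified pass of the dynamic gate over (txn_id, score, is_fraud) tuples.

    `is_fraud` stands in for the outcome that further review would discover;
    in a real system it is unknown until the review actually happens."""
    rng = random.Random(seed)

    def review_band(cutoffs):
        lo, hi = cutoffs
        return [t for t in scored_txns if lo < t[1] <= hi]

    # First time period: extend both cutoffs but keep the review volume constant
    # by approving an opportunity group equal in size to the newly added transactions.
    baseline = review_band(base)
    extended_band = review_band(extended)
    opportunity_1 = rng.sample(extended_band, max(len(extended_band) - len(baseline), 0))
    reviewed_1 = [t for t in extended_band if t not in opportunity_1]
    rate_1 = sum(t[2] for t in reviewed_1) / len(reviewed_1) if reviewed_1 else 0.0
    if rate_1 > threshold:
        return "gate closed: possible fraud attack"

    # Second time period: keep the extended cutoffs but minimize the review volume,
    # which implicitly approves a second, larger opportunity group.
    reviewed_2 = rng.sample(reviewed_1, int(len(reviewed_1) * second_review_fraction))
    rate_2 = sum(t[2] for t in reviewed_2) / len(reviewed_2) if reviewed_2 else 0.0
    return "gate open" if rate_2 <= threshold else "gate closed: possible fraud attack"


txns = [(i, random.Random(i).uniform(1, 100), False) for i in range(200)]
print(dynamic_gate_pass(txns))   # "gate open" when no fraud shows up in review
```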
Q: Precalc word problem Can someone point me in the right direction for this math problem I have on my homework? I don't know where to begin on this. The elk population in a certain region is given by the function $E(t) = 1050 + 120\sin(2t/5)$, where time $t$ is measured in years. What is the largest number of elk present in the region at any time? A: Hints: What is the maximum value of $\sin x$? Then, what is the maximum value of $\sin(2t/5)$? Next, what is the maximum value of $120\sin(2t/5)$? Finally, what is the maximum value of $1050+120\sin(2t/5)$? That is the largest number of elk present at any time.
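Carrying the hints through to a final number (this last step is mine, not part of the original answer): since $\sin x \le 1$ for every $x$, we have $\sin(2t/5) \le 1$, hence $120\sin(2t/5) \le 120$ and $E(t) = 1050 + 120\sin(2t/5) \le 1050 + 120 = 1170$, with equality whenever $2t/5 = \pi/2 + 2\pi k$. So the largest number of elk present at any time is $1170$.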
New Scientist | (Nicola Jones) As the two parallel efforts to sequence the human genome enter their final stages, the geneticists who gathered in Vancouver for last week's meeting of the Human Genome Organisation are already looking ahead. Many think the key to finding the meaningful words in the three-billion-letter long sentence of our DNA is sequencing the genome of the humble lab mouse. "That will be the Rosetta Stone in terms of interpreting the human genome," says Steven Jones of the Genome Sequence Centre in Vancouver. The chromosomes of a mouse. The company Celera Genomics, headed by Craig Venter, which is already setting the pace in unravelling the human genome, announced it would try to sequence the mouse genome earlier this month. Later this year, when the main publicly funded American sequencing centres are scaling down work on the human genome, they will start sequencing the mouse genome. The US's National Human Genome Research Institute estimates that it will take the ten public labs about three years to produce a working draft. Despite appearances, mice and humans are remarkably similar when it comes to their DNA. Both genomes have about three billion bases, only about 3 per cent of which codes for functional genes--the other 97 per cent being "junk DNA". In the many millions of years since mice and humans diverged from a common ancestor, much of the important DNA has been conserved, while the "junk" has mutated freely and is now very different. That means that simply comparing the two genomes will be an efficient way of identifying vital stretches of DNA, including genes and sequences that regulate gene expression. Even better, by "knocking out" selected genes in lab mice, we get a good idea of what they do. The equivalent genes in humans should have very similar functions. And that is the first step on the long road to finding useful applications for all this genetic information. "When it comes to drug development, this is crucial," says Lap-Chee Tsui, president of the Human Genome Organisation (HUGO). Vancouver's Genome Sequence Centre, part of the British Columbia Cancer Research Centre, is starting the public sequencing effort by making a physical map of the mouse genome--a low-resolution chart that orders known sections of DNA and helps locate important areas of each chromosome. With that map, the centre can decide which stretches of DNA need to be sequenced to get good coverage of the mouse genome without unnecessary duplication. Sometime this autumn the Vancouver lab will deal out the cards saying who should sequence what--including data about which segments might be of the greatest medical importance. Celera uses a different method, known as the "shotgun technique", in which the whole genome is randomly broken into fragments. The fragments are then sequenced and researchers use heavyweight computer power to spot overlapping fragments which allow reassembly. Although potentially less accurate than traditional "directed" sequencing along the chromosome, the shotgun approach is considerably faster. "Ultimately it will accelerate the progress and the public availability of information," says HUGO's senior vice-president Gert-Jan van Ommen about Venter's work.
http://www.bcgsc.ca/about/news/almost_human
--- abstract: 'This is essentially a translated (and explained) version of [@hecke1930], where Hecke shows, for a prime $q$, a relation between the class number $h(-q)$ of ${{\mathbb{Q}}}(\sqrt{-q})$ and the representation of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ on the space of holomorphic differentials of $X(q)$.' address: 'Department of Mathematics and Statistics, McGill University, Montréal, QC, Canada' author: - Luiz Kazuo Takei bibliography: - 'takei.bib' title: | On modular forms of weight 2 and\ representations of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ --- Introduction ============ In 1930 Hecke ([@hecke1930]) proved an interesting result concerning the representation of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ on the space of holomorphic differentials of $X(q)$. Analyzing the character table of this group we see two ‘special’ irreducible representations (here denoted $\pi_+$ and $\pi_-$). If we call $m_+$ and $m_-$ the multiplicities of $\pi_+$ and $\pi_-$ in that representation, then Hecke showed that $m_+ - m_- = h(-q)$. In this paper we present a translation and explanation of the ideas contained in [@hecke1930]. We attempt to use the same notation as Hecke whenever it is defined; otherwise we use standard notation. A notable exception is the choice of notation for the irreducible representations of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$. We assume familiarity with representation theory, Riemann surfaces and some knowledge of modular forms at an introductory level (for instance, the first chapters of ). When needed, we refer to results from other areas as well. Fixing notations ================ In these notes, $q$ will denote a prime number (which we assume to satisfy $q \equiv 3 \pmod{4}$ and $q > 3$). $$\Gamma(q) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}_2({{\mathbb{Z}}}) \ \bigg| \ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \pmod q \right\}$$ $$\Gamma_1(q) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}_2({{\mathbb{Z}}}) \ \bigg| \ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & * \\ 0 & 1 \end{pmatrix} \pmod q \right\}$$ $$M_2(\Gamma(q)) := \textrm{space of modular forms of weight $2$ with respect to } \Gamma(q)$$ (as defined in ; in particular, our modular forms are holomorphic on ${\mathcal{H}}$ and at the cusps) We have a well-known (right) action of $\operatorname{SL}_2({{\mathbb{Z}}})$ on $M_2(\Gamma(q))$: if $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}_2({{\mathbb{Z}}})$ and $\varphi \in M_2(\Gamma(q))$, then $$(\varphi[\gamma]_2) (z) := (cz+d)^{-2} \varphi(\gamma z)$$ where $\gamma z := \frac{az+b}{cz+d}$. Since the elements of $M_2(\Gamma(q))$ are invariant under the action of $\Gamma(q)$ and $-I$, the original action induces an action of $\operatorname{PSL}_2({{\mathbb{F}}}_q) = \dfrac{\operatorname{SL}_2({{\mathbb{Z}}})}{\{\pm I \} \cdot \Gamma(q)}$. Let $\zeta := \exp\left(\frac{2 \pi i}{q}\right)$ and $P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \operatorname{PSL}_2({{\mathbb{F}}}_q)$. We define the ${{\mathbb{C}}}$-vector space $V := \left\{ f \in M_2(\Gamma(q)) \ | \ f[P]_2 = \zeta f \right\}$ and denote $z := \dim_{{{\mathbb{C}}}} V$. We also define the following modular curves (which are Riemann surfaces): $X(q) = \Gamma(q) \backslash {\mathcal{H}}^*$ and $X_1(q) = \Gamma_1(q) \backslash {\mathcal{H}}^*$ (where ${\mathcal{H}}^*$ is the union of the upper half-plane and the cusps).
Finally, throughout this paper, if $\pi$ is a representation of a group $G$ on a vector space $V$, $\pi(g)$ denotes the element in $GL(V)$ or its trace depending on the context (sometimes we write $\operatorname{tr}(\pi(g))$ so that it is clear what we mean). First Remarks ============= Note that the action of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ on $M_2(\Gamma(q))$ induces a representation (since it is a right action, the representation is given by $\gamma \cdot \varphi = \varphi[\gamma^{-1}]_2$). Recall that $M_2(\Gamma(q)) = E_2(\Gamma(q)) \oplus S_2(\Gamma(q))$, where $E_2(\Gamma(q))$ is the Eisenstein space and $S_2(\Gamma(q))$ is the space of cusp forms. This decomposition is also a decomposition of $M_2(\Gamma(q))$ as a representation of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$. Moreover, $S_2(\Gamma(q))$ is isomorphic to the space of holomorphic differentials of $X(q)$. $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ can be naturally identified with a subgroup of $\operatorname{Aut}(X(q))$ and its action on $S_2(\Gamma(q))$ defined in the previous section is (via these identifications) the action of a subgroup of $\operatorname{Aut}(X(q))$ on the space of holomorphic differentials of $X(q)$. If $\pi$ denotes the representation from the last paragraph, then it is known that $\pi(\gamma) + \overline{\pi(\gamma)} = 2 - t$ where $t$ is the number of fixed points of $\gamma \in \operatorname{PSL}_2({{\mathbb{F}}}_q) \subseteq \operatorname{Aut}(X(q))$ (this is called the Lefschetz Fixed Point Formula; cf. ). So, if $\pi(\gamma) \in {{\mathbb{R}}}$, we can obtain $\pi(\gamma)$ by computing $(2-t)/2$. Looking at the character table of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ we see that all the irreducible representations have real traces except in the case $q \equiv 3 \pmod{4}$ (where there are two irreducible representations which do not have real traces). Thus, from now on we assume $q \equiv 3 \pmod{4}$ and $q > 3$ (the case $q = 3$ is simple and can be treated individually). Hecke actually studied all the cases in [@hecke1930] (for the interested reader: Hecke defines $\varepsilon = (-1)^{(q-1)/2}$ and deals with both cases simultaneously). Character Table of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ ============================================ In this section we recall the character table of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ (following the presentation given in [@casselmana]). Representatives of the conjugacy classes are: $$\begin{array}{cccc} \pm \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} & & & \\ \pm \begin{pmatrix} t & 0 \\ 0 & 1/t \end{pmatrix} & \sim & \pm \begin{pmatrix} 1/t & 0 \\ 0 & t \end{pmatrix} & (t \neq \pm 1) \\ \pm \begin{pmatrix} a & -b \\ b & a \end{pmatrix} & \sim & \pm \begin{pmatrix} a & b \\ -b & a \end{pmatrix} & (a^2 + b^2 = 1) \\ \pm \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} & & & \\ \pm \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} & & & \end{array}$$ If we let $E = {{\mathbb{F}}}_q(\sqrt{-1})$, then $N^1_{E / {{\mathbb{F}}}_q}$ embeds in $\operatorname{SL}_2({{\mathbb{F}}}_q)$ via $a + b \sqrt{-1} \mapsto \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$. Since $N^1_{E / {{\mathbb{F}}}_q} / \{ \pm 1 \}$ has a unique element of order $2$ (namely $\sqrt{-1}$), it also has a unique character of order $2$ (call it $\rho_0$).
The irreducible representations of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ are the following: - the trivial representation of dimension $1$ - the Steinberg representation - representations $\pi_{\chi}$ parametrized by the characters $\chi$ of the group ${{\mathbb{F}}}_q^{\times} / \{ \pm 1 \}$ - representations $\pi_{\rho}$ parametrized by the characters $\rho \neq \rho_0$ of the group $N^1_{E / {{\mathbb{F}}}_q} / \{ \pm 1 \}$ - representations $\pi_+$ and $\pi_-$, corresponding to the character $\rho_0$ Finally, we give the character table: $ \begin{array}{| c | c | c | c | c | c | c| } \hline & id & St & \pi_{\chi} & \pi_{\rho} & \pi_+ & \pi_- \\ \hline I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} & 1 & q & q+1 & q-1 & (q-1)/2 & (q-1)/2 \\ \hline \begin{pmatrix} t & 0 \\ 0 & 1/t \end{pmatrix} & 1 & 1 & \chi(t) + \chi(1/t) & 0 & 0 & 0 \\ \hline \begin{pmatrix} a & -b \\ b & a \end{pmatrix} & 1 & -1 & 0 & -\rho(\varepsilon) - \rho(1 / \varepsilon) & -\rho_0(\varepsilon) & -\rho_0(\varepsilon) \\ \hline P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} & 1 & 0 & 1 & -1 & \overline{\mathfrak{G}} & \mathfrak{G} \\ \hline P^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} & 1 & 0 & 1 & -1 & \mathfrak{G} & \overline{\mathfrak{G}} \\ \hline \end{array} $ where $\varepsilon = a + b \sqrt{-1} \in E$, $$\mathfrak{G} = \operatorname*{\sum}_{\left( \frac{x}{q} \right) = 1} \exp(2 \pi i x / q) = \operatorname*{\sum}_{\left( \frac{x}{q} \right) = 1} \zeta^x$$ and $\overline{\mathfrak{G}}$ is its complex conjugate. Computing $z$ using representation theory ========================================= In this section we will compute $z$ viewing $M_2(\Gamma(q))$ as a representation of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$. General remarks on representations of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ ------------------------------------------------------------------------------ For each representation $\pi$ and each $n \in \{ 0, \dotsc, q-1 \}$, let $p_{\pi}(n)$ denote the multiplicity of $\zeta^n$ viewed as an eigenvalue of $\pi(P^{-1})$. (Sometimes we shall simply write $p$ instead of $p_{\pi}$). With this notation, we have the following: $$f_\pi := \operatorname{tr}(\pi(I)) = p(0) + p(1) + \dotsb + p(q-1)$$ $$\operatorname{tr}(\pi(P^{-1})) = p(0) 1 + p(1) \zeta + \dotsb + p(q-1) \zeta^{q-1}$$ (recall that $\pi(\gamma)$ can be diagonalized for any $\gamma \in \operatorname{PSL}_2({{\mathbb{F}}}_q)$). So, given $f_\pi$ and $\operatorname{tr}(\pi(P^{-1}))$, we can determine $p(0), p(1), \dotsb, p(q-1)$. We can therefore compute $p(n)$ for each of the irreducible representations (cf.
table \[tbl:irreps\]) obtaining the following: ------------ -- --------------------------------------------------------------------------------------------------------------------------------------------------------- $id$ $p(0) = 1$ and $p(n) = 0$ for all $n>0$ $St$ $p(n) = 1$ for all $n$ $\pi_\chi$ $p(0) = 2, p(1) = \dotsb = p(q-1) = 1$ $\pi_\rho$ $p(0) = 0, p(1) = \dotsb = p(q-1) = 1$ $\pi_+$ $p(n) = \left\{ \begin{array}{c c l} 1 & \textrm{, } & \left( \frac{n}{q} \right) = 1 \\ 0 & \textrm{, } & \textrm{ otherwise } \end{array} \right.$ $\pi_-$ $p(n) = \left\{ \begin{array}{c c l} 1 & \textrm{, } & \left( \frac{n}{q} \right) = -1 \\ 0 & \textrm{, } & \textrm{ otherwise } \end{array} \right.$ ------------ -- --------------------------------------------------------------------------------------------------------------------------------------------------------- As an example of these computations, let us determine $p_{\pi_+}(n)$ for all $n$. We have the equations $$\frac{q-1}{2} = p(0) + p(1) + \dotsb + p(q-1)$$ $$\begin{array}{c c l} \operatorname*{\sum}\limits_{\left( \frac{x}{q} \right) = 1} \zeta^x & = & p(0)1 + p(1) \zeta + \dotsb + p(q-1) \zeta^{q-1} \\ & = & [p(0) - p(q-1)]1 + [p(1) - p(q-1)] \zeta + \dotsb + [p(q-2) - p(q-1)]\zeta^{q-2} \end{array}$$ Hence, for $0 \leq n \leq q-2$ $$\begin{array}{ccl} p(n) - p(q-1) & = & \left\{ \begin{array}{ccl} 1 & , & \left( \frac{n}{q} \right) = 1 \\ 0 & , & \textrm{ otherwise} \end{array} \right. \end{array}$$ Now, since there are exactly $(q-1)/2$ squares in ${{\mathbb{F}}}_q^{\times}$ the first equation reads $$\frac{q-1}{2} = q \, p(q-1) + \frac{q-1}{2}$$ and so $$p(q-1) = 0$$ This yields $$\begin{array}{ccl} p(n) & = & \left\{ \begin{array}{ccl} 1 & , & \left( \frac{n}{q} \right) = 1 \\ 0 & , & \textrm{ otherwise} \end{array} \right. \end{array}$$ So, if we have a representation $W$ of $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ which decomposes as $$W = \alpha \cdot St \ \oplus \ \beta_+ \cdot \pi_+ \ \oplus \ \beta_- \cdot \pi_- \ \oplus \ \operatorname*{\sum}_{\chi} \gamma_{\chi} \cdot \pi_{\chi} \ \oplus \ \operatorname*{\sum}_{\rho} \delta_{\rho} \cdot \pi_{\rho}$$ then $$\dim_{{{\mathbb{C}}}} \left\{ w \in W \ | \ P^{-1}w = \zeta w \right\} = \alpha + \beta_+ + \operatorname*{\sum}\gamma_{\chi} + \operatorname*{\sum}\delta_{\rho}$$ Applying the general remarks to our case {#ssec:applying_gen_rmks} ---------------------------------------- Now we apply the previous subsection to the representation on $M_2(\Gamma(q))$ (recall that if $\gamma \in \operatorname{PSL}_2({{\mathbb{F}}}_q)$ and $\varphi \in M_2(\Gamma(q))$, then $\gamma \varphi = \varphi[\gamma^{-1}]_2$). As we saw earlier, $M_2(\Gamma(q)) = E_2(\Gamma(q)) \ \oplus \ S_2(\Gamma(q))$. So, we may study $E_2(\Gamma(q))$ and $S_2(\Gamma(q))$ separately. It is known that, as a representation, $$E_2(\Gamma(q)) = St \ \oplus \ 2 \operatorname*{\sum}_{\chi} \pi_{\chi}$$ Let $$\label{eqn:decomp_of_s2} S_2(\Gamma(q)) = x \cdot St \ \oplus \ y_+ \cdot \pi_+ \ \oplus \ y_- \cdot \pi_- \ \oplus \ \operatorname*{\sum}_{\chi} u_{\chi} \cdot \pi_{\chi} \ \oplus \ \operatorname*{\sum}_{\rho} v_{\rho} \cdot \pi_{\rho}$$ be the decomposition of the space of cusp forms (notice $id$ does not occur because of the well known fact that $\dim M_2(\operatorname{SL}_2({{\mathbb{Z}}})) = 0$).
Then, since there are $\frac{q + 1}{4} - 1$ irreducible representations of the form $\pi_{\chi}$, the previous subsection gives us $$z = x + 1 + y_+ + \operatorname*{\sum}_{\chi} u_{\chi} + 2 \left( \frac{q + 1}{4} - 1 \right) + \operatorname*{\sum}_{\rho}v_{\rho}$$ Our goal for the rest of this section is to simplify this expression. For this, we will define some notations and use them to help us (these are actually introduced in [@hecke1928]). Given the decomposition (\[eqn:decomp\_of\_s2\]), we define $$\begin{array}{ccc} U = \operatorname*{\sum}\limits_{\chi} u_{\chi} & , & V = \operatorname*{\sum}\limits_{\rho} v_{\rho} \\ Y = y_- + y_+ & , & S = Y + 2U + 2V \end{array}$$ Using this notation, we obtain $$\label{eqn:formula_z} z = \frac{y_+ - y_-}{2} + \frac{q-1}{2} + \frac{Y}{2} + U + V + x$$ If $H \leq \operatorname{PSL}_2({{\mathbb{F}}}_q)$, we define $$Z(H) := \dim_{{{\mathbb{C}}}} \{ f \in S_2(\Gamma(q)) \ | \ f[\gamma]_2 = f, \forall \gamma \in H \}$$ Notice that $Z(H)$ is just the multiplicity of the identity representation of $H$ in $S_2(\Gamma(q))$. Thus, by representation theory, $$\label{eqn:formula_Z(H)} Z(H) = \frac{1}{|H|} \operatorname*{\sum}_{h \in H} \left( x St(h) + y_+ \pi_+(h) + y_- \pi_-(h) + \operatorname*{\sum}_{\chi} u_{\chi} \pi_{\chi}(h) + \operatorname*{\sum}_{\rho} v_{\rho} \pi_{\rho}(h) \right)$$ Notice $H_1 := \left\{ \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \in \operatorname{PSL}_2({{\mathbb{F}}}_q) \ \bigg| \ a, b \in {{\mathbb{F}}}_q \right\}$ is a subgroup of order $(q+1)/2$. Also, $H_2 := \left\{ \begin{pmatrix} t & 0 \\ 0 & 1/t \end{pmatrix} \in \operatorname{PSL}_2({{\mathbb{F}}}_q) \ \bigg| \ t \in {{\mathbb{F}}}_q^\times \right\}$ is a subgroup of order $(q-1)/2$ (because ${{\mathbb{F}}}_q^\times$ is cyclic of order $q-1$ and $I = -I$ in $\operatorname{PSL}_2({{\mathbb{F}}}_q)$). This motivates our next definition: $Z(\frac{q+1}{2}) := Z(H_1)$  and  $Z(\frac{q-1}{2}) := Z(H_2)$. Using equation (\[eqn:formula\_Z(H)\]), we obtain the following: $$\label{eqn:specific_formula_Z(H)} \begin{array}{c} Z(\frac{q-1}{2}) = 3x + Y + 2U + 2V \\ \\ Z(\frac{q+1}{2}) = x + Y + 2U + 2V \end{array}$$ Hence, $$\label{eqn:specific_formula_x_S} \begin{array}{c} x = \frac{1}{2} Z(\frac{q-1}{2}) - \frac{1}{2} Z(\frac{q+1}{2}) \\ \\ S = \frac{3}{2} Z(\frac{q+1}{2}) - \frac{1}{2} Z(\frac{q-1}{2}) \end{array}$$ So, $$\frac{Y}{2} + U + V + x = \frac{S}{2} + x = \frac{1}{4} \left( Z\left(\frac{q+1}{2}\right) + Z\left(\frac{q-1}{2}\right) \right)$$ Note $Z(H)$ is $\dim S_2(\Gamma)$ for a certain congruence subgroup $\Gamma$ and, so, is equal to the genus of the corresponding modular curve. Hence one can compute (Hecke computes this in [@hecke1928]) $$Z\left(\frac{q+1}{2}\right) + Z\left(\frac{q-1}{2}\right) = \frac{q^2-1}{6} - (q-1)$$ So, $$\label{eqn:z_repn} z = \frac{y_+ - y_-}{2} + \frac{q-1}{4} + \frac{q^2-1}{24}$$ Computing $z$ using Riemann-Roch ================================ Some general definitions and results ------------------------------------ Before we actually compute $z$ we will introduce some definitions and present some general facts about automorphic factors and its relation with modular forms. For the most part we will follow [@rankin77]. Throughout this subsection, $\Gamma \subseteq \operatorname{SL}_2({{\mathbb{Z}}})$ denotes a congruence subgroup, $\overline{\Gamma}$ its image in $\operatorname{PSL}_2({{\mathbb{Z}}})$ and if $T = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma$ and $z \in {\mathcal{H}}$, then $T:z = cz + d$. 
Moreover, $P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \operatorname{SL}_2({{\mathbb{Z}}})$. A map $\nu : \Gamma \times {\mathcal{H}}\rightarrow {{\mathbb{C}}}$ is called an *automorphic factor of weight $k$* if 1. $z \mapsto \nu(T,z)$ is holomorphic for all $T \in \Gamma$ 2. $|\nu(T,z)| = |T:z|^k$ for all $T \in \Gamma$ and $z \in {\mathcal{H}}$ 3. $\nu(ST, z) = \nu(S,Tz) \nu(T,z)$ for all $S,T \in \Gamma$ and $z \in {\mathcal{H}}$ 4. if $-I \in \Gamma$, then $\nu(-T,z) = \nu(T,z)$ for all $T \in \Gamma$ and $z \in {\mathcal{H}}$ Let us denote $\mu(T,z) := (T:z)^k$. One can prove that $\nu(T,z) = v(T) \mu(T,z)$, where $|v(T)| = 1$. The map $v$ is called the *multiplier system* associated with $\nu$. Note that it determines $\nu$. If $L \in \operatorname{SL}_2({{\mathbb{Z}}})$, then we define the *parameter of the cusp* $L \infty$ (with respect to $\Gamma$ and $\nu$) to be the only $\kappa_L \in [0,1)$ such that $v(L P^{n_L} L^{-1}) = \exp(2 \pi i \kappa_L)$ where $n_L$ is the width of the cusp $L \infty$ with respect to $\Gamma$. (One proves that this depends only on the orbit of $L \infty$ via the action of $\Gamma$). An *unrestricted modular form* of weight $k$ for the group $\Gamma$ with respect to the automorphic factor $\nu$ (or multiplier system $v$) is a holomorphic function $\varphi: {\mathcal{H}}\rightarrow {{\mathbb{C}}}$ such that $\varphi(Tz) = \nu(T,z) \varphi(z)$ for all $T \in \Gamma$ and $z \in {\mathcal{H}}$. The space of all such functions is denoted $M'_k(\Gamma, v)$. (Note: the definition found in [@rankin77] is a little more general; moreover, the notation used is slightly different). Let $\varphi \in M'_k(\Gamma,v)$, $L \in \operatorname{SL}_2({{\mathbb{Z}}})$ and $n_L$ be the width of the cusp $L \infty$ with respect to $\Gamma$. Define $\varphi[L]_k (z) := \mu(L,z)^{-1} \varphi(Lz)$ and $\varphi[L]_k^*(z) := \exp(-2 \pi i \kappa_L z / n_L) \varphi[L]_k (z)$. Then 1. $\varphi[L]_k (z + n_L) = \exp(2 \pi i \kappa_L) \varphi[L]_k(z)$ 2. $\varphi[L]_k^*(z + n_L) = \varphi[L]_k^*(z)$ A function $\varphi \in M'_k(\Gamma, v)$ is called a *modular form* of weight $k$ for the group $\Gamma$ with respect to the automorphic factor $\nu$ (or multiplier system $v$) if $\varphi$ is holomorphic at the cusps. A function $\varphi \in M'_k(\Gamma, v)$ is said to be *holomorphic at the cusps* if for all $L \in \operatorname{SL}_2({{\mathbb{Z}}})$, $\varphi[L]_k^*(z) = \operatorname*{\sum}\limits_{m = N_L}^{\infty} a_m t^m$ for some $N_L \in {{\mathbb{Z}}}_{\geq 0}$ where $t = \exp(2 \pi i z / n_L)$. If $\varphi \in M_k(\Gamma, v) \backslash \{ 0 \}$ and $z \in {\mathcal{H}}^*$, then we define the *order of $\varphi$ at $z$* by $$\operatorname{ord}(\varphi, z, \Gamma) = \left\{ \begin{array}{lcl} \frac{\operatorname{ord}(\varphi, z)}{|\overline{\Gamma}_{z}|} & , & \textrm{ if } z \in {\mathcal{H}}\\ \kappa_L + N_L & , & \textrm{ if } z = L \infty \end{array} \right.$$ (where $\operatorname{ord}(\varphi, z)$ is just the order of $\varphi$ at $z$ as a holomorphic function on ${\mathcal{H}}$, $\overline{\Gamma}_{z}$ is the stabilizer of $z$ in $\overline{\Gamma}$ and $N_L$ is as in the previous remark such that $a_{N_L} \neq 0$). Let $\varphi \in M_k(\Gamma, v)$.
Then $$\varphi[L]_k(z) = \exp(2 \pi i \kappa_L z / n_L) \operatorname*{\sum}\limits_{m = N_L}^{\infty} a_m(L) \exp(2 \pi i m z / n_L)$$ (This equality justifies the definition of the order of $\varphi$ at a cusp.) Given $\varphi \in M_k(\Gamma, v) \backslash \{ 0 \}$, we define $$\operatorname{ord}(\varphi, \Gamma) = \operatorname*{\sum}_{z \in \Gamma \backslash {\mathcal{H}}^*} \operatorname{ord}(\varphi, z, \Gamma)$$ (*theorem 4.1.4 in [@rankin77]*) \[thm:valence\_formula\] If $\varphi \in M_k(\Gamma, v) \backslash \{ 0 \}$ then $\operatorname{ord}(\varphi, \Gamma) = \dfrac{\mu k}{12}$, where $\mu = \left[ \operatorname{PSL}_2({{\mathbb{Z}}}) : \overline{\Gamma} \right]$. This theorem is sometimes called ‘Valence Formula’. Computing $z$ {#ssec:computing_z} ------------- We are now going to use Riemann-Roch to compute $z$. (Throughout this section, we view $P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ in $\operatorname{SL}_2({{\mathbb{Z}}})$ or in $\operatorname{PSL}_2({{\mathbb{F}}}_q)$ interchangeably depending on the context). First, using that $\Gamma_1(q) = \left< \Gamma(q), P \right>$ we notice that $V = M_2(\Gamma_1(q), v)$ where $v$ is the multiplier system associated with the automorphic factor $\nu$ given by $$\begin{array}{ccccc} \nu & : & \Gamma_1(q) \times {\mathcal{H}}& \longrightarrow & {{\mathbb{C}}}\\ & & (\gamma,z) & \longmapsto & (cz+d)^2 \zeta^b \end{array}$$ where $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. So $v : \Gamma_1(q) \rightarrow {{\mathbb{C}}}$ is given by $v(\gamma) = \zeta^b$. Now fix $\varphi \in V \backslash \{ 0 \}$ and define $\widetilde{V} := \left\{ \frac{\varphi_1}{\varphi} \ \big| \ \varphi_1 \in V \right\} \subseteq {{\mathbb{C}}}(X_1(q)) = $ set of rational functions on $X_1(q)$. Note that $\widetilde{V} \cong V$ (as ${{\mathbb{C}}}$-vector spaces) and, hence, $z = \dim_{{{\mathbb{C}}}} \widetilde{V}$. The idea is to write $\widetilde{V} = L(D) = \{ f \in {{\mathbb{C}}}(X_1(q)) \mid \operatorname{div}(f) \geq -D \}$ for a suitable divisor $D$. We then need to find the degree of $D$ in order to apply Riemann-Roch. $D$ is basically the “divisor” of $\varphi$ but first we have to understand what we mean by “divisor” of $\varphi$. Notice the order of $\varphi$ as defined in the previous section might not be an integer for elliptic points and cusps. So we have to deal with those points. First note that since $q > 3$ then $\Gamma_1(q)$ has no elliptic elements. Hence we only need to deal with cusps. We need to find all cusps with non-zero parameter ($\kappa_L \neq 0$). Let $\dfrac{r}{s}$ be a cusp of $\Gamma_1(q)$ (and assume $\gcd(r,s) = 1$) and $L = \begin{pmatrix} r & x \\ s & y \end{pmatrix} \in \operatorname{SL}_2({{\mathbb{Z}}})$ and assume $n = n_L$ is the width of the cusp $r/s = L \infty$. Then $$\label{eqn:stab_cusp} L P^n L^{-1} = \begin{pmatrix} 1 - nrs & nr^2 \\ -ns^2 & 1 + nrs \end{pmatrix}$$ Note that if $n \equiv 0 \pmod{q}$, then $\kappa_L = 0$. If $n \not\equiv 0 \pmod{q}$, then $s \equiv 0 \pmod{q}$ (otherwise $n \neq n_L$). So the cusps $r/s$ that are problematic are the ones with $s \equiv 0 \pmod{q}$. One checks that these are represented by $$\label{eqn:bad_cusps} \dfrac{r}{q} \ , \ r = 1, 2, \dotsc, \frac{q-1}{2}$$ (and these are not $\Gamma_1(q)$-equivalent). Hence, by (\[eqn:stab\_cusp\]), we see that $n_L = 1$ for all the cusps in (\[eqn:bad\_cusps\]).
Moreover, for those cusps, $v(L P^{n_L} L^{-1}) = \exp(2 \pi i r^2/q) = \zeta^{r^2}$ and, thus, $\kappa_L = \left\{ \frac{r^2}{q} \right\} = $ the fractional part of $r^2/q$. The conclusion is that $\widetilde{V} = L(D)$ where $D$ is a divisor of degree $m = \mu / 6 - \operatorname*{\sum}\limits_{r=1}^{(q-1)/2} \left\{ \frac{r^2}{q} \right\}$ where $\mu = \left[ \operatorname{PSL}_2({{\mathbb{Z}}}) : \overline{\Gamma_1(q)} \right] = (q^2 - 1)/2$ (by theorem \[thm:valence\_formula\]). Let $g$ be the genus of $X_1(q)$ (which is known to be $g = (q-5)(q-7)/24$); then $m > 2g - 2$. Indeed, $m - 2g + 2 = (q^2-1)/12 - \operatorname*{\sum}\limits_{r=1}^{(q-1)/2} \left\{ \frac{r^2}{q} \right\} - (q-5)(q-7)/12 + 2 = (12q-36)/12 - \operatorname*{\sum}\limits_{r=1}^{(q-1)/2} \left\{ \frac{r^2}{q} \right\} + 2 = q - 1 - \operatorname*{\sum}\limits_{r=1}^{(q-1)/2} \left\{ \frac{r^2}{q} \right\} > 0$. So Riemann-Roch gives us $z = \dim_{{{\mathbb{C}}}} \widetilde{V} = m - g + 1$. Denote $\chi(n) = \left( \frac{n}{q} \right)$ the Legendre symbol. Since $\left\{ \frac{r^2}{q} \right\} = \frac{r^2 \mod{q}}{q}$ we obtain that $$\operatorname*{\sum}\limits_{r=1}^{(q-1)/2} \left\{ \frac{r^2}{q} \right\} = \frac{1}{2} \operatorname*{\sum}\limits_{n=1}^{q-1} \frac{n}{q}(1 + \chi(n)) = \frac{q-1}{4} + \frac{1}{2q} \operatorname*{\sum}_{n=1}^{q-1} n \chi(n)$$ Hence, we can compute $$z = m - g + 1 = \dotsb = \frac{q^2 + 6q - 7}{24} - \frac{1}{2q} \operatorname*{\sum}_{n=1}^{q-1} n \chi(n)$$ Now, using Dirichlet’s class number formula, we obtain $$\label{eqn:z_RR} z = \frac{q^2 + 6q - 7}{24} + \frac{1}{2}h(-q)$$ where $h(-q)$ is the class number of ${{\mathbb{Q}}}(\sqrt{-q})$. Conclusion and final remark =========================== Using equation (\[eqn:z\_repn\]) from section \[ssec:applying\_gen\_rmks\] and equation (\[eqn:z\_RR\]) from section \[ssec:computing\_z\] we finally obtain what we wanted: $$m_+ - m_- = h(-q)$$ As a final remark, we note that a different proof of this result of Hecke can be found in [@casselmana].
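As a quick numerical sanity check (our addition, not part of Hecke's argument), the form of Dirichlet's class number formula used in the last step, $h(-q) = -\frac{1}{q}\operatorname*{\sum}_{n=1}^{q-1} n \left( \frac{n}{q} \right)$, can be compared against known class numbers for a few small primes $q \equiv 3 \pmod 4$, $q > 3$:

```python
# Check h(-q) = -(1/q) * sum_{n=1}^{q-1} n*(n/q) against known class numbers
# of Q(sqrt(-q)) for small primes q = 3 (mod 4), q > 3. The KNOWN table lists
# standard class-number values; the Legendre symbol uses Euler's criterion.
def legendre(n, q):
    r = pow(n, (q - 1) // 2, q)
    return -1 if r == q - 1 else r

KNOWN = {7: 1, 11: 1, 19: 1, 23: 3, 31: 3, 43: 1, 47: 5, 67: 1, 71: 7}

for q, h in KNOWN.items():
    s = sum(n * legendre(n, q) for n in range(1, q))
    print(q, -s // q, h)  # the last two columns agree for every q in the table
```

For example, for $q = 7$ the Riemann-Roch computation gives $m = 3$, $g = 0$ and hence $z = 4$, consistent with $\frac{q^2 + 6q - 7}{24} + \frac{1}{2}h(-7) = \frac{84}{24} + \frac{1}{2}$.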
That’s according to the Sun, which claims that many of the workers—most of whom are women—clock 60-hour work weeks and live in boarding houses. To put this in perspective: The price of the cheapest Ivy Park tank would be equivalent to about four days worth of pay for these women. One worker told the Sun that she makes 18,500 rupees—$126—each month, despite working overtime with no sick pay or paid time off. From the Sun: One sewing machine operator, 22, told us she cannot survive on her basic wage of 18,500 rupees (£87.26) a month, just over half the Sri Lankan average of £164. She works 9¾ hours a day, Monday to Friday, with a 30-minute lunch break. She has to work Saturdays and overtime in the week. The monotonous work involves stitching clothes alongside hundreds of other women. Speaking at her cramped 100-room boarding house near the factory in Katunayake, close to Colombo airport, she said: “All we do is work, sleep, work, sleep.” The worst part? Technically, the factory employing these women isn’t breaking the law: The minimum wage in Sri Lanka is 13,500 rupees a month (about $92). But, as in the U.S., this isn’t considered a living wage.
https://www.fastcompany.com/4007399/beyonces-ivy-park-line-is-reportedly-made-by-sri-lankan-workers-earning-6-a-day
Adverbs are words that can modify verbs, adjectives, or other adverbs. So if you are not familiar with the concept of adverbs yet, read this.

What Are Adverbs?

Types of Adverbs

Here are the main types of adverbs in the English language:
- Adverbs of place
- Adverbs of time
- Adverbs of manner
- Adverbs of frequency
- Adverbs of degree
- Adverbs of probability
- Adverbs of movement and direction
- Demonstrative adverbs
- Relative adverbs
- Interrogative adverbs
- Conjunctive adverbs
- Viewpoint and commenting adverbs
- Intensifiers and mitigators
- Adverbial nouns

Functions of Adverbs

Adverbs Modifying Verbs

The main function of adverbs is to modify verbs or verb phrases. In this way, they give us more information about the manner, place, time, frequency, certainty, etc. Take a look at some examples:

She danced beautifully. Here, 'beautifully' modifies the verb 'danced,' indicating the manner of dancing.
I left my wallet in the park. 'In the park' modifies the verb phrase 'left my wallet,' indicating place.
My uncle is going to London tomorrow. Here, 'tomorrow' modifies the verb phrase 'is going to,' indicating time.
She often works. 'Often' modifies the verb 'works,' indicating frequency.
She has undoubtedly gone. 'Undoubtedly' modifies the verb phrase 'has gone,' indicating probability.

Adverbs Modifying Adjectives and Adverbs

We can also use adverbs to modify adjectives and other adverbs, often to indicate degree. For example:

The turtle moves very slowly. The adverb 'very' modifies another adverb, 'slowly.'
This cake is absolutely delicious. The adverb 'absolutely' modifies the adjective 'delicious.'

Adverbs Modifying Determiners and Prepositional Phrases

Adverbs can also be used to modify determiners and prepositional phrases. Take a look at the following examples:

I've watched practically all of his movies. 'Practically' modifies the determiner 'all' in the noun phrase 'all of his movies.'
He's almost as old as I am. 'Almost' modifies the prepositional phrase 'as old as.'

Adverbs Modifying Sentences

We can also use adverbs to modify whole clauses or sentences. Sentence adverbs such as 'undoubtedly' and 'unfortunately' modify the sentence as a whole.

Adverbs as Subject Complements

Adverbs can sometimes be used as predicative subject complements. Mostly the adverbs of place can be used in this way. For example: Our seat is here. Similarly, in 'Here is where the trouble starts,' 'where the trouble starts' is the subject and 'here' is the predicate; the sentence has a subject-verb inversion.

Structures of Adverbs

In English (as in many other languages), there are different ways of creating an adverb. For example, we can create adverbs of manner by adding the suffix -ly to the adjective:
quick → quickly
slow → slowly

Some words can be used as both adjectives and adverbs. These words are called flat adverbs (also called bare adverbs or simple adverbs). For example:
- fast
- hard
- straight

Positions of Adverbs

Adverbs of manner are generally placed after the verb and its objects. However, other positions are possible too. Many adverbs of frequency, degree, certainty, etc. tend to be placed before the verb, although if there is an auxiliary, then the normal position for such adverbs is after that auxiliary. Adverbs that show a connection with the previous sentences, and those that provide the context for a sentence, are normally placed at the beginning of the sentence. If the verb has an object, the adverb comes after the object.

Order of Adverbs

If you want to use more than one adverb in a sentence, it is important to know how to place them in a specific order. There is a simple set of rules to follow, called the order of adverbs.
Adverbs are generally placed in the following order:
- Adverbs of manner
- Adverbs of place
- Adverbs of frequency
- Adverbs of time

Comparative and Superlative Adverbs

Adverbs, like adjectives, can show degrees of comparison, but it is less common to use them in comparison.
- To make the comparative form of an adverb of two or more syllables, you add 'more' before the adverb.
- To make the superlative form of an adverb of two or more syllables, you add 'most' before the adverb.

With adverbs that look exactly the same as their adjective counterparts, the comparative and superlative forms look the same as the adjective comparative and superlative forms. For example: She speaks fast. Here, 'fast' is an adverb which is the same as the adjective 'fast.'

Tip! There are some linking verbs that are mostly followed by adjectives, not adverbs, such as smell, feel, appear, etc. For example: The food smells good (not 'well').

Adverbial Phrases

'Adverbial phrases' are groups of words that act as adverbs. An adverb phrase may have an adverb as its head, along with any modifiers and complements. For example:
- very carefully
- all too well
- sadly enough

Another very common type of adverb phrase is the prepositional phrase, which consists of a preposition and its object, as in 'I will talk to my father after dinner' or 'His car is parked in the garage.'

Adverbial Clauses

Clauses contain a subject and a verb; without a subject and a verb we cannot make clauses. 'Adverbial clauses,' as a result, have a subject and a verb. They are made up of a set of words and play the role of an adverb, as in 'The bosses signed the contracts after the lawyers had reviewed them' or 'He left because he was tired.'

Review

Adverbs are used to modify adjectives, verbs, or other adverbs. There are different types of adverbs in English. Check out the list:
- adverbs of manner
- adverbs of place
- adverbs of time
- adverbs of frequency
https://langeek.co/en/grammar/course/501/adverbs
Amy Van Nostrand is an American actress best known for her contributions to Broadway theatre. Her works in theatre include The Hothouse and Dance With Me. Besides that, Amy featured in movies as well, such as Made in Heaven, Ruby Cairo, Partners in Crime, and Ghost Town. Amy started her career in 1981 and is still present in the industry. Nostrand married actor Tim Daly and has two children. Let's find out more about her career and personal life below. Who is Amy Van Nostrand? Her Bio and Age Amy Van Nostrand was born on April 11, 1953, in Providence, Rhode Island, USA. This 66-year-old actress was born under the sun sign of Aries. There is not much information about her parents and siblings. Amy attended Brown University and completed a Bachelor of Arts degree. Nostrand always had an interest in acting, so after graduating she started her career in theatre. Also Read: Sakura Ando Height, Movies, Husband, Net Worth, 100 Yen Love Amy Van Nostrand's Career Nostrand started her career by working in theatres like Huntington Theatre Company, George Street Playhouse, Pittsburgh Public Theatre, Williamstown Theatre Festival, and The People's Light & Theatre Company. Some of the stage shows Amy appeared in are The Buried Child, Love Letters, and A View From the Bridge. Nostrand started her television and film career in 1981 with The House of Mirth. Amy played the role of Gwen Van Osburgh. After that, she appeared in many television series like the Vietnam War, Dangerous Heart, The Practice, Frasier, Execution of Justice, and Law & Order. Currently, Amy is working in Broadway theatre and also helping fellow newcomers in the industry. Amy Van Nostrand's Husband, Tim Daly Nostrand married actor Tim Daly in 1982. Tim is the son of the late actor James Daly, who was one of the inspiring actors in theatre. Amy and Tim met each other during a shoot and immediately fell in love with each other. The couple tied the knot in 1982 in a low-key wedding ceremony, inviting just friends and family. Soon, Amy and Tim welcomed their children, Sam Daly and Emelyn Daly. Sam is currently working as an actor. But the love between Amy and Tim went sour, and they separated in 2010. The reason for the divorce is not known yet. Tim is currently married to Tea Leoni and Amy is single. Also Read: Maria Schrader Bio, Movies, Daughter, Fortitude & Net Worth What is Amy Van Nostrand's Net Worth? The exact amount of her wealth is undisclosed, but in her decades-long career, Amy must've earned millions of dollars. Her source of income comes from her acting career in theatre, films, and television. Not just that, Amy also became a producer in the later years of her career, which helps add to her income. From her Broadway career, Amy earned a salary of $2,034 per week, and from off-Broadway, the salary is $1,145 per week. Amy acted in movies, from which she earned a salary of more than $500,000 per movie plus a share of the movie profits. She is an associate producer at the Seven Girlfriends production, from which Amy earns on average $54,774 per year. Amy is not that active on Instagram, so not much about her lifestyle can be learned.
https://allstarbio.com/amy-van-nostrand-bio-husband-children-age-james-daly/
India's significant economic growth over the last decade has led to an inexorable rise in energy demand. Currently, India faces a challenging energy shortage. To grow at 9 per cent over the next 20 years, it is estimated that its energy capacity must increase by approximately 5.8 per cent per year. Edited Proceedings of the 11th Petro India Conference organised by India Energy Forum and Observer Research Foundation. The creation of a new 'architecture' for regional cooperation in the area of energy in general and hydrocarbons in particular will be a critical step in the process of Asia's development. While the process of industrialisation is facing many problems that need to be addressed, the problem of land for industry is among the most serious. Many major projects today are stuck due to problems related to land acquisition. The Indian industrial sector has slowed down and reviving it is an immense challenge, given problems in the availability of power. Many states across the country have been facing daily power cuts of upto six hours; the situation is only worsening despite measures being taken by the government such as sprucing up coal supplies. Oil is the lifeblood of modern Indian society; its importance in the daily lives of citizens is steadily increasing. At the same time, natural gas is gaining in importance worldwide due to its environment-friendliness and lower cost as compared to oil. Mekong-Ganga Dialogue (MGD) is an international cooperation forum for enhancing understanding between Mekong and Ganga countries about water, food and energy challenges. The draft Land Acquisition and Rehabilitation and Resettlement Bill, which was introduced in the Lok Sabha on September 7, 2011 is one of the most important legislations waiting for Parliamentary approval. As India braces itself for an over-ambitious Jawaharlal Nehru National Solar Mission, it also has to deliberate the prospects of developing other renewable energy resources. Of all the non-conventional renewable energy sources, small hydro represents the highest density resource. India and the US are poised to expand agricultural cooperation with the hope of bringing about a "Second Green Revolution" in India. Cooperation in this area would, however, need to take into account the interests of Indian farmers as well as issues related to bio-diversity and the environment.
http://admin.indiaenvironmentportal.org.in/category/publisher/observer-research-foundation?page=1
[**COMPARISON OF CONNECTIVITY AND RIGIDITY PERCOLATION**]{} Cristian F. Moukarzel and Phillip M. Duxbury\ \ Department of Physics/Astronomy and\ Center for Fundamental Materials Research,\ Michigan State University,\ East Lansing, MI 48824-1116 [**1. INTRODUCTION**]{} Connectivity percolation has devotees in mathematics, physics and in myriad applications . Rigidity percolation is a more general problem, with connectivity percolation as an important limiting case. There is a growing realization that the general rigidity percolation problem exhibits a broader range of fundamental phenomena and has the potential for many new applications of percolation ideas and models. In connectivity percolation, the propagation of a scalar quantity is monitored, while in rigidity the propagation of a vector is, in general, considered. In both cases, one or more conservation laws hold. Moreover, connectivity percolation is rather special and appears to be, in many cases, quite different from other problems in the general rigidity class. We illustrate these differences by comparing connectivity and rigidity percolation in two cases which are very well understood, namely diluted triangular lattices  and trees . A large part of the intense fundamental interest in percolation is due to the fact that percolation is like a phase transition, in the sense that there is a critical point (critical concentration) and non-trivial scaling behavior near the critical point . This analogy carries over to the rigidity percolation problem. However, although the connectivity percolation problem is usually second order (including on trees), the rigidity percolation problem is first order in mean field theory and on trees . In contrast, on triangular lattices, both connectivity and rigidity percolation are second order, though they are in [*different universality classes*]{} . The emphasis of most of the analysis in the literature and in this presentation is the behavior at and near the critical point. In this paper, we discuss (Section 2) ideas which apply to connectivity and rigidity percolation on diluted lattices. In Section 3, we discuss and compare the specific case of connectivity and rigidity percolation on trees. In Section 4 we summarize the matching algorithms which may be used to find the percolating cluster in both connectivity and rigidity cases. The behavior in the connectivity and rigidity cases on site diluted triangular lattices is then compared. Section 5 contains a summary and discussion of the similarities and differences between connectivity and rigidity percolation in more general terms.
The reason for the equivalence of the critical behavior of these models is that they are all dominated by long range “rings”, whereas on regular lattices in lower dimensions short range rings can be very important. Field theory models seek to add short range loops to these models in a systematic and non-perturbative (in some sense) way. We consider lattices consisting of sites which are defined to have $z$ neighbors (i.e. they have coordination number $z$). We then add edges between neighbors randomly with probability $p$. Once this process is finished we are left with a random graph with each site having average coordination $C=z p$. In studying percolation, we are always asking the question “Is it possible to transmit some quantity (i.e. a scalar, or a vector) across a graph?”. If it is possible to transmit the quantity of interest, we say that the network is above the “percolation threshold” $p_c$ (or equivalently $C_c=z p_c$) for that quantity. If it is impossible to transmit the quantity of interest we are below $p_c$. At $p_c$ the part of the random graph which transmits the quantity of interest is [*fractal*]{} [**provided**]{} the percolation transition is second order. The probability that a bond is on this “percolating backbone” is $P_B$, and is one of the key quantities in the analysis of percolation problems on regular lattices. If the percolation transition is “second order”, $$P_B \sim (p-p_c)^{\beta'} \ \ \hbox{as}\ \ p \to p_c^+$$ with $\beta' > 0$. On trees it is more difficult to define $P_B$. Nevertheless we are able to analyze the problem effectively using “constraint-counting” ideas . Let us for a moment put aside thinking in terms of percolation and instead develop constraint counting ideas originally discussed by Maxwell, and which have been developed extensively in the engineering, math and glass communities . These ideas have not been applied to connectivity percolation until recently and are enriching to both the connectivity and rigidity cases. At a conceptual level constraint counting is deceptively simple. To illustrate this, we assign to each site of our lattices a certain number of [*degrees of freedom*]{}. In connectivity percolation, the transmission of a scalar quantity is of interest, therefore each site is assigned one degree of freedom. However if we consider a point object in two dimensions from the point of view of the transmission of forces, it has two degrees of freedom (two translations). In $d$ dimensions point masses have $d$ degrees of freedom. However extended objects have rotational degrees of freedom, so that when we have clusters which are mutually rigid, we must also consider these rotational degrees of freedom. We call such objects bodies; they have an additional $d(d-1)/2$ rotational degrees of freedom, for a total of $d(d+1)/2$. From a model viewpoint, we allow the number of degrees of freedom of a free site to be a control variable which we label $g$, with $g=1$ the connectivity case. With equal generality we may say that $G$ is the number of degrees of freedom of a rigid cluster or body. It is easy to see that there is a vast array of models with $g\ne 1$, and with $G\ne 1$ and much of the physics and applications of these models have yet to be examined . Now we have a model in which a free site has $g$ degrees of freedom, so that a lattice with $N$ sites (and no edges) has a total of $F=Ng$ degrees of freedom, or “floppy modes” (i.e. modes which have zero frequency due to the fact that there is no restoring force).
Constraint counting consists in simply saying that each time an *independent* edge is added to the lattice, the number of degrees of freedom (zero frequency modes) is reduced by one if this edge is not “wasted” (see later). For example, the minimum number $E_{\hbox{\scriptsize \bf min}}$ of constraints needed to make a rigid cluster out of a set of $N$ sites is $$E_{\hbox{\scriptsize \bf min}} = N g - G, \label{eq:bmin}$$ which holds for the case in which edges are put on the graph in such a way that none is wasted. In general, if $E$ edges are added to the graph, the number of degrees of freedom (or floppy modes) which remains is $$F = N g - E + R.$$ Note the additional term $R$ on the right hand side. This term is key in understanding the relation between constraint counting and percolation, and in finding algorithms for rigidity percolation. $R$ is the number of “redundant bonds” or “wasted” edges which are added to the lattice. An edge does not reduce the number of floppy modes if it is placed between two sites which are already mutually rigid, in which case this edge is “redundant”. The simplest examples of subgraphs containing a redundant bond on a triangular lattice are illustrated in Fig. 1 for the connectivity ($g=1$) and $g=2$ rigidity cases. Note that any one of the bonds in these structures could be labeled as the redundant one. However once any one of them is removed, all of the others are necessary to ensure the mutual rigidity of the structure. In fact the set of all bonds which are mutually redundant form an “overconstrained” or “stressed cluster”. This is because, in the rigidity case we can think of each edge as being a central force spring, which means that there is a restoring force only in tension and compression. Then an overconstrained cluster of such springs (with random natural lengths) is internally stressed due to the redundant bond. In the connectivity case each bond is like a wire which can carry current or fluid flow. The simplest overconstrained cluster is then a loop which can support an internal “eddy” current. Rigid structures which contain no redundant bonds are minimally rigid or “isostatic”. In connectivity percolation isostatic structures are trees, whereas in ($g>1$) rigidity percolation isostatic structures must always contain loops (see Fig. 1). In percolation problems, we are interested in the asymptotic limit of very large graphs ($N \rightarrow \infty$), and it is more convenient to work with intensive quantities, so we define $f(p) = F/gN$ and $r(p)=2Rg/zN$ which leads to $$f(p) = 1 - {z\over 2g}( p - r(p)),$$ where the number of edges $E/N = zp/2$. The normalization on $r(p)$ is chosen this way because $r(p)$ is now the probability that a bond is redundant (times $g$). Note that we normalize the number of floppy modes by $g$ to be consistent with previous work . We have shown that $f(p)$ acts as a free energy for both connectivity and rigidity problems, so that if $\partial f(p)/\partial p$ undergoes a jump then the transition is first order. The behavior of this quantity is directly related to the probability that a bond is overconstrained $P_{ov}$ via the important relation  $${\partial f \over \partial p} = -{z\over 2g} ( 1 - P_{ov})$$ If the transition is second order, the second derivative $\partial^2 f/\partial p^2 \sim (p-p_c)^{-\alpha}$, where $\alpha$ is the specific heat exponent . On both triangular lattices and on trees, we also calculate the infinite cluster probability. This is composed of the backbone plus the dangling ends. 
Dangling ends are rigidly connected to the backbone but [*do not*]{} participate in the transmission of the quantity of interest. Examples of dangling ends in the connectivity and rigidity cases are illustrated in Fig. 2. [**3. TREES**]{}\ In our previous analysis of tree models, we have considered propagation of rigidity outward from a rigid boundary (this is the same as a strong surface field). Here we also add a bulk field which induces rigidity. The results are essentially the same whether the boundary or the bulk field is used. In percolation problems, the bulk field corresponds to nuclei embedded in the lattice and tethered to a rigid substrate. This is the sort of construction envisioned by Obukhov , and used extensively in the construction of models for connectivity percolation. The mean-field equation for the order parameter on trees is then, $$T_0= h + (1-h) \sum^{z-1}_{l=g} {z-1 \choose l} (pT_0)^l (1-pT_0)^{z-1 -l}, \label{eq:mf}$$ where $T_0$ is the probability that a site is connected to the infinite rigid cluster through one branch of a tree (see the paper by Leath for more details on the derivation), and $h$ is the probability that a site is rigidly connected to the rigid substrate ($h$ is sometimes called a “ghost field”). On trees, the probability that a bond is overconstrained is, $${N_0\over N_B} = T_0^2$$ so that we have the key equation, $${\partial f \over \partial p} = -{z\over 2g} (1-T_0^2).$$ Other useful formulae are the probability that a bond is overconstrained, $$P_{ov} = T_0^2,$$ and the probability that a bond is on the infinite cluster, $$P_{inf} = T_0^2 + 2T_0 T_1.$$ In the connectivity cases, an infinitesimal field $h$ or any finite order parameter at the boundary is sufficient to allow a percolation transition to occur on trees. In contrast in $g\ne 1$ cases, there must be a finite $h$, or a finite boundary field before a transition occurs. These differences are illustrated in Fig. 3. It is clearly seen from these figures that the rigidity transition is first order on trees, while the connectivity one is second order. The results presented in Fig. 3 are simple to obtain. We iterate the mean field equation (6) until a stable fixed point is reached (there are similar equations for $T_1$ etc.) and we then evaluate Eqs. (7) - (9). We identify the point at which the stable solution becomes nonzero as $p_s$, the spinodal point, for reasons discussed below. In the connectivity case $p_s = p_c$ because the transition is second order. We also want to find the total number of redundant bonds $r(p)$ and the total number of floppy modes $f(p)$. In order to find these quantities, we integrate Eq. (7) and then use Eq. (3). However, the integration of Eq. (7) leads to one free constant. This constant depends on the situation we wish to model. If we wish to model a regular lattice, then we impose the constraint, $$r(1) = 1 - {2g \over b z}, \label{eq:weuse}$$ for example in the case of central force springs on a triangular lattice $1/3$ of the springs are redundant when $p=1$. However on trees, $$r(1) = 1 - {g \over b (z-1)}.$$ In using tree models to provide approximations to rigidity percolation on regular lattices, we impose the constraint (\[eq:weuse\]) . Then we find that $r(p)$ approaches zero at a critical point $p_c$, which is NOT the same as $p_s$ if $g>1$. However, if $g>1$, this $p_c$ is very close to the Maxwell estimate $p_c^{\hbox{\tiny Maxwell}} = 2g/bz$, and is [*the same*]{} as that found numerically for infinite range (random bond) models .
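As an illustration of the procedure just described (this sketch is ours, not code from the original work), the fixed-point iteration of the mean-field equation is straightforward to program; the parameters $z=6$, $g=2$, $h=0$ below are merely an example choice, and the scan over $p$ locates the spinodal $p_s$ as the smallest dilution at which a nonzero stable solution survives.

```python
# Illustrative sketch: iterate the tree mean-field equation
# T0 = h + (1-h) * sum_{l=g}^{z-1} C(z-1,l) (p T0)^l (1 - p T0)^(z-1-l)
# to its stable fixed point, starting from T0 = 1 (rigid boundary), and scan p
# downward to locate the spinodal p_s where the nonzero solution disappears.
from math import comb

def rhs(T0, p, z, g, h):
    """Right-hand side of the mean-field equation for the order parameter T0."""
    q = p * T0
    s = sum(comb(z - 1, l) * q**l * (1 - q)**(z - 1 - l) for l in range(g, z))
    return h + (1 - h) * s

def fixed_point(p, z, g, h, tol=1e-12, max_iter=10000):
    """Iterate from T0 = 1 until convergence; return the stable fixed point."""
    T0 = 1.0
    for _ in range(max_iter):
        new = rhs(T0, p, z, g, h)
        if abs(new - T0) < tol:
            break
        T0 = new
    return T0

if __name__ == "__main__":
    z, g, h = 6, 2, 0.0   # example parameters only (rigidity-like case, no ghost field)
    for i in range(100, 40, -1):      # scan p from 1.00 down to 0.41
        p = i / 100
        print(f"p = {p:.2f}  T0 = {fixed_point(p, z, g, h):.4f}")
    # Below the spinodal the iteration collapses to T0 = 0; for g > 1 the drop
    # is discontinuous, which is the first-order behavior discussed in the text.
```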
Typical results found for tree models are presented in Fig. 4. In the connectivity case we find $p_c=p_s$, while in rigidity cases $p_s<p_c$. For the rigidity case of Fig. 4, $p_s=0.605$, while $p_c=0.655$.\ [**4. TRIANGULAR LATTICES**]{}\ We consider connectivity percolation ($g=1,b=1,z=6$) and central force percolation ($g=2,b=1,z=6$) on site diluted triangular lattices. In connectivity percolation the scaling behavior near the percolation threshold has been proven on trees and has been extensively tested on regular lattices using large scale numerical simulations. We have carried out a similar program for rigidity percolation as summarized below for triangular lattices. However doing numerical simulations in the rigidity case has required a breakthrough in algorithm development, and this has only occurred recently through contact with the mathematical computer science community. The methods developed for rigidity have even improved some aspects of algorithmic methods  for the connectivity case, as discussed below. Before discussing the rigidity algorithms, it is important to point out the limitations of these methods. Firstly these algorithms are able to identify structures which can support stress (rigidity case), or which can carry a current (connectivity case). They do not find the actual current or stress, but rather those bonds which are able to transmit the load from an input node or set of nodes to an output node or set of nodes. In the connectivity percolation case it is trivial that the actual geometric realization of a given graph connectivity does not change the set of bonds which carry current from a fixed set of input nodes to a fixed set of output nodes. However in the rigidity case, there can be “special” realizations of a given graph which are responsible for singular rigid properties of “generically” rigid clusters. This may occur when there are “degenerate” constraints (e.g. parallel bonds on a lattice). The probability of such degenerate realizations is zero on geometrically disordered lattices and for this reason geometry, and the existence of these degenerate configurations, may be simply ignored in this case. In the mathematical community this problem is called “generic rigidity” and is the only one for which powerful algorithms exist. Thus the problem of rigidity on a [*regular triangular lattice remains unsolved*]{}. The results described below apply to triangular lattices whose sites have been randomly displaced (e.g. by $0.1$ of a lattice constant). In addition, the powerful matching algorithms that we use apply to “joint-bar graphs in the plane” and to a subset of graphs in general dimensions (so-called body-bar problems). However, it turns out that glasses in $3-d$ correspond to a case which is solvable (they map to a body-bar problem), so this is one of those unusual cases where the practical case is actually theoretically convenient. The matching algorithm is implemented as follows: > Start with an empty triangular lattice (no bonds) and assign to each node $g$ degrees of freedom.\ > [*Then:*]{} > > 1. Randomly add a bond to the lattice. > > 2. Test whether this bond is redundant with respect to the bonds which are currently in the lattice. > > 3. If the bond is redundant [*do not add it to the lattice*]{}, but instead store its location in a different array. > > 4. Return to 1. > > [*End*]{} The key step is step 2. The algorithm to do this test is rigorous and based on Laman’s theorem.
It was developed by Bruce Hendrickson, who also provided a key service in explaining his algorithm to the physics community. Step 2 is performed by exact constraint counting. This is implemented by “matching” constraints (bonds) to degrees of freedom, with the restriction that the constraints can only match degrees of freedom at each end of the corresponding bond. Thus it is natural to represent this constraint counting by using arrows to indicate the degree of freedom to which each bond is matched. The idea is that, when a new bond is added, it must be possible to match $G+1$ arrows to degrees of freedom of the graph in order for this new bond not to be redundant. If this task can be accomplished [^1] the new bond is [*independent*]{}, which means that it is not wasted, and is left on the graph. In this case only $G$ auxiliary arrows are removed. If on the other hand some of the new arrows cannot be matched, the last edge is [*redundant*]{} and all $G+1$ arrows are erased. A successful and a failed match are illustrated in Fig. 5 for a connectivity ($g=1$) case and a rigidity ($g=2$) case. Note that the bond that is being tested carries with it $G$ additional “copies” which account for global degrees of freedom of a rigid cluster. In the connectivity case $G=1$, while on central-force bar-joint networks in two dimensions $G=3$. A failed match occurs when a bond is unable to find a degree of freedom to “cover”. This bond is then redundant. In trying to find a degree of freedom to which a redundant bond can be assigned or “matched”, the algorithm identifies all bonds which are “overconstrained” or stressed with respect to that bond. This set of bonds is called a [*Laman subgraph*]{}. Note that if a redundant bond is already in a graph, it is not possible to add a new bond and test its redundancy with this method. This is the reason that the algorithm proceeds by adding bonds one at a time starting with an empty lattice. Any error in testing the redundancy of a bond invalidates the rest of the addition sequence. However since this algorithm is an integer method there is no problem with roundoff. It is easy to see that the matching algorithm is quite efficient, however it requires quite a bit of effort to fully optimize these methods. Pictures of the infinite cluster for connectivity and rigidity percolation on a triangular lattice are presented in Fig. 6. Because of the fact that we add bonds one at a time until the percolation point is reached, we are able to identify $p_c$ [*exactly*]{} for each sample, and therefore measure the components of the spanning cluster exactly at $p_c$. This eliminates the error associated with measurements at fixed values of $p$, since estimated exponents are known to depend very sensitively on $p$. This method was proposed and applied for the first time in ref. . At $p_c$ we identify three different types of bonds: [*backbone bonds*]{}, [*dangling ends*]{} and [*cutting bonds*]{}. These together form the [*infinite cluster*]{}. The cutting bonds are stressed (belong to the backbone), but they are “critical” because if one of them is removed, load is no longer transmitted across the infinite cluster. The results in Fig. 6 are for bus bars at the top and bottom of the figures and periodic boundary conditions in the other direction. In order to find the correlation length exponent, we used two relations: Firstly the size dependence of the threshold behaves as $\delta p_c \sim L^{-1/\nu}$ and secondly, the number of cutting bonds varies as $n_c \sim L^{1/\nu}$ .
An analysis of this data in the connectivity and rigidity cases is presented in Fig. 7. It is seen from this figure that the rigidity case has a different correlation length exponent, $\nu=1.16\pm 0.03$, than the connectivity case, where $\nu=4/3$. A finite-size-scaling plot of the infinite-cluster and backbone densities at $p_c$ is presented in Fig. 8, along with the density of dangling ends. What is unambiguous from these figures is that the backbone density decreases algebraically, $P_B \sim L^{-\beta'/\nu}$, and from this data we find that in the rigidity case $\beta' = 0.25 \pm 0.02$. It also appears that the infinite-cluster probability is decreasing algebraically; however, that is difficult to reconcile with the behavior of the dangling ends. In the connectivity case much larger scale simulations have been done, and it is known that the algebraic decrease in the infinite-cluster probability continues, so that the infinite cluster in the connectivity case is indeed fractal. In the rigidity case such large scale simulations are still lacking, so the possibility of an infinite cluster with finite density (about $0.1$) remains.

[**5. SUMMARY**]{}

The development of tree models and the use of new algorithms in numerical studies have revolutionized our understanding of the geometry of rigidity percolation. In particular, we now know that on triangular lattices the rigidity transition is second order, but in a different universality class from the connectivity case. In contrast, on trees the rigidity transition is first order while the connectivity transition is second order. The geometry of rigidity percolation is different from that of connectivity percolation due to the requirement of multiple connectivity in the rigidity case; however, this is clearly not enough to ensure that the transition becomes first order. Perhaps a deeper question is whether the infinite cluster breaks up into two or an infinite number of subclusters when a critical bond is removed. In the connectivity case the answer is clearly two. In the rigidity case it is infinite on trees and difficult to analyze precisely on triangular lattices. This and a host of other questions remain unanswered in this interesting class of problems.

[**ACKNOWLEDGEMENTS**]{}

PMD and CM thank the DOE for support under contract DE-FG02-90ER45418, and CM also acknowledges support from the Conselho Nacional de Pesquisa (CNPq), Brazil.

[^1]: It is valid to rearrange previously existing arrows, provided this is done in such a way that all remain matched to some degree of freedom.
The Department of Health & Human Services (HHS) just issued proposed nondiscrimination rules that would be applied under the health reform law. After reading a press release titled something to the effect of “New ACA nondiscrimination rules issued,” we were reminded of the scene from Star Wars: A New Hope in which Obi-Wan Kenobi plays a Jedi mind trick on a stormtrooper and says, “These aren’t the droids you’re looking for.” Why, of all things, did that pop into our head? Because, in fact, these aren’t the nondiscrimination rules we’re looking for.

Still waiting …

As you’ll likely recall, when the ACA was passed, the feds said it would subject group health plans to nondiscrimination rules similar to those that currently apply to self-insured group health plans. The rules would prevent health plans from discriminating in favor of highly compensated employees by offering them benefits not open to their lesser-paid counterparts. The problem is, the feds said the rules wouldn’t apply until official guidance had been released about them. So the feds kept employers waiting and searching for the guidance. It was then expected to finally be released in 2014, but it was delayed due to some lingering questions IRS officials had. And so we waited. Then, behold, nearly two years later: “New ACA nondiscrimination rules issued.” But much like the lowly stormtrooper, we’ve been tricked.

What’s in the new rules

In a nutshell, the new HHS proposed rules look to snuff out all forms of race, sex, color, national origin, age and/or disability discrimination in the health insurance marketplace — a noble endeavor, but not the one we were looking for. While some of these forms of discrimination had already been banned under the ACA, the new rules further clarify and strengthen protections for individuals. For example, the proposed rule establishes that the prohibition on sex discrimination includes discrimination based on gender identity. Discrimination on the basis of sexual orientation would also be barred. This piggybacks on other federal rulemaking that made it illegal for federal contractors to discriminate against individuals based on sexual orientation or gender identity. The proposed rules would apply to health insurance marketplaces, any health program that HHS administers, and any health program or activity receiving funding from HHS. The rules’ protections would also be extended to individuals enrolled in plans offered by insurers participating in the health insurance exchanges. In other words, if your health insurer offers a plan on the exchanges, all of its plans are barred from discriminatory benefit designs or marketing practices. HHS Secretary Sylvia M. Burwell said in a news release, “This proposed rule is an important step to strengthen protections for people who have often been subject to discrimination in our health care system. This is another example of this administration’s commitment to giving every American access to the health care they deserve.” As for the other droids — ahem, nondiscrimination rules — we’re still waiting.
https://www.hrmorning.com/articles/new-aca-nondiscrimination-rules-but-these-arent-the-rules-youre-looking-for/
Tianna Hawkins crowned inaugural AU Hoops Champion

Entering Week 3 of Athletes Unlimited Basketball, Tianna Hawkins took the No. 1 spot on the leaderboard. She never looked back. Hawkins finished the season with 6,836 points, becoming the inaugural Hoops Champion on Saturday evening, capping her season with a thrilling triple-overtime victory – the first ever in league history – as her team defeated Team Cloud, 116-11, in a battle to the finish. Hawkins posted 35 points and 18 boards. Over the five weeks, she cemented herself in the Athletes Unlimited history books in more ways than one. She led the league in rebounds (166), ranked second in points (357), and owns single-game records for points (43), leaderboard points (790) and field goals (19). She also recorded a league-high 11 double-doubles. She tallied 500+ leaderboard points in seven games this season, more than any other player in the league. The three-time captain of the gold squad led her teams to a 7-2 record. She led the league in stat points (3,336) and MVP points (750), earning MVP 1 honors six times, MVP 2 honors twice and MVP 3 honors three times. “It was hard, but I’m not here without my great teammates… I wouldn’t be here without them so credit to them,” Hawkins said. “Aye, aye the Champ is here. I’m excited, I say that humbly, but look, aye, I’m the first one in the book so I go down in history, so I’m excited about that … I was the first in D.C. now I’m the first in AU. It don’t get no better than this.” Natasha Cloud finished the season ranked No. 2 on the leaderboard with 5,919 points overall – 917 points behind Hawkins. Cloud, who was an advocate for Hoops to be brought into the Athletes Unlimited network, recorded league single-game records in 3-pointers (6), free throws (16) and assists (15). She also holds the league record in assists (133) and registered the first-ever triple-double in league history with 17 points, 10 rebounds and 10 assists on Feb. 12. It was also the first triple-double of Cloud’s career. Securing the No. 3 spot with 5,373 points after Saturday’s finale is Isabelle Harrison. Despite her team’s loss earlier in the evening, Harrison added 254 leaderboard points as her team picked up two quarter wins in a 103-90 loss to Team Brown. She was a four-time captain, joining Cloud as the only players this season to serve as captain during four of the five weeks, and netted 275 points and 45 assists over the five weeks. Harrison was also selected as the Defensive Player of the Year by the players, facilitators and members of the Unlimited Club. Harrison finished the season with 109 rebounds, 31 steals and 11 blocks. She will receive a medal and a $5,000 bonus. Lexie Brown, who captained the team that defeated Harrison, rounded out the top four on the leaderboard with 5,317 points. Brown recorded 210 points, 70 assists and 63 rebounds. She ranked second in steals, grabbing 38 from her opponents. Brown, Harrison and Cloud are the only three players to have remained in the top five spots of the leaderboard since the season began. On Friday afternoon, Harrison and Brown were announced as part of the 2022 All-Defensive Team. In addition to Harrison, center Nikki Greene, forward Lauren Manis and guard DiJonai Carrington were also named to the team. Greene ranked third in blocks (15), fourth in steals (27) and added 83 rebounds. Carrington led the league in steals with 41 and ranked second in rebounds with 123.
New to Athletes Unlimited for the Hoops season is the Sportsmanship Award, given to an individual player who exemplifies the attributes of good character, integrity and sportsmanship. Receiving the 2022 Sportsmanship Award, as voted on by players and facilitators, was CC Andrews. When Andrews isn’t on the court, she can always be heard on the sidelines hyping up her team. “We heard y'all need an energy boost for tonight's games 😏 What better way to get hype than a @c_andrews21 mic'd up? 🗣 #AUHoops x @HighlightHER” — Athletes Unlimited (@AUProSports), February 19, 2022. After a successful first season featuring 44 of the best basketball players in the world, Athletes Unlimited Basketball will return for Season 2 in 2023. “We really did that. We all made this happen. From our co-founders, to our staff, to our facilitators, who were amazing for us, to the fans. I really don’t think we could have envisioned this being any better than it was,” Colson said. “So many players came here and rediscovered their love for the game. They showcased their talent and had opportunities to receive more basketball opportunities.”
https://auprosports.com/read/tianna-hawkins-crowned-inaugural-au-hoops-champion/
Woman calls police to complain about quality of crystal meth

An Oklahoma woman didn’t hesitate to pick up the phone to complain when a product she bought did not meet industry standards. But perhaps she should have taken a moment to think it through, since the number she called was 911 and the product was crystal meth. According to police officers in Enid, 54-year-old Lynette Rae Sampson was the model of politeness as she described how the ‘ice’ she had purchased did not meet the purity standards she was used to. Officer Aaron Barber was greeted warmly when he arrived at Ms Sampson’s home. ‘I’m glad you came,’ she told the officer, before leading him to the kitchen where the crystal meth was stored in a tin on the counter. The woman was arrested. ‘Once you think you’ve seen it all, something new will surprise you,’ said Captain Jack Morris of Enid Police. ‘It’s sad people who utilise these drugs don’t realise how it affects them and what they can do to you.’ Ms Sampson now faces up to 10 years in jail for possession of the drug and paraphernalia.
http://metro.co.uk/2014/08/01/woman-calls-police-to-complain-about-quality-of-crystal-meth-4818595/