This Thursday, my latest BBC Radio 4 documentary, Can Computers Write Shakespeare?, is broadcast. The programme asks whether computers can ever be truly creative, using sculpture, music and poetry as examples. As a teenager, I wrote a computer program that composed ragtime music using simple probability tables, i.e. if the current note is an A, what is the probability that the next note is B, C, D, etc.? The notes were selected by rolling a die. I then superimposed the structure and rhythms of ragtime onto these melodies. This produced music that had the jaunty lilt of ragtime, but it wasn't something you'd listen to for very long because it didn't seem to be going anywhere. The Radio 4 documentary focuses on a much better computer composing system, IAMUS ('other computer composers are available'). The video below shows the world premiere of Adsum, a piece entirely composed and orchestrated by the machine. As you can hear, IAMUS usually creates modern classical music. It does this by mimicking the processes of Darwinian evolution. IAMUS started with a very simple population of musical genomes that were just a handful of notes lasting a few seconds. Through a process of breeding and mutation, IAMUS has produced new compositions that are longer and more elaborate. The computer is given very few guidelines beyond ensuring the notes remain within the range of the musical instruments. It is like watching a student composer develop their compositional style, except that the computer works on its own, without human input for musical ideas. One of the most fascinating things I learnt while making this documentary was that many of those working in computational creativity are not that interested in the Turing Test: they are not interested in testing whether a computer algorithm can create art that passes as human-generated. So, when we got experts to critique the music and poetry, they were told from the outset that it was computer generated.
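The probability-table approach described above is, in effect, a first-order Markov chain. Here is a rough sketch of the idea; the note names and transition probabilities are invented for illustration, not taken from the actual teenage program:

```python
import random

# Hypothetical first-order transition table: for each current note,
# the probability of each possible next note (each row sums to 1).
TRANSITIONS = {
    "A": {"A": 0.1, "B": 0.4, "C": 0.3, "D": 0.2},
    "B": {"A": 0.3, "B": 0.1, "C": 0.4, "D": 0.2},
    "C": {"A": 0.25, "B": 0.25, "C": 0.2, "D": 0.3},
    "D": {"A": 0.4, "B": 0.2, "C": 0.3, "D": 0.1},
}

def compose(start="A", length=16, seed=None):
    """Walk the probability table, choosing each next note by weighted
    chance -- the software equivalent of rolling a die."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        probs = TRANSITIONS[melody[-1]]
        notes, weights = zip(*probs.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(compose(seed=42))
```

Melodies generated this way are locally plausible but have no long-range structure, which is exactly why they never seem to be going anywhere.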
The simple act of telling them that the music or poetry was written by a computer changed how they perceived it, and part of that prejudice appears to be unconscious. When Steinbeis and Koelsch compared which regions of the brain were stimulated by computer- and human-composed music in an fMRI scanner [doi: 10.1093/cercor/bhn110], they found that the regions of the brain associated with ascribing intention to others are less active with computer-composed music. Maybe this is an indication that, however good computers get at composition, they will always fall short of fulfilling the need for art to be about communication between humans. You can buy IAMUS from online music stores. Will you be buying it? Postscript: One thing that was lost on the cutting room floor was IBM's attempt to churn out new and innovative recipes using computers. Would you fancy a Swiss-Thai asparagus quiche?
http://trevorcox.me/can-computers-compose
By Kaitlynn Anderson, Staff Writer, Farms.com. Ontario producers are working to make their voices heard for the upcoming provincial election. Farm organizations, including the Ontario Federation of Agriculture (OFA) and the Christian Farmers Federation of Ontario (CFFO), are advocating on behalf of their members. The groups want to ensure the candidates understand the issues that the agri-food industry and rural communities face. This election season, economic development is a common theme. For example, OFA's Producing Prosperity campaign focuses on rural Ontario's "potential to drive economic growth, affordable housing opportunities, job creation, environmental sustainability and local food security," Keith Currie, president of the organization, said in Wednesday's release. The OFA plans to discuss with candidates how long-term investments in agriculture can lead to economic growth for the whole province, the release said. The CFFO has also reached out to provincial candidates "to remind them of the vital role that the agri-food sector plays in Ontario's economic achievement," a Friday release stated. Throughout these discussions, the organization will inform the parties about key concerns, such as labour policies and energy costs. In addition, the Council of Canadians, a not-for-profit organization that advocates for the country's citizens, recently surveyed the public to determine their election priorities. (For many years, the Council has worked with industry associations in Canada and abroad, Mark Calzavara, Ontario-Quebec regional organizer for the Council of Canadians, told Farms.com yesterday.) From April 21 to May 9, the organization asked rural and urban participants to select trade issues that will inform how they vote in the election.
In total, 71.5 per cent of respondents indicated that they want the government to "support Ontario farmers by protecting supply management in international trade agreements." The province should also protect Ontario's "buy local" rules in these trade agreements, 66.82 per cent of survey participants stated. After decades of examining trade deals, the Council and its supporters understand the impacts that government policies can have, Calzavara said. "Ontario should staunchly defend our farms, our policies that protect food production and the rights of sub-national governments to procure goods and services locally if they so choose," he said.
https://www.farms.com/ag-industry-news/making-rural-ontario-an-election-priority-967.aspx
Booth Id: EAEV055 Category: Earth and Environmental Sciences Year: 2020 Finalist Names: Michaluk, Sonja (School: Hopewell Valley Central High School) Abstract: Using macroinvertebrates for freshwater bioassessment was popularized by Hilsenhoff in 1977. They show cumulative effects of habitat alteration and pollutants that chemical testing and field sensors do not. Currently there are hundreds of bioassessment protocols in use globally; however, expert error rates as high as 65% have been observed at the genus and species levels. There is no standard freshwater bioassessment method, especially one that leverages the power of DNA barcoding. The World Economic Forum lists water scarcity as one of the greatest global risks of the coming decade. It is forecast that 66% of our population will experience water scarcity within a decade, leaving us more dependent on surface water for drinking. This requires more filtration infrastructure, and more bioassessment of surface water sources. DNA barcoding of Chironomidae, the most widespread macroinvertebrate family, may be a move toward a global bioassessment method. Bland-Altman statistical analyses were conducted to validate this method as a more accurate and precise measure of waterway health, adding significant value for monitoring scarce water resources. This project explored the optimal standard taxonomic level for waterway health assessment globally, as well as the statistical power at each taxonomic level. Taxonomic levels of identification were compared through phylogenetic tree analyses and an optimal level was determined. Statistical analysis was used to compare taxonomic levels: family, subfamily, genus, and species. The validated method was used to assess pollutants on a waterway used for municipal drinking water. Learnings from these data were applied to build a genetics lab at a scientific institute and to demonstrate the method's capability with samples gathered in the Arctic Circle.
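For readers unfamiliar with the Bland-Altman analysis mentioned above: it assesses agreement between two measurement methods via the mean of their paired differences (bias) and the 95% limits of agreement (bias ± 1.96 standard deviations). A minimal sketch, with invented index scores standing in for the project's actual data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for two measurement methods:
    mean difference (bias) and 95% limits of agreement (bias +/- 1.96 sd)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical waterway-health index scored at two taxonomic levels
# (e.g. genus-level vs species-level identification) on ten samples:
genus   = [3.2, 4.1, 2.8, 5.0, 3.9, 4.4, 2.5, 3.7, 4.8, 3.1]
species = [3.0, 4.3, 2.9, 4.8, 4.0, 4.2, 2.6, 3.6, 4.9, 3.3]
bias, (lo, hi) = bland_altman(genus, species)
print(f"bias={bias:.3f}, limits of agreement=({lo:.3f}, {hi:.3f})")
```

If the bias is near zero and the limits of agreement are narrow relative to the scale of the index, the two identification levels can be treated as interchangeable for assessment purposes.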
https://abstracts.societyforscience.org/Home/FullAbstract?ISEFYears=2020%2C&Category=Any%20Category&AllAbstracts=True&FairCountry=Any%20Country&FairState=Any%20State&ProjectId=19775
The bible has a lot to answer for, not least why all lists always result in top 10s. Anyway, I had wanted to add an instructional element to the site for some time, so tonight I'm kicking this off with a set of basics for mountain photography and filming. So this is my list of ten things that I believe can really make a difference to your outdoor and mountain photography. It's not meant to be exhaustive and I'm sure there are many things that could be added, but I wanted to have a simplified set of golden rules ( which I sometimes break ), which are easy to memorise and, hopefully, will be helpful to those wanting to get more out of their photography/cinematography when in the mountains. I've included links for those wanting to delve deeper into certain subjects ( and I encourage you to do so ), as I wanted to keep this as simplified as possible. This is not intended for seasoned shooters, but if you are one, and feel there is something that I should have included, then please drop me a line. I've included a few points that are practical rather than artistic, creative or theoretical. If you can't use your equipment then you won't be able to shoot; likewise, if you're comfortable, safe, well equipped and familiar with your kit, then you are free to concentrate on things such as composition. 1. You can't capture everything. Exposure and dynamic range The human eye is capable of seeing an incredible range of brightness and, within this range, still being able to see detail. The number of stops from light to dark that the human eye can see is usually greater than most digital or film cameras can manage ( though not always ). When shooting in the mountains on any given day, you are likely to encounter a large range, from the dark forest shadows to the sunlit snow fields on the summits.
This incredibly contrasty scene is what can make mountain shots so spectacular, but it can also confuse people when taking a photo of the scene, which when viewed later looks nothing like the scene you witnessed. The important part here is: what do you want to capture most of all? Is it more important to capture the details in the shade, or the details in the highlights? While by learning the basic rules of exposure you can go some way to covering both, it's really important to try and visualise scenes the way a camera will capture them. Learning to look at a scene and understand the camera's limits, in order to reframe or compose the shot, is all part of the learning process. This shot, whilst not a great shot, is a good example. Where I was standing, in front of the forest with a shaded, snowy field in front of me, the view was incredible. I knew that I couldn't capture both the foreground and the mountains beyond and still have everything perfectly exposed. So I re-framed to emphasise the fresh snow and sunlight on the mountains behind; the edge of the forest is near black, but works as a border to frame the scene. HDR photography does address this issue and has made it possible to capture scenes where even the human eye is unable to see it all. In brief, this involves taking several shots at different exposures, which are later composited together in software. 2. Give up trying to capture everything. Zooming and panning This applies particularly to filming and is probably the biggest hurdle that most people with a video or movie camera find they have to cross. We have all seen the footage of kids' plays or friends' holidays, only to feel completely seasick from the experience. It's tempting, when you can zoom and pan the camera around, to try to capture a big scene. Don't do it! Okay, unless it's for a particular artistic reason, just don't do it.
It's often better to convey a scene with a wide angle shot supplemented by small details of the scene, which when viewed later convey the message of the experience. A typical example would be a sunny day with a lot of activity, perhaps at the ski slopes. Instead of just waving the camera around to try to cover everything, look for the less obvious: maybe film or shoot the scene reflected in a pair of goggles. Finding these smaller details often helps to tell the story far more effectively than simply moving the camera around in an attempt to capture everything. These screw-on filters will probably be among the only filters that you actually need when shooting in the mountains. Their job is to cut out reflections, and they are often used when shooting near glass windows. Because there is usually a large amount of water vapour in the atmosphere, the filters come into their own when you would like to capture very well defined fluffy clouds, for example. As you point the camera further away from the sun, the effect of the filter on a clear blue sky becomes more pronounced. Circular polarisers work in relation to the angle from the light source; for example, if shooting straight into a glass window, they will have little or no effect, but if you move off to an angle to take the shot, the effect of glare being cut down becomes more obvious. Also remember that telephoto or long zooms don't really work well with CPs, so they are best used for wide and standard focal lengths. Like any piece of kit, try it as soon as you buy it. It doesn't take too long to see how they work. If one thing will improve landscape shots ( both moving and still ), it's the use of a good tripod or camera support. Get the best you can afford, period. Monopods are great for shooting sports and wildlife stills with longer lenses, but personally I find them almost useless for filming.
If weight is really a major concern then sometimes makeshift supports can get the job done: a rolled-up down jacket resting on a rock or a wall, for example. If you own a large camera backpack, these can also double as makeshift supports when shooting from a low level. This only really works with camera bags that are vertically rigid; by simply placing the camera or lens on top of the bag, it's possible to get a reasonably stable shot ( I use a Tamrac Expedition 7 to carry a full day's kit, but Lowepro and others produce similar bags ). For many types of shooting, lack of light is often a problem. In the mountains during the daytime, especially in the snow, the opposite is true. If you need to control depth of field, then using an ND filter is normally my first choice. If I want to shoot with the lens wide open, to separate the foreground from the background say, but want to shoot at a given shutter speed, the only way to really do this is with an ND filter. These can be bought as simple screw types or you can use a system like the Cokin P drop-in square filters. The Cokin system is pretty good, as you can use the same filters across different lenses and cameras, without buying separate filters for each system. All you need to replace is the thread adapter for your lens or camera. Graduated ND filters are also great additions to this set-up and help in situations where there is a very bright scene in the top of the frame, yet the foreground, in the lower part of the frame, is significantly darker. A set of graduated ND filters is a great investment when shooting in the mountains and doesn't cost a fortune. Snow is easy to photograph, if you don't try and overcomplicate it. Some people approach this aspect as some sort of dark art and, while in certain conditions it can be a little tricky, as a rule it couldn't be more straightforward. I will expand on this in another post, but for now I will link to this tutorial.
Simply put, snow is roughly 1-2 stops brighter than a medium tone. In other words, if you were to point your camera or light meter at the snow in order to take a reading, and subsequently take the shot, you would probably end up with a shot that is underexposed. The reason for this is that the camera doesn't know the snow is pure white ( well, actually it has a blue tone ) and it tries to expose the shot as if the snow were a medium tone. The answer is often to manually open up 1-1.5 stops from what the camera is telling you to do; you should then be in the right ballpark, exposure-wise. Taking the reading from the back of your hand ( if you have light-toned skin like me ) and then locking exposure should, in most daytime situations, give you a fairly good exposure. This is a quick fix, but it does often work. Less is more This is more of a personal preference than an absolute. Following on from what I said about temperature, what you are wearing and how you are equipped to deal with the elements is as important, if not more so, than what camera equipment you are carrying. There's no point in attempting to get a shot at the risk of not returning home. This might be an extreme example, but I don't think it is overly dramatic. On another level, what if you are so cold or uncomfortable that you cannot concentrate on the photography, or hold the camera steady for that matter? Personally, I would rather leave a very heavy lens at home, even if it gave me a stop or two of extra light, than freeze and dehydrate because I didn't have room for extra clothing and drink. Carry loads of them and make sure that you've tested all of your equipment together before packing for a shoot. Maybe this should be number 1 on the list! Enough said. 10. Study your subject, learn mountain crafts, study your subject. Last, but by no means least, is the importance of understanding and being able to work with your subject.
Whether you are filming friends riding at the snow park, shooting photos of flowers in Alpine meadows or shooting landscape time-lapses, it will be so much easier, and more fun, if you have a good understanding of where and what you are shooting. Study maps, go to photo exhibitions and look at other people's work, take note of weather patterns and speak to local people as much as possible about weather past and present. If shooting in the backcountry during winter, you really should study snow and avalanche safety. You may be working alone, but it doesn't mean that you won't come across a situation where you will need this knowledge. Carry a beeper, shovel and compass and learn how to use them. Above all, perhaps the most important aspect of mountain photography is learning as much as you can about the environment itself. This entry was posted in Cinematography, Photography, Technique, Uncategorized, Video.
https://chillfactorfilms.com/2009/01/14/technique-top-ten/
What Are Bridge Loans? Bridge loans are temporary loans that bridge the gap between the sales price of a new home and a home buyer's new mortgage, in the event the buyer's home has not yet sold. The bridge loan is secured by the buyer's existing home. The funds from the bridge loan are then used as a down payment on the move-up home. How Do Bridge Loans Work? Many lenders do not have set guidelines for FICO minimums or debt-to-income ratios. Funding is guided by a more "make sense" underwriting approach. The piece of the puzzle that requires guidelines is the long-term financing obtained on the new home. Some lenders who make conforming loans exclude the bridge loan payment for qualifying purposes. This means the borrower is qualified to buy the move-up home by adding together the existing loan payment, if any, on the buyer's existing home and the new mortgage payment of the move-up home. The reasons many lenders qualify the buyer on these two payments are: if the new home mortgage is a conforming loan, lenders have more leeway to accept a higher debt-to-income ratio by running the mortgage loan through an automated underwriting program; if the new home mortgage is a jumbo loan, most lenders will restrict the home buyer to a 50% debt-to-income ratio.
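To make the qualification arithmetic concrete, here is a sketch of the debt-to-income calculation. The income and payment figures are invented for illustration; the 50% jumbo cap comes from the text above:

```python
def debt_to_income(monthly_debts, gross_monthly_income):
    """Debt-to-income ratio as a percentage: total monthly debt
    payments divided by gross monthly income."""
    return 100 * sum(monthly_debts) / gross_monthly_income

# Hypothetical buyer: $10,000/month gross income.
existing_mortgage = 1800   # payment on the home that hasn't sold yet
new_mortgage = 2600        # payment on the move-up home
bridge_payment = 900       # payment on the bridge loan itself

# Lender counts both mortgage payments but excludes the bridge payment:
dti_excl = debt_to_income([existing_mortgage, new_mortgage], 10_000)
print(f"DTI excluding bridge payment: {dti_excl:.0f}%")   # 44%

# Counting the bridge payment too pushes the buyer over a 50% jumbo cap:
dti_incl = debt_to_income(
    [existing_mortgage, new_mortgage, bridge_payment], 10_000)
print(f"DTI including bridge payment: {dti_incl:.0f}%")   # 53%
```

This is why excluding the bridge loan payment for qualifying purposes can be the difference between approval and denial on the new mortgage.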
http://mcmf.net/MCMF_BuyHouseLink12.html
Sometime last year, I had the rare privilege, as the Zonal Commanding Officer of Zone 7, comprising the Federal Capital Territory Abuja and Niger State, to be conscripted by my boss, Dr Boboye Oyeyemi, the Corps Marshal, into the Federal Road Safety team at a Public Hearing sitting at the National Assembly, presided over by the House of Representatives Committee on Road Safety. The Minister of Power, Works and Housing, Mr Raji Fashola, was also invited to the hearing. During his presentation, the Minister, who as Governor of Lagos State demonstrated his disdain for traffic infractions, proposed communal service as punishment for traffic infractions. His concern was that traffic fines are yet to drive home the needed change. He therefore reasoned that communal or community service, where traffic offenders are compelled to undertake tasks such as cleaning drainages or sweeping public places, will go a long way in deterring people from irresponsible driving. His proposal sounded different from the usual position people like me canvass, which is always for increased fines. This position is similar to the line countries like the United Kingdom always toe. Just last December, the Transport Minister, Andrew Jones, during a gathering of Parliamentarians on road safety in London, said a new fine regime was being introduced to curb driving-while-phoning violations. While I still hold rigidly to my view that increased fines will do the magic, just as Lagos has done, I have however had cause to rethink his stand, pondering the novelty of such punishments. Will it bring about the needed change, and at which driving offences should it be targeted: the trivial, the severe, or all? For the record, I must state here that in other climes, everything from driving without a seatbelt to drunk driving can result in penalty points on your licence, a hefty fine or even jail time.
Whether it's a speeding ticket through the letterbox or a court summons for something more serious, this explains why I am still marvelling at the Lagos State novelty, especially the jail term angle. The reason for my excitement is very simple: human beings are almost the same the world over. The same applies to drivers, irrespective of the colour of their skin, their sex or even their age. What deters the average driver in developed climes is the fact that infractions are not treated with kid gloves, as you can end up in jail or lose your right to drive based on what is regarded as cumulative penalty points. It is therefore key to understand the penalties for driving offences in order to promote safer driving. The National Road Traffic Regulations contain all these traffic violations, meant to deter drivers from committing offences. This write-up is therefore a must-read for every licensed driver. Although the penalty point system is not fully implemented as is obtainable in countries such as the United Kingdom, I will start with definitions of some basic terms. First is the penalty point, which refers to points allotted to traffic offences, accumulated in the driver's record. If a driver receives the statutory maximum number of points, the driver shall be warned and/or have his licence suspended or withdrawn. FINE: Payment of a sum of money made to satisfy a claim of an offence committed, as a penalty. WARNING: A notification issued to a traffic offender who has accumulated 10-14 penalty points. SUSPENSION: The temporary removal or interruption of the authority or right to drive a vehicle or ride a motorcycle/tricycle, as a punishment for a period of time, having accumulated 15-20 penalty points. WITHDRAWAL: The act or condition of taking away the authority, or the denial of the right, to drive a motor vehicle or ride a motorcycle/tricycle on Nigerian roads, having accumulated 21 or more penalty points.
The second definition concerns the USE OF THE NOTICE OF OFFENCE SHEET. The Notice of Offence sheet is issued by a Road Marshal to a traffic offender who has violated any of the Road Traffic Laws and Regulations. It is a legal document and as such should be properly understood and filled in, as it may be tendered in a law court for prosecution purposes. Having defined these terms as a guide, let me now focus on the Notice of Offence, which contains thirty-seven specific traffic offences. This number is not restrictive, as it can either increase or decrease depending on what the traffic regulations provide at any given time. It is your responsibility to know and understand these offences and to strive daily not to run foul of any. In dealing with the offences, you will get to know the offences, their categories, definitions, penalties and penalty points. Since specific infractions have become the norm, I think it will be appropriate to look at offences such as speeding, use of a phone while driving, seat belt related offences, driver's licence and vehicle paper offences, in addition to assaulting marshals on duty, among others.
https://www.thisdaylive.com/index.php/2017/01/21/driving-offences-and-penalty-fines/
© 2015 Elsevier B.V. Mediterranean rivers are probably among the most singular and endangered ecosystems worldwide due to the presence of many endemic species and a long history of anthropogenic impacts. Besides a conservation value per se, biodiversity is related to the services that ecosystems provide to society and the ability of these to cope with stressors, including climate change. Using macroinvertebrates and fish as sentinel organisms, this overview presents a synthesis of the state of the art in the application of biomarkers (stress and enzymatic responses, endocrine disruptors, trophic tracers, energy and bile metabolites, genotoxic indicators, histopathological and behavioural alterations, and genetic and cutting-edge omic markers) to determine the causes and effects of anthropogenic stressors on the biodiversity of European Mediterranean rivers. We also discuss how a careful selection of sentinel species according to their ecological traits and the food-web structure of Mediterranean rivers could increase the ecological relevance of biomarker responses. Further, we provide suggestions to better harmonise ecological realism with experimental design in biomarker studies, including statistical analyses, which may also deliver a more comprehensible message to managers and policy makers. By keeping the health status of multiple-species populations in a community on the safe side, we advocate increasing the resilience of fluvial ecosystems to face present and forecasted stressors. In conclusion, this review provides evidence that multi-biomarker approaches detect early signs of impairment in populations, and supports their incorporation into the standardised procedures of the Water Framework Directive to better appraise the status of European water bodies.
https://portalrecerca.uab.cat/en/publications/ecological-relevance-of-biomarkers-in-monitoring-studies-of-macro
So you’re thinking about starting a family but feel like you already don’t have enough hours in the day. What can you do to prepare for this life transition? I’m glad you asked. As a time-management coach, I’ve found that wanting to start a family, or having a child and adjusting to that change, is a major reason that people reach out to me for help. Having a little person’s life to manage in addition to your own requires a serious leveling up of time-management skills. There’s no perfect way to prepare—surprises and adjustments are inevitable—but here are four strategies that can help, as you consider this life change: Clarify what you want Different individuals have massively different views of what they envision in terms of parenting. There’s not a right or wrong scenario as long as your children are safe and loved. But you need to figure out what works for you (and your partner, if applicable). For example, you’ll want to think about how much you’ll want—and need—to work: full-time, part-time, or potentially not at all outside of the home. You’ll also want to think about what you feel comfortable with in terms of travel, evening, or weekend activities. You’ll want to think about what kind of childcare situation makes you comfortable, and feels feasible, given your financial situation. Maybe even drill down to specifics like how often you want to eat dinner at home or participate in bedtime. It’s very possible that your answers to these questions may change, but it’s good to consider them in advance. Assess your circumstances Once you have clarity on what you want, then you’ll need to determine what possibilities exist within your current situation and which don’t. For example, if you have an hour or more commute, that may have an impact on how much you can participate in certain family activities. Or if you have a job that requires a high percentage of travel or always expects evening and weekend work, this will also limit family time. 
You may be okay with these limitations, or you may be able to find creative strategies to make things work. For example, if you have a long commute, you may leave really early so you can still be home for dinner, or you may figure out a way to have some work-from-home days. But if you take a good, hard look at your circumstances and realize they're incongruent with what you want for your family, you may want to consider changes like moving closer to the office or getting a different position. Remember: There's no perfect time to start a family, and you can make things work in almost any situation. But certain circumstances can make the situation easier or harder. Get your work in order Often, pre-baby individuals work until their work is done. But once you have a little person who you want to see—and need to pick up before daycare closes—getting out of the office on time becomes a much bigger deal. Also, obviously, babies take energy and tend not to sleep through the night. So you'll likely not have as much energy to do work at night as you might have had before. What this means for you is that having clarity on what needs to be accomplished, and organizing your time in such a way that you can get it done during the day, becomes essential. If you haven't done so already, start to keep a list of your tasks and projects, begin to plan your day, and then execute on those activities, preferably ahead of schedule. Staying late or trying to work in the evenings or on weekends can still happen. But it typically feels like it has a higher cost post-baby. Start practicing your new life Once you're better organized, start practicing your new way of life. Even before you have a baby, see if you can start to leave the office earlier. Challenge yourself by making a commitment to your spouse or a friend to meet at a certain time after work, or sign up for something like an exercise class after work.
This practice will help you see what it takes to leave work on time and to get everything done before then. If you're currently working on the weekends, experiment to see if you can reduce or even eliminate this work. This practice of containing your work will help you have more skills in place before your bundle of joy arrives, and will also give you an accurate assessment of what is possible within your current circumstances. You can't completely prepare for becoming a parent. Unexpected things will come up, and you will experience a big adjustment. But with these four strategies, you can be better equipped for the change.
https://www.fastcompany.com/90405903/prepping-for-the-ultimate-time-management-challenge-parenthood
Everyone has had dreams in their sleep. Through the Dream Interpretation Of Heart Surgery, God also gives instructions to his people. Dreams have a certain meaning, but we also have to remember our faith. Genesis 40:8: "We had dreams," they said to him, "but there is no one to interpret them." Then Joseph said to them, "Don't interpretations belong to God? Tell me your dreams." Dreams provide a valuable lesson for people who are able to interpret them as an announcement of justice from God. The Dream Interpretation Of Heart Surgery will provide an explanation regarding your life. This symbol gives understanding, to make you more aware of what you will face. Pray always to God and ask for protection. This will give you stronger faith to face challenges in the world. Dreams about surgery can be an uncomfortable picture for many people. Many people wake up from their sleep sweating and feeling anxious. This dream can be so scary that one wakes up terrified. There are many kinds of this symbol, and most of them are unpleasant. Dreams about surgery in a hospital seem very unpleasant. Such a dream usually represents a problem you need to eliminate because it has a terrible effect on you. This symbol indicates that there are some things or people in your life that you no longer need, or that you are overcoming an emotional block. That is why dreams like this can bring about significant changes. You must overcome the past and the problems that no longer provide growth and progress.
https://www.dreamjohn.com/justice/dream-interpretation-of-heart-surgery/
A new technique pioneered by scientists from the Plymouth Marine Laboratory in the UK is uncovering the secrets of life in our oceans. The Mesopelagic zone is the area between 100 and 1000 m deep in the oceans, and comprises one of the largest ecosystems on Earth. It was thought that the primary source of nutrients for this ecosystem was a ‘rain’ of sinking organic carbon and materials from the upper layers of the ocean. However, marine scientists realised that this source was not enough to fuel the enormous biomass of the Mesopelagic ecosystem. Using a combination of satellite images of ocean color and in-situ floats, researchers discovered a process of seasonal mixing that circulates organic matter from surface waters into the deeper realms of the Mesopelagic. Spring storms and wind mix surface waters with organic carbon and carry particles too small to sink, along with dissolved carbon, deeper into the ocean. In summer a mixed layer sits at the surface, trapping the deeply mixed carbon inside the Mesopelagic region. It was found that at high latitudes this process supplies an average of 23% of the flux of sinking food sources, although it can be greater than 100% in some instances. The research team estimates that globally this ‘seasonal mixed-layer pump’ moves around 300 million tonnes of carbon into the Mesopelagic zone each year. “Most methods for measuring carbon transport into the deep ocean have concentrated on the particles that sink at relatively fast rates, but have not measured how neutrally buoyant or slowly sinking organic particles are redistributed through the water column,” said lead researcher Giorgio Dall’Olmo. “Current global estimates of carbon export in the ocean are missing the contribution of the seasonal mixed-layer pump.” While it was previously known that variations in the mixed surface layer could provide organic carbon to the Mesopelagic, this is the first effort to estimate the total amount of organic carbon supplied in this way.
This provides quantification of a major additional flux of organic carbon to the Mesopelagic that was previously unaccounted for.
https://www.geographyrealm.com/satellites-delve-depths-one-earths-largest-ecosystems/
In this subsection, we evaluate the performance of information dissemination with the relevant factors of the proposed model. Effects of Spreading Parameter and Immune Parameter. We evaluate the effects of the spreading parameter and the immune parameter on information dissemination. Here, R(t) is used as the performance metric. In order to study the impact of the spreading parameter and the immune parameter, we ignore the self-immune parameter, such that all recovered nodes can have the information in this situation. Here, the spreading parameter β increases from 0 to 1, and four values of δ are randomly chosen for comparison: 0.1, 0.3, 0.7, and 1. Other settings are the same as those in Fig. 2.5. Figure 2.6 shows the final number of recovered nodes R versus different values of β and δ. From Fig. 2.6, it is observed that the information can still be transmitted even if β is very small. For example, when β = 0.1 and δ = 0.3, the number of recovered nodes R is close to 200. Another observation is that the final R increases as β increases, while the rate of increase gradually slows. Moreover, a larger immune parameter yields a smaller final R. A large spreading parameter β can promote information dissemination, while the immune parameter δ has a negative effect on it. Moreover, when δ is smaller, the network is more robust to β. For example, when δ = 0.1, the value of the final R remains stable when β > 0.6.
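As an illustration of the qualitative behavior described above, the sketch below simulates a simple discrete-time SIR-style spreading process on a random graph. The graph model, parameter values, and update order are illustrative assumptions, not the chapter's exact model; β plays the role of the spreading parameter and δ the immune parameter.

```python
import random

def simulate_sir(n=200, avg_degree=6, beta=0.5, delta=0.3, seeds=1, rng=None):
    """Discrete-time SIR-style spread on an Erdos-Renyi graph: each step,
    every infected node infects each susceptible neighbor with probability
    beta (spreading parameter), then recovers with probability delta
    (immune parameter). Returns the final number of recovered nodes R."""
    rng = rng or random.Random(42)
    p = avg_degree / (n - 1)              # edge probability
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    state = ['S'] * n
    for i in range(seeds):
        state[i] = 'I'
    while 'I' in state:
        infected = [i for i, s in enumerate(state) if s == 'I']
        newly = {j for i in infected for j in adj[i]
                 if state[j] == 'S' and rng.random() < beta}
        for i in infected:                # recovery after spreading
            if rng.random() < delta:
                state[i] = 'R'
        for j in newly:
            state[j] = 'I'
    return state.count('R')

# A large beta promotes dissemination; a large delta suppresses it.
print(simulate_sir(beta=0.7, delta=0.3))   # typically close to the full 200
```

Running the sketch over a grid of β and δ values reproduces the trends reported for Fig. 2.6: the final R grows with β, saturates for large β, and shrinks as δ grows.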
https://m.ebrary.net/35725/engineering/performance_dissemination
Fat or Bone? Apparently, Water Has The Answer! Adding or removing water from a stem cell can change the destiny of the cell, researchers have discovered in a new study published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). The research found that altering the volume of a cell changed its internal dynamics, including the rigidness of the matrix lining the outer surface. In stem cells, removing water condenses the cell, influencing the stem cells to become stiff pre-bone cells, while adding water causes the cells to swell, forming soft pre-fat cells. Researchers have long understood that stem cells are influenced by the cells around them, picking up cues on what their function should be based on the stiffness of the matrices of neighboring cells. The results, however, confirm that nature plays as much of a role as nurture in stem cell behavior and development. “The findings from this study add a fascinating new tool to our understanding and utilization of stem cell biology for regenerative medicine,” says Praveen Arany, DDS, PhD, co-author and assistant professor in the Department of Oral Biology in the University at Buffalo School of Dental Medicine. The study was led by Ming Guo, PhD, d’Arbeloff Assistant Professor in the Department of Mechanical Engineering at the Massachusetts Institute of Technology; and David Weitz, PhD, Mallinckrodt Professor of Physics and of Applied Physics in the John A. Paulson School of Engineering and Applied Sciences at Harvard University. “For the first time, we’re beginning to understand the importance of cell volume and cellular water content in the mechanical properties and physiological functions of cells,” says Guo, who began the research as a graduate student in Weitz’s lab at Harvard. The research originally sought to understand the effects of volume on a cell’s characteristics and functions. 
Cell volume is highly regulated and changes frequently over the course of a cell’s life, increasing as the cell grows and decreasing when it divides. These changes in volume are a result of variations in the amount of protein, DNA and other materials within the cell, though the cell’s density mostly remains constant. But cells can also experience rapid and extreme changes in size and density through the absorption or release of water, swelling or shrinking in as little as 20 minutes. By increasing or decreasing the volume of cells by 20 percent, the investigators found that the cells experienced several internal changes, including in gene expression and stiffness. Knowing the role cell stiffness plays in the development of stem cells, the researchers began to wonder if cell volume could affect their fate as well. To test the premise, investigators placed stem cells at their normal volume in a hardened hydrogel substrate to simulate the rigidness of bone cells. After one week, a large portion of the stem cells developed into pre-bone cells. The experiment was repeated with a softened hydrogel substrate. In the softer environment, there was a significant decrease in the number of stem cells that became pre-bone cells. However, when water was removed from the cells to decrease their volume by 20 percent, the number of stem cells that became pre-bone cells increased, despite being in the softer substrate. A similar experiment was conducted using glass. Researchers placed stem cells on glass to simulate a stiffer environment and found that few of the cells developed into pre-fat cells. It was not until the volume of the stem cells was increased by 20 percent that a spike in the formation of fat cells was found. The investigators discovered that changing the volume of the cells caused them to behave as if they were under environmental pressures. “The surprising thing about these experiments is the observation that volume seems to be related to so much about the cell.
It seems to dictate the cell stiffness as well as the cell fate,” says Weitz, also a core faculty member of the Wyss Institute for Biologically Inspired Engineering and director of the Materials Research Science and Engineering Center at Harvard. Future studies are needed to examine the effects of varied changes in volume, as well as if cell volume or external cues are the dominating factor in the fate of stem cells. Stem cells sit at the forefront of regenerative medicine, providing researchers and clinicians with the potential to repair or replace damaged tissue and organs. With the ability to develop into any type of specialized cell – from a muscle cell to a red blood or brain cell – stem cells hold the potential to treat various diseases and conditions, from heart disease to tooth loss. Bone marrow transplantation, one form of stem cell therapy, is already in widespread use. Stem cells may also aid in drug development and the understanding of how cancer and birth defects occur. Learning what causes differentiation among these cells will help researchers generate methods that influence their behavior and, ultimately, develop new therapies. Aside from physical cues such as cell stiffness or volume, stem cell differentiation can be influenced by a number of biological factors, pharmaceutical drugs or biophysical agents, such as light, ultrasound and radio frequencies. 
Other investigators on the study include Enhua Zhou, PhD, investigator at the Novartis Institutes of BioMedical Research; Dylan Burnette, PhD, assistant professor at Vanderbilt University; Mikkel Jensen, PhD, assistant professor at California State University, Sacramento; Adrian Pegoraro, PhD, research associate at the University of Ottawa; Karen Kasza, PhD, Clare Boothe Luce Assistant Professor at Columbia University; Angelo Mao, PhD, postdoctoral fellow at Harvard University; Yulong Han, PhD, research fellow at Harvard University; Jeffrey Moore, PhD, associate professor at University of Massachusetts at Lowell; Frederick Mackintosh, PhD, professor at Rice University; Jeffrey Fredberg, PhD, professor at Harvard University; David Mooney, PhD, Robert P. Pinkas Family Professor of Bioengineering at Harvard University; and Jennifer Lippincott-Schwartz, PhD, group leader at the Howard Hughes Medical Institute.
http://www.nikolateslafans.com/human/fat-or-bone-apparently-water-has-the-answer/
Q: Why is the application of an oracle function not a measurement? Why is the application of an oracle function not a measurement, causing the collapse of the system? How can you know the state of the system (the input of the oracle function) without measurement? A: An application of an oracle does not return a value; rather, it modifies the state of the system in a non-collapsing way. Oracles are a bit similar to controlled gates in this respect (in fact, a lot of oracles rely on controlled gates for their implementation). Consider, for example, the CNOT gate: it does not measure the control qubit and then apply an X gate to the target qubit based on the measurement result; rather, it is a unitary gate described by the matrix $$\mathrm{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$ Oracles are the same: they do not return any value; instead, they are unitary transformations which implement classical functions in the following way: define the effect of the oracle on all basis states, using the classical function it implements. This automatically defines the effect of the oracle on all superposition states, since the oracle is a quantum operation and has to be linear in the state on which it acts. The oracle will be applied to superposition states; this is what the "calculating the value of the function on all inputs at once" formulation you sometimes see refers to. CNOT is an example of an oracle which implements the classical function $f(x) = x$: you can check that its effect on basis states follows the rule $\mathrm{CNOT} |x \rangle |y \rangle = |x \rangle |y \oplus x \rangle = |x \rangle |y \oplus f(x) \rangle$, which is the definition of the oracle's effect.
The second part, about oracles being defined by their effect on basis states and extended by linearity, is implicit in a lot of sources I've seen, and is a frequent source of confusion: the definition of oracle effects on basis states makes it very tempting to try and measure the input state. If you need more mathematical details on this, we ended up writing it up here. There are also lots of questions about quantum oracles on this site; it is a bit weird that there is no "oracle" tag, but if you look for the Deutsch, Deutsch-Jozsa or Grover algorithms, a lot of the questions are about the oracles.
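To make the linearity point concrete, here is a small numerical sketch (plain Python/NumPy rather than any particular quantum SDK) that treats CNOT as the oracle for f(x) = x: it checks the defining rule on every basis state and then applies the same matrix, unchanged, to a superposition input.

```python
import numpy as np

# CNOT as a unitary oracle for the classical function f(x) = x:
# CNOT|x>|y> = |x>|y XOR f(x)>. No measurement, no collapse.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ket(bits):
    """Two-qubit basis state |b0 b1> as a length-4 vector."""
    v = np.zeros(4, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

# The oracle rule holds on every basis state...
for x in (0, 1):
    for y in (0, 1):
        assert np.allclose(CNOT @ ket(f"{x}{y}"), ket(f"{x}{y ^ x}"))

# ...and linearity alone fixes the action on superpositions:
# (|0> + |1>)/sqrt(2) (x) |0> maps to the Bell state (|00> + |11>)/sqrt(2).
plus_zero = (ket("00") + ket("10")) / np.sqrt(2)
bell = CNOT @ plus_zero
print(np.round(bell.real, 3))   # two equal amplitudes, on |00> and |11>
```

Note that no measurement appears anywhere: the output on the superposition is determined entirely by the matrix, which is why applying the oracle does not collapse the state.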
Skilled workers are a small minority of U.S. legal immigrants. Of the 1,062,040 legal immigrants in 2011, only 12% were skilled employment-based immigrants. About 40% of these skilled immigrants had advanced degrees, or 5 or more years of experience after a baccalaureate degree. It is also worth noting that the three other main immigrant-receiving countries - Australia, Canada and the United Kingdom - select between 62 and 72% of their immigrants based on education and skills. Highly skilled immigrants are future Americans, and they are directly tied to American competitiveness. The impact of these workers’ contributions to American competitiveness belies their small number. They add to the process of scientific discovery, technology development, and innovation, which in turn leads to greater productivity growth. Greater productivity growth improves the standard of living for the U.S. population as a whole. Skilled immigrants not only contribute to the innovation process themselves, they also help train our own future innovators, thus ensuring the competitiveness of future generations. Please see chapter 2 of the February 2006 Economic Report of the President to learn more about the role of skilled immigrants in the U.S. economy.
https://immigrationvoice.org/index.php?option=com_content&task=view&id=84&Itemid=1
If you have just graduated from high school or college and you are piecing together a resume for the first time, you may be wondering a lot about the structure of an effective, winning resume. Since you probably do not have much work experience to offer, at least in the professional arena, you’ll need to focus on the areas in which you do have experience and value to a potential employer: your education, skills, and the extracurricular activities you have participated in. While both education and skills have their own predefined sections in any resume format you may choose, the activities section is not often clearly addressed, or even mentioned, in relation to your resume presentation. So, how should you include them? This section deals with why your extracurricular activities are important to include and where to put them. Why They Are Important. Perhaps you were part of the high school math team, or maybe you were student council president; whatever your affiliations were, they are very important to an employer for two main reasons. First, having any extracurricular activities or clubs during high school or college shows that you took it upon yourself to join, as these are optional groups and clubs. This shows you as enthusiastic and a go-getter, two things any company could want in their potential employees. Secondly, having been part of a team or club shows that you have some general and/or specific transferable skills that can attest to your value to a company when a lack of career experience cannot. So, if you were on the math team, you have some serious analytical skills and you are good with numbers. Moreover, if you were student body president, you have some fantastic communication and leadership skills, which apply, again, to any company looking for a stellar employee.
When you have only part-time job experience, education, and these activities to speak for your candidacy, you have to use them to promote yourself as the best entry-level candidate for the position or company. How to Include Them. Chances are, if you are an entry-level candidate fresh out of school, you will be using a skills-oriented resume. These types of resumes downplay the fact that you have little or scattered career employment and focus instead on the skills you have, such as communication skills, teamwork skills, etc. This said, you have a couple of options. You can include a section after education for activities and extracurricular affiliations, listing all the teams or clubs you were part of and the responsibilities you gained from each. Or you can use these activities under the skills sections of your resume; for example, under communication skills, you could list having exemplary communication skills as gained by being student body president, or something to that effect. Some people choose to do both, to reiterate what they have to offer in qualifications and skills. It certainly would not hurt. In this kind of entry-level resume creation, it is almost always advisable to list anything and everything that points to valuable skills and qualifications, to downplay the minimal existence of real-world career experience.
https://www.greatsampleresume.com/blog/how-should-i-include-clubs-and-activities/
I recently received a call from one of our Surfer users who was trying to create a site suitability model for a new manufacturing development. The user needed to find areas within the proposed site where the slope was under 10 percent. The site suitability model required a specific slope to be respected; the areas under the threshold criterion of 10% would be considered potential locations within the site for the new development. A site suitability model can easily be developed in Surfer by creating a slope grid from a digital elevation model (DEM) for the area, masking the slope grid to the site boundary, and creating a contour map that highlights the areas that meet the 10%-or-under criterion. Since this is such an interesting workflow, I thought it would be a great topic to blog about so others in the Surfer community could benefit from seeing the approach.
To create the slope grid:
1. In Surfer, click Grid | Calculus.
2. In the Open Grid dialog, navigate to the GRD or DEM file and click Open.
3. In the Grid Calculus dialog, expand the Terrain Modeling section and select Terrain Slope.
4. Name the Output Grid File and click OK to create the slope grid.
To mask the slope grid to the site boundary:
1. Click Grid | Blank.
2. In the Open Grid dialog, navigate to the slope grid and click Open.
3. In the Open dialog, navigate to the BLN file of the area of interest and click Open.
4. In the Save Grid As dialog, name the grid and click Save.
Now that the slope grid has been blanked to the potential site boundary, a contour map can be created that highlights the areas within the site boundary that have a slope of 10% or less. This can be done by adding slope contours at 0% slope, 5% slope, and 10% slope.
1. Create a contour map: in the Open Grid dialog, navigate to the blanked slope grid and click Open.
2. In the Object Manager, click the Contours layer to select it.
3. In the Levels for Map dialog, delete all of the levels except 0, 5, and 10.
4. Assign these levels an appropriate color and fill pattern and click OK.
We now have a map of the locations within the site boundary that are less than 10% slope. In the map above, areas that are green are the most suitable for locating the new development and are under 5% slope. Areas that are highlighted in yellow are between 5% and 10% slope, which is still suitable for locating the development. All areas in red have a slope over 10% and are not good location candidates for the new development. The contours can also be exported from Surfer to be used in 3rd party mapping and CAD applications.
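For readers who want to reproduce the slope-thresholding step outside Surfer, here is a minimal Python/NumPy sketch. It is an illustration under my own assumptions, not Surfer's algorithm: percent slope is computed from a gridded DEM with central differences, and cells at or under the 10% criterion are masked as suitable. The toy DEM is hypothetical data.

```python
import numpy as np

def percent_slope(dem, cell_size):
    """Percent slope (100 * rise/run) of a DEM grid via central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return 100.0 * np.sqrt(dz_dx**2 + dz_dy**2)

# Toy DEM (hypothetical): a plane rising 5 m per 100 m cell,
# i.e. a uniform 5% slope everywhere.
cell = 100.0
dem = 5.0 * np.tile(np.arange(10.0), (10, 1))
slope = percent_slope(dem, cell)

# The site-suitability criterion: keep cells at or under 10% slope.
suitable = slope <= 10.0
print(suitable.all())   # True for this uniformly gentle surface
```

On real data, `suitable` would be a boolean mask of candidate locations, playing the same role as the green and yellow contour bands in the map above.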
https://www.goldensoftware.com/blog/calculate-slope-for-a-site-suitability-model
In multicarrier transmission schemes, a frequency band is divided into multiple narrow frequency bands (subcarriers) and separate signals are transmitted using the subcarriers. For example, in orthogonal frequency division multiple access (OFDMA), subcarriers are arranged such that they become orthogonal to each other to improve frequency efficiency and to achieve high-speed, high-volume communications. OFDMA makes it possible to effectively reduce inter-subcarrier interference. This in turn makes it possible to concurrently transmit signals using subcarriers and to increase the symbol length. Also with OFDMA, it is possible to effectively reduce multipath interference by using a relatively long guard interval. With multicarrier transmission schemes, however, fairly high peak power is instantaneously necessary for transmission because signals mapped to subcarriers overlap each other in the time domain. In other words, with multicarrier transmission schemes, the peak-to-average power ratio (PAPR) may become fairly high. This is not preferable, particularly for mobile terminals. Generally, single-carrier transmission schemes have an advantage in terms of reducing the PAPR. Particularly, single-carrier transmission schemes such as single-carrier frequency division multiple access (SC-FDMA) and discrete Fourier transform (DFT) spread OFDM also make it possible to efficiently use a wide frequency band. In SC-FDMA, a transmission signal is Fourier-transformed and mapped to subcarriers, and the mapped signal is inverse-Fourier-transformed and wirelessly transmitted. At the receiving end, a received signal is Fourier-transformed, signal components mapped to the subcarriers are extracted, and transmission symbols are estimated. Such single-carrier transmission schemes are preferable in terms of efficiently using a frequency band while reducing the PAPR.
Meanwhile, with single-carrier transmission schemes where subcarriers with a relatively large bandwidth are used, multipath interference tends to occur. Multipath interference increases as the transmission rate increases. For example, multipath interference becomes particularly prominent when the data modulation level is high or a MIMO multiplexing scheme is used. Increase in multipath interference in turn reduces the detection accuracy of signals at the receiving end. Let us assume that the number of transmitting antennas is N, the data modulation level is B (e.g., when 16 QAM is used, B=4), the expected number of multipaths is P, and a maximum likelihood detection (MLD) method is used for signal detection (for a QRM-MLD method, see, for example, K. J. Kim, et al., “Joint channel estimation and data detection algorithm for MIMO-OFDM systems”, Proc. 36th Asilomar Conference on Signals, Systems and Computers, November 2002). As described above, OFDMA makes it possible to effectively reduce inter-subcarrier interference and to sufficiently reduce multipath interference within a guard interval. Therefore, with OFDMA, the total number of symbol candidates that need to be examined at the receiving end is represented by the following formula: 2^(N×B). Meanwhile, with a single-carrier transmission scheme where multipath interference cannot be ignored, the total number of symbol candidates that need to be examined is represented by the following formula: 2^(N×B×P). Thus, with a single-carrier transmission scheme, the number of candidates increases exponentially according to the number of multipaths, and as a result, the computational complexity for signal detection increases. This in turn makes it difficult to employ an MLD method, which provides high detection accuracy but requires high computational complexity, together with a single-carrier MIMO transmission scheme.
Signal detection methods such as a zero forcing (ZF) method and a minimum mean squared error (MMSE) method require low computational complexity, but may reduce the signal detection accuracy. To achieve desired signal quality (desired SINR) when the signal detection accuracy at the receiving end is low, it is necessary to increase the transmission power of signals. However, since one purpose of employing a single-carrier transmission scheme is to reduce the PAPR and thereby to save battery energy, it is not preferable to increase the transmission power.
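The combinatorial point above can be checked with a few lines of code. The sketch below simply evaluates the candidate counts 2^(N×B) for OFDMA and 2^(N×B×P) for a single-carrier scheme with multipath; the example parameter values (2 transmit antennas, 16QAM, 3 paths) are illustrative assumptions.

```python
# Number of symbol candidates an MLD detector must examine:
#   OFDMA (multipath absorbed by the guard interval): 2**(N*B)
#   single carrier with P-path interference:          2**(N*B*P)
def mld_candidates(n_tx, bits_per_symbol, n_paths=1):
    return 2 ** (n_tx * bits_per_symbol * n_paths)

# Illustrative case: N=2 antennas, 16QAM (B=4), P=3 paths
print(mld_candidates(2, 4))      # OFDMA: 256 candidates
print(mld_candidates(2, 4, 3))   # single carrier: 16777216 candidates
```

The jump from 256 to over 16 million candidates for the same antenna and modulation configuration is exactly why exhaustive MLD becomes impractical for single-carrier MIMO, and why lower-complexity detectors such as ZF and MMSE are attractive despite their reduced accuracy.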
It may be hot and humid today, but fall is just around the corner, with winter close behind. Do you know what comes with winter? Would you believe the answer is heart disease? A study of over 100,000 people between the ages of 35 and 80, in seven countries, looked at several heart disease risk factors: - Waist circumference - Blood lipids (cholesterol) - Sugar levels - Blood pressure - Total body mass (BMI) What the researchers found was that in January and February the indicators for heart disease were at their peak. For example, waist circumference increased by a centimeter. The study doesn’t offer a firm explanation for the phenomenon, but the researchers put forward that people eat differently in the winter, when less fresh fruit and vegetables are available, and that cold weather can prevent regular outdoor exercise. What can you take from this study? Make a plan for a heart-healthy winter. Don’t fall into dietary ruts, and make arrangements for maintaining physical activity. If you need help, you can find more information on our Weight Wellness Program here. The Woman’s Clinic of Jackson and Madison has made a commitment to help our patients achieve their personal goals of being as healthy as possible.
https://twc-ms.com/make-a-plan-for-winter/
Notes: Studies have shown that stimulation to the parietal cortex can enhance mathematical abilities for up to 6 months. Material that was best committed to memory was that which was learned recently. Scientific Studies: - Modulating neuronal activity produces specific and long-lasting changes in numerical competence. - Transcranial direct current stimulation of the posterior parietal cortex modulates arithmetic learning News Articles:
https://totaltdcs.com/tdcs-montages/improved-mathematical-abilities/
Extraterritorial Perspectives: From Remedies in US Courts for Acts Occurring in Whole or in Part Outside the US to Obtaining Discovery from Abroad. October 17, 2018, 22nd Annual Eastern District Bench Bar Conference. May 4, 2020, WASHINGTON—For the third consecutive year, The National Law Journal has named Covington to its “Pro Bono Hot List.” The law firms appearing on the 2020 Pro Bono Hot List were recognized for taking on “some of the biggest issues of our time.” In its profile of the firm, The National Law Journal highlights Covington’s pro bono efforts in two significant matters. ... August 15, 2017, SILICON VALLEY—Covington secured a significant patent infringement victory on behalf of Elbit Systems Ltd. over its groundbreaking patent relating to high-speed satellite communications. A jury in the United States District Court for the Eastern District of Texas awarded Elbit Systems $21,075,000 in damages, exclusive of pre-judgment interest and post-verdict ... Covington Promotes 15 Lawyers to Partnership. WASHINGTON, DC, October 1, 2014 — Covington & Burling is pleased to announce that it has elected 15 lawyers to its partnership effective today. “Our new partners come from six firm offices and practice in a variety of areas that are of great importance to the firm’s clients,” said Timothy Hester, chair of Covington’s management committee. “These are all superb ...
https://sitecoreprod-519730-cd.azurewebsites.net/en/news-and-insights/insights/2018/10/extraterritorial-perspectives
As a devotee, I feel that I should have outgrown certain things and that I am right back where I started before I joined. Am I just some middle-aged poseur trying to regain her lost youth, or am I approaching some sort of personal growth? Answer: When we first take up spiritual life, we give up much of our former life because it was mundane and thus an impediment to our spiritual growth. We take on a new life. Later, when we are more mature and fixed in our spiritual identity, in a sense we have the luxury of revisiting our past and, more intelligently, more maturely, more wisely, seeing what was good and what was bad in it, and with full consciousness, choosing Krishna above everything. We also evaluate our past in terms of whether or not it can bear a favorable relation to Krishna. Krishna often gives devotees this opportunity not only to reject our old life in the enthusiasm of our discovery of Krishna, but also later in life to take another look, so that our renunciation is fully conscious. We may also see elements in our past that have some use in Krishna’s service.
http://hdgoswami.com/questions-and-answers/fully-conscious-renunciation/
A technique disclosed in this specification relates to vehicle grilles attached to vehicles. More particularly, it relates to a vehicle grille capable of achieving a decorative function that adds to the beauty of the vehicle, and a function of setting an airflow state or an airflow interruption state by a change of ambient temperature. With hybrid vehicles, which have become popular in recent years, fuel efficiency is said to deteriorate when overcooling occurs in winter, and as a preventive measure, it is necessary to prevent intrusion of cold air into the engine room. For example, an air conditioning device for an automobile is known as a technique for preventing the intrusion of cold air into the engine room, in which a communication passage for taking air into the engine room is opened and closed by a shutter, so as not to reduce the temperature of an outdoor heat exchanger within the engine room in a normal engine-equipped automobile when the outside air temperature is at or lower than a predetermined temperature (Japanese Patent Application Publication No. 2003-170733). However, with the air conditioning device for the automobile described in Japanese Patent Application Publication No. 2003-170733, the shutter for opening and closing the communication passage is arranged on a front side of the engine room and inside the vehicle, and therefore it does not contribute to the beauty of the vehicle. Exterior parts may be fixed to the vehicle in order to add to its beauty, but when both the air conditioning device of Japanese Patent Application Publication No. 2003-170733 and such exterior parts are fixed to the vehicle, the car weight increases accordingly, and fuel efficiency deteriorates.
Therefore, if such a vehicle grille is realized that adds to the beauty of the vehicle and that has the function of securing the temperature of the engine room when the outside air temperature is at or lower than the predetermined temperature, its commercial value can be increased. This specification discloses a vehicle grille capable of achieving a decorative function that adds to the beauty of the vehicle, and a function of setting an airflow state or an airflow interruption state according to ambient temperature. A vehicle grille as disclosed in this specification is attached to a vehicle, and is provided with a base panel member, a back panel member, a connecting member, and a moving mechanism. The base panel member is arranged on a front surface side of the vehicle, and is provided with an opening disposed to form a part of a design on the surface of the vehicle. The back panel member is arranged on a back surface side of the base panel member, and has a projection configured to be inserted into the opening of the base panel member so as to be in close contact with the opening. The connecting member connects the back panel member and the base panel member. The moving mechanism is configured to allow the back panel member to move relative to the base panel member, to a first position at which the projection is inserted into the opening, and to a second position at which the projection is separated from the opening. Here, the moving mechanism is provided with a biasing member that biases the back panel member, and a shape-memory member that produces a force against a biasing force of the biasing member by changing its shape according to ambient temperature. The connecting member is provided at a portion where a peripheral portion of the opening of the base panel member overlaps a peripheral portion of the projection of the back panel member.
The biasing member and the shape-memory member are attached to the connecting member, and are configured to allow the back panel member to move from the first position to the second position, or from the second position to the first position, against the biasing force of the biasing member, by the shape-memory member changing its shape according to a change in the ambient temperature. In a state where the back panel member has moved to the first position relative to the base panel member, the design on the surface of the vehicle is integrally formed by the combination of the base panel member and the projection of the back panel member. With this vehicle grille, it is possible to form a three-dimensional design, to improve the decorative function that adds to the beauty of the vehicle, and to inspire users' motivation to purchase, by combining the base panel member with the projection of the back panel member that is inserted into the opening of the base panel member. Further, when the design of the base panel member or the shape of the projection is diversified, the design of the vehicle grille formed by their combination is diversified greatly, so that the beauty of the vehicle gains variety and users' motivation to purchase can be inspired further. Furthermore, as the biasing member and the shape-memory member constituting the moving mechanism are attached to the connecting member, which is provided at the portion where the peripheral portion of the opening of the base panel member overlaps the peripheral portion of the projection of the back panel member, the moving mechanism can be downsized. As a result, the entire vehicle grille can be made smaller and occupies less space in the vehicle.
Further, the moving mechanism using the shape-memory member produces the force against the biasing force of the biasing member by the shape-memory member changing its shape according to the change in the ambient temperature, so as to switch between the state in which the projection is inserted into the opening and the state in which it is not. Thereby, an airflow state (the state where the airflow is permitted) and an airflow interruption state (the state where the airflow is prohibited) can be switched according to the ambient temperature. Therefore, the above-described vehicle grille provides greater functionality and increased commercial value. Notably, the biasing force of the biasing member may be set based on wind pressure applied to the base panel member or the back panel member when driving the vehicle. In that case, the airflow state or the airflow interruption state can be set according to the strength of the wind pressure, which provides still greater functionality to the vehicle grille. In addition, the moving mechanism may further include first holding means for holding the back panel member at the first position, or second holding means for holding the back panel member at the second position. With such a structure, it is possible to hold the vehicle grille stably in the airflow state or in the airflow interruption state. Notably, this specification also discloses a novel airflow interruption device that allows switching between the airflow state and the airflow interruption state according to the ambient temperature.
Namely, the airflow interruption device disclosed in this specification is provided with a base panel member that is provided with an opening; a back panel member that is arranged on a back surface side of the base panel member and has a projection configured to be inserted into the opening of the base panel member so as to be in close contact with the opening; a connecting member for connecting the back panel member and the base panel member; and a moving mechanism that is configured to allow the back panel member to move relative to the base panel member, to a first position at which the projection is inserted into the opening and to a second position at which the projection is separated from the opening. The moving mechanism is provided with a biasing member that biases the back panel member, and a shape-memory member that produces a force against a biasing force of the biasing member by changing its shape according to the ambient temperature. The connecting member is provided at a portion where a peripheral portion of the opening of the base panel member overlaps a peripheral portion of the projection of the back panel member. The biasing member and the shape-memory member are attached to the connecting member, and are configured to allow the back panel member to move from the first position to the second position, or from the second position to the first position, against the biasing force of the biasing member, by the shape-memory member changing its shape according to a change in the ambient temperature. This device can preferably be used as the vehicle grille (a vehicle component), and also as a component other than the vehicle grille.
BRIEF DESCRIPTION OF DRAWINGS
Fig. 1 is a schematic view of a vehicle according to this embodiment;
Fig. 2 is an enlarged front view of a front grille according to this embodiment;
Fig. 3 is an enlarged side view of the front grille according to this embodiment;
Fig. 4 includes views illustrating the front grille, where Fig. 4A illustrates a state where a back panel member is separated from a base panel member, and Fig. 4B illustrates a state where the base panel member and the back panel member are in close contact with each other and projections are inserted into openings;
Fig. 5 includes views illustrating a vehicle front, where Fig. 5A illustrates a state where the front grille is attached, and Fig. 5B illustrates a state where the front grille is not attached;
Fig. 6 includes partially enlarged views of the front grille according to this embodiment, where Fig. 6A illustrates a state where the projection is inserted into the opening, and Fig. 6B illustrates a state where the projection is separated from the opening;
Fig. 7 includes views illustrating an air flow through the front grille according to this embodiment, where Fig. 7A illustrates an airflow state, and Fig. 7B illustrates an airflow interruption state;
Fig. 8 includes enlarged front views of front grilles according to other embodiments, where Fig. 8A illustrates a first modification example, Fig. 8B illustrates a second modification example, and Fig. 8C illustrates a third modification example;
Fig. 9 is a view illustrating another embodiment (corresponding to Fig. 6);
Fig. 10 is a view illustrating another embodiment (corresponding to Fig. 6); and
Fig. 11 is a view illustrating another embodiment (corresponding to Fig. 6).
DESCRIPTION OF EMBODIMENTS
First embodiment
Hereinafter, a vehicle grille according to this embodiment will be explained. In Fig. 1, a front grille 20 for a vehicle (hereinafter simply referred to as a vehicle grille 20) is attached to a front 16 of a vehicle 10. An engine room 12 is provided at the front center of the vehicle 10, and a communication passage 14 is provided on the front (vehicle grille 20) side of the engine room 12. Notably, a reference number 18 denotes a wheel.
Automobiles as the vehicles to which the vehicle grille 20 is attached may include a hybrid vehicle and a general automobile that is not equipped with a traction motor, for example, but the present invention may be applied to other vehicles. As illustrated in Fig. 2 and Fig. 3, the vehicle grille 20 is provided with a base panel member 22 that is arranged on a front surface side of the vehicle front 16. The base panel member 22 is exposed on the front surface side that is visible from outside the vehicle 10, and includes a plurality of openings 24 arranged in a scattered pattern, so as to constitute a part of a design (pattern, decoration and the like) on the surface of the vehicle. In addition, the vehicle grille 20 is provided with a back panel member 26 that is arranged on the back surface side of the base panel member 22. The back panel member 26 includes a plurality of projections 28 that can be fittingly inserted into (in close contact with) the openings 24 of the base panel member 22. Fig. 4 includes perspective views of the vehicle grille 20, where Fig. 4A illustrates a state where the back panel member 26 is separated from the base panel member 22, and Fig. 4B illustrates a state where the projections 28 are inserted into the openings 24. As illustrated in Fig. 2 and Fig. 4, two opening groups (24, ..., 24) that are arranged in a longitudinal direction (x-axis direction) with a space therebetween are formed in the base panel member 22. One opening group (24, ..., 24) is formed by the five openings 24 (the left side in Fig. 2 (the lower side in Fig. 4)), and the other opening group (24, ..., 24) is formed by the six openings 24 (the right side in Fig. 2 (the upper side in Fig. 4)). The openings 24 of one of the opening groups (24, ..., 24) are arranged so that their positions in the x-axis direction alternate with those of the openings 24 of the other opening group (24, ..., 24).
Namely, the openings 24 in one opening group (24, ..., 24) are arranged between the neighboring openings 24 in the other opening group (24, ..., 24). The back panel member 26 is formed by a lower holding plate that corresponds to the lower opening group (24, ..., 24) of the base panel member 22 and an upper holding plate that corresponds to the upper opening group (24, ..., 24) of the base panel member 22 (refer to Fig. 4A). The lower holding plate holds the five projections 28 that line up in the longitudinal direction. The upper holding plate holds the six projections 28 that line up in the longitudinal direction. Thus, when the lower holding plate and the upper holding plate are adhered tightly to the back surface of the base panel member 22, the projections 28 are inserted into the openings 24, and peripheries of the openings 24 fit (are in close contact with) peripheries of the projections 28. It should be noted that, as illustrated in Fig. 2, each of the openings 24 has a substantially oval shape whose edges at both ends in the x-axis direction (longitudinal direction) have acute angles. As illustrated in Fig. 2 and Fig. 3, each of the projections 28 has a ridgeline extending in the x-axis direction (longitudinal direction), has a substantially oval shape whose edges at both ends in the x-axis direction (longitudinal direction) have acute angles in planar view, and has a protruding shape projecting toward the front of the vehicle. The projection 28 has such a shape as to allow air from the front to flow smoothly along the ridgeline and its outer shape to the surroundings (refer to Fig. 7B). Namely, when the projections 28 are inserted into the openings 24, the air from the front is interrupted by the base panel member 22 and the projections 28, and an air flow that passes through the openings 24 to the inside of the engine room 12 is interrupted (refer to Fig. 7B).
Meanwhile, when the projections 28 are separated from the openings 24, an airflow state is created in which the air passes through the openings 24 and flows into the engine room 12 (refer to Fig. 7A). As described above, the openings 24 arranged in the scattered pattern in the base panel member 22 and the streamlined shape in the vicinity of the openings 24 (refer to Fig. 3 and Fig. 4A) have a novel design, and form a three-dimensional novel design (pattern and decoration), together with the shape of the projections 28 projecting in the direction orthogonal to the base panel member 22. By combining the base panel member 22 and the projections 28 like this, it is possible to form a geometrical design (pattern and decoration) at the front 16 of the vehicle 10. Comparison between the state after the front grille 20 is attached as illustrated in Fig. 5A and the state before it is attached as illustrated in Fig. 5B clarifies the difference in beauty (the effect of the vehicle grille 20) added to the front 16 as illustrated in Fig. 5A. Next, a moving mechanism 36 that allows the back panel member 26 to move relative to the base panel member 22 will be explained with reference to Fig. 6. The moving mechanism 36 allows movement to a first position (forward movement) at which the projections 28 are inserted into the openings 24, and movement to a second position (backward movement) at which the projections 28 in the insertion state are separated from the openings 24. Here, Fig. 6A illustrates the state where each of the projections 28 is inserted into each of the openings 24 (insertion state), and Fig. 6B illustrates the state where each of the projections 28 is separated from each of the openings 24 (separation state). As illustrated in Figs. 6A and 6B, a peripheral portion of each of the openings 24 of the base panel member 22 overlaps a peripheral portion of each of the projections 28 of the back panel member 26.
At the portion where the peripheral portion of the opening 24 and the peripheral portion of the projection 28 overlap one another, a pair (right and left) of connecting shafts 34 (functioning as connecting members and simply referred to as shafts below) is attached. Each of the shafts 34 penetrates through the base panel member 22 and the back panel member 26. A spring 38 as a biasing member is attached around the shaft 34, and the spring 38 biases the projection 28 against the opening 24. Namely, one end of the spring 38 in an extended state is fixed to the peripheral portion of the opening 24, and the other end is fixed to the peripheral portion of the projection 28. Thereby, the peripheral portion of the opening 24 and the peripheral portion of the projection 28 are connected so as to be attracted to each other. In addition, a shape-memory member 40 for adding a force against a biasing force (tensile force) of the spring 38 is attached to the outer periphery of the spring 38. According to this embodiment, the shape-memory member 40 is formed by a shape-memory alloy whose shape changes with a change in ambient temperature. At the time of attaching the spring 38 and the shape-memory member 40 to the shaft 34, the spring 38 is first attached to the outer periphery of the shaft 34, and then the shape-memory member 40 is attached to the outer periphery of the spring 38. Next, the shaft 34 is allowed to penetrate through the base panel member 22 and the back panel member 26 so that the spring 38 and the shape-memory member 40 lie between the base panel member 22 and the back panel member 26, and lastly, both ends of the shaft 34 are secured to the base panel member 22 and the back panel member 26 by fixtures 30 and 32 (by nuts and the like, for example).
When the shape of the shape-memory member 40 changes and extends with a rise in the ambient temperature (when the ambient temperature rises to 25°C or higher, for example) under the insertion state in which the projection 28 is inserted into the opening 24, the shape-memory member 40 produces the force against the tensile force of the spring 38. Thereby, the back panel member 26 moves (makes the backward movement) so that the projection 28 is separated from the opening 24. On the contrary, when the shape-memory member 40 returns to its original shape with a fall in the ambient temperature (when the ambient temperature falls to 5°C or lower, for example), the force produced by the shape-memory member 40 becomes smaller than the tensile force of the spring 38. Thereby, the back panel member 26 moves (makes the forward movement) so that the projection 28 is inserted into the opening 24, creating the state where the projection 28 is fully inserted into the opening 24. As a result, the opening 24 is closed by the projection 28, and an air inflow and outflow between the engine room 12 and the outside of the vehicle 10 are prohibited. Therefore, the airflow state in which the projection 28 uncovers the opening 24 and the airflow interruption state in which the projection 28 closes the opening 24 are switched when the shape-memory member 40 changes its shape according to the ambient temperature, against the tensile force of the spring 38. Here, a mold model formed by the base panel member 22 and the back panel member 26 (refer to Fig. 4), the shafts (connecting members) 34 and the moving mechanism 36 (refer to Fig. 6) constitute an airflow interruption device that switches between the airflow state and the airflow interruption state.
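The switching behaviour described above (the spring pulls the projection closed; the shape-memory member overpowers it once the temperature rises past its transformation point) can be sketched as a simple force balance. This is a minimal illustrative model only: the linear austenite force law, the transformation temperature of 15°C, and all force values are assumptions for illustration, not figures from the specification.

```python
# Illustrative force-balance model of the grille switching: the projection
# stays inserted (grille closed) while the spring's tensile force exceeds
# the shape-memory member's force, and retracts (grille open) once the
# shape-memory member, transformed by heat, produces the larger force.
# All numbers and the linear force model are assumptions.

SPRING_FORCE_N = 10.0        # assumed constant tensile force pulling the projection closed

def sma_force(temp_c: float) -> float:
    """Assumed force of the shape-memory member as a function of temperature."""
    transformation_c = 15.0  # assumed transformation temperature of the alloy
    if temp_c <= transformation_c:
        return 2.0           # below transformation: weak, so the spring wins
    # above transformation: force rises as the member extends (assumed linear)
    return 2.0 + 1.5 * (temp_c - transformation_c)

def grille_state(temp_c: float) -> str:
    """'open' (airflow state) when the shape-memory force beats the spring."""
    return "open" if sma_force(temp_c) > SPRING_FORCE_N else "closed"

for t in (0, 15, 20, 25):
    print(t, grille_state(t))
```

Under these assumed values the grille stays closed up to about 20°C and opens above that, mirroring the cold-closed / warm-open behaviour of the embodiment.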
Notably, the climate (environmental temperature) may change depending on the area where the vehicle grille 20 is used, and therefore, it is preferable to adjust the transformation point (set temperature characteristic) of the shape-memory member 40 used in the vehicle grille 20 for each area with a different climate. With the above-described moving mechanism 36, a right and left pair of the springs 38 is disposed at the peripheral portions of the opening 24 and the projection 28, and a right and left pair of the shape-memory members 40 that produce the force against the tensile force of the springs 38 is disposed in the vicinity of the springs 38. Because the movement of the back panel member 26 is made by the shape-memory members 40, the moving mechanism 36 can be greatly downsized as compared with a moving mechanism employing, for example, a drive motor or the like. Notably, the moving mechanism 36 is not necessarily required to employ the above-described structure, and various changes can be made. For example, a biasing member other than the spring may be employed to bias the back panel member 26; an electromagnet, for example, may be used. When the electromagnet is employed as the biasing member, such a structure may be employed that the operation of the electromagnet is controlled according to environmental factors of the surroundings (for example, engine speed, accelerator opening, and detection by a hot water temperature sensor and a water temperature sensor) and the biasing force is adjusted as appropriate. Further, the number of the springs 38 and the shape-memory members 40 that are fixed to the peripheries of the opening 24 and the projection 28 may be three or more, and the springs 38 and the shape-memory members 40 may be attached to shafts that are different from the shafts 34 connecting the base panel member 22 and the back panel member 26.
Furthermore, the method of attaching the springs 38 and the shape-memory members 40 is not limited to the above-described method, and may be changed to other methods. With the moving mechanism using the shape-memory member 40, the shape of the shape-memory member 40 changes with the temperature increase, and the force against the tensile force of the spring 38 is produced, whereby a switch between the airflow state and the airflow interruption state is allowed according to the change in the ambient temperature. Therefore, when the ambient temperature decreases, the vehicle grille 20 is brought into the airflow interruption state, and entry of the air (cold air) through the communication passage 14 into the engine room 12 can be prevented. As a result, it is possible for the vehicle grille 20 to prevent intrusion of the cold air into the engine room 12 and to prevent the temperature decrease in the engine room 12 when overcooling occurs in winter. For this reason, when the vehicle grille 20 is attached to the hybrid vehicle, it is possible to prevent deterioration of fuel efficiency in the hybrid vehicle. When the ambient temperature increases, on the other hand, the projection 28 is separated from the opening 24 (separation state), and an excessive temperature increase in the engine room 12 can be prevented as the air from outside the vehicle 10 can flow through the opening 24 of the vehicle grille 20 into the engine room 12 (refer to Fig. 7A). Notably, the effect of temperature increase protection of the vehicle grille 20 can be evaluated by checking the volume of the air passing through the opening 24 and the temperature change in the engine room 12 in the state where the projection 28 is separated from the opening 24 (separation state). Namely, the volume of the air passing through the opening 24 is decided by an opening ratio of the opening 24. The opening ratio of the opening 24 is decided by the position of the projection 28 relative to the opening 24.
Therefore, the effect of temperature increase protection of the vehicle grille 20 can be evaluated by detecting the temperature change in the engine room 12 at a specified environmental temperature (at 25°C or 35°C, for example), while changing the opening ratio of the opening 24 (while changing the position of the projection 28 relative to the opening 24). Then, the opening ratio of the opening 24 is set appropriately based on the evaluation results, so as to prevent the temperature increase in the engine room 12 appropriately. It should be noted that the tensile force (biasing force) of the spring 38 (an example of the biasing member) used in the moving mechanism 36 can be set based on the wind pressure applied to the back panel member 26 when driving the vehicle. Specifically, the driving speed of the vehicle 10 is set to 80 km/h, for example, the wind pressure applied to the projection 28 is calculated from the surface area of the projection 28 (the area of the opening 24), and the tensile force (biasing force) for pressing the projection 28 against the opening 24 is set so as to withstand that wind pressure. Therefore, when the driving speed of the vehicle 10 does not exceed the predetermined driving speed, the projection 28 closes the opening 24 so as to prevent the intrusion of the outside air into the engine room 12. On the other hand, when the vehicle 10 is driving at a speed faster than the predetermined driving speed, and the wind pressure applied to the projection 28 exceeds the tensile force of the spring 38, the projection 28 is separated from the opening 24 (separation state), creating the airflow state in which the outside air flows through the opening 24 into the engine room 12.
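The sizing step described above (calculate the wind force on the projection at a design speed such as 80 km/h, then choose a spring that withstands it) can be sketched with the standard dynamic-pressure relation q = ½ρv². This is a minimal sketch under stated assumptions: the formula, the air density, the 50 cm² projection area, and the safety factor are illustrative choices, not values given in the specification.

```python
# Sketch of the spring-sizing calculation: estimate the head-on wind force
# on one projection at a chosen design speed, then require the spring's
# tensile (biasing) force to exceed it. All numeric values are assumptions.

AIR_DENSITY = 1.2  # kg/m^3, approximate air density near sea level (assumed)

def wind_force_on_projection(speed_kmh: float, area_m2: float) -> float:
    """Force (N) of head-on wind on a projection of the given frontal area."""
    v = speed_kmh / 3.6                            # km/h -> m/s
    dynamic_pressure = 0.5 * AIR_DENSITY * v ** 2  # q = 1/2 * rho * v^2, in Pa
    return dynamic_pressure * area_m2

# Design point from the text: 80 km/h; assume a 50 cm^2 opening per projection.
force = wind_force_on_projection(80.0, 50e-4)

# Choose the spring so its biasing force withstands this wind force,
# with an assumed safety margin of 1.5.
required_spring_force = 1.5 * force
print(f"wind force ~= {force:.2f} N, spring force >= {required_spring_force:.2f} N")
```

Because the force grows with the square of the speed, a grille sized this way stays closed below the design speed and blows open above it, which is exactly the speed-dependent switching the paragraph describes.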
When the tensile force (biasing force) of the spring 38 (an example of the biasing member) is set based on the wind pressure applied when driving the vehicle, and the airflow state or the airflow interruption state of the vehicle grille 20 is thereby set, the vehicle grille 20 is switched to the airflow state when the vehicle 10 drives at a speed faster than the predetermined driving speed. Thereby, the outside air flows through the vehicle grille 20 and the communication passage 14 into the engine room 12, and it is possible to prevent the excessive temperature increase in the engine room 12 when driving the vehicle at a speed faster than the predetermined speed. With the vehicle grille 20 according to the above-described embodiment, it is possible to improve decorativeness by adding to the beauty of the vehicle front 16, and to inspire users' motivation to purchase. Further, the vehicle grille 20 prevents the intrusion of the cold air into the engine room 12 when the temperature is low in winter, so as to secure heat insulation in the engine room 12, to prevent the deterioration of the fuel efficiency in winter, and to realize energy conservation and low fuel consumption. Furthermore, when driving the vehicle at a speed faster than the predetermined speed and when the ambient temperature is high, the air flow into the engine room 12 is permitted so as to prevent overheating of the engine room 12. Notably, the technique disclosed in this specification is not limited to the above-described embodiment, and various changes can be made. For example, the base panel member 22 and the back panel member 26 may have a shape and a size different from those described above.
Especially, the shape and the arrangement of the openings 24 of the base panel member 22 and the shape and the arrangement of the projections 28 of the back panel member 26 are not necessarily limited to those illustrated in Fig. 2, and may be changed to the shapes and the arrangements illustrated in Figs. 8A to 8C. Specifically, each of openings 24a and each of projections 28a may have a rounded oval shape (refer to Fig. 8A), each of openings 24b and each of projections 28b may have a diamond shape (refer to Fig. 8B), and each of openings 24c and each of projections 28c may have a rectangular (quadrangular) shape (refer to Fig. 8C). By combining the base panel member 22 and the projections 28a, 28b and 28c like this, it is possible to add a novel design (pattern or figure) to the vehicle grille 20. In addition, as the design of the vehicle front grille can be changed quite easily by changing the combination of the base panel member 22 and the projections 28, the beauty of the vehicle front becomes more diversified, and users' motivation to purchase can be inspired. Further, the moving mechanism 36 for moving the projection 28 of the back panel member 26 may not necessarily be provided to every one of the openings 24 of the base panel member 22; the moving mechanism 36 may be provided to specified (a part of the) openings 24, so as to set the airflow state and the airflow interruption state. Moreover, the moving mechanism 36 may have the following structure. Namely, the peripheral portion of the opening 24 of the base panel member 22, as illustrated in Figs. 9 and 10, overlaps the peripheral portion of the projection 28 of the back panel member 26, and at the overlapping portion, the shaft members 34 are attached to penetrate through the base panel member 22 and the back panel member 26. As illustrated in Fig. 9, the spring 38 and the shape-memory member 40 are attached to each of the shaft members 34.
The shape-memory member 40 is formed by the shape-memory alloy, and is arranged between the base panel member 22 and the back panel member 26. The spring 38 is arranged on the back surface side of the back panel member 26, and biases the back panel member 26 against the base panel member 22 (that is, biases the projection 28 toward the opening 24 so as to make the forward movement). The shape-memory member 40 does not change its shape (does not extend) at the predetermined temperature or lower. Therefore, when the environmental temperature is equal to or lower than the predetermined temperature, the projection 28 is fully inserted into the opening 24 so as to create the insertion state, and the air flow into the engine room 12 is interrupted. On the other hand, when the environmental temperature increases to be equal to or higher than the predetermined temperature, the shape-memory member 40 changes its shape (extends). Thereby, the projection 28 makes the backward movement from the insertion state against the biasing force of the spring 38, and the air can flow through the opening 24 into the engine room 12. Further, it is not necessarily required to attach both the spring 38 and the shape-memory member 40 to every one of the shaft members 34; as illustrated in Fig. 10, for example, the spring 38 may be attached to the shaft member 34 on one side (left side), and the shape-memory member 40 may be attached to the shaft member 34 on the opposite side (right side). Furthermore, with regard to the position at which the spring 38 is arranged, the spring may be arranged between the base panel member 22 and the back panel member 26, and a contractile force of the spring may be used so as to allow the projection 28 to make the forward movement to be inserted into the opening 24. Notably, the strength of the spring may be adjusted from the outside by manual operation and the like.
Further, a connecting member other than the connecting shaft may be used to connect the back panel member 26 and the base panel member 22. Furthermore, as another modification example, the spring 38 and the shape-memory member 40 as biasing means may be disposed not around the shaft 34 but in the vicinity of the opening 24 and the projection 28. Moreover, the moving mechanism 36 as illustrated in Fig. 11 may be employed. With the moving mechanism 36 as illustrated in Fig. 11, the peripheral portion of the opening 24 of the base panel member 22 is also connected to the peripheral portion of the projection 28 of the back panel member 26 by the shaft members 34. A spring 52 and a shape-memory member 50 are attached to each of the shaft members 34. The shape-memory member 50 is formed by the shape-memory alloy, and is arranged between the base panel member 22 and the back panel member 26. The spring 52 is arranged on the back surface side of the back panel member 26, and biases the back panel member 26 against the base panel member 22. A plate member 42 is fixed to the back surface of the base panel member 22 (the surface on the back panel member 26 side). The plate member 42 is formed by a magnetic material (steel material, for example). Magnets 44 and 46 are respectively fixed to the front surface (the surface on the base panel member 22 side) and to the back surface of the back panel member 26. The magnets 44 and 46 may be neodymium magnets, for example. A plate member 48 is fixed to a rear end 54 of the shaft member 34. The plate member 48 is formed by the magnetic material (steel material, for example). The magnet 44 and the plate member 42 are arranged inside the shape-memory member 50 that is formed in a coil spring shape, and the magnet 44 and the plate member 42 face each other with a certain distance therebetween.
The magnet 46 and the plate member 48 are arranged inside the spring 52, and the magnet 46 and the plate member 48 face each other with a certain distance therebetween. When the distance between the magnet 44 and the plate member 42 is reduced, the plate member 42 and the magnet 44 attract each other by a magnetic force and function as magnetic force attraction members, and when the distance between the magnet 46 and the plate member 48 is reduced, the plate member 48 and the magnet 46 attract each other by the magnetic force and function as the magnetic force attraction members. With the moving mechanism 36 as illustrated in Fig. 11, the shape-memory member 50 does not extend when the environmental temperature is the predetermined temperature or lower, and therefore, the back panel member 26 abuts against the base panel member 22 by the biasing force of the spring 52. This creates the insertion state, in which the projection 28 is fully inserted in the opening 24, and the air flow into the engine room 12 is interrupted. At this time, as the plate member 42 is attracted by the magnet 44, the back panel member 26 and the base panel member 22 are held stably while abutting against each other. Namely, positional displacement of the back panel member 26 relative to the base panel member 22 can be prevented even when an external force is applied to the back panel member 26. Therefore, the magnet 44 and the plate member 42 are an example of the "first holding means" described in the claims, but the first holding means may be formed by other components and the like. Meanwhile, when the environmental temperature increases to be equal to or higher than the predetermined temperature, the shape-memory member 50 extends, and the projection 28 (back panel member 26) makes the backward movement against the biasing force of the spring 52. This creates the state in which the air can flow through the opening 24 into the engine room 12.
Notably, when the back panel member 26 moves to the vicinity of the position where it abuts against the rear end 54 of the shaft member 34, the plate member 48 is attracted by the magnet 46. Thereby, displacement of the back panel member 26 relative to the base panel member 22 can be prevented, even when the external force is applied to the back panel member 26. Therefore, the magnet 46 and the plate member 48 are an example of "second holding means" described in claims, but the second holding means may be formed by other components and the like. Further, the first and the second holding means are formed by the magnetic force attraction members here, but may be formed by other components. It should be noted that, although the back panel member 26 is held relative to the base panel member 22 by the magnets 44 and 46 and the plate members 42 and 48 in the example illustrated in Fig. 11, such an aspect is not restrictive, and magnets may be used instead of the plate members 42 and 48, for example. Even when the plate members 42 and 48 are replaced by magnets, a similar effect can be obtained. Moreover, the positions at which the shape-memory member 50, the spring 52, the magnets 44 and 46, and the plate members 42 and 48 are arranged can be adjusted freely, and are not limited to the example illustrated in Fig. 11. For example, the magnets and the plate members may be arranged at positions different from the positions where the shape-memory member 50 and the spring 52 are arranged. Further, according to the above-described embodiment, the back panel member 26 is moved relative to the base panel member 22 that is secured to the vehicle 10, but the back panel member 26 may instead be secured to the vehicle 10, and the base panel member 22 may be moved relative to the back panel member 26. 
Furthermore, the vehicle grille in this specification may be arranged at the position that is different from the position of the front grille of the vehicle 10 (the position on the rear side or the lateral side of the vehicle, for example). Further, according to the above-described embodiment, each of the openings of the vehicle grille is opened and closed by changing the shape of the shape-memory member according to the environmental temperature, but the technique disclosed in this specification is not limited to such an aspect. For example, the shape-memory member may be electrically heated to control the temperature of the shape-memory member (that is, the shape of the shape-memory member), so as to open and close the opening of the vehicle grille. Specifically, the environmental temperature (the outside air temperature, the environmental temperature near the engine, the temperature of cooling water for a radiator, and the like (hereinafter simply referred to as the "environmental temperature")) is detected by detecting means such as a temperature sensor and, when the temperature detected by the temperature sensor is equal to or lower than the predetermined temperature, the shape-memory member is electrically heated to close the opening of the vehicle grille. When the environmental temperature is higher than the predetermined temperature, the electricity to the shape-memory member is stopped, so as to open the opening of the vehicle grille. With the structure like this, the opening can be opened and closed stably based on the detected temperature of the temperature sensor, since the shape of the shape-memory member is controlled by electrical heating. It should be noted that, when the structure of electrically heating the shape-memory member is employed, it is preferable to provide a lock mechanism for holding a closing state of the opening of the vehicle grille. 
As the position of the back panel member relative to the base panel member is held by the lock mechanism, it is possible to stop the electricity to the shape-memory member after the opening of the vehicle grille is closed (that is, after being locked by the lock mechanism). To open the opening of the vehicle grille in this state, the lock mechanism may be released by electrically heating another shape-memory member, so that the opening of the vehicle grille is opened by the biasing force of the spring. Furthermore, according to the above-described embodiment, the shape-memory member and the spring are arranged in the vicinity of the base panel member and the back panel member, but the technique disclosed in this specification is not limited to such an aspect. For example, a mechanism to drive the back panel member (the shape-memory member and the spring, for example) may be arranged at a position separated from the base panel member and the back panel member, and the back panel member may be driven via another mechanism (a link mechanism or a cable mechanism, for example). Notably, the technique disclosed in this specification may employ the technical means as follows. 
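The electrically heated variant described above amounts to a simple control loop: when the detected temperature is at or below the predetermined threshold, heat the shape-memory member to close the grille, engage the lock, and cut the current; above the threshold, release the lock and let the spring open the grille. The following is a minimal sketch of that logic; the class name, the threshold value, and the state flags are illustrative assumptions, not taken from the specification.

```python
# Hypothetical sketch of the electrically heated grille control described
# above. Names and the threshold value are illustrative, not from the patent.

CLOSE_THRESHOLD_C = 10.0  # assumed "predetermined temperature"

class GrilleController:
    def __init__(self):
        self.heating_on = False   # current to the shape-memory member
        self.locked = False       # lock mechanism holding the closed state

    def update(self, env_temp_c: float) -> str:
        """Decide heating/lock state from the detected environmental temperature."""
        if env_temp_c <= CLOSE_THRESHOLD_C:
            if not self.locked:
                # Heat the shape-memory member so it closes the opening,
                # then engage the lock and cut the current to save power.
                self.heating_on = True
                self.locked = True
                self.heating_on = False
            return "closed"
        else:
            if self.locked:
                # Releasing the lock (e.g. by heating a second shape-memory
                # member) lets the biasing spring open the opening.
                self.locked = False
            self.heating_on = False
            return "open"

ctrl = GrilleController()
print(ctrl.update(5.0))   # cold: grille closes and locks
print(ctrl.update(25.0))  # warm: lock released, grille opens
```

The point of the lock is visible in the sketch: once `locked` is set, the heating current can be dropped while the closed state is still held, which is exactly the power-saving benefit the text attributes to the lock mechanism.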
(Technical means 1) A vehicle grille to be attached to a vehicle, the grille including: a base panel member that is arranged on a front surface side of the vehicle, and is provided with an opening disposed to form a part of a design on a surface of the vehicle; a back panel member that is arranged on a back surface side of the base panel member, and has a projection that is configured to be fittingly inserted into the opening; and a moving mechanism that is configured to allow the back panel member to move relative to the base panel member, so that the projection is inserted into the opening, and the projection in an insertion state is separated from the opening, wherein the moving mechanism includes a biasing member that biases the projection against the opening, and a shape-memory member that produces a force against a biasing force of the biasing member, wherein the biasing member is disposed near the opening and the projection, wherein the shape-memory member is disposed near the biasing member and is configured to change its shape according to a change in ambient temperature, and wherein, in a state where the projection is inserted into the opening, the design on the surface of the vehicle is formed by combining the base panel member and the projection of the back panel member. 
(Technical means 2) The vehicle grille according to the technical means 1, wherein the moving mechanism further includes a connecting shaft member for connecting the back panel member and the base panel member at a portion where a peripheral portion of the opening of the base panel member overlaps a peripheral portion of the projection of the back panel member, and wherein the biasing member and the shape-memory member are attached around the connecting shaft member. 
(Technical means 3) An airflow interruption device that is attached to a surface of a vehicle, the airflow interruption device including: a base panel member that is arranged on a front surface side of the vehicle and is provided with an opening; a back panel member that is arranged on a back surface side of the base panel member, and has a projection that is configured to be inserted into the opening so as to be in close contact therewith (in a fitting manner); and a moving mechanism that is configured to allow the back panel member to move relative to the base panel member, so that the projection is inserted into the opening, and the projection in the insertion state is separated from the opening, wherein the moving mechanism includes a connecting shaft member that is fixed at a portion where a peripheral portion of the opening of the base panel member overlaps a peripheral portion of the projection of the back panel member, and that connects the back panel member and the base panel member, and a biasing member that is attached around the connecting shaft member and biases the projection against the opening. 
(Technical means 4) The vehicle grille according to the technical means 1 or the airflow interruption device according to the technical means 3, further including: at least one of first holding means for holding the back panel member at the first position and second holding means for holding the back panel member at the second position.
- Debt Ceiling Legislation Also Speaks to Administrative Law: Rules governing student loans are exempt from a negotiated rulemaking mandate.
- Expand Centralized Regulatory Review to Independent Agencies: Congress should require independent regulatory commissions to perform cost-benefit analyses.
- The REINS Act: A Constitutional Means to Control Delegation: The proposed legislation would give Congress authority over a limited set of major regulations.
- Agencies Should Provide Enhanced Procedural Protections in Aggregate Settlements: Agencies fail to provide necessary fairness when they compensate large groups of people.
- Why Congress Should Not Codify Cost-Benefit Analysis Requirements: Codifying the cost-benefit analysis requirements of the Executive Order would preempt valuable nuances of the current review system.
- The Myths of Benefit-Cost Analysis: Congress should resist the popular misconceptions of the critics of benefit-cost analysis.
- Week in Review: Regulatory news in review.
- Why the REINS Act Is Unwise If Not Also Unconstitutional: A proposed act would hinder needed regulations, thereby interfering with the executive branch's constitutional authority to execute the law.
- PPR Panel on Outsourcing National Security: Two prominent scholars discuss the federal government's reliance on private firms to carry out national defense functions.
- Week in Review: Regulatory news in review.
- Congressional Republicans Seek to Put the "REINS" on Costly Agency Rulemaking: Republican legislation would require Congress to approve major rules passed by federal agencies.
- Fall 2010 Recap: Risk Regulation Seminar Series: The Penn Program on Regulation featured a number of experts who discussed risk in a number of regulatory contexts.
https://www.theregreview.org/tag/administrative-law/page/17/
Proceedings Papers Proc. ASME. POWER2020, ASME 2020 Power Conference, V001T03A008, August 4–5, 2020 Paper No: POWER2020-16477 Abstract To address one of the main environmental concerns, the engine-out emissions, an enhanced understanding of the combustion process itself is fundamental. Recent optical and laser-optical measurement techniques provide a promising approach to investigating and optimizing the combustion process with regard to emissions. These measurement techniques are already quite common for passenger car and truck-size engines and contribute significantly to their improvement. Transferring these measurement techniques to large-bore engines from low to high speed is still rather uncommon, especially because of the greater challenges posed by the engine size and thus the much higher stability requirements and design effort for optical accessibility. To cover this new field of research, a new approach for a medium-speed large-bore engine was developed using a fisheye optic mounted centrally in the cylinder head to design a fully optically accessible engine test bench. This new approach is detailed with a test setup layout and a stability concept consisting of cooling systems and the development of a suitable operation strategy based on simulation and experimental verification. This single-cylinder engine, with a 350 mm bore and 440 mm stroke providing 530 kW nominal load at 750 rpm, was tested up to 85% nominal load in skipped-fire engine operation mode. The measurements of the flame chemiluminescence of a dual-fuel combustion of the diesel-gas type present proof of the feasibility of the new design as a starting point for future systematic studies on the combustion process of large-bore engines. 
Topics: Engines, Large-bore engines Proceedings Papers Effects of Ambient Air Humidity on Emissions and Efficiency of Large-Bore Lean-Burn Otto Gas Engines in Development and Application Proc. ASME. POWER2020, ASME 2020 Power Conference, V001T03A010, August 4–5, 2020 Paper No: POWER2020-16572 Abstract The use of large-bore Otto gas engines is currently spreading widely given the growing share of Power-to-Gas (P2G) solutions using renewable energies. P2G with a Combined Heat and Power (CHP) plant offers a promising way of utilizing chemical energy storage to provide buffering for volatile energy sources such as wind and solar power all over the world. Consequently, ambient conditions like air temperature, humidity and pressure can differ greatly between the location and time of engine operation, influencing its performance. Lean-burn Otto processes in particular are sensitive to changes in ambient conditions. Besides, targeted use of humidity variation (e.g. through water injection into the charge air or combustion chamber) can help to reduce NOx emissions at the cost of a slightly lower efficiency in gas engines, making it an alternative to selective catalytic reduction (SCR) exhaust gas aftertreatment. The ambient air condition boundaries have to be considered already in the early stages of combustion development, as they can also have a significant effect on measurement data generated in combustion research. To investigate this behavior, a test bench with a natural gas (CNG) powered single-cylinder research engine (piston displacement 4.77 l) at the Institute of Internal Combustion Engines (LVK) of the Technical University of Munich (TUM) was equipped with a sophisticated charge air conditioning system. 
This includes an air compressor and refrigeration dryer, followed by temperature and pressure control, as well as a controlled injection system for saturated steam and homogenizing containers, enabling the test bench to precisely emulate a wide range of charge air parameters in terms of pressure, temperature and humidity. With this setup, different engine tests were conducted, monitoring and evaluating the engine's emission and efficiency behavior with respect to charge air humidity. In a first approach, the engine was operated maintaining a steady air-fuel equivalence ratio λ, fuel energy input (Q̇fuel = const.) and center of combustion (MFB 50%) while the relative ambient humidity was varied in steps between 21% and 97% (at 22 °C and 1013.25 hPa). Results show a significant decrease in nitrogen oxide (NOx) emissions (−39.5%) and a slight decrease in indicated efficiency (−1.9%), while hydrocarbon (THC) emissions increased by around 60%. The generated data shows how important it is to consider charge air conditioning already in the development stage at the engine test bench. The comparability of measurement data depends greatly on ambient air humidity. In a second approach, the engine was operated at a constant load and constant NOx emissions, while again varying the charge air humidity. This situation better reflects actual engine behavior at a CHP plant, where today NOx-driven engine control is often used to maintain constant NOx emissions. The decrease in indicated efficiency was comparable to the prior measurements, while the THC emissions showed only a mild increase (5%). From the generated data it is, for instance, possible to derive operational strategies to compensate for changes in ambient conditions while maintaining emission regulations as well as high-efficiency output. Furthermore, the results suggest possibilities, but also challenges, of utilizing artificial humidification (e.g. 
through water injection), considering the effects on THC emissions and efficiency. A possible shift of the knocking limit to earlier centers of combustion with higher humidity remains to be investigated. The main goal is a further decrease of NOx emissions and an increase in efficiency, while still keeping hydrocarbon emissions under control.
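The humidity sweep in the abstract spans relative humidities of 21% to 97% at 22 °C and 1013.25 hPa. To put these set points into absolute terms, the water content of the charge air can be estimated from the Magnus approximation for saturation vapor pressure; the sketch below is an illustration only and is not part of the paper's methodology.

```python
import math

def sat_vapor_pressure_hpa(t_c: float) -> float:
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(rh_pct: float, t_c: float, p_hpa: float) -> float:
    """Mass of water per kg of dry air (kg/kg) from relative humidity."""
    e = (rh_pct / 100.0) * sat_vapor_pressure_hpa(t_c)  # partial pressure
    return 0.622 * e / (p_hpa - e)                      # psychrometric ratio

# Endpoints of the study's humidity sweep at 22 degC and 1013.25 hPa
for rh in (21, 97):
    w = humidity_ratio(rh, 22.0, 1013.25)
    print(f"RH {rh}%: {w * 1000:.1f} g water per kg dry air")
```

At these conditions the sweep corresponds to roughly 3.4 to 16 g of water per kilogram of dry air, so the engine sees a nearly fivefold difference in absolute water content across the test points.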
https://verification.asmedigitalcollection.asme.org/search-results?f_AllAuthors=Maximilian+Prager
Boston, MA—Patients with cervical and endometrial cancers experience fewer gastrointestinal and genitourinary adverse events and have improved quality of life when they receive intensity-modulated radiation therapy (IMRT) compared with conventional radiation therapy, according to the results of a recent study presented at the 2016 Annual Meeting of the American Society for Radiation Oncology (ASTRO). “The way that radiation is performed has a major impact on the risk of side effects from treatment. We know that IMRT reduces the amount of normal tissue irradiated, so we suspected it would have fewer side effects. [Our study is one of the first] to rigorously ask this question using patient questionnaires to ensure that the lower doses resulted in meaningful differences in patients’ experiences during treatment,” said lead investigator, Ann Klopp, MD, PhD, the University of Texas M.D. Anderson Cancer Center, Houston. The study included 278 patients with cervical or endometrial cancer treated with postoperative pelvic radiation therapy at centers in North America, Japan, and Korea. Patients were stratified based upon radiation therapy dose (45 Gy or 50.4 Gy), chemotherapy (0 or 5 cycles of weekly cisplatin), and disease site, and were then randomized to receive standard radiation therapy or IMRT. Several patient-reported outcomes measures were used, including the Expanded Prostate Cancer Index Composite (EPIC) for bowel and urinary toxicities; Patient-Reported Outcomes–Common Terminology Criteria for Adverse Events for gastrointestinal and genitourinary adverse events; and Functional Assessment of Cancer Therapy–General with cervix subscale to track health-related quality of life. IMRT resulted in significantly fewer bowel-related toxicities compared with radiation therapy on the EPIC measure: −18.6 points versus −23.6 points, respectively. 
Patients receiving IMRT had less diarrhea and fecal incontinence, with 1 in 5 women in the standard radiation therapy group reporting taking ≥4 antidiarrheal medications daily compared with 7.8% of women in the IMRT group (P = .04). The frequency of urinary side effects was also lower in the IMRT group. IMRT had significantly less of a negative impact on patients’ quality of life than standard radiation therapy, according to the Functional Assessment of Cancer Therapy–General (P = .06). In addition, patients in the IMRT group had less change in physical well-being and fewer additional concerns than those in the conventional radiation therapy group. “Many radiation oncologists already use IMRT for women undergoing pelvic radiation, but this research provides data that using IMRT, which is a more resource-intensive treatment, makes a real difference to patients….When performed by an experienced radiation oncology team, IMRT reduces the risk of short-term bowel and bladder side effects for patients with cervical and endometrial cancer,” Dr Klopp said.
https://www.theoncologypharmacist.com/top-issues/2016-issues/august-2016-vol-9-no-3?view=article&artid=16897:fewer-adverse-events-with-intensity-modulated-radiation-therapy-for-patients-with-gynecologic-cancers&catid=2764
(A) The owner of each public pool shall arrange for the collection and bacteriological examination of the water in that spa or hot tub in accordance with the following schedule: (1) Operators of public pools shall submit at least one water sample per week for bacteriological examination whenever the facility is open for use. (2) Results of such examination shall be reported to the Department as soon as results are available. (3) Failure to meet this requirement shall constitute grounds for closure by the Department. (B) No two consecutive samples, nor three samples collected in a six-week period, from any public pool shall demonstrate the following: (1) Contain more than 200 bacteria per milliliter, as determined by the standard 35 degree Centigrade agar plate count. (2) Show positive test (confirmed test) for coliform organisms in any of the five 10 ml portions of a sample, or more than one coliform organism per 50 ml when the membrane filter test is used. (3) Show the presence of any coliform when the 100 ml presence/absence test is used. (4) Show the presence of more than two colony forming units when the 100 ml enzyme substrate coliform test is used. (a) Failure to collect and analyze weekly water samples during the period that a pool is open for use is considered an unsatisfactory report for the applicable week. (b) All public pool samples shall be collected, dechlorinated, and examined for total bacteria using the heterotrophic 35 degree Centigrade plate count method and for total coliform using the multiple tube fermentation test, the membrane filter test, the MMO-MUG test, the 100 ml presence/absence test, or the 100 ml quantifiable enzyme substrate test. 
Such tests shall be performed by a state approved bacteriological laboratory in accordance with the procedures outlined in Part 9000, Microbiological Examination of Water, of the 18th edition of “Standard Methods for the Examination of Water and Wastewater (APHA)” or the most recently approved and accepted edition. Where samples are examined in laboratories other than those of the department, copies of the report of the examination shall be sent by the laboratory or by the public pool operator to the LaPorte County Health Department.
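The limits in subsection (B) lend themselves to a simple automated check. The sketch below is illustrative only: the record format and field names are assumptions, not part of the ordinance, and it collapses the (B)(2) through (B)(4) coliform criteria into a single pass/fail flag per sample.

```python
# Illustrative compliance check for subsection (B); data shape is assumed.

def sample_fails(s: dict) -> bool:
    """True if a single sample exceeds any limit in (B)(1)-(B)(4)."""
    if s.get("plate_count_per_ml", 0) > 200:   # (B)(1): >200 bacteria/ml
        return True
    if s.get("coliform_positive", False):      # (B)(2)-(B)(4), collapsed
        return True
    return False

def pool_out_of_compliance(samples: list) -> bool:
    """Two consecutive failing samples, or three failing samples within a
    six-week window, put the pool out of compliance. Samples are assumed
    to be ordered oldest-first, one per week."""
    fails = [sample_fails(s) for s in samples]
    consecutive = any(a and b for a, b in zip(fails, fails[1:]))
    six_week = any(sum(fails[i:i + 6]) >= 3 for i in range(len(fails)))
    return consecutive or six_week

weekly = [
    {"plate_count_per_ml": 150, "coliform_positive": False},
    {"plate_count_per_ml": 250, "coliform_positive": False},  # fails (B)(1)
    {"plate_count_per_ml": 90,  "coliform_positive": False},
    {"plate_count_per_ml": 40,  "coliform_positive": True},   # fails coliform
    {"plate_count_per_ml": 300, "coliform_positive": False},  # fails (B)(1)
]
print(pool_out_of_compliance(weekly))  # prints True
```

Note that subsection (a) adds a further rule not modeled here: a week with no sample at all also counts as an unsatisfactory report.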
https://codelibrary.amlegal.com/codes/laportecounty/latest/laporteco_in/0-0-0-12842
About the cracking of bones Cracking of the joints in the hand or elsewhere is very common and usually harmless, and contrary to what we may have heard from our grandmothers, cracking your knuckles will not cause arthritis. Cracking the joints can bring relief and allow greater movement in the joint. A 2020 study showed that theories of why joints crack are still being debated, and that more advanced imaging techniques are needed to gain a clear view of how the sound occurs. The sound of joint cracking can become more noticeable with age, as some of the cartilage in the body erodes. When the cracking sound is accompanied by pain, swelling, or an injury, a doctor should be seen promptly to check for an underlying health disorder. Causes of cracking joints Joint cracking can have different causes. It is common and does not usually indicate a health problem, but exactly what produces the sound is the subject of many studies and is not fully understood. Some common causes of joint cracking include: - Sound caused by muscle activity: as a muscle stretches, it can produce sounds at the joints; for example, a tendon can slip in and out of position when stretching, exercising, dancing, or making repetitive movements at work. - Loss of cartilage: this can happen with age, as joint surfaces become rough, causing joints to crack during movement. - Arthritis: this can also cause cartilage degeneration and joint cracking. When is joint cracking painful, and when is it painless? When is it painless? The cracking sound in joints and ligaments is common and often normal. Synovial fluid protects the joints; over time, gases accumulate in this fluid, and use of the joint releases them, producing a cracking sound. Cracking is likely to increase with age, but it is not a cause for concern in the absence of pain. When is it painful? 
- Arthritis: the cracking sound can be caused by damage to cartilage and bone. Different types of arthritis change the way the bones move; for example, osteoarthritis affects adults as they age, and as the cartilage breaks down, a person feels pain and swelling. At first this may be mild, but if the lining of the joint is damaged, intense rubbing between the bones can be heard. - Patellofemoral pain syndrome: dull pain behind the kneecap can indicate an underlying injury or overuse. With movement, a crackling or squeaking sound usually accompanies the pain. - Respiratory diseases: the term "pop" can also describe sounds that originate in the lungs. These sounds, also called crackles, can indicate respiratory disease and may or may not be audible without a stethoscope. - Meniscus tear: the meniscus is a thin layer of cartilage that cushions the joint between the femur and tibia. If the meniscus tears, the torn edges can catch during movement, causing swelling and pain. The mechanism of joint cracking The exact mechanism of joint cracking is not known precisely, but a common explanation is that pressure on the joint creates small bubbles in the synovial fluid, which then burst because they formed so quickly. The synovial fluid contains oxygen, nitrogen and carbon dioxide, and helps keep the bones from rubbing against each other. A 2020 study used real-time magnetic resonance imaging (MRI) to observe joint cracking. This study showed that the sound is caused by the formation of a cavity in the joint fluid, not by the bursting of pre-existing small bubbles as commonly believed. A 2018 study also developed a mathematical model of bubble and sound dynamics that was consistent with the previously observed bubble-collapse pattern. Is cracking your joints bad for your health? 
Cracking your joints is not harmful in itself, but it can be annoying to the people around you if done frequently. If a person cracks joints forcefully, such as in the back, they can hurt themselves by putting pressure on a nerve or straining a muscle. According to a small 2011 study, joint cracking can provide a physical feeling of relief from pressure, whether a person does it themselves or a chiropractor performs the adjustment. The common myth that cracking your knuckles causes arthritis in the hand was shown to be incorrect by a 2011 study; studies have also shown that joint cracking does not thin the cartilage or eventually lead to osteoarthritis. When should you see a doctor about joint cracking? Although joint cracking can be annoying, there is usually no need to see a doctor (for either adults or children). In some cases, however, the cracking can be caused by a degenerative disorder that makes the joint more susceptible to these and other sounds. Unless joint cracking is accompanied by other symptoms such as pain and swelling, there is no cause for concern; sometimes, though, it is a symptom of a disorder that needs medical care, such as gout, inflammation, or joint dislocation. You should see a doctor if the cracking is accompanied by any of the following symptoms: - injury - bruising - a limitation in the range of movement - pain - swelling Experts in figuring out what causes joint pain and how to get rid of it An aging population is more likely to have joint issues. With enhanced knowledge, the Aurora Health Care staff can pinpoint crepitus and joint discomfort so that you can resume your normal activities without restriction. 
As one of the state's major regional health care systems, we provide the following services: - Comprehensive diagnosis by our team of joint care experts, who have years of combined experience. We can often identify the source of crepitus without invasive procedures. - Sophisticated diagnostic tools, including MRI, 3-D CT scans, and diffusion tensor imaging (DTI). DTI is a cutting-edge technique for assessing the health of cartilage, the smooth white substance that acts as a shock absorber at the ends of bones. - Effective noninvasive and surgical treatment options, including physical therapy, bracing, and complete joint replacement. - Convenient locations: clinics and hospitals across eastern Wisconsin and northern Illinois where you can see a doctor and get physical therapy. Visit one of our sites to see for yourself. - Complete integration of our health care system for seamless treatment. In other words, you will have a team of health care professionals working together to develop the best plan of care for you. Tips to get rid of the cracking sound - Pay attention: if you are in the habit of cracking your joints and want to stop, the first step is to notice when you crack your knuckles, neck, or back, so you can avoid doing it unconsciously. - Move more: if you sit or stand in one position for a long time, you can become stiff and your joints may crack, so take frequent movement breaks, at least every half hour if you have been sitting at a desk all day. - Stretch gently: another way to reduce joint cracking is gentle stretching, which can move synovial fluid around the joints and lubricate them. There are dynamic and static stretches for every joint. 
- Relieve stress: if stress is making you crack your joints, try rest and relaxation techniques such as deep breathing, meditation, or a stress ball. - Exercise: try to work up to 150 minutes of exercise per week, choosing activities that suit your age and lifestyle. Any physical activity, such as housework or even a short walk, can be part of an exercise routine.
https://www.soft3arbi.com/2021/09/causes-of-crackling-severe-bones-and-do.html
Wallpapers tagged with 'Medium: Digital Composite'. Each wallpaper on InterfaceLIFT has been tagged with keywords, allowing you to browse for similar content, whether it be by Color, Scene, Location, Medium, Event, Equipment, or Subject. You are currently browsing the 2 desktop wallpapers that were tagged with 'Medium: Digital Composite', beginning with the most popular images. The Galactic Center By NASA Images January 31st, 2016 This composite image combines a near-infrared view from the Hubble Space Telescope, an infrared view from the Spitzer Space Telescope, and an X-ray view from the Chandra X-ray Observatory into one multi-wavelength picture. It features the spectacle of stellar evolution: from vibrant regions of star birth, to young hot stars, to old cool stars, to seething remnants of stellar death called black holes. This activity occurs against a fiery backdrop in the crowded, hostile environment of the galaxy's core, the center of which is dominated by a supermassive black hole nearly four million times more massive than our Sun. Permeating the region is a diffuse blue haze of X-ray light from gas that has been heated to millions of degrees by outflows from the supermassive black hole as well as by winds from massive stars and by stellar explosions. Infrared light reveals more than a hundred thousand stars along with glowing dust clouds that create complex structures including compact globules, long filaments, and finger-like "pillars of creation," where newborn stars are just beginning to break out of their dark, dusty cocoons. Sedona Nights April 19th, 2013 I combined a couple of shots from my getaway to Sedona, Arizona. Thanksgiving, 2012. Adobe Photoshop. Canon EOS 5D Mark II, Tamron SP 70-300mm F/4-5.6 Di VC USD. Photo Settings: 88mm, f/8, 1/400 second, ISO 100. Copyright 2000-2021 L-bow Grease, LLC.
https://interfacelift.com/wallpaper/tags/722/medium/digital_composite/
Shipped From United States of America By: A2zbooks Title: From the Heart Through the Hands: The Power of Touch in Caregiving Author: Nelson Price: USD $19.75 Category: Medicine & Health ISBN: 9781899171934 Item ID: AZ00-1561043197 Quantity: 1 Publisher: Findhorn Press, USA, 2001 Edition: Edition Unstated Binding: Softcover Condition: As New Findhorn Press, 2001, Softcover, Book Condition: As New, Edition Unstated. Book is in as-new condition. Cover may have some minor wear from storage. Text appears to be clean. Quantity Available: 1. Shipped Weight: Under 1 kilo. Category: Medicine & Health; ISBN: 1899171932. ISBN/EAN: 9781899171934. Pictures of this item not already displayed here available upon request. Inventory No: 1561043197.
https://bookzangle.com/booklist/AZ00-1561043197/from-the-heart-through-the-hands-the-poser-of-touch-in-caregiving-by-nelson/9781899171934
138 Ariz. 257 (1983) 674 P.2d 320 Judith Ann STIREWALT, in her own behalf and on behalf of Brian Craig Stirewalt and Michael Jordan Stirewalt, minors, Plaintiffs-Appellants, v. P.P.G. INDUSTRIES, INC., a Pennsylvania corporation; Meyer Drum Company; Kaiser Steel Corporation, dba Meyer Drum Company; Kaiser Steel Corporation, a Nevada corporation; and Accel Plastic Products, Inc., an Arizona corporation, Defendants-Appellees. No. 1 CA-CIV 6075. Court of Appeals of Arizona, Division 1, Department B. October 4, 1983. Review Denied January 4, 1984. *258 Tolman & Martineau by J. Robert Tolman, Mesa, for plaintiffs-appellants. Snell & Wilmer by James R. Condo, R. Chris Reece, Phoenix, for defendant-appellee P.P.G. Industries, Inc. Gallagher & Kennedy by Michael K. Kennedy, John Dillingham, Kevin E. O'Malley, Phoenix, for defendant-appellee Accel Plastic Products, Inc. Crampton, Woods, Broening & Oberg by James R. Broening, Jan E. Cleator, Phoenix, for defendants-appellees Meyer Drum Co. and Kaiser Steel Corp. OPINION FROEB, Judge. Judith Ann Stirewalt, in her own behalf and on behalf of Brian Craig Stirewalt and Michael Jordan Stirewalt, minors (plaintiffs), appeal from the grant of summary judgment by the trial court dismissing their wrongful death suit against P.P.G. Industries, Inc., Meyer Drum Co., Kaiser Steel Corporation, and Accel Plastic Products, Inc. The issue presented to this court is the validity of the reassignment of plaintiffs' wrongful death claim by decedent's workers' compensation carrier, Wausau Insurance Company (insurance carrier). Plaintiffs' decedent, Steven Stirewalt, was fatally injured on June 7, 1978, in the course and scope of his employment. Plaintiffs filed for and received workers' compensation death benefits, but did not file a personal injury lawsuit within one year from the date of injury. 
Consequently, their claim was automatically assigned by operation of law to plaintiffs' decedent's insurance carrier.[1] The insurance carrier reassigned the claim to plaintiffs on June 6, 1980, one day before the limitations period on the claim would have run. The same day plaintiffs filed suit against defendants. As this lawsuit was proceeding to trial, the posture of the litigation changed dramatically with the Arizona Supreme Court's decision in Ross v. Superior Court, 128 Ariz. 301, 625 P.2d 890 (1981). In Ross, filed February 24, 1981, the court held that, as a matter of law, a "claim assigned to the insurance carrier by operation of law is neither assignable to a third party or reassignable to the insurance claimant." Id. at 302, 625 P.2d at 891. On the basis of Ross, defendants filed motions for summary judgment, claiming the purported reassignment was invalid, and that plaintiffs' suit was barred as a matter of law. The motions were granted and final judgments were entered for defendants on April 17, 20 and 23, 1981. A timely notice of appeal therefrom was filed by the plaintiffs on May 15, 1981. On April 27, 1981, between the entry of judgments and the filing of the notice of *259 appeal, the Arizona Legislature passed and the governor signed into law, H.B. 2176, amending A.R.S. § 23-1023, authorizing reassignments of personal injury claims.[2] It may be assumed this was in response to the Ross decision. The reassignment provision of the amendment further provided that it was applicable to all pending cases assigned or reassigned under A.R.S. § 23-1023.[3] On May 18, 1981, plaintiffs filed, pursuant to rule 60(c), Arizona Rules of Civil Procedure, a motion to set aside judgments based upon the amendment and additions to A.R.S. § 23-1023. The trial court denied the motion and plaintiffs subsequently appealed to this court. After this appeal was deemed "at issue," the Arizona Supreme Court decided Chevron Chemical Co. v. Superior Court, 131 Ariz. 
431, 641 P.2d 1275 (1982). The supreme court's decision in Chevron Chemical upheld the validity of amended A.R.S. § 23-1023, including the statute's retroactive provision, rejecting the argument that the retroactivity provision violated the state or federal constitutions.[4] Plaintiffs then filed a motion in this court for an order remanding this case to the superior court for a trial on the merits for the reason that the Chevron Chemical decision is dispositive of all issues. The motion was successfully opposed by defendants who argued that the Chevron Chemical decision did not address all of the constitutional arguments raised in the present case. We did, however, permit plaintiffs to file a reply brief addressing the Chevron Chemical decision. Defendants were also permitted to file a joint supplemental answering memorandum. The narrow issue as framed by the supplemental arguments before this court is whether the Chevron Chemical decision controls the outcome of this case. Defendants argue that the retroactive provision of A.R.S. § 23-1023, as amended, violates due process by depriving defendants of a vested right to the defense of nonassignability of unliquidated tort claims. They argue that this issue was not addressed by the court in Chevron Chemical. In deciding this issue we begin with the presumption favoring the constitutional validity of this statute. Defendants therefore bear the burden of proving that the statute infringes upon a constitutional guarantee or violates some constitutional principle. State v. Yabe, 114 Ariz. 89, 559 P.2d 209 (App. 1977). Whenever possible, we will construe a statute so as to give fair import to its terms in order to effect its object and promote justice. State v. Valenzuela, 116 Ariz. 61, 567 P.2d 1190 (1977). We will not declare an act of the legislature unconstitutional unless satisfied beyond a reasonable doubt that the act is in conflict with the federal or state constitutions. Chevron Chemical Co. v. Superior Court. 
In our opinion, this issue is controlled by the supreme court's decision in Chevron Chemical. The issue there was whether "a statute of limitations defense is a vested property right which may not be taken without due process of law contrary to the Fifth and Fourteenth Amendments to the United States Constitution and Article 2, Section 4 of the Arizona Constitution." Id. 131 Ariz. at 438, 641 P.2d at 1282. We see no distinction between the issue presented here and that in Chevron Chemical. We are not persuaded by defendants' attempt at a second bite at the apple merely by restating the issue. The net effect of the *260 retroactivity provision of the statute is to require "alleged tortfeasors to respond in damages for pain and suffering in those cases which prior to statute, there had been no such liability after the claim had been irrevocably assigned to the carrier." Id. at 438, 641 P.2d at 1282. The court recognized that defendants can be subject to increased liability if amended A.R.S. § 23-1023 is applied to claims barred by the one year statute of the Workmen's Compensation Act, but not barred by the two year general statute of limitations for personal injury, A.R.S. § 12-542. Nonetheless, the court rejected the argument that the right to raise a one year statute of limitations defense is a vested property right within the protection of the fourteenth amendment. We agree with defendants that the retroactive application of A.R.S. § 23-1023 subjects them to increased liability. Defendants' claim, however, amounts to nothing more than a claim that the statute in effect extends a lapsed statute of limitations. That argument was rejected in Starks v. 
Rykoff & Co., 673 F.2d 1106 (9th Cir.1982) wherein the court, applying Arizona law, stated: Where a lapse of time has not invested a party with title to real or personal property, a state legislature may extend a lapsed statute of limitations without violating the fourteenth amendment, regardless of whether the effect is seen as creating or reviving a barred claim. Id. at 1109. Defendants have no vested right in the common law defense of nonassignability of tort claims. What they have is the right to assert the one year limitations statute of A.R.S. § 23-1023. That right is purely procedural in nature. Prior to the Chevron Chemical decision the negligent third party was still subject to suit within the two year statute of limitations for personal injury actions. A.R.S. § 23-1023 merely split the time to sue between employee and carrier. The Workmen's Compensation Act merely provided that as between the employee and the employer, the employee was required to bring suit within one year. This was not done to benefit the negligent third party, but to protect the employer or his carrier and insure that they would have sufficient time to enforce their subrogated rights to proceed against a negligent third party. Chevron Chemical Co. v. Superior Court, 131 Ariz. at 439, 641 P.2d at 1283. The retroactive provision of amended A.R.S. § 23-1023 therefore creates no new substantive right in injured employees but only goes to matters of remedy in restoring a barred claim. We find no due process violation. For the reasons stated, the judgments of the trial court are reversed and the case is remanded for further proceedings consistent with this opinion. GRANT and GREER, JJ., concur. NOTES [1] The assignment was made in accordance with A.R.S. 
§ 23-1023(B), which at the time provided: If the employee entitled to compensation under this chapter, or his dependents, does not pursue his or their remedy against such other person by instituting an action within one year after the cause of action accrues, the claim against such other person shall be deemed assigned to the insurance carrier, or to the person liable for the payment thereof. Such a claim so assigned may be prosecuted or compromised by the insurance carrier or the person liable for the payment thereof. [2] The amendment provides that any assigned claim may be reassigned in its entirety to the employee or his dependents and that after reassignment the claim may be pursued as if it had been filed within the first year. See A.R.S. § 23-1023(B). [3] Laws 1981, Ch. 226, Sec. 2(A) provides: Any claim which was or may be commenced pursuant to assignment or reassignment under § 23-1023 ... prior to expiration of the statute of limitations [2 years] and which has not been finally adjudicated or which is currently being appealed or for which time for appeal has not expired shall be valid. [4] More recently, the Ninth Circuit Court of Appeals, in applying Arizona law, has upheld the constitutionality of A.R.S. § 23-1023, rejecting the argument that the statute's retroactivity provisions violate the due process clause of the fourteenth amendment. Starks v. S.E. Rykoff & Co., 673 F.2d 1106 (9th Cir.1982).
Abstract—In recent years, food waste has received growing interest from local, national and international organizations, as well as NGOs from various disciplinary fields. As food waste occurs in all stages of the food supply chain, private households have been identified as key actors in food waste generation. The project maps the small but expanding academic territory of consumer food waste by systematically reviewing empirical studies on food waste practices, distilling the factors that foster and impede the generation of food waste at the household level, and proposing a better solution for avoiding food waste through proper and adequate usage of ingredients. Mapping the determinants of waste generation deepens the understanding of household practices and informs the design of an automatic kitchen pantry. Much food waste can be avoided by using the proper quantities while cooking: preparing a particular dish calls for experience and accurate estimation, and without them the amount of food wasted is high. It is hard to know the right amount of ingredients to buy and what will be cooked in the future, and measuring foods and ingredients is also an important part of eating healthier. The IoT is an emerging technology that is being applied across automation processes. This project automates part of the cooking process by measuring and selecting the proper ingredients for a particular dish, thereby reducing human error in the kitchen and consequently reducing food waste. Keywords: Raspberry Pi, Virtual Network Computing (VNC), Global Hunger Index (GHI). Cite this Article Thangatamilan M, Nandha Kiran RK, Sakthivel N, Subashini R. IoT Based Smart Kitchen Pantry. Current Trends in Information Technology. 2019; 9(2): 20–23p. Full Text: PDF DOI: https://doi.org/10.37591/ctit.v9i2.413
http://computerjournals.stmjournals.in/index.php/CTIT/article/view/413
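The abstract's core idea, measuring and selecting the proper quantity of each ingredient for a chosen dish, can be sketched as simple portioning logic. The following is a hypothetical illustration only: the recipe names, quantities, and function names are invented for this sketch and are not taken from the paper's implementation.

```python
# Hypothetical sketch of a smart-pantry portioning step: compare a dish's
# required ingredient quantities against current stock and report what to
# dispense and what is short. All names and amounts are illustrative.

RECIPES = {
    "vegetable rice": {"rice_g": 200, "oil_ml": 15, "salt_g": 5},
}

def plan_dispense(dish, stock, servings=1):
    """Return (dispense, shortfall) dicts for the requested dish."""
    needed = {k: v * servings for k, v in RECIPES[dish].items()}
    dispense, shortfall = {}, {}
    for ingredient, qty in needed.items():
        have = stock.get(ingredient, 0)
        dispense[ingredient] = min(qty, have)   # dispense what is on hand
        if have < qty:
            shortfall[ingredient] = qty - have  # flag what must be restocked
    return dispense, shortfall

stock = {"rice_g": 500, "oil_ml": 10, "salt_g": 100}
dispense, short = plan_dispense("vegetable rice", stock, servings=2)
print(dispense)  # what the pantry would measure out
print(short)     # what must be restocked
```

In a real deployment this logic would sit behind load-cell readings on a Raspberry Pi; here the stock dictionary stands in for those sensor values.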
The utility model provides an anti-deformation curved-surface shingle installation structure. The structure comprises a curved-surface wallboard that is wavy in form, with each wave unit spanning 1-2 cm and the wallboard being 2-3 cm thick. A plurality of flexible connecting blocks arranged in an array are attached to the back surface of the curved-surface wallboard. A conical groove is formed in the first connecting surface of each flexible connecting block, a conical block is arranged in the conical groove, and a gap is formed between the conical groove and the conical block. A fixing block is connected to the flexible connecting block; a circular base block is arranged on the second connecting face of the fixing block; a plurality of inserting plates, distributed at equal intervals around the circumference of the circular base block, are arranged on its side edge; the inserting plates are clamped into the gap, and the fixing block is fixedly connected to the wall surface base layer. Compared with the prior art, the structure effectively solves the problem of deformation of the curved-surface wallboard, improves its mounting efficiency, and prolongs its service life.
The Student Council is a body of students who represent their tutor groups and year groups, and who contribute significantly to the running of the School. They offer their opinions and ideas on a variety of matters and plan and deliver activities that help fellow students and the wider community. The Council meets fortnightly to discuss issues such as uniform, fund-raising events, rewards and charity support, and has contributed to school policy and ethos. Student Council representatives can be identified by different ties, which they all wear with pride. Student Leaders Leadership skills and responsibility are promoted through the adoption of a number of roles across the school. All those in such roles can be identified by the badges that they proudly wear: Senior Students Year 11 students support the discipline of their peers and younger students in the dining hall during the lunch hour, around the corridors and in the playground, helping the smooth running of school systems. They act as role models to younger students. Anti-Bullying Ambassadors Students from across the year groups have been trained to identify and support other students who might be being bullied. Their role is to ensure that incidents are reported and dealt with and any victims are supported appropriately. Buddy Readers Older students help Year 8 students who need support with their reading. The Accelerated Reader scheme enables students to test their understanding, and the support offered by the older students can help build skills and confidence. Sports Leaders Students have the opportunity to develop and practise their leadership skills through sport, supporting primary school activities throughout the year. Student Council Each tutor group from across the school provides at least one representative for the Student Council, supporting school policy, fund-raising and leadership. Department Leaders Departments adopt Departmental Leaders as they are required. 
Responsibilities include lunchtime and break supervision of departmental areas and support for staff as required.
https://www.khalsaacademiestrust.com/2115/school-council
Assessment of Treatment Plant Performance provides practical indications on how to undertake performance evaluation of water and wastewater treatment units, processes, and plants, using math and statistics as a tool. The objective is to approach the concepts based on the needs and challenges associated with treatment plant performance evaluation, and to use simple language to describe the mathematical calculations and statistical analyses that can be used for this purpose. Written primarily for graduate level students, the concepts and applications will be presented from practice to theory in a simplified and practical way, thus making the book equally beneficial to practitioners and policy-makers, who may have a limited background in math and statistics. Postdoctoral scientists and professors will also find it useful if they are involved in research projects that comprise the assessment of treatment performance. Introduction; Flow data; Sampling for water constituents; Laboratory analysis of water constituents; Loading rates; Descriptive statistics of flows, concentrations and loads; Frequency distribution of flows, concentrations and loads; Removal efficiencies; Compliance with discharge standards or targets for the effluent; Reliability analysis of treatment performance; Quality control for assessing treatment plant performance; Making comparisons between systems and operational phases; Influence of operational conditions on treatment performance; Reaction kinetics and reactor hydraulics; Assessment of the goodness-of-fit of mathematical models for the estimation of treatment performance. Basic Principles of Wastewater Treatment is the second volume in the Biological Wastewater Treatment series, and focusses on the unit operations and processes associated with biological... Wastewater Characteristics, Treatment and Disposal is the first volume in the Biological Wastewater Treatment series, presenting an integrated view of water quality and wastewater... 
This title is available as a free ebook PDF only. Biological Wastewater Treatment in Warm Climate Regions gives a state-of-the-art presentation of the science and technology of biological wastewater treatment, particularly... This research attempts to evaluate nitrification treatment performance in combined carbon/nitrogen municipal wastewater reactors using traditional physical/chemical methods and modern molecular... Water meters are the cornerstone of commercial systems for water utilities throughout the world; revenue is directly derived from the figures provided by meters. Despite this, little attention... As a result of an evaluation of biomass reduction technologies, anaerobic treatment was found to have potential for the lowest level of biomass production in the treatment of municipal and... Increasing demand for potable water in Colorado has forced drinking water utilities to consider utilizing water from lower quality sources. These lower quality sources...
https://www.iwapublishing.com/books/9781780409313/assessment-treatment-plant-performance-practical-guide-students-researchers-and
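Among the chapters listed above are loading rates and removal efficiencies. Both rest on two conventional formulas, which the short sketch below implements; the formulas are the standard ones used in treatment plant assessment, not excerpts from the book, and the example figures are illustrative.

```python
# Two standard treatment-plant calculations: mass loading rate and
# removal efficiency. Formulas are the conventional ones.

def loading_rate_kg_per_day(flow_m3_per_day, conc_mg_per_L):
    """Load (kg/d) = Q (m3/d) * C (mg/L) / 1000."""
    return flow_m3_per_day * conc_mg_per_L / 1000.0

def removal_efficiency_pct(influent_mg_per_L, effluent_mg_per_L):
    """E (%) = 100 * (Cin - Cout) / Cin."""
    return 100.0 * (influent_mg_per_L - effluent_mg_per_L) / influent_mg_per_L

# Illustrative example: 2,000 m3/d of influent at 300 mg/L BOD,
# with effluent leaving at 30 mg/L.
load_in = loading_rate_kg_per_day(2000, 300)   # 600.0 kg BOD/d
efficiency = removal_efficiency_pct(300, 30)   # 90.0 %
print(load_in, efficiency)
```

Note the unit conversion: mg/L times m3/d gives g/d, hence the division by 1000 to reach kg/d.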
Abstract—This study explores the internationalisation of higher education institutions in the UK. First, the varying meanings and interpretations of internationalisation are examined, along with its relationship to terms such as globalisation and internationalism. The concept of “integrated internationalism” is introduced. Variations in institutional rationales for internationalisation, and the influence of national attitudes, are explored. The empirical research project then offers a snapshot of institutional internationalisation in the UK in 2005. It explores, via a predominantly qualitative, mixed methods approach, variations in interpretation and focus among UK HEIs. Institutional motivations are probed via a national survey, revealing that economic and prestige-orientated rationales tend to dominate, with social and academic rationales playing a lesser role. A subsequent comparison across three institutional case studies yields insights into the ways in which the ethos of internationalism is integrated with institutional mission and how the latter affects an institution’s international priorities. Through interviews and documentary analysis, both public and private faces of the institutions are illuminated, resulting in three distinctly different profiles. Common and contrasting themes are drawn out, reflecting some of the nuances of mission and values. From these are derived some recommendations and questions for consideration by leaders, policymakers and practitioners in institutions which are serious about internationalisation. A practical tool is proposed, which has the potential to help institutions interrogate their motivations for internationalisation as a prelude to strategy development. In light of the research, a revised interpretation of “integrated internationalism” is also suggested. 
The thesis concludes with a summary of my own personal development during the course of the DBA, which prefaces an update on recent, significant national developments related to internationalisation and a justification of the continued validity and relevance of the findings of this study.
https://researchportal.bath.ac.uk/en/studentTheses/integrated-internationalism-in-uk-highereducation-interpretations
Do you love all kinds of foods? Do you crave sweets? Do you like food from different countries? Have you ever wanted to cook these types of dishes at home? It is time to find answers to your questions and improve your skills. Follow the guidelines below to get an idea of how to create delicious meals. If you have decided that you might like to cook more food at home, look for a great book that can help you. These books can be found in many places: the library, online, or even from a family member. Be as patient as you can be when you are learning how to cook. If you use metal skewers, the square or twisted kind hold food in place much better than round ones. If you are going to make a stir-fry dish, slice the meat on the bias, as thinly as possible. Getting the perfect cut can sometimes be a challenge. Take the meat out of the freezer before it is fully frozen, while it is still firm to the touch. Next, position the meat at 45 degrees to the chopping board and slice across the grain with a sharp knife. Your spices should be stored in an area that is free of light. Using fresh spices will make tastier meals. Have you found yourself regretting throwing away moldy fruit? Is it safe to cut around the moldy area and save the remaining fruit for consumption? It is unfortunate, but it is not safe to save fruit that is partly rotted. The mold may have spread further than you can see, and consuming the fruit could make you very ill. Apples are a popular choice for eating in autumn or winter, but they will spoil quickly if not stored properly. To keep apples at the peak of perfection, you should keep them in a plastic bag at a cool temperature. One rotten apple will spoil the whole bag, so keep an eye on them while they are stored. A good seasoning mix can be used on a variety of foods, not just meat. Spread it on roasted pumpkin seeds to make yourself a savory snack. 
Another way to use it is on your scrambled eggs. Anyone who tastes these is sure to wonder and ask if you have a special or secret ingredient. Do you enjoy using fresh basil? Place some basil into a clean glass. Fill the glass with water until it covers the stems. You can let it sit on your counter and it will keep for weeks. The basil will even grow roots if you change out the water regularly. Trim the basil occasionally so it grows even more. Saute vegetables in some chicken broth for a healthier option. Broth will fill your dish with flavor. This cooking method also allows you to eliminate unhealthy oils. This is a nutritious and delicious way to cook vegetables. Do not soak mushrooms in water before cooking: they will act like sponges and soak up a lot of that water. Instead, wipe the mushrooms off with a damp cloth. If you're feeling stressed at the idea of making dinner for the family, try to do most or all of your prep work the evening before cooking. You can marinate the meat, make a sauce, and cut up some vegetables and onions before going to bed. With far less work left, you'll be eager to pick up where you left off at dinner time. Plan to make a big pot of stock in order to freeze and store it. Good homemade chicken stock is a wonderful base in soups, stews and other dishes. Being highly organized is necessary when you are preparing several dishes at the same time. If not, you are sure to burn or overcook something. Keep your kitchen in order. An unorganized cooking station may mean you lose money and valuable food that is better suited to eating. Beans and tofu are excellent sources of protein; try adding some to your diet. Both of these protein-rich foods are available at almost every grocery store. Try pan-frying tofu and you will have a tasty alternative to meat. Beans may be boiled with some spices and herbs for a delicious protein source. When it's time to chop fresh herbs, you should sprinkle a bit of salt on your cutting board first. 
This gives extra flavor and helps to hold the herbs in place. To avoid serving a dish that is too salty, do not add extra salt to the meal you are preparing. The salt will stick to the herbs and make their flavor stronger. If the recipe calls for water, you could easily use chicken broth or juice when suitable. Instead of milk, you can substitute yogurt or buttermilk. Using liquid substitutions in your cooking can add nutritional content to certain dishes and give a standby dish a new flavor. When making salsa to be eaten later, rinse the onions after you dice them, and use a paper towel to blot them dry. If you use fresh onions, they are going to release a sulfurous gas, and your salsa could be ruined because of it. Rinsing the diced onions in cold water and drying them first will eliminate this gas. You should establish a habit of washing the dishes as they are used. Make all necessary preparations for whatever you are planning on cooking before you start cooking. Otherwise, you might use the stove longer than you need to, and food that is left waiting is easier to burn. Use the advice in this article to bring you one step closer to cooking delicious meals. Being adventurous with the spices you use when cooking is important. You might even discover a new food that turns out to be your favorite! Unleash your inner chef by following both the tips above and your own taste buds. It is important to properly care for your wood cutting board if you want it to last a long time. Cutting boards made of wood can often warp due to climate. Wash the board, but just go over it lightly with soapy water; don't submerge the board in the dishpan. Bring a damaged wood board back to life with regular oiling, using only oil made for cutting boards. Before using it again, make sure the oil is completely dry.
https://cookingblogs.info/take-advantage-of-the-available-cooking-information-3/
Two Northeast teachers honored during Inclusive Schools Week HASTINGS, Neb. (Press Release) - Kitt Wells and Jenny Knipping were honored by Down Syndrome Advocates in Action during Inclusive Schools Week for promoting inclusion at Northeast Elementary school. The teachers were nominated by the family of Jordyn Lucius, a student at the school, writing: “Kitt is an understanding, compassionate, and loving human being! She has grown a bond with Jordyn that is so rich with love that Jordyn fondly says she is her best friend. Kitt not only offered to transfer with Jordyn when we moved to a new school district, but she is always helpful in assisting Jordyn’s needs and always makes her feel comfortable and included. Because of this, Jordyn is making insane strides with her academics and talking/signing. We are blessed to have Kitt – she is an asset to the school and to Jordyn!” In addition to Kitt, the family wrote about Jordyn’s teacher, Jenny Knipping, stating: “Jordyn is the only kid with Down syndrome in her school. Her teacher Mrs. Knipping has opened a line of communication to help Jordyn transition into a new school, teachers, students, and setting. She has gone above and beyond to ensure Jordyn will be successful this school year! We are blessed to have her this school year!” According to Down Syndrome Advocates in Action, this annual event was created to celebrate the progress schools have made toward providing a quality education for our increasingly diverse student population and those students marginalized due to disability, gender, socio-economic status, cultural heritage, language preference and other factors. Inclusive Schools Week provides an opportunity for educators, students and parents to discuss and initiate practices to further ensure schools continue to improve their goal of successfully educating ALL children. 
The award letter states, “This is a wonderful opportunity to involve all teachers and students in recognizing and embracing the differences in each of us. There are several free resources and celebration ideas on the http://inclusiveschools.org website that heighten awareness of inclusion and celebrate our diverse classrooms.”
https://www.ksnblocal4.com/2022/12/10/two-northeast-teachers-honored-during-inclusive-schools-week/
[Autism spectrum disorder and genes for synaptic proteins]. Autism spectrum disorder (ASD) is characterized by impaired social interaction and communication, and restricted interests. It is generally accepted that ASD is caused by abnormalities in the structure or functions of the brain. Recent genome-wide analyses have identified copy number variations (CNVs) of neuronal genes in the genomes of ASD patients. CNV is a commonly observed phenomenon in human beings. During the first cell division of meiosis, irregular crossing over between homologous chromosomes results in loss or duplication of a segment. From 2007 to 2010, several groups performed large-scale screenings of CNVs in ASD genomes. Genes affected by CNV, de novo CNVs, and rare CNVs were more prevalent in ASD. The results highlighted the CNVs of many neuronal genes associated with ASD. A fraction of these genes had previously been identified in ASD, but some were newly identified in each study. The CNVs implicated in ASD include neuronal genes belonging to four classes. These genes encode (1) neural adhesion molecules, including cadherins, neuroligins, and neurexins; (2) scaffold proteins such as SHANK3; (3) protein kinases and other intracellular signaling molecules; and (4) proteins that regulate protein synthesis. In general, these proteins play a role at the synapses of glutamatergic neurons. The CNVs detected in the genomes of ASD patients imply a link between synaptic proteins and the pathological characteristics of ASD. Altered protein dosage caused by the CNVs may alter the functional quality of ASD patients' synapses, and may consequently affect their development of language and communication skills. There are two types of ASD: one is sporadic and the other is familial. According to some reports, de novo CNVs are more frequently observed in sporadic-type ASD. 
However, it is generally understood that a combination of particular CNVs and other possible mutations underlies the pathology of ASD regardless of type. The major symptoms of ASD are often treatable with behavioral intervention during early childhood. An early diagnosis, followed by an early start of treatment, is crucial for the development of language and communication skills. Further and broader research on genomes will eventually provide information on the biological characteristics of ASD, as well as on specific ASD genotypes, thus aiding in the establishment of optimal treatment and medication to meet the biological condition of each patient.
Ideally, the exhaust gases accelerating through a convergent nozzle should reach Mach 1 at the lip of the nozzle, with the exit pressure reduced to ambient. This condition gives the maximum obtainable momentum thrust and zero pressure thrust, and occurs when the jet-pipe gas pressure is about 1.85 times ambient pressure. If the pressure is lower than this, the gas will not expand enough to give the full momentum thrust; this would occur if the nozzle area were too large. If the jet-pipe pressure were too high, the nozzle would increase the jet-stream velocity, and engine thrust, until the nozzle choked. Gas pressure would remain in the exiting jet stream and the expansion would then occur behind the nozzle, producing some pressure thrust and some wasted energy. This condition occurs if the nozzle area is too small. The back-pressure felt in the jet pipe as a result will, as already stated, reduce the turbine pressure ratio and push the compressor towards the stall. One question often asked is: how can an aircraft fly faster than Mach 1 if the jet-stream velocity cannot exceed Mach 1? The answer is simple: the speed of sound in air is temperature-dependent. The temperature of the gas at exit is higher than ambient, so the speed of sound at the lip of the nozzle is higher than the ambient value. The jet-stream velocity is therefore higher than the ambient speed of sound in the air the aircraft is flying through. If the exhaust gas velocity were to reach Mach 1 inside the convergent propelling nozzle, a choked-nozzle condition would occur: a Mach 1 gas flow expands faster radially than it does axially, and this causes the nozzle to choke. Once choking occurs, the downstream gas velocity cannot increase any further from the value at which the choke occurred, regardless of the upstream conditions. This means that the nozzle will under-expand the gas, leaving residual gas pressure in the exit stream.
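The 1.85 figure and the temperature dependence of the sonic speed can both be checked with a short calculation. This is an illustrative sketch, not part of the original text: it assumes ideal-gas relations, a ratio of specific heats of roughly 1.33 for hot exhaust gas, and a gas constant of 287 J/(kg·K).

```python
import math

def speed_of_sound(T, gamma=1.33, R=287.0):
    """Sonic speed a = sqrt(gamma * R * T) for an ideal gas (T in kelvin)."""
    return math.sqrt(gamma * R * T)

def critical_pressure_ratio(gamma=1.33):
    """Jet-pipe-to-ambient pressure ratio at which a convergent nozzle chokes:
    p_jet / p_ambient = ((gamma + 1) / 2) ** (gamma / (gamma - 1))."""
    return ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

# For hot exhaust gas (gamma ~ 1.33) the choking ratio comes out near 1.85,
# matching the figure quoted in the text.
print(round(critical_pressure_ratio(1.33), 2))     # 1.85

# The sonic speed at a hot nozzle exit is well above the ambient value,
# which is why the jet stream can exceed the ambient Mach 1.
print(round(speed_of_sound(288.0, gamma=1.4), 1))  # ambient air, 340.2 m/s
print(round(speed_of_sound(900.0), 1))             # hot exhaust, roughly 586 m/s
```

Note how raising the gas temperature raises the sonic speed, which is exactly the mechanism the reheat discussion below relies on.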
As the gas pressure at exit is now higher than ambient, the difference between the exit pressure and ambient pressure acts on the propelling nozzle area to create a forward thrust called pressure thrust. The nozzle chokes when the gas flow becomes transonic, a condition in which both sonic and subsonic flow exist in the exhaust system: Mach 1 at the propelling nozzle with subsonic flow upstream. Once a nozzle is choked it can only be un-choked if either the exit velocity is reduced or the gas temperature is increased. As the speed of sound is temperature-dependent, an increase in jet-pipe gas temperature raises the sonic speed value; reheat systems do this and permit the jet-stream velocity to rise to the new, higher Mach 1 value.
CONVERGENT-DIVERGENT NOZZLES
We know that the jet-stream velocity behind a choked nozzle cannot increase, and that an expansion takes place behind the nozzle. Concorde flies at Mach 2, and no amount of pressure thrust will achieve that, so something else is needed. You have read that a Mach 1 gas flow expands faster radially than it does axially. Think of a greased metal funnel: if you were to push a flexible rubber ball down into the apex of the funnel and release it, what would happen? The ball should eject itself from the throat of the funnel. The sideways, or radial, expansion of the ball would push on the inclined surface of the funnel and produce a reaction force which ejects the ball. If a divergent duct were positioned aft of the lip of the convergent propelling nozzle, we could create the same effect: the expanding exit gases would push against the wall of the divergent duct and the resultant reaction force would cause the gases to accelerate rearwards. There would also be a component of forward reaction thrust created on the sloping walls of the divergent nozzle.
The con-di nozzle is used to produce a jet-stream velocity increase behind a choked nozzle, making maximum use of the existing pressure.
VARIABLE AREA NOZZLES
The only case where you will encounter these nozzles is on Concorde, so we will first deal with that type. The Concorde exhaust system fulfils five major functions:
1. It maintains the correct engine back-pressure condition at subsonic and supersonic airspeeds using a variable-area primary propelling nozzle.
2. It maintains the best propulsive efficiency at subsonic and supersonic airspeeds using a variable-area secondary nozzle.
3. It maintains the correct engine back-pressure during reheat operation.
4. It provides thrust reversal for braking on landing.
5. It provides for noise reduction.
We are only concerned here with the first three requirements. At subsonic airspeeds the jet-stream velocity will be at or below Mach 1, so a normal convergent propelling nozzle will provide the correct exhaust gas expansion to achieve the required jet-stream velocity. As the aircraft passes through the transonic airspeed range, reheat is used to raise the thrust value and overcome the increasing airframe drag. When reheat is initiated, the jet-pipe pressure would rise beyond limits and cause an engine surge if the propelling nozzle area remained unaltered, so the exhaust nozzle area must be increased to control the gas pressure. The propelling nozzle has a series of 36 moveable flaps connected by links and actuated by pneumatic rams. As the reheat fuel flow increases, the rams progressively open the flaps to keep the ratio of jet-pipe pressure to compressor delivery pressure within safe limits.
https://www.gasturbineengine.online/convergent-divergent-variable-nozzles/
It is sobering to think that the medieval and Renaissance paintings that fill our galleries represent just a fraction of the artistic output of that period. Panel paintings – not to mention exquisitely fragile wall paintings – have for the most part succumbed to the ravages of time, and those not destroyed by fire or flood, acts of war or vandalism, or abortive attempts at restoration have simply faded, darkened or discoloured. Safely tucked away in libraries, illuminated manuscripts have survived in far greater numbers and, as such, form the most substantial, if most easily overlooked, legacy of medieval and Renaissance visual culture. The bland anonymity of a bound volume shelved amongst thousands was not much of a draw for the vandals and looters of the past, and served to shield the richly decorated pages from light and the elements. Of the world’s many illuminated manuscript collections, that of the Fitzwilliam Museum in Cambridge is reputedly the finest, the books cocooned in fenland isolation, the terms of the founder’s bequest ensuring that much of the collection remains forever inside the museum (pictured right: The Macclesfield Psalter, c.1330-1340). In this spellbound state, the Fitzwilliam perpetuates the conditions that have kept these books safe for centuries, and the knowledge of this makes looking at them a strangely timeless experience. In galleries darkened to protect light-sensitive pigments, pages embellished with gold and silver leaf twinkle convincingly, just as they must have done when seen by candlelight. Unlike our encounters with easel paintings, there is an authenticity of experience that comes from the simple interaction between viewer and book, a relationship so fundamental that it seems innate. Even so, there are considerable difficulties involved in the display of books, and glass cabinets inevitably introduce a frustrating barrier to the appreciation of these exquisitely detailed objects. 
The museum has chosen this bicentenary year to celebrate not only the riches of its collections, but its pioneering work in recording and investigating its illuminated manuscripts which span the sixth to the 16th centuries. Its crowning achievement is Illuminated, an ongoing project to record the collection in a free, publicly accessible database combining high-resolution images with a vast array of information in a well-designed, easily navigable format. It includes the results of new, non-invasive analytical techniques which have yielded fascinating and unexpected information about the materials and methods of manuscript illumination. Illuminators were traditionally thought to have used a fairly limited range of pigments, but the Cambridge project has shown the opposite, with artists deploying the sort of varied and complex mixtures more often associated with easel painting. The discovery of smalt, a cheap blue pigment made by grinding up glass, in a Venetian manuscript, suggests an artist as pragmatic and versatile as any panel painter, making use of a local, readily available material. In another beguiling example, an eagle marks the beginning of an eighth-century St John’s Gospel, the intricate but spare design with large areas of blank parchment typical of manuscripts made at Lindisfarne. We are told that the organic purple used to colour the eagle’s head is derived from a lichen found locally, over which yellow orpiment has been applied in dots. The contrast between the local purple, and the rare, imported yellow is evocative, and shows that for all its isolation Lindisfarne was part of an international trade network. But it also shows the technical expertise of the Lindisfarne illuminators, who knew that the organic purple base would prevent the deterioration of the orpiment, an unstable pigment that would otherwise tend to turn black. 
While all of this is beautifully explained in the exhibition, the book itself is displayed so far back in its case, and with lighting so low, that the painted details are simply not discernible, and can only be appreciated by consulting a reproduction. Other revelations are equally problematic because they rely too heavily on lengthy captions and are essentially non-visual. Infra-red imaging has revealed instructions written in Dutch or German beneath the lavishly decorated frontispiece of a 15th-century Parisian encyclopaedia (pictured above), showing that the illuminator and his workshop were not French but immigrants from the north. It is interesting, certainly, but it is not what we see in front of us, and for that reason it sits far more comfortably in the catalogue or, for that matter, in the Illuminated database. It is an unfortunate irony that an exhibition of books should be dominated by captions and wall texts so long and numerous that they threaten to become the principal focus, but The Three Living and the Three Dead from a French Book of Hours, c.1490-1510 (main picture), big and bold enough to be enjoyed despite the glass, is a reminder that reading can wait. Above all, an illuminated manuscript like this is a visual experience, a fine piece of painting that uses the format of the book as a dramatic device to fascinate and delight the viewer. The Fitzwilliam has a huge amount to celebrate and be proud of, and there are the makings of several more focused exhibitions here, shaped by the museum's groundbreaking research programme. This is a rare and wonderful opportunity to see some exquisite treasures: buy the catalogue and do the reading when you get home.
https://www.theartsdesk.com/visual-arts/colour-fitzwilliam-museum-cambridge
John McEntee: U20 change is snow joke for players and their clubs MY eldest daughter is in first year of secondary school where her history lessons are about the Norman invasion of Ireland. She talks with great interest of how the ports on the south-east coast were invaded by English soldiers and Norman mercenaries and how Connacht's High King Ruaidrí Ua Conchobair provided the last line of defence. I'm sure those were difficult times when the name King Henry was derided – unlike his Kilkenny namesake 900 years later. In a strange sort of way, history has repeated itself this week. Storm Emma combined forces with a Siberian snow storm, aptly named ‘The Beast from the East', to bring the province of Leinster to a standstill before roaring across the remaining 20 counties. Curiously, the name Emma was first introduced to England at the time of their Norman invasion. As Yogi Berra would say, that's too coincidental to be a coincidence. From the GAA's perspective, its visit was most unwelcome and its duration overstayed. The master fixtures schedule is in disarray so early in the year. Millions has been invested in improving pitch drainage but when Mother Nature decides to empty the lining of her pockets and cover the land in snow there is no solution. Give credit where it is due, it was the correct decision to cancel games. Player safety is paramount, not to mention the welfare of the travelling supporters. It needs to be asked what impact a tight schedule will have on the remainder of the season and who will be most impacted. Inter-county football games are being rescheduled for this weekend. I've no issue with that – it's a free weekend in the calendar. What is happening, however, is that some counties are obligated to reschedule their club matches to accommodate the inter-county fixtures chaos. It is a small example of how clubs are immaterial in the bigger picture. 
I haven't heard anyone, including the new President, say that as we reschedule these matches we ought to think about what impact, if any, this might have on the club scene. In Ireland we awaken to rainfall; we doze off to its drip, drip, drip. We take umbrellas to the beach in case it rains. We camp out indoors, such is the intensity of the rain. It is endless and inevitable. An English farmer who doubles as an amateur weather expert predicted snow in January and snow last week. He suggested on morning TV that further snow is likely later in March and that we are likely to experience a wet, disruptive summer. As we look beyond the April showers towards the busy months of May to August, I fear that the club scene will be squeezed into its little box of insignificance. One competition which has received little attention thus far is the U20 championship. Sixty-eight percent of Congress in 2017 agreed to replace the U21 competition with an U20 competition from this year. It formed part of a move to change the minor grade to U17, the rationale being that unrealistic demands were being placed on our young players, particularly at a really important developmental stage of their careers. So now, rather than preparation for three A-levels being affected, kids are trying to study 10 GCSE subjects while participating in U17 training. This hardly constitutes easement. The U20 competition was also moved to later in the year to prevent clashes with university football and League games, yet there was no mention of its impact on clubs preparing for league and championship. While it was hoped the changes would invigorate the competition, you now have the situation where players who are the correct age but excel and play for the senior team can't take part in the underage championship. It has become a sub-standard competition. Young men in small numbers are training three times per week for six months to play one game.
These players would be much better served by playing with their schools and universities in January-February and then playing with their local team-mates in their clubs for the remainder of the season. The U20s is a development stage on one's progression into senior grade. The assertion is that they are granted access to specialist training, nutritional advice, and so on. The reality is somewhat different. These young players we've identified as being most at risk of burnout are being asked to make difficult choices between serving many masters, training six times a week and somehow fitting in time for school and study. Until this competition is removed from the calendar, huge pressure will remain on these young men and the rate of injury and disillusionment will continue to rise. Is there a temporary solution? These young men ought to be club men first. U20 managers, as development officers, ought to place the needs of these young men before success at this grade and with some innovative thinking can incorporate development training around the club schedules. Weekend training camps could replace three sessions per week. These kids are technically savvy. Strength and conditioning, nutrition and so on can be digitally monitored. Their time is precious, it should not be wasted, otherwise their participation will disappear just like this fall of snow.
http://www.irishnews.com/sport/2018/03/08/news/john-mcentee-u20-change-is-snow-joke-for-players-and-their-clubs-1273051/
The cost of a two-year legal battle over a £36.50 cake is set to hit almost £180,000. Senior judges yesterday threw out an appeal against a ruling that a family-owned bakery's refusal to make a cake endorsing same-sex marriage was discriminatory. Ashers declined to make the cake iced with the slogan "Support Gay Marriage" as it conflicted with the owners' Christian beliefs. The McArthur family will decide on their next course of action in the coming days after losing their bid to overturn the landmark judgment that found them guilty of discrimination. Three Court of Appeal judges found the company had discriminated against gay rights campaigner Gareth Lee on grounds of sexual orientation by refusing to make the cake two years ago. Ashers general manager Daniel McArthur said he was "extremely disappointed" with yesterday's ruling, adding that the family would now take advice before deciding on any further legal action. A decision is expected by Friday. However, DUP MLA Jim Wells, who described the judgment as "an awful decision", said he would be encouraging the McArthurs to take the issue as far as they can. Last night it emerged the cost of the case is approaching £180,000. The Equality Commission is asking for an estimated £88,000 following the appeal. Legal sources said a similar amount is likely to have been spent on the challenge by the McArthurs, whose costs have been covered by the Christian Institute, bringing the total bill to around £176,000. It could rise further still if the McArthurs decide to continue their challenge. Speaking outside the Court of Appeal yesterday, Mr McArthur said that equality law in Northern Ireland would have to change if it meant people could be punished for politely refusing to support others' causes. "But now we are being told we have to promote the message, even if it is against our conscience. What we refused to do was to be involved with promoting a political campaign to change marriage law in Northern Ireland." 
Ashers, which has six shops in the greater Belfast area, was initially taken to court for refusing to bake the cake promoting same-sex marriage, which also featured two Sesame Street characters. Mr Lee, a member of the LGBT group Queer Space, was supported by the Equality Commission in his case against the bakery. He spoke of his relief that the appeal had gone in his favour. Mr Wells, meanwhile, told the Belfast Telegraph that the case must now be referred to the Supreme Court and, if that fails, the European Court of Human Rights. "I urge the McArthurs not to give up and if they do decide to launch a further appeal, I think the people of Northern Ireland should rally round and fund that appeal," he said. In their legal battle to overturn the ruling, the McArthur family won the support of the Attorney General, John Larkin QC. During the hearing in May, he argued in court that the McArthur family was entitled to constitutional protection for turning down a customer's order based on their personal religious beliefs. Reacting to the decision yesterday, a spokeswoman for Mr Larkin's office said it "was a very careful judgment" to which he is going to give "very careful consideration". From initial order to a tussle in the courts... how the long-running dispute unfolded May 9, 2014: Gareth Lee places an order at Ashers. May 11: Mr Lee is informed the order cannot be completed and offered a full refund.
Decreased salt intake in Japanese men aged 40 to 70 years and women aged 70 to 79 years: an 8-year longitudinal study. It is not known whether salt intake decreases over time within the same population. This study describes salt intake over 8 years according to age group, and examines whether salt intake changes over time in community-dwelling middle-aged and elderly Japanese subjects. Data were collected as part of the National Institute for Longevity Sciences Longitudinal Study of Aging. Participants included 544 men and 512 women who participated in and completed all nutrition surveys from the first (1997-2000) to fifth (2006-2008) study waves. Each study wave was conducted over 2 years; for each individual, the entire follow-up period was 8 years. Salt and energy intakes were calculated from 3-day diet records with photographs. A mixed-effects regression model was used for the analysis of repeated measures of salt intake. Mean age and salt intake at first participation in the survey were 56.5 ± 9.3 years and 12.8 ± 3.3 g/day in men and 55.8 ± 9.4 years and 10.6 ± 2.5 g/day in women, respectively. Mean energy intake decreased in men and women in all age groups from the first to fifth study waves. Eight-year longitudinal data showed that salt intake decreased in men. In analyses stratified by age, mean salt intake in men decreased by 0.08 g/year among 40- to 49-year-olds, 0.09 g/year among 50- to 59-year-olds, 0.16 g/year among 60- to 69-year-olds, and 0.14 g/year among 70- to 79-year-olds. For women, mean salt intake decreased by 0.08 g/year among 70- to 79-year-olds (P = 0.098). After adjusting for energy intake, salt intake was decreased among 60- to 69-year-old men (P = 0.049) and increased among 50- to 59-year-old women (P = 0.015). Absolute salt intake decreased among all age groups from 40 to 70 years in men and from 70 to 79 years in women. An increased focus on reducing energy intake resulted in only a modest decrease in salt intake.
Although we observed a decline, salt intake still exceeded recommended levels. Efforts that focus on salt reduction are needed to address this important public health problem.
Student project captures local people's stories from the streets of Edinburgh. Student social enterprise Our SpeakEasy has been collecting anonymous handwritten stories from strangers throughout Edinburgh, showcasing the incredible diversity this city boasts, as well as the vulnerabilities people are willing to share. The project encourages individuals to share a piece of their lives on a sheet of A4 paper. All stories are collected spontaneously, anonymously and in person. Everyone is given a pen and a blank canvas to share anything they like, and the result is a heart-warming collection of local lives. The stories are showcased in various cafes-turned-story-galleries across the city. The purpose of this project is to open up a slice of the world around us. After reading the first hundred or so stories, I've noticed that this project has the additional benefit of allowing people the cathartic release of sharing some very personal past events. SpeakEasy Magazine SpeakEasy has created a beautiful magazine which collates a selection of the stories. Curated by editor Alejandra Jimenez de Luis, each month touches on a different theme of what makes us human. The magazine is a collaborative effort bringing together the stories of Edinburgh residents and art produced by creatives whose work features in the magazine. The publication also includes articles about the strangers who live in Edinburgh, similar to Humans of New York, a photoblog and book of street portraits and interviews collected on the streets of New York City. You can purchase a copy of the magazine here: https://www.ourspeakeasy.com/magazine StorySlam Our SpeakEasy also holds storytelling nights where people are invited to come together to listen to stories or share their own experiences live on stage. You can find out more via the link to their Facebook page below.
https://www.ed.ac.uk/local/projects/our-speakeasy
Time travel brings a girl closer to someone she's never known. Sixteen-year-old Kiku, who is Japanese and white, only knows bits and pieces of her family history. While on a trip with her mother to San Francisco from their Seattle home, they search for her grandmother's childhood home. While waiting for her mother, who goes inside to explore the mall now standing there, a mysterious fog envelops Kiku and displaces her to a theater in the past where a girl is playing the violin. The gifted musician is Ernestina Teranishi, whom Kiku later confirms is her late grandmother. To Kiku's dismay, the fog continues to transport her, eventually dropping her next door to Ernestina's family in a World War II Japanese American internment camp. The clean illustrations in soothing browns and blues convey the characters' intense emotions. Hughes takes inspiration from her own family's story, deftly balancing complicated national history with explorations of cultural dislocation and biracial identity. As Kiku processes her experiences, Hughes draws parallels to President Donald Trump's Muslim ban and the incarceration of migrant children. The emotional connection between Kiku and her grandmother is underdeveloped; despite their being neighbors, Ernestina appears briefly and feels elusive to both Kiku and readers up to the very end. Despite some loose ends, readers will gain insight into the Japanese American incarceration and feel called to activism.
https://www.kirkusreviews.com/book-reviews/kiku-hughes/displacement-hughes/print/
Bio-energy is an umbrella term for renewable natural organic material (wood, crops, or other feedstocks such as food or agricultural wastes) used to create electricity, heat and transport fuel. The term biomass usually applies to solid biofuel, whereas biogas applies to liquids and gases. Current or 'first-generation' biofuels, such as ethanol and biodiesel, are produced from sugar, starch and plant oils; newer biofuels from other sources are under development and are termed 'advanced biofuels'. Biogas includes biomethane, obtained through the anaerobic digestion of plant matter. In 2015 these forms of bioenergy together accounted for 70.72% of the UK's renewable energy across electricity, heat and transport. "The UK is legally bound to provide for 15% of its energy needs — including 30% of its electricity, 12% of its heat, and 10% of its transport fuel — from renewable sources by 2020. We expect the Government will surpass the electricity sub-target, but success in this sector may not compensate for under performance in heat and transport. It is not yet halfway towards 12% in heat and the proportion of renewable energy used in transport actually fell last year. On its current course, the UK will fail to achieve its 2020 renewable energy targets." Energy and Climate Change Committee Report September 2016 Of the 16.7 million tonnes of oil equivalent of primary energy use accounted for by renewables, 12.1 million tonnes was used to generate electricity, 3.5 million tonnes generated heat, and 1.0 million tonnes was used for road transport. Renewable energy use grew by 20% between 2014 and 2015 and is now over six and a half times the level it was at in 2000.
https://www.ionacapital.co.uk/our-market
Item Profit by Department Report
This report shows each individual item sold during a specific date range. Since it shows each sale of an item separately, it is recommended for small-volume, large-value items rather than high-volume, small-value items. The items are grouped by department, category and group. For each item the report shows the nett quantity sold, revenue, GP and nett GP for sales and returns. Nett GP is the GP plus any expected rebates. The Adjustments column shows the quantity, revenue and GP for returned items. Supports selection of data by PDA import. A new sort parameter has been added to allow sorting by Department or Supplier. This report prints on A4 paper in landscape mode.
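The column definitions above (nett quantity, GP, and nett GP as GP plus expected rebates) can be expressed directly in code. The following is a hypothetical sketch; the record fields and sample figures are invented for illustration and are not part of the product:

```python
# One record per sale or return of an item; negative quantities are returns.
records = [
    {"qty": 1, "revenue": 500.0, "cost": 380.0, "rebate": 20.0},
    {"qty": 1, "revenue": 520.0, "cost": 380.0, "rebate": 20.0},
    {"qty": -1, "revenue": -500.0, "cost": -380.0, "rebate": 0.0},  # a return
]

nett_qty = sum(r["qty"] for r in records)
revenue = sum(r["revenue"] for r in records)
gp = sum(r["revenue"] - r["cost"] for r in records)  # gross profit
nett_gp = gp + sum(r["rebate"] for r in records)     # GP plus expected rebates

print(nett_qty, revenue, gp, nett_gp)  # 1 520.0 140.0 180.0
```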
http://pizzarello.com/requestreportitemsitemprofitbydeptrep.htm
Vaxcom Services Inc. (an Xator Corporation National Security Solutions Company) is a niche provider of threat mitigation and intelligence related support services. We are recognized experts in the Intelligence Community with subject matter expertise in the areas of Security/Threat Mitigation, Technical/IT Services, Operations, and Intelligence Services. ITSM Event Process Analyst Chantilly, VA The Information Technology Service Management (ITSM) Event Process Analyst supports the design, deployment, and operations of IT Infrastructure Event processes. The Event Process Analyst supports the deployment of process and procedures and assists in IT Infrastructure governance and operations. What You’ll Get to Do: - Supports the planning, design, and implementation of ITSM Event processes - Assists with the deployment and transformation of IT infrastructure processes and procedures - Provides analysis and support for on-going service delivery, performance, and governance operations - Works with key process stakeholders to capture process policies, work instructions, and knowledge expertise - Supports monitoring services throughout the ITSM process life cycle; verifying adherence to specified process requirements and supporting quality assurance activities (Monitor) - Assists with analysis, evaluation, and assessment leading to development of recommendations for process improvements, optimization, and/or development efforts for IT processes (CSI) - Supports evaluations and quality assessments for proper implementation of processes to meet quality standards - Facilitates TEMs and other requirement gathering work sessions - Works as a self-starter who delivers high quality work and can adapt to new challenges, either on their own or as part of a team You’ll Bring These Qualifications: - An Active TS/SCI Clearance with Polygraph - Degree or equivalent experience and minimum 3 years of related work experience - Working knowledge of the Service Management workflows and ITSM processes
- Relevant experience as an ITSM Event Process Analyst in programs of similar scope, type, and complexity - Good written and communications skills with the ability to clearly document and explain business processes These Qualifications Would be Nice to Have: - Working knowledge of the Service Management workflows, and ITSM processes - Experience with network devices - Be able to read devices and throughputs, ascertain false positive readings, and determine if reported events are actionable vice false-positives - ITIL certifications or training - Familiarity with ServiceNow, SRS or other ITSM management tools - Familiarity with Tableau Clearance Requirement: Active TS/SCI clearance with a polygraph is required. On September 24, 2021, the U.S. Government’s Safer Federal Workforce Task Force issued “2021 Guidance for Federal Contractors and Subcontractors” mandating that Covered federal contractors, including Xator Corporation, and their employees shall be fully vaccinated for COVID-19. Therefore, this position may require individuals to be fully vaccinated (2 weeks past final dose) or have been granted a religious or medical accommodation by December 8, 2021 or shortly thereafter. Equal Opportunity Statement Xator Corporation, and its Subsidiaries, provides equal opportunity to all applicants for employment as required by and/or consistent with applicable country law and company policy. Consistent with the foregoing, Xator Corporation provides qualified applicants consideration for employment without regard to race, color, religion, sex, national origin, age, disability, veterans’ status, citizenship, sexual orientation, gender identity or any other status(s) protected by law. In the United States, Xator Corporation ensures nondiscrimination in all programs and activities in accordance with Title VI of the Civil Rights Act of 1964.
https://www.ziprecruiter.com/c/Xator-Corporation/Job/ITSM-Event-Process-Analyst/-in-Chantilly,VA?jid=7dba1c57d392e58c&lvk=G19ujVI0Nwq3gyz620T1xA.--MXkSENm-c&tsid=152016386
in the late 1990s, i read your money or your life, a book about rethinking your relationship with money. i was very interested in it at the time, but set it aside and didn’t do much about it. i did, however, keep complete financial records in the meantime (every penny i received or spent since 1998, from paychecks to nickels in parking meters), so when i started this page in 2001, i was able to plot this graph going all the way back to then. there’s a lot to the book, and just plotting this graph shouldn’t be seen as the whole of it, but it is a large part. here’s what it means:

one of the messages of the book is that everyone is going to be financially independent at some point in his or her life. for some people, that will be at retirement. for others, at death… if you’re going to get past the need to work for money at some point, why not do it asap, while you still have the youth and health to do what you want?

the authors’ suggestions for getting to financial independence involve taking a close, long look at what you’re spending and why, and whether you’re getting your money’s worth. as a result of just realizing what you’re doing, you:

- reduce your expenses until you’re only spending money on things that bring you fulfillment equal to the amount of energy you had to expend to buy them.
- increase your income as much as you can without doing work you think is wrong or just inappropriate for you.

in other words, the income and expense lines on a graph like this should be splitting further apart until you hit the point at which you’re making enough and spending enough but not more than enough. when that happens, the gap between the two each month is your savings, which should be considered capital. this is where the third line comes in; it’s calculated this way:

($capital * $i) / 12

where $i is the annual interest rate on a no-risk investment vehicle, something that has a modest interest rate but which guarantees not to lose your money.
as an example of this, the book holds up long-term u.s. treasury bonds. so the “investment income” line is what you could make if you took your capital (savings) and invested it. as your expenses come down and your income goes up, the investment income line naturally rises, especially as you get enough money together to actually invest it and start making interest on your investments (and reinvesting that interest). the magic crossover point comes when the investment income line rises so high that it crosses the expense line. at that point, you’re making enough money each month just from your invested capital that you don’t have to work if you don’t want to.

i’ll post updates of the details of my progress. best of luck with your own goals!

footnotes

1. this doesn’t have to be as materialistic as it may sound. ‘what you want’ could be teaching adults to read or researching alternative fuels. whatever it is, the need to work for money can be a huge obstacle to doing it.
2. the word ‘enough’ is very important to the authors; read the book for all their thoughts about it.
3. they provide a good checklist for the sort of investment(s) that qualify:
- Your capital must produce income.
- Your capital must be absolutely safe.
- Your capital must be in a totally liquid investment. You must be able to convert it into cash at a moment’s notice, to handle emergencies.
- Your capital must not be diminished at the time of investment by unnecessary commissions or other expenses.
- Your income must be absolutely safe.
- Your income must not fluctuate. You must know exactly what your income will be next month, next year and 20 years from now.
- Your income must be payable to you, in cash, at regular intervals.
- Your income must not be diminished by charges, management fees or redemption fees.
- The investment must produce this regular, fixed, known income without any further involvement or expense on your part. It must not require maintenance, management, geographic presence or attention due to ‘acts of God’.
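the crossover arithmetic above can be sketched in a few lines of python. this is just my illustration, not something from the book: the 5% no-risk rate and the flat 3000/2000 income and expense figures are made-up numbers, and the sketch assumes every month’s savings are reinvested at that same rate.

```python
# monthly income from invested capital: ($capital * $i) / 12, as in the text
def monthly_investment_income(capital, annual_rate):
    return capital * annual_rate / 12

# months until investment income alone crosses the expense line,
# assuming flat income/expenses and monthly reinvestment of all savings
def months_to_crossover(income, expenses, annual_rate):
    capital = 0.0
    months = 0
    while monthly_investment_income(capital, annual_rate) < expenses:
        capital += (income - expenses) + capital * annual_rate / 12
        months += 1
    return months

# e.g. earning 3000/mo and spending 2000/mo at a 5% no-risk rate,
# the crossover comes after roughly 22 years of saving
print(months_to_crossover(3000, 2000, 0.05))
```

the loop stops the first month the ($capital * $i) / 12 line meets the expense line; with real, varying income and expenses, the plotted graph does the same job empirically.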
https://jeffcovey.net/2001/11/19/your-money-or-your-life/
The World Health Organization (WHO) defines telemedicine as: ‘[T]he delivery of health care services, where distance is a critical factor, by all health care professionals using information and communication technologies for the exchange of valid information for diagnosis, treatment and prevention of disease and injuries, research and evaluation, and for the continuing education of health care providers, all in the interests of advancing the health of individuals and their communities.’

Although there are limited historical records of its use – such as in combating the Black Death and in caring for wounded soldiers during the First and Second World Wars – the fact is that the regulation of telemedicine is quite recent and is based on the Tel Aviv Declaration on Responsibilities and Ethical Standards in the Use of Telemedicine (adopted by the 51st General Assembly of the World Medical Association, Tel Aviv/Israel, October 1999).

The first Brazilian norms on the use of telemedicine

In Brazil, telemedicine was first regulated by the Federal Council of Medicine (FCM) on 26 August 2002, through FCM Resolution No 1,643/2002. According to that standard – which remains in force to this day – the use of telemedicine would be limited to the care of patients in remote locations (far from health institutions or in areas with a shortage of medical professionals) and, even then, only under the following requirements:

- the use of appropriate technological infrastructure, capable of guaranteeing the quality of the consultation, the preservation of the doctor-patient relationship, professional secrecy, confidentiality and the preservation of patient data;
- the attending physician’s professional responsibility for any harm caused to the patient; and
- registration with the FCM (in the case of companies providing telemedicine services).
In addition, following the publication of this general and vague rule, the FCM also issued several separate Resolutions aimed at regulating specific aspects of telemedicine, such as:

- FCM Resolution No 1,671/2003, which deals with remote pre-hospital care, with the objective of assisting the victim in the first minutes of care;
- FCM Resolution No 2,107/2014, which provides for the requirements for the exercise of teleradiology; and
- FCM Resolution No 2,264/2019, which provides for the requirements for the exercise of telepathology.

In the context of public health, the use of telemedicine was officially embraced by the Ministry of Health (MoH) in 2007, through the institution of the National Telehealth Program, which aimed to expand access to health and improve the quality of care in the Unified Health System (UHS). Reformulated and expanded in 2010 and 2011, the National Program for Telehealth Brazil Networks started to provide teleconsulting services, telediagnosis, formative second opinion and tele-education, all with the purpose of facilitating the exchange of information and knowledge among professionals, workers and managers in the health area.

The failed attempt to revise FCM Resolution No 1,643/2002

FCM Resolution No 1,643/2002, however, was never enough to fully allow the practice of telemedicine, especially given the restrictive conditions historically imposed by the Code of Medical Ethics and the various norms and opinions of the FCM. This is the case, for example, of the prohibition of medical prescription without a physical examination of the patient and the requirement that professionals register with the Regional Council of Medicine of every state where they provide services.
In order to resolve these obstacles and recognise telemedicine as a broader instrument that enhances the quality and efficiency of medical services, on 6 February 2019, the FCM published FCM Resolution No 2,227/2019, revoking FCM Resolution No 1,643/2002 and establishing clearer and more permissive rules for the exercise of telemedicine in Brazil.

As a way of bringing the exercise of telemedicine in line with technological advances in medicine and electronic communications, FCM Resolution No 2,227/2019 encompassed issues such as information privacy, professional secrecy and physician responsibilities – defining, delimiting and regulating the services that may be provided by telemedicine (consultation, diagnosis and surgery, among others). In addition, the new resolution established mandatory requirements for the correct collection, storage and communication of patient data, with express reference to Brazil’s Internet Bill of Rights and the General Data Protection Law.

While it was celebrated by the Brazilian health market as enabling the expansion of telemedicine-related businesses, FCM Resolution No 2,227/2019 was widely rejected by medical unions and Regional Councils of Medicine. Although the final text of the Resolution trod carefully regarding the preservation of the doctor-patient relationship (emphasising that distance care could happen only after a face-to-face consultation, and with in-person care coverage in geographically remote areas), a significant faction of the medical community argued that the new rules would leave doctors and patients vulnerable and would create an undesirable distance between doctor and patient.

In view of the broad rejection of the new Resolution, the FCM decided to revoke it days after its publication (on 22 February 2019), with the promise of reopening the topic for discussion by the medical profession. For the time being, however, the validity of FCM Resolution No 1,643/2002 has been re-established.
The normative revolution forced by Covid-19

The regulation of telemedicine remained untouched until the beginning of 2020, when the Covid-19 pandemic compelled its use on a large scale. Faced with overcrowded hospitals, doctors on the front lines of the fight against the virus and the need for social isolation to prevent the spread of the disease, public authorities were quickly forced to regulate telemedicine, allowing remote medical care without personal contact.

Thus, on 19 March 2020, the FCM sent an official letter to the MoH, recognising the possibility of the ethical use of telemedicine in cases of teleorientation (for guidance and referral of patients in isolation), telemonitoring (for remote monitoring of health parameters) and teleinterconsultation (for the exchange of information and opinions among physicians, as a way of assisting in the diagnosis or therapeutic approach).

In response to the FCM, on 23 March 2020, the MoH published Ordinance No 467/2020, which, exceptionally and temporarily, authorised the exercise of telemedicine for pre-clinical care, assistance support, consultation, monitoring and diagnosis (in public or private health systems), and the electronic issuance of medical certificates and prescriptions.

In a further step to ensure greater legal certainty for the exercise of telemedicine, on 16 April 2020, the National Congress published Law No 13,989/2020, authorising the exercise of telemedicine on an emergency basis while the public health crisis caused by Covid-19 persists. Short and objective (composed of only six articles), Law No 13,989/2020 broadly authorises the use of telemedicine (which it defines as ‘the exercise of medicine mediated by technologies for the purposes of assistance, research, prevention of diseases and injuries and promotion of health’), provided that the usual ethical and normative standards of face-to-face care are followed.
In addition, the Law expressly recognises the possibility of exercising telemedicine even without any possibility of physical examination in person (something absolutely unthinkable in the pre-pandemic scenario).

Finally, on 10 June 2020, the Presidency of the Republic published Provisional Measure No 983/2020, allowing and regulating the electronic signature in public communications and in documents related to health (notably for documents signed by health professionals and medical prescriptions).

Expectations for the future of telemedicine in Brazil

History shows that times of crisis tend to stimulate important technological and social advances. This was the case during the sanitary crises of the late 19th and early 20th centuries, during the 20th century’s two World Wars and during the Cold War period. It is not surprising, therefore, that the pandemic forced advances in the regulation of telemedicine.

Telemedicine is now a reality in Brazil. On the one hand, it is widely used by health plans, hospitals and doctors; on the other, it is evaluated extremely favourably by patients in all corners of the country. However, as much as it has been incorporated into medical practice, telemedicine continues to be regulated on a provisional basis, and its exercise is authorised only by exceptional and temporary measures, which will remain in force only until the end of the public health emergency resulting from the pandemic.

In theory, once the pandemic is over, we will return to the previous scenario, with vague and restrictive regulations. For this reason, authorities, the medical community and the various stakeholders of the health market (in particular, health plans and hospitals) are already intensely debating the regulation of telemedicine in a post-pandemic world.
Debates have been deadlocked both within the FCM (which is weighing a new Resolution on telemedicine) and in the National Congress (where bills on the subject are being processed). While telemedicine was very well received during the pandemic, there is no consensus on critical points for future regulation.

Among the most controversial points are whether to require a first face-to-face consultation (for a physical examination of the patient); whether physicians must register with the Regional Councils of Medicine of all states in which they provide remote services; and whether the use of telemedicine should be unrestricted or limited across different medical areas (given their greater or lesser degree of physical interaction with the patient).

Regardless of the solution reached on each of these controversial points, it is both necessary and urgent to regulate telemedicine as soon as possible, to provide legal certainty for its exercise after the end of the pandemic. Once the Covid-19 pandemic has been overcome, it is essential that Brazilian authorities recognise telemedicine as a concrete, current and effective instrument for expanding access to healthcare.

Notes
- WHO, ‘A health telematics policy in support of WHO’s Health-For-All strategy for global health development: report of the WHO group consultation on health telematics’, 11–16 December 1997, Geneva. Geneva: World Health Organization, 1998.
- Ordinance MoH No 35/2007.
- Ordinance MoH No 402/2010.
- Ordinance MoH No 2,546/2011.
- Resolution FCM No 1,958/2010.
- Resolution FCM No 2,010/2013.
- Law No 12,965/2014.
- Law No 13,709/2018.
- In addition, with the aim of expanding the exercise of remote health, on 24 March 2020, the MoH published Resolution No 357/2020 to temporarily increase the maximum number of drugs subject to special control that could be remotely commercialised, and to allow remote home delivery of drugs subject to special control.
Provisional Measure No 983/2020 created three types of electronic signature to ensure legal certainty: simple, for low-risk cases (requests for information and appointment scheduling); advanced, for confidential documents; and qualified, which uses a digital certificate (under the terms of Provisional Measure No 2,200-2/2001). For drug prescriptions issued by health professionals, only those with an advanced or qualified signature are considered valid.

- Formally established by Ordinance No 188/2020, pursuant to Decree No 7,616/2011.
https://www.ibanet.org/telemedicine-brazil-covid
IN-VEHICLE DISPLAY ICONS AND OTHER INFORMATION ELEMENTS. VOLUME I: GUIDELINES

Because of the speed with which In-Vehicle Information Systems (IVIS) devices are entering the automotive marketplace, many research issues associated with the design of in-vehicle visual symbols and other information elements have not been adequately addressed. The overall goal of the "In-Vehicle Icons and Other Information Elements" project has been to provide the designers of these in-vehicle technologies with a set of design guidelines for in-vehicle display icons and other information elements. Specific objectives of this project were to: design and perform experimentation to select appropriate symbols for in-vehicle use, then use the resulting data to write final guidelines for in-vehicle symbol usage, encompassing both current and future symbols; and write preliminary, as well as empirically based, final guidelines. The key product of this project is a set of clear, concise, and user-centered human-factors design guidelines for in-vehicle icon design. The 42 guidelines address issues such as the legibility, recognition, interpretation, and evaluation of graphical and text-based icons and symbols. These guidelines provide IVIS developers with key information regarding the use and integration of existing and new visual symbols. In addition, guidelines are provided for the design of in-vehicle auditory information.

- Corporate Authors: Battelle Human Factors Transportation Center, 1100 Dexter Avenue North, Seattle, WA, United States 98109-3598; Federal Highway Administration, Turner-Fairbank Highway Research Center, 6300 Georgetown Pike, McLean, VA, United States 22101
- Authors: Campbell, J L; Richman, J B; Carney, C; Lee, J D
- Publication Date: 2004-9
- Language: English
- Media Type: Digital/other
- Features: Figures; References; Tables
- Pagination: 238 p.
https://trid.trb.org/view/754994
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a 371 National Stage Entry of PCT/US18/25033, which claims priority to U.S. provisional patent application Ser. No. 62/478,660 filed Mar. 30, 2017, and Ser. No. 62/511,732 filed May 26, 2017, all of which are herein incorporated by reference in their entireties.

FIELD OF THE DISCLOSURE

The present disclosure relates to the field of genome editing and molecular biology.

BACKGROUND OF THE DISCLOSURE

Genome editing technologies, such as meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), CRISPR Cas endonucleases (such as but not limited to Cas9), other RNA-guided endonucleases, as well as base editing technology, have made it possible to edit the genome of many organisms, including plants and animals. While these technologies allow for targeted modification of sequences of interest, there is the potential for off-target genetic modification. There remains a need for methods and compositions to determine on-target and off-target gene editing sites and to measure off-target activity.

SUMMARY

Methods and compositions are provided for identifying and characterizing variations in a polynucleotide, for example variations due to edits created by a double-strand-break-inducing agent, a base-editing composition, by transformation with a heterologous polynucleotide, or by mutagenesis.

In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof.
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising generating nucleic acid fragments of the captured polynucleotide and recovering said fragments to create an enriched DNA pool. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising generating nucleic acid fragments of the captured polynucleotide and recovering said fragments to create an enriched DNA pool; further comprising characterizing the sequence composition of the enriched DNA pool to determine the nature of the enriched pool. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising generating nucleic acid fragments of the captured polynucleotide and recovering said fragments to create an enriched DNA pool; further comprising characterizing the sequence composition of the enriched DNA pool to determine the nature of the enriched pool; wherein the nature of the enriched pool comprises the composition and abundance of each sequenced fragment species. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is selected from the group consisting of: insertion of at least one nucleotide, deletion of at least one nucleotide, chemical modification of at least one nucleotide, substitution of at least one nucleotide, or a combination of any of the preceding. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a double-strand-break-inducing agent or a base editing molecule. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a double-strand-break-inducing agent or a base editing molecule; wherein said base editing molecule is a deaminase. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a double-strand-break-inducing agent or a base editing molecule; wherein said deaminase is a cytidine deaminase or an adenine deaminase. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a double-strand-break-inducing agent or a base editing molecule; wherein said double-strand-break-inducing agent is a Cas endonuclease, a meganuclease, a zinc finger nuclease, a transcription activator-like effector nuclease, or a restriction enzyme. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with a plurality of guide polynucleotides, to create a plurality of Cas9-guide polynucleotide complexes, wherein each guide polynucleotide directs the dCas9 protein to a different target site. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with a plurality of guide polynucleotides, to create a plurality of Cas9-guide polynucleotide complexes, wherein each guide polynucleotide directs the dCas9 protein to a different target site; wherein the plurality of guide polynucleotides comprises guides that are specific for the target site of the nucleic acid, non-specific for the target site of the nucleic acid, specific for one or more off-target sites, or combinations thereof. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide, wherein the guide polynucleotide is selected for its potential to create off-target site edits. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide, wherein the guide polynucleotide is selected for the ability of the guide polynucleotide-Cas endonuclease complex to recognize and bind a sequence at or near the intended target site on the polynucleotide. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide, further comprising allowing the guide polynucleotide/Cas endonuclease complex to bind to the polynucleotide. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecule that binds to said polynucleotide but lacks substantial nuclease activity. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecule that binds to said polynucleotide but lacks substantial nuclease activity, wherein said molecule is a deactivated Cas9 (dCas9). 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecule that binds to said polynucleotide but lacks substantial nuclease activity, wherein the molecule further comprises a tag for protein purification or isolation. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a dCas9 that is linked to a tag for purification or isolation. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecule that binds to said polynucleotide but lacks substantial nuclease activity, wherein the molecule further comprises a tag for protein purification or isolation, wherein said tag is selected from the group consisting of: His tag, FLAG tag, HA tag, chitin binding protein (CBP) tag, maltose binding protein (MBP) tag, glutathione-S-transferase (GST) tag, thioredoxin (TRX) tag, poly(NANP) tag, V5-tag, Myc-tag, and NE-tag. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecule that binds to said polynucleotide but lacks substantial nuclease activity, wherein said molecule is a deactivated Cas9 (dCas9), wherein the dCas9 is complexed with a guide polynucleotide corresponding to a target sequence of interest on the polynucleotide. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide, further comprising eluting the guide polynucleotide-dCas9 protein-polynucleotide complex created in the capture step. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide, further comprising comparing the sequence of the target polynucleotide to a reference nucleic acid sequence to determine whether the guide polynucleotide directed the dCas9 to bind an intended target site on the polynucleotide, a potential off-target site, or combination thereof. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide; further comprising determining the ability of the guide polynucleotide/Cas endonuclease complex to recognize and bind the intended target site on the nucleic acid. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein said at least one edit is created by a Cas endonuclease, wherein said Cas endonuclease is complexed with at least one guide polynucleotide; further comprising determining the guide polynucleotide/Cas endonuclease complex's preference for a sequence motif, the complex's binding strength to a particular sequence motif, or combinations thereof. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein the molecular inversion probe comprises target arms that flank a target site on the nucleic acid. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein each molecular inversion probe comprises target arms that flank a different target site, off-target site, or a combination thereof. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library; wherein the library comprises molecular inversion probes that are specific for the target site of the nucleic acid, non-specific for the target site of the nucleic acid, and/or combinations thereof. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library; further comprising: hybridizing the polynucleotide with the molecular inversion probes from the molecular inversion probe library, recircularizing the hybridized molecular inversion probe using polymerase and ligase, subjecting the nucleic acid and molecular inversion probes to an exonuclease so that linear genomic DNA and un-circularized molecular inversion probes are digested, indexing and amplifying targeted sequences in a polymerase chain reaction (PCR) to produce indexed amplicons, and pooling and purifying the indexed amplicons. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library; further comprising: hybridizing the polynucleotide with the molecular inversion probes from the molecular inversion probe library, recircularizing the hybridized molecular inversion probe using polymerase and ligase, subjecting the nucleic acid and molecular inversion probes to an exonuclease so that linear genomic DNA and un-circularized molecular inversion probes are digested, indexing and amplifying targeted sequences in a polymerase chain reaction (PCR) to produce indexed amplicons, and pooling and purifying the indexed amplicons; wherein the indexing and amplifying targeted sequences use an indexed primer and non-indexed primer in the PCR to produce indexed amplicons. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library; further comprising: hybridizing the polynucleotide with the molecular inversion probes from the molecular inversion probe library, recircularizing the hybridized molecular inversion probe using polymerase and ligase, subjecting the nucleic acid and molecular inversion probes to an exonuclease so that linear genomic DNA and un-circularized molecular inversion probes are digested, indexing and amplifying targeted sequences in a polymerase chain reaction (PCR) to produce indexed amplicons, and pooling and purifying the indexed amplicons; wherein the indexing and amplifying targeted sequences use an indexed primer and non-indexed primer in the PCR to produce indexed amplicons; further comprising sequencing the pooled and purified indexed amplicons to generate sequence reads. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library; further comprising: hybridizing the polynucleotide with the molecular inversion probes from the molecular inversion probe library, recircularizing the hybridized molecular inversion probe using polymerase and ligase, subjecting the nucleic acid and molecular inversion probes to an exonuclease so that linear genomic DNA and un-circularized molecular inversion probes are digested, indexing and amplifying targeted sequences in a polymerase chain reaction (PCR) to produce indexed amplicons, and pooling and purifying the indexed amplicons; wherein the indexing and amplifying targeted sequences use an indexed primer and non-indexed primer in the PCR to produce indexed amplicons; further comprising sequencing the pooled and purified indexed amplicons to generate sequence reads; further comprising deconvoluting the sequence reads into sample bins by index sequence. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide is captured by a molecular inversion probe; wherein a plurality of molecular inversion probes are used to capture one or more variations of the polynucleotide, wherein the plurality of molecular inversion probes are pooled to generate a molecular inversion probe assay library; further comprising: hybridizing the polynucleotide with the molecular inversion probes from the molecular inversion probe library, recircularizing the hybridized molecular inversion probe using polymerase and ligase, subjecting the nucleic acid and molecular inversion probes to an exonuclease so that linear genomic DNA and un-circularized molecular inversion probes are digested, indexing and amplifying targeted sequences in a polymerase chain reaction (PCR) to produce indexed amplicons, and pooling and purifying the indexed amplicons; wherein the indexing and amplifying targeted sequences use an indexed primer and non-indexed primer in the PCR to produce indexed amplicons; further comprising sequencing the pooled and purified indexed amplicons to generate sequence reads; further comprising deconvoluting the sequence reads into sample bins by index sequence; further comprising analyzing the deconvoluted sequence by identifying reads that belong to a specific sample using the target arm of the MIP that flanks the 5′ end of the target site, the target arm of the MIP that flanks 
the 3′ end of the target site, or combinations thereof. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the capturing of the polynucleotide is by a biotinylated probe. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the capturing of the polynucleotide is by a biotinylated probe; further comprising shearing of the polynucleotide prior to the capture step. 
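The deconvolution and read-assignment steps recited above are computational. A minimal Python sketch follows, assuming an 8-base leading index and exact matching of target arms; all names, sequences, and lengths are illustrative assumptions, not taken from this disclosure:

```python
from collections import defaultdict

def deconvolute(reads, index_length=8):
    """Bin sequence reads into sample bins by their leading index sequence."""
    bins = defaultdict(list)
    for read in reads:
        index, insert = read[:index_length], read[index_length:]
        bins[index].append(insert)
    return bins

def assign_to_mip(insert, mips):
    """Assign a deconvoluted read to a MIP by matching the target arm that
    flanks the 5' end of the target site, the 3' end, or both."""
    for name, (arm5, arm3) in mips.items():
        if insert.startswith(arm5) or insert.endswith(arm3):
            return name
    return None  # read matches no known MIP target arms

# Illustrative data: one sample index, one MIP with its two target arms.
mips = {"site1": ("TTGGC", "CCATG")}
reads = ["ACGTACGT" + "TTGGC" + "AAAA" + "CCATG",
         "ACGTACGT" + "TTGGC" + "AAGA" + "CCATG"]
bins = deconvolute(reads)
assignments = [assign_to_mip(r, mips) for r in bins["ACGTACGT"]]
```

In practice the arm matching would tolerate sequencing errors (e.g. edit-distance matching) rather than require exact prefixes and suffixes.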
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising comparing at least one sequence of the assessing step to a reference nucleic acid sequence. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising comparing at least one sequence of the assessing step to a reference nucleic acid sequence, comprising aligning said at least one sequence of the assessing step with the reference nucleic acid sequence and identifying at least one difference between said sequence and the reference nucleic acid sequence. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising comparing at least one sequence of the assessing step to a reference nucleic acid sequence, wherein the reference sequence does not comprise said target site edit, said off-target site edit, or combinations thereof. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising cutting the captured polynucleotide into smaller fragments using random shearing or restriction digestion to generate a target site library of fragments. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is in an in vitro environment. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is from an oligonucleotide target site library. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is from a randomer nucleotide combinatorial target site library. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is in an in vivo environment. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is from a eukaryote or prokaryote. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is from a plant, mammal, insect, virus, fungus, or microorganism. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is from a plant selected from the group consisting of: maize, rice, sorghum, rye, barley, wheat, millet, oats, sugarcane, turfgrass, switchgrass, soybean, canola, alfalfa, sunflower, cotton, tobacco, peanut, potato, Arabidopsis, vegetable, and safflower. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is genomic. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is synthetic. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof is isolated from its natural environment. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising determining the presence or absence of any off-target site edits, intended target site edits, or combinations thereof in the nucleic acid sequence. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein any two or more of the preceding steps are conducted essentially in parallel. 
In one aspect, a method is provided for identifying or characterizing a plurality of variations in an edited polynucleotide, comprising: capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein any two or more of the preceding steps are conducted essentially in parallel. In one aspect, a method is provided for identifying or characterizing variations in a plurality of polynucleotides, comprising: capturing the polynucleotides that comprise at least one intended target site, off-target site, or a combination thereof, amplifying the captured polynucleotides to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; wherein any two or more of the preceding steps are conducted essentially in parallel. In any of the methods provided herein, a further step of editing a sequence of the polynucleotide based on the assessment of the presence or absence of the intended target site edit, off-target site edit, or combination thereof is provided.
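The "assess" step recited in the aspects above — binning pooled sequencing reads by whether they contain the intended target site sequence or a candidate off-target site sequence — can be sketched as follows. This is an illustrative simplification, not the claimed method; all read and site sequences are hypothetical placeholders, and a practical implementation would use sequence alignment rather than exact substring matching.

```python
# Sketch of the "assess" step: classify each read in the sequenced pool
# against reference sequences for the intended target site and candidate
# off-target sites. Exact substring matching stands in for alignment.

def classify_reads(reads, intended_site, off_target_sites):
    """Bin each read by which reference site sequence it contains."""
    bins = {"on_target": [], "off_target": [], "unclassified": []}
    for read in reads:
        if intended_site in read:
            bins["on_target"].append(read)
        elif any(site in read for site in off_target_sites):
            bins["off_target"].append(read)
        else:
            bins["unclassified"].append(read)
    return bins

# Hypothetical pool of captured, amplified, and sequenced reads:
pool = ["AAGGTTCCGATT", "TTGACCGGAATT", "CCCCCCCCCCCC"]
bins = classify_reads(pool, intended_site="GGTTCC", off_target_sites=["ACCGGA"])
```

Each bin can then feed the downstream determination of the presence or absence of intended and off-target edits.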
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof, optionally under various conditions or in different environments. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising evaluating the genotype or phenotype of the cell or organism at more than one time point. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising evaluating the genotype or phenotype of the organism in more than one cell type or tissue. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting at least one cell or individual of said organism that comprises said intended target site. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting at least one cell or individual of said organism that does not comprise at least one off-target site. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting at least one cell or individual of said organism that comprises said intended target site, does not comprise at least one off-target site, or comprises at least one off-target site; and further comprising growing said cell or organism. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting at least one cell or individual of said organism that comprises said intended target site, does not comprise at least one off-target site, or comprises at least one off-target site; and further comprising reproducing said cell or organism. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting at least one cell or individual of said organism that comprises said intended target site, does not comprise at least one off-target site, or comprises at least one off-target site; and further comprising crossing said organism with another to obtain a progeny, and evaluating said progeny for the presence or absence of the target and/or off-target sites. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting and reproducing or crossing at least one cell or individual of said organism that comprises said intended target site, does not comprise at least one off-target site, or comprises at least one off-target site; and further comprising selecting a progeny based on the determined presence or absence of the off-target site edit, the intended target site edit, or combinations thereof.
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting and reproducing or crossing at least one cell or individual of said organism that comprises said intended target site, does not comprise at least one off-target site, or comprises at least one off-target site; and further comprising selecting a progeny that comprises the intended target site edit in its genome. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising selecting and reproducing or crossing at least one cell or individual of said organism that comprises said intended target site, does not comprise at least one off-target site, or comprises at least one off-target site; and further comprising selecting a progeny that does not comprise at least one off-target site edit in its genome. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising: evaluating the genotype or phenotype of a cell or an organism or a progeny thereof comprising the intended target site edit, off-target site edit, or combinations thereof; further comprising additional editing of the polynucleotide based on the assessment of the presence or absence of the intended target site edit, off-target site edit, or combination thereof in said cell, organism, or progeny. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising evaluating the genotype or phenotype of a cell or an organism or a progeny thereof comprising the intended target site edit, off-target site edit, or combinations thereof; wherein said cell, organism, or progeny is or is derived from a plant, mammal, virus, insect, fungus, or microorganism. 
In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising evaluating the genotype or phenotype of a cell or an organism or a progeny thereof comprising the intended target site edit, off-target site edit, or combinations thereof; wherein said cell, organism, or progeny is or is derived from a plant selected from the group consisting of: maize, rice, sorghum, rye, barley, wheat, millet, oats, sugarcane, turfgrass, switchgrass, soybean, canola, alfalfa, sunflower, cotton, tobacco, peanut, potato, Arabidopsis, vegetable, and safflower. In one aspect, a method is provided for identifying or characterizing one or more variations in an edited polynucleotide, comprising: creating at least one edit in a polynucleotide at an intended target site, capturing the polynucleotide that comprises the intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising identifying at least one potential gene editing off-target site using in silico techniques.
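One common in silico technique for nominating potential off-target sites is to scan a genome sequence for near-matches to the guide protospacer adjacent to a PAM. The sketch below assumes the SpCas9 NGG PAM convention and a Hamming-distance mismatch budget; the short genome and 8-nt protospacer are hypothetical examples chosen for brevity (real protospacers are typically 20 nt), and this is one possible approach rather than the disclosure's specific algorithm.

```python
# Minimal in silico off-target scan: slide along a genome sequence and
# report windows that match the protospacer within a mismatch budget and
# are immediately followed by an NGG PAM (SpCas9 convention, assumed).

def hamming(a, b):
    """Number of mismatched positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def find_candidate_sites(genome, protospacer, max_mismatches=3):
    n = len(protospacer)
    hits = []
    for i in range(len(genome) - n - 2):
        site = genome[i:i + n]
        pam = genome[i + n:i + n + 3]  # 3-nt PAM just downstream
        mm = hamming(site, protospacer)
        if pam[1:] == "GG" and mm <= max_mismatches:
            hits.append((i, site, mm))
    return hits

# Hypothetical genome containing the exact on-target site and one
# single-mismatch candidate off-target site, each followed by -GG:
genome = "TTGATTACAGTGGAACCGAGTACAGAGGTT"
hits = find_candidate_sites(genome, "GATTACAG", max_mismatches=1)
```

Candidate sites returned this way can then be captured and sequenced under the methods above to determine whether they were actually edited.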
In one aspect, a method is provided for identifying a potential off-target site nucleotide variation, comprising: creating a polynucleotide variation at an intended target site, capturing said polynucleotide that comprises at least the intended target site, amplifying the captured polynucleotide(s) to create a pool of polynucleotides, sequencing the pool of polynucleotides, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof. In one aspect, a method is provided for identifying a potential off-target site nucleotide variation, comprising: creating a polynucleotide variation at an intended target site, capturing said polynucleotide that comprises at least the intended target site, amplifying the captured polynucleotide(s) to create a pool of polynucleotides, sequencing the pool of polynucleotides, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising identifying at least one off-target site. In one aspect, a method is provided for identifying a potential off-target site nucleotide variation, comprising: creating a polynucleotide variation at an intended target site, capturing said polynucleotide that comprises at least the intended target site, amplifying the captured polynucleotide(s) to create a pool of polynucleotides, sequencing the pool of polynucleotides, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising editing an off-target site. 
In one aspect, a method is provided for identifying a potential off-target site nucleotide variation, comprising: creating a polynucleotide variation at an intended target site, capturing said polynucleotide that comprises at least the intended target site, amplifying the captured polynucleotide(s) to create a pool of polynucleotides, sequencing the pool of polynucleotides, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising identifying at least one on-target site. In one aspect, a method is provided for identifying a potential off-target site nucleotide variation, comprising: creating a polynucleotide variation at an intended target site, capturing said polynucleotide that comprises at least the intended target site, amplifying the captured polynucleotide(s) to create a pool of polynucleotides, sequencing the pool of polynucleotides, and assessing the pool of polynucleotides to identify sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof; further comprising editing at least one on-target site. In one aspect, a method is provided for generating a portfolio of intended target sites, potential off-target sites or combinations thereof within a genome of interest, comprising: creating at least one nucleotide variation at an intended target site in a polynucleotide, capturing said polynucleotide that comprises at least one intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, identifying from the pool of polynucleotides sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof, and selecting at least one sequence from the pool to include in the portfolio.
In one aspect, a method is provided for generating a portfolio of intended target sites, potential off-target sites or combinations thereof within a genome of interest, comprising: creating at least one nucleotide variation at an intended target site in a polynucleotide, capturing said polynucleotide that comprises at least one intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, identifying from the pool of polynucleotides sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof, and selecting at least one sequence from the pool to include in the portfolio, wherein said sequence is represented in silico. In one aspect, a method is provided for generating a portfolio of intended target sites, potential off-target sites or combinations thereof within a genome of interest, comprising: creating at least one nucleotide variation at an intended target site in a polynucleotide, capturing said polynucleotide that comprises at least one intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, identifying from the pool of polynucleotides sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof, and selecting at least one sequence from the pool to include in the portfolio, wherein said sequence is a polynucleotide molecule placed in a biologically compatible environment in vitro.
In one aspect, a method is provided for generating a portfolio of intended target sites, potential off-target sites or combinations thereof within a genome of interest, comprising: creating at least one nucleotide variation at an intended target site in a polynucleotide, capturing said polynucleotide that comprises at least one intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, identifying from the pool of polynucleotides sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof, and selecting at least one sequence from the pool to include in the portfolio, wherein said sequence is stored as a polynucleotide in a cell. In one aspect, a method is provided for generating a portfolio of intended target sites, potential off-target sites or combinations thereof within a genome of interest, comprising: creating at least one nucleotide variation at an intended target site in a polynucleotide, capturing said polynucleotide that comprises at least one intended target site, an off-target site, or a combination thereof, amplifying the captured polynucleotide to create a pool of polynucleotides, sequencing the pool, identifying from the pool of polynucleotides sequences corresponding to the intended target site, sequences corresponding to an off-target site, or a combination thereof, and selecting a plurality of sequences from the pool to include in the portfolio. In one aspect, a nucleic acid target portfolio comprising a library of intended target sites and/or potential off-target sites generated from any of the methods described herein is provided.
DETAILED DESCRIPTION
The disclosure relates to compositions and methods of identifying and characterizing potential gene editing on-target and off-target sites in a nucleic acid.
Identification of potential on-target and off-target site edits will allow for the selection of a guide polynucleotide and/or endonuclease that minimizes the risk of off-target site edits and increases the likelihood of intended on-target site edit(s). The presence or absence of on-target site edits and/or off-target site edits in a nucleic acid may be confirmed and, if desired, monitored in edited organisms using the methods and compositions described herein.
Definitions
The terms “target site”, “target sequence”, “target site sequence”, “target DNA”, “target locus”, “genomic target site”, “genomic target sequence”, “genomic target locus”, and “protospacer” are used interchangeably herein and refer to a polynucleotide sequence such as, but not limited to, a nucleotide sequence on a chromosome, episome, transgenic locus, or any other DNA molecule in the genome (including chromosomal, chloroplastic, mitochondrial, or plasmid DNA) of a cell that a guide polynucleotide/Cas endonuclease complex can recognize, bind to, and optionally nick or cleave. The target site can be an endogenous site in the genome of a cell; alternatively, the target site can be heterologous to the cell and thereby not naturally occurring in the genome of the cell, or the target site can be found in a heterologous genomic location compared to where it occurs in nature. As used herein, the terms “endogenous target sequence” and “native target sequence” are used interchangeably to refer to a target sequence that is endogenous or native to the genome of a cell and is at the endogenous or native position of that target sequence in the genome of the cell. Cells include, but are not limited to, human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and plant cells, as well as plants and seeds produced by the methods described herein.
The terms “artificial target site” and “artificial target sequence” are used interchangeably herein and refer to a target sequence that has been introduced into the genome of a cell. Such an artificial target sequence can be identical in sequence to an endogenous or native target sequence in the genome of a cell but may be located in a different position (i.e., a non-endogenous or non-native position) in the genome of a cell. The terms “altered target site”, “altered target sequence”, “modified target site”, and “modified target sequence” are used interchangeably herein and refer to a target sequence as disclosed herein that comprises at least one alteration when compared to a non-altered target sequence. Such “alterations” include, for example: (i) replacement of at least one nucleotide, (ii) a deletion of at least one nucleotide, (iii) an insertion of at least one nucleotide, (iv) substitution of at least one nucleotide, (v) chemical modification of at least one nucleotide, or (vi) any combination of (i)-(v). “Off-target site” means one or more alterations to a site other than the intended on-target site on a nucleic acid. “On-target site” means one or more alterations to the intended site on a nucleic acid. “Variation(s)”, in the context of gene editing, refers to the range of polynucleotide modifications that are created by a particular agent (e.g., a double-strand-break-inducing agent or a base-editing composition). Such variations may comprise on-target edits, off-target edits, or a combination thereof. “Cas9” (formerly referred to as Cas5, Csn1, or Csx12) herein refers to a Cas endonuclease that, when in complex with a suitable polynucleotide component (such as a crNucleotide and a tracrNucleotide, or a single guide polynucleotide), is capable of recognizing, binding to, and optionally nicking or cleaving all or part of a DNA target sequence.
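The alteration types enumerated above (replacement/substitution, deletion, insertion) can be distinguished, in the simplest case, by comparing an altered target sequence to the non-altered sequence. The sketch below is a deliberately rough classifier based only on length and identity; a real variant caller would align the sequences, so this is illustrative rather than definitive.

```python
# Rough classifier for the alteration types defined above, comparing an
# altered target sequence against the non-altered reference. Uses only
# length and identity; real callers use alignment.

def classify_alteration(reference, altered):
    if altered == reference:
        return "no alteration"
    if len(altered) < len(reference):
        return "deletion"
    if len(altered) > len(reference):
        return "insertion"
    return "substitution"  # same length, at least one base differs
```

For example, comparing "ACGT" against "ACT" would be reported as a deletion, while "ACCT" would be reported as a substitution.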
A Cas9 protein comprises a RuvC nuclease domain and an HNH (H-N-H) nuclease domain, each of which can cleave a single DNA strand at a target sequence (the concerted action of both domains leads to DNA double-strand cleavage, whereas activity of one domain leads to a nick). In general, the RuvC domain comprises subdomains I, II, and III, where subdomain I is located near the N-terminus of Cas9 and subdomains II and III are located in the middle of the protein, flanking the HNH domain (Hsu et al., 2014, Cell 157:1262-1278). Cas9 endonucleases are sometimes derived from a type II CRISPR system, which includes a DNA cleavage system utilizing a Cas9 endonuclease in complex with at least one polynucleotide component (Makarova et al., 2015, Nature Reviews Microbiology 13:1-15). The term “Cas endonuclease” herein refers to a protein encoded by a Cas (CRISPR-associated) gene. A Cas endonuclease, when in complex with a suitable polynucleotide component, is capable of recognizing, binding to, and optionally nicking or cleaving all or part of a specific DNA target sequence. Examples of Cas endonucleases include a Cas9 protein, a Cpf1 protein, a C2c1 protein, a C2c2 protein, a C2c3 protein, Cas3, Cas3-HD, Cas5, Cas7, Cas8, Cas10, or complexes of these (Makarova et al., 2015, Nature Reviews Microbiology 13:1-15). As used herein, “nucleic acid” means a polynucleotide and includes a single- or double-stranded polymer of deoxyribonucleotide or ribonucleotide bases. Nucleic acids may also include fragments and modified nucleotides. Thus, the terms “polynucleotide”, “nucleic acid sequence”, “nucleotide sequence”, and “nucleic acid fragment” are used interchangeably to denote a polymer of RNA and/or DNA that is single- or double-stranded, optionally containing synthetic, non-natural, or altered nucleotide bases.
Nucleotides (usually found in their 5′-monophosphate form) are referred to by their single-letter designation as follows: “A” for adenosine or deoxyadenosine (for RNA or DNA, respectively), “C” for cytidine or deoxycytidine, “G” for guanosine or deoxyguanosine, “U” for uridine, “T” for deoxythymidine, “R” for purines (A or G), “Y” for pyrimidines (C or T), “K” for G or T, “H” for A or C or T, “I” for inosine, and “N” for any nucleotide. “Open reading frame” is abbreviated ORF. As used herein, the term “guide polynucleotide” relates to a polynucleotide sequence that can form a complex with a Cas endonuclease and enables the Cas endonuclease to recognize, bind to, and optionally cleave a DNA target site. The guide polynucleotide can be a single molecule or a double molecule. The guide polynucleotide sequence can be an RNA sequence (referred to as guide RNA, gRNA), a DNA sequence, or a combination thereof (an RNA-DNA combination sequence). Optionally, the guide polynucleotide can comprise at least one nucleotide, phosphodiester bond or linkage modification such as, but not limited to, Locked Nucleic Acid (LNA), 5-methyl dC, 2,6-Diaminopurine, 2′-Fluoro A, 2′-Fluoro U, 2′-O-Methyl RNA, a phosphorothioate bond, linkage to a cholesterol molecule, linkage to a polyethylene glycol molecule, linkage to a spacer 18 (hexaethylene glycol chain) molecule, or 5′ to 3′ covalent linkage resulting in circularization. A guide polynucleotide that solely comprises ribonucleic acids is also referred to as a “guide RNA” or “gRNA” (see also U.S. Patent Applications US 2015-0082478 A1, published on Mar. 19, 2015, and US 2015-0059010 A1, published on Feb. 26, 2015, both of which are hereby incorporated by reference in their entirety). The term “genome” as it applies to plant cells encompasses not only chromosomal DNA found within the nucleus, but also organelle DNA found within subcellular components, for example, mitochondria or plastids, of the cell.
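For illustration only, the single-letter nucleotide designations above can be captured in a small lookup table. The sketch below uses hypothetical helper names and treats inosine ("I"), like "N", as matching any base; it is not part of the disclosed methods.

```python
# Lookup table for the single-letter nucleotide codes defined above.
# "I" (inosine) is treated here as matching any base, as is "N".
IUPAC_CODES = {
    "A": {"A"}, "C": {"C"}, "G": {"G"}, "T": {"T"}, "U": {"U"},
    "R": {"A", "G"},            # purines
    "Y": {"C", "T"},            # pyrimidines
    "K": {"G", "T"},
    "H": {"A", "C", "T"},
    "I": {"A", "C", "G", "T"},  # inosine, promiscuous pairing
    "N": {"A", "C", "G", "T"},  # any nucleotide
}

def matches(pattern: str, sequence: str) -> bool:
    """Return True if a degenerate pattern matches a concrete DNA sequence."""
    if len(pattern) != len(sequence):
        return False
    return all(base in IUPAC_CODES[code] for code, base in zip(pattern, sequence))
```

For example, `matches("NGG", "TGG")` is true, so a degenerate "NGG" motif (the canonical SpCas9 PAM) accepts any base in its first position.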
The term “mammal” includes but is not limited to a pig, a horse, a rabbit, a goat, a cow, a cat, a dog, or a human. Certain embodiments provide methods and compositions to identify potential gene editing off-target sites in DNA created by a site-specific nuclease, including, for example, ZFNs, TALENs, homing endonucleases, and any guided endonuclease, such as a Cas endonuclease, e.g., Cas9/CRISPR. Such Cas endonucleases include, but are not limited to, Cas9 and Cpf1 endonucleases. Other Cas endonucleases and nucleotide-protein complexes that find use in the methods disclosed herein include those described in WO 2013/088446. Any suitable method or technique may be used to evaluate the guide polynucleotide/Cas endonuclease for its potential to create on-target site edits or off-target site edits, including in vivo or in vitro assays. Alternatively or in addition, bioinformatics algorithms and in silico models or techniques can also be used to identify potential candidate off-target sites in DNA of interest. In addition to the double-strand break inducing agents, site-specific base conversions can also be achieved to engineer one or more nucleotide changes to create one or more EMEs described herein into the genome. These include, for example, a site-specific base edit mediated by a C·G to T·A or an A·T to G·C base-editing deaminase enzyme (Gaudelli et al., “Programmable base editing of A·T to G·C in genomic DNA without DNA cleavage.” Nature (2017); Nishida et al., “Targeted nucleotide editing using hybrid prokaryotic and vertebrate adaptive immune systems.” Science 353 (6305) (2016); Komor et al., “Programmable editing of a target base in genomic DNA without double-stranded DNA cleavage.” Nature 533 (7603) (2016):420-4). Catalytically dead Cas9 (dCas9) fused to a cytidine deaminase or an adenine deaminase protein becomes a specific base editor that can alter DNA bases without inducing a DNA break.
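As a hedged sketch of the in silico identification of candidate off-target sites mentioned above — not any particular published algorithm — the hypothetical function below scans a sequence for 20-nt-style sites followed by an NGG PAM that differ from the guide by at most a chosen number of mismatches. All names and parameters are illustrative assumptions.

```python
# Illustrative candidate off-target scan: find sites adjacent to an NGG PAM
# with at most `max_mismatches` mismatches relative to the guide sequence.
def find_candidate_sites(genome: str, guide: str, max_mismatches: int = 3):
    """Yield (position, site, mismatch_count) for plausible target sites."""
    glen = len(guide)
    for i in range(len(genome) - glen - 2):
        pam = genome[i + glen : i + glen + 3]
        if pam[1:] != "GG":  # require an NGG PAM immediately 3' of the site
            continue
        site = genome[i : i + glen]
        mismatches = sum(a != b for a, b in zip(guide, site))
        if mismatches <= max_mismatches:
            yield i, site, mismatches
```

A real screen would also scan the reverse complement and weight mismatches by their position relative to the PAM; this sketch only illustrates the enumeration step.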
Cytosine base editors convert C→T (or G→A on the opposite strand), while adenine base editors convert adenine to inosine (read as G), resulting in an A→G change, within an editing window specified by the gRNA. The target site may be located in a region outside of a gene sequence or within a gene sequence, for example, a regulatory sequence, a non-coding sequence or a coding sequence. In certain embodiments, genes of interest for targeting include but are not limited to, for example, those genes involved in information, such as zinc fingers, those involved in communication, such as kinases, and those involved in housekeeping, such as heat shock proteins. More specific categories of transgenes include, for example, genes encoding traits important for agronomics, insect resistance, disease resistance, herbicide resistance, fertility or sterility, grain characteristics, and commercial products. Genes of interest in certain embodiments include, generally, those involved in oil, starch, carbohydrate, or nutrient metabolism as well as those affecting kernel size, sucrose loading, and the like, which can be stacked or used in combination with other traits, such as but not limited to herbicide resistance, described herein. In some embodiments, genes of interest include herbicide-resistance coding sequences, insecticidal coding sequences, nematicidal coding sequences, antimicrobial coding sequences, antifungal coding sequences, antiviral coding sequences, abiotic and biotic stress tolerance coding sequences, or sequences modifying plant traits such as yield, grain quality, nutrient content, starch quality and quantity, nitrogen fixation and/or utilization, fatty acids, and oil content and/or composition.
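The cytosine base conversion described above can be sketched as a toy model. The editing-window coordinates used here (protospacer positions 4-8, counted 1-based from the PAM-distal end, as often described for BE3-like editors) are an illustrative assumption, not a limitation of the disclosure.

```python
# Toy model of a cytosine base editor: every C within the editing window
# of the protospacer is converted to T; bases outside the window are kept.
def cytosine_base_edit(protospacer: str, window=(4, 8)) -> str:
    """Convert C to T within the (1-based, inclusive) editing window."""
    start, end = window
    out = []
    for pos, base in enumerate(protospacer, start=1):
        out.append("T" if (start <= pos <= end and base == "C") else base)
    return "".join(out)
```

Real editors convert cytosines within the window with position-dependent efficiency rather than deterministically; the sketch only illustrates the windowed C→T logic.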
In some embodiments, certain genes of interest include, but are not limited to, genes that improve crop yield, genes encoding polypeptides that improve the desirability of crops, genes encoding proteins conferring resistance to abiotic stress, such as drought, nitrogen, temperature, salinity, toxic metals or trace elements, or those conferring resistance to toxins such as pesticides and herbicides, or to biotic stress, such as attacks by fungi, viruses, bacteria, insects, and nematodes, and development of diseases associated with these organisms. Agronomically important traits such as oil, starch, and protein content can be genetically altered in addition to using traditional breeding methods. Modifications include increasing the content of oleic acid, saturated and unsaturated oils, increasing levels of lysine and sulfur, introducing essential amino acids, and also modification of starch. In certain embodiments, genes of interest for targeting include but are not limited to, for example, those genes involved in or associated with various diseases, such as cancer, memory-impacted diseases, hyperplasia, or cardiomyopathy. In certain embodiments, genes of interest for targeting include those involved in genetic diseases, such as cataract, Duchenne muscular dystrophy, hereditary tyrosinemia, cystic fibrosis, β-thalassemia, or urea cycle disorders. Genes targeted may include but are not limited to cystic fibrosis transmembrane conductance regulator (CFTR), crystallin gamma C (Crygc), dystrophin (Dmd), fumarylacetoacetate hydrolase (FAH), hemoglobin beta (HBB), or ornithine transcarbamylase (OTC). In certain embodiments, genes of interest for targeting include those involved in infectious diseases, including but not limited to human immunodeficiency virus (HIV), hepatitis B virus (HBV), Epstein-Barr virus (EBV), or human papillomavirus (HPV).
Genes targeted for infectious disease may include but are not limited to the HIV-1 LTR (long terminal repeat), HBV covalently closed circular DNA (cccDNA), latent EBV in a Burkitt's lymphoma cell line, or the HPV oncogenes E6 and E7 in cancer cell lines. In some embodiments, combinations of a particular Cas endonuclease and guide polynucleotide designs can be tested for their ability to recognize and bind on-target sites and off-target sites in a nucleic acid, for example, genomic DNA. In certain embodiments, a guide polynucleotide/Cas endonuclease complex that can bind, but not cleave, a target DNA sequence may be used to identify potential targets in genomic DNA. Such a complex may comprise a Cas protein in which all of its nuclease domains are mutant and dysfunctional. For example, a Cas9 protein that can bind to a DNA target site sequence, but is not able to cleave one or more strands at the target site sequence, may comprise a mutant, dysfunctional RuvC domain and/or a mutant, dysfunctional HNH (H-N-H) nuclease domain. For example, the Cas endonuclease can comprise a modified form of the Cas9 polypeptide. The modified form of the Cas9 polypeptide can include an amino acid change (e.g., deletion, insertion, or substitution) that reduces the naturally-occurring nuclease activity of the Cas9 protein. See, for example, US patent application US20140068797 A1, published on Mar. 6, 2014. In some cases, the modified form of the Cas9 polypeptide has no substantial nuclease activity and is referred to as catalytically “inactivated Cas9” or “deactivated Cas9 (dCas9)”. Catalytically inactivated Cas9 variants include Cas9 variants that contain mutations in the HNH and RuvC nuclease domains. These catalytically inactivated Cas9 variants are capable of interacting with sgRNA and binding to the target site in vivo but cannot cleave either strand of the target DNA. See US patent application US20140068797 A1, published on Mar.
6, 2014. A catalytically inactive Cas9 can also be fused to a heterologous sequence. The nucleic acid having potential target sites may be whole or in fragments, including but not limited to genomic DNA fragments. The nucleic acid may be isolated or synthetic in origin, such as synthesized oligonucleotides. The oligonucleotides or fragments having potential target sites may optionally be pooled into a library. As used herein, the term “target site library” means a library of nucleic acids comprising potential target sites. In some embodiments, the oligonucleotides or fragments may be used to make the target site library, which may be used for identifying which potential target sites on genomic DNA a guide polynucleotide/Cas endonuclease complex is capable of binding to. The nucleic acid in the library may be human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and/or plant DNA. The step of screening for the ability of a particular guide polynucleotide/Cas endonuclease complex to bind potential off-target and on-target sites may include using a target site library. Use of the target site library in the methods and compositions herein provides for the simultaneous identification of a plurality of potential off-target sites and on-target edit sites for any particular guide polynucleotide/Cas endonuclease. Incubation of different guide polynucleotide/Cas endonuclease complexes with the nucleic acid, for example, DNA, will result in binding to target sites in the target site library. Suitable conditions for subjecting the nucleic acid to the guide polynucleotide/Cas endonuclease complexes will be apparent to those of skill in the art. The methods may also include identifying the sites on the nucleic acid to which the particular guide polynucleotide/Cas endonuclease complex binds. The binding sites may be off-target sites or on-target sites.
These binding sites may be identified using any suitable method that detects protein–nucleic acid interactions including, e.g., ELISA, co-immunoprecipitation, bimolecular fluorescence complementation, affinity electrophoresis, pull-down assays, and the like. In some embodiments, the method is carried out using a tagged endonuclease. As exemplified herein, the endonuclease may be tagged to facilitate detection and immobilization of bound nucleic acids with target sites. Such tags include, e.g., a His-tag, FLAG-tag, V5-tag, HA-tag, c-myc-tag, chitin binding protein (CBP) tag, maltose binding protein (MBP) tag, glutathione-S-transferase (GST) tag, thioredoxin (TRX) tag, poly(NANP) tag, or NE-tag, or combinations thereof, such as 1XFLAG-6XHis. In particular embodiments, the target molecule is biotinylated. For example, as described herein in Example 1, a His-tagged dCas9/gRNA or HA-epitope dCas9/gRNA is contacted with the target site library comprising potential target sites, and target nucleic acid/His-tagged dCas9/gRNA or HA-epitope dCas9/gRNA complexes are captured using His-coated beads or immunoprecipitated using an HA antibody. Any unbound nucleic acids may be removed by washing. Nucleic acids bound to the binding agents of interest may be isolated by any suitable technique, including but not limited to denaturation and recovery. Typically, these techniques involve dissociating the bound guide polynucleotide/Cas endonuclease complex from the target DNA. The target site on the DNA may be amplified using PCR, identified using molecular inversion probes (MIPs), identified using Southern by Sequencing technology, and/or sequenced. With regard to Southern by Sequencing technology, see U.S. patent application Ser. No. 14/255,144, herein incorporated by reference in its entirety and as described elsewhere herein. The target site may be further characterized for composition, nature, and abundance.
Any number of techniques and methodologies may be used to characterize the target sites. For example, the recovered nucleic acids may be hybridized with a probe or primer that hybridizes to a region comprising the intended on-target site edit or off-target site edit, or amplified using a primer pair spanning a region comprising the intended on-target site edit or off-target site edit. In another embodiment, the sequence is determined by sequencing. For example, linear amplification products may be analyzed directly without further amplification in some embodiments, for example, by using single-molecule sequencing methodology. Sequencing of nucleic acid molecules can also be carried out using next-generation sequencing (NGS). Next-generation sequencing includes any sequencing method that determines the nucleotide sequence of either individual nucleic acid molecules or clonally expanded proxies for individual nucleic acid molecules in a highly parallel fashion, for example, where many molecules are sequenced simultaneously. In one embodiment, the relative abundance of the nucleic acid that was bound by the guide polynucleotide/Cas endonuclease can be estimated by counting the relative number of occurrences of their cognate sequences in the data generated by sequencing. Next-generation sequencing instruments and methods are known in the art, and are described, e.g., in Metzker, M. (2010) Nature Reviews Genetics 11:31-46, incorporated herein by reference. See also, for example, next-generation sequencing platforms including, but not limited to, the Oxford Nanopore Technologies, Illumina, and Pacific Biosciences systems. The sequence motif preference or binding strength to a particular sequence for a guide polynucleotide/Cas endonuclease complex may be determined from the data obtained. For example, preferential binding efficiencies may be calculated based on the ratio of sequencing reads corresponding to each binding motif.
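The abundance estimate described above — counting occurrences of each site's cognate sequence among the sequencing reads and expressing binding as a read-fraction ratio — can be sketched as follows. The read and site sequences are invented examples, and substring matching stands in for proper read alignment.

```python
from collections import Counter

# Sketch of estimating relative binding from sequencing data: count reads
# matching each candidate site's cognate sequence, then normalize.
def relative_binding(reads, site_sequences):
    """Return {site: fraction of site-matching reads} for each candidate site."""
    counts = Counter()
    for read in reads:
        for site in site_sequences:
            if site in read:  # stand-in for read alignment to the site
                counts[site] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {site: counts[site] / total for site in site_sequences}
```

The resulting fractions correspond to the "preferential binding efficiencies" in the text: a site drawing three quarters of the matching reads is preferred threefold over one drawing a quarter.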
In turn, this information may be used to determine guide polynucleotide specificity and provide a target portfolio of potential on-target and off-target sites within a nucleic acid of interest for any particular guide polynucleotide. The methods and compositions provided herein allow those of skill in the art to design, identify, and/or select guide polynucleotides for generating specific desired on-target site edits and/or decreasing the likelihood of generating off-target site edits in DNA. Once the particular guide polynucleotide and Cas endonuclease for making the intended on-target site edit are selected, the intended gene edit may be generated. In certain embodiments, the gene edit is a deletion, substitution, or insertion of a particular DNA sequence introduced at the target site or at a region near or adjacent to the target site. For example, the intended on-target site edit may be the specific introduction of a knock-out, edit, or knock-in at a particular DNA sequence, such as in a chromosome or plasmid of a cell. In some embodiments, genome editing may be performed herein by cleaving one or both strands at a specific DNA sequence in a cell with a Cas protein associated with a suitable polynucleotide component. Such DNA cleavage, if a double-strand break (DSB), may prompt non-homologous end joining (NHEJ) or homology-directed repair (HDR) processes, which may lead to modifications at the target site. The terms “knock-in”, “gene knock-in”, “gene insertion” and “genetic knock-in” are used interchangeably herein. A knock-in represents the replacement or insertion of a DNA sequence at a specific DNA sequence in a cell by genome editing using a guide polynucleotide and Cas endonuclease in combination with a donor DNA polynucleotide. Examples of knock-ins are a specific insertion of a heterologous amino acid coding sequence in a coding region of a gene, or a specific insertion of a transcriptional regulatory element in a genetic locus.
Various methods and compositions may be employed to obtain a cell having a polynucleotide of interest inserted in a target site for a Cas endonuclease. Such methods may employ homologous recombination to provide integration of the polynucleotide of interest at the target site. In one method provided, a polynucleotide of interest is provided to the cell of the organism in a donor DNA construct. As used herein, “donor DNA” includes reference to a DNA construct that comprises a polynucleotide of interest to be inserted into the target site of a Cas endonuclease. In some embodiments, the donor polynucleotide may correct a mutant gene and/or increase expression of an endogenous gene, for example, by inserting a sequence not present in the target site of interest. The donor polynucleotide may be a natural or a modified polynucleotide. The donor DNA construct may further comprise a first and a second region of homology that flank the polynucleotide of interest. The first and second regions of homology of the donor DNA share homology to a first and a second genomic region, respectively, present in or flanking the target site of the cell or organism genome. The donor DNA may be tethered to the guide polynucleotide and/or the Cas endonuclease. Tethered donor DNAs may allow for co-localizing target and donor DNA, useful in genome editing and targeted genome regulation, and may also be useful in targeting post-mitotic cells, where function of the endogenous HR machinery is expected to be highly diminished (Mali et al. 2013, Nature Methods 10:957-963). The donor polynucleotide may be either single-stranded or double-stranded. Furthermore, it is recognized that the polynucleotide of interest may also comprise antisense sequences complementary to at least a portion of the messenger RNA (mRNA) for a targeted gene sequence of interest. In addition, the polynucleotide of interest may also be used in the sense orientation to suppress the expression of endogenous genes in the organism.
Methods for suppressing gene expression in the organism using polynucleotides in the sense orientation are known in the art. The methods generally involve introducing into the cell or organism a DNA construct comprising a promoter that drives expression in the organism or specific cell type or tissue, operably linked to at least a portion of a nucleotide sequence that corresponds to the transcript of the endogenous gene. Typically, such a nucleotide sequence has substantial sequence identity to the sequence of the transcript of the endogenous gene, generally greater than about 65% sequence identity, about 85% sequence identity, or greater than about 95% sequence identity. See U.S. Pat. Nos. 5,283,184 and 5,034,323; herein incorporated by reference. Genome editing using DSB-inducing agents, such as Cas9-gRNA complexes, has been described, for example, in U.S. Patent Application US 2015-0082478 A1, published on Mar. 19, 2015, WO2015/026886 A1, published on Feb. 26, 2015, U.S. application 62/023,246, filed on Jul. 7, 2014, and U.S. application 62/036,652, filed on Aug. 13, 2014, all of which are incorporated by reference herein. For example, a guide polynucleotide/Cas endonuclease system may be used for modification or replacement of nucleotide sequences of interest (such as regulatory elements), insertion of polynucleotides of interest, gene knock-out, gene knock-in, modification of splicing sites and/or introduction of alternate splicing sites, modification of nucleotide sequences encoding a protein of interest, amino acid and/or protein fusions, and gene silencing by expressing an inverted repeat into a gene of interest. See, for example, U.S. Patent Application US 2015-0082478 A1, published on Mar. 19, 2015, WO2015/026886 A1, published on Feb. 26, 2015, US 2015-0059010 A1, published on Feb. 26, 2015, U.S. application 62/023,246, filed on Jul. 7, 2014, and U.S. application 62/036,652, filed on Aug. 13, 2014, all of which are incorporated by reference herein.
Generally, the methods of introducing a guide polynucleotide/Cas endonuclease complex into a cell include introducing at least one guide polynucleotide and at least one Cas endonuclease protein into the cell, and growing said cell under suitable conditions to allow said guide polynucleotide and said Cas endonuclease protein to form a complex inside said cell. In embodiments where a polynucleotide of interest is to be inserted into the target site, donor DNA may be introduced by any means known in the art. In certain embodiments, the intended on-target site edit may be made with the guide polynucleotide-Cas endonuclease and, optionally, a donor polynucleotide, depending on the type of desired edit (e.g., an insertion edit), in cells inside the organism (in vivo), in cells outside of the organism but delivered back to the organism (ex vivo), or in cells outside of the organism (in vitro). Accordingly, a polynucleotide of interest may be provided and integrated into the organism's genome at the target site and expressed in the organism. The organism may be further evaluated for a particular phenotype, function or expression level. Nucleic acids and proteins may be provided to a cell by any method, including methods using molecules to facilitate the uptake of any one or all components of a guided Cas system (protein and/or nucleic acids), such as cell-penetrating peptides and nanocarriers. See also US20110035836, “Nanocarrier based plant transfection and transduction”, and EP 2821486 A1, “Method of introducing nucleic acid into plant cells”, incorporated herein by reference. The methods of the present disclosure allow for the identification of DNA edited by the particular guide polynucleotide/Cas endonuclease complex. In certain embodiments, the method includes confirming the presence of the intended on-target site edit and/or the absence of off-target site edits in the target sequence of the nucleic acid in a cell of the organism.
The methods may include selecting, from a group of plants, mammals, viruses, insects, fungi, or microorganisms, one or more plants, mammals, viruses, insects, fungi, or microorganisms that comprise the intended target site edit in their nucleic acid. The intended on-target edit or off-target site edit may include a deletion, insertion of a donor polynucleotide, or substitution, or combinations thereof, for example, in a target site in genomic DNA. In certain embodiments, the method of confirming the presence or absence of the intended on-target site edit or off-target site edit includes but is not limited to use of a PCR-based method or assay, Southern blot assay, Northern blot assay, protein expression assay, Western blot assay, ELISA assay, MIP technology, or next-generation sequencing, and any combination thereof. See, for example, U.S. patent application Ser. No. 12/147,834, U.S. patent application Ser. No. 14/255,144, and The Plant Genome (March 2015) 8:1 (1-15), the content of each of which is incorporated by reference herein in its entirety. These methods may include the use of a primer or probe of the target sequence, intended on-target edit sequence, or off-target site edit sequence. For example, in some embodiments, the methods may include the use of a primer or probe that hybridizes to a region comprising the intended on-target site edit or off-target site edit, or the use of a primer pair to amplify a region comprising the intended on-target site edit or off-target site edit. In certain embodiments, the methods and compositions described herein use molecular inversion probe (MIP) technology to detect or amplify particular nucleic acid sequences. Use of MIP technology allows for the detection of changes at the single-nucleotide level without prior knowledge of the exact edit that is generated at each site.
The MIP technology may be used to characterize both the region of desired editing as well as potential off-target locations in the genome that may also exhibit editing with specific guide species. Accordingly, in certain embodiments, the methods and compositions described herein include but are not limited to the use of MIP assays in determining target site sequences, for example, intended on-target site edits or off-target site edits. In certain embodiments, the MIPs have targeting arms that flank DNA regions of interest. As used herein, “targeting arms” means sequences that have homology to the desired nucleic acid region surrounding the target site. The DNA regions of interest may include the intended on-target sites and potential off-target sites, or combinations thereof. The targeting arms may be designed to hybridize upstream and downstream of one or more specific on-target site edit sequences or potential off-target site edit sequences located on the DNA. In some embodiments, the targeting arm may include a sequence that is complementary to the intended on-target site or off-target site. The targeting arm may be designed to hybridize to one or the other strand of the DNA. The MIPs are allowed to hybridize to the nucleic acid to perform capture of target sequences located on the template. Incubation of one or more different MIPs with the DNA will allow the targeting arms to hybridize upstream and downstream of one or more specific on-target site edit sequences or potential off-target site edit sequences on the DNA. Suitable conditions for hybridizing MIPs to DNA will be apparent to those of skill in the art. The hybridized MIPs may be recircularized using polymerase and ligase enzymes to fill and seal the gap between the two probe ends (the two targeting arms), forming a covalently-closed circular molecule that contains the target sequence. The recircularized MIPs may be subjected to an exonuclease digestion to degrade linear genomic DNA and un-circularized probes. See, for example, U.S.
Pat. Nos. 5,866,337; 7,790,388; 6,858,412; 7,993,880; 7,700,323; 6,558,928; 6,235,472; 7,320,860; 7,351,528; 7,074,564; 5,871,921; 7,510,829; 7,862,999; and 7,883,849, the content of each of which is incorporated by reference herein in its entirety. The circular target probes may optionally be pooled into a panel, which may be used for characterizing one or more on-target site sequences or potential off-target site edit sequences on a nucleic acid, for example, DNA. The advantages of this approach include the ability to pool and assay many sites in parallel. The captured target sequences of interest may be indexed and amplified, and the resulting indexed amplicons may be pooled and purified. See, for example, Beckman Ampure XP (Danvers, Mass.). In some embodiments, adaptors for sequencing may be attached during PCR or to linear post-capture amplicons. In some embodiments, each adaptor may contain a unique identifier for each probe, for example, a barcode, such that the unique identifier does not appear within the probe or targeted sequence. Purified amplicon pools may be sequenced using any suitable approach as described herein and known to one skilled in the art. In certain embodiments, sequencing reads may be deconvoluted into sample bins by index sequence. The per-sample reads may be analyzed by identifying reads that belong to a specific MIP assay via the 5′ and 3′ targeting arms. The aligned reads or targeted MIP sequences may be compared to a suitable control or reference sequence that does not have the intended on-target site edit, for example, a wild-type reference sequence such as the one that was used to design the original assays. Differences from the reference sequence may be identified by comparing nucleotides at certain positions or by looking for mismatches in the sequence alignment.
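The MIP capture and read-analysis steps above can be sketched as follows. The targeting-arm, index, and reference sequences are invented examples; a real design would also account for strand, melting temperature, and arm uniqueness.

```python
# Sketch of MIP capture: the two targeting arms bind flanking sites, and the
# gap between them (filled by polymerase, sealed by ligase) is the captured
# target that ends up inside the circularized probe.
def mip_capture(template: str, arm_5p: str, arm_3p: str):
    """Return the target sequence between the two arm binding sites, or None."""
    i = template.find(arm_5p)
    if i == -1:
        return None
    j = template.find(arm_3p, i + len(arm_5p))
    if j == -1:
        return None
    return template[i + len(arm_5p) : j]

# Sketch of read analysis: deconvolute reads into sample bins by index,
# then compare each captured sequence against a wild-type reference.
def bin_reads_by_index(reads, index_len=4):
    """Group reads into {index: [insert, ...]} by their leading index."""
    bins = {}
    for read in reads:
        bins.setdefault(read[:index_len], []).append(read[index_len:])
    return bins

def mismatch_positions(sequence: str, reference: str):
    """0-based positions where the sequence differs from the reference,
    flagging candidate single-nucleotide edits."""
    return [i for i, (a, b) in enumerate(zip(sequence, reference)) if a != b]
```

For instance, binning reads whose first four bases carry the sample index and then calling `mismatch_positions` against the wild-type reference reports candidate edits per sample, mirroring the comparison step described in the text.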
In certain embodiments, the methods and compositions described herein use Southern by Sequencing (SbS) technology to detect both the region of desired editing as well as potential off-target locations in the genome that may also exhibit editing with specific guide species. Constructs used in the transformation step of the gene-editing process may also be detected using the methods, such as SbS, and compositions described herein. Use of SbS technology allows for the detection of changes at the single-nucleotide level. In certain embodiments, the methods and compositions described herein include but are not limited to the use of SbS approaches in determining or monitoring edited target site sequences, for example, intended on-target site edit sequences or off-target site edit sequences. In certain embodiments, SbS employs a sequence capture-based method that enriches sequencing libraries, such as Illumina™ or PacBio™ sequencing libraries, for nucleic acid fragments comprising fragments of constructs used in the process of making the intended gene edit, intended on-target site edit sequences, or potential off-target site edit sequences, or combinations thereof. Next-generation DNA shotgun libraries for individual gene-edited events may be used in the methods described herein. See, for example, Example 5. The nucleic acid in the library may be human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and/or plant DNA. Nucleic acid, such as genomic DNA, may be isolated or extracted from various materials, for example, edited genetic materials of any species, including human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and/or plant DNA, using any number of techniques known to one skilled in the art and/or as described elsewhere herein. In one example, genomic DNA may be isolated from a plant, for example, from leaf punches. In some examples, the DNA is purified and assessed for quality and quantity.
The DNA of the individual gene-edited event may be fragmented into smaller DNA fragments using restriction enzymes or sonication. The ends of the DNA may be end-repaired if desired. Adapter sequences may be ligated to the ends of the fragments, for example, via blunt or sticky-end ligations, depending on the technique utilized to fragment the DNA and whether the DNA fragment ends were end-repaired. In some embodiments, each adaptor may contain a unique identifier for each probe, for example, a barcode, such that the unique DNA identifier does not appear elsewhere within the probe or targeted sequence. As used herein, a “barcode” may also be referred to as a “tag”, “multiplex identifier”, or “index” sequence and may link a sequence or read to its library or source (pool). In some examples, the barcode may be flanked by a specific sequence that is used for attaching the fragment to the flow cell, such as an Illumina™-specific sequence. In some embodiments, the adaptors may be flanked by one or more barcodes to enable sample pooling at the hybridization and/or sequencing stages. In some embodiments, each sample has a barcode. The captured target sequences of interest may be indexed and amplified, and the resulting indexed amplicons may be pooled and purified. See, for example, Beckman Ampure XP (Danvers, Mass.). In some embodiments, adaptors for sequencing may be attached during PCR or to linear post-capture amplicons. Ligated fragment libraries may be amplified. See, for example, NimbleGen™ capture protocols, SeqCap EZ Library: Technical Note “Double Capture: High Efficiency Sequence Capture of Small Targets.” (2012). Together, the DNA fragments with attached barcodes form fragment libraries that can be enriched via PCR amplification using any suitable primers, for example, adapter-specific PCR primers. The ligated fragment libraries may be pooled, for example, in equal molar ratios.
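The library-construction steps above (fragmentation, adapter ligation with a per-sample barcode, and pooling) can be illustrated with a toy model. The adapter and barcode sequences here are invented placeholders, not actual Illumina™ sequences, and real fragmentation is random rather than evenly spaced.

```python
# Toy model of fragment-library construction for a gene-edited event.
def fragment(dna: str, size: int):
    """Split DNA into consecutive fragments of at most `size` bases
    (a stand-in for restriction digestion or sonication)."""
    return [dna[i : i + size] for i in range(0, len(dna), size)]

def ligate_adapters(fragments, barcode, adapter="ACACTCTT"):
    """Attach adapter + sample barcode to the 5' end and an adapter to the
    3' end of each fragment. Sequences are invented placeholders."""
    return [adapter + barcode + frag + adapter for frag in fragments]

def pool(*libraries):
    """Pool ligated fragment libraries (here, simply combine the reads)."""
    pooled = []
    for lib in libraries:
        pooled.extend(lib)
    return pooled
```

Because each sample carries its own barcode, the pooled library can later be deconvoluted back into per-sample bins, as described for the sequencing analysis.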
A probe library that contains probes for various intended on-target site edits, potential off-target site edits, or DNA constructs used in the process of making the intended on-target edit, or combinations thereof may be created. The probes may have homology to a nucleic acid region surrounding the intended on-target site or potential off-target site, or have homology to a nucleic acid region that includes the intended on-target site, potential off-target site, or combinations thereof. Additionally, the probes may have homology to a DNA construct used in the process of making the intended on-target edit. The probes may be designed to hybridize upstream and downstream of one or more specific intended on-target site edit sequences or potential off-target site edit sequences located on DNA. In some embodiments, the probe may include a sequence that is complementary to the intended on-target site sequence or off-target site sequence. The probe may be designed to hybridize to one or the other strand of DNA. In a further embodiment, the method utilizes a labeled probe library comprising sequence fragments containing constructs used in the process of making the intended edit, sequences of intended on-target site edits, sequences of potential off-target site edits, or combinations thereof. Any suitable label may be used to label the probe, including but not limited to biotinylation. Sequence fragments containing intended target site edits or potential off-target site edits of interest may be analyzed as a collection and reduced to a set of unique sequences representing all bases within the collection. The DNA probe library is designed such that nearly all bases within a construct pool, regions containing one or more intended on-target site edits, regions containing one or more potential off-target site edits, or combinations thereof are targeted during the enrichment process.
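The two design steps at the end of this paragraph, reducing the target collection to unique sequences and then covering nearly all bases with probes, can be sketched as follows. This is a hedged illustration only: the probe length and tiling step are invented parameters, and real probe design would also weigh melting temperature and repeat content.

```python
# Illustrative sketch: collapse duplicate target-region sequences, then tile
# fixed-length probes across each unique sequence so nearly every base is
# covered. Probe length and step size are assumptions, not the real design.
def unique_sequences(collection):
    """Collapse duplicate target sequences, preserving first-seen order."""
    seen, unique = set(), []
    for seq in collection:
        if seq not in seen:
            seen.add(seq)
            unique.append(seq)
    return unique

def tile_probes(seq, probe_len=10, step=5):
    """Overlapping probes stepping across seq; the final probe is
    right-anchored so the last bases are still covered."""
    if len(seq) <= probe_len:
        return [seq]
    probes = [seq[i:i + probe_len]
              for i in range(0, len(seq) - probe_len + 1, step)]
    if probes[-1] != seq[-probe_len:]:
        probes.append(seq[-probe_len:])
    return probes
```

A step smaller than the probe length gives overlapping coverage, which is what lets "nearly all bases" in each region be targeted during enrichment.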
The construct pool includes any construct used in the genome editing process, including but not limited to transformation and guideRNA constructs. The probe library may be in solution, on a glass slide or plate microarray, or in any other suitable format. The probes are allowed to hybridize to the nucleic acid to perform capture of target sequences located in the fragment and/or event. Incubation of one or more different probes with the DNA will allow the probes to hybridize upstream and downstream of one or more specific intended on-target site edit sequences, potential off-target site edit sequences, or combinations thereof on the DNA. In some examples, incubation of one or more different probes with the DNA will allow the probes to hybridize to the constructs used in the process of making the intended on-target edits. Suitable conditions for hybridizing probes to DNA will be apparent to those of skill in the art. The probes may optionally be pooled into a library, which may be used for characterizing one or more intended on-target site edit sequences or potential off-target site edit sequences on a nucleic acid, for example, DNA. The advantages of this approach include the ability to pool and assay many target sites in parallel, including intended on-target site edits and potential off-target site edits. Sequence enrichment or capture may be accomplished using any number of methods as described elsewhere herein and known to one skilled in the art. In some embodiments, a double capture approach may be used to increase on-target reads. See, for example, NimbleGen™ protocols. In one embodiment, DNA libraries comprising the individual events or fragments may be denatured and incubated with a labeled probe library, such as a biotinylated probe library.
DNA fragments in the libraries that bind to one or more probes in the library may be isolated using any suitable technique, for example, using beads, columns or other solid support capable of binding to the label on the probe, including but not limited to Streptavidin Dynabeads. In some examples, the library pools may be washed and eluted. Washed and eluted library pools may optionally be amplified and/or captured. In some examples, the library pools may be amplified and purified a second time using methods known to one skilled in the art and as described elsewhere herein. Final capture library pools may be quantified and diluted for sequencing. Purified amplicon pools may be sequenced using any suitable approach as described herein and known to one skilled in the art. In certain embodiments, sequencing reads may be deconvoluted into sample bins by index sequence. The per-sample reads may be analyzed by identifying reads that belong to a specific barcode. The reads can be aligned to the sequence of a control or reference sequence, for example, a corresponding genomic sequence that does not contain the intended on-target gene edit. In some embodiments, reads that align to the control or reference sequence and are identical in sequence are considered “endogenous reads”. Endogenous reads may be excluded from further analysis in the methods disclosed herein. In certain embodiments, the reads for an event or sample may be first compared to a reference sequence of a construct that was used in the process of making the intended on-target site edit. Junction sequences between the plasmid/construct and the genomic segment may be identified using the processes described in U.S. patent application Ser. No. 14/255,144; herein incorporated by reference in its entirety. If the read for an event does not align to the construct reference sequence, the read may be aligned to a reference sequence that comprises potential off-target site edit sequences.
If no potential off-target sites are determined or identified for the event or sample, then the reads may be aligned to a reference sequence that comprises the intended on-target site edit sequence to identify, confirm or monitor the intended on-target edit. See, for example, Example 6, described herein. The comparisons of the reads to the various reference sequences may be performed in any order desired. Methods of alignment of sequences for comparison are well known in the art. Computer implementations of these mathematical algorithms can be utilized for comparison of sequences to determine optimum alignment, for example, CLUSTAL, the ALIGN program, GAP, BESTFIT, BLAST, FASTA, and TFASTA in the GCG package, and the like. Alignments using these programs can be performed using the default parameters. Differences between the reads and reference sequences may be identified by comparison of nucleotides at certain positions or looking for mismatches in sequence alignment. Generated sequence may be used to identify fragments of any construct used in the genome editing process, any intended on-target site edit sequence, any off-target site edit sequence, or combinations thereof, and the integrity of the intended on-target site and off-target site. In certain embodiments, the presence or absence of the intended on-target site edit(s) and/or absence of the off-target site edit(s) may be detected based on the expression of the targeted gene, for example, a change in the expression level or temporal or spatial expression pattern of the targeted gene, for example, when compared to the expression level, temporal or spatial expression pattern of a control or reference gene that does not have the same intended on-target site edit(s). In some examples, the methods described herein include growing the organism, such as a plant or animal, that has the confirmed intended on-target site edit and/or absence of off-target site edits for further testing and evaluation.
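The read triage described here (endogenous reads set aside, remaining reads checked against the construct, then potential off-target references, then the intended on-target reference) can be sketched as a classification cascade. This is a hedged illustration: exact substring matching stands in for the real alignment step (the text names CLUSTAL, BLAST, etc.), and all reference sequences are invented.

```python
# Hedged sketch of the read-classification order described in the text.
# Substring containment is a stand-in for alignment; sequences are invented.
def classify_read(read, endogenous_ref, construct_ref, off_target_refs, on_target_ref):
    """Label a read following the triage order described above."""
    if read in endogenous_ref:
        return "endogenous"        # identical to unedited genome; excluded later
    if read in construct_ref:
        return "construct"         # fragment of a construct used to make the edit
    if any(read in ref for ref in off_target_refs):
        return "off_target_edit"
    if read in on_target_ref:
        return "on_target_edit"
    return "unclassified"

# Hypothetical references for illustration only:
label = classify_read(
    "CCTT",
    endogenous_ref="AAAACCCCGGGG",
    construct_ref="TTTTGGGGAAAA",
    off_target_refs=["CCTTCCTT"],
    on_target_ref="GGAATTGG",
)  # "off_target_edit"
```

The paragraph notes the comparisons may be performed in any order; the cascade above simply fixes one order for the example.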
In some instances, the method includes using the selected organism, such as the plant or animal, that has the confirmed intended on-target site edit and/or absence of off-target site edits in a breeding program. For example, when the organism is a plant, the plant having the intended on-target site edit and/or absence of off-target site edits may be used in recurrent selection, bulk selection, mass selection, backcrossing, pedigree breeding, open pollination breeding, restriction fragment length polymorphism enhanced selection, genetic marker enhanced selection, double haploids and transformation. In some instances, the plant may be crossed with another plant or back-crossed so that the intended on-target site edit may be introgressed into the plant by sexual outcrossing or other conventional breeding methods. In some instances, the intended on-target site edit may be used as a marker for marker-assisted selection in a breeding program to produce plants or animals that have the phenotype of the plant or animal with the intended on-target site edit. For example, the phenotype may include an alteration in the expression level of a protein of interest whose sequence is modified, inserted or deleted by the intended on-target edit, for example, increasing or decreasing expression of the protein. Examples include but are not limited to increasing the copy number of the coding sequence that encodes the protein of interest, modifying the endogenous coding sequence that encodes the protein of interest, or modifying the endogenous promoter or regulatory elements that are driving the endogenous coding sequence encoding the protein of interest, for example, inserting a heterologous element, until the desired level of protein is detected in the sample from the organism.
Additionally or alternatively, when the protein of interest is detected or quantified in low amounts, decisions may be made on protein presence, absence, or range-specific expression; and the assays, methods, and systems may optionally include culling plants or animals that express the protein of interest in non-desired or sub-optimal amounts, or growing or breeding plants or animals which express the protein of interest in desired or optimal amounts. In certain embodiments, off-target cutting is observed prior to medical therapy that utilizes an active or inactive Cas9. In certain embodiments, the presence or absence of the intended on-target site edit and/or off-target site edits are monitored in the progeny or subsequent generations of the organisms. Any suitable method or technique may be used to monitor the presence or absence of any intended on-target site edits and/or any off-target site edits. The presence or absence of the edits at the target sites on the DNA may be determined using any suitable method or technique described herein or known to one skilled in the art. Examples include but are not limited to PCR-based methods or assays, Southern blot assays, Northern blot assays, protein expression assays, Western blot assays, ELISA assays, MIP technologies, or Next Generation Sequencing, and any combination thereof. These methods may include the use of a primer or probe of the target sequence, intended on-target edit sequence or off-target site edit sequence or combinations thereof. For example, in some embodiments, the methods may include the use of a primer or probe that hybridizes to a region comprising the intended on-target site edit or off-target site edit, or the use of a primer pair to amplify a region comprising the intended on-target site edit or off-target site edit. The following examples are offered by way of illustration and not by way of limitation.
EXAMPLES

Example 1: Use of dCas9 for Capture of Nucleic Acid
Example 2: Molecular Inversion Probe for Targeting Gene Edited Nucleic Acids—Molecular Inversion Probe Design
Example 3: Example of Identifying and Characterizing Target Sites
Example 4: Tiling Method
Example 5: Southern by Sequencing Approach
Example 6: Southern by Sequencing Bioinformatic Pipeline

The embodiments are further defined in the following Examples, in which parts and percentages are by weight and degrees are Celsius, unless otherwise stated. It should be understood that these Examples, while indicating embodiments of the disclosure, are given by way of illustration only. From the above discussion and these Examples, one skilled in the art can ascertain the essential characteristics of the embodiments, and without departing from the spirit and scope thereof, can make various changes and modifications of them to adapt to various usages and conditions. Thus, various modifications of the embodiments in addition to those shown and described herein will be apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the appended claims. The disclosure of each reference set forth herein is incorporated herein by reference in its entirety. All publications and patent applications mentioned in the specification are indicative of the level of those skilled in the art to which this disclosure pertains. All publications and patent applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference. Although the foregoing disclosure has been described in some detail by way of illustration and example for purposes of clarity of understanding, it will be obvious that certain changes and modifications may be practiced within the scope of the appended claims.
Use of the methods and compositions described herein allows for rapid and cost-effective identification and characterization of potential gene editing off-target sites in the genomes of various species. This information can be used to determine and characterize the sequence specificity of gene editing tools (guide [targeting] nucleic acids, novel enzymes). In addition, this information can be used to monitor for unintended (off intended target) gene edit events in edited genetic materials of all species including plants, microbes, viruses and mammals. In one example, a modified Cas9 protein, labeled as “dCas9”, is tagged with 6XHis tags and a FLAG epitope for subsequent extraction of the guide polynucleotide/Cas endonuclease-nucleic acid bound complex. The dCas9 protein still has the ability to bind DNA (when complexed with a guide polynucleotide target) but is unable to subsequently cut the DNA region to which it is bound. As a result, dCas9 is directed to bind to a synthetic oligonucleotide or specific genomic regions after random shearing or restriction digestion of the genome, for example, in a target site library, using guide polynucleotides corresponding to sequences of interest. After binding, the guide polynucleotide/Cas endonuclease complex bound to nucleic acid in this example can be eluted using His beads or immunoprecipitation with an anti-HA antibody. Recovery of the enriched DNA fragments can then be performed via a simple denaturation step (phenol/chloroform/ethanol, for example) or via PCR amplification directly off the bead/synthetic oligonucleotide complex, using PCR primers annealing directly to universal sequences located at the 5′ and 3′ ends of the bound oligonucleotides. Subsequently, the sequence composition of the enriched DNA pool is characterized to determine the nature (composition and abundance of each sequenced fragment species) of the enriched pool.
This information is used to determine the sequence specificity of the gene editing tools by determining the ability of these tools to recognize and bind the target sequence. The sequence motif preference or binding strength to a particular sequence can be determined with the data obtained. In turn, this information can determine gene editing tool specificity and provide a target portfolio of potential target and off-target sites within a genome of interest for any particular guide polynucleotide. In the latter example, this information is used to monitor off-target gene modifications in gene edited biological materials, for example a cell. In some embodiments, the protocol would include the following steps: 1) generation of a dCas9-gRNA complex; 2) random shearing of genomic DNA to the size of interest and generating a target site library (e.g., Illumina) of sheared fragments—OR—synthesizing a randomer nucleotide combinatorial target site library; 3) binding of the dCas9-gRNA complex to sheared genomic DNA or the target site library; 4) elution of bound DNA fragments with His beads or immunoprecipitation; 5) denaturation—OR—PCR amplification, and recovery of DNA fragments for further handling, including DNA sequencing.
This approach has several advantages including 1) the possibility of isolating multiple genomic regions in parallel including the intended target and potential off-site targets; 2) the possibility of isolating genomic regions of varying sequence composition, including sequence not known to exist in a reference genome; 3) isolation of genomic DNA regions from various lines and species of interest; 4) no requirement for extensive probe design and no need for extensive knowledge of the sequence for the same region in other lines and/or species; 5) low cost and protocols amenable to high-throughput isolation; and 6) the ability to use this approach in any line or species of interest with only a minimum knowledge of the sequence of the targeted region(s) or combinations thereof.

Methods

1) Preparation of Tagged dCas9

A tagged dCas9 was prepared in one of two ways. In one experiment, a commercial tagged dCas9 was purchased directly from New England Biolabs (NEB—Catalog Number M0652S) and used directly for the enrichment assay. The commercial dCas9 is tagged with a SNAP-tag® which is a “highly engineered version of AGT (alkylguanine DNA alkyltransferase)”. See New England Biolabs FAQ: What is the SNAP-tag®? In a second experiment, a purified tagged dCas9 was expressed in E. coli from a vector containing a tagged dCas9 gene. In the latter experiment, an expression vector containing a tagged Cas9 (SP) (ALT1)-1XFLAG-6XHis gene is transformed into E. coli BL21. Two liters of 2X-YT media plus kanamycin are inoculated with 0.1% overnight culture and grown at 37° C. until OD600˜0.7. The temperature is reduced to 16° C. and cultures are induced with 0.1 mM IPTG. Cultures are incubated at 16° C. overnight. Cells are harvested by centrifugation at 8,000 g for 20 min (JLA8.1 Rotor, 6,500 rpm). Pellets are stored at −20° C.
A 1-liter shake-flask pellet of CAS9 (SP) (ALT1)-1XFLAG-6XHis protein is re-suspended in 50 ml Buffer A (20 mM Tris, pH 8.0, 500 mM NaCl) with 2.5 U/ml Benzonase and Complete Protease Inhibitor and mixed at 4° C. for 10 minutes. The cells are lysed using a homogenizer (two passes at 25 kpsi). The lysate is clarified by centrifugation for 20 min (SS-34 rotor, 16,000 RPM). The supernatant is collected. The supernatant is passed over one 2 ml His-Pur Superflow Ni-NTA column that is pre-equilibrated with 10CV of buffer A. The column is sequentially washed with 10CV buffer A, 10CV buffer B (20 mM Tris, pH 8.0, 500 mM NaCl, 1% Triton X-100) and 5CV buffer A. The protein is sequentially eluted from the column with 5CV buffer C (20 mM Tris, pH 8.0, 500 mM NaCl, 10% Glycerol)+10 mM imidazole, 5CV buffer C+20 mM imidazole, 5CV buffer C+50 mM imidazole and 2CV buffer C+250 mM imidazole (for the E250 fraction, buffer was added and mixed well, and the column was left to sit for 10 minutes before collecting). The 4 ml E250 elution sample is diluted 1:10 to reduce the imidazole concentration to 25 mM and 20 ml is stored at 4° C. pending cation exchange chromatography. The 20 ml E250 elution sample is concentrated to 2 ml and diluted 1:10 in 20 mM Tris, pH 8.0, 10% glycerol to create 20 ml of protein sample in the following buffer condition prior to cation exchange chromatography: 20 mM Tris, pH 8.0, 50 mM NaCl, and 2.5 mM imidazole. The 20 ml protein sample (˜8.5 mg) is loaded onto a 1 ml HiTrap SP FF cation exchange column pre-equilibrated with 20 mM Tris, pH 8.0, 10% glycerol, eluted over 20CV with a 0-100% 1M NaCl gradient in 20 mM Tris, pH 8.0, 10% glycerol and collected in 1 ml fractions. The protein concentration of the final pooled sample is measured by Bradford assay using BSA as standard. The sample is aliquoted into 19 vials of 0.2 ml each and stored at −20° C.
2) Preparation of Template DNA

Different types of template DNA can be used, including a randomer nucleotide combinatorial target site library, genomic templates or large clone (BAC) templates. In this particular example, a randomer nucleotide combinatorial target site library was used for the experiment. Prior to capture and enrichment, random 24-mer single-stranded oligonucleotides flanked by universal adapters (Integrated DNA Technologies, Coralville, Iowa) were made double-stranded by primer extension in a solution containing 0.88 μM template, 88 μM primer, 1×Taq-Pro Complete, 2.0 mM MgCl2 (Denville Scientific).

3) Capture and Enrichment of Targeted Genetic Sequences

Capture of targeted genetic sequences was performed using either the commercial tagged dCas9 from NEB or the purified tagged dCas9 expressed in, and purified from, E. coli. The purified dCas9 (64 nM) was mixed with 192 nM sgRNA (crRNA and tracrRNA) in 1×Cas9 nuclease reaction buffer (100 nM HEPES, pH 7.4; 750 mM KCl; 50 nM MgCl2; 25% glycerol) in the presence of 3 μg template DNA, then incubated at 37° C. for 4 hours. Several methods can be used for recovering the bound DNA/dCas9/sgRNA complex, including immunoprecipitation with anti-FLAG antibody (monoclonal or polyclonal), bead-based pull-down or Ni-NTA agarose. The method described below will focus on the use of Dynabeads™ His-Tag magnetic beads (Thermo Fisher) for isolation and pull-down assay. A 50 μl magnetic bead re-suspension of His-Tag beads (2 mg) (Thermo, Prod. #: 10103D) was used for the pull-down assay. The His-Tag beads were washed twice with 200 μl 1×Binding/Wash buffer (50 mM Sodium-Phosphate, pH 8.0; 300 mM NaCl; 0.01% Tween™-20). DNA/protein complex (prepared in 1×Binding/Wash buffer) in 100 μl total reaction volume was added to the His-Tag magnetic beads in a microcentrifuge tube and incubated for 10 min at room temperature with rotation. After incubation, the tube was placed on a magnet for 2 min, then the supernatant was discarded.
The beads were washed 4 times with 0.3 ml 1×Binding/Wash buffer by placing the tube on the magnet for 2 min after resuspension and discarding the supernatant. To elute the protein, 100 μl of 1×His Elution buffer (300 nM Imidazole; 50 mM Sodium-Phosphate, pH 8.0; 300 mM NaCl; 0.01% Tween™-20) was added to the His-Tag beads. After incubation on a roller for 5 min at room temperature, the tube was placed on a magnet for 2 min and the supernatant containing the eluted histidine-tagged protein/DNA complex was transferred to a clean tube and used as template for PCR amplification. The capture assay with the commercial tagged dCas9 was performed under the same conditions but in the presence of 1×NEB3.1 reaction buffer. The recovery of bound DNA/dCas9/sgRNA complex was performed using commercial beads from NEB (SNAP-Capture Magnetic beads, Catalog number S9145S), following the manufacturer's recommendations and using 1×NEB3.1 buffer as immobilization buffer. The binding assay was incubated at 4° C. overnight, with mixing. After washing, the tag-bound DNA/dCas9/sgRNA complex was used as template for PCR amplification. In both instances, recovery was performed by PCR amplification. After PCR and PCR clean-up, amplified oligonucleotides were sequenced on the Illumina platform.

4) Data Analysis

Alignment of the resulting sequencing data to their respective gRNA sequences suggested modest enrichments (˜2× to 6× increase, depending on the motif analyzed) for the experiment performed with the purified tagged dCas9, in comparison to the presence of said sequences in negative control samples (e.g., no gRNA in the binding reaction). Alignments showed no specific enrichment for the experiment performed with the commercial tagged dCas9. Both experiments showed high levels of non-specific oligonucleotide sequences in both the enriched and negative control samples.
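The ~2× to 6× enrichment figure above is, in essence, a ratio of motif-matching read fractions between the enriched sample and the no-gRNA negative control. A minimal sketch of that calculation follows; the read counts are invented for the example and are not data from the experiment.

```python
# Illustrative calculation of fold enrichment relative to a negative control:
# (matching reads / total reads) in the enriched sample divided by the same
# fraction in the no-gRNA control. Counts below are invented.
def fold_enrichment(target_reads, total_reads, ctrl_target_reads, ctrl_total_reads):
    """Fraction-based enrichment of motif-matching reads vs. the control."""
    enriched_frac = target_reads / total_reads
    control_frac = ctrl_target_reads / ctrl_total_reads
    return enriched_frac / control_frac

# e.g. 600 motif-matching reads out of 100,000 enriched reads, versus
# 100 out of 100,000 in the negative control:
fold = fold_enrichment(600, 100_000, 100, 100_000)  # 6.0, i.e. a 6x enrichment
```

Normalizing by total reads per sample keeps the comparison fair when the two libraries were sequenced to different depths.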
Molecular inversion probe assays were first designed by analyzing a 100 basepair window surrounding the target site of interest. A target site of interest for CAS9 molecular characterization was defined as a desired edit site or a potential off-target site identified through an off-target assay or in-silico analysis of guide polynucleotide sequences across the genome. Targeting arms flanking the region of interest were selected based on the following assay criteria: arm length of 17-28 basepairs, distance between 5′ and 3′ targeting arms of 1-70 basepairs and predicted melting temperature of 68-72 degrees Celsius. Following design, targeting arms for each assay were linked by a common backbone sequence 30-50 basepairs in length and ordered as individual oligos with a 5′ phosphorylation. The individual 250 uM MIPs oligos are pooled in equal volumes to generate a 250 uM assay pool.

MIPs Targeting and Amplification

MIPs targeting and sequencing pool creation was accomplished via a four-step process: hybridization, circularization, exonuclease digestion and indexing/amplification. Briefly, hybridization reactions were prepared by combining 250 ng of DNA with 1.25 ul ampligase buffer (Epicentre), 0.5 ul 1 M blocking oligo, a volume of MIPs assay pool that resulted in a DNA:MIPs ratio of 500:1 to 5000:1 depending on panel size, and water to a final reaction volume of 12.5 ul. Reactions were denatured for 10 minutes at 95 degrees Celsius followed by a three-hour incubation at 60 degrees Celsius in a thermocycler with heated lid. Following incubation, hybridized MIPs were recircularized by addition of 0.2 ul of 10×Ampligase buffer, 1 ul 2 U/ul HF Phusion polymerase (New England Biolabs), 0.25 ul 100 U/ul Ampligase enzyme (Epicentre) and 0.55 ul 0.25 mM dNTP mix (New England Biolabs) to the completed hybridization reaction, while the reaction was maintained at 60 degrees Celsius.
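The stated arm-selection criteria (arm length 17-28 bp, inter-arm gap 1-70 bp, predicted Tm 68-72 °C) lend themselves to a simple filter. The sketch below uses a rough GC-content Tm formula (Marmur-Doty style) purely as a stand-in; the actual design software's Tm predictor is not specified in the text, and the example arm sequences are hypothetical.

```python
# Hedged sketch of the MIP targeting-arm criteria listed above. The Tm
# predictor is a stand-in GC-based formula, not the real design tool's model.
def predicted_tm(arm):
    """Rough Tm estimate (Marmur-Doty style): 64.9 + 41*(GC - 16.4)/N."""
    gc = arm.count("G") + arm.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(arm)

def passes_mip_criteria(arm5, arm3, gap_bp):
    """Apply the stated assay criteria to a candidate arm pair."""
    return (all(17 <= len(a) <= 28 for a in (arm5, arm3))
            and 1 <= gap_bp <= 70
            and all(68.0 <= predicted_tm(a) <= 72.0 for a in (arm5, arm3)))

# Hypothetical 25 nt arms chosen so the stand-in formula lands in range:
arm5 = "G" * 10 + "C" * 9 + "ATATAT"
arm3 = "G" * 10 + "C" * 10 + "ATATA"
ok = passes_mip_criteria(arm5, arm3, gap_bp=40)  # True
```

A real designer would use a nearest-neighbor Tm model with salt corrections; the point here is only the shape of the constraint check.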
The final circularization reaction was mixed gently, sealed and incubated at 60 degrees Celsius for 16-18 hours. Following the circularization incubation, reactions were collected by centrifugation, incubated for 1 minute at 37 degrees Celsius and stored at 4 degrees Celsius until exonuclease digestion. Exonuclease digestion to remove linear genomic DNA and un-circularized probes was performed by adding 1 ul of 20 U/ul Exo I and 1 ul of 100 U/ul Exo III (New England Biolabs) to the circularized MIP reaction from the previous step. Reactions were incubated in a thermocycler for 15 minutes at 37 degrees Celsius followed by a 2 minute inactivation at 95 degrees Celsius. Following digestion, targeted sequences were indexed and amplified by adding 12.5 ul of 2×iProof Master mix (Biorad), 0.125 ul 100 uM universal backbone forward primer, 0.125 ul 100 uM indexed backbone reverse primer, and 9.8 ul water. Reactions were denatured at 98 degrees for 2 minutes and amplified by 25 cycles of 98 degrees for 10 seconds, 60 degrees for 30 seconds, 72 degrees for 60 seconds. Resulting indexed amplicons were pooled and purified by a 1:1 Ampure XP cleanup according to the manufacturer's recommendations (Beckman). Purified amplicon pools were sequenced per Illumina recommendations on MiSeq sequencers, generating 100 basepair paired-end reads. Sequencing reads were deconvoluted into sample bins by index sequence. Per-sample reads were analyzed by identifying reads that belong to a specific MIPs assay via the 5′ and 3′ targeting arms. Reads were aligned via Bowtie version 2 to the wildtype reference that was used to design the original assays. Differences from the reference sequences were identified by mismatches in alignment and reported via SAMtools. Allele sequences for 121 loci comprising on-target and off-target loci were targeted and successfully detected in 760 plants. Alleles at both on-target and off-target loci were characterized and mutations detected.
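The step of identifying which MIP assay a read belongs to via its 5′ and 3′ targeting arms can be sketched as a prefix/suffix match. This is an illustration only: assay arm sequences and reads are invented, and real reads would then be aligned (the text uses Bowtie 2) before mismatches are called.

```python
# Illustrative sketch: a read belongs to a MIP assay when it starts with that
# assay's 5' targeting arm and ends with its 3' arm. Arms/reads are invented.
def assign_to_assay(read, assays):
    """assays: dict assay_name -> (arm5, arm3). Returns assay name or None."""
    for name, (arm5, arm3) in assays.items():
        if read.startswith(arm5) and read.endswith(arm3):
            return name
    return None

# Hypothetical two-assay panel:
assays = {"locus_A": ("ACGT", "TTAA"), "locus_B": ("GGCC", "AATT")}
hit = assign_to_assay("ACGTGGGGCCTTAA", assays)  # "locus_A"
```

Exact matching keeps the sketch small; a production pipeline would tolerate sequencing errors in the arms and strip them before alignment.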
Plants with mutations at both the desired on-target locus and additional off-target loci were characterized through subsequent generations via molecular inversion probes, confirming that the process can be used to characterize allele segregation. Potential gene editing off-target sites can be identified to optimize selection of guide polynucleotide design in an effort to reduce the number of potential, unintentional off-target edits using the methods described in Example 1. Based on criteria and considerations including nucleotide sequence composition and uniqueness of the target site within the genome to be edited, a guide polynucleotide is selected and used for gene editing. Methods to determine guide polynucleotide potential off-target sites are described in Example 1. Example 1 teaches that combinations of nuclease/gRNA designs can be tested in vitro for their ability to recognize and bind target and off-target DNA oligonucleotides or fragments. In this method, candidate gRNAs and a modified, tagged CAS9 protein that has lost its ability to cut double-stranded DNA (e.g., dCAS9) are incubated in a reaction containing a combinatorial pool of synthesized double-stranded oligonucleotides or randomly sheared genomic DNA fragments. The tagged dCAS9 protein/oligonucleotide complexes are enriched and DNA sequence analysis is performed to identify candidate off-target sites bound by the dCAS9/gRNA complex. Alternatively, bioinformatics algorithms can also be used to identify candidate off-target sites in genomes of interest. The product of Example 1 is a list and/or target site library of candidate target sites which can be used to determine the suitability of a particular gRNA design, or can be used to develop a MIPs panel to screen edited materials to survey for unintended gene edits at candidate off-target sites as described in Example 2. In this example, the panel is comprised of between 1 and 100,000 candidate target and off-target sites.
This panel is determined from the results of Example 1 or 3 or from in silico prediction. In Example 1, inclusion of a sequence in the panel is based on the ranked likelihood that the specific sequence could be targeted by a particular nuclease/gRNA combination, or on its relative abundance determined by sequence analysis (sequence counting). In practice, gene editing can be carried out by any suitable method. To determine whether the intended target edit is created, the nucleic acid can be sequenced using conventional methods or targeted as described in Example 2. Sequence results obtained from genome editing are used to characterize candidate off-target sites in edited biological materials and enable the determination of whether unintended gene editing has occurred at these sites. Using sequence information from the genomic on-target site and potential off-target site sequence, ligation mediated nested PCR (LMN-Tiling) primers are designed. Assay sensitivity and specificity is determined by the nested PCR primer design, in which two primers are designed for every 200 base pairs on alternating strands, or 400 base pair spacing on a single strand. Following primer design, DNA is extracted from lyophilized leaf punches using the EZNA Plant 96™ kit (Omega Biotek, Norcross, Ga.). Purified genomic DNA is assessed for quality and quantity with a Fragment Analyzer™ (Advanced Analytical, Ames, Iowa) and subsequently sheared to an average fragment size of 1500 base pairs with a Covaris E210™ (Covaris Inc, Woburn, Mass.). Sheared DNA is end-repaired, A-tailed, and ligated according to the protocols provided by Kapa Biosystems™ (Woburn, Mass.). Ligated adapters are custom designed with ninety-six unique, six base-pair barcodes and linked to the Illumina P7™ sequence to enable Illumina sequencing post-PCR. Following ligation, fragment libraries are enriched for intended on-target site or potential off-target site sequences by two rounds of twenty-cycle amplification.
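The primer spacing rule above (two primers per 200 bp on alternating strands, i.e. 400 bp spacing on any single strand) can be made concrete with a small position generator. This is a sketch only; real primer placement would also be adjusted for local sequence composition.

```python
# Sketch of the nested-primer spacing described above: anchor positions every
# 200 bp with alternating strands, giving 400 bp spacing per single strand.
# Positions are 0-based offsets into the region of interest.
def tiling_primer_positions(region_len, spacing=200):
    """Return a list of (position, strand) tuples, strand in {'+', '-'}."""
    positions = []
    for i, pos in enumerate(range(0, region_len, spacing)):
        positions.append((pos, "+" if i % 2 == 0 else "-"))
    return positions

sites = tiling_primer_positions(1000)
# plus-strand sites fall 400 bp apart: positions 0, 400, 800
```

The alternation is what reconciles the two phrasings in the text: 200 bp between consecutive primers overall, but 400 bp between primers on the same strand.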
Primary PCR utilizes the first primer of the nested pair as the forward primer and an adapter-specific primer as the reverse primer, anchoring one end of each amplicon. Secondary PCR pairs the adapter-specific primer with the nested PCR primer, which includes the Illumina P5™ sequence, finishing the fragments for Illumina™ sequencing. Following purification with AmpureXP™ beads (Beckman Genomics, Danvers, Mass.), fragment libraries are analyzed on the Fragment Analyzer™, pooled in equal molar ratios into ninety-six-sample pools and diluted to 2 nM. Pools are sequenced on the Illumina (San Diego, Calif.) MiSeq or HiSeq 2500™ system, generating one to two million 100 base pair paired-end reads per sample as per manufacturer protocols. Generated sequence is used to identify the on-target site sequence, off-target site sequence and integrity of the on-target site and/or off-target site. The Southern by Sequencing (SbS) application employs a sequence capture-based method to enrich sequencing libraries for sequence fragments containing constructs used in the process of making one or more intended gene edits, intended on-target site sequences, potential off-target site sequences or combinations thereof, for example, using Illumina™ or PACBIO sequencing libraries (Zastrow-Hayes, G. M., H. Lin, A. L. Sigmund, J. L. Hoffman, C. M. Alarcon, K. R. Hayes, T. A. Richmond, J. A. Jeddeloh, G. D. May, and M. K. Beatty. 2015. Southern-by-Sequencing: A Robust Screening Approach for Molecular Characterization of Genetically Modified Crops. Plant Genome 8. doi:10.3835/plantgenome2014.08.0037). A biotinylated probe library is designed and synthesized. Sequence fragments containing intended on-target site edits or potential off-target site edits of interest are analyzed as a collection and reduced to a set of unique sequences representing all bases within the collection.
A DNA probe library is designed such that nearly all bases within a construct pool, intended on-target site region, and potential off-target site region are targeted during the enrichment process. Following probe library design, next generation DNA shotgun libraries are produced for individual gene-edited events via standard molecular manipulations. With respect to a plant example, in brief, DNA is isolated from leaf punches via the Omega Biotek (Norcross, Ga.) EZNA Plant 96™ kit. Purified genomic DNA is assessed for quality and quantity with a Fragment Analyzer™ (Advanced Analytical, Ames, Iowa) and subsequently sheared by sonication to an average fragment size of 400 bp with a Covaris E210™ (Covaris Inc, Woburn, Mass.). Sheared DNA is end repaired, A-tailed, and ligated according to the protocols provided by Kapa Biosystems™ (Woburn, Mass.). The ligated Bioo Scientific (Austin, Tex.) NEXTFlex™ adapter sequences include ninety-six unique six base-pair barcodes flanked by Illumina™-specific sequences to enable sample pooling at the hybridization and sequencing stages. To support efficient pooling of samples, index barcodes are incorporated into the Illumina library construction process by adding them into Illumina's I5™ adapter and utilizing the standard Illumina barcodes in Illumina's I7™ adapter. Paired with Illumina's I7 adapter barcodes, this provides the means to run over 2000 samples together with a unique barcode identifier on each sample. Ligated fragment libraries are amplified for eight cycles according to NimbleGen™ capture protocols. Amplified libraries are once again assessed for quality and quantity with the Advanced Analytical Fragment Analyzer™, pooled in equal molar ratios in groups of 24, 48, or 96, and diluted to a working stock of 5 ng/µl. Sequence enrichment is accomplished according to the NimbleGen™ protocols, utilizing a double capture approach to increase on-target reads.
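The dual-index arithmetic above can be illustrated in a few lines. The specific counts (96 custom I5 barcodes combined with 24 I7 barcodes) are assumptions chosen only to be consistent with the "over 2000 samples" figure, not values stated in the protocol:

```python
from itertools import product

# Assumed barcode counts (hypothetical): 96 custom I5 x 24 standard I7.
i5_barcodes = [f"I5-{i:02d}" for i in range(96)]
i7_barcodes = [f"I7-{i:02d}" for i in range(24)]

# Each sample is tagged with a unique (I5, I7) pair.
pairs = list(product(i5_barcodes, i7_barcodes))  # 96 * 24 = 2304 combinations
```

Because every sample carries a distinct pair, pooled reads can be demultiplexed back to their source sample after sequencing.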
DNA shotgun libraries described above are denatured in a cocktail with hybridization buffers, SeqCap EZ Developer Reagent™, and blocking oligos corresponding to the adapter sequences in the pool. Post denaturation, the cocktail is combined with the biotinylated oligo library and incubated at forty-seven degrees Celsius for sixteen hours. Following the hybridization, the cocktail is mixed with streptavidin Dynabeads M-270™ (LifeTech, Grand Island, N.Y.). Using the DynaMag-2™ (LifeTech, Grand Island, N.Y.), the bound DNA fragments are washed according to the NimbleGen™ capture protocol. Washed and eluted library pools are amplified for five cycles, purified according to manufacturer instructions with Qiagen (Germantown, Md.) QIAquick™ columns, and then captured, amplified sixteen cycles, and purified a second time using the methods described above. Final capture library pools are quantified with the Agilent TapeStation™ and diluted to 2 nM for sequencing. Pools are sequenced on the Illumina™ (San Diego, Calif.) MiSeq™ or HiSeq 2500 System™, generating one to two million 100 base pair paired-end reads per sample. Generated sequence is used to identify fragments of any construct used in the genome editing process, any intended on-target site edits, any potential off-target site edits, and the integrity of the on-target and/or off-target sites. SbS can be used to identify one or more intended on-target site edits, potential off-target site edits, integration site, copy number, integrity, backbone presence, and rearrangement of the plasmid insertions by detecting chimeric junction sequences between transformation plasmid and genomic DNA or noncontiguous plasmid DNA, or combinations thereof. The representative sequences may be aligned to the genome or reference sequence comprising the desired intended on-target site edit. The genome or reference sequence may be human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and/or plant.
In cases where the genome has only been edited, for example, the reads are screened for any construct or plasmid used in the gene editing process to ensure that none has inadvertently integrated into the gene-edited event's genome. For example, junctions between the plasmid/construct and the genomic segment may be identified. Reads that do not align to or contain plasmid/construct sequences may be further analyzed. For example, reads may be aligned to a genome reference sequence. Endogenous reads that align to the genome not containing the intended on-target site edit or off-target site edit are identified and excluded from further analysis. The remaining reads are aligned to a reference sequence comprising the intended on-target site edit or potential off-target site edit sequence to determine whether the sequence has the intended on-target gene edit or potential off-target site edit. If desired, the reads may be extended from the gene-edited target site, such as the intended on-target edit site and/or off-target edit site, into longer contigs using clean reads. An advancement decision, for example, for a plant event, may be made based on a set of criteria drawn from the analysis result: the on-target site sequence, off-target sequence, integrity of the on-target site and off-target site, and fragments of any construct used in the process. This pipeline works well for enriched sequences of the constructs used in the process of making the intended edit, intended on-target site or potential off-target site sequences, and the flanking sequences generated by the sequence capture method. It can also be applied to whole genome shotgun sequencing of human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and/or plant genomes, including transgenic and/or gene-edited human, non-human, animal, bacterial, fungal, insect, yeast, non-conventional yeast, and/or plant cells.
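The read-screening order described above (construct/plasmid check first, then exclusion of unedited endogenous reads, then alignment against the edited reference) can be sketched as follows. Plain substring matching stands in for a real aligner, and all sequences are toy examples, not data from the examples in this document:

```python
def triage_reads(reads, plasmid_seqs, wild_type_ref, edited_ref):
    """Classify reads following the screening order described in the text:
    (1) construct/plasmid-containing, (2) unedited endogenous (excluded),
    (3) supporting the intended edit, (4) unassigned."""
    plasmid_hits, wild_type, edit_support, unassigned = [], [], [], []
    for read in reads:
        if any(p in read for p in plasmid_seqs):
            plasmid_hits.append(read)   # possible inadvertent integration
        elif read in wild_type_ref:
            wild_type.append(read)      # endogenous, unedited: excluded
        elif read in edited_ref:
            edit_support.append(read)   # evidence for the intended edit
        else:
            unassigned.append(read)
    return plasmid_hits, wild_type, edit_support, unassigned

# Toy example: the intended edit changes CC -> GG at the target site.
wild_type_ref = "AAGGTTCCAAGGTT"
edited_ref = "AAGGTTGGAAGGTT"
plasmid_seqs = ["TTTTACGT"]
reads = ["AAGGTTCC", "TTGGAAGG", "TTTTACGTAA", "CCCCCCCC"]
plasmid_hits, wild_type, edit_support, unassigned = triage_reads(
    reads, plasmid_seqs, wild_type_ref, edited_ref)
```

A production pipeline would replace the substring tests with a genome aligner and would keep the plasmid-hit reads for junction analysis rather than simply counting them.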
SbS thus provides a high-throughput pipeline that minimizes the advancement of poor gene-edited events, so that time and money are not spent on them in downstream product development stages, for example, of plant lines.
# Nitto Records Nitto Records (ニットーレコード, Nittō rekōdo) was a Japanese record label, originally published by the company Nittō Chikuonki Kabushiki Gaisha (日東蓄音器株式会社), established in Osaka on March 20, 1920. The label was also called the "Swallow Brand" because of its trademark artwork. Nitto was known as a prolific brand, with over 2,000 titles published in about five years. By 1925, it was recognized as one of the two major labels in Japan. The label produced numerous records of traditional Japanese music and art such as Jōruri, as well as solo plays of western music and orchestra recordings, although the latter remained a small share of its overall production. As more record companies with foreign capital entered the market, Nitto gradually lost its competitiveness and was taken over by Taihei Gramophone Co., Ltd. (太平蓄音器株式会社, Taihei Chikuonki Kabushiki Gaisha) in 1935. After the takeover, Taihei renamed the newly merged company Dainippon Gramophone Co., Ltd. (大日本蓄音器株式会社, Dainippon Chikuonki Kabushiki Gaisha), while maintaining the "Swallow Brand - Nitto Records" along with other labels. In one article of a long-running series (1997–1999) celebrating the 120th anniversary of the phonograph, the Kobe Shimbun wrote: "The only company which could rival Columbia Records head-to-head in the market... (Nitto was) so tightly focused on traditional arts which made them late in responding to rapidly growing demand for popular music. Moreover, it was the retirement of Morishita, who was driving Nitto's marketing strategy all alone, that determined the fate of Nitto." The Swallow Nitto brand finally disappeared in 1942, when Dainippon Gramophone was absorbed by Kodansha, a shareholder of King Records, to become only a production facility of the latter. Musicians who worked for the label included pop and jazz composer Ryoichi Hattori. Artists signed to the label included Korean soprano Yun Sim-deok.
The British Library Sound Archive holds several Nitto 78rpm discs of the nagauta genre in its collection.
https://en.wikipedia.org/wiki/Nitto_Records
Voscur is consulting on its draft programme for the Sector Leaders initiative. The proposal below has been developed by Voscur and we are now seeking sector feedback that will inform the final version of the programme. Please send your comments to: [email protected] – we would also love to hear from anyone wishing to become a Sector Leader. What do we mean by Sector Leaders? Sector Leaders embody Voscur’s strategic aim of strong and distributed leadership. By ‘Sector Leaders’ we mean a group of people able to provide a view of the sector that is valuable in both its breadth and depth. They are also able to use the position of influence that they have through their role at their organisation or their work within local communities to help Voscur: - respond to the sector’s needs, - deliver on the new Bristol In Partnership (formerly the Bristol Compact) and objectives in the new VCSE strategy and - take forward issues, opportunities and challenges to strategic partners, commissioners and decision makers within the city. Some examples of what a position of influence could be include: - A network of contacts within a local community that can be consulted to help reinforce community leadership through local decision making - Access to specific mechanisms/decision makers, e.g. public boards, community activities/groups - Delivering a project/projects that have particular relevance to an objective in the VCSE strategy - Strong sector knowledge and experience that can help shape policy and guide decision making How will Sector Leaders be different to Advocates? Advocates were elected for the purpose of sitting on public sector strategic boards and working groups, mainly led by Bristol City Council, and then reporting back to the sector. This is useful in ensuring the VCSE sector is ‘at the table’ and able to influence policy and strategy whilst gaining intelligence about local government policy. However, this way of working does not help us to empower the wider sector.
Sector Leaders on the other hand, can be those operating at senior management level in the sector and/or those working on the ground with the community. They can, therefore, be appointed from any organisation and level of seniority assuming that they meet certain criteria (see ‘Appointment Process’ below). This allows us to appoint people from a range of roles to ensure more diverse participation, richer intelligence and to be more responsive to the needs of the sector. They are not intended to just be CEOs, but anyone who speaks for, acts for, represents or mobilises a relevant community. Why do we need them? Sector Leaders help the sector become better-connected, exercise influence and provide a conduit into specific strategic bodies. Appointing people in positions of influence who are able to enact change enables us to meet these needs and achieve the sector’s aims as set out in Bristol In Partnership and the VCSE strategy. Aims: - Encourage growth and sustainability in the VCSE sector - Ensure best possible outcomes for communities served - Strengthen voice and influence of communities and the wider sector - Achieve the aims of the Bristol SDG (Sustainable Development Goals) Alliance Outcome: Sector is better - Integrated into decision-making mechanisms that affect its work - Known and understood by priority stakeholders - Valued as a peer by stakeholders that share aims and values Where will Sector Leaders operate? Priority areas are the socio-economic issues which the VCSE sector seeks to address in order to improve communities and quality of life. Sector Leaders’ roles within organisations connected to these socio-economic issues will help to realise these improvements. They are ‘where’ Sector Leaders operate. Examples of priority areas are listed below. 
These will ideally be independent of any specific organisation/individual and will work towards broad social goals: - Health and wellbeing - Poverty and inequality - Environmental sustainability and energy security - Skills and education - Inclusive employment - Economic regeneration - Sustainable food production and food security How will Sector Leaders be expected to operate? Short term: To begin with, the Sector Leaders will be a small group that consists of members of the legacy Advocate programme. All current Sector Leaders will be transferred to the new programme. The initial Sector Leader activities will revolve around quarterly meetings where they will contribute to the development of the new programme. The format of these meetings will be similar to an Action Learning Set, with Voscur acting as facilitator. Long term: The aim is to expand the programme to a larger network. Once the number of Sector Leaders exceeds 20, it will no longer be practical to meet as one group. Sector Leaders will be loosely aligned along the priority themes that they work within in order to make their activities easier to manage and to encourage relevant connections. As the overall network expands, smaller groups involved in related issues may wish to meet in person to discuss and collaborate on relevant actions and suggest new projects; Voscur will help facilitate meetings. Sector Leaders will be connected by an online forum – for example, a LinkedIn group – which would be the focal point for network-wide communication and coordinating activities. There is a difference between Voscur’s commercial work and the duties carried out by Sector Leaders. Sector Leaders will work on behalf of the sector and/or community to enact change. Voscur, on the other hand, works on request on behalf of organisations or partnerships, usually to help them engage with or connect with the sector and/or communities.
The work of Sector Leaders is to apply ‘upward pressure’; by this we mean that it: - Has been requested by or based on the needs of the community or the sector - Involves advocating for or implementing positive change on behalf of the community or sector - Is not replicating the work of an existing project - Is seen as having achievable aims - Is seen as being worth the Sector Leader’s/Voscur’s time The number of projects active in each priority area should reflect the community need. This need can be judged based on progress on Bristol’s One City Plan and the SDG Alliance goals. What will Voscur do? Voscur will act as a facilitator for the Sector Leaders programme. This will involve: - Coordinating appointment of Sector Leaders - Supporting Sector Leaders in achieving sector aims - Facilitating initial Sector Leader meetings - Providing analysis of intelligence where required - Disseminating information to sector - Seeking information or feedback from the wider sector to inform the group’s work - Checking and approving Sector Leader appointments according to the role’s criteria Proposed role description: Sector Leaders are a group of influential and passionate people from the Bristol VCSE sector. They take a principal role in the development of the sector and get the opportunity to bring about meaningful change. 
The role will involve: Routine - Work with Voscur staff and other Sector Leaders to find ways to exercise influence in a way that benefits the VCSE sector - Work with Voscur staff and other Sector Leaders to develop policy and responses on behalf of the sector - Act upon the policies and plans agreed with Voscur and other Sector Leaders and use their position to enact change on behalf of the sector - Attend relevant strategic meetings and meetings with other Sector Leaders - Report regularly to the Sector Leaders group Advocacy - Identify opportunities, issues and challenges affecting the VCSE sector - Help to develop and exercise solutions that solve VCSE challenges - Inform Voscur of important developments within the Sector Leaders’ area Appointment process Sector leaders can be suggested by anyone with an interest in the sector or individuals can put themselves forward. To promote the programme and generate suggestions for additional Sector Leaders, Voscur will publicise it across all its communication channels and through its ongoing outreach activities in local communities. The aim of this is to recruit a representative network of leaders from across the city. Voscur approves all Sector Leader appointments. 
The criteria for selection are: - The candidate’s role at an organisation or with a group of people is relevant to at least one of the priority areas under the Bristol SDG Alliance and/or the objectives within the VCSE Strategy - They are able to exercise influence or enact change (not, for example, in a statutory role) Voscur will help find opportunities for ‘buddying’ to help smaller organisations to participate Person specification - Willing and able to work effectively to exercise influence or enact change that supports the VCSE Strategy and One City Plan - Enthusiastic and passionate about the VCSE sector - Strong understanding of the issues affecting the sector - Able to lead beyond their organisation - Ability to understand complex issues and work constructively with partners and stakeholders Next steps and timeline: - Voscur AGM January: present changes from Advocates to Sector Leaders as part of VCSE Strategy and Bristol in Partnership launch, e.g. invite a selection of current Advocates and potential Sector Leaders to discuss what they do, then ask them to facilitate table discussions on the question: “What does community leadership mean to you and your community?” - February meeting: Voscur presents suggested new format for Sector Leader programme to current Sector Leaders. - Mid-March: Sector Leaders feedback on suggested programme - End of March: final Sector Leaders programme published to Sector Leaders, Voscur board and Voscur staff. Update Voscur website with details of the new Sector Leaders programme. - Early April: Voscur and Sector Leaders identify individuals to approach. Individuals are identified based on the role specification and criteria outlined above. - During April: Voscur shortlist approved by CEO. Voscur approaches shortlist of potential candidates. - During May: appointment process takes place – Coincide with first official sector leaders meeting in Q2.
https://www.voscur.org/content/sector-leaders-draft-programme
Published on May 22nd, 2019 Learning something new is always a tough task. Since most concepts are entirely new to you, putting every element in its place may seem impossible and frustrating. However, this does not imply that you are dumb. Struggling when learning only implies that your method of approach needs to be tuned to one your brain relates to. To make learning easy, you have to understand the mechanism of your brain and how best to tackle various concepts and understand topics in depth. In this article, we plot a map of how to rewire your brain and reach your maximum potential, thereby earning your spurs as a student. Since the best way to learn is by engaging professionals, we discuss how to engage thesishelpers.com to hone your skills and crack the code to your ideal learning pattern. 1. Practicing Bit by Bit Breaking a concept into small blocks makes it easier to comprehend facts and to retrieve content when needed. For instance, when working on a broad topic, you may divide the items into smaller sections, tackling one concept in each. By subdividing the content, you avoid bombarding your memory with a lot of concepts to internalize and can grasp every block of thought in depth. When doing this, however, ensure that you inter-relate every idea afterward, thus being able to handle the whole problem. Understanding your brain patterns and the ideal conditions that allow you to muster full concentration is just as important. The key to rewiring your brain for better study is identifying your weaknesses and fine-tuning your method of education to one that best suits your abilities. In your study sessions, time how long you can study without losing concentration. Afterward, take a test on the topic you were handling to find out how much your mind can internalize in each session. After learning your study patterns, prepare a program that takes these factors into consideration.
While at it, consider the mode of content you understand best, therefore, indulging thesis writers in breaking topics down to easily consumable chunks. Also, ensure that you find suitable conditions for studying that allow you to harness your concentration and channel it to learning. By putting these factors into consideration, one can plan each study session in detail and retain the most content. However, it is key to push yourself to the limits as it may be tempting to do the bare minimum in a bid to make studies less challenging. To create an optimal schedule, you may consider engaging a friend or a tutor, thus getting appropriate figures. 2. Learning New Items Frequently To rewire your brain and realize better learning habits requires continually studying and nurturing a reading culture. To do this, you may study regularly, read more material, take tests more frequently, and engage in discussions. By studying new material, you gain perspective on how to address various issues and build your prowess in multiple niches, therefore, handling classwork efficiently. For scholarly articles, you may consider engaging content curators, thus learning the structure of different material and also increasing your grasp in topic-related terminologies. To ensure that learned items stick to the mind for longer, reread newly acquired material and take tests, therefore, noting areas that need you to invest more time studying. To boost the quality of a study session, take breaks in between sessions and alternate subjects to prevent wearing your mind out.
https://www.newszii.com/articles/learning-to-learn
Madhubani art is a type of folk art that originates from the Mithila region of India and Nepal. Madhubani paintings are typically characterized by their use of bright colors and bold geometric patterns. Mandala art, on the other hand, is a type of sacred art that is often used as a tool for meditation and prayer. Mandalas are typically circular in shape and often contain complex patterns and symbols. What is Madhubani Art? Madhubani art is a style of Indian painting that is traditionally done by women in the villages of Madhubani district in Bihar state, India. The name “Madhubani” means “forest of honey” in the local Maithili language. Madhubani paintings are characterized by their use of bright colors, geometric patterns, and depictions of nature scenes. The paintings are often done on walls or floors, using a variety of natural materials like cow dung, clay, and vegetable dyes. Madhubani art has been practiced for centuries, and has recently gained popularity outside of India. Madhubani paintings are now sold in tourist markets and galleries around the world. What is Mandala Art? Mandalas are a type of art that is created with a very specific purpose in mind. They are often used as a way to help people relax and find inner peace. Mandalas can be created with a variety of different mediums, but they all have one thing in common: they are created with a very specific, intentional design.
https://www.madhubani-art.in/difference-between-madhubani-art-and-mandala-art/
Primary health care worker with shortness of breath and productive cough for two weeks. History of worsening shortness of breath, orthopnea, fever, myalgia, and altered taste sensation for the last three days. History of recent contact with a COVID-19 positive case. Patient Data Suspicion of an inhomogeneous opacity in the peripheral right lower lung zone. Left lung is well-aerated and clear. No pleural effusion or pneumothorax is seen. Findings: Scan demonstrates multiple scattered widespread ground glass opacities in both lungs, particularly affecting the lower lobes. No pleural effusion, pneumothorax, or significant mediastinal lymphadenopathy is seen. Conclusion: Multiple scattered widespread ground glass opacities in both lungs, particularly affecting the lower lobes. Considering the patient's history and exposure to the positive COVID-19 case, CT features are suggestive of COVID-19 pneumonia. The patient tested positive for COVID-19. Other laboratory investigations showed high LDH, ferritin, CRP, and creatine kinase (CK, CPK) levels. WBCs, D-dimer, CK-MB & troponin I were normal. Case Discussion Patients with SARS-CoV-2 (COVID-19) infection primarily present with respiratory tract symptoms, like cough, difficulty in breathing, and fever 1. Patients may also complain of disturbance in sense of smell & taste, fatigue, myalgia, and joint pains 1. Rhabdomyolysis (skeletal muscle damage & necrosis) can be caused by different viral and bacterial infections. Influenza, parainfluenza, cytomegalovirus (CMV), Epstein-Barr virus (EBV), herpes simplex virus (HSV), and human immunodeficiency virus (HIV) are some well-known acute viral infections associated with rhabdomyolysis. Creatine kinase (CK), a muscle enzyme, is markedly elevated in rhabdomyolysis 1. Rhabdomyolysis can infrequently be seen in COVID-19 infection; however, this association is not well known.
Recently, there is a case report of two patients who presented with rhabdomyolysis as the initial manifestation of COVID-19 infection with no respiratory symptoms 1. There is also a recent case report of a patient having rhabdomyolysis as a late complication of COVID-19 2.
https://prod-images.static.radiopaedia.org/cases/covid-19-pneumonia-103?lang=us
Meet The Assassin’s Creed IV Black Flag Voice Actors Ubisoft would like to introduce you to the actors behind the pirate legends within Assassin’s Creed IV Black Flag in this vignette. In this video, Matt Ryan (Edward Kenway), Mark Bonnar (Blackbeard), and Ralph Ineson (Charles Vane) dive into the intricacies of the colorful pirate captains that gamers will encounter in Assassin’s Creed IV Black Flag. Assassin’s Creed IV Black Flag tells the story of Edward Kenway, who falls from privateering for the Royal Navy into piracy as the war between the major Empires comes to an end. Edward is a fierce pirate and seasoned fighter who soon finds himself embroiled in the ancient war between Assassins and Templars. Set at the dawn of the 18th Century, the game features some of the most infamous pirates in history, such as Blackbeard and Charles Vane, and takes players on a journey throughout the West Indies during a turbulent and violent period of time later to become known as the Golden Age of Pirates. Assassin’s Creed® IV Black Flag will be released on the PlayStation®3, PlayStation®4, Xbox One®, Xbox 360®, Nintendo Wii U™, Windows PC and other next generation consoles. The game will be available on the PlayStation®3, Xbox 360® and Nintendo Wii U™ systems on October 29, 2013.
http://villagegamer.net/2013/08/30/meet-the-assassins-creed-iv-black-flag-voice-actors/
NEW ORLEANS — As metro New Orleans prepares for potential flooding associated with heavy rains from Tropical Storm Barry, the city's water agency is releasing an interactive map to help residents understand the city's drainage system. The Sewerage and Water Board of New Orleans explains that the system was designed around the turn of the 20th Century and much of the original equipment is still in use. It's made up of more than 68,000 catch basins, 1,400 miles of drainage pipes, hundreds of miles of open and underground canals and 120 pumps housed in 24 pump stations, stationed throughout. HOW IT WORKS Storm water runoff flows into the streets, where it enters the network through the catch basins. Storm water then travels through the system of pipes and canals to reach the pump stations, which then send it to either canals or directly into nearby waterways, including Lake Pontchartrain. The 21 smallest pumps, called "constant duty" pumps, are continuously moving groundwater that seeps into canals. Not all 120 pumps operate at the same time, because doing so could possibly overflow downstream canals and cause neighborhood flooding. But, if a pump goes offline, the idle pumps kick in and take over. When it was designed more than 100 years ago, the system could move 1 inch of water out of the city within the first hour of a storm and a half-inch of water each hour after. Intense rainstorms - like those anticipated with Tropical Storm Barry - end up dropping more than an inch of water in an hour and outpace what the system can handle. The result - flooding, until the system can catch up. Louisiana and metro New Orleans are at risk of "significant flooding" as Barry inches slowly to the coast. A surprise storm Wednesday - not associated with the tropical system - put the system to the test, as it dropped 8 inches of rain over the course of a few hours, causing significant flooding across the city, including in places not usually flood-prone.
With 10-15 inches of rain expected in the New Orleans metro area through Saturday, residents are being encouraged to make sure they have food and supplies for 72 hours in case flooding makes getting around a problem. Sandbags are being provided in many parishes and there are a small number of mandatory evacuations, like Lafitte, Crown Point, Barataria and Grand Isle.
https://www.12newsnow.com/article/weather/hurricane/interactive-map-new-orleans-pump-station-drainage-map/289-a079d78c-4388-4c31-93be-aeb444f5190c
The Ecumenical Center, through Mindwise Now, offers comprehensive psychological evaluations and assessments for both children and adults, administered by certified, highly-trained psychologists and counselors, who help to interpret and apply the results of the tests. The Center provides a wide range of testing which can assess gifted and talented, learning disabilities, ADD and ADHD, depression, developmental delays, cognitive delays, career preferences, marital compatibility, clinical behaviors, substance abuse, IQ, personality traits and emotional/behavioral problems, among other things. Psychological assessments are given to help professionals, students, children and families identify issues and challenges. When issues are discovered and good counseling applied, it leads to a fuller life where individuals can excel socially, mentally and academically.
https://www.ecrh.org/testing/mindwise-now/
It’s back to school season again. Whether you're sending your kid(s) off to college, high school, middle school, or elementary school, there are some things they must possess or must hone before starting their new journey in life. We've compiled a list of 10 things that every student needs to survive school. We'll go over each item and explain why it's important. Nutrition is important for learning because it helps kids grow strong bones, muscles, and brains. Kids who eat well perform better academically than those who don't. Kids who eat healthy foods tend to be healthier overall. They're less likely to be overweight, and they may even live longer. But nutrition isn't just about eating right. The quality of food matters too. Foods that are grown locally are fresher and taste better. Food that's organic is produced without pesticides or synthetic fertilizers. And food that's free range means chickens were allowed to roam outside instead of being confined indoors. All these factors contribute to a nutritious diet. So when you talk to your child about nutrition, emphasize the importance of eating fresh fruits and vegetables, whole grains, lean protein, and dairy products. Also discuss the benefits of drinking water and limiting sugary drinks. Water helps the body digest food and flush out toxins. Sleep is essential to our health and well-being. We need sleep to maintain our energy level, to keep our immune system strong, and to repair damaged cells. But many kids today are getting too little sleep. Kids who don't get enough sleep are at risk for poor grades, obesity, depression, anxiety, and behavioral problems. They're also more likely to engage in risky behavior, including drinking alcohol and smoking cigarettes. To help prevent these problems, parents should encourage healthy sleep habits in their kids. Here are some tips: If you're not exercising regularly, you may experience health problems later in life. 
Exercise improves your mood, reduces stress, and makes you feel better overall. But there's another reason to exercise regularly: it boosts your memory. Studies show that people who exercise regularly perform better on tests of short-term memory than those who don't. And when you exercise regularly, you build stronger muscles and bones, so you're less likely to break a bone or suffer a heart attack or stroke. Finally, exercise increases your energy level. This means you're more productive throughout the day, and you're less likely to fall asleep during class or study sessions. So whether you're a student or not, exercising regularly really has a lot of benefits.

Attitude is everything. A positive attitude means being happy, optimistic, and confident. Students who are positive tend to be happier, more successful, and better able to cope with challenges. Students who are negative often feel unhappy, depressed, and stressed out. They're not very productive, and they struggle academically. When students are positive, they're more likely to learn and perform well. They're more likely to achieve academic goals, and they're more likely to become leaders in their communities. A positive attitude helps kids stay motivated and focused throughout the day. It makes them more resilient when things go wrong. And it gives them the confidence to take risks and try new things. A positive attitude is also contagious: kids who are positive influence others around them. So if you want your child to succeed in school, help him develop a positive attitude.

Students who study well tend to perform better in school than those who don't. So, what does it take to be a student who studies well? First, students must understand the importance of studying. They should realize that learning is essential to succeeding in school. Students who study well develop a habit of studying, and they're able to maintain this habit throughout the school day. Second, students must learn how to study effectively.
This means developing effective study skills, including organizing materials, setting goals, taking notes, reviewing material, and summarizing information. Finally, students must practice these study skills over time. The more they practice them, the better they become at studying. If students master these three steps, they'll be ready to succeed in school.

Students who lack time management skills often find themselves overwhelmed by too many assignments, projects, tests, and exams. They feel stressed out and frustrated because they're unable to complete everything on time. To avoid this problem, students must learn to prioritize tasks and set realistic deadlines for each assignment. Students should also be aware of the amount of work required for each task and the amount of time needed to complete each project. If students fail to plan ahead, they may end up working late at night and missing important classes. This leads to poor grades and makes them ineligible for scholarships and financial aid. Students should also keep track of their progress throughout the semester. This helps them identify areas where they need improvement and allows them to adjust their study habits accordingly. Finally, students should use technology to help them stay organized. For example, they can create a calendar on Google Calendar to remind them when assignments are due, and they can use Evernote to store notes and reminders.

Students who organize themselves well tend to be better prepared for school and even for life after graduation. They're able to keep track of assignments, study materials, and deadlines. They also tend to be better at managing their time and money. And they're more likely to complete homework and projects on time. If you're looking to improve your organizational skills, here are some tips to help you stay organized:

Communication skills are important for every student, especially those who plan on attending college.
Students who communicate well tend to be more successful academically and socially. Students who lack communication skills often struggle with homework assignments, tests, and exams. They may not understand instructions, may not ask questions, or may fail to complete tasks. To help students improve their communication skills, parents should encourage them to practice speaking out loud and writing down ideas. This helps students learn to express themselves clearly and effectively. Another way to improve communication skills is to teach students how to listen. Listening is an essential skill for any student because it allows him/her to understand others' points of view. Finally, students should learn to work together. Collaboration improves teamwork and increases productivity. Working together means working with others, sharing information, and solving problems.

Students who lack social skills often struggle academically. They may be shy, withdrawn, or unable to communicate effectively with others. These students tend to avoid group activities and prefer individual pursuits. They're not alone: many kids today suffer from poor social skills, and this problem isn't limited to elementary schools. If you want to help these students succeed in school, you must teach them social skills. Social skills include things like making friends, dealing with conflict, and understanding body language. To help students develop social skills, try teaching them empathy. Empathy is the ability to understand another person's feelings and emotions. It helps students learn to relate to others, teaches them to put themselves in another person's shoes, and helps them recognize when they're hurting someone else. Teach students to also use positive self-talk. Positive self-talk is talking to yourself in a way that makes you feel better about yourself. You might say, "I'm glad I didn't get mad when he said that."
When you practice positive self-talk, you'll become more confident and comfortable in social situations. This confidence will help you build friendships and improve your social life.

Students who take responsibility for their own learning tend to be more successful than those who blame others for their failures. If you're responsible for your own learning, you'll learn faster because you won't waste time blaming others. Instead, you'll spend your time studying and practicing. When you're responsible for your learning, you'll also feel better about yourself. You'll realize that you're capable of succeeding, rather than feeling like a failure. This is true whether you're taking classes at school or working toward a career goal. So what does this mean for your kids? Let them take ownership of their lives and their futures. And remind them that there's no shame in failing. Everyone fails sometimes, and failure is also a chance to learn.

In conclusion, students need to have a positive attitude toward learning, be willing to work hard, and be able to manage their time well. They also need to understand that education isn't only about grades and test scores: it's about developing skills and knowledge that will serve them throughout their lives. And finally, they need to realize that being successful doesn't mean being perfect; it means working hard to achieve your goals and doing whatever it takes to succeed.
https://conferencecentergtcc.com/10-things-students-need-to-succeed-as-they-go-back-to-school/
Abstract: We perform a comparison, object-by-object and statistically, between the Munich semi-analytical model, L-Galaxies, and the IllustrisTNG hydrodynamical simulations. By running L-Galaxies on the IllustrisTNG dark matter-only merger trees, we identify the same galaxies in the two models. This allows us to compare the stellar mass, star formation rate and gas content of galaxies, as well as the baryonic content of subhaloes and haloes in the two models. We find that both the stellar mass functions and the stellar masses of individual galaxies agree to better than $\sim0.2\,$dex. On the other hand, specific star formation rates and gas contents can differ more substantially. At $z=0$ the transition between low-mass star-forming galaxies and high-mass, quenched galaxies occurs at a stellar mass scale $\sim0.5\,$dex lower in IllustrisTNG than in L-Galaxies. IllustrisTNG also produces substantially more quenched galaxies at higher redshifts. Both models predict a halo baryon fraction close to the cosmic value for clusters, but IllustrisTNG predicts lower baryon fractions in group environments. These differences are due primarily to differences in modelling feedback from stars and supermassive black holes. The gas content and star formation rates of galaxies in and around clusters and groups differ substantially, with IllustrisTNG satellites less star-forming and less gas-rich. We show that environmental processes such as ram-pressure stripping are stronger and operate to larger distances and for a broader host mass range in IllustrisTNG. We suggest that the treatment of galaxy evolution in the semi-analytic model needs to be improved by prescriptions which capture local environmental effects more accurately.

Submission history: From Mohammadreza Ayromlou. [v1] Wed, 29 Apr 2020 18:00:00 UTC (2,524 KB). Subject: astro-ph.GA.
https://arxiv.org/abs/2004.14390
CROSS REFERENCE TO RELATED APPLICATIONS

This application is related to earlier filed U.S. Provisional Patent Application No. 62/395,823, filed on Sep. 16, 2016 and entitled CENTERED, LEFT- AND RIGHT-SHIFTED DEEP NEURAL NETWORKS AND THEIR COMBINATION. The applicants claim priority to this application, which is incorporated by reference in its entirety herein.

FIELD OF THE INVENTION

The present invention relates generally to speech recognition and, more particularly, to systems and methods for speech recognition based on time-shifted models and deep neural networks (DNNs).

BACKGROUND OF THE INVENTION

Automatic speech recognition (ASR) technology has advanced rapidly with the increasing computing power available in devices of all types. It remains, however, a computationally intensive activity. There remains a need to process speech using neural networks and other architectures that can be trained efficiently with available resources.

SUMMARY OF THE INVENTION

According to an embodiment of the present invention, deep neural networks (DNNs) are time-shifted relative to one another and trained. The time-shifted networks may then be combined to improve recognition accuracy. The approach is based on an automatic speech recognition (ASR) system using DNNs. Initially, a regular ASR model is trained. Then a top layer (e.g., a SoftMax layer) and the last hidden layer (e.g., Sigmoid) may be fine-tuned with the same data set but with the feature window left- and right-shifted. That is, for regular DNN training, the feature window takes n frames from the left and n frames from the right. In this approach, one fine-tuning (left-shifted) takes (n+n/2) frames from the left and (n−n/2) frames from the right, and the other fine-tuning (right-shifted) takes (n−n/2) frames from the left and (n+n/2) frames from the right. In this way, we have the left-shifted networks, the regular (centered) networks, and the right-shifted networks.
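The windowing scheme just described can be sketched as follows. This is a minimal sketch, not the patent's implementation: the function name is invented, and rounding n/2 down to an integer is an assumption (it matches the 10-left/4-right split used later in the text for n = 7).

```python
def context_window(center, n, shift=0):
    """Frame indices of the feature window around frame `center`.

    shift = 0 is the regular centered window (n frames each side);
    shift = +1 is the left-shifted window, taking n + n//2 frames
    from the left and n - n//2 from the right; shift = -1 is the
    right-shifted window, with the counts swapped.
    """
    left = n + shift * (n // 2)
    right = n - shift * (n // 2)
    return list(range(center - left, center + right + 1))

# With n = 7 every window has 15 frames; the left-shifted window
# starts 10 frames before the center frame and ends 4 frames after it.
assert len(context_window(100, 7)) == 15
assert context_window(100, 7, shift=1) == list(range(90, 105))
```

All three windows keep the same total width, so the same network input dimensionality serves the centered and shifted fine-tunings.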
From these three networks, four combination networks may be generated: left- and right-shifted; left-shifted and centered; centered and right-shifted; and left-shifted, centered, and right-shifted. The centered networks are used to perform the initial (first-pass) ASR. Then the other six networks are used to perform rescoring. The resulting lattices may then be combined using ROVER (recognizer output voting error reduction) to improve recognition performance. According to one embodiment of the invention, a system for training deep neural networks using centered and time-shifted features includes a memory and a processor. The memory includes program instructions for training DNN models, preparing automatic speech recognition features, aligning units using left-shifted, centered and right-shifted features, and storing audio and transcription data for training. The processor is coupled to the memory for executing the program instructions to generate: a first DNN having a plurality of layers based on the centered data; a second DNN based on the first DNN that shares the same number of layers as the first DNN, shares at least two bottom layers with the first DNN, and in which the remaining layers are trained using the left-shifted features in the memory; and a third DNN based on the first DNN that shares the same number of layers as the first DNN, shares at least two bottom layers with the first DNN, and in which the remaining layers are trained using the right-shifted features in the memory. The processor according to one embodiment further receives the features, audio and transcription data and assigns corresponding data to levels of the first, second and third DNNs to create trained first, second and third DNN networks that, when combined, produce a combined trained network transcription output for audio inputs.
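ROVER, mentioned above, first aligns the competing hypotheses into a word transition network and then votes slot by slot. The toy sketch below shows only the voting step and assumes the hypotheses are already aligned (with an empty string marking a deletion); the function name and that simplification are illustrative, not part of the patent.

```python
from collections import Counter

def rover_vote(hypotheses):
    """Word-level majority vote over pre-aligned hypotheses.

    Each hypothesis is a list of words of equal length; '' marks a
    deletion. Real ROVER builds the alignment itself and can weight
    votes by confidence scores, which this toy version omits.
    """
    combined = []
    for slot in zip(*hypotheses):
        word, _ = Counter(slot).most_common(1)[0]
        if word:  # skip slots where the majority voted for a deletion
            combined.append(word)
    return " ".join(combined)

hyps = [["the", "cat", "sat"],
        ["the", "bat", "sat"],
        ["the", "cat", "sad"]]
assert rover_vote(hyps) == "the cat sat"
```

Because each of the seven systems (three single networks plus four combinations) errs on different frames, the vote can recover words that any single system misrecognizes.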
According to another embodiment of the invention, the processor further executes the program instructions and processes an audio file to create a first-pass recognition lattice corresponding to the first DNN, and subsequently re-scores the lattices based on the second and third DNNs and their combination with the first DNN. The program instructions in the memory further include program instructions for combining the first, second and third DNNs, including based on scoring using ROVER. According to still other embodiments, the second DNN may be based on left-shifted features that are shifted to the left more than the right-shifted features are shifted to the right for the corresponding third DNN. The third DNN may be based on right-shifted features that are shifted to the right more than the left-shifted features are shifted to the left for the corresponding second DNN. Alternatively, the right-shifted and the left-shifted features may be shifted the same amount in time. According to still another embodiment of the invention, a method for training deep neural networks using centered and time-shifted features includes: preparing a deep neural network (DNN) for automatic speech recognition based on automatic speech recognition features, audio data, transcript data, lexicon and phonetic information; training a first DNN having a plurality of layers from an automatic speech recognition training tool based on the features centered in time; preparing left-shifted features and right-shifted features; fine-tuning the top two layers of the first trained DNN based on the left-shifted and right-shifted features to create second and third respective trained DNNs sharing the bottom layers with the first DNN and each having its own respective top layers; using the first DNN for a first-pass recognition on audio data; and combining the second and third DNNs with the first DNN to re-score the transcription of the audio and combine the output of the first, second and third DNNs to
increase the accuracy compared to using the first DNN only. According to still another embodiment of the invention, a computer program product includes a non-transitory computer usable medium having computer program logic therein; the computer program logic includes configuring logic, receiving logic, shifting logic, training logic, conducting logic and combining logic. The configuring logic causes the computer to configure at least three deep neural networks (DNNs), each having a plurality of levels, to be trained for transcribing audio data. The receiving logic causes the computer to receive audio data and transcription data corresponding to the audio. The shifting logic causes the computer to prepare features based on left-shifted, centered and right-shifted timing. The training logic causes the computer to train a first one of the DNN networks based on features having centered timing. The conducting logic causes the computer to conduct iterative training to create a second DNN based on the features having left-shifted timing and a third DNN based on the features having right-shifted timing, and the combining logic causes the computer to combine outputs from the first, second and third DNNs to create a combined trained DNN that has increased accuracy compared to using the first DNN only. The outputs of the first, second and third DNNs may be combined using at least one of averaging, taking the maximum, or training additional neural network layers.

DETAILED DESCRIPTION

An approach to training time-shifted deep neural networks (DNNs) and combining these time-shifted networks is described herein. The approach is based on an automatic speech recognition (ASR) system using DNNs. FIG. 1 depicts an illustrative DNN network structure 100. Referring to FIG. 1, DNN 100 includes an input feature layer 110, hidden layers 120, and a triphone target output layer 130. Each layer includes a plurality of nodes 140 and, between layers, all nodes 140 are connected. Initially, a regular ASR model is trained. Then a top layer (e.g., a SoftMax layer) and the last hidden layer (e.g., Sigmoid) may be fine-tuned with the same data set but with the feature window left- and right-shifted. That is, for regular DNN training, the feature window takes n frames from the left and n frames from the right. In this approach, one fine-tuning (left-shifted) takes (n+n/2) frames from the left and (n−n/2) frames from the right, and the other fine-tuning (right-shifted) takes (n−n/2) frames from the left and (n+n/2) frames from the right. In this way, the left-shifted networks, the regular (centered) networks, and the right-shifted networks are available to contribute to the output. From these three networks, four combination networks may be generated: left- and right-shifted; left-shifted and centered; centered and right-shifted; and left-shifted, centered, and right-shifted. The centered networks are used to perform the initial (first-pass) ASR. Then the other six networks are used to perform rescoring. The resulting lattices may then be combined using an error reduction or optimization technique such as recognizer output voting error reduction ("ROVER") to improve recognition performance. One can use available tools to train a deep neural network (DNN) triphone model using the Kaldi, RWTH ASR, or other toolkits, which have standard components like DNN, triphone, linear discrimination analysis ("LDA"), etc. Other DNN models may be used in some embodiments; for example, Long Short-Term Memory (LSTM) neural networks, convolutional neural networks (CNNs) or recurrent neural networks (RNNs) may be used, among others. To train a DNN triphone model, audio and corresponding transcription are needed. This type of data can be obtained from the Linguistic Data Consortium (LDC) or other channels. In addition, word pronunciations are needed. One can use the CMU pronunciation dictionary for this purpose. For an out-of-vocabulary word, generally a grapheme-to-phoneme tool is used to predict the word's pronunciation.
To train a triphone model, a linguistic grouping may be prepared. This can be obtained from standard linguistic textbooks, with groupings such as voicing, labial, dental, plosive, etc. In one example of an embodiment of the invention, the RWTH ASR Toolkit may be used along with audio data having associated transcriptions. Illustrative data may also include word pronunciation data, the RWTH grapheme-to-phoneme conversion tool, and a general linguistic question list. For example, there may be 4501 classes in the triphone decision tree grouping. The audio has a 16 kHz sampling rate for this example but may be any rate. The acoustic features are standard MFCC features, which have a frame size of 25 ms, a frame shift of 10 ms, and an output size of 16 coefficients per frame. MFCC features are transformed with LDA with a window size of 9 frames and an output size of 45. The initial acoustic models may be trained with traditional GMM modeling to obtain the alignment, the triphone groupings, and the LDA transformation. FIG. 2 depicts an illustrative image of an alignment that corresponds to an audio file 210, a feature series 220, and phoneme alignments 230 that may be generated according to one illustrative embodiment of the invention. Referring to FIG. 2, the horizontal axis represents time in seconds and the vertical axis is divided into three sections: the audio waveform (top) 210, the feature vectors (middle) 220, and the phoneme alignments (bottom) 230. The symbols (sil, @, t, E, n, etc.) are phoneme representations. The vertical long bars among the phoneme alignments 230 indicate boundaries between phonemes. In between the feature vectors and phoneme alignments, 15 bars 250 and one bar 260 symbolize that, at one time frame, a phoneme corresponds to 15 consecutive feature frames during the training. These 15 features are centered at the time frame: 7 frames on the left and 7 frames on the right of bar 260.
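The 15-frame context described above (7 frames on each side of the center frame, each a 45-dimensional LDA vector) concatenates to a 675-dimensional input per frame. A minimal sketch follows; the function name is invented, and clamping edge frames (repeating the first or last frame near utterance boundaries) is an assumption, not something the text specifies.

```python
import numpy as np

def concat_context(features, t, n_left=7, n_right=7):
    """Concatenate consecutive feature frames around time t into one vector.

    `features` is a (T, D) array of per-frame vectors. With D = 45 LDA
    coefficients and 7 frames on each side, the result has
    15 x 45 = 675 dimensions, matching the example in the text.
    """
    T = len(features)
    idx = [min(max(t + k, 0), T - 1) for k in range(-n_left, n_right + 1)]
    return np.concatenate([features[i] for i in idx])

lda = np.zeros((1000, 45))          # stand-in for real LDA features
assert concat_context(lda, 500).shape == (675,)
```

The same helper covers the shifted windows by passing, e.g., `n_left=10, n_right=4` for left-shifted training, keeping the input dimensionality unchanged.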
The overall image is a display of the time-centered feature frames and phoneme alignment for DNN model training. For example, after the initial modeling, fifteen consecutive LDA features (7 frames from the left and 7 frames from the right, as shown in FIG. 2) are concatenated to form a 675-dimension vector per frame. The concatenated features in this example may be first mean and variance normalized and then fed to the DNN training. It will be understood that the window size and output size may be changed, along with the dimensionality of the vectors. The DNN model may be trained first with supervised pre-training, followed by fine-tuning. According to one illustrative example, the DNN has five hidden layers 120 with 1280 nodes each. The output SoftMax layer has 3500 nodes. The training is performed on a CUDA-enabled GPU machine. Both the Kaldi and RWTH toolkits provide recipes for supervised pre-training and fine-tuning. In pre-training, the first hidden layer is trained and fixed; then the second hidden layer is added, trained, and fixed; and so on. During fine-tuning, the DNN learning rate is controlled using a Newbob protocol. After each iteration, the new DNN model is evaluated against a development data set on the frame classification error. The new learning rate depends on the improvement in the frame classification error, and the fine-tuning stops when the improvement is very small. It will be understood this is only one example and that more or fewer layers may be used and the number of nodes may be changed as desired and depending on the application.

Left- and Right-Shifted DNN Networks

FIG. 3 is similar to FIG. 2 except that the features 220 used in DNN model training are left-shifted. That is, there are 10 frames on the left of bar 260 and 4 frames on the right of bar 260 in reference to the phoneme time frame. FIG. 4 is similar to FIG. 2 except that the features 220 used in DNN model training are right-shifted. That is, there are 4 frames on the left of bar 260 and 10 frames on the right of bar 260 in reference to the phoneme time frame. For the left- and right-shifted DNN training, according to one embodiment of the invention, fine-tuning of the top two layers (the last hidden layer and the top layer) is performed using time-shifted features. Specifically, the left-shifted training takes 10 frames from the left and 4 frames from the right (see FIG. 3), and the right-shifted training takes 4 frames from the left and 10 frames from the right (see FIG. 4). FIGS. 5A and 5B highlight that, according to one embodiment of the invention, the last two layers may be trained using, respectively, the left-shifted and the right-shifted features. FIGS. 5A and 5B are similar to FIG. 1 except that in FIG. 5A only the top two layers 510 are trained (fine-tuned) and the input features are left-shifted, and in FIG. 5B only the top two layers 520 are trained (fine-tuned) and the input features are right-shifted. Both of the left-shifted and right-shifted networks shown in FIGS. 5A and 5B use the regular trained networks as the initial parameters. Instead of taking equal numbers of frames from the left and the right, this approach takes uneven numbers of frames from the left and the right. It will be understood, however, that this example is illustrative and that more or fewer layers may be used for training the left- and right-shifted networks. For example, the top 1, 2, 3, 4 or more layers may be used in some embodiments for training, depending on implementation.

Combining the Left-Shifted, Centered, and Right-Shifted Networks

The regular centered DNN model is used for the first-pass recognition. To perform the ASR, one needs to prepare a language model and a lexicon. One can download text data from websites (e.g., CNN, Yahoo News, etc.). After that, language modeling tools such as SRILM or IRSTLM can be used.
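The parameter arrangement described above, where a shifted network starts from the regular network and fine-tunes only its top two layers while literally sharing the bottom layers, can be sketched as follows. The layer names, shapes, and dictionary representation are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Toy parameter store for a trained "regular" network.
rng = np.random.default_rng(0)
regular = {f"hidden{i}": rng.standard_normal((8, 8)) for i in range(1, 6)}
regular["softmax"] = rng.standard_normal((8, 4))

def clone_for_finetuning(net, trainable=("hidden5", "softmax")):
    """Start a shifted network from the regular one.

    Non-trainable (bottom) layers reference the SAME arrays, so their
    forward-pass outputs can be computed once and reused by all three
    networks; the top two layers are copied so fine-tuning them does
    not disturb the regular network.
    """
    return {name: (w.copy() if name in trainable else w)
            for name, w in net.items()}

left_shifted = clone_for_finetuning(regular)
assert left_shifted["hidden1"] is regular["hidden1"]      # shared bottom layer
assert left_shifted["softmax"] is not regular["softmax"]  # independent top layer
```

This sharing is what later lets the combined system add only four layers of extra computation (two per shifted network) on top of the centered forward pass.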
The lexicon can be prepared similarly to the lexicon used in training, using a lexicon dictionary and a grapheme-to-phoneme tool. FIG. 6 and its component parts FIGS. 6A-6D depict an example output for a recognition of audio files or streams according to an embodiment of the present invention. Referring to FIG. 6, the horizontal axis represents time; that is, the position of a node on the horizontal axis represents its time in the audio file (from left to right). The nodes are candidate recognition results for an audio file or stream. These nodes are interconnected, and they form different paths (arcs) from the start to the end. Each path represents one hypothesis relating to the corresponding audio. For each arc, according to one illustrative embodiment, there is an input word, an acoustic score, and a language model score in the format of "word/score1/score2", together with timing information. The best path is selected based on the scores (acoustic and language) associated with each word. Usually the ASR output includes the top best recognition results and lattices. FIG. 7 depicts an illustrative computation of the DNN networks and their combination. The center networks are the regular (centered) trained networks, the top networks are the left-shifted networks, and the bottom networks are the right-shifted networks. The computation of the first four hidden layers of the network is shared. After computation of the left-shifted, centered, and right-shifted networks, their outputs are combined, for example as shown. The combination operation can be handled in a variety of ways; for example, it could be based on an average, a maximum, or an additional trained combination neural layer. After the initial recognition, the left-shifted and right-shifted network outputs (acoustic scores) are computed (see FIG. 7). Because the left-shifted and right-shifted networks share several layers (all except the top two layers) with the centered networks, the intermediate network outputs can be shared.
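One of the combination options mentioned above is averaging: the networks' scores are averaged before the SoftMax, and SoftMax is applied once to the average. A minimal sketch, with illustrative function names:

```python
import numpy as np

def softmax(x):
    """Numerically stable SoftMax over a score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def combine_acoustic_scores(*pre_softmax):
    """Average pre-SoftMax scores from several networks, then apply
    SoftMax once, as described for combining the left-shifted,
    centered, and right-shifted outputs."""
    return softmax(np.mean(pre_softmax, axis=0))

left = np.array([1.0, 2.0, 0.5])
center = np.array([1.2, 1.8, 0.4])
right = np.array([0.8, 2.2, 0.6])
p = combine_acoustic_scores(left, center, right)
assert np.isclose(p.sum(), 1.0) and p.argmax() == 1
```

Averaging before the SoftMax (rather than averaging probabilities) keeps the extra cost per combination to a single SoftMax over the averaged scores.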
This approach incurs only four layers of additional computation (two for the left-shifted networks and two for the right-shifted networks). The computation of the output may be done with a SoftMax computation. For example, for the combination of the left-shifted, centered, and right-shifted networks, these scores (without SoftMax) are averaged and then SoftMax is applied. According to this embodiment, minimal computation is incurred to produce the combination acoustic scores. Using these different types of scores, the lattice can be re-scored and new lattices produced. These lattices in the end are combined using the standard ROVER (Recognition Output Voting Error Reduction) approach (see FIGS. 8 and 9). Alternatively, using these different types of scores, new lattices can be generated in a regular recognition process. To speed up these recognition processes, the language model can be derived from the first-pass lattice produced earlier. Again, these lattices in the end are combined using the standard ROVER approach. FIG. 8 depicts a method of training the left-shifted and right-shifted DNNs and, in turn, applying the training results to a recognition process. Referring to FIG. 8, in 800, features, audio and corresponding transcripts, lexicon and phonetic information are prepared in order to train a DNN. In 810, a DNN network is trained from ASR or a similar recognition training tool based on the audio and transcripts. In 820, the left- and right-shifted features are then prepared as described above for the top two layers (but more or fewer may be chosen). In 830, the top two layers (or more or fewer, depending on implementation) of the regular trained DNN networks are fine-tuned using the left- and right-shifted features. In 840, the regular DNN network is used for first-pass recognition. In 850, the left- and right-shifted networks and their combination with the regular networks are used for rescoring, and the final results are combined using ROVER to realize a more accurate DNN system. FIG. 9 depicts an illustrative system 900 for training DNNs using the left-shifted, centered, and right-shifted features. Referring to FIG. 9, a frame-based DNN system 900 receives inputs that may be stored in one or more databases including, for example, audio inputs corresponding to an audio stream, transcripts transcribing the audio stream with timing information, and various other inputs including ASR data and toolkits, and pronunciation data. The pronunciation data may further be prepared using tools such as grapheme-to-phoneme conversion tools. The DNN and other toolkits may also be stored in the databases and may be stored locally or remotely and be accessible over a network. The databases may be accessed by a server or other computer that includes a processor and memory. The memory (including the database) may also store the data, the toolkits and the various inputs, including the audio and transcription inputs. The memory and processor may also implement the neurons of the neural networks that are set up and fine-tuned according to the techniques described herein. A DNN training engine 910 may be coupled to the database (or memory) and receive audio and transcription data. The training engine 910 may output left-shifted and right-shifted features 920. A DNN training engine 930 may then operate on the left-shifted and right-shifted features, the audio and transcription data and the database to produce a set of trained DNN networks 940, including the left-shifted and right-shifted DNN networks shown in FIGS. 5A and 5B. The processor executes program instructions to run the various programs based on the inputs to achieve trained DNN networks using the left-shifted, centered, and right-shifted features. The combined DNN training engine 950 uses the centered DNN network to do a first-pass recognition on an audio stream with corresponding transcript.
It then uses the time-shifted DNN networks and their combination with the regular DNN to produce additional scoring results. The results are combined using ROVER or another technique described herein. The combined DNN engine may also integrate language modeling data to facilitate scoring, rescoring and combining the outputs to produce a combined recognition result. Once trained, the trained networks may then be used to process new audio or other files to facilitate scoring transcriptions of words or their constituent parts in a stand-alone transcription, or to annotate transcriptions being produced using a frame-based DNN approach, for example, to improve and acoustically re-score and to combine results with traditional ASR techniques. By training with left-shifted and right-shifted features, in addition to centered features, the training is augmented with variations of data that enable better discrimination and accuracy improvement in the overall trained network. FIG. 10 depicts an illustrative server or computer system for implementing the systems and methods described herein according to an embodiment of the invention. For example, referring to FIG. 10, a processor is coupled to a memory, to input/output devices (including, illustratively, a keyboard, mouse, display, microphone, and speaker) and to a network. The memory includes computer program instructions stored therein that may be accessed and executed by the processor to cause the processor to create the DNN structures; load data and toolkits into memory from databases or from a network; execute the toolkits; perform DNN training; create left-shifted and right-shifted feature sets; and perform the methods and additional DNN training on time-shifted features and output combinations described herein.
In this regard, for example, the memory may include toolkits and tools 1015 that may be resident in memory or may be stored in database 900, program instructions 1020 to implement training engines for training centered, left-shifted, right-shifted, or combined networks as described according to embodiments of the present invention, and language models 1010 and data associated with inputs, the training process, or outputs, or associated with using a trained, combined DNN as described herein for transcribing audio input received, for example, from a network. While particular embodiments of the present invention have been shown and described, it will be understood by those having ordinary skill in the art that changes may be made to those embodiments without departing from the spirit and scope of the present invention. BRIEF DESCRIPTION OF THE FIGURES The above-described features and advantages of the invention will be more fully appreciated with reference to the appended drawing figures, in which: FIG. 1 depicts an illustrative image of a DNN network. FIG. 2 depicts an illustrative image of an alignment that corresponds to an audio file, a feature series, and phoneme alignments that may be generated according to one illustrative embodiment of the invention. FIG. 3 depicts a network with left-shifted features according to an embodiment of the invention. FIG. 4 depicts a network with right-shifted features according to an embodiment of the invention. FIG. 5A depicts a network in which the top two layers are trained (fine-tuned) and the input features are left-shifted according to an embodiment of the invention. FIG. 5B depicts a network in which only the top two layers are trained (fine-tuned) and the input features are right-shifted according to an embodiment of the invention. FIG. 6 and its constituent parts 6A-6D depict an illustrative output for the recognition of audio files or streams according to an embodiment of the present invention. FIG.
7 depicts combinations of the outputs of centered, left-shifted, and right-shifted DNN networks according to an embodiment of the invention. FIG. 8 depicts a method of training the left-shifted and right-shifted DNNs and, in turn, applying the training results to an illustrative recognition process according to an embodiment of the invention. FIG. 9 depicts an illustrative system for training a DNN using the left-shifted, centered, and right-shifted features according to an embodiment of the invention. FIG. 10 depicts an illustrative system for implementing systems and methods described herein according to an embodiment of the invention.
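The output-combination stage described above (FIGS. 7 and 8) can be sketched as a frame-level weighted average of the three networks' posteriors. This is a simpler stand-in for word-level ROVER voting, not the patent's implementation; the shapes, uniform weights, and softmax-generated scores are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax, used here to simulate per-frame DNN posteriors."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def combine_posteriors(posterior_list, weights=None):
    """Weighted average of per-frame posterior matrices from several DNNs.

    Each entry is a (num_frames, num_states) array. Averaging distributions
    is a simple frame-level alternative to word-level ROVER voting.
    """
    stacked = np.stack(posterior_list)          # (num_nets, frames, states)
    if weights is None:
        weights = np.full(len(posterior_list), 1.0 / len(posterior_list))
    weights = np.asarray(weights, dtype=float)[:, None, None]
    return (stacked * weights).sum(axis=0)

rng = np.random.default_rng(0)
centered = softmax(rng.normal(size=(50, 200)))  # centered-network scores
left = softmax(rng.normal(size=(50, 200)))      # left-shifted-network scores
right = softmax(rng.normal(size=(50, 200)))     # right-shifted-network scores
combined = combine_posteriors([centered, left, right])
```

Because the combination is a convex average of probability distributions, each frame of the combined output remains a valid distribution over states, and it can be fed to the decoder or used to re-score hypotheses.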
What draws ‘lone wolves’ to the Islamic State? Author Professor of Modern Middle Eastern History, University of California, Los Angeles Disclosure statement James L. Gelvin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment. As a professor of modern Middle Eastern history, I have spent the majority of my professional life studying the region, its culture, society and politics. In recent years, I have researched and written about IS and its terrorist activities. While other experts and I have long looked at how radicalization occurs, some new ideas are emerging. Of lone wolves, flaming bananas and machismo Like this recent attack in New York, many IS attacks around the globe are carried out by individuals the media have dubbed “lone wolves” – that is, freelancers who act without the direct knowledge of the IS leadership. To avoid glamorizing them, the RAND Corporation prefers the term “flaming bananas.” There are two theories as to why these individuals pledge allegiance to the group. The first is that they get “radicalized.” Radicalization refers to a step-by-step process whereby individuals become increasingly susceptible to jihadi ideas. First, they cut themselves off from social networks such as family, which provide them with support and a conventional value system. They then immerse themselves in a radical religious counterculture. They might do this on their own, or a jihadi recruiter might bring them into the fold. Either way, the result is the same. Some observers claim IS propaganda plays a key role in recruitment. Rather than presenting a religious rationale for the group’s actions, IS propaganda tends to focus on the violence the group perpetrates. 
IS has even released a video game based on Grand Theft Auto 5 in which, rather than stealing cars and battling the police, the player destroys advancing personnel carriers and shoots enemy soldiers. Perhaps, then, the radicalization model is wrong or not universally applicable. Perhaps there’s something other than religious zealotry at play. Consider the widely reported story of two would-be jihadists who, before they left Birmingham, U.K., for Syria, ordered “Islam for Dummies” and “The Koran for Dummies” to fill the gaps in their knowledge. Newspaper stories time and again puzzle over the problem of how it happens that individuals who go on to join IS were found in bars, even gay bars, or had Western girlfriends and smoked and drank almost up to the time they committed some act of violence for the group. The most common explanation is that their dissolute lifestyle was a cover. After the driver of a truck ran down and killed 84 people in Nice, France, for example, the French interior minister was at a loss to explain how someone who drank during Ramadan – which had ended a week and a half before – could have radicalized so quickly. [Photo caption: Former French President Francois Hollande in Paris in September 2016 at a memorial service for victims killed by terrorism in France. AP Photo/Michael Euler] Rather than joining a radically different religious counterculture, individuals are attracted to IS, these experts argue, because its actions reaffirm the cultural values of those who are marginalized, or those who exhibit what psychiatrists call “anti-social personality disorders.” Could it be that IS volunteers are drawn to a value system that asserts an aggressive machismo, disparages steady work and sustains the impulse for immediate gratification? Could it be that they are attracted to a culture that promotes redemption through violence, loyalty, patriarchal values, thrill-seeking to the point of martyrdom and the diminution of women to objects of pleasure?
In this reading, IS more closely resembles the sort of street gang with which many of its Western and Westernized enlistees are familiar than its more austere competitor, al-Qaida.
YOUR OPPORTUNITY FOR IMPACT Be our resident Salesforce expert: Serving as our in-house Salesforce Administrator, you’ll become an expert in our existing Salesforce configuration, pursue opportunities for greater efficiency and user adoption, and manage requests for improvements across all teams. - Collaborate cross-functionally to identify requirements, develop solutions, and maintain them over time - Identify new opportunities to leverage Salesforce to support additional business processes - Provide recommendations for improvements by staying informed on new Salesforce releases, features, and functionality - Manage advanced aspects of Salesforce – e.g. user profiles and roles, custom objects, Flow, workflow, Process Builder, etc. System Administration: Own day-to-day administration, maintenance, and optimization of our technology stack (Airtable, Zapier, LastPass, FormAssembly, Slack, Confluence, etc.) - Ensure data integrity by maintaining health of systems through monitoring, data integrity efforts, and training - Understand user needs and craft approaches to maximize user adoption across the organization - Identify and train “Subject Matter Experts” across teams - Roll out new initiatives related to cybersecurity, compliance, data governance, etc.
https://jobs.chalkbeat.org/bureau/us/
Environmental, Social and Governance at a Glance

At NESR, we believe that our sustainability, business continuity and success are closely tied to the health of the economies, environments and communities in which we live and work. This belief drives our commitment to integrate Environmental, Social and Governance (“ESG”) into our practices, business processes, decisions, and strategic planning. We are dedicated to maintaining a sustainable and responsible company and strive to reduce our environmental footprint, deepen our social impact and strengthen our corporate governance everywhere we operate. We continually endeavor to create shared value through pursuing ESG initiatives that are aligned with community needs, our business strategies and our customers’ priorities. We are also finalizing our long-term corporate ESG goals based on identified material ESG risks and opportunities in our operations. We believe that managing material ESG risks and opportunities in our business will produce operational efficiency, enhance our sustainability, and produce tangible and intangible value for our shareholders. Our license to operate depends on our ability to adapt our practices and conform with global social and environmental expectations. We believe that conventional methods are not sustainable and that we need to constantly improve our operations and business practices to address environmental and social challenges. In 2020, corporate ESG targets were introduced and corporate ESG performance was linked to compensation. NESR is currently mapping out a plan to achieve net zero carbon emissions from our operations by 2050. We believe that setting ESG key performance indicators across the company for different functions should allow us to achieve our ESG goals in an expedited manner.

Oversight of ESG

NESR is built on a strong foundation of integrity, ethics, and social responsibility.
Corporate governance principles are overseen by our Board of Directors (“BOD”) and instilled in our employees through our Code of Conduct (“Code”). At the BOD’s direction, ESG updates are presented to the BOD quarterly to ensure ESG considerations are factored into business decisions. In addition, the BOD reviews and monitors ESG risks and opportunities at Board and committee meetings as deemed necessary.

Stakeholder Engagement

NESR’s Board of Directors and management are committed to transparency and open communication with all internal and external stakeholders. In 2020, our priorities include improving the quality and quantity of our public disclosures on ESG, collecting ESG data, and improving our ESG performance. We intend to produce our first Sustainability Report in 2021 to help our stakeholders better understand our ESG performance. We believe that public disclosure of our ESG performance through our different communication channels, and our proactivity in engaging with our stakeholders, will improve our competitiveness and enhance transparency in our business.

Generating Local Value

We pride ourselves on being the “National Champion” of the MENA region and the first company from the MENA region to be listed on the NASDAQ exchange. As a viable national alternative and a leading oilfield services provider in the region, we are steadfast in our commitment to contributing to the health of the economies in which we operate. We are guided by a strict mandate to align our local activities and investments with the visions and national priorities of the countries of the MENA region. As such, we participate in many local content programs intended to increase local employment, manufacturing, and procurement where we do business.
Our focus on enhancing our contribution to the communities in which we operate includes hiring and developing local talent, manufacturing locally, investing in cutting-edge research and development, and supporting the growth of small and medium enterprises in the MENA region, particularly through contracting with local suppliers. Our commitment to local content value creation is founded in the belief that creating shared social and economic value through local content programs will enhance our long-term sustainability and growth in the region.

Diversity, Equity and Inclusion

We believe that providing an empowering, inclusive, and diverse work environment enables us to attract and retain top employees who can drive the success of our business. We also believe that all people should be treated with dignity and respect and that our employees and contractors should adhere to the policies, guidelines, business ethics and values set in the NESR Code of Conduct. Our Code reaffirms our commitment to protect our employees from all forms of discrimination in the workplace. It applies to all directors, officers, employees, and contractors of the Company, as well as third parties who do business with the Company, and can only be waived by written approval of NESR’s BOD. We draw strength from the diversity of our employees and our inclusive culture. We employ more than 5,000 people representing more than 60 nationalities and working in 16 countries throughout the Middle East, Asia, Africa and the US. The gender, cultural, ethnic, and religious diversity of our workforce enriches and strengthens our culture and workplace and fosters innovation, creativity, tolerance and inclusiveness. This provides an environment where our employees can reach their full potential and bring their best to work every day. Our inclusive culture creates a common NESR identity that unites us in the pursuit of a common goal: providing best-in-class service to our clients.
Ensuring gender diversity in our workplace is key to our business. NESR’s Code includes anti-harassment and non-discrimination policies to ensure that our workplace is safe and our work environment is empowering to all. We are committed to providing equal opportunities and maintaining pay equity for all our employees. At our inception, NESR’s Board of Directors was 25% female and sourced its directors from 4 different continents. Our BOD today continues to be diverse by gender, nationality, age and experience. NESR is an equal opportunity employer. We follow international best practices and the employment laws of the countries in which we operate. We do not discriminate based on race, color, gender, age, sexual orientation, ethnicity, disability, religion, union membership or marital status in hiring and employment practices such as promotions, rewards, and access to training. We are committed to conducting business in a manner that preserves and respects human dignity. It is our policy to provide equal employment opportunities and comply with all applicable immigration and employment laws.

Health and Safety

Protecting the health and safety of our employees, contractors, and the communities in which we operate is a major driver for our business. Our goal is to deliver safe and compliant operations without harming individuals, while making a positive impact on the community. Our commitment to safety extends to our contractors and society at large; not only do we want our operations to be safe, we want to influence others to follow safety procedures in their own operations. Ultimately, we want to play a key role in promoting a safety culture in our industry and beyond. In 2018, we audited our safety practices and identified areas for development. These improvements were implemented in collaboration with our customers and with the cooperation of our employees.
Similarly, we exhibited excellent progress in 2019 in our journey toward injury-free operations through strong leadership in implementing a new management system and a focused, risk-based approach, including extensive training of our employees and contractors to recognize and manage health and safety risks. In 2019, NESR was recognized as a best-in-class performer in health and safety in our two major countries of operation. Our proactive response to the unprecedented health risks presented by the global spread of COVID-19 reaffirms our commitment to protecting the health and safety of our people and our operations. Our emergency response included actions and plans to address logistical concerns, customer engagement plans, supply chain sustainability, and inventory planning, among other things. We believe that it is our duty to support our employees and their families during these challenging times. We continue to closely monitor the situation and hold regular meetings to promptly address and adapt our business decisions to developing circumstances. Our priority is to keep our employees safe, keep our operations running without interruption, and support local communities.

Environmental Stewardship

We are committed to reducing our environmental footprint and aligning our environmental initiatives with the United Nations Sustainable Development Goals. We recognize that stewardship of the environment is necessary to ensure the long-term success and sustainability of our business. In order to understand our baseline and improve our environmental performance, we started tracking environmental data related to energy, water and natural resources consumption, air emissions and waste. As we embark on understanding and reducing our environmental footprint, we are conscious of the risks presented by climate change, evolving environmental regulations, and depleting natural resources.
We believe that deploying sustainable water strategies, implementing responsible energy consumption practices, and adopting better waste management systems are essential to our ability to conduct our operations in the MENA region. In order to reduce our water consumption substantially, our goal is to recycle all water from the wash bays and the permanent man camps in our biggest operating locations by the end of 2023. We also started tracking our total energy consumption from different sources and the waste produced by our operations to understand the impact of our business on different ecosystems. We are dedicated to maintaining a sustainable and socially responsible company and strive to reduce our environmental footprint everywhere we operate.
https://www.nesr.com/esg-at-a-glance.html
Study Purpose: To determine whether patients with bone cancer pain who were already administered opioids obtain clinically important pain control with regular oxycodone/paracetamol.

Intervention Characteristics/Basic Study Process: Patients received one to three placebo or oxycodone/paracetamol tablets four times per day for days 1–3, with the dosage titrated step by step based on pain assessment, up to a maximum of 12 tablets per day. Patients recorded pain diary entries at baseline and on the study days. Immediate-release oral morphine was used to control breakthrough pain with 10% dose increments of the background continuous-release opioid, with no maximum (these were dispensed to the patient at the beginning of the study with specific instructions on administration). Patients remained on current background analgesic management, and additional analgesic drugs could be used, but not altered, during the study period.

Sample Characteristics: A total of 246 patients began the trial, with 225 completing the three-day study. Patient age range was 28–84 years. Of the sample, 122 were male and 124 were female. Patients had malignant solid tumors with bone metastasis confirmed via imaging, had bone-related pain rated as 4 or higher on an 11-point pain scale, and had received treatment with controlled-release morphine or transdermal fentanyl patches for one week or more. They had conscious mental status, the ability to take oral tablets, and were at least 18 years of age. Patients were excluded from the study if they had received chemotherapy, radiation, or endocrine or monoamine oxidase inhibitors within the previous 30 days (or during the study), had a history of alcohol abuse or severe hepatic disease, or had received nonsteroidal anti-inflammatory drugs or paracetamol combinations.

Setting: Multisite; home setting; Beijing, China

Phase of Care and Clinical Applications: The study has clinical applicability for late effects and survivorship, and end-of-life and palliative care.
Study Design: The study was a multicenter, randomized, double-blinded, placebo-controlled trial.

Results: Prior to the study, 55.6% of the intervention group experienced breakthrough pain, while 50.8% of the placebo group did. After treatment, only 38% of the intervention group suffered breakthrough pain, while 58% of the placebo group did. The use of immediate-release morphine decreased from 50% to 27.8% in the intervention group during the study, whereas in the placebo group it decreased from 46.7% to 43.3% over the same time frame.

Conclusions: When oxycodone/paracetamol was added to intermediate- or high-dose continuous-release opioids, patients with bone cancer pain experienced greater relief of pain.

Limitations: The authors note that the study was conducted only on Chinese patients and point to the need to consider other ethnicities. There is no analysis based on the overall analgesic regimens used, and no full description of these. Addition of this medication essentially increased the opioid dosing per day, so it is not clear whether this particular formulation was any more helpful than simple dosage increases.

Nursing Implications: This study is applicable to patients with bone cancer pain who experience significant breakthrough pain while taking relatively high doses of a continuous-release opioid. It is not clear from this study how this particular formulation fits into an overall pain management regimen because it provided a higher dosage of opioid. Increasing opioid dosages may have had the same effect.
Miss Kabbeh Gibson, professionally known as K-Love, is a graduate of Temple University with a Bachelor of Science in Public Health. She is a goal-directed, results-oriented teacher with a strong background and education in both mental health and public health education in relation to community health services. She is a skilled communicator, persuasive and adaptable, and self-motivated with a high energy level, initiative and focus. She has keen insight into the needs and views of others, and is able to listen, identify issues and find innovative solutions. Areas of strength include:
Comprehensive public health knowledge
Client community relations
Problem Solving/Decision Making
Presentation/Training
Program Needs Assessment/Management
Documentation/Writing
Public Speaking
Grant Writing
Community Outreach Programs
Planning

Experience

Crisis Manager
Delaware County Intermediate Unit
Jul 2014 – Present (6 yrs 2 mos)
Collingdale, PA (Collingdale Community School)
Job Description: Crisis managers develop emergency plans in the public and private sectors according to government regulations. This usually includes a training plan for workers. Crisis managers must consider a variety of emergencies such as natural disasters or chemical spills, and they often work with public officials to coordinate response plans. The crisis manager assesses an emergency and oversees the activities of workers to protect the safety of employees and the public.

Residential Counselor
Devereux Mental Health
Mar 2007 – Present (13 yrs 6 mos)
West Chester, Pennsylvania
• Develop service recipients’ basic living skills (e.g., social, domestic, and hygiene) through instruction and encouragement.
• Assist and document the development and implementation of long- and short-term goals for service recipients, as developed by the Personal Support.
• Perform personal care services assigned by a health professional, which may include observation, reporting and documentation of changes in the status of the person or in body functions.

Community Health Outreach Associate
Riddle Memorial Hospital
Sep 2013 – Dec 2013 (4 mos)
• Serve as a key member of the team to plan and administer community health service programs within the assigned area, including establishing priorities, monitoring and evaluating the effectiveness and efficiency of programs, and developing and implementing plans to improve services.

Sales & Marketing Associate (Internship)
AmeriHealth Caritas
May 2013 – Aug 2013 (4 mos)
South West Philadelphia Area
• Attend meetings, seminars and programs to learn about new products and services, learn new skills, and receive technical assistance in developing new accounts.
• Calculate premiums and establish payment methods and develop marketing strategies to compete with other individuals or companies who sell insurance.
• Ensure that policy requirements are fulfilled, including any necessary medical examinations and the completion of appropriate forms.

Education

Temple University School of Social Work
Bachelor’s Degree, Public Health, 2011 – 2012
Activities and societies: Study Abroad Department.

Temple University
Bachelor’s Degree, Public Health, 2009 – 2014
Activities and societies: Health Education Team; Public Health Major

Delaware County Community College
Associate’s Degree, Science, 2009 – 2011
Activities and societies: Peer Educators

Voluntary Experience

American Red Cross Southwestern Pennsylvania Chapter
Health Educator, Health
Jul 2013 – Present
Select valid sources of information about health needs and interests. Utilize computerized sources of health-related information. Apply survey techniques to acquire health data. Conduct health-related needs assessments in communities.
Investigate physical, social, emotional, and intellectual factors influencing health behavior. Identify behaviors that tend to promote or compromise health. Recognize the role of learning and affective experiences in shaping patterns of health behavior. Analyze social, cultural, economic, and political factors that influence health. Analyze needs assessment data. Determine priority areas of need for health education.
https://wownewzmedia.com/?p=1239
There is no one set way of defining or thinking about ‘sustainability’ or ‘sustainable development’. In my case I was introduced to these ideas mostly in a business context, where related concepts such as the ‘triple bottom line’ framework (i.e. financial, social and environmental performance) and ‘sustainable enterprise’ are central. Within the consulting world I often felt like a bit of an interloper. Most coworkers and our clients used to work in the ‘environmental’ space, e.g. having a background in environmental management and planning (and often a degree in environmental science), which had evolved into ‘sustainability’, or they used to work in functions such as community and government relations and/or in health and safety (this was often the case in the resources sector where firms often now have a broader ‘Health, Safety, Environment and Community’ function that integrates these activities). I had experience of neither, and hold very different degrees (marketing, strategic foresight). Initially my consulting work often drew on my background in communications to produce annual sustainability reports. Discussion of ‘sustainability’ in the context of my strategic foresight studies tended to be incorporated into a wider critique of contemporary Western capitalism. Richard Slaughter asserts that “if there is a single concept which challenges existing economic practice, and especially the notion of unrestrained economic growth, it is sustainability” and argues that “it calls the bluff of those who have forgotten that the Earth (in terms of its capacity to provide resources and absorb waste) is finite”. This macro framing of sustainability focuses on limits to growth – particularly limits to material consumption – and prescribes a shift towards a “steady state” economy. These ideas and concerns tended to be outside the scope of consulting assignments! I’ve since learned that there are many additional conceptualisations of sustainability. 
In political scientist John Dryzek’s model of environmental discourses adherents to the sustainability discourse view conflicts between environmental and economic values as being capable of resolution by refining concepts of growth and development. My friend Peter Ellyard advocates this perspective. He argues there are, in principle, no limits to growth if the focus of economic activity shifts from the physical to ‘metaphysical’ (e.g. information, creative and cultural industries, etc). I.e. a paradigmatic shift towards non-material ‘things’. In a recent essay he asserts that “a sustainable society need not be a non-growth society”. Beyond these competing perspectives I’ve also come to understand ‘sustainability’ as an “umbrella” term that captures the general gist of the 21st century context. That is, the need to work out how to sustain – and broaden – ‘prosperity’ (broadly defined) as we add two more “Chinas” of people to the total number of people alive by 2050 (to around 9-10 billion). With only around 7 billion now, and a small percent of these people (~15%) living in high-consumption developed countries, we are already significantly shaping the Earth’s climate and natural systems. We increasingly must understand and respond to global environmental change. (Nb. Within futures discourse this is increasingly referred to as ‘the global megacrisis’). Similarly, the most common definition of ‘sustainable development’ is brought to life by considering this challenging context: “Development that meets the needs of the present without compromising the ability of future generations to meet their own needs”. (Thus seeking a long-term orientation, ensuring intergenerational equity). Current development clearly does threaten to compromise the ability of future generations to meet their needs. 
Having said all of this, the conception of sustainability that currently informs a lot of my thinking was articulated by European scholars who have theorised the concept of ‘reflexive governance’. Their book, ‘Reflexive Governance for Sustainable Development’ – as per the publisher’s summary – ‘deals with the issue of sustainability in a novel and innovative way’ by examining ‘reflexive modernisation’, and moving ‘away from endless quarrels about the rightness of normative claims and from attempts to define sustainability on the basis of biophysical principles’. They develop a broad-ranging critique of modernist (or rationalist) problem solving – in particular the regular creation of unintended consequences, the ‘second-order problems’ which are often more severe and difficult to handle than the initial problems that were being addressed. In their view, “sustainability is one, if not the main, second-order problem of modernist problem solving”. These scholars observe that “when it comes to practical implementation the concept seems to dissolve into rhetoric that masks familiar conflicts” in areas such as energy, agriculture, transport, and housing. In their view the concept of ‘sustainability’ should be primarily understood for its fundamental governance implications – that is, the increasing focus on “the systemic and long-term nature of social, economic, and ecological development”, which “brings complexity and uncertainty to the fore”. Thus, they argue, sustainability refers to a “kind of problem framing that emphasises the interconnectedness of different problems and scales, as well as the long-term and indirect effects of actions that result from it”. Indeed, we are learning how to handle complex ecological and social problems for which ‘rationalist’ problem solving is often unsuited.
They consequently see ‘sustainable development’ as being “more about the organisation of processes” (rather than about particular outcomes) and advocate viewing it as “modes of problem treatment and the types of strategies that are applied to search for solutions and bring about more robust paths of social and technological development”. These strategies focus on aspects of the problem-handling process, such as goal definition (making this more participatory) and strategy implementation (which is argued to need to be far more dispersed). Examples in the book of emerging approaches include ‘transition management’, ‘adaptive management’, ‘constructive technology assessment’, and ‘strategic niche management’. They also note the growth of ‘foresight exercises’. Such approaches tend to “emphasize participation, experimentation, and collective learning as key elements of governance”. These academics argue that sustainability cannot be unambiguously or unproblematically translated into a blueprint. I agree that sustainability calls for new forms of problem handling. (Just look at the increasing conflict over the management of many environmental issues, e.g. the long-term degradation of the Murray-Darling Basin). However, I also recognise this view of sustainability is a challenging one for people with a set/clear vision of what the world should look like. Nonetheless, futures research is clearly highly relevant to this conception of sustainability and sustainable development. Futures research methods and foresight methods can surely contribute to new modes of problem treatment which better address complexity and uncertainty, and pay greater attention to the long-term effects of actions. Sounds like foresight to me!
http://www.facilitatingsustainability.net/?p=161
He said the studio and he ‘just don’t see eye to eye.’ Just one day after New Line Cinema announced a new screenwriter for the Sandman movie, actor/director Joseph Gordon-Levitt left the project, citing creative differences. Gordon-Levitt was on track to direct and star in the long-awaited adaptation of Neil Gaiman‘s Sandman comics, collaborating with Gaiman, longtime DC Comics filmmaker David Goyer, and screenwriter Jack Thorne. Then, in a Facebook post on Saturday, Gordon-Levitt announced that he would no longer be working on Sandman. “A few months ago,” he wrote, “I came to realize that the folks at New Line and I just don’t see eye to eye on what makes Sandman special, and what a film adaptation could/should be.” He went on to thank his collaborators on the film, adding that “it’s been a particular privilege as well as a rocking good time getting to know Mr. Gaiman.” It’s not unusual for films to change hands during the early stages of development, although it’s rare to see such a well-known actor come right out and say that he left due to creative differences. In this case, it’s safe to assume that Gordon-Levitt’s decision was connected to New Line hiring a new writer, a move that generally signals a new creative direction. The original screenwriter, Jack Thorne, is best known for the acclaimed British drama series This Is England and for collaborating with J.K. Rowling to create Harry Potter and the Cursed Child. The new writer, Eric Heisserer, has primarily worked on horror movies like Final Destination 5. On Twitter, Neil Gaiman seemed entirely supportive of Gordon-Levitt’s decision, saying that he’d be happy to work with him again. He also clarified that he has no official creative control over Sandman adaptations. While Gaiman said nothing negative about New Line or Sandman‘s new direction, his tweets tacitly distanced him from the film. Sandman was always going to be a difficult comic to adapt, and this upheaval definitely feels like a worrying sign.
https://www.dailydot.com/parsec/joseph-gordon-levitt-leaves-sandman-movie/
Sequence data for all SSR markers developed are available from GenBank (accession numbers KX867678--KX867785).

Introduction {#sec001}
============

Inferring recent demographic history and contemporary evolutionary processes are major goals in the field of population genetics. Climate change, human disturbance of natural habitats and human-aided dispersal can cause dramatic shifts in the distributions of natural species, and biological invasions are increasingly prevalent worldwide. Analyzing the genetic diversity and population genetic structure of native and introduced populations of an invasive species makes it possible to recover invasion pathways and to identify founding events and/or admixture events among invasive populations. All these processes affect the demographic success and future expansion of the invasive species and determine its potential for adaptation to new environmental conditions. Understanding them is invaluable for devising appropriate management strategies \[[@pone.0176197.ref001],[@pone.0176197.ref002]\]. The molecular markers currently used for population genetics studies are essentially of two kinds: genome-wide Single Nucleotide Polymorphisms (SNPs) identified by next-generation-sequencing-based techniques such as Restriction site Associated DNA sequencing (RAD-seq) or genotyping-by-sequencing \[[@pone.0176197.ref003]\], and Simple Sequence Repeat (SSR) markers (microsatellites). Limitations of SSR markers include their low density throughout the genome, complex mutational patterns and the possible presence of homoplasy and null alleles \[[@pone.0176197.ref004]\]. However, SSR markers are easy to score, highly polymorphic and thus highly informative, and the theory and practice of SSR marker analysis and its associated biases are well known \[[@pone.0176197.ref005]\], which still makes them markers of choice for ecological and evolutionary studies. 
In comparison to SNPs, SSRs are especially well suited for analyzing processes occurring at small temporal or spatial scales and have proven highly relevant for revealing recent expansion and recent admixture, or for analyzing parentage and kinship \[[@pone.0176197.ref005]--[@pone.0176197.ref007]\]. Next-generation sequencing technologies now make it possible to rapidly develop large sets of SSRs \[[@pone.0176197.ref008]\]. In addition, the ever-increasing availability of transcriptome sequences (Expressed Sequence Tags, EST) in public databases enables fast and cost-effective development of genic SSR markers (EST-SSRs). EST-SSRs are expected to be less polymorphic than gSSRs but also to display fewer null alleles and to be more transferable among related species \[[@pone.0176197.ref009],[@pone.0176197.ref010]\]. As their polymorphism may be influenced by selective processes, EST-SSRs may reveal somewhat different genetic patterns than gSSRs \[[@pone.0176197.ref011]\]. The genus *Ambrosia* in the *Asteraceae* family includes at least 51 species collectively known as "ragweeds", mainly distributed in North America \[[@pone.0176197.ref012]\]. Four species (*A*. *artemisiifolia* L., *A*. *trifida* L., *A*. *psilostachya* D.C. and *A*. *tenuifolia* Spreng.) occur in Europe but are native to the Americas \[[@pone.0176197.ref013],[@pone.0176197.ref014]\]. *A*. *artemisiifolia* is an annual herb mostly known as a successful invader and a highly allergenic plant causing severe rhinitis and asthma \[[@pone.0176197.ref015],[@pone.0176197.ref016]\]. It was introduced into Europe in the 19^th^ century through the import of contaminated grain and forage \[[@pone.0176197.ref017]\]. *A*. *artemisiifolia* has colonized different types of habitats such as railways, riversides and wastelands, as well as cultivated fields, where it is now a noxious weed competing with several summer crops \[[@pone.0176197.ref017]\]. To investigate the population genetic structure of *A*. 
*artemisiifolia*, reliable and polymorphic molecular markers are needed. To date, only a few gSSR markers have been developed, from French *A*. *artemisiifolia* populations \[[@pone.0176197.ref018],[@pone.0176197.ref019]\]. These few gSSRs were used to assess population genetic structure and patterns of colonization at continental and regional scales in Europe \[[@pone.0176197.ref020]--[@pone.0176197.ref025]\], North America \[[@pone.0176197.ref026]\] and China \[[@pone.0176197.ref027]\]. In addition, most of the available gSSR markers showed PCR amplification failures and an excess of homozygous genotypes \[[@pone.0176197.ref020]--[@pone.0176197.ref026]\]. Excess homozygosity can be caused by null alleles resulting from mutations at primer binding sites that preclude PCR amplification. Alternatively, excess homozygosity has sometimes been interpreted as evidence for partial selfing in a mostly outcrossing species. This issue was much debated in several SSR-based population genetics studies of *A*. *artemisiifolia* \[[@pone.0176197.ref020]--[@pone.0176197.ref023]\]. The present study had three purposes: (a) to develop new nuclear SSR markers for *A*. *artemisiifolia* following three different approaches (whole-genome enrichment followed by 454 sequencing, whole-genome Illumina sequencing, and use of existing EST databases); (b) to investigate the genetic diversity, population structure and mating system of *A*. *artemisiifolia* using populations sampled in North America and Europe; and (c) to assess marker transferability to *A*. *trifida*, *A*. *psilostachya* and *A*. *tenuifolia*.

Materials and methods {#sec002}
=====================

Plant material {#sec003}
--------------

A total of 321 *A*. *artemisiifolia* individuals were sampled from 11 populations spanning the invasive range in Europe and 5 populations in North America ([Table 1](#pone.0176197.t001){ref-type="table"}, [Fig 1](#pone.0176197.g001){ref-type="fig"}). 
Twenty individuals were sampled from two populations of *A*. *trifida*, 22 individuals from one population of *A*. *psilostachya* and 21 individuals from one population of *A*. *tenuifolia*. A 0.2-cm^2^ leaf section was collected from each individual and DNA was extracted as described in \[[@pone.0176197.ref028]\]. All three species studied are alien invasives, not protected species, and sampling locations were not within protected areas, so no specific permission was required. *Ambrosia artemisiifolia* is described as a diploid species (2n = 36; \[[@pone.0176197.ref013], [@pone.0176197.ref029],[@pone.0176197.ref030]\]). As the presence of triploid plants has sometimes been questioned \[[@pone.0176197.ref026]\], we counted nuclear chromosomes as described in \[[@pone.0176197.ref031]\] in 10 plants randomly chosen from one French population. Results were in agreement with diploidy, with 2n = 36 ([S1 Fig](#pone.0176197.s001){ref-type="supplementary-material"}). *Ambrosia trifida* is a diploid species with a different basic chromosome number (2n = 24) \[[@pone.0176197.ref029], [@pone.0176197.ref030]\], while *A*. *psilostachya* and *A*. *tenuifolia* have the same basic chromosome number as *A*. *artemisiifolia* but variable ploidy levels \[[@pone.0176197.ref013],[@pone.0176197.ref014], [@pone.0176197.ref029], [@pone.0176197.ref032]\].

![Map of the studied *Ambrosia sp*. populations.](pone.0176197.g001){#pone.0176197.g001}

10.1371/journal.pone.0176197.t001

###### *Ambrosia sp*. populations analyzed. 
![](pone.0176197.t001){#pone.0176197.t001g}

| Species | Population code | Country | Date of sampling | Nb of individuals analyzed | Geographic coordinates |
|---|---|---|---|---|---|
| *Ambrosia artemisiifolia* | 1H | USA | 2013 | 16 | Not available[^a^](#t001fn001){ref-type="table-fn"} |
| | 2H | USA | 2013 | 11 | Not available[^a^](#t001fn001){ref-type="table-fn"} |
| | 3H | USA | 2013 | 15 | Not available[^a^](#t001fn001){ref-type="table-fn"} |
| | KEN-A | USA | 2010 | 24 | N38°01'00", W84°33'10" |
| | STC-A | Canada | 2010 | 20 | N45°10'03", W73°40'50" |
| | 26P17 | France | 2005 | 24 | N44°44'52", E04°55'07" |
| | 39P04 | France | 2011 | 24 | N46°45'56", E05°34'10" |
| | 69P28 | France | 2013 | 24 | N45°44'49", E05°04'59" |
| | 89P10 | France | 2011 | 24 | N48°10'40", E03°15'02" |
| | GEN13.03 | France | 2013 | 24 | N47°11'29", E05°15'00" |
| | BES-I | Italy | 2011 | 20 | N45°18'25", E08°58'21" |
| | HOR-G | Germany | 2011 | 11 | N52°17'28", E10°38'26" |
| | DOM-G | Germany | 2011 | 24 | N51°38'21", E14°11'50" |
| | TAT-H | Hungary | 2009 | 20 | N47°34'21", E18°27'18" |
| | KAP-H | Hungary | 2011 | 20 | N46°22'12", E17°51'17" |
| | GRA-B | Bosnia | 2011 | 20 | N45°08'19", E17°15'51" |
| *Ambrosia trifida* | AMBTR01 | France | 2013 | 3 | N46°24'35", E5°5'42" |
| | AMBTR31 | France | 2013 | 17 | N43°15'48", E1°2'36" |
| *Ambrosia psilostachya* | PSI-PJ | France | 2014 | 22 | N44°25'42", E04°42'44" |
| *Ambrosia tenuifolia* | TEN-JV | France | 2014 | 21 | N43°49'54", E04°34'10" |

^a^These populations were sampled in Connecticut, USA.

Development of new nuclear SSR markers for *A*. *artemisiifolia* {#sec004}
----------------------------------------------------------------

### Obtaining sequence data {#sec005}

For the SSR-enriched gDNA library approach, total gDNA from 8 *A*. *artemisiifolia* individuals was isolated using the DNeasy Plant Mini Kit (QIAGEN, Valencia, California, USA) and processed by GenoScreen (Lille, France). An SSR-enriched DNA library was obtained as described in \[[@pone.0176197.ref033]\]. 
Briefly, total DNA was mechanically fragmented and enriched for AG, AC, AAC, AAG, AGG, ACG, ACAT and ATCT repeat motifs. Enriched fragments were subsequently amplified, and amplicons were sequenced on a 454 GS FLX Titanium system (454 Life Sciences, Branford, USA) following the manufacturer's protocols. For the Illumina gDNA sequencing approach, total gDNA from one *A*. *artemisiifolia* individual was extracted using the DNeasy Plant Mini Kit (QIAGEN, Valencia, California, USA). Sequencing was performed at GENTYANE (INRA, Clermont-Ferrand, France). An Illumina paired-end shotgun library was prepared by shearing 2 μg of DNA with a Covaris M220 ultrasonicator (Covaris, Woburn, USA) and following the standard Illumina TruSeq DNA Library Kit protocol (Illumina, San Diego, USA). Sequencing was conducted on the Illumina MiSeq with 250 bp paired-end reads. For the public EST data approach, the two existing sets of *A*. *artemisiifolia* transcriptome 454 sequence data were downloaded from the GenBank Sequence Read Archive \[[@pone.0176197.ref034]\]. They correspond to one individual sampled in the USA (accession SRX096892) and one sampled in Hungary (accession SRX098769); both datasets were merged before analysis. For each of the three sequence datasets, stringent sequence quality control and filtering were performed using the ShortRead package in the Bioconductor software \[[@pone.0176197.ref035]\]. Briefly, read ends were first trimmed by quality score. Only sequences longer than 300 bp (454 reads) or 200 bp (Illumina reads), with a mean Phred quality score higher than 30 and less than 1% Ns, were retained. Exact sequence duplicates were discarded. In the Illumina dataset, only matching paired-end reads were kept after quality filtering, and overlapping reads were merged using FLASH \[[@pone.0176197.ref036]\]. Detection of SSR motifs was conducted on the merged reads only, ensuring that the flanking regions were large enough to design good-quality primers. 
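The read-filtering criteria above (minimum length, mean Phred score above 30, under 1% Ns) can be expressed as a simple predicate. This is only an illustrative sketch, not the ShortRead/Bioconductor code actually used; the function name and data structures are hypothetical.

```python
def passes_filter(seq, quals, min_len=300, min_mean_q=30.0, max_n_frac=0.01):
    """Apply length, mean-quality and N-content thresholds to one read.

    seq: read sequence as a string; quals: per-base Phred scores.
    Defaults mirror the thresholds described for 454 reads
    (> 300 bp, mean Phred > 30, < 1% ambiguous bases).
    """
    if len(seq) < min_len:
        return False  # too short
    if sum(quals) / len(quals) <= min_mean_q:
        return False  # mean Phred score not above 30
    if seq.upper().count("N") / len(seq) >= max_n_frac:
        return False  # too many ambiguous bases
    return True
```

For Illumina reads, the same predicate would simply be called with `min_len=200`.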
### SSR identification and primer design {#sec006}

SSRs were identified with QDD version 3.1 \[[@pone.0176197.ref037]\]. Only 2- to 6-nucleotide motifs were considered. The minimum number of repeat units was set to eight for di-nucleotides, six for tri-nucleotides, and five for longer motifs. Expected amplicon sizes were constrained to a 100--300 bp range. Primer pairs were thoroughly tested for clear, stable amplification on 12 *A*. *artemisiifolia* individuals from three populations (one French population from the Rhône Valley, the German population DOM and the American population KEN). PCRs were performed in 10-μL volumes as previously described \[[@pone.0176197.ref028]\]. Cycling parameters consisted of a first denaturation step (2 min at 95°C) followed by 39 cycles of 5 s at 95°C, 10 s at 60°C and 30 s at 72°C. Amplicons were visualised by electrophoresing five microliters of PCR product on 3% (wt/vol) agarose gels run for 25 min at 100 V in Tris-Borate-EDTA buffer.

SSR marker validation and assessment of genetic polymorphism in *A*. *artemisiifolia* {#sec007}
-------------------------------------------------------------------------------------

### Genotyping {#sec008}

SSRs successfully amplifying in *A*. *artemisiifolia* were used to genotype 384 individuals, including 321 *A*. *artemisiifolia* (16 populations), 20 *A*. *trifida* (two populations), 22 *A*. *psilostachya* (one population) and 21 *A*. *tenuifolia* (one population) individuals ([Table 1](#pone.0176197.t001){ref-type="table"}). Genotyping was performed at GENTYANE (INRA, Clermont-Ferrand, France). PCR products were labelled with one fluorescent tag (6-FAM, NED, VIC or PET) and loaded on an ABI 3730XL capillary DNA analyzer (Applied Biosystems) with the GS500 LIZ size standard. Peakscanner version 1.0 (Applied Biosystems) and the R package MsatAllele were used to read allele sizes \[[@pone.0176197.ref038]\]. 
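The repeat-number thresholds used at the SSR identification step (at least eight repeats for di-nucleotide motifs, six for tri-nucleotides, five for longer motifs) can be illustrated with a toy regex-based scanner. This is a conceptual sketch, not a re-implementation of QDD: it only finds perfect tandem repeats and ignores compound or interrupted SSRs.

```python
import re

# Minimum number of tandem repeats required per motif length, as in the text.
MIN_REPEATS = {2: 8, 3: 6, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    """Return (motif, repeat_count, start) for each perfect tandem repeat
    meeting the thresholds above."""
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # ([ACGT]{k})\1{n-1,} matches a k-mer repeated at least n times in a row
        pattern = r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1)
        for m in re.finditer(pattern, seq):
            motif = m.group(1)
            if len(set(motif)) == 1:
                continue  # skip homopolymer runs masquerading as longer motifs
            hits.append((motif, len(m.group(0)) // motif_len, m.start()))
    return hits
```

For example, a read containing `(AC)8` yields one di-nucleotide hit, while a homopolymer run yields none.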
A Principal Component Analysis (PCA) was performed on the genotype data using the adegenet package \[[@pone.0176197.ref039]\] in R 3.1.2 to examine the genetic relationships among the four species studied.

### Check for null alleles {#sec009}

MicroChecker 2.2.0.3 was used to check each marker in each *A*. *artemisiifolia* population for the presence of null alleles and for scoring errors due to stuttering or large-allele dropout \[[@pone.0176197.ref040]\]. The markers showing the overall lowest occurrence of null alleles and stuttering were retained for further analyses. Frequencies of null alleles at the retained loci in each population were estimated using INEST 2.1 \[[@pone.0176197.ref041]\].

### F~ST~ outlier tests {#sec010}

All SSR loci were screened for evidence of selection using an F~ST~ outlier test, which identifies loci with an unexpectedly high F~ST~ (diversifying selection) or an unexpectedly low F~ST~ (balancing or purifying selection). We used the *A*. *artemisiifolia* data and the software Bayescan \[[@pone.0176197.ref042]\]. This program implements a Bayesian method based on a multinomial-Dirichlet distribution of allele frequencies; the Dirichlet distribution holds under a variety of demographic models when populations derive from a common gene pool. As recent range expansion has been shown to increase the proportion of falsely detected selection events \[[@pone.0176197.ref043]\], we used a conservative prior value of 100 for the 'odds of neutrality' (i.e., a prior expectation that only 1 locus in 100 is under selection). For each locus, the probability of selection was examined based on the relative posterior probabilities of the models with and without selection. We implemented 20 pilot runs of 5,000 iterations, a burn-in period of 50,000 iterations and 100,000 subsequent iterations, with a sample size of 5,000 and a thinning interval of 20. Estimation of the mating system in *A*. 
*artemisiifolia* {#sec011}
--------------------------------------------------------

The mating system of *A*. *artemisiifolia* was investigated using five gSSR markers (SSR10, SSR17, SSR47, SSR71 and SSR73) in six additional French populations sampled in 2014 and located within a few kilometres of population GEN13.03 ([Table 1](#pone.0176197.t001){ref-type="table"}). These gSSR markers showed fewer null alleles than the others. Leaf tissue and mature seeds were collected from six to eight mother-plants per population. Eight to 16 progeny-plants per mother-plant were genotyped, yielding a total of 614 individuals. MLTR \[[@pone.0176197.ref044]\] was used to estimate the multi-locus outcrossing rate *tm*, the maternal inbreeding coefficient *F*, the outcrossing rate between related individuals *tm-ts* and the correlation of paternity *rp*.

Genetic diversity and inbreeding {#sec012}
--------------------------------

Allelic richness per locus and per population based on a rarefaction method (*A*), expected heterozygosity (*H*~S~) and genetic differentiation (*F*~ST~) were calculated using Fstat \[[@pone.0176197.ref045]\]. Significance of *F*~ST~ values was based on 1000 bootstrap resamplings over loci. Inbreeding coefficients (*F*~IS~) were estimated with INEST 2.1 \[[@pone.0176197.ref041]\] using a Bayesian procedure robust to the presence of null alleles. To assess the statistical significance of inbreeding, we compared the model with inbreeding to the random-mating model (*F*~IS~ = 0) based on the Deviance Information Criterion (DIC). Genetic diversity and differentiation parameters for *A*. *artemisiifolia* were calculated over all populations, over North American populations and over European populations.

*A*. *artemisiifolia* population structure {#sec013}
------------------------------------------

Population structure was assessed using Structure 2.2 \[[@pone.0176197.ref046]\]. 
The admixture model and correlated allele frequencies between populations were selected as specified \[[@pone.0176197.ref047]\] to determine the number of genetic clusters (*K*) best fitting the data. The burn-in period was 100,000 iterations, followed by 500,000 Markov chain Monte Carlo iterations. Ten replicate runs were performed for each value of *K* from 1 to 15. *K* was determined graphically from the log-likelihood values as previously described \[[@pone.0176197.ref046]\] using the web-based program Structure Harvester \[[@pone.0176197.ref048]\]. In addition, the ΔK method \[[@pone.0176197.ref049]\] was used to determine the best value of *K*. Finally, Clumpp 1.1.2 \[[@pone.0176197.ref050]\] and R 3.1.2 were used to produce graphical outputs for the inferred population structure.

Genetic divergence and bottleneck tests {#sec014}
---------------------------------------

Genetic divergence is likely to vary across populations because of differences in effective population sizes and local migration rates. This is especially the case when recent founder effects have occurred, such as during range expansions. Patterns of genetic divergence were estimated for the invasive range (Europe) by calculating population-specific *F*~ST~ values based on the *F*-model \[[@pone.0176197.ref051]\]. We used the Bayesian method of Foll and Gaggiotti \[[@pone.0176197.ref052]\] implemented in the software GESTE v2. To assess geographical patterns in genetic divergence, we compared three models: a null model that simply estimates population-specific *F*~ST~ values, and two models that used either latitude or longitude as an explanatory variable. In addition to studying patterns of genetic divergence, we investigated signatures of recent bottleneck events using the Wilcoxon test for expected heterozygosity excess implemented in INEST 2.1, based on the method of Cornuet and Luikart \[[@pone.0176197.ref053]\]. Analyses were run with the Two-Phase Mutation (TPM) model with default settings. 
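The ΔK statistic of Evanno et al. used above is computed from the replicate Structure log-likelihoods as ΔK(K) = mean(|L(K+1) − 2L(K) + L(K−1)|) / sd(L(K)). A minimal sketch (the likelihood values in the test are invented for illustration):

```python
import statistics

def delta_k(lnp_by_k):
    """Evanno et al. (2005) ΔK from replicate Structure log-likelihoods.

    lnp_by_k maps each K to the list of ln P(X|K) values across replicate
    runs. ΔK is undefined at the smallest and largest K tested, so those
    are skipped.
    """
    ks = sorted(lnp_by_k)
    mean_l = {k: statistics.mean(lnp_by_k[k]) for k in ks}
    out = {}
    for k in ks[1:-1]:
        # absolute second-order difference of the mean likelihood curve,
        # scaled by the between-replicate standard deviation at K
        second_diff = abs(mean_l[k + 1] - 2 * mean_l[k] + mean_l[k - 1])
        out[k] = second_diff / statistics.stdev(lnp_by_k[k])
    return out
```

The value of *K* maximising ΔK is taken as the most likely number of clusters.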
Results {#sec015}
=======

Development of new nuclear SSR markers {#sec016}
--------------------------------------

Sequencing results, filtering and success rates of microsatellite locus development are summarized in [Table 2](#pone.0176197.t002){ref-type="table"}. Most 454 reads were eliminated because of insufficient length, while most Illumina reads were eliminated because paired-end reads could not be merged. As expected, the proportion of quality reads containing an SSR motif was much higher among the 454 reads obtained from an enriched gDNA library (24%) than among the Illumina reads obtained from raw gDNA (0.3%) or among the transcriptome 454 reads (0.1%). The low rate of SSR motifs obtained from Illumina sequencing of raw gDNA was more than compensated for by the large number of reads generated, and this method allowed the identification of ten times more potentially amplifiable loci than 454 sequencing of enriched gDNA ([Table 2](#pone.0176197.t002){ref-type="table"}). The distribution of motif lengths was very similar between the two methods used to develop gSSRs: on average, di-, tri-, tetra-, penta- and hexa-nucleotides accounted for 40.2%, 48.7%, 7.5%, 2.5% and 1% of gSSRs, respectively. By contrast, most EST-SSRs were tri-nucleotides (81.6%), and di-, tetra-, penta- and hexa-nucleotides accounted for 11.1%, 7%, 0.2% and 0% of EST-SSRs, respectively. Success rates of PCR amplification were quite similar among SSR sets, yielding 67 gSSRs (GenBank accession numbers KX867678--KX867743) and 41 EST-SSRs (GenBank accession numbers KX867744--KX867785) with consistent amplification in *A*. *artemisiifolia* ([Table 2](#pone.0176197.t002){ref-type="table"}). Among these, 46 gSSRs and 32 EST-SSRs gave clear, easy-to-score patterns after capillary electrophoresis ([S1](#pone.0176197.s006){ref-type="supplementary-material"}--[S3](#pone.0176197.s008){ref-type="supplementary-material"} Tables). 
Homology with known proteins was detected for 25 EST-SSRs ([S3 Table](#pone.0176197.s008){ref-type="supplementary-material"}).

10.1371/journal.pone.0176197.t002

###### Characteristics of sequence datasets used for development of new SSR markers.

![](pone.0176197.t002){#pone.0176197.t002g}

| Sequence dataset | Total number of reads | Number of good quality reads (%) | PALs[^a^](#t002fn001){ref-type="table-fn"} | Number of tested loci | Number of loci successfully amplified (%) |
|---|---|---|---|---|---|
| Enriched Genomic---454 | 556,018 | 18,218 (3.27%) | 270 | 96 | 30 (31.2%) |
| Genomic---Illumina | 12,000,000 | 923,229 (7.69%) | 2,720 | 110 | 37 (33.6%) |
| EST---454 | 1,317,778 | 393,302 (29.85%) | 397 | 173 | 41 (23.7%) |

^a^ Potentially Amplifiable Loci (PALs).

Genetic polymorphism at gSSR and EST-SSR loci in *A*. *artemisiifolia* {#sec017}
------------------------------------------------------------------------

Genetic polymorphism was assessed using 16 *A*. *artemisiifolia* populations, for a total of 321 individuals ([Table 1](#pone.0176197.t001){ref-type="table"}). After checking for null alleles and stuttering at each locus in each population, 14 gSSRs and 13 EST-SSRs were retained as the best markers for population genetic analysis. Among these 27 loci, only one (SSR86, [S1 Table](#pone.0176197.s006){ref-type="supplementary-material"}) was unambiguously detected as being under selection (Bayesian probability = 1, [S2 Fig](#pone.0176197.s002){ref-type="supplementary-material"}). SSR86 showed less genetic differentiation (F~ST~ = 0.014) than the other markers but a very high within-population genetic diversity (Hs = 0.776), suggesting balancing selection at or near the locus. This locus was therefore discarded from further analyses. 
All 26 retained loci were polymorphic and revealed high levels of genetic diversity ([Table 3](#pone.0176197.t003){ref-type="table"}). The frequencies of null alleles estimated over all populations ranged between 0.06 and 0.19 and were on average similar for gSSRs and EST-SSRs (0.11; [Table 3](#pone.0176197.t003){ref-type="table"}). Allelic richness per locus and population, calculated for a minimum sample size of eight individuals ([Fig 2](#pone.0176197.g002){ref-type="fig"}), was slightly lower for EST-SSRs than for gSSRs (4.438 *versus* 4.748), but the difference was not significant (Wilcoxon test p-value = 0.778). The mean expected heterozygosity within populations was high (0.635 for gSSRs and 0.625 for EST-SSRs) and not significantly different between the two types of markers (Wilcoxon test p-value = 0.778). The mean genetic differentiation F~ST~ was 0.072 for gSSRs and 0.058 for EST-SSRs (a significant difference, Wilcoxon test p-value = 0.045).

![Allelic richness for the 13 EST-SSRs and 13 gSSRs analyzed in *A*. *artemisiifolia*.\
Markers are plotted by increasing mean value.](pone.0176197.g002){#pone.0176197.g002}

10.1371/journal.pone.0176197.t003

###### Genetic diversity and differentiation estimated at 13 gSSR and 13 EST-SSR loci in 16 populations of *A*. *artemisiifolia*. Number of alleles (*N*a), mean observed (*H*o) and expected (*H*s) heterozygosity, genetic differentiation (*F*~ST~) and average across-populations frequency of null alleles (*P*null) are indicated for each marker. 
![](pone.0176197.t003){#pone.0176197.t003g}

| Locus | Repeated motif | Size range | *N*a | *H*o | *H*s | *F*~ST~ | *P*null |
|---|---|---|---|---|---|---|---|
| **ILL02** | (ACCACT)~6~ | 265--297 | 12 | 0.495 | 0.605 | 0.054 | 0.108 |
| **ILL12** | (AACAG)~5~ | 114--148 | 10 | 0.309 | 0.432 | 0.095 | 0.163 |
| **ILL48** | (AGC)~10~ | 116--148 | 14 | 0.523 | 0.702 | 0.053 | 0.087 |
| **ILL64** | (ACC)~7~ | 279--304 | 8 | 0.501 | 0.538 | 0.104 | 0.073 |
| **ILL74** | (AGC)~7~ | 281--308 | 11 | 0.662 | 0.740 | 0.067 | 0.081 |
| **SSR10** | (GATA)~6~ | 160--203 | 10 | 0.378 | 0.625 | 0.071 | 0.194 |
| **SSR17** | (CATA)~5~ | 145--196 | 9 | 0.320 | 0.485 | 0.105 | 0.135 |
| **SSR26** | (GAA)~9~ | 106--120 | 6 | 0.566 | 0.588 | 0.074 | 0.072 |
| **SSR47** | (AG)~9~ | 96--136 | 20 | 0.488 | 0.762 | 0.066 | 0.127 |
| **SSR67** | (GAA)~7~ | 183--230 | 14 | 0.580 | 0.707 | 0.091 | 0.105 |
| **SSR71** | (TCC)~7~ | 127--161 | 12 | 0.715 | 0.706 | 0.065 | 0.071 |
| **SSR73** | (AC)~8~ | 185--206 | 13 | 0.640 | 0.772 | 0.044 | 0.091 |
| **SSR91** | (TC)~8~ | 108--118 | 6 | 0.537 | 0.592 | 0.072 | 0.159 |
| **EST13** | (AGT)~9~ | 173--225 | 18 | 0.594 | 0.814 | 0.050 | 0.089 |
| **EST50** | (AAAG)~5~ | 204--224 | 8 | 0.451 | 0.634 | 0.052 | 0.160 |
| **EST56** | (AGC)~7~ | 208--223 | 5 | 0.647 | 0.688 | 0.052 | 0.070 |
| **EST61** | (AAT)~7~ | 144--180 | 13 | 0.471 | 0.653 | 0.048 | 0.105 |
| **EST69** | (AAT)~7~ | 116--138 | 12 | 0.614 | 0.761 | 0.070 | 0.090 |
| **EST71** | (AAT)~7~ | 141--165 | 13 | 0.394 | 0.545 | 0.054 | 0.098 |
| **EST111** | (ACC)~7~ | 134--159 | 9 | 0.491 | 0.625 | 0.099 | 0.087 |
| **EST113** | (ACC)~7~ | 151--167 | 6 | 0.427 | 0.516 | 0.056 | 0.137 |
| **EST123** | (ACC)~7~ | 114--142 | 8 | 0.374 | 0.589 | 0.064 | 0.165 |
| **EST131** | (ACC)~7~ | 117--131 | 13 | 0.456 | 0.705 | 0.051 | 0.114 |
| **EST135** | (ACG)~7~ | 132--148 | 6 | 0.419 | 0.456 | 0.043 | 0.073 |
| **EST138** | (ACC)~7~ | 130--168 | 12 | 0.283 | 0.498 | 0.065 | 0.140 |
| **EST153** | (AGG)~6~ | 172--193 | 8 | 0.641 | 0.645 | 0.052 | 0.058 |

Insight into the mating system in *A*. *artemisiifolia* {#sec018}
-------------------------------------------------------

Thirty-six maternal progenies sampled from six French populations were analysed with five gSSRs to estimate mating system parameters. 
In addition, direct evidence for the presence of null alleles was sought by examining the progenies of maternal plants apparently homozygous at one locus. If a null allele was present, the maternal plant would actually be heterozygous (i.e., carrying one null allele and one detectable allele). Its progeny would thus contain some plants apparently homozygous for alleles different from the maternal one, but actually carrying one maternal null allele and one paternal detectable allele. Evidence for the presence of null alleles was obtained for all five markers. Depending on the marker considered, from 25% (3 out of 12) to 35% (8 out of 23) of the progenies from plants scored as homozygous contained such non-matching genotypes. Mating system parameters were estimated after excluding these progenies. The maternal inbreeding coefficient was not significantly different from zero (*F* = 0). Multi-locus outcrossing rates (*tm*) were high and not significantly lower than 1 in all populations ([Table 4](#pone.0176197.t004){ref-type="table"}). The rates of mating between related individuals (*tm-ts*) were not significant. Paternity correlations (*rp*) were weak and significant for only two populations. These results suggest complete outcrossing in *A*. *artemisiifolia* and large numbers of pollen donor parents.

10.1371/journal.pone.0176197.t004

###### Estimates of mating system parameters for six French *A*. *artemisiifolia* populations based on five SSR markers (SSR10, SSR17, SSR47, SSR71 and SSR73). Multi-locus outcrossing (*tm*) and single-locus outcrossing (*ts*) rates, outcrossing rate between related individuals (*tm-ts*), maternal inbreeding coefficient (*F*) and correlation of paternity (*rp*) are indicated for each population. 
![](pone.0176197.t004){#pone.0176197.t004g}

| Samples | *tm* | *ts* | *tm--ts* | *F* | *rp* |
|---|---|---|---|---|---|
| GEN02 | 1.025 (0.080) | 1.065 (0.083) | -0.040 (0.059) | -0.200 (0.058) | **0.151[\*](#t004fn002){ref-type="table-fn"} (0.049)** |
| GEN05 | 0.882 (0.114) | 0.789 (0.115) | 0.093 (0.037) | -0.037 (0.104) | 0.001 (0.179) |
| GEN07 | 0.941 (0.077) | 0.895 (0.077) | 0.046 (0.046) | -0.072 (0.152) | 0.106 (0.061) |
| GEN10 | 0.963 (0.056) | 0.906 (0.085) | 0.057 (0.046) | -0.200 (0.046) | 0.063 (0.048) |
| GEN11 | 0.985 (0.081) | 0.933 (0.133) | 0.052 (0.073) | -0.125 (0.130) | **0.175[\*](#t004fn002){ref-type="table-fn"} (0.086)** |
| GEN17 | 0.762 (0.161) | 0.693 (0.118) | 0.069 (0.062) | -0.110 (0.047) | 0.118 (0.074) |

The values in brackets are S.D. \* indicates significant values.

Patterns of genetic diversity and inbreeding in *A*. *artemisiifolia* populations {#sec019}
---------------------------------------------------------------------------------

We compared patterns of genetic diversity between the native range (North America) and the invasive range (Europe). Allelic richness and mean expected heterozygosity within populations were slightly higher in North America than in Europe ([Table 5](#pone.0176197.t005){ref-type="table"}). However, for both parameters the difference between the two ranges was not significant (Wilcoxon test p-values: 0.100 and 0.173 for allelic richness and expected heterozygosity, respectively). The inbreeding coefficient estimated taking null alleles into account was significantly higher than zero in only seven populations: four of the five North American populations and three of the eleven European populations ([S4 Table](#pone.0176197.s009){ref-type="supplementary-material"}). 
Consequently, *F*~IS~ values were on average higher in the native range than in the invasive range ([Table 5](#pone.0176197.t005){ref-type="table"}), although the difference was not significant (Wilcoxon test p-value: 0.2951). *F*~ST~ in the native range was low (0.042) but significant; *F*~ST~ in the invasive range was higher (0.071) and also significant. The difference in *F*~ST~ between the two ranges was significant based on 99% bootstrap confidence intervals.

10.1371/journal.pone.0176197.t005

###### Genetic diversity parameters across populations of *A*. *artemisiifolia* sampled within (i) North America and Europe, (ii) North America only and (iii) Europe only. *A*: average allelic richness after rarefaction, *H*~O~: observed heterozygosity, *H*~S~: expected heterozygosity, *F*~IS~: inbreeding coefficient estimated taking into account the presence of null alleles, *F*~ST~: coefficient of genetic differentiation among populations.

![](pone.0176197.t005){#pone.0176197.t005g}

| Group | Number of populations | *A* | *H*~O~ | *H*~S~ | *F*~IS~ | *F*~ST~ |
|---|---|---|---|---|---|---|
| Overall | 16 | 3.989 | 0.544 | 0.630 | 0.078 | 0.064[\*](#t005fn001){ref-type="table-fn"} |
| North America | 5 | 4.193 | 0.651 | 0.651 | 0.092 | 0.042[\*](#t005fn001){ref-type="table-fn"} |
| Europe | 11 | 3.896 | 0.496 | 0.620 | 0.065 | 0.071[\*](#t005fn001){ref-type="table-fn"} |

\* *F*~ST~ estimate significantly higher than zero based on 99% bootstrap confidence intervals.

Population genetic structure of *A*. *artemisiifolia* {#sec020}
-----------------------------------------------------

The posterior likelihood generated by Structure increased continuously with *K*, whereas the Evanno method \[[@pone.0176197.ref049]\] suggested that the most likely number of genetic clusters was six ([S3 Fig](#pone.0176197.s003){ref-type="supplementary-material"}). 
From *K* = 2, a west-east gradient of genetic variation was observed across Europe, showing that Central and Eastern European populations are different from Western European populations, and similar to the North American populations studied. At *K* = 6, a more detailed genetic structuring was revealed, mainly within Europe ([Fig 3](#pone.0176197.g003){ref-type="fig"}). Most of the additional clusters were specific to one or two populations (cluster 1: HOR-G, cluster 2: 89-P10, cluster 3: TAT-H, cluster 6: BES-I and DOM-G, Figs [3](#pone.0176197.g003){ref-type="fig"} and [4](#pone.0176197.g004){ref-type="fig"}). The two main genetic clusters (clusters 4 and 5, [Fig 4](#pone.0176197.g004){ref-type="fig"}) were frequent in the western and south-eastern parts of the invasive range, respectively, but only the first one (cluster 4) was observed at high frequencies in the native range.

![Individual plant membership probabilities for the genetic clusters identified by the software Structure within 16 *A*. *artemisiifolia* populations sampled in Europe and in North America.\
Populations are ordered from west (left) to east (right).](pone.0176197.g003){#pone.0176197.g003}

![Genetic structure of 16 populations of *Ambrosia artemisiifolia* analyzed using 26 SSR markers.\
Proportions of the six genetic clusters within 16 *A*. *artemisiifolia* populations sampled in Europe and in North America.](pone.0176197.g004){#pone.0176197.g004}

Structure analyses were also performed separately using gSSR data only or EST-SSR data only. The most likely numbers of genetic clusters among the 16 populations were four ([S4 Fig](#pone.0176197.s004){ref-type="supplementary-material"}) and three ([S5 Fig](#pone.0176197.s005){ref-type="supplementary-material"}) for EST-SSRs and gSSRs, respectively.
Overall, the same patterns were observed for both datasets, i.e., variation in cluster membership probabilities among populations and a west-east gradient of genetic variation across Europe.

Patterns of local genetic divergence and bottlenecks in the invasive range {#sec021}
--------------------------------------------------------------------------

Population-specific F~ST~ values were best explained by the model that included latitude as a linear explanatory factor (posterior probability: 0.66), in comparison to the null model with no explanatory factor (posterior probability: 0.28) or the model including longitude (posterior probability: 0.04). Population-specific F~ST~ values increased with latitude ([Fig 5A](#pone.0176197.g005){ref-type="fig"}). While there was no linear relationship with longitude, a non-linear pattern was revealed, with populations from Central Europe (Italy: BES-I and Germany: HOR-G, DOM-G) and two populations located in the western (89-P10) and eastern (TAT-H) parts of the range showing elevated F~ST~ values ([Fig 5B](#pone.0176197.g005){ref-type="fig"}). Notably, these populations were those harbouring specific genetic clusters under the most detailed Structure models (Figs [3](#pone.0176197.g003){ref-type="fig"} and [4](#pone.0176197.g004){ref-type="fig"}). Increased genetic divergence may be due to recent founder events for these populations. However, no significant signatures of recent bottlenecks were detected based on the Wilcoxon test for expected heterozygosity excess in any of the studied populations.
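As a frequentist back-of-the-envelope counterpart to the Bayesian model comparison described above, a latitude trend of this kind can be sketched with an ordinary least-squares slope. The divergence values below are hypothetical, not the study's estimates; the study itself used posterior probabilities of competing models, not OLS:

```python
# Hypothetical European populations: latitude and F_ST/(1 - F_ST) divergence
# (illustrative values only, echoing the reported increase with latitude).
latitude   = [43.5, 44.1, 45.0, 45.9, 46.5, 47.2, 48.3, 49.0, 50.1, 51.2, 52.0]
divergence = [0.03, 0.04, 0.04, 0.05, 0.06, 0.06, 0.08, 0.09, 0.10, 0.12, 0.13]

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

slope = ols_slope(latitude, divergence)  # positive: divergence rises with latitude
```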
![**Population-specific genetic divergence (expressed as *F*~ST~/(1 -- *F*~ST~)) of *Ambrosia artemisiifolia* in Europe, as a function of (A) latitude and (B) longitude.** Dots are coloured according to the most frequent genetic cluster identified by Structure (at *K* = 6) in each population (see Figs [3](#pone.0176197.g003){ref-type="fig"} and [4](#pone.0176197.g004){ref-type="fig"}).](pone.0176197.g005){#pone.0176197.g005}

Transferability of SSRs and relationships among species {#sec022}
-------------------------------------------------------

Cross-species transferability was tested for 31 gSSRs and 32 EST-SSRs. Among these markers, 32.2%, 54.8% and 67.7% of gSSRs and 46.9%, 75% and 81.2% of EST-SSRs gave consistent amplification and clear electrophoretic migration patterns in *A*. *trifida*, *A*. *psilostachya* and *A*. *tenuifolia*, respectively ([S1](#pone.0176197.s006){ref-type="supplementary-material"}--[S3](#pone.0176197.s008){ref-type="supplementary-material"} Tables). Among the 26 markers used to analyse the genetic variation in *A*. *artemisiifolia* ([Table 3](#pone.0176197.t003){ref-type="table"}), three gSSRs (SSR17, SSR26 and SSR73) and five EST-SSRs (EST-SSR13, EST-SSR61, EST-SSR69, EST-SSR111 and EST-SSR123) were scorable in all three other species. Relationships among species were visualised by a PCA based on data at these eight markers ([Fig 6](#pone.0176197.g006){ref-type="fig"}). Consistent with the transferability of SSR markers, *A*. *trifida* was the most genetically divergent species, while *A*. *psilostachya* and *A*. *tenuifolia* appeared to be very genetically close to *A*. *artemisiifolia*. ![Relationships among *Ambrosia* species assessed with PCA based on eight SSR markers.\
Three gSSRs (SSR17, SSR26 and SSR73) and five EST-SSRs (EST-SSR13, EST-SSR61, EST-SSR69, EST-SSR111 and EST-SSR123) were used.
Each color represents one *Ambrosia* species.](pone.0176197.g006){#pone.0176197.g006} Discussion {#sec023} ========== New, highly polymorphic nuclear SSRs in *A*. *artemisiifolia* {#sec024} ------------------------------------------------------------- Next-generation sequencing technologies have considerably facilitated the development of SSRs for non-model organisms. Until recently, the method of choice was 454 sequencing of enriched gDNA libraries. 454 sequencing generates relatively long reads, which facilitates the design of primers within the regions flanking SSRs. However, the more recent Illumina sequencing technique provides much higher numbers of reads at a lower cost, and can now generate paired reads of 2×250 bp or longer \[[@pone.0176197.ref008]\]. Here, we implemented a rigorous initial quality filtering of reads. Further, we merged Illumina paired-end reads and kept only long-enough sequences, which facilitated primer design. A similar amplification success of potentially amplifiable loci was obtained from Illumina and from 454 data. This, together with the Illumina technology yielding one to two orders of magnitude more reads than 454, highlights Illumina as the currently most efficient sequencing technology for developing SSRs, provided reads are carefully checked for quality and paired-end reads are merged. All markers developed were highly polymorphic in *A*. *artemisiifolia*. As compared to gSSRs, EST-SSRs are expected to be less polymorphic and more prone to the influence of selective processes: divergent selection may increase the estimation of genetic differentiation among populations at these loci, while purifying or balancing selection may have the opposite effect \[[@pone.0176197.ref009]--[@pone.0176197.ref011]\]. Here, allelic richness and expected heterozygosity were similar between the two kinds of markers, while genetic differentiation was slightly lower for EST-SSRs. 
Most EST-SSRs were tri-nucleotide repeats, for which length polymorphism does not result in any frameshift in the coding sequence. No influence of selection could be detected for any of the EST-SSRs analysed. The high level of polymorphism observed at both non-genic and genic locations in the genome of *A*. *artemisiifolia* likely reflects very large effective population sizes in a plant species known to have recently undergone a demographic expansion both in its native range \[[@pone.0176197.ref026]\] and in invasive ranges \[[@pone.0176197.ref015]\]. Similarly, large variation in life traits is known in *A*. *artemisiifolia* \[[@pone.0176197.ref054]\].

Null alleles rather than partial selfing explain excess homozygotes in *A*. *artemisiifolia* {#sec025}
--------------------------------------------------------------------------------------------

One undesirable counterpart of high nucleotide polymorphism is the presence of null alleles resulting from mutations at primer binding sites. Here, null alleles were observed for both gSSRs and EST-SSRs, with overall estimated frequencies of about 10%. This is consistent with a literature survey indicating that null allele frequencies are often below 20% but can in some cases range from 40% to 75% \[[@pone.0176197.ref055]\]. Analyzing progenies in French populations provided direct evidence for the presence of null alleles, but no evidence for selfing or biparental inbreeding. Our results were consistent with a previous study of invasive populations from China \[[@pone.0176197.ref027]\] that also indicated complete allogamy and no shift towards partial selfing during invasion. Significant *F*~IS~ values were observed in only seven populations out of sixteen, indicating that null alleles are the main cause of excess homozygosity. Significant *F*~IS~ values were observed in a small minority of populations from Europe but in the majority of populations from the native range.
As any evolution of the mating system towards loss of selfing during invasion seems unlikely, it remains to be investigated whether this might be due to a different functioning of the populations in the two ranges, with populations from the native range showing some Wahlund effect \[[@pone.0176197.ref026]\].

Genetic diversity, population structure and population-specific genetic divergence in *A*. *artemisiifolia* {#sec026}
-----------------------------------------------------------------------------------------------------------

Genetic diversity within populations was similar in North America (the native range) and in Europe, but genetic differentiation among populations (*F*~ST~) was greater in the invasive range. A similar trend had also been observed previously \[[@pone.0176197.ref023]\]. This difference in *F*~ST~ values may arise simply because only a small area of the native range was sampled (five American populations) in our study and in \[[@pone.0176197.ref023]\]. This pattern is also consistent with a scenario involving multiple introduction events, as previously proposed \[[@pone.0176197.ref020],[@pone.0176197.ref022],[@pone.0176197.ref023]\]. Rare alleles initially present in American populations may have shifted to high frequencies in different European populations after invasion and local demographic expansion \[[@pone.0176197.ref022]\]. The maintenance of high levels of genetic diversity within invasive populations, a trend opposite to that found in many other biological invasion processes \[[@pone.0176197.ref056]\], can be attributed to high numbers of introduced seeds in multiple events \[[@pone.0176197.ref057]\], high gene flow and possibly genetic admixture among introduced populations \[[@pone.0176197.ref021],[@pone.0176197.ref058]\]. The main pattern of population structure we observed in Europe was a west-east gradient.
Differentiation between the western and eastern parts of the European invasive range had previously been observed \[[@pone.0176197.ref022]\] and attributed to two main, distinct invasion sources. Here, we also observed that populations from central Europe (Germany to Italy) were genetically distinct from both Western and Eastern European populations. Several genetic clusters predicted by Structure were not observed or were very infrequent in North American populations, suggesting that we may have sampled only a fraction of the native sources. Alternatively, the additional clusters revealed by Structure under the more refined model (*K* = 6) could simply reflect the elevated genetic divergence of some populations. Indeed, Structure analyses are known to be biased towards inferring extra genetic clusters when some populations have undergone strong recent genetic drift; in that case, and contrary to the assumptions of the admixture model, not all genetic clusters are ancestral sources for the present populations \[[@pone.0176197.ref059]\]. Population-specific genetic divergences as estimated based on the F-model largely varied among populations, a pattern not expected if all populations similarly derived from a number of ancestral sources \[[@pone.0176197.ref051]\]. This, together with the outputs of Structure for increasing genetic partitioning ([Fig 3](#pone.0176197.g003){ref-type="fig"}), suggests secondary founding events associated with genetic drift. This hypothesis was not supported by signatures for recent bottlenecks; however, it is well known that bottleneck tests have a very limited power \[[@pone.0176197.ref060]\]. Populations from Italy, Germany and one population from Hungary (TAT) likely had their genetic sources in the South-Eastern part of Europe, whereas one population from France (89-P10) likely originated from eastern France. 
Although this would need to be validated based on a more extensive sampling, the genetic patterns revealed here are overall consistent with two main distinct colonization events in Europe (in South-Eastern France: the Rhone valley, and South-Eastern Europe: the Pannonian plain), with secondary colonization events arising northwards and towards Central Europe.

Genetic variation among species of the genus *Ambrosia* {#sec027}
-------------------------------------------------------

Most SSR markers developed for *A*. *artemisiifolia* (65% and 75%, respectively) were transferable to *A*. *psilostachya* and *A*. *tenuifolia*, whereas only 40% were transferable to *A*. *trifida*. The genus *Ambrosia* is composed of many species that are not clearly delineated \[[@pone.0176197.ref015]\] and for which there is no well-established phylogeny. A former morphological classification considered *A*. *artemisiifolia*, *A*. *psilostachya* and *A*. *tenuifolia* as related species belonging to the same group, while *A*. *trifida* was classified in a separate group \[[@pone.0176197.ref030]\]. This is consistent with differences in gametophytic chromosome numbers (n = 18 for *A*. *artemisiifolia*, *A*. *psilostachya* and *A*. *tenuifolia* but n = 12 for *A*. *trifida*, \[[@pone.0176197.ref013],[@pone.0176197.ref014],[@pone.0176197.ref029],[@pone.0176197.ref030],[@pone.0176197.ref032]\]) and with a chloroplast DNA phylogeny \[[@pone.0176197.ref061]\]. The success of SSR marker transfer among species fully confirms these previous data. In addition, some degree of hybridization between *A*. *artemisiifolia* and *A*. *psilostachya* has been suggested \[[@pone.0176197.ref032]\]. This, or homoplasy at SSR markers, may explain the overlapping genetic variation between the two species. *A*. *psilostachya* and *A*. *tenuifolia* are perennial species that reproduce both sexually and clonally \[[@pone.0176197.ref013], [@pone.0176197.ref014], [@pone.0176197.ref032], [@pone.0176197.ref062]\].
Although these species are of less concern than the annual *A*. *artemisiifolia* and *A*. *trifida*, being less widespread and invasive, our SSR markers will be useful for assessing vegetative *versus* sexual reproduction, as well as for identifying colonization sources and relatedness among populations. However, given that several ploidy levels have been reported in these species \[[@pone.0176197.ref013], [@pone.0176197.ref014], [@pone.0176197.ref029], [@pone.0176197.ref032]\], we recommend that ploidy be carefully checked before markers developed from *A*. *artemisiifolia* are transferred to *A*. *psilostachya* and *A*. *tenuifolia*. *A*. *trifida* is a noxious annual weed, very widespread in its native area \[[@pone.0176197.ref063]\] and introduced into several European countries including, for instance, France, Italy, Slovenia and Serbia \[[@pone.0176197.ref064],[@pone.0176197.ref065]\]. Despite the potential threat posed by this species to both human health and agriculture, population genetics studies are still lacking. Given its distant relation to *A*. *artemisiifolia* and the low success rate of marker transferability, we recommend that additional SSR markers be developed specifically for *A*. *trifida*.

Conclusions {#sec028}
===========

Large sets of genomic SSRs and EST-SSRs were developed and validated in *A*. *artemisiifolia*, providing useful new resources for genetic studies of this highly noxious invasive weed. All markers were highly polymorphic. EST-SSRs revealed as many alleles as gSSRs and yielded similar genetic diversity estimates. The genetic patterns revealed for a set of American and European populations confirmed results from previous studies by showing a high within-population genetic diversity in both the native and invasive ranges. A geographical gradient of genetic variation in Europe was consistent with at least two major colonization events in Western and Eastern Europe, respectively.
Secondary founding events were identified, especially in Central Europe. In addition, we settled a former controversy by demonstrating that the inbreeding observed within populations is attributable mostly to the presence of null alleles rather than to selfing. Last, most SSRs were transferable to three other *Ambrosia* species. These SSRs can readily be used for studying key aspects of the biology and population dynamics of the two species most closely related to *A*. *artemisiifolia* (*A*. *psilostachya* and *A*. *tenuifolia*).

Supporting information {#sec029}
======================

###### Karyotype showing the 36 chromosomes of *A*. *artemisiifolia* at telophase (left) and at the end of prophase (right). The red arrows indicate the chromosomes. Gx100. (TIF)

###### **Results of Bayescan F**~**ST**~ **outlier analysis on 14 gSSR (a) and 13 EST-SSR loci (b).** The vertical bars correspond to threshold P-values of 0.05 (solid line) and 0.01 (dashed line) for the neutral model. (a) Data from all 16 populations. (b) Data from 11 European populations. (TIF)

###### Maximum likelihood values (left) and deltaK method (right) results used to determine the most likely number of genetic clusters with 13 EST-SSR and 13 gSSR markers. (TIF)

###### Genetic structure of 16 populations of *A*. *artemisiifolia* analyzed using the 13 EST-SSR markers. (TIF)

###### Genetic structure of 16 populations of *A*. *artemisiifolia* analyzed using the 13 gSSR markers. (TIF)

###### gSSR markers obtained by 454 sequencing of enriched *A*. *artemisiifolia* gDNA and showing consistent PCR amplifications and clear electrophoretic migration patterns. Loci in bold represent the markers selected in *A*. *artemisiifolia*. (DOCX)
###### gSSR markers obtained by Illumina sequencing of raw *A*. *artemisiifolia* gDNA and showing consistent PCR amplifications and clear electrophoretic migration patterns. Loci in bold represent the markers selected in *A*. *artemisiifolia*. (DOCX)

###### EST-SSR markers obtained by 454 sequencing of *A*. *artemisiifolia* ESTs and showing consistent PCR amplifications and clear electrophoretic migration patterns. Loci in bold represent the markers selected in *A*. *artemisiifolia*. (DOCX)

###### Genetic diversity parameters at 26 nuclear SSR markers for 16 populations of *Ambrosia artemisiifolia*. *N*: number of individuals genotyped, *A*: average allelic richness after rarefaction, *H*~O~: observed heterozygosity, *H*~S~: expected heterozygosity, *F*~IS~: inbreeding coefficient estimated taking into account the presence of null alleles. \* *F*~IS~ estimates significantly greater than zero (using the Bayesian model comparison based on the Deviance Information Criterion implemented in INEST2.1). (DOCX)

We thank Charles Poncet and the GENTYANE team (INRA Clermont-Ferrand) for performing the genotyping and Dr Mona Abirached-Darmency (INRA Dijon) for producing the karyotype of *A*. *artemisiifolia*.

[^1]: **Competing Interests:** The authors of the manuscript have the following interests: during the realization of the study, GB was a paid employee of BASF France SAS and LM received a research grant and salary from BASF France SAS. The other authors have declared that no competing interests exist. Commercial affiliation for some of the authors does not alter our adherence to PLOS ONE policies on sharing data and materials.
[^2]: **Conceptualization:** VLC.**Data curation:** VLC.**Formal analysis:** LM VLC.**Funding acquisition:** GB CD.**Investigation:** LM RC FP RS.**Project administration:** CD GB.**Supervision:** VLC CD BC GB.**Visualization:** LM.**Writing -- original draft:** LM.**Writing -- review & editing:** LM VLC GB BC CD.
Graham, L. A.; Lougheed, S. C.; Ewart, K. V.; Davies, P. L.

Abstract

Fishes living in icy seawater are usually protected from freezing by endogenous antifreeze proteins (AFPs) that bind to ice crystals and stop them from growing. The scattered distribution of five highly diverse AFP types across phylogenetically disparate fish species is puzzling. The appearance of radically different AFPs in closely related species has been attributed to the rapid, independent evolution of these proteins in response to natural selection caused by sea level glaciations within the last 20 million years. In at least one instance the same type of simple repetitive AFP has independently originated in two distant species by convergent evolution. However, the isolated occurrence of three very similar type II AFPs in three distantly related species (herring, smelt and sea raven) cannot be explained by this mechanism. These globular, lectin-like AFPs have a unique disulfide-bonding pattern, and share up to 85% identity in their amino acid sequences, with regions of even higher identity in their genes. A thorough search of current databases failed to find a homolog in any other species with greater than 40% amino acid sequence identity. Consistent with this result, genomic Southern blots showed the lectin-like AFP gene was absent from all other fish species tested. The remarkable conservation of both intron and exon sequences, the lack of correlation between evolutionary distance and mutation rate, and the pattern of silent vs non-silent codon changes make it unlikely that the gene for this AFP pre-existed but was lost from most branches of the teleost radiation. We propose instead that lateral gene transfer has resulted in the occurrence of the type II AFPs in herring, smelt and sea raven and allowed these species to survive in an otherwise lethal niche.
https://dalspace.library.dal.ca/handle/10222/46551
The Museum houses an ongoing collection of objects and artworks based on mourning memorabilia, shrines and graveyard debris, and contains bones, mummified rats, cats, frogs, taxidermy birds, haunted dolls, extinct animal sculpture, ritual pots, drawings of dust, ex votos, poppets and folk horror zines amongst other things.

DUST the collection explores the multitude of ways in which bereaved people express loss, often drawing inspiration from global, multicultural traditions ranging from a belief in reincarnation to animism and participation in the Day of the Dead. The project is interested in how people make their loss visible through an interaction with objects, and how this interaction often involves artistic creativity, or the crafting of memory onto objects, materialising loss. A popular stereotype about British culture is that talk of death is repressed, and forms of visible and collective expression of grief are absent. The stories and objects in the shop and museum will explore through creative practice the ongoing relationship we have with the dead and how they continue to influence our lives. DUST investigates the role that contemporary art practice has to play in negotiating grief and encouraging a space for active communication to take place between the dead and the living. The collection of objects and artifacts acts as a set of prompts to encourage conversation around death and dying. As many of our death rites, such as caring for the body in the home, have been lost over the years, perhaps we can draw inspiration from the rich and varied ways global cultures grieve for their dead. All kinds of stories emerge from the shadows.

SHRINES

During the past nine weeks of Covid lockdown, while the shop has been closed, I have been arranging objects from DUST's collection into small shrines as dedications to the things that we hold dear.
Each week the theme is slightly different, the objects placed together to tell a story that might provoke a feeling of loss and suggest how it can be addressed. We build spaces in our lives for remembrance. A shrine is often an external representation of an interior space, of something precious that we wish to communicate with in some way; shrines become private spaces of communication. Through the careful placement of objects that hold meaning, we invite the presence of something or someone we wish to speak with, creating a space where communication can take place. Once the Covid restrictions have been lifted, artists will be invited to respond to the collection in DUST and create their own shrine, with a story to accompany it. As part of the forthcoming programme of talks, DUST will begin with a lecture on temporary shrines, their meanings and the need we have to commemorate sudden and tragic deaths with objects and memorabilia in this way. Roadside shrines appear throughout the world to mark the spot where someone has died suddenly and tragically. In Chile, as the short documentary below explores, it is widely believed that the soul is separated from the body at the site of death. Houses for the soul are created where the community can leave a message, a flower or an object to continue a direct relationship with the person who has died. The weekly shrines that I have been making in the shop invite you to contemplate our relationship with the world and the things that we are losing, to question what we hold dear and reflect on it.

Talking to the Dead

There is a white, ghost-like telephone booth in Otsuchi, Japan. Inside the booth is an old black dial telephone. It is connected to nowhere, but it is a place where anyone who has suffered a loss can go and speak to their loved ones. It was built in the garden of Itaru Sasaki in 2010 after he lost his cousin.
The year after, in 2011, the area was hit by a tsunami that killed 15,000 people, and the telephone was opened up to the public. Mourners can dial an imaginary number and say the things they wish to say, in the belief that the wind will carry their messages to the spirit world. Over 30,000 people have used the wind telephone, and replicas have started appearing all over the world. Rituals such as this draw on the power of the imagination to connect us to the deceased, and help to keep our loved ones part of our lives in meaningful ways.
https://www.dustltd.art/museum
Cylinder Tonnage Calculations

To calculate the tonnage that a cylinder can provide, you must have the following values:

- Cylinder Bore (B)
- Rod Diameter (D)
- Maximum Pressure (P)

For an example, we will be using a cylinder with an 8" bore (B=8), a rod diameter of 4" (D=4), and a maximum pressure of 3000 PSI.

It should be noted that the cylinder will have different tonnage ratings depending on whether it is pushing or pulling. The tonnage is always higher when pushing, as the pressure acts against the entire face of the piston, whereas when it is pulling, the pressure acts on that area less the area of the rod.

Pushing Tonnage

To calculate the "Pushing," or expanding tonnage:

- Find the cross sectional area (A) of the piston face. This is simply the area of the circle, so A = Pi x (B/2)^2
- For the example, A = 3.14 x (8"/2)^2 = 3.14 x 4"^2 = 3.14 x 16 in^2 = 50.24 in^2
- Multiply the cross sectional area (A) by the maximum pressure (P), so the tonnage (T) = A x P
- For the example, T = 50.24 in^2 x 3000 PSI = 150720 pounds
- Convert pounds to tons, T/2000 = new T
- 150720 lbs/2000 = 75.36 Tons

Pulling Tonnage

To calculate the "Pulling," or contracting tonnage:

- Find the effective area: the piston face area less the rod's cross sectional area, so A = Pi x (B/2)^2 - Pi x (D/2)^2
- For the example, A = 50.24 in^2 - 3.14 x (4"/2)^2 = 50.24 in^2 - 12.56 in^2 = 37.68 in^2
- Multiply by the maximum pressure and convert to tons as before: T = 37.68 in^2 x 3000 PSI = 113040 pounds, and 113040 lbs/2000 = 56.52 Tons
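The pushing and pulling calculations described above can be condensed into a short script. The function and variable names are our own, not part of the original page, and exact pi is used, so results differ slightly from the worked example's 3.14:

```python
import math

def cylinder_tonnage(bore_in, rod_in, psi, pulling=False):
    """Force a hydraulic cylinder can exert, in US tons (2000 lb)."""
    area = math.pi * (bore_in / 2) ** 2      # full piston face area, in^2
    if pulling:
        # when pulling, the rod occupies part of the piston face
        area -= math.pi * (rod_in / 2) ** 2
    return area * psi / 2000                 # pounds -> US tons

push = cylinder_tonnage(8, 4, 3000)                # ~75.4 tons
pull = cylinder_tonnage(8, 4, 3000, pulling=True)  # ~56.5 tons
```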
https://wiki.opensourceecology.org/wiki/Cylinder_Tonnage_Calculations
Mechanical engineering is a very broad field of engineering that involves the application of physical principles to the analysis, design, manufacturing and maintenance of mechanical systems. The process of mechanical engineering can be as simple as the design of a wheel or as complex as the optimization of a turbocharged engine for speed. It can be as small as the cutting of a nano-sized gear or as large as the assembly of a supertanker used to carry oil around the world. It is a diverse branch of engineering, encompassing areas ranging from robotics to computational mechanics, composite materials and tribology. Mechanical engineering continues to play a key role in developing, operating and manufacturing new machines, devices and processes to benefit mankind. Mechanical engineers apply their creative imaginations and professional skills to combine both theory and practice in a variety of situations. For this, they need an in-depth understanding of scientific principles and engineering processes. They also need to be able to develop solutions to real-life problems in the face of conflicting requirements. Mechanical engineers in the commercial world combine technical and management skills to retain the competitive advantage for their companies. Mechanical engineering is a diverse profession that lies at the crossroads of all the engineering disciplines. In a way, mechanical engineers are involved in creating the future. They are the driving force behind many of our technologies and industrial processes, including innovative products like mobile phones, PCs and DVDs in our modern world.

Department's mission

The Department of Mechanical Engineering at NFC Institute of Engineering & Fertilizer Research (NFC-IEFR) dedicates itself to providing students with a set of skills, knowledge and attributes that will permit its graduates to succeed and prosper as engineers and leaders.
The department endeavours to prepare its graduates to pursue life-long learning, serve the profession and meet intellectual, ethical and career challenges.

Program Educational Objectives (PEOs)

PEO-1: Apply the knowledge to solve analytical and practical engineering problems
PEO-2: Work for the continuous socio-technical development of society
PEO-3: Exhibit strong communication and managerial skills, as team leaders as well as team members

Program Learning Outcomes (PLOs)

- Engineering Knowledge: An ability to apply knowledge of mathematics, science, engineering fundamentals and an engineering specialization to the solution of complex engineering problems.
- Problem Analysis: An ability to identify, formulate, research literature, and analyse complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences and engineering sciences.
- Design/Development of Solutions: An ability to design solutions for complex engineering problems and design systems, components or processes that meet specified needs with appropriate consideration for public health and safety, and cultural, societal and environmental considerations.
- Investigation: An ability to investigate complex engineering problems in a methodical way, including literature survey, design and conduct of experiments, analysis and interpretation of experimental data, and synthesis of information to derive valid conclusions.
- Modern Tool Usage: An ability to create, select and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities, with an understanding of the limitations.
- The Engineer and Society: An ability to apply reasoning informed by contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice and solutions to complex engineering problems.
- Environment and Sustainability: An ability to understand the impact of professional engineering solutions in societal and environmental contexts and demonstrate knowledge of and need for sustainable development.
- Ethics: An ability to apply ethical principles and commit to professional ethics, responsibilities and norms of engineering practice.
- Individual and Team Work: An ability to work effectively, as an individual or in a team, in multifaceted and/or multidisciplinary settings.
- Communication: An ability to communicate effectively, orally as well as in writing, on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
- Project Management: An ability to demonstrate management skills and apply engineering principles to one's own work, as a member and/or leader in a team, to manage projects in a multidisciplinary environment.
https://www.iefr.edu.pk/dept.php?area=5&portion=overview
“Communication and media are central to promoting sustainable development and democracy. The right to freedom of expression underpins a free, pluralistic, inclusive and independent media environment as well as freedom of information” (UNESCO, 2017)

L21 is a media outlet and content platform that brings together a vast community of experts and academics, who produce analysis and opinion about political, economic and social issues in Latin America. Through the free dissemination of expert and diverse opinions, we seek to help improve Latin Americans’ capacity to make critical judgments on the main issues occurring in the region. We also seek to foster democracy and a dialogue towards a culture of peace and non-violence. We want to promote freedom of expression, in line with the objectives of the Communication for Development initiative of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the United Nations 2030 Agenda. Likewise, we seek to promote a space for our own analysis and an independent point of view on the different realities that exist in Latin America. L21, as a content platform, was founded in January 2017 by the Uruguayan journalist Jerónimo Giorgi and, in July 2019, we launched our website, with which we began our journey as an independent media outlet. Currently, our team consists of thirteen people and we have more than two hundred active columnists, specialized in the fields of political science, sociology, economics and journalism. We have consolidated ourselves as a collaborative, independent and plural space that advocates the dissemination of contents that make the audience reflect on regional events. The diversity of origin and points of view of our contributors aims to promote a pluralistic environment and greater gender equity, as well as a more inclusive communication that encourages the participation of minorities.
The opinions expressed in the articles always represent the personal views of the authors, not those of Latinoamérica21.

Main objectives of Latinoamérica21

1. To support the development of the media and access to information in Latin America, promoting intercultural dialogue, a culture of peace and non-violence.
2. To promote collaboration with free, independent and pluralistic media, which guarantee the safety of journalists, protect cultural and natural heritage, seek to strengthen systems of governance for culture and seek to realize fundamental freedoms.
3. To contribute to the promotion of freedom of expression, democracy and human rights in the region.
4. To publish content on Latin America in different formats prepared by academics and experts.
5. To distribute our own and external content to various global media.
6. To serve as a meeting point and exchange of ideas among academics, specialists, students and others interested in the region.
7. To build an autonomous space for reflection and analysis on the Latin American reality.
8. To promote knowledge and scientific research on Latin America at a regional and global level.
9. To promote the use of Big Data and data journalism, producing and publishing quality data on Latin America.

What do we do?

Inform about the political, economic and social reality of Latin America through opinion articles, analyses, books and other products produced by our team of collaborators. Strengthen freedom of expression, human rights, equality and democracy in Latin America. Connect academics, specialists, and a general audience interested in Latin American affairs. Disseminate the contents of L21 to encourage the analysis of the regional reality. Promote events of analysis and debate on the Latin American reality, as well as academic publications and other content that address regional issues.

Partners

Associations:
https://latinoamerica21.com/en/a-journalistic-platform-that-produces-content-on-latin-america/
In 2005 and 2006, members of the John Howard-led Coalition Government, including the Prime Minister and Federal Treasurer Peter Costello, questioned whether Muslim dress, such as the hijab, conformed with ‘mainstream’ Australian standards of secularism and gender equality. In doing so, Howard and Costello used a feminist-sounding language to critique aspects of Islam for purportedly restricting the freedom and autonomy of Muslim women. I argue that race is implicated in the construction of Islam as a “threat” to secularism and gender equality because an unnamed assumption of the Australian ‘mainstream’ as Anglo-Celtic and white informs the standards of normalcy the Government invokes and constructs Islam as a ‘foreign’ religion. Further, whilst the demand for Muslim women to conform with ‘mainstream’ norms potentially contradicts the Government’s commitment to women’s autonomy, such a contradiction is not peculiar to the Howard Government. Using the work of Jean-Luc Nancy and Stewart Motha, I place the ‘hijab debates’ within the tension in liberal democracies between fostering autonomy and requiring a universal civil law to guarantee (but exist above) individual autonomy.
https://researchers.mq.edu.au/en/publications/secularism-feminism-and-race-in-representations-of-australianness
The aim of the CANSEA network is to optimize the similarities and complementarities between countries and institutions in the Mekong region, to improve, on the one hand, the efficiency of research carried out by the various projects and, on the other hand, to go beyond the “pilot” diffusion of CA systems in small-scale households in Southeast Asia.

World Overview of Conservation Approaches and Technologies (WOCAT)

WOCAT is a global network on Sustainable Land Management (SLM) that promotes the documentation, sharing and use of knowledge to support adaptation, innovation and decision-making in SLM.

The Asia Indigenous Peoples Pact (AIPP) is a regional organization founded in 1992 by indigenous peoples’ movements. AIPP is committed to the cause of promoting and defending indigenous peoples’ rights and human rights and articulating issues of relevance to indigenous peoples. At present, AIPP has 46 members from 14 countries in Asia, with 18 indigenous peoples’ national alliances/networks (national formations) and 30 local and sub-national organizations. Of this number, 16 are ethnic-based organizations, six (6) are indigenous women’s organizations, four (4) are indigenous youth organizations and one (1) is an organization of indigenous persons with disabilities.

Asian Partnership for the Development of Human Resources in Rural Asia (AsiaDHRRA) is a regional partnership of eleven social development networks and organisations in eleven Asian nations that envisions Asian rural communities that are just, free, prosperous, living in peace and working in solidarity towards self-reliance. Tracing its history from the founding workshop on the development of human resources in rural areas in 1974 in Swanganiwas, Thailand, the DHRRA network’s mission is to be an effective promoter and catalyst of partnership relations, facilitator of human resource development processes in the rural areas and mobilizer of expertise and opportunities for the strengthening of solidarity and kinship among Asian rural communities.
It is dedicated to the empowerment of farmers in the Asian region.

The Land Information Working Group (LIWG) is a civil society network that was set up in 2007. The LIWG consists mostly of international and local civil society organization staff and other individuals working on land issues in Lao PDR. The group has over 80 Core Members representing nearly 40 organizations, and over 180 individual Supporting Members. The LIWG activities are implemented through the LIWG Secretariat, which is overseen by the Committee, elected from among the member organizations.

PORTAL

The Open Development Initiative (ODI) is an open data and information network developed by EWMI that sheds light on development trends in the Lower Mekong Basin. The Lower Mekong Basin is a trans-boundary ecosystem shared by six countries, providing a central livelihood and food security to 65 million people as the largest inland fishery in the world. ODI’s objective is to increase public awareness, enable individual analysis, improve information sharing, and inform rigorous debate – all contributing to the sustainable development of the region from a social, economic and environmental perspective.

Lao Civil Society (Lao CSO) is an open information platform to serve stakeholders contributing to development in Lao PDR. It is aimed at Lao Civil Society Organisations and others seeking to interact with them, including International Non-Governmental Organizations, Development Partners, donors, and consultants/individuals working in this sector.

DOC REPOSITORY

LaoFAB document repository
LaoFAB is a forum for sharing information about Farmers and AgriBusiness in Laos. Members include Government officials, staff of donor agencies and NGOs, project experts, academics and business people.

MyLAFF document repository
MyLAFF is a forum for sharing information about Land, Agribusiness and Forestry issues in Myanmar. Members include staff of donor agencies and NGOs, CSOs, project experts, academics and business people.
Pha Khao Lao
The Pha Khao Lao Agrobiodiversity Resource Platform aims to consolidate the wealth of written and oral knowledge in the country so it can be readily accessed and used by students, researchers, development professionals, decision-makers, local communities and the private sector. The platform is meant to be interactive, insightful and useful to those who use it.

Lao44 (Lao language only)
The “44” refers to a fundamental right of Lao citizens as stated in Article 44 of the Lao Constitution: “Lao citizens have the right and freedom of speech, press and assembly; and have the right to set up associations and to stage demonstrations which are not contrary to the laws”. This website is a service of CLICK I4Dev, which promotes access to information for development so that the public can learn and use the information to improve their livelihoods and their work. All documents and videos are contributions from government organizations, international organizations, civil society organizations, the private sector, educational institutes as well as individuals.

FORUM

Agricultural Transition
This web page has been created through a common effort by many organizations. We want to show the wide range of sustainable agricultural practices, and that peasants and other small-scale food producers and providers can nourish a growing population, preserve the environment and contribute substantially to stopping climate change.

WEBSITES

East-West Management Institute (EWMI)
The East-West Management Institute works to strengthen democratic societies by bringing together government, civil society, and the private sector to build accountable, capable and transparent institutions. Founded in 1988 as an independent not-for-profit organization, EWMI began its work the year before the Berlin Wall came down, with the challenge of crafting functioning democratic systems in transitioning post-Soviet societies.
We learned – in our initial work across Central and Eastern Europe, and in the decades that followed around the world – that a collaborative approach involving civil society, government and the private sector is the key to ensuring that citizens exercise their rights, and institutions are accountable for protecting them.

Working Group on Agriculture (WGA)
Established in 2002, the Working Group on Agriculture (WGA) has been working toward the implementation of priority projects under the Core Agriculture Support Program (CASP). The working group, which includes senior agriculture officials of GMS countries, currently implements and monitors the Strategy for Promoting Safe and Environment-Friendly Agro-Based Value Chains and the Siem Reap Action Plan, 2018-2022. In tandem with the WGA Secretariat, WGA coordinators, who are senior agriculture ministry officials, are responsible for supervising the implementation of the strategy and action plan, and for reporting regularly to their respective Agriculture Ministers on its status.

Solagro
Solagro is a leader in France in the design of evaluation tools and indicators, assisting farmers in shifting their production methods towards economic viability and the environment. Solagro also leads the practical development of agroecology in France, notably through its support to various agro-ecological infrastructure projects (hedges, orchards…).

BLOGS

Welcome to the Kremen Lab Page
As a conservation biologist, I seek mechanisms for preventing or reversing the loss of biodiversity, which is one of the greatest environmental challenges facing humanity in the 21st century.
https://ali-sea.org/useful-links/
As a child of the ’90s, one of my favorite movie quotes is from Harriet the Spy: “there are as many ways to live as there are people in this world, and each one deserves a closer look.” Likewise, there are as many ways to browse the web as there are people online. We each bring unique context to our web experience based on our values, technologies, environments, minds, and bodies. Assistive technologies (ATs), which are hardware and software that help us perceive and interact with digital content, come in diverse forms. ATs can use a whole host of user input, ranging from clicks and keystrokes to minor muscle movements. ATs may also present digital content in a variety of forms, such as Braille displays, color-shifted views, and decluttered user interfaces (UIs). One more commonly known type of AT is the screen reader. Programs such as JAWS, Narrator, NVDA, and VoiceOver can take digital content and present it to users through voice output, may display this output visually on the user’s screen, and can have Braille display and/or screen magnification capabilities built in. If you make websites, you may have tested your sites with a screen reader. But how do these and other assistive programs actually access your content? What information do they use? We’ll take a detailed step-by-step view of how the process works. (For simplicity we’ll continue to reference “browsers” and “screen readers” throughout this article. These are essentially shorthands for “browsers and other applications,” and “screen readers and other assistive technologies,” respectively.)

The semantics-to-screen-readers pipeline

Accessibility application programming interfaces (APIs) create a useful link between user applications and the assistive technologies that wish to interact with them. Accessibility APIs facilitate communicating accessibility information about user interfaces (UIs) to the ATs.
The API expects information to be structured in a certain way, so that whether a button is properly marked up in web content or is sitting inside a native app taskbar, a button is a button is a button as far as ATs are concerned. That said, screen readers and other ATs can do some app-specific handling if they wish. On the web specifically, there are some browser and screen reader combinations where accessibility API information is supplemented by access to DOM structures. For this article, we’ll focus specifically on accessibility APIs as a link between web content and the screen reader. Here’s the breakdown of how web content reaches screen readers via accessibility APIs: The web developer uses host language markup (HTML, SVG, etc.), and potentially roles, states, and properties from the ARIA suite where needed to provide the semantics of their content. Semantic markup communicates what type an element is, what content it contains, what state it’s in, etc. The browser rendering engine (alternatively referred to as a “user agent”) takes this information and maps it into an accessibility API. Different accessibility APIs are available on different operating systems, so a browser that is available on multiple platforms should support multiple accessibility APIs. Accessibility API mappings are maintained on a lower level than web platform APIs, so web developers don’t directly interact with accessibility APIs. The accessibility API includes a collection of interfaces that browsers and other apps can plumb into, and generally acts as an intermediary between the browser and the screen reader. Accessibility APIs provide interfaces for representing the structure, relationships, semantics, and state of digital content, as well as means to surface dynamic changes to said content. Accessibility APIs also allow screen readers to retrieve and interact with content via the API. 
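To make the hand-offs concrete, here is a toy JavaScript model of the pipeline. Every name in it (mapToAccessibleObject, announce, the node shape) is invented for this sketch; real accessibility APIs are operating-system interfaces, not anything exposed to page scripts.

```javascript
// Toy model of the semantics-to-screen-reader pipeline (illustrative only).

// The "browser" maps a DOM-like element into the shape the "API" expects.
function mapToAccessibleObject(element) {
  if (element.hidden) return null; // hidden elements get no accessible object
  return {
    role: element.tag === "button" ? "Button" : "Group",
    name: element.textContent,
  };
}

// The "screen reader" surfaces the accessible object to the user.
function announce(accessibleObject) {
  return `${accessibleObject.role}, ${accessibleObject.name}`;
}

const button = { tag: "button", textContent: "Do a thing", hidden: false };
console.log(announce(mapToAccessibleObject(button)));
// "Button, Do a thing"
```

The point of the sketch is only the division of labor: semantics flow from markup, through a mapping layer, to an output the AT designs for its users.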
Again, web developers don’t interact with these APIs directly; the rendering engine handles translating web content into information useful to accessibility APIs.

Examples of accessibility APIs

- Windows: Microsoft Active Accessibility (MSAA), extended with another API called IAccessible2 (IA2)
- Windows: UI Automation (UIA), the Microsoft successor to MSAA. A browser on Windows can choose to support MSAA with IA2, UIA, or both.
- macOS: NSAccessibility (AXAPI)
- Linux/GNOME: Accessibility Toolkit (ATK) and Assistive Technology Service Provider Interface (AT-SPI). This case is a little different in that there are actually two separate APIs: one through which browsers and other applications pass information along (ATK) and one that ATs then call from (AT-SPI).

The screen reader uses client-side methods from these accessibility APIs to retrieve and handle information exposed by the browser. In browsers where direct access to the Document Object Model (DOM) is permitted, some screen readers may also take additional information from the DOM tree. A screen reader can also interact with apps that use differing accessibility APIs. No matter where they get their information, screen readers can dream up any interaction modes they want to provide to their users (I’ve provided links to screen reader commands at the end of this article). Testing by site creators can help identify content that feels awkward in a particular navigation mode, such as multiple links with the same text (“Learn more”), as one example.

Example of this pipeline: surfacing a button element to screen reader users

Let’s suppose for a moment that a screen reader wants to understand what object is next in the accessibility tree (which I’ll explain further in the next section), so it can surface that object to the user as they navigate to it.
The flow will go a little something like this:

- The screen reader requests information from the API about the next accessible object, relative to the current object.
- The API (as an intermediary) passes along this request to the browser.
- At some point, the browser references DOM and style information, and discovers that the relevant element is a non-hidden button: <button>Do a thing</button>.
- The browser maps this HTML button into the format the API expects, such as an accessible object with various properties: Name: Do a thing, Role: Button.
- The API returns this information from the browser to the screen reader.
- The screen reader can then surface this object to the user, perhaps stating “Button, Do a thing.”

Suppose that the screen reader user would now like to “click” this button. Here’s how their action flows all the way back to web content:

- The user provides a particular screen reader command, such as a keystroke or gesture.
- The screen reader calls a method into the API to invoke the button.
- The API forwards this interaction to the browser.
- How a browser may respond to incoming interactions depends on the context, but in this case the browser can raise this as a “click” event through web APIs. The browser should give no indication that the click came from an assistive technology, as doing so would violate the user’s right to privacy.

Now that we have a general sense of the pipeline, let’s go into a little more detail on the accessibility tree.

The accessibility tree

The accessibility tree is a hierarchical representation of elements in a UI or document, as computed for an accessibility API. In modern browsers, the accessibility tree for a given document is a separate, parallel structure to the DOM tree. “Parallel” does not necessarily mean there is a 1:1 match between the nodes of these two trees.
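As a minimal sketch of that pruned, parallel structure (using a made-up node shape, not real browser internals), a hidden subtree simply produces no accessible objects:

```javascript
// Toy DOM-to-accessibility-tree pruning. The node shape and function name
// are invented for illustration; browsers do far more than this.
function buildAccessibilityTree(domNode) {
  if (domNode.hidden) return null; // hidden subtrees get no accessible objects
  return {
    role: domNode.role,
    children: (domNode.children || [])
      .map(buildAccessibilityTree)
      .filter((child) => child !== null),
  };
}

const dom = {
  role: "Group",
  children: [
    { role: "Button" },
    { role: "Button", hidden: true }, // e.g. hidden via display: none
  ],
};

console.log(buildAccessibilityTree(dom).children.length);
// 1 — only the visible button gets an accessible object
```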
Some elements may be excluded from the accessibility tree, for example if they are hidden or are not semantically useful (think non-focusable wrapper divs without any semantics added by a web developer). This idea of a hierarchical structure is somewhat of an abstraction. The definition of what exactly an accessibility tree is in practice has been debated and partially defined in multiple places, so implementations may differ in various ways. For example, it’s not actually necessary to generate accessible objects for every element in the DOM whenever the DOM tree is constructed. As a performance consideration, a browser could choose to deal with only a subset of objects and their relationships at a time, that is, however much is necessary to fulfill the requests coming from ATs. The rendering engine could make these computations during all user sessions, or only do so when assistive technologies are actively running. Generally speaking, modern web browsers wait until after style computation to build up any accessible objects. Browsers wait in part because generated content (such as ::before and ::after) can contain text that can participate in calculation of the accessible object’s name. CSS styles can also impact accessible objects in various other ways: text styling can come through as attributes on accessible text ranges. Display property values can impact the computation of line text ranges. These are just a few ways in which style can impact accessibility semantics. Browsers may also use different structures as the basis for accessible object computation. One rendering engine may walk the DOM tree and cross-reference style computations to build up parallel tree structures; another engine may use only the nodes that are available in a style tree in order to build up their accessibility tree.
User agent participants in the standards community are currently thinking through how we can better document our implementation details, and whether it might make sense to standardize more of these details further down the road. Let’s now focus on the branches of this tree, and explore how individual accessibility objects are computed.

Building up accessible objects

From API to API, an accessible object will generally include a few things:

- Role, or the type of accessible object (for example, Button). The role tells a user how they can expect to interact with the control. It is typically presented when screen reader focus moves onto the accessible object, and it can be used to provide various other functionalities, such as skipping around content via one type of object.
- Name, if specified. The name is an (ideally short) identifier that better helps the user identify and understand the purpose of an accessible object. The name is often presented when screen reader focus moves to the object (more on this later), can be used as an identifier when presenting a list of available objects, and can be used as a hook for functionalities such as voice commands.
- Description and/or help text, if specified. We’ll use “Description” as a shorthand. The Description can be considered supplemental to the Name; it’s not the main identifier but can provide further information about the accessible object. Sometimes this is presented when moving focus to the accessible object, sometimes not; this variation depends on both the screen reader’s user experience design and the user’s chosen verbosity settings.
- Properties and methods surfacing additional semantics. For simplicity’s sake, we won’t go through all of these. For your awareness, properties can include details like layout information or available interactions (such as invoking the element or modifying its value).

Let’s walk through an example using markup for a simple mood tracker.
We’ll use simplified property names and values, because these can differ between accessibility APIs.

<form>
  <label for="mood">On a scale of 1–10, what is your mood today?</label>
  <input id="mood" type="range" min="1" max="10" value="5" aria-describedby="helperText" />
  <p id="helperText">Some helpful pointers about how to rate your mood.</p>
  <!-- Using a div with button role for the purposes of showing how the accessibility tree is created. Please use the button element! -->
  <div tabindex="0" role="button">Log Mood</div>
</form>

First up is our form element. This form doesn’t have any attributes that would give it an accessible Name, and a form landmark without a Name isn’t very useful when jumping between landmarks. Therefore, HTML mapping standards specify that it should be mapped as a group. Here’s the beginning of our tree:

- Role: Group

Next up is the label. This one doesn’t have an accessible Name either, so we’ll just nest it as an object of role “Label” underneath the form:

- Role: Group
  - Role: Label

Let’s add the range input, which will map into various APIs as a “Slider.” Due to the relationship created by the for attribute on the label and id attribute on the input, this slider will take its Name from the label contents. The aria-describedby attribute is another id reference and points to a paragraph with some text content, which will be used for the slider’s Description. The slider object’s properties will also store “labelledby” and “describedby” relationships pointing to these other elements. And it will specify the current, minimum, and maximum values of the slider. If one of these range values were not available, ARIA standards specify what should be the default value. Our updated tree:

- Role: Group
  - Role: Label
  - Role: Slider
    - Name: On a scale of 1–10, what is your mood today?
    - Description: Some helpful pointers about how to rate your mood.
    - LabelledBy: [label object]
    - DescribedBy: helperText
    - ValueNow: 5
    - ValueMin: 1
    - ValueMax: 10

The paragraph will be added as a simple paragraph object (“Text” or “Group” in some APIs):

- Role: Group
  - Role: Label
  - Role: Slider
    - Name: On a scale of 1–10, what is your mood today?
    - Description: Some helpful pointers about how to rate your mood.
    - LabelledBy: [label object]
    - DescribedBy: helperText
    - ValueNow: 5
    - ValueMin: 1
    - ValueMax: 10
  - Role: Paragraph

The final element is an example of when role semantics are added via the ARIA role attribute. This div will map as a Button with the name “Log Mood,” as buttons can take their name from their children. This button will also be surfaced as “invokable” to screen readers and other ATs; special types of buttons could provide expand/collapse functionality (buttons with the aria-expanded attribute), or toggle functionality (buttons with the aria-pressed attribute). Here’s our tree now:

- Role: Group
  - Role: Label
  - Role: Slider
    - Name: On a scale of 1–10, what is your mood today?
    - Description: Some helpful pointers about how to rate your mood.
    - LabelledBy: [label object]
    - DescribedBy: helperText
    - ValueNow: 5
    - ValueMin: 1
    - ValueMax: 10
  - Role: Paragraph
  - Role: Button
    - Name: Log Mood

On choosing host language semantics

Our sample markup mentions that it is preferred to use the HTML-native button element rather than a div with a role of “button.” Our buttonified div can be operated as a button via accessibility APIs, as the ARIA attribute is doing what it should: conveying semantics. But there’s a lot you can get for free when you choose native elements. In the case of button, that includes focus handling, user input handling, form submission, and basic styling. Aaron Gustafson has what he refers to as an “exhaustive treatise” on buttons in particular, but generally speaking it’s great to let the web platform do the heavy lifting of semantics and interaction for us when we can.
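The finished mood-tracker tree can also be written out as plain data, which makes the traversal order a screen reader might follow easy to see. The property names are simplified as noted earlier, and walkRoles is a helper invented for this sketch:

```javascript
// The mood-tracker accessibility tree as a plain data structure
// (simplified property names; real accessibility APIs differ).
const moodTrackerTree = {
  role: "Group",
  children: [
    { role: "Label" },
    {
      role: "Slider",
      name: "On a scale of 1-10, what is your mood today?",
      description: "Some helpful pointers about how to rate your mood.",
      valueNow: 5, valueMin: 1, valueMax: 10,
    },
    { role: "Paragraph" },
    { role: "Button", name: "Log Mood" },
  ],
};

// Visit roles in document order, as a screen reader reading top-to-bottom might.
function walkRoles(node, out = []) {
  out.push(node.role);
  (node.children || []).forEach((child) => walkRoles(child, out));
  return out;
}

console.log(walkRoles(moodTrackerTree).join(" > "));
// "Group > Label > Slider > Paragraph > Button"
```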
ARIA roles, states, and properties are still a great tool to have in your toolbelt. Some good use cases for these are:

- providing further semantics and relationships that are not naturally expressed in the host language;
- supplementing semantics in markup we perhaps don’t have complete control over;
- patching potential cross-browser inconsistencies;
- and making custom elements perceivable and operable to users of assistive technologies.

Notes on inclusion or exclusion in the tree

Standards define some rules around when user agents should exclude elements from the accessibility tree. Excluded elements can include those hidden by CSS, or by the aria-hidden or hidden attributes; their children would be excluded as well. Children of particular roles (like checkbox) can also be excluded from the tree, unless they meet special exceptions. The full rules can be found in the “Accessibility Tree” section of the ARIA specification. That being said, there are still some differences between implementers, some of which include more divs and spans in the tree than others do.

Notes on name and description computation

How names and descriptions are computed can be a bit confusing. Some elements have special rules, and some ARIA roles allow name computation from the element’s contents, whereas others do not. Name and description computation could probably be its own article, so we won’t get into all the details here (refer to “Further reading and resources” for some links). Some short pointers:

- aria-label, aria-labelledby, and aria-describedby take precedence over other means of calculating name and description.
- If you expect a particular HTML attribute to be used for the name, check the name computation rules for HTML elements. In your scenario, it may be used for the full description instead.
- Generated content (::before and ::after) can participate in the accessible name when said name is taken from the element’s contents.
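A heavily simplified sketch of the precedence from the first pointer above: computeName, the node fields, and the byId lookup are all invented for illustration, and the real name computation algorithm has many more steps and special cases.

```javascript
// Invented sketch of accessible-name precedence: aria-labelledby,
// then aria-label, then (role permitting) the element's contents.
function computeName(el, byId = {}) {
  if (el.ariaLabelledby && byId[el.ariaLabelledby]) {
    return byId[el.ariaLabelledby].textContent; // aria-labelledby wins
  }
  if (el.ariaLabel) return el.ariaLabel;        // then aria-label
  return el.textContent || "";                  // then contents, if the role allows
}

const byId = { title: { textContent: "Close dialog" } };

console.log(computeName({ ariaLabel: "Close", textContent: "X" }));
// "Close" — aria-label beats contents
console.log(computeName({ ariaLabelledby: "title", ariaLabel: "Close" }, byId));
// "Close dialog" — aria-labelledby beats aria-label
console.log(computeName({ textContent: "Log Mood" }));
// "Log Mood" — falls back to contents
```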
That being said, web developers should not rely on pseudo-elements for non-decorative content, as this content could be lost when a stylesheet fails to load or user styles are applied to the page. When in doubt, reach out to the community! Tag questions on social media with “#accessibility.” “#a11y” is a common shorthand; the “11” stands for the 11 middle letters in the word “accessibility.” If you find an inconsistency in a particular browser, file a bug! Bug tracker links are provided in “Further reading and resources.”

Not just accessible objects

Besides a hierarchical structure of objects, accessibility APIs also offer interfaces that allow ATs to interact with text. ATs can retrieve content text ranges, text selections, and a variety of text attributes that they can build experiences on top of. For example, if someone writes an email and uses color alone to highlight their added comments, the person reading the email could increase the verbosity of speech output in their screen reader to know when they’re encountering phrases with that styling. However, it would be better for the email author to include very brief text labels in this scenario. The big takeaway here for web developers is to keep in mind that the accessible name of an element may not always be surfaced in every navigation mode in every screen reader. So if your aria-label text isn’t being read out in a particular mode, the screen reader may be primarily using text interfaces and only conditionally stopping on objects. It may be worth your while to consider using text content (even if visually hidden) instead of text via an ARIA attribute. Read more thoughts on aria-label and aria-labelledby.

Accessibility API events

It is the responsibility of browsers to surface changes to content, structure, and user input.
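As a toy publish/subscribe model of how such change notifications might be plumbed (all class, method, and event names here are invented for illustration; real APIs categorize, batch, and deliver events very differently):

```javascript
// Invented sketch of accessibility API event notifications.
class AccessibilityEventBus {
  constructor() { this.subscribers = new Map(); }
  subscribe(type, handler) {
    if (!this.subscribers.has(type)) this.subscribers.set(type, []);
    this.subscribers.get(type).push(handler);
  }
  notify(type, detail) {
    (this.subscribers.get(type) || []).forEach((handler) => handler(detail));
  }
}

const api = new AccessibilityEventBus();
const announced = [];

// The "screen reader" subscribes to live region change notifications...
api.subscribe("live-region-change", (detail) => announced.push(detail.text));

// ...and the "browser" raises one when the developer updates a live region.
api.notify("live-region-change", { text: "3 new messages" });

console.log(announced[0]);
// "3 new messages"
```

Note the one-way dependency: the browser raises notifications without knowing whether any AT is listening, which is why a missed announcement can stem from either side of the bus.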
Browsers do this by sending the accessibility API notifications about various events, which screen readers can subscribe to; again, for performance reasons, browsers could choose to send notifications only when ATs are active.

Let’s suppose that a screen reader wants to surface changes to a live region (an element with role="alert" or aria-live):

1. The screen reader subscribes to event notifications; it could subscribe to notifications of all types, or just certain types as categorized by the accessibility API. Let’s assume in our example that the screen reader is at least listening to live region change events.
2. In the web content, the web developer changes the text content of a live region.
3. The browser (provider) recognizes this as a live region change event, and sends the accessibility API a notification.
4. The API passes this notification along to the screen reader.
5. The screen reader can then use metadata from the notification to look up the relevant accessible objects via the accessibility API, and can surface the changes to the user.

ATs aren’t required to do anything with the information they retrieve. This can make it a bit trickier as a web developer to figure out why a screen reader isn’t announcing a change: it may be that notifications aren’t being raised (for example, because a browser is not sending notifications for a live region dynamically inserted into web content), or the AT is not subscribed or responding to that type of event.

Testing with screen readers and dev tools

While conformance checkers can help catch some basic accessibility issues, it’s ideal to walk through your content manually using a variety of contexts, such as:

- using a keyboard only;
- with various OS accessibility settings turned on;
- and at different zoom levels and text sizes, and so on.

As you do this, keep in mind the Web Content Accessibility Guidelines (WCAG 2.1), which give general guidelines around expectations for inclusive web content.
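The live-region flow described earlier can be sketched in a minimal snippet (the element ID and message text are invented for the example; the region is present and empty at page load, which sidesteps the dynamically-inserted-region caveat):

```html
<!-- Invented example: an empty live region present at page load. -->
<div id="status" role="status" aria-live="polite"></div>

<script>
  // Step 2 of the flow: the developer changes the text content.
  // The browser (provider) should recognize this as a live region
  // change, notify the accessibility API, and the API passes the
  // event on to any subscribed screen reader.
  document.getElementById("status").textContent = "3 results found.";
</script>
```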
If you can test with users after your own manual test passes, all the better! Robust accessibility testing could probably be its own series of articles. In this one, we’ll go over some tips for testing with screen readers, and catching accessibility errors as they are mapped into the accessibility API in a more general sense.

Screen reader testing

Screen readers exist in many forms: some are pre-installed on the operating system and others are separate applications that in some cases are free to download. The WebAIM screen reader user survey provides a list of commonly used screen reader and browser combinations among survey participants. The “Further reading and resources” section at the end of this article includes full screen reader user docs, and Deque University has a great set of screen reader command cheat sheets that you can refer to. Some actions you might take to test your content:

- Read the next/previous item.
- Read the next/previous line.
- Read continuously from a particular point.
- Jump by headings, landmarks, and links.
- Tab around focusable elements only.
- Get a summary of all elements of a particular type within the page.
- Search the page for specific content.
- Use table-specific commands to interact with your tables.
- Jump around by form field; are field instructions discoverable in this navigational mode?
- Try out anything that creates a content change or results in navigating elsewhere. Would it be obvious, via screen reader output, that a change occurred?

Tracking down the source of unexpected behavior

If a screen reader does not announce something as you’d expect, here are a few different checks you can run:

- Does this reproduce with the same screen reader in multiple browsers on this OS? It may be an issue with the screen reader, or your expectation may not match the screen reader’s user experience design.
For example, a screen reader may choose to not expose the accessible name of a static, non-interactive element. Checking the user docs or filing a screen reader issue with a simple test case would be a great place to start.
- Does this reproduce with multiple screen readers in the same browser, but not in other browsers on this OS? The browser in question may have an issue, there may be compatibility differences between browsers (such as a browser doing extra helpful but non-standard computations), or a screen reader’s support for a specific accessibility API may vary. Filing a browser issue with a simple test case would be a great place to start; if it’s not a browser bug, the developer can route it to the right place or make a code suggestion.
- Does this reproduce with multiple screen readers in multiple browsers? There may be something you can adjust in your code, or your expectations may differ from standards and common practices.
- How do this element’s accessibility properties and structure show up in browser dev tools?

Inspecting accessibility trees and properties in dev tools

Major modern browsers provide dev tools to help you observe the structure of the accessibility tree as well as a given element’s accessibility properties. By observing which accessible objects are generated for your elements and which properties are exposed on a given element, you may be able to pinpoint issues that are occurring either in front-end code or in how the browser is mapping your content into the accessibility API.

Let’s suppose that we are testing this piece of code in Microsoft Edge with a screen reader:

<div class="form-row">
  <label>Favorite color</label>
  <input id="myTextInput" type="text" />
</div>

We’re navigating the page by form field, and when we land on this text field, the screen reader just tells us this is an “edit” control—it doesn’t mention a name for this element. Let’s check the tools for the element’s accessible name.

1. Inspect the element to bring up the dev tools.
2. Bring up the accessibility tree for this page by clicking the accessibility tree button (a circle with two arrows) or pressing Ctrl+Shift+A (Windows). Reviewing the accessibility tree is an extra step for this particular flow but can be helpful to do. When the Accessibility Tree pane comes up, we notice there’s a tree node that just says “textbox:,” with nothing after the colon. That suggests there’s no name for this element. (Also notice that the div around our form input didn’t make it into the accessibility tree; it was not semantically useful.)
3. Open the Accessibility Properties pane, which is a sibling of the Styles pane. If we scroll down to the Name property—aha! It’s blank. No name is provided to the accessibility API. (Side note: some other accessibility properties are filtered out of this list by default; toggle the filter button—which looks like a funnel—in the pane to get the full list.)
4. Check the code. We realize that we didn’t associate the label with the text field; that is one strategy for providing an accessible name for a text input. We add for="myTextInput" to the label:

<div class="form-row">
  <label for="myTextInput">Favorite color</label>
  <input id="myTextInput" type="text" />
</div>

And now the field has a name.

In another use case, we have a breadcrumb component, where the current page link is marked with aria-current="page":

<nav class="breadcrumb" aria-label="Breadcrumb">
  <ol>
    <li>
      <a href="/cat/">Category</a>
    </li>
    <li>
      <a href="/cat/sub/">Sub-Category</a>
    </li>
    <li>
      <a aria-current="page" href="/cat/sub/page/">Page</a>
    </li>
  </ol>
</nav>

When navigating onto the current page link, however, we don’t get any indication that this is the current page. We’re not exactly sure how this maps into accessibility properties, so we can reference a specification like Core Accessibility API Mappings 1.2 (Core-AAM).
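As a side note, breadcrumb markup like this is often generated rather than hand-written. A hypothetical, framework-free helper (the function name and paths are invented for the example) might decide which link gets the attribute like this:

```javascript
// Hypothetical helper: given the breadcrumb hrefs and the path of the
// current page, return one attribute object per link, adding
// aria-current="page" only to the link that matches the current path.
function breadcrumbAttrs(hrefs, currentPath) {
  return hrefs.map((href) =>
    href === currentPath ? { href, "aria-current": "page" } : { href }
  );
}

const attrs = breadcrumbAttrs(
  ["/cat/", "/cat/sub/", "/cat/sub/page/"],
  "/cat/sub/page/"
);
// attrs[2] is { href: "/cat/sub/page/", "aria-current": "page" };
// the first two entries carry no aria-current at all, matching the
// markup above.
```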
Under the “State and Property Mapping” table, we find mappings for “aria-current with non-false allowed value.” We can check for these listed properties in the Accessibility Properties pane. Microsoft Edge, at the time of writing, maps into UIA (UI Automation), so when we check AriaProperties, we find that yes, “current=page” is included within this property value. Now we know that the value is presented correctly to the accessibility API, but the particular screen reader is not using the information.

As a side note, Microsoft Edge’s current dev tools expose these accessibility API properties quite literally. Other browsers’ dev tools may simplify property names and values to make them easier to read, particularly if they support more than one accessibility API. The important bit is to find whether there’s a property with roughly the name you expect and whether its value is what you expect. You can also use this method of checking through the property names and values if mapping specs, like Core-AAM, are a bit intimidating!

Advanced accessibility tools

While browser dev tools can tell us a lot about the accessibility semantics of our markup, they don’t generally include representations of text ranges or event notifications. On Windows, the Windows SDK includes advanced tools that can help debug these parts of MSAA or UIA mappings: Inspect and AccEvent (Accessible Event Watcher). Using these tools presumes knowledge of the Windows accessibility APIs, so if this is too granular for you and you’re stuck on an issue, please reach out to the relevant browser team!

There is also an Accessibility Inspector in Xcode on macOS, with which you can inspect web content in Safari. This tool can be accessed by going to Xcode > Open Developer Tool > Accessibility Inspector.
Diversity of experience

Equipped with an accessibility tree, detailed object information, event notifications, and methods for interacting with accessible objects, screen readers can craft a browsing experience tailored to their audiences. In this article, we’ve used the term “screen readers” as a proxy for a whole host of tools that may use accessibility APIs to provide the best user experience possible. Assistive technologies can use the APIs to augment presentation or support varying types of user input. Examples of other ATs include screen magnifiers, cognitive support tools, speech command programs, and some brilliant new app that hasn’t been dreamed up yet.

Further, assistive technologies of the same “type” may differ in how they present information, and users who share the same tool may further adjust settings to their liking. As web developers, we don’t necessarily need to make sure that each instance surfaces information identically, because each user’s preferences will not be exactly the same. Our aim is to ensure that no matter how a user chooses to explore our sites, content is perceivable, operable, understandable, and robust. By testing with a variety of assistive technologies—including but not limited to screen readers—we can help create a better web for all the many people who use it.

Further reading and resources

- WebAIM “Survey of Users with Low Vision”
- WebAIM “Screen Reader User Survey”
- W3C developer guides
- W3C specifications: the docs below are known as “AAMs.” They detail how content maps into various accessibility APIs and may be less relevant to web developers’ day-to-day work. However, some have notes on how specific elements’ names and descriptions are meant to be calculated:
https://alistapart.com/article/semantics-to-screen-readers/
Despite the setbacks globalisation has faced in recent years from reactionary politics, the advent of artificial intelligence and robotisation is set to ensure its continuation. Domestic policy must therefore be designed in such a way as to reap the rewards of globalisation while avoiding its pitfalls. This column uses the case of Finland to show how this can be done. Finland has grown faster than its peers over two waves of globalisation, despite enduring substantial setbacks. In both its successes and challenges, it is an important example of the need for deliberate policies to prepare for future disruptions.
Richard Baldwin, Vesa Vihriälä, 19 December 2017

Jon Danielsson, 15 November 2017
Artificial intelligence is increasingly used to tackle all sorts of problems facing people and societies. This column considers the potential benefits and risks of employing AI in financial markets. While it may well revolutionise risk management and financial supervision, it also threatens to destabilise markets and increase systemic risk.

Nicholas Bloom, Chad Jones, John Van Reenen, Michael Webb, 20 September 2017
The rate of productivity growth in advanced economies has been falling. Optimists hope for a fourth industrial revolution, while pessimists lament that most potential productivity growth has already occurred. This column argues that data on the research effort across all industries show that the costs of extracting ideas have increased sharply over time. This suggests that unless research inputs are continuously raised, economic growth will continue to slow in advanced nations.

Yoko Konishi, 15 September 2017
The latest AI boom that started in 2012 shows no signs of fading, thanks to the recent availability of big data and widespread adoption of deep learning technologies. This column argues that this new combination of data and technology offers an unprecedented opportunity for society.
AI will develop sustainably only if systems are in place to collect relevant data, and AI is not adopted for its own sake.

Jacques Bughin, Eric Hazan, 21 August 2017
Artificial intelligence has been around since the 1950s, and has gone through many cycles of hype and ‘winters’. Based on a survey of senior executives from over 3,000 companies in ten countries, this column describes how artificial intelligence is experiencing a new spring and is here to stay. The authors also argue that it can bring firm-level productivity and profit growth, with employment dynamics that may not be as bad as anticipated by some.

Hidemichi Fujii, Shunsuke Managi, 16 June 2017
Patent applications are a good indicator of the nature of technological progress. This column compares trends in applications for artificial intelligence patents in Japan and the US. One finding is that the Japanese market appears to be less attractive for artificial intelligence technology application, perhaps due to its stricter regulations on the collection and use of data.

Daron Acemoğlu, Pascual Restrepo, 05 July 2016
Many economists throughout history have been proven wrong in predicting that technological progress will cause irreversible damage to the labour market. This column shows that, so far, the labour market has always adapted to the replacement of jobs with capital, using evidence of new types of skilled jobs between 1970 and 2007. As long as the rate of automation of jobs by machines and the creation of new complex tasks for workers are balanced, there will be no major labour market decline. The nature of new technology, and its impact on future innovation potential, has important implications for labour stability.

Masayuki Morikawa, 07 June 2016
The substitution of human labour by artificial intelligence and robots is a keenly debated topic.
Some claim that a substantial share of jobs is at risk, while others argue that computers and robots will lead to product innovations and hence to unimaginable new occupations. This column uses a survey of Japanese firms to examine the impact of AI-related technologies on business and employment. Overall, firms expect a positive impact on business but a negative impact on employment. Firms with a highly skilled workforce, however, have a more optimistic view than firms with lower skilled employees.
https://voxeu.org/taxonomy/term/6604?qt-quicktabs_cepr_policy_research=1&page=1
This stage is from 2 years to 7 years. During this stage, children are able to develop language and they will be able to use symbols, words, gestures, signs and images to represent objects. In addition, children do not have a notion of time; they only think in the present. The third stage is named the concrete operational stage because during this stage children are able to think logically about concrete problems and organize things into categories and series. In fact, children are able to reverse thinking to mentally “undo” actions. Children are also unpredictable and emotional; they will show different behaviour in different places, such as at school and at home. We can get to know children better by observing them. The event sampling observation method allows the observer to analyze the predefined behaviour more easily. It is also an observation method which can be understood easily by others, such as parents. The method of event sampling is preferred by most observers as it is easier to use. Unlike a three-year-old in the preoperational stage, a nine-year-old is no longer inhibited by an egocentric point of view. A nine-year-old can also easily pass conservation tasks. At age nine, Piaget describes that children excel at tasks which involve organization and solving problems based on physically tangible or visible objects. For instance, a class of nine-year-olds would be able to organize themselves from tallest to shortest quite quickly, or arrange their desks into various shapes. However, children in the preoperational phase still struggle with solving problems that have to do with things that are not tangible or visible. Jean Piaget was a Swiss psychologist and epistemologist best known for pioneering studies on cognitive development in children. Piaget is best known for his theory of cognitive development and for advancing the field of genetic epistemology, which he established.
Piaget was born in Neuchâtel, Switzerland on August 9th, 1896 to Arthur Piaget, a university professor, and Rebecca Jackson. From young childhood, Piaget showed an aptitude for biology, particularly with his studies concerning mollusks, which garnered professional attention. Additionally, Piaget was introduced to epistemology at a young age by his godfather, who stressed the importance of studying philosophy and logic. Adapting communication to the age of the child helps prevent barriers, as younger children need a lot more reassurance and support, whereas young people are quite confident but are not sure how to reflect on and deal with situations or problems. You could change the language you are using, as younger children don't have such a wide vocabulary; the 5-year-old won't need feedback, they will need encouragement and approval that what they have done is brilliant and you like it. All children of different ages need different things from the communication they have with you. Schools provide a lot of situations, from 1:1 communication to group communication, which can mean you can be more or less formal in different situations. Jean Piaget, a Swiss-born psychologist, was one who was particularly interested in how children perceive their environment. So engrossed was he by this process that Piaget used his own children as scientific models in his experiments, in establishing his theory of cognitive development. After analyzing the behaviors of his children in their early development, Piaget concluded that there are four main stages of human cognitive maturation: the Sensorimotor Stage, the Preoperational Stage, the Concrete Operational Stage and the Formal Operational Stage. This essay seeks to outline and examine Piaget’s Cognitive Development Theory, and to illustrate how this theory can influence learning and teacher pedagogy in classes within the Caribbean region.
The first stage of Piaget’s Cognitive Development theory is the Sensorimotor Stage, which he states takes place from birth. LDC-9: Children comprehend and use information presented in books and other print media. LDC-12: Children begin to develop knowledge of the alphabet and the alphabetic principle. Materials: Alphabet Picture Board, Word Cutouts. Duration: 10-minute lesson on language development (15 minutes for children to complete the match board). Anticipatory Set: Hi class, today’s lesson is all about language development. How many of you know what language development is? Raise your hand to answer. The Lego DC Super Heroes toy is a good example of how a child can expand their logical thinking skills. According to Jean Piaget's concrete operational stage, children from the age of 7 to 11 years old start to demonstrate logically integrated thinking (Driscoll, 2005, p. 197). With this game a child is gaining a better understanding of their mental operations because they are constructing a toy. The toy is targeted at the right age: it says on the box that it is recommended for children between the ages of 7 and 14 years. The cognitive constructivist theory can be traced back to the work of a talented individual called Jean Piaget, who was born on August 9, 1896 in Switzerland. By the age of eleven, he had published his first scientific paper, and by his early teens, Piaget’s mollusk papers were published and accepted by academics who were unaware of his age. In 1918, Piaget studied zoology at the University of Neuchâtel and achieved a PhD, and after meeting Carl Jung and Paul Eugen Bleuler at the University of Zürich, his career changed direction, leading him to study psychology at the Sorbonne in Paris. His work involved checking standardized reasoning tests designed to draw connections between a child’s age and his errors.
However, Piaget disagreed with the construction of the test and set about designing his own, which led to the birth of the cognitive development theory that was based around a concept of constructivism. Cognition is the study of how the mind works. When we study cognitive development, we are acknowledging the fact that changes occur in how we think and learn as we grow. There is a very big difference in the way that children and adults think about and understand their environment. Jean Piaget (1896-1980), a biology student, did extensive research work in the area of child development and is attributed with the development of the theory of cognitive development, which has played a major role in this field (child development). His approach to studying the development of the human mind was a synthesis of ideas drawn from biology and philosophy.
https://www.ipl.org/essay/What-Is-Piagets-Theory-Of-Cognitive-Development-PCJ62UZ3XG
The dominant pedagogy in classrooms today is still direct instruction, the pedagogy underlying so-called computer-based, “personalized learning” environments. But, we argue in this week’s blog that truly 1-to-1 implementations, which are only now becoming feasible, are the opportunity needed to transform classrooms and support educators in moving to an inquiry pedagogy, a pedagogy that develops students’ critical thinking, creativity, communication, and collaboration. Reportedly some 30,000 Chromebooks come online in K-12 each day. There are positive and negative aspects to that, though, as far as we can tell, the Chromebook invasion is mostly a good thing for education. This blog post kicks off a new blog theme: Reinventing Curriculum. Like teacher and pedagogy, curriculum is one of the keys to a successful learning experience. Due to three trends, we will argue, curriculum – its development, its distribution, and its use — is in a state of real turbulence. The educational community, in general, and educational technology, in particular, needs to focus on the “next turn of the crank” in curriculum! Media Specialist David Olson explains how transformations in the library are helping to enhance efforts to provide blended (or hybrid) learning in the classroom. No question: the future of educational technology is blended learning enacted in 1-to-1 classrooms. But: exactly what instruction will be delivered? In the past, textbooks played the role of providing teachers with the day-by-day, week-by-week, instructional roadmap. Current lesson marketplaces, however, provide supplemental lessons; there is a huge need for basal/comprehensive, blended learning curricula. Curriculum developers: Listen up! Unlike a previous blog post where we pooh-poohed blended learning, in this blog post we do a flip-flop and hail blended learning as the model for the future of ed tech. 
Now our formulation of Blended Learning may diverge from the orthodoxy, but so what: we see a future where K-12 students, with their 1-to-1 computing devices, will be engaging in lessons that are computer-based and computer-mediated. You can take that prediction to the bank! In this week’s blog we ask YOU a question: what are the obstacles – the barriers – that prevent K-12 teachers from using technology in their classroom? Devices are crucial as a conduit for content; however, they do not directly improve learning outcomes. On the way to Personalized Learning 3.0, we may well need to “pass through” Personalized Learning 1.0. But we mustn’t tarry! Educational automation is not an interesting goal! The vision of a personalized “bicycle for the mind” for each and every child must drive us to "informate" – to create Personalized Learning 3.0 environments!! Without question, children need to develop reading fluency. Commonsensically, having kids read lots and lots should help in developing such fluency. Well, the data from U.S. classrooms on methods such as “Sustained Silent Reading” and its cousin, “Drop Everything and Read,” are equivocal, but that’s not stopping the Taiwanese. In 3 short years, 10 percent of its 2,700 elementary schools have adopted the “Modeled Sustained Silent Reading” Program! The data be hanged! Commonsense is winning in Taiwan.
https://thejournal.com/Articles/List/Viewpoint.aspx?blogid=M7OF0LWB&m=1&Page=28
- The defender must remove one tag to stop the attacker's progress. He/she then holds up the tag and drops it to the ground, marking where the play the ball should occur. There is a marker in the play the ball.
- If a player propels the ball in a forward direction with their hand or arm and the ball comes into contact with the ground, an opponent or the referee, a knock on will be awarded. A changeover will be awarded to the non-offending team. The referee may allow the non-offending team to take possession and gain an advantage. If they are tagged, it will be a zero tag.
- The game is non-tackle – the attacker cannot deliberately bump into a defender. A defender cannot change direction and move into the attacker's path. Whoever initiates contact will be penalised. The onus is on the attacking player to avoid defenders.
- The ball carrier is not allowed to protect his tag or fend off defenders.
- A try is awarded to the attacking team when they ground the ball on or over the try line. There are no dead ball lines.
- Defence must be back 7 metres.

KICKING
- The ball may not be kicked until after the fourth tag and before the initial tag.
- Kicks in general play cannot be above the shoulder height of the referee. The attacking team cannot dive on a kicked ball in any situation, but can kick on kick offs and line drop outs. If the ball lands in the field of play and then rolls across the try line, whether touched or not, a line drop out occurs. The try line becomes the dead ball line for all kicks.

SCORING
- 1 point will be awarded for each try. In mixed divisions: 2 points for a try scored by a female.
- The winning team will receive 3 points.
- A drawn game will receive 2 points.
- A loss will receive 1 point.
- Any team forfeiting the game will be given 1 point for notifying the Competition Convenor and 0 points if they don’t.

OTHER IMPORTANT ISSUES
- The defensive line can move forward only when the dummy half touches the ball. The dummy half can run and be tagged with the ball. The dummy half may score a try.
- An attacker must stop and play the ball if he is in possession with only one tag on.
- The only players able to promote the ball with one tag on are the dummy half and the player taking the tap (as long as they do not take more than one step with the ball).
- A simultaneous tag is play on. (If the referee is unable to decide, the pass is allowed – play on. The advantage goes to the attacking team.)
- If the ball is kicked or passed into the referee, the referee will order a changeover where he was struck.
- If a player's knees hit the ground whilst diving for a try and a defender is within tagging distance, the try is disallowed and a tag is counted.
- Unsportsmanlike conduct covers the behaviour and attitude of players on the field and may result in a penalty, sin bin or dismissal.

Rule changes for the following age groups:

U8's & U10's Boys & U10 Girls:
- Play the ball with no marker.
- Defence cannot move until the first receiver has the ball unless the dummy half runs with the ball.
- The dummy half can score, but if they are tagged it is a changeover where the tag was made.
- All other divisions have a marker, and defence can move as soon as the dummy half touches the ball.

Please contact the Australian Oztag Sports Association on (02) 9522 2777 to purchase a full copy of the rule book.
http://www.sutherlandshireoztag.com/index.php?page=1765
(RxWiki News) The idea that exercise could help depressed patients isn't a new one. But what could be a new idea is that it could also help their hearts. A new study from Emory University found that, while depression symptoms were linked to an increased risk of heart disease, regular exercise seemed to reduce this risk. "Our findings highlight the link between worsening depression and cardiovascular risk and support routinely assessing depression in patients to determine heart disease risk," said lead study author Arshed A. Quyyumi, MD, the director of the Emory Clinical Cardiovascular Research Institute, in a press release. "This research also demonstrates the positive effects of exercise for all patients, including those with depressive symptoms." Depression has repeatedly been linked to an increased risk of and worse outcomes for heart disease and other health conditions. As many as 20 percent of patients hospitalized for heart attack report symptoms of depression. Heart disease patients likewise have three times the risk of depression as other patients. To better understand the link between depression and heart disease, Dr. Quyyumi and team looked at 965 patients without a history of heart disease or mental health issues. Surveys were used to assess these patients for depression, physical activity level and early indicators of heart disease, such as arterial stiffening and inflammation. Researchers found that as depression symptoms worsened, so did the early indicators for heart disease. These indicators were also more pronounced in the inactive patients than in the patients who exercised regularly. This study was published Jan. 11 in the Journal of the American College of Cardiology. Information on funding sources and conflicts of interest was not available at the time of publication.
https://feeds.rxwiki.com/news-article/exercise-may-lower-heart-disease-risk-depressed-patients
Effects of mindfulness-based therapy for patients with breast cancer: A systematic review and meta-analysis.

To quantify the effects of mindfulness-based therapy (MBT) on physical health, psychological health and quality of life (QOL) in patients with breast cancer. Studies were identified through a systematic search of six electronic databases. Randomized control trials (RCTs) examining the effects of MBT, versus a control group receiving no intervention, on physical health, psychological health and QOL in breast cancer patients were included. Two authors independently assessed the methodological quality of included studies using a quality-scoring instrument developed by Jadad et al. and extracted relevant information according to a predesigned extraction form. Data were analysed using the Cochrane Collaboration's RevMan 5.1. Finally, seven studies involving 951 patients were included. While limited in power, the results of the meta-analysis indicated a positive effect of MBT in reducing anxiety [SMD -0.31, 95% CI -0.46 to -0.16, P<0.0001], depression [SMD -1.13, 95% CI -1.85 to -0.41, P=0.002], fear of recurrence [SMD -0.71, 95% CI -1.05 to -0.38, P<0.0001], and fatigue [SMD -0.88, 95% CI -1.71 to -0.05, P=0.04] associated with breast cancer, and improving emotional well-being [SMD 0.39, 95% CI 0.19-0.58, P=0.0001], physical function [SMD 0.42, 95% CI 0.19-0.65, P=0.0004], and physical health [SMD 0.31, 95% CI 0.08-0.54, P=0.009] in these patients. Although the effects on stress, spirituality, pain and sleep were in the expected direction, they were not statistically significant (P>0.05). Moreover, there is limited evidence from a narrative synthesis that MBT can improve the QOL of breast cancer patients. The present data indicate that MBT is a promising adjunctive therapy for patients with breast cancer. Due to some methodological flaws in the literature, further well-designed RCTs with large sample sizes are needed to confirm these preliminary estimates of effectiveness.
The project will assess administrative and governance aspects such as recruitment, transfer, facilities and services, promotion, training and culture through interviews with supervisory officers and focus group discussions with women in the force. The project will first conduct a baseline assessment of the status of policewomen in Karnataka, and then build on the baseline research to produce a comprehensive ground assessment of challenges policewomen face in Karnataka with the aim of recommending targeted institutional measures. The final project report will be submitted to the Karnataka Police. Further to the assessment, the project will also work on providing suggestions and measures for the police department to consider to improve the working conditions for women in Karnataka Police. These are full-time roles based in Bengaluru.
- Research and write reports and notes on legal provisions pertaining to women in police
- Develop and implement targeted dissemination plans
- Coordinate, supervise and proofread the translation of the created material into Kannada
- Coordinate with key project stakeholders, NLSIU administration and project funders
- Represent NLSIU along with the Project Head, and independently where necessary, in meetings with the Karnataka Police through the course of the study to share updates, seek inputs, present findings and finalize the recommendations of the report
- Research and input into the survey questionnaire for the ground assessment study
- Conduct corresponding ground assessments of challenges facing policewomen in the state in at least 4 districts total from the southern and western parts of Karnataka identified for the study, including: inputting into the assessment parameters; inputting into the survey questionnaire and getting it translated into Kannada; conducting focus group discussions with policewomen at different ranks in the select districts; conducting interviews with the relevant supervisory officers, as identified; administering the survey during the FGDs as well as at one-on-one interviews with supervisory officers; and collating and reviewing the survey answers in the agreed format
- Convene interactions with local civil society groups, academics and experts working on gender issues to ensure recommendations on policy improvements and training on gender mainstreaming are representative of cultural contexts
- Coordinate with the designated nodal officer from the Karnataka Police to hold validation meetings.
Qualifications:
- Graduate degree, preferably in law and social sciences
- Three years of work experience in the field of human rights; legal and policy research and advocacy; and/or women’s rights
- Proven research and analytical skills through published work
- Strong communication skills in English and Kannada
- Ability to work independently.
Write to [email protected] with the subject heading “Women in Karnataka Police project” with:
- Your CV
- A published writing sample (not more than 2000 words)
- A short statement of purpose (not more than 500 words)
The deadline for applications is December 14, 2021.
https://www.barandbench.com/apprentice-lawyer/call-for-applications-project-role-of-women-karnataka-police
This festive period Three Wise Women from the Faculty of Medicine will be giving us the gift of wisdom. Our first is Professor Gerry Thomas, a leading authority on the health impacts of radiation, who tells us why we should focus on the facts. I was born in the 1960s and grew up believing that the word ‘radiation’ meant something that was infinitely dangerous. Back then, we were led to believe that nuclear weapons would lead to the extinction of our species, and that to be bitten by a radioactive spider would confer supernatural powers! I was therefore sceptical about the use of nuclear power. It wasn’t until 1992, when I started to study the health effects of the accident at the Chernobyl nuclear power station in 1986, that I began to question whether my understanding of the health effects of radiation came more from science fiction than scientific fact. I have spent 21 years running the Chernobyl Tissue Bank so that my research group, based at Imperial for the last 12 years, and others around the world, could have access to ethically sourced, high-quality human samples to understand the mechanisms that underpin the development of thyroid cancer in children and adolescents.
Let’s start with some facts
We are a successful species inhabiting a naturally radioactive world and must have evolved protective mechanisms to deal with the effects of natural radiation – or we wouldn’t be here. All of us will be exposed to between 2 and 3 mSv – the millisievert is the unit for whole-body radiation dose – for each year of our life from our natural environment. Individuals seem to accept the use of higher levels of radiation when they can associate it with a direct beneficial effect – such as the use of radiation in medical diagnostics and therapies, particularly cancer treatment. However, there appears to be less acceptance of the risk associated with any radiation level when the possibility of exposure to often much lower doses results from emissions from the nuclear industry.
So, where does our evidence come from?
Most of our understanding of the effect of radiation on health stems from epidemiological studies of history’s worst radiation incidents. These range from studies of the survivors of the atomic bombs dropped on Hiroshima and Nagasaki in the 1940s, to cohorts of workers who were exposed to radiation in the workplace – such as the radium dial painters of the early 20th century, also known as the “Radium Girls” – and, more recently, the nuclear power plant accidents at Chernobyl and Fukushima. Each of these is a slightly different scenario, involving different types of radiation and different routes of exposure – factors which we now know influence health effects. Many things can affect our health; many agents in our environment can lead to the development of cancer. Compared with many other things, radiation is a pretty weak carcinogen, particularly at low doses. To put this into some context, there have been 17,803 cancers in the Japanese survivors of the 1945 atomic bombings, of which only 941 are attributable to radiation exposure. The Chernobyl accident may well eventually result in a total of 16,000 excess thyroid cancer cases, of which only 1% would be predicted to be fatal. Although initial estimates predicted 4,000 excess cancer cases (other than thyroid cancer) in the cohorts that were involved in cleaning up the accident at Chernobyl, the data so far suggest that none are attributable to the radiation. Professor Gerry Thomas featured in an Australian documentary on Fukushima.
Herein lies a problem
Our epidemiological evidence shows that the effect of radiation exposure on public health is dwarfed by the effects of everything else that affects our health. It is rather like looking for a needle in a haystack. Even in the largest studies, it has been difficult to produce good data to categorically show the health effects of radiation at individual doses below 100 mSv.
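The cohort figures quoted above can be put into proportion with a one-line calculation, using only the numbers given in the text:

```python
# Proportion of cancers in the Japanese atomic-bomb survivor cohort
# that the article attributes to radiation exposure.
total_cancers = 17803
attributable = 941
share = attributable / total_cancers
print(f"{share:.1%} of the cohort's cancers attributable to radiation")
# → 5.3% of the cohort's cancers attributable to radiation
```

Roughly one cancer in nineteen in that cohort, which is the sense in which the article calls radiation a comparatively weak carcinogen.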
All toxins, including radiation, show a relationship between the dose to which we are exposed and the magnitude of their effect on health. Working out the dose delivered to a particular tissue in someone exposed to radiation is complicated and requires an understanding of physics, chemistry and biology. The physical half-life of a radioactive isotope determines how much radioactivity will be released over a period of time.
Tipping the balance
Our bodies exist in equilibrium with our environment; we are constantly taking in and releasing chemicals. The amount of time an individual chemical substance, such as a radioactive isotope, stays within our bodies is termed the biological half-life. This is governed by the chemistry of our bodily tissues – some of our tissues have developed biological pumps to concentrate particular chemical entities within a tissue, and mechanisms to store complexes of these chemicals. In general, where the biological half-life is greater than the physical half-life, the dose of radiation to a given tissue will be higher, and therefore the health effects are likely to be greater. The doses from the isotopes with longer physical half-lives are much lower than our unconscious biases would lead us to think. The dose from the Caesium-137 isotope to 6 million residents living in the vicinity of the Chernobyl nuclear power plant was 10 mSv over about 20 years – the same as a whole-body CT scan.
Radiation fears may be exaggerated
The health effects of low-dose radiation exposure have been exaggerated by some, and the resulting fear of radiation may be leading us to decide energy policy based on urban myths rather than scientific facts. There is evidence from Germany that ditching nuclear and increasing renewables results in an increase, not a decrease, in carbon emissions.
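The interplay between physical and biological half-life described above is usually summarised as an effective half-life, in which the two removal rates add. A minimal sketch — the Caesium-137 biological half-life used here is an approximate literature figure, not a value from the article:

```python
def effective_half_life(t_phys, t_bio):
    """Effective half-life from physical decay plus biological clearance.

    The removal rates add: 1/T_eff = 1/T_phys + 1/T_bio, so whichever
    half-life is shorter dominates. Units just need to match (days here).
    """
    return (t_phys * t_bio) / (t_phys + t_bio)

# Caesium-137: physical half-life ~30 years; biological half-life
# ~110 days (varies by individual; treat as indicative only).
t_eff = effective_half_life(30 * 365.25, 110)
print(f"Effective half-life ~{t_eff:.0f} days")
# → Effective half-life ~109 days
```

Despite a 30-year physical half-life, the body clears caesium quickly enough that the effective half-life is about 109 days — one reason the doses from long-lived isotopes can be lower than intuition suggests.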
I think that propagation of opinion and belief rather than evidence-based science is becoming a serious issue for society – you only have to look at the effects of social media and pseudoscience on the uptake of the measles vaccine to see the potential societal effects. Misinterpretation of health risks related to radiation has potential planetary consequences. By rejecting nuclear power as a source of low-carbon energy, because of our lack of perspective on its real risk, we expose ourselves to the much greater health risks posed by climate change which threatens all life on this planet, not just our own species. Professor Gerry Thomas is a Chair in Molecular Pathology at Imperial’s Department of Surgery and Cancer and Director of the Chernobyl Tissue Bank. She was awarded an OBE in 2019 for services to Science and Public Health.
https://wwwf.imperial.ac.uk/blog/imperial-medicine/2019/12/09/radiation-and-human-health-separating-scientific-facts-from-urban-myths/
Multiple reward systems and the prefrontal cortex. Electrical stimulation of the major divisions of the prefrontal cortex, the mediodorsal and sulcal areas, can serve as a reinforcing stimulus. Studies of self-stimulation of the prefrontal cortex have produced behavioral, anatomical and pharmacological evidence that the substrate of these rewarding effects can be dissociated from that subserving self-stimulation of ventral diencephalic sites such as the lateral hypothalamus. Other studies indicate that within the prefrontal cortex itself, self-stimulation of the medial and sulcal divisions can be attributed to dissociable processes. These observations suggest the existence of multiple, largely autonomous prefrontal subsystems involved in reinforcement. This raises the question of the functional significance of such systems, and of their organization. An approach to this problem is to consider the relationship between the behavioral functions of the prefrontal divisions and the characteristics of stimulation-induced reward obtained at each site. Studies of the effects of restricted prefrontal lesions indicate that the medial and sulcal divisions can be dissociated according to their involvement in the control of distinct types of sensory and motor events. Further experiments indicate that damage to each division causes selective deficits in the learning of stimulus-reinforcer and response-reinforcer relations, depending in part on the nature of the reinforcing event. Conditioning experiments further show that the rewarding effects produced by stimulation of these areas are preferentially associated with sensory events which correspond to the functional specialization of each division. These data are interpreted to suggest that different rewarding events and/or different attributes of rewarding stimuli are processed by distinct systems, which are reflected in the organization of dissociable self-stimulation pathways.
Updated 06/16/22 with a new recipe card, additional recipe directions and instructions along with updated images. I have yet to experience a summer living gluten free. In Cape Cod, where we spend the majority of our summer, our diet is equal parts native berries and ice cream. Sort of like the bears in Yosemite. Many a day has been spent wading in the water, collecting crabs or eels, leaving us hot and famished, sandy and sticky–a perfect storm for throwing in the towel on dinner and going out for ice cream instead. Faced with a future without waffle cones, I decided to try to make my own homemade gluten-free waffle cones. I remembered seeing a blog post some time ago by Kathy Strahs of Panini Happy. She made adorable mini cones on her panini press (which is known as a panini machini around these parts). So I set to work adapting her recipe to be gluten free. And I’m thrilled to report that it worked…and very well indeed! There is a bit of a learning curve, to be sure. You need to play around with how much dough to put in the press, how firmly it needs to be pressed down, and how quickly the flat cone needs to be rolled around the mold (quickly…and it is hot). I learned that the thin cookies are easier to roll without cracking. I also discovered that I like the flavor of the darker cones, but that those need to be rolled when they are very hot or they will crack. Once I got the hang of it, I had about a dozen adorable cones, just waiting to be filled with ice cream (homemade, of course). Now I’m not going to go so far as to say that this recipe will save my summer…but it sure will make it a lot more pleasurable. And now that the weather has warmed up, I’m sure the kids won’t wait until summer to put in their request for more gluten-free homemade waffle cones. Here’s An Updated Version of the Recipe – 6/10/2021 So I did eventually get an actual waffle cone iron, and it’s proved to be an excellent investment. 
I hesitate to remove the previous instructions because #1 some folks don’t have a waffle iron and a panini press is an excellent alternative, and #2 look at my baby (the auburn kiddo pictured above)! I never want to remove a time capsule of a post just to update it. She just finished her first year of college and we are celebrating with ice cream and gluten-free homemade waffle cones, naturally! Are you looking for delicious homemade ice cream to scoop into those cones? We’ve got you covered!
- Easy Homemade Fresh Blueberry Ice Cream
- Strawberry Buttermilk Ice Cream
- Dairy Free Chai Spice Coconut Ice Cream
If you make this recipe be sure to drop a comment and star rating below, and tag me on Instagram and use the hashtags #agirldefloured #deflouredrecipes! Thank you!
Homemade Gluten Free Waffle Cones
Equipment
- Waffle Cone Iron (Or Panini Press)
Ingredients
- 1 cup heavy cream
- 1 teaspoon pure vanilla extract
- ¼ teaspoon almond extract
- 1½ cups gluten-free flour blend (I like Bob's Red Mill 1 to 1)
- 1½ cups powdered sugar
- 1 teaspoon xanthan gum (omit if your flour blend contains it)
- pinch of ground cinnamon
- pinch of ground nutmeg
- pinch of sea salt
Instructions
- Whip the cream with a hand mixer or in the bowl of a stand mixer fitted with the paddle attachment until it forms soft peaks. Gently stir in the extracts.
- Sift together the flour, sugar, xanthan gum (if using), cinnamon, nutmeg and salt, and add to the whipped cream. Fold together until a thick batter forms. Cover with plastic and refrigerate for at least 30 minutes.
- Heat a waffle cone iron (or panini press) to 375 degrees (or medium-high). Add a heaping tablespoon of batter (I use a small cookie dough scoop) and press down the lid firmly. Check on the waffle after 90 seconds. It should be light golden brown.
- Remove the cookie to a cool surface, let cool for 10 seconds, then roll into a cone shape. Hold firmly in place for about 10 seconds longer, or until the cone is set. Repeat with the remaining batter.
https://www.agirldefloured.com/homemade-gluten-free-waffle-cones/
Having worked with the National Park Service for 25 years, Otis Halfmoon is a wealth of information. He’s the kind of person you could listen to for days on end and you’d still want to hear more. From battlefields and historic trails to recreation areas and international affairs, we recently chatted with Otis about his National Park Service career and the importance of national parks. What does a national park mean to you? A national park is a place to reflect about one’s self. A place to consider the historical event that took place and/or to see America’s cultural and natural resources. It is truly a place to save for generations not yet born. A national park is also an area to hear the untold stories of various nationalities. In this sense, to enrich an already rich story. They are truly the gems of America. What was your first national park experience and how many national parks have you visited in total? My father was the Tribal Chairman of my Tribe. Senator Frank Church and he were very instrumental in the creation of Nez Perce National Historical Park in 1965. It was created to tell the story of a living culture through their history and today. The very first superintendent was Robert Burns, and he was an excellent ambassador to the Tribe. He gained the trust of my People, including the elders and my Dad. Superintendent Burns’ staff was extremely friendly and worked well with American Indians. Superintendent Burns hired tribal members and listened to the stories of our elders. He was an outstanding ambassador. Since that time, I have visited national parks in the Virgin Islands, Hawaii, Alaska, and all over the lower 48. Why did you decide to work for the National Park Service and how long have you worked for NPS? I always wanted to work for NPS because of the influences of observing the staff at Nez Perce National Historical Park. They were telling the stories of my People, my elders, my home. 
I saw this as an opportunity to let the world know about my Tribe and also what American Indians have contributed to American culture. I saw working for NPS as an opportunity to tell the untold stories that were not in history books; the good, the bad and the ugly of the relationship between the United States and American Indians. I have worked for the National Park Service for 25 years. Where have you worked during your National Park Service career? How does the National Park Service/National Park System help tell American Indian/tribal stories? Back in the late 1960’s and early 1970’s, American Indians wanted these four words known across the country, “We Are Still Here!” While NPS is good at telling our history at places like Nez Perce National Historical Park, Washita Battlefield National Historic Site, Sand Creek Massacre National Historic Site, Little Bighorn Battlefield National Monument and Trail of Tears National Historic Trail, we need to do a better job letting visitors know that the Tribes are still here. We can also talk about Tribes at our natural parks like Yosemite, Yellowstone and Everglades. And, we can include Tribes at Civil War battlefields and Alcatraz. In many respects, the Tribes are cataloged with “Cultural Resources” or “Archaeology and Anthropology,” and the National Park Service needs to revisit the idea of a senior level position for Native American affairs. What is your most favorite memory within the National Park System? I think of two stories, but I will only tell one. But, in a sense, both are similar. The National Park Service can bridge cultures. It can promote healing between races and nationalities. My home reservation was in the path of Lewis and Clark in 1805. The National Park Service took the lead with the Bicentennial of the Corps of Discovery and there was discussion among the communities on how to work with the National Park Service and Tribes.
In many respects, the story of Lewis and Clark is a tribal story, and the public knew this fact. Around many American Indian reservations, and in neighboring communities, racism is alive and well. There is much hatred toward the neighboring Tribes. The Corps of Discovery of 1805 came across many contemporary communities that had such ideas. But, they knew they had to work with the Tribes. To make a long story short, the Bicentennial of the Corps of Discovery with the National Park Service had many Tribal members sharing a meal, with smiles and laughter, with a community that felt hostile toward Indian people. Do you have any tips for families visiting national parks or tips on engaging kids in the park or program experience? The opportunities to create a positive dialogue with American Indians and the National Park Service are many. For instance, working with tribal schools and/or communities and re-introducing them back to their homelands. Many Tribes live on reservations that are not located on their ancestral home. And, within these ancestral home areas are National Park Service lands. To bring tribal youth to these areas can be the beginning of a very positive experience for the tribal youth and the people wearing the green and gray uniform. I was very impressed with Superintendent Robert Burns when I was a kid and I then wanted to wear that same uniform. Just think about how many of these tribal youth will keep that memory of that person wearing the green and gray uniform and someday, they could wear that same uniform. How can people get involved and give back to the national park or program where you work? Our National Park Service is for American citizens and people around the world. We are a nation of many colors, not necessarily a “melting pot,” but rather a “tossed salad” where every ingredient stands proud, where we live together and understand each other’s cultures. 
We need people to support the national parks, help us preserve the resources, and help us use the parks to heal between cultures. That is what people can do. What words of wisdom would you like to share with our national park community? Diversity and inclusion do not happen overnight. We have been working on these areas for a few generations and they will improve through the dedication of National Park Service leadership. There will be a day when the Indigenous people can trust the leadership and the leadership can trust the Indigenous people. It will take time and the youth of today will take the lead. Perhaps someday, we can have a “mosaic” gathering of National Park Service employees. Wilfred Otis Halfmoon, 62, is a member of the Nez Perce Tribe of Idaho. He was born in Lapwai, Idaho on the Nez Perce Indian Reservation. He currently works as the American Indian Services Specialist in the Office of Relevancy, Diversity and Inclusion for the National Park Service. He has a B.A. from Washington State University and is also a veteran of the U.S. Army (Honorable Discharge). He follows and believes in many of the tribal traditions of his Tribe. He is a powwow M.C. and also a Northern Style Traditional Dancer. He resides with his wife, Virginia, at their family home near Espanola, New Mexico.
https://www.nationalparks.org/connect/blog/qa-nps-american-indian-services-specialist-otis-halfmoon
When it rains, stormwater runoff is captured in city storm sewers and eventually empties into rivers, ports and other waterways. Communities use many strategies to prevent, control and treat stormwater. These strategies include reducing impervious surfaces on driveways and sidewalks and creating drought-resistant landscapes to hold and filter water. The objective is to get the land to act like a sponge and soak up the rainwater and return it to the ground rather than divert it to a sewer. Limiting the flow of stormwater reduces the amount of polluted runoff reaching waterways and prevents treatment facilities from being overwhelmed by combined sewer overflows. Now urban designers are looking at ways to design and build cities like sponges for another reason – to capture water to counter drought conditions. A recent Morning Edition segment on National Public Radio (NPR) reported on efforts to respond to water scarcity in Los Angeles by capturing rainwater and turning it into water for drinking and irrigation. Woodbury University’s Arid Lands Institute (ALI) is helping developers in the city find the best spots for water to percolate into the ground. An experimental project in one neighborhood is placing bioswales along sidewalks. These are gullies filled with drought-resistant plants. Water collects in the bioswales and filters down into cisterns that are buried below the street. According to ALI, an education, research and outreach center dedicated to design innovation in water-stressed environments, in an average rain year, a city block puts enough water into the ground for approximately 30 families for a year. Another consideration is the design of roofs. The peaked roof is practical in areas where snow falls. Experts are suggesting that roof designs in arid areas should have a wide mouth that is open to the sky and built to catch rain. Desert cities may be the first “sponge” cities, but others are likely to follow.
The Natural Resources Defense Council’s Climate Change, Water, and Risk report found that 1,100 U.S. counties – one-third of all counties in the lower 48 states – will face higher risks of water shortages by mid-century due to climate change. Cathy Spain is a National League of Cities Service Line Warranty Program Advisor and President of The Spain Group. She works with private companies and nonprofits to design, analyze and promote local government programs. She’s held senior management, research and lobbying positions at the National League of Cities, Government Finance Officers Association, Public Risk Database Project and the New York State Assembly.
https://slwablog.com/2015/02/02/building-sponge-cities/
On January 20, all of the Form II Algebra I students learned the answer to the classic question, “Why do we learn this?” by experiencing the Engineers Teaching Algebra workshop. Former engineer Mark Love returned for his third consecutive year to conduct two 90-minute sessions with the Browning boys. Math Department Chair Michael Klein reports that the boys, using a pencil, paper and a calculator, applied their algebraic problem-solving skills to the installation of traffic lights at an intersection between the entrance to a shopping mall and the main thoroughfare. Variables were defined and simultaneous equations constructed to design a system to optimize traffic flow. Mr. Klein explained, “Experiencing real-world applications of topics and skills learned in the classroom can be powerful motivation for students and augment their engagement in a subject. This workshop has a low barrier to entry, yet high enrichment for the boys, and the feedback is always very positive.”
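The workshop's actual worksheet isn't reproduced here, but a traffic-light problem of that shape reduces to simultaneous equations like the following sketch — the cycle length and flow ratio are invented for illustration:

```python
# Hypothetical version of the workshop problem: split one signal
# cycle of green time between the main road and the mall entrance,
# in proportion to how much traffic each approach carries.
cycle_green = 90          # seconds of usable green per cycle (assumed)
flow_ratio = 3            # main road carries 3x the mall traffic (assumed)

# Two equations in two unknowns:
#   g_main + g_mall = cycle_green
#   g_main = flow_ratio * g_mall
# Substituting the second equation into the first:
g_mall = cycle_green / (flow_ratio + 1)
g_main = flow_ratio * g_mall
print(f"main road green: {g_main:.1f}s, mall green: {g_mall:.1f}s")
# → main road green: 67.5s, mall green: 22.5s
```

Defining the unknowns, writing one equation per constraint, and solving by substitution is exactly the pencil-and-paper algebra the students applied.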
https://www.browning.edu/news/2015/1/21/third-consecutive-year-for-engineers-teaching-algebra-workshop
Fee Setting & Collection Procedure The Catholic Education Commission of Western Australia (CECWA) has a responsibility to make a Catholic education available to all Catholic students whose parents seek a Catholic education for them, insofar as this is possible, while embodying the Church’s preferential option for the poor and disadvantaged (Mandate page 50).¹ Schools have a responsibility to communicate the financial constraints under which they operate to parents enrolling their children in Catholic schools. Parents are required to make a commitment to support Catholic education financially by paying fees. Read the full Fee Setting & Collection Procedure Reporting Cycle Reporting Timeline for parents. Dispute and Complaint Resolution Procedure Catholic Education is committed to ensuring that disputes and complaints are dealt with fairly, objectively and in a timely manner, and that processes reflect the principles of participation, co-responsibility and subsidiarity. Read the full Dispute and Complaint Resolution Procedure Code of Conduct This code has been developed for all Catholic Schools and describes minimum standards of conduct in all behaviour and decision-making to ensure the safety and wellbeing of students. The code applies to staff, students, volunteers, parents and guardians. Parents wanting to work in classrooms this year must complete a ‘Code of Conduct’ course with me please. The following dates for this will be this Tuesday, 21 March and Wednesday, 22 March at 9.00am and again at 3.10pm in our school library. You only need to attend one (morning or afternoon) of the meetings. Student Code of Conduct At St Brigid’s, all students have the right to feel SAFE and HAPPY at school at all times. Read the full Student Code of Conduct Behaviour Management Procedure St Brigid’s School recognises the uniqueness of each individual, created in the image and likeness of God. Our vision statement challenges us to create a learning community based on values.
This is our core belief and permeates all aspects of our curriculum. We recognise the importance of developing and nourishing the whole child, and indeed, each member of our school community. St Brigid’s strives to promote respect for the rights of every person – in a safe, nurturing and respectful environment. We believe each student should be encouraged to develop habits of self-discipline and respect – for self and others. Students are expected to show respect to all staff members and visitors to the school. When parents accept a position for their child they agree to the discipline policy of the school. Read the full Behaviour Management Procedure Attendance & Non-Attendance Procedure Attendance needs to be marked twice per day, using the SEQTA platform. - Morning Attendance needs to be done by 9.15am. - Afternoon Attendance needs to be done when children return from lunch. Read the full Attendance & Non-Attendance Procedure Homework Guidelines St Brigid’s School values the role homework has in a child’s education. Homework helps students by complementing and reinforcing classroom learning, fostering good lifelong learning and study habits, and providing an opportunity for students to be responsible for their own learning. Read the full Homework Guidelines School Excursion Guidelines To contribute to students’ Christian development, the starting point for all curriculum decisions will be the students themselves and their individual needs. Education that seeks to promote integrated personal development relates curriculum content to students’ real life situations (Mandate, para 66). School excursions are opportunities for students to experience learning outside of their normal school environment.
https://stbrigidsbt.wa.edu.au/procedures-guidelines/
Bone Scan
What is a bone scan? A bone scan is a radiology procedure used to look at the skeleton. It's done to find areas of physical and chemical changes in bone. A bone scan may also be used to see if treatment of certain conditions is working. A bone scan is a type of nuclear radiology procedure. This means that a tiny amount of a radioactive substance is used during the scan to assist in the examination of the bones. The radioactive substance, called a radionuclide, or radioactive tracer, may either be increased or decreased in abnormal areas of bone. The radionuclide gives off a type of radiation, called gamma radiation. The gamma radiation is detected by a scanner. This processes the information into a picture of the bones. The areas where the radionuclide collects are called "hot spots." They may be a sign of conditions such as cancerous bone tumors and metastatic bone cancer. This is cancer that has spread from another site, such as the lungs, to the bones. Other conditions include those related to the bone. These include bone infection and bone injury not seen on regular X-rays.
Why might I need a bone scan? Bone scans are most commonly used to look for the spread of cancer. The bone surrounding the cancer will appear as a hot spot on a bone scan. This is due to increased bone activity in the area of the cancer cells. Bone scans may also be used to see how much cancer there is before and after treatment to see if the treatment is working. Other reasons for doing a bone scan may include:
- To assess for bone injury or damage when regular X-rays don't show the problem
- To find fractures that are hard to locate
- To determine the age of fractures
- To detect or assess bone infection (osteomyelitis)
- To look for the cause of unexplained bone pain
- To detect and assess conditions such as arthritis, bone tumors and Paget disease. Paget disease is a bone disorder that often happens to people over age 50. It causes long-term (chronic) inflammation of the bones.
The bones become thickened and soft, and the long bones become curved. Avascular necrosis. This is death of bone tissue not caused by infection. There may be other reasons for your healthcare provider to recommend a bone scan. What are the risks of a bone scan? The amount of the radionuclide injected into your vein for the procedure is small enough that there is no need for precautions against radioactive exposure. The injection of the tracer may cause some slight discomfort. Allergic reactions to the tracer are rare, but may happen. Tell your healthcare provider if you are allergic to or sensitive to medicines, contrast dyes, or latex. Tell your healthcare provider if you are pregnant or think you might be. Tell your healthcare provider if you are breastfeeding. There may be other risks depending on your specific health condition. Be certain your healthcare provider knows about all of your health conditions before the procedure. How do I get ready for a bone scan? Your healthcare provider will explain the procedure to you and you can ask questions. Make a list of questions and any concerns to discuss with your healthcare provider before the procedure. Consider bringing a family member or trusted friend to the medical appointment to help you remember your questions and concerns and to take notes. You will be asked to sign a consent form that gives your permission to do the test. Read the form carefully and ask questions if anything is not clear. Generally, no preparation is needed, such as not eating or not taking medicine, before a bone scan. Tell your healthcare provider, the radiologist, or the technologist if you are allergic to or sensitive to medicines, contrast dyes, or iodine. Tell your healthcare provider if you are pregnant or think you may be. Tell your healthcare provider if you are breastfeeding. Make sure your healthcare provider has a list of all prescribed and over-the-counter medicines, and all herbs, vitamins, and supplements that you are taking. 
Based on your health condition, your healthcare provider may give you other instructions on what to do before the bone scan.

What happens during a bone scan?
A bone scan may be done on an outpatient basis or as part of your stay in a hospital. Procedures may vary depending on your condition and your healthcare provider's practices. Generally, a bone scan follows this process:
- You will be asked to remove any clothing, jewelry, or other objects that may get in the way of the scan.
- A bracelet with your name and an identification number may be put on your wrist. You may get a second bracelet if you have allergies.
- If you are asked to remove your clothing, you will be given a gown to wear.
- An IV (intravenous) line will be started in your hand or arm for injection of the radioactive tracer.
- The tracer will be injected into your vein. The tracer will be allowed to collect in the bone tissue for a period of 1 to 3 hours. You may be allowed to walk around or even leave the facility during this time. You will not be hazardous to other people, as the tracer gives off less radiation than a standard X-ray.
- During the waiting period, you will need to drink 4 to 6 glasses of water to help flush out any tracer that does not collect in the bone tissue.
- If your bone scan is being done to look for bone infection, a set of scans may be done right after the injection of the tracer. Another set of scans will be done after the tracer has been allowed to collect in the bone tissue.
- When the tracer has been allowed to collect in the bone tissue for the right amount of time, you will be asked to empty your bladder. This is because a full bladder can distort the bones of the pelvis, and may become uncomfortable during the scan.
- You will be asked to lie still on a padded scanning table. Any movement may affect the quality of the scan. The scan may take up to an hour to complete.
- The scanner will move slowly over and around you several times as it detects the gamma rays given off by the tracer in the bone tissue.
- You may be repositioned during the scan to get certain views of the bones.
- When the scan has been completed, the IV line will be removed.
It takes about 1 hour to do a full body scan. While the bone scan itself causes no pain, having to lie still for the length of the procedure might be uncomfortable, particularly if you have recently had surgery or an injury. The technologist will use all possible comfort measures and complete the procedure as quickly as possible to reduce any discomfort or pain.

What happens after a bone scan?
Move slowly when getting up from the scanner table to avoid any dizziness or lightheadedness. You will be instructed to drink plenty of fluids and empty your bladder often for 24 to 48 hours after the scan. This will help flush the remaining tracer from your body. The IV site will be checked for any signs of redness or swelling. If you notice any pain, redness, or swelling at the IV site after you go home, you should tell your healthcare provider. This may be a sign of infection or other type of reaction. You should not have any other radionuclide procedures for the next 24 to 48 hours after your bone scan. You may go back to your usual diet and activities, unless your healthcare provider tells you differently. Your healthcare provider may give you other instructions after the procedure, depending on your particular situation.
Next steps

Before you agree to the test or the procedure make sure you know:
- The name of the test or procedure
- The reason you are having the test or procedure
- What results to expect and what they mean
- The risks and benefits of the test or procedure
- What the possible side effects or complications are
- When and where you are to have the test or procedure
- Who will do the test or procedure and what that person's qualifications are
- What would happen if you did not have the test or procedure
- Any alternative tests or procedures to think about
- When and how you will get the results
- Who to call after the test or procedure if you have questions or problems
Although China's unitary fiscal system differs from India's federal structure, the two countries share similar problems of fiscal federalism, such as the presence of multilevel government, bureaucracy, corruption and erosion of capital. In both countries, problems like inefficiency and macroeconomic instability arose at the local level because local governments lacked adequate tax bases for the expenditure responsibilities that had been assigned to them by law or through politico-bureaucratic decisions of the states. Economists felt that greater local autonomy could contribute to the economic development of both China and India. The case study looks at the fiscal systems of India and China, the characteristics of an efficient fiscal system, and the importance of economic decentralisation, and analyses the fiscal federalism initiatives that would improve accountability.
http://ibscdc.org/Case_Studies/Economics/Government%20and%20Business%20Environment/GBE0087IRC.htm
Ever since Sarah was a little girl, her parents knew she was different from the other children. She saw things that no one else could see: figures, lights, and other phenomena. While at first they attributed it to Sarah's childish imagination, it soon became clear that something else was at work when the sights not only persisted, but grew stronger as she approached puberty. Around the same time, mysterious occurrences happened in and around the household: slight changes in furniture arrangement, clothes shifting color without warning. Eventually, the Ackermans grew used to their daughter's mysterious power, adapting their lives around it and doing their best to make sure it remained secret from their neighbors. Around Sarah's eleventh birthday, a powerful wizard known as Testament, the sole remaining member of a holy order, passed through Salem, his path coincidentally taking him near the Ackerman residence. He could feel the energy radiating off of her, even from a distance; he could see Sarah for what she was, a natural wielder of magical power. Without hesitation, he approached the Ackermans about their daughter's condition and let them know that he would have to take Sarah away to hone her abilities - and he did just that, despite their objections. Sarah's parents called the cops and begged them to hurry, telling them that some crazy man was taking their daughter away. By the time they arrived, though, it was too late; Testament was gone, and Sarah with him. The next decade of Sarah's life was miserable. Testament subjected her to a wide variety of grueling training and exercises, shaping her into his apprentice and eventual successor to his position, that of the High Priestess of his order, devoted to protecting the world, no matter the cost. At first, Sarah refused to work with him, ignoring her duties and purposefully doing poorly at her studies.
As she grew older, though, her hate for the man tempered itself into a desire for revenge and, to accomplish that, she would need the skills that he was offering her. With a new goal in mind, Sarah threw herself into her training with a passion, honing her body and her mind into the ultimate magical weapon, learning to channel her power through her body and empower it. Her training culminated in an arcane ritual to turn her body into a font for magical energy - the ritual scarring inspired the name she would eventually choose for herself: Scripture. Scripture would eventually have her revenge against Testament when the man attempted to kill her and some of her allies during a mission to prevent a former High Priest of their holy order from summoning God to Earth. She had broken away from Testament a few months earlier, living in Millennium City and aiding the local superhero population in their war against crime; losing his apprentice, the girl he had spent ten years molding into his perfect successor, combined with the growing mental strain of his holy pact with Elysium, finally pushed him over the edge into a sort of cold insanity. Scripture, along with a small group of friends, retaliated, leading an assault on Testament's holy sanctum that left the ground permanently tainted, and the insane wizard dead. With Testament's shadow no longer hanging over her, Scripture is finally in control of her own life, and has devoted herself to fighting back whatever supernatural force crops up to threaten humanity.

Appearance And Personality
Scripture is a petite twenty-year-old, standing at just five-foot-four and weighing about a hundred and ten pounds, mostly muscle. She has golden blonde hair that she wears down to her shoulders, and large blue eyes that shine ever so slightly with a magical glow.
Her body is well-muscled from years upon years of physical training, and covered from neck to toe in a series of ritual scars, inscribed upon her body to help her focus larger amounts of magical energy into herself. The scars glow with a soft blue-white light whenever she actively makes use of this ability, even through her clothing, no matter how thick it is. Scripture usually presents a kind, formal face to others, treating them respectfully even if they choose not to do the same to her. She has an incredibly formal manner of speech, rarely making use of contractions and avoiding slang; this is all a result of her upbringing, as Testament demanded that she carry herself better than the masses she would be protecting. As such, she can sometimes seem overly formal to the point of being condescending, even if there's no ill intention behind her words or demeanor. In a way, she views herself as a servant to mankind, devoting her life to protecting them from the dangers the average person can't face; she would rather have the people she's watching over like her than view her as rude and unwanted, since it makes the task that much more rewarding. When she's around friends, or by herself, Scripture drops her formal attitude, relaxing considerably; it's almost like she's a different person entirely when she's not putting on a show for the rest of the world. She enjoys relaxing in front of the television and watching awful action movies or cheesy television shows when she's not nose-deep in one book or another. She has a large library, consisting of all sorts of books, from encyclopedias to fantasy novels to children's pop-up books - she's an avid collector and has no problems admitting to it. Most of all, though, Scripture is a compassionate young woman, who truly loves the world and most everyone in it, even some of the villains that she's encountered in her time.
She believes that everyone has some good in them, and that some just need a guiding hand to help set them on the right path. Of course, she's more than willing to be that hand when she honestly believes that she can help someone, even if it's an inconvenience to herself. This can lead to her (sometimes forcefully) pushing herself into someone's life, whether they want her help or not, though everyone (so far) has appreciated her help after the fact.

Allies & Enemies

Ally: Oath
First and foremost among Scripture's allies is a small white dove by the name of 'Oath'. Scripture first met the animal when the bird was suffering from a broken wing; unable to sit by while the bird was in such obvious pain, Scripture picked her up and brought her back home to nurse her back to health. The two bonded greatly over that time and, when Oath's wing was completely healed, Scripture chose to take the animal as her familiar, tying the two together on a spiritual level and providing them both with a variety of benefits. Scripture has gained in Oath a steadfast companion, a dispenser of advice, and perhaps her closest friend. At the same time, Oath has received a small portion of Scripture's magical talent, the ability to speak, enhanced longevity, and increased intelligence. One is never far from the other; if Scripture is somewhere, you can count on Oath to be nearby, if she's not right there with the girl. When at home with little else to do, the two sometimes play board games, with their favorite being checkers. Their relationship doesn't come without some downsides, however; while Oath does have some small magical ability, she's still just a bird and, because of their bond, Scripture can feel whatever pain Oath feels. If the bird were to die, it's possible that the same would happen to Scripture, and vice versa. Oath can also be used to track Scripture through their bond, if she were captured by a villain with the magical know-how to do so.
Allies: Unaffiliated

Powers And Abilities
- Magic:
- Imbued Striking Strength: Scripture uses magic to increase the strength of her blows, allowing her to strike harder than her small frame would normally allow. This enhanced strength does not extend towards lifting or pushing heavy objects. She's able to punch through solid stone and dent metal with ease.
- Imbued Reflexes: Scripture has also enhanced her reflexes and reaction time with her magic, allowing her to avoid attacks with greater ease. Considering how small she is compared to what she normally fights, this is frequently more useful than being able to absorb attacks.
- Orb of Light: Rather self-explanatory, Scripture can summon a small ball of light above the palm of her hand to light up dark areas, with varying degrees of luminosity depending on how much magic she uses to fuel the spell. It can also be burst on command to dazzle foes with flashes of bright light.
- Sixth Sense: This spell allows Scripture to extend her perception to pick out magic and magical beings in the nearby area, usually in the form of a light magical aura surrounding the mystical anomaly in question. Given time to study, she can even pick out the purpose behind more simple enchantments.
- Binding Ritual: Usually employed against extradimensional beings, this ritual allows Scripture to bind the being to a spot or object, as long as it has been properly weakened and she has the time to complete the rather lengthy ritual. From there, she can either choose to release it or banish it to its home dimension, though the latter is difficult if not impossible when it comes to more powerful beings.

Weaknesses And Quirks
- Small: Scripture is a tiny, tiny lady. This provides some disadvantages when dealing with larger, more physically powerful villains, the most obvious being that she can be knocked around with ease. She also can't take advantage of some grapple maneuvers, while she's easier to restrain by similar means.
- Magic Takes Time: With few exceptions, most of Scripture's magic takes time for her to employ, with the more complicated effects requiring a longer time to 'charge', so to speak. She's open to attack while she's performing these spells, and interrupting her during specific ones might have a disastrous effect on her.
- Faith In Humanity: Scripture's faith in humanity can be used against her by more cunning villains; ones who feign a desire to change their ways can cause Scripture to lower her guard, allowing them to get a surprise attack on her.

Skills and Equipment
Scripture acquired a host of skills during her time training under Testament. Most of them are martial in nature, but they also extend to knowledge of multiple magical methods, even if she cannot (or will not) employ most of them. She has also gathered a small collection of magical implements, with various purposes.
- Martial Arts: Scripture's main skill is her knowledge of martial arts, taught to her by one of Testament's associates who had practiced in the mystical city of Shamballah. She's been training intensively since she was ten years old, and she's actually more capable in a hand-to-hand fight than she is with her magic.
- Spellcasting: Scripture knows how to gather magical energy and employ it in a wide variety of spells, as well as recognize when magic is being employed and, going by the manner of casting, can even sometimes predict what spell the opposing caster is going to use.
- Rituals: Scripture's also familiar with various magical rituals and religious traditions, allowing her to better approach the sometimes-esoteric problems that crop up when dealing with supernatural threats. This mostly comes in handy when investigating supernatural crime scenes, as she can deduce the ritual used and what purpose it was used for.
- Ruby Necklace: Gifted to her by Lorekeeper, and worn at almost all times, even when not actively fighting crime.
The necklace serves as a small repository of magic, giving Scripture a well to draw upon when she's exhausted herself. It isn't enough to perform more complicated spells, but it can be the difference between life and death.
- Enchanted Gloves: A pair of gloves that, when magic is channeled through them, gather pure magical energy into Scripture's fists, giving her punches a bit more oomph in the form of bursts of energy upon impact. She only makes use of this against more durable or deadly foes, however, as the effects can be lethal.

Trivia
- Scripture's favorite television show is Brutalitron: Oil And Sand, a violent action show that features gladiatorial robotic combat.
- Scripture rescued a small birdlike lizard from the villain Bazaar, who had brought it from its home dimension to sell it for exorbitant sums of money. She named it Quetzal, after the Mesoamerican deity Quetzalcoatl.

Comments
Comments from Scripture's superhero peers go here!
http://primusdatabase.com/index.php?title=Scripture
Background and objectives: Migrants are considered one of the groups at high risk of developing illness, and mental health is one of the main problems facing them. The present study aims to evaluate migrants' mental health status. Methods: Three hundred migrants settled in Bastam were selected by multistage sampling for this cross-sectional study in 2016. Data were collected with the 28-item General Health Questionnaire (GHQ-28) and a questionnaire covering demographic characteristics. Data analysis was performed with t-tests, ANOVA, and regression analysis using the SPSS 16 software program. The significance level was set at 0.05. Results: The mean score of migrants' mental health was 26.7 ± 0.86. About 44.7% of migrants enjoyed good mental health, while the rest showed some degree of mental disorder. Disorder on the mental health subscales (depression, anxiety, social dysfunction, and physical dysfunction) was seen in 9%, 9.7%, 6.1%, and 7% of migrants, respectively. The results showed a statistically significant association of mental health with gender, marital status, state of residence, education, employment status and type of migration (P<0.05).
https://www.alliedacademies.org/abstract/an-evaluation-of-migrants-mental-health-status-and-affecting-factors-in-bastam-10032.html
---
abstract: 'It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs). But no other MCMC dynamics is understood in this way. In this work, by developing novel concepts, we propose a theoretical framework that recognizes a general MCMC dynamics as the fiber-gradient Hamiltonian flow on the Wasserstein space of a fiber-Riemannian Poisson manifold. The “conservation + convergence” structure of the flow gives a clear picture on the behavior of general MCMC dynamics. The framework also enables ParVI simulation of MCMC dynamics, which enriches the ParVI family with more efficient dynamics, and also adapts ParVI advantages to MCMCs. We develop two ParVI methods for a particular MCMC dynamics and demonstrate the benefits in experiments.'
bibliography:
- 'refs\_thesis.bib'
---

Introduction
============

Dynamics-based Markov chain Monte Carlo methods (MCMCs) in Bayesian inference have drawn great attention because of their wide applicability, efficiency, and scalability for large-scale datasets [@neal2011mcmc; @welling2011bayesian; @chen2014stochastic; @chen2016stochastic; @li2019communication]. They draw samples by simulating a continuous-time *dynamics*, or more precisely, a diffusion process, that keeps the target distribution invariant. However, they often exhibit slow empirical convergence and relatively small effective sample size, due to the positive auto-correlation of the samples. Another type of inference methods, called particle-based variational inference methods (ParVIs), aims to deterministically update samples, or particles as they call them, so that the particle distribution minimizes the KL divergence to the target distribution. They fully exploit the approximation ability of a set of particles by imposing an interaction among them, so they are more particle-efficient.
Their optimization-based principle also makes them converge faster. Stein variational gradient descent (SVGD) [@liu2016stein] is the most famous representative, and the field is under active development both in theory [@liu2017steinflow; @chen2018stein; @chen2018unified; @liu2019understanding_a] and application [@liu2017steinpolicy; @pu2017vae; @zhuo2018message; @yoon2018bayesian]. The study of the relation between the two families starts from their interpretations on the Wasserstein space $\clP(\clM)$ supported on some smooth manifold $\clM$ [@villani2008optimal; @ambrosio2008gradient]. It is defined as the space of distributions $$\begin{aligned} \clP(\clM) := \{ q \mid\; & \text{$q$ is a probability measure on $\clM$ and} \notag\\ & \exists x_0 \in \clM \st \bbE_{q(x)}[d(x,x_0)^2] < \infty \} \label{eqn:p2}\end{aligned}$$ with the well-known Wasserstein distance. It is very general yet still has the necessary structures. With its canonical metric, the gradient flow (the steepest descending curves) of the KL divergence is defined. It is known that the Langevin dynamics (LD) [@langevin1908theorie; @roberts1996exponential], a particular type of dynamics in MCMC, simulates the gradient flow on $\clP(\clM)$ [@jordan1998variational]. Recent analysis reveals that existing ParVIs also simulate the gradient flow [@chen2018unified; @liu2019understanding_a], so they simulate the same dynamics as LD. However, besides LD, there are more types of dynamics in the MCMC field that converge faster and produce more effective samples [@neal2011mcmc; @chen2014stochastic; @ding2014bayesian], but no ParVI yet simulates them. These more general MCMC dynamics have not been recognized as a process on the Wasserstein space $\clP(\clM)$, and this poses an obstacle towards ParVI simulations.
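As a concrete illustration of the ParVI idea (a minimal sketch, not code from any cited work): the SVGD update of @liu2016stein moves each particle along $\phi(x_i) = \frac{1}{n}\sum_j [\, k(x_j, x_i) \nabla \log p(x_j) + \nabla_{x_j} k(x_j, x_i)\,]$, where the first term drives particles towards the target and the second is a repulsive interaction. The RBF kernel with fixed bandwidth, the standard Gaussian target, and all parameter values below are illustrative assumptions:

```python
import numpy as np

def svgd_step(X, grad_logp, bandwidth=1.0, step=0.1):
    """One SVGD update with an RBF kernel (fixed bandwidth; illustrative)."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                  # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff**2, axis=-1) / bandwidth)     # K[j, i] = k(x_j, x_i)
    # grad_{x_j} k(x_j, x_i) = -(2 / bandwidth) (x_j - x_i) k(x_j, x_i)
    gradK = -2.0 / bandwidth * diff * K[:, :, None]
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K.T @ grad_logp(X) + gradK.sum(axis=0)) / n
    return X + step * phi

# Deterministic particle flow towards a standard Gaussian target
grad_logp = lambda X: -X                                  # grad log p for p = N(0, I)
X = np.random.default_rng(0).normal(3.0, 0.5, size=(100, 1))
for _ in range(500):
    X = svgd_step(X, grad_logp)
```

Unlike an MCMC chain, the update is deterministic given the initial particles, and the kernel term makes the particles interact, which is the particle-efficiency mentioned above.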
On the other hand, the convergence behavior of LD becomes clear when viewing LD as the gradient flow of the KL divergence on $\clP(\clM)$ (e.g., @cheng2017convergence), which leads distributions to the target in the steepest way. However, such knowledge on other MCMC dynamics remains obscure, except for a few. In fact, a general MCMC dynamics is only guaranteed to keep the target distribution invariant [@ma2015complete], but does not necessarily drive a distribution towards the target in the steepest way. So it is hard for the gradient flow formulation to cover general MCMC dynamics. In this work, we propose a theoretical framework that gives a unified view of general MCMC dynamics on the Wasserstein space $\clP(\clM)$. We establish the framework by two generalizations over the concept of gradient flow towards a wider coverage: **(a)** we introduce a novel concept called *fiber-Riemannian manifold* $\clM$, where only the Riemannian structure on each fiber (roughly a decomposed submanifold, or a slice of $\clM$) is required, and we develop the novel notion of *fiber-gradient flow* on its Wasserstein space $\clP(\clM)$; **(b)** we also endow a Poisson structure to the manifold $\clM$ and exploit the corresponding Hamiltonian flow on $\clP(\clM)$. Combining both explorations, we define a fiber-Riemannian Poisson (fRP) manifold $\clM$ and a fiber-gradient Hamiltonian (fGH) flow on its Wasserstein space $\clP(\clM)$. We then show that any regular MCMC dynamics is the fGH flow on the Wasserstein space $\clP(\clM)$ of an fRP manifold $\clM$, and there is a correspondence between the dynamics and the structure of the fRP manifold $\clM$. This unified framework gives a clear picture on the behavior of MCMC dynamics. The Hamiltonian flow conserves the KL divergence to the target distribution, while the fiber-gradient flow minimizes it on each fiber, driving each conditional distribution to meet the corresponding conditional target.
The target-invariance requirement is recovered as the case where the fiber-gradient is zero, and moreover, we recognize that the fiber-gradient flow acts as a stabilizing force on each fiber. It enforces convergence fiber-wise, making the dynamics in each fiber robust to simulation with the noisy stochastic gradient, which is crucial for large-scale inference tasks. This generalizes the discussion of @chen2014stochastic and @betancourt2015fundamental on Hamiltonian Monte Carlo (HMC) [@duane1987hybrid; @neal2011mcmc; @betancourt2017conceptual] to general MCMCs. In our framework, different MCMCs correspond to different fiber structures and flow components. They can be categorized into three types, each of which has its particular behavior. We make a unified study of various existing MCMCs under the three types. Our framework also bridges the fields of MCMCs and ParVIs, so that on one hand, the gate to the reservoir of MCMC dynamics is opened to the ParVI family and abundant efficient dynamics are enabled beyond LD, and on the other hand, MCMC dynamics can now be simulated in the ParVI fashion, inheriting advantages like particle-efficiency. To demonstrate this, we develop two ParVI simulation methods for the Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) dynamics [@chen2014stochastic]. We show the merits of using SGHMC dynamics over LD in the ParVI field, and ParVI advantages over conventional stochastic simulation in MCMC. **Related work**   @ma2015complete give a complete recipe for general MCMC dynamics. The recipe guarantees the target-invariance principle, but leaves the behavior of these dynamics unexplained. Recent analysis towards a broader kind of dynamics via the Fokker-Planck equation [@kondratyev2017nonlinear; @bruna2017asymptotic] is still within the gradient flow formulation, thus not general enough.
On connecting MCMC and ParVI, @chen2018unified explore the correspondence between LD and the Wasserstein gradient flow, and develop new implementations for dynamics simulation. However, their consideration is still confined to LD, leaving more general MCMC dynamics untouched. @gallego2018stochastic formulate the dynamics of SVGD as a particular kind of MCMC dynamics, but no existing MCMC dynamics is recognized as a ParVI. More recently, @taghvaei2018accelerated derive an accelerated ParVI that is similar to one of our ParVI simulations of SGHMC. The derivation does not utilize the dynamics and the method connects to SGHMC only algorithmically. Our theory solidifies our ParVI simulations of SGHMC, and enables extensions to more dynamics.

Preliminaries {#sec:pre}
=============

We first introduce the recipe for general MCMC dynamics [@ma2015complete], and prior knowledge on flows on a smooth manifold $\clM$ and its Wasserstein space $\clP(\clM)$. A smooth manifold $\clM$ is a topological space that locally behaves like a Euclidean space. Since the recipe describes a general MCMC dynamics in a Euclidean space $\bbR^M$, it suffices to consider only $\clM$ that is globally diffeomorphic to $\bbR^M$, which is its global coordinate system. For brevity we use the same notation for a point on $\clM$ and its coordinates due to their equivalence. A tangent vector $v$ at $x \in \clM$ can be viewed as the differentiation along the curve that is tangent to $v$ at $x$, so $v$ can be expressed as the combination $v = \sum_{i=1}^M v^i \partial_i$ of the differentiation operators $\{ \partial_i := \frac{\partial}{\partial x^i} \}_{i=1}^M$, which serve as a basis of the tangent space $T_x \clM$ at $x$. The cotangent space $T^*_x \clM$ at $x$ is the dual space of $T_x \clM$, and the cotangent bundle is the union $T^* \! \clM := \bigcup_{x \in \clM} T^*_x \clM$.
We adopt the Einstein convention to omit the summation symbol for a pair of repeated indices in super- and sub-scripts (e.g., $v = v^i \partial_i := \sum_{i=1}^M v^i \partial_i$). We assume the target distribution to be absolutely continuous so that we have its density function $p$.

The Complete Recipe of MCMC Dynamics {#sec:pre-mcmc}
------------------------------------

The fundamental requirement on MCMCs is that the target distribution $p$ is kept stationary under the MCMC dynamics. @ma2015complete give a general recipe for such a dynamics expressed as a diffusion process in a Euclidean space $\bbR^M$: $$\begin{gathered} \ud x = V(x) \dd t + \sqrt{2 D(x)} \dd B_t(x),\\ V^i(x) = \frac{1}{p(x)} \partial_j \Big( p(x) \big( D^{ij}(x) + Q^{ij}(x) \big) \Big), \label{eqn:recipe} \end{gathered}$$ for any positive semi-definite matrix $D_{M \times M}$ (diffusion matrix) and any skew-symmetric matrix $Q_{M \times M}$ (curl matrix), where $B_t(x)$ denotes the standard Brownian motion in $\bbR^M$. The term $V(x) \dd t$ represents a deterministic drift and $\sqrt{2 D(x)} \dd B_t(x)$ a stochastic diffusion. It is also shown that if $D$ is positive definite, $p$ is the unique stationary distribution. Moreover, the recipe is complete, i.e., any diffusion process with $p$ stationary can be cast into this form. The recipe gives a universal view and a unified way to analyze MCMCs. In large-scale Bayesian inference tasks, the stochastic gradient (SG), a noisy estimate of $(\partial_j \log p)$ on a randomly selected data mini-batch, is crucially desired for data scalability. The dynamics is compatible with SG, since the variance of the drift is of higher order than that of the diffusion part [@ma2015complete; @chen2015convergence]. In many MCMC instances, $x = (\theta, r)$ is taken as an augmentation of the target variable $\theta$ by an auxiliary variable $r$.
This could encourage the dynamics to explore a broader area to reduce sample autocorrelation and improve efficiency (e.g., @neal2011mcmc [@ding2014bayesian; @betancourt2017geometric]). Flows on a Manifold {#sec:pre-flow} ------------------- The mathematical concept of the *flow* associated to a vector field $X$ on $\clM$ is a set of curves on $\clM$, $\{ (\varphi_t(x))_t \mid x \in \clM \}$, such that the curve $(\varphi_t(x))_t$ through point $x \in \clM$ satisfies $\varphi_0(x) = x$ and that its tangent vector at $x$, $\frac{\ud}{\ud t} \varphi_t(x) \big|_{t=0}$, coincides with the vector $X(x)$. For any vector field, its flow exists at least locally (@do1992riemannian, Sec. 0.5). We introduce the two particular kinds of flows relevant to our purposes. ### Gradient Flows {#sec:pre-flow-grad} We consider the gradient flow on $\clM$ induced by a *Riemannian structure* $g$ (e.g., @do1992riemannian), which gives an inner product $g_x(\cdot,\cdot)$ in each tangent space $T_x \clM$. Expressed in coordinates, $g_x(u,v) = g_{ij}(x) u^i v^j, \forall u = u^i \partial_i, v = v^i \partial_i \in T_x \clM$, and the matrix $(g_{ij}(x))$ is required to be symmetric and (strictly) positive definite. The *gradient* of a smooth function $f$ on $\clM$ can then be defined as the steepest ascending direction and has the coordinate expression: $$\begin{aligned} \grad f(x) = g^{ij}(x) \partial_j f(x) \partial_i \quad \in T_x \clM, \label{eqn:grad}\end{aligned}$$ where $g^{ij}(x)$ is the entry of the inverse matrix of $(g_{ij}(x))$. It is a vector field and determines a gradient flow. On $\clP(\clM)$, a Riemannian structure is available once $\clM$ itself carries a Riemannian structure $(\clM, g)$ [@otto2001geometry; @villani2008optimal; @ambrosio2008gradient]. The tangent space at $q \in \clP(\clM)$ is recognized as (@villani2008optimal, Thm. 13.8; @ambrosio2008gradient, Thm.
8.3.1): $$\begin{aligned} T_{q} \clP(\clM) = \overline{ \{ \grad f \mid f \in {\clC_c^{\infty}}(\clM) \} }^{\clL^2_{q}(\clM)}, \label{eqn:p2tg}\end{aligned}$$ where ${\clC_c^{\infty}}(\clM)$ is the set of compactly supported smooth functions on $\clM$, $\clL^2_{q}(\clM)$ is the Hilbert space $\{ \text{$X$: vector field on $\clM$} \mid \bbE_q [g(X, X)] < \infty \}$ with inner product $\lrangle{ X, Y }_{\clL^2_{q}} := \bbE_{q(x)} [g_x(X(x), Y(x))]$, and the overline means closure. The tangent space $T_{q} \clP$ inherits an inner product from $\clL^2_{q}(\clM)$, which defines the Riemannian structure on $\clP(\clM)$. It is consistent with the Wasserstein distance [@benamou2000computational]. With this structure, the gradient of the KL divergence $\KL_p(q) := \int_{\clM} \log(q/p) \dd q$ is given explicitly (@villani2008optimal, Formula 15.2, Thm. 23.18): $$\begin{aligned} \grad \KL_p (q) = \grad \log (q / p) \quad \in T_{q} \clP(\clM). \label{eqn:gradkl}\end{aligned}$$ Noting that $T_{q} \clP$ is a linear subspace of the Hilbert space $\clL^2_q(\clM)$, an orthogonal projection $\pi_q: \clL^2_q(\clM) \to T_{q} \clP$ can be uniquely defined. For any $X \in \clL^2_q(\clM)$, $\pi_q(X)$ is the unique vector in $T_{q} \clP$ such that $\divg(q X) = \divg(q \pi_q(X))$ (@ambrosio2008gradient, Lem. 8.4.2), where $\divg$ is the divergence on $\clM$ and $\divg(q X) = \partial_i (q X^i)$ when $q$ is the density w.r.t. the Lebesgue measure of the coordinate space $\bbR^M$. The projection can also be explained with a physical intuition. Let $X \in \clL^2_q(\clM)$ be a vector field on $\clM$, and let its flow act on the random variable $x$ of $q$. The transformed random variable $\varphi_t(x)$ specifies a distribution $q_t$, and a distribution curve $(q_t)_t$ is then induced by $X$. The tangent vector of such $(q_t)_t$ at $q$ is exactly $\pi_q(X)$. 
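This physical intuition can be checked on the simplest example: pushing samples through the exact flow of the vector field $X(x) = -x$ traces out a curve of shrinking Gaussians (a toy sketch with our own numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

# The flow of X(x) = -x is φ_t(x) = e^{-t} x.  Pushing samples of
# q0 = N(0, 4) through the flow gives q_t = N(0, 4 e^{-2t}), i.e. the
# vector field induces a curve (q_t)_t of distributions.
x0 = rng.normal(0.0, 2.0, size=100000)
t = 0.5
xt = np.exp(-t) * x0                   # exact flow map applied to samples
print(round(xt.var(), 2))              # ≈ 4 * exp(-1) ≈ 1.47
```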
### Hamiltonian Flows {#sec:pre-flow-ham} The Hamiltonian flow is an abstraction of the Hamiltonian dynamics in classical mechanics [@marsden2013introduction]. It is defined in association with a *Poisson structure* (@fernandes2014lectures) on a manifold $\clM$, which can be expressed either as a Poisson bracket $\{\cdot,\cdot\}: \clC^{\infty}(\clM) \times \clC^{\infty}(\clM) \to \clC^{\infty}(\clM)$, or equivalently as a bivector field $\beta: T^* \! \clM \times T^* \! \clM \to \clC^{\infty}(\clM)$ via the relation $\beta(\ud f, \ud h) = \{f, h\}$. Expressed in coordinates, $\beta_x(\ud f (x), \ud h (x)) = \beta^{ij}(x) \partial_i f (x) \partial_j h (x)$, where the matrix $(\beta^{ij}(x))$ is required to be skew-symmetric and satisfy: $$\begin{aligned} \beta^{il} \partial_l \beta^{jk} + \beta^{jl} \partial_l \beta^{ki} + \beta^{kl} \partial_l \beta^{ij} = 0, \forall i,j,k. \label{eqn:jacob}\end{aligned}$$ The *Hamiltonian vector field* of a smooth function $f$ on $\clM$ is defined as $X_f(\cdot) := \{\cdot, f\}$, with coordinate expression: $$\begin{aligned} X_f(x) = \beta^{ij}(x) \partial_j f(x) \partial_i \quad \in T_x \clM. \label{eqn:hamfl}\end{aligned}$$ A Hamiltonian flow $\{(\varphi_t(x))_t\}$ is then determined by $X_f$. Its key property is that it conserves $f$: $f(\varphi_t(x))$ is constant w.r.t. $t$. The Hamiltonian flow may be more widely known on a symplectic manifold or more particularly a cotangent bundle (e.g., @da2001lectures; @marsden2013introduction), but these cases are not general enough for our purpose (e.g., they require $\clM$ to be even-dimensional). On $\clP(\clM)$, a Poisson structure can be induced by the one $\{\cdot, \cdot\}_{\clM}$ of $\clM$. Consider linear functions on $\clP(\clM)$ in the form $F_f: q \mapsto \bbE_{q} [f]$ for $f \in {\clC_c^{\infty}}(\clM)$. A Poisson bracket for these linear functions can be defined as (e.g., @lott2008some, Sec. 6; @gangbo2010differential, Sec.
7.2): $$\begin{aligned} \{F_f, F_h\}_{\clP} := F_{\{f, h\}_{\clM}}. \label{eqn:wpois}\end{aligned}$$ This bracket can be extended to any smooth function $F$ by its *linearization* at $q$, which is a linear function $F_f$ such that $\grad F_f(q) \! = \! \grad F(q)$. The extended bracket is then given by $\{F, H\}_{\clP}(q) \! := \! \{F_f, F_h\}_{\clP}(q)$ (@gangbo2010differential, Rem. 7.8), where $F_f$, $F_h$ are the linearizations at $q$ of functions $F$, $H$. The Hamiltonian vector field of $F$ is then identified as (@gangbo2010differential, Sec. 7.2): $$\begin{aligned} \clX_F(q) = \clX_{F_f}(q) = \pi_q(X_f) \quad \in T_{q} \clP(\clM). \label{eqn:wham}\end{aligned}$$ On the same topic, @ambrosio2008hamiltonian study the existence and simulation of the Hamiltonian flow on $\clP(\clM)$ for $\clM$ as a symplectic Euclidean space, and verify the conservation of the Hamiltonian under certain conditions. @gangbo2010differential investigate the Poisson structure on the algebraic dual $({\clC_c^{\infty}}(\clM))^*$, a superset of $\clP(\clM)$, and find that the canonical Poisson structure induced by the Lie structure of ${\clC_c^{\infty}}(\clM)$ coincides with the bracket defined above. Their consideration is also for symplectic Euclidean $\clM$, but the procedures and conclusions can be directly adapted to Riemannian Poisson manifolds. @lott2008some considers the Poisson structure on the space of smooth distributions on a Poisson manifold $\clM$, and finds that it is the restriction of the Poisson structure of $({\clC_c^{\infty}}(\clM))^*$ by @gangbo2010differential. Understanding MCMC Dynamics as Flows on the Wasserstein Space $\clP(\clM)$ {#sec:flow} ========================================================================== This part presents our main discovery that connects MCMC dynamics and flows on the Wasserstein space $\clP(\clM)$.
We first examine the two sides and introduce the necessary new concepts, then propose the unified framework and analyze existing MCMC instances under it. Technical Development {#sec:flow-prep} --------------------- We dig into MCMC and Wasserstein flows and introduce new concepts in preparation for the framework. **On the MCMC side**   Noting that flows on $\clP(\clM)$ are deterministic while MCMCs involve stochastic diffusion, we first reformulate the MCMC dynamics as an equivalent deterministic one for unification. Here we say two dynamics are *equivalent* if they produce the same distribution curve. \[lem:detmcmc\] The MCMC dynamics with symmetric diffusion matrix $D$ is equivalent to the deterministic dynamics in $\bbR^M$: $$\begin{gathered} \ud x = W_t(x) \dd t, \\ (W_t)^i = D^{ij} \partial_j \log (p / q_t) + Q^{ij} \partial_j \log p + \partial_j Q^{ij}, \label{eqn:detmcmc} \end{gathered}$$ where $q_t$ is the distribution density of $x$ at time $t$. The proof is provided in Appendix A.1. For any $q \in \clP(\bbR^M)$, the projected vector field $\pi_{q}(W)$ can be treated as a tangent vector at $q$, so $W$ defines a vector field on $\clP(\bbR^M)$. In this way, we give a first view of an MCMC dynamics as a Wasserstein flow. An equivalent flow with a richer structure will be given in Theorem \[thm:equiv\]. This expression also helps in understanding Barbour’s generator $\clA$ [@barbour1990stein] of an MCMC dynamics, which can be used in Stein’s method [@stein1972bound] of constructing distribution metrics. For instance, the standard Langevin dynamics induces Stein’s operator, which in turn produces a metric called the Stein discrepancy [@gorham2015measuring]; this inspires SVGD, and @liu2018riemannian consider the Riemannian counterparts.
Barbour’s generator maps a function $f \in {\clC_c^{\infty}}(\bbR^M)$ to another $(\clA f) (x) := \frac{\ud}{\ud t} \bbE_{q_t} [f] \big|_{t=0}$, where $(q_t)_t$ obeys the initial condition $q_0 = \delta_x$ (Dirac measure). In terms of the linear function $F_f$ on $\clP(\bbR^M)$, we recognize $(\clA f) (x) = \frac{\ud}{\ud t} F_f (q_t) \big|_{t=0} = \lrangle{ \grad F_f, \pi_{q_0}(W_0) }_{T_{q_0}\clP}$ as the *directional derivative* of $F_f$ along $(q_t)_t$ at $q_0$. This knowledge gives the expression: $$\begin{aligned} \clA f = \frac{1}{p} \partial_j \left[ p \left( D^{ij} + Q^{ij} \right) (\partial_i f) \right], \label{eqn:barbour}\end{aligned}$$ which matches existing results (e.g., @gorham2016measuring, Thm. 2). Details are provided in Appendix A.2. **On the Wasserstein flow side**   We deepen the study of flows on $\clP(\clM)$ when $\clM$ carries both a Riemannian and a Poisson structure.[^1] The gradient of $\KL_p$ was given in Section \[sec:pre-flow-grad\], but its Hamiltonian vector field is not directly available due to its non-linearity. We first develop an explicit expression for it. \[lem:klham\] Let $\beta$ be the bivector field form of a Poisson structure on $\clM$ and $\clP(\clM)$ endowed with the induced Poisson structure described in Section \[sec:pre-flow-ham\]. Then the Hamiltonian vector field of $\KL_p$ on $\clP(\clM)$ is: $$\begin{aligned} \clX_{\KL_p} (q) = \pi_q( X_{\log (q/p)} ) = \pi_q( \beta^{ij} \partial_j \log (q/p) \partial_i ). \label{eqn:klham} \end{aligned}$$ The proof is provided in Appendix A.3. Note that the projection $\pi_q$ does not make much difference, since $X$ and $\pi_q(X)$ produce the same distribution curve through $q$. For a wider coverage of our framework on MCMC dynamics, we introduce a novel concept called the *fiber-Riemannian manifold* and develop the associated objects. This notion generalizes Riemannian manifolds by relaxing the non-degeneracy requirement of the Riemannian structure.
We say that a manifold $\clM$ is a *fiber-Riemannian manifold* if it is a fiber bundle and there is a Riemannian structure on each *fiber*. ![image](fiber_var.png){width=".26\textwidth"} \[fig:fiber\] See Fig. \[fig:fiber\] for an illustration. Roughly, $\clM$ (of dimension $M = m + n$) is a fiber bundle if there are two smooth manifolds $\clM_0$ (of dimension $m$) and $\clF$ (of dimension $n$) and a surjective projection $\varpi: \clM \to \clM_0$ such that $\varpi$ is locally equivalent to the projection on the product space $\clM_0 \times \clF \to \clM_0$ (e.g., @nicolaescu2007lectures, Def. 2.1.21). The space $\clM_0$ is called the base space, and $\clF$ the common fiber. The *fiber* through $x \in \clM$ is defined as the submanifold $\clM_{\varpi(x)} := \varpi^{-1}(\varpi(x))$, which is diffeomorphic to $\clF$. The fiber bundle generalizes the concept of the product space, allowing different structures on different fibers. The coordinate of $\clM$ can be decomposed under this structure: $x = (y, z)$ where $y \in \bbR^m$ is the coordinate of $\clM_0$ and $z \in \bbR^n$ of $\clM_{\varpi(x)}$. Coordinates of points on a fiber share the same $y$ part. We allow $m$ or $n$ to be zero. According to our definition, a fiber-Riemannian manifold furnishes each fiber $\clM_y$ with a Riemannian structure $g_{\clM_y}$, whose coordinate expression is $\big( (g_{\clM_y}\!)_{ab} \big)$ (indices $a, b$ for $z$ run from $1$ to $n$). By restricting a function $f \in \clC^{\infty} (\clM)$ to a fiber $\clM_y$, the structure defines a gradient on the fiber: $\grad_{\clM_y} f(y, z) = (g_{\clM_y}\!)^{ab}(z) \, \partial_{z^b} f(y, z) \, \partial_{z^a}$. Taking the union over all fibers, we have a vector field on the entire manifold $\clM$, which we call the *fiber-gradient* of $f$: $\big( (\operatorname{grad_{fib}}f )^i (x) \big) := \big( 0_m, (g_{\clM_{\varpi(x)}}\!)^{ab}(z) \, \partial_{z^b} f(\varpi(x), z) \big)$.
To express it in a similar way as the gradient, we further define the *fiber-Riemannian structure* $\tgg$ as: $$\begin{aligned} \big( \tgg^{ij} (x) \big)_{M \times M} := \begin{pmatrix} 0_{m \times m} & 0_{m \times n} \\ 0_{n \times m} & \big( (g_{\clM_{\varpi(x)}}\!)^{ab} (z) \big)_{n \times n} \end{pmatrix}, \! \label{eqn:fibriem}\end{aligned}$$ and the fiber-gradient can be expressed as $\operatorname{grad_{fib}}f = \tgg^{ij} \partial_j \! f \partial_i$. Note that $\operatorname{grad_{fib}}f (x)$ is tangent to the fiber $\clM_{\varpi(x)}$ and its flow moves points within each fiber. With $\tgg$, $\clM$ is not a Riemannian manifold for $m \ge 1$ since $(\tgg^{ij})$ is singular. Now we turn to the Wasserstein space. As the fiber structure of $\clP(\clM)$ is hard to find, we consider the space ${{\widetilde \clP}}(\clM) := \Set{q(\cdot|y) \in \clP(\clM_y)}{y \in \clM_0}$. With the projection $q(\cdot|y) \mapsto y$, it is locally equivalent to $\clM_0 \times \clP(\clM_y)$. Each of its fibers $\clP(\clM_y)$ has a Riemannian structure induced by that of $\clM_y$ (Section \[sec:pre-flow-grad\]), so it is a fiber-Riemannian manifold. On the fiber $\clP(\clM_y)$, according to the expression for the gradient of the KL divergence, we have $\grad \KL_{p(\cdot|y)} \! \big( q(\cdot|y) \big) (z) = (g_{\clM_y}\!)^{ab}(z) \partial_{z^b} \! \log \! \frac{q(z|y)}{p(z|y)} \partial_{z^a} = (g_{\clM_y}\!)^{ab}(z) \partial_{z^b} \! \log \frac{q(y,z)}{p(y,z)} \partial_{z^a}$ as a vector field on $\clM_y$. Taking the union over all fibers, we have the fiber-gradient of $\KL_p$ on ${{\widetilde \clP}}(\clM)$ as a vector field on $\clM$: $$\begin{aligned} \operatorname{grad_{fib}}\KL_p (q) (x) = \tgg^{ij}(x) \, \partial_j \log \big( q(x) / p(x) \big) \, \partial_i. \label{eqn:fibgrad}\end{aligned}$$ After projection by $\pi_q$, $\operatorname{grad_{fib}}\KL_p (q)$ is a tangent vector on the Wasserstein space $\clP(\clM)$.
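In coordinates, the fiber-gradient is just the fiber-Riemannian matrix applied to the differential, with the base ($y$) components zeroed out. A tiny numerical sketch (numbers and names are our own illustration):

```python
import numpy as np

# Fiber-gradient in coordinates x = (y, z): with the fiber-Riemannian
# structure tgg = blockdiag(0_m, g_fiber^{-1}), grad_fib f = tgg @ ∂f
# moves points only along the fiber (z) directions.  Toy case m = n = 1.
def fiber_gradient(df, g_fiber_inv, m):
    tgg = np.zeros((m + g_fiber_inv.shape[0],) * 2)
    tgg[m:, m:] = g_fiber_inv
    return tgg @ df

df = np.array([5.0, 3.0])              # (∂_y f, ∂_z f) at some point x
gf = fiber_gradient(df, np.array([[0.5]]), m=1)
print(gf)                              # [0.  1.5]: the y-component is zero
```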
Note that $\clP(\clM)$ is locally equivalent to $\clP(\clM_0) \times {{\widetilde \clP}}(\clM)$ and thus not a fiber-Riemannian manifold in this way, so it is hard to develop the fiber-gradient directly on $\clP(\clM)$. The Unified Framework {#sec:flow-frame} --------------------- We introduce a *regularity* assumption on the MCMC dynamics that our unified framework considers. It is satisfied by almost all existing MCMCs, and its relaxation will be discussed at the end of this section. \[asm:reg\] We call an MCMC dynamics regular if its corresponding matrices $(D, Q)$ in the recipe additionally satisfy: **(a)** the diffusion matrix $D = C$ or $D = 0$ or $D = \begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}$, where $C(x)$ is symmetric positive definite everywhere; **(b)** the curl matrix $Q(x)$ satisfies the Jacobi identity everywhere. ![Illustration of our unified framework (Theorem \[thm:equiv\]): a regular MCMC dynamics is equivalent to the fGH flow $\clW_{\KL_p}$ on the Wasserstein space $\clP(\clM)$ of an fRP manifold $\clM$. The projected fiber-gradient (green solid arrows) and Hamiltonian vector field (red dashed arrows) at $q_t$ on $\clM$ are plotted. ](frame_var.png){width=".45\textwidth"} \[fig:frame\] Now we formally state our unified framework, with an illustration provided in Fig. \[fig:frame\]. \[thm:equiv\] We call $(\clM, \tgg, \beta)$ a fiber-Riemannian Poisson (fRP) manifold, and define the fiber-gradient Hamiltonian (fGH) flow on $\clP(\clM)$ as the flow induced by the vector field: $$\begin{split} \clW_{\KL_p} :=& - \pi( \operatorname{grad_{fib}}\KL_p ) - \clX_{\KL_p}, \\ \clW_{\KL_p} (q) =& \pi_q \big(\, (\tgg^{ij} + \beta^{ij}) \partial_j \log (p/q) \partial_i \,\big).
\label{eqn:fgh} \end{split}$$ Then: **(a)** Any regular MCMC dynamics on $\bbR^M$ targeting $p$ is equivalent to the fGH flow $\clW_{\KL_p}$ on $\clP(\clM)$ for a certain fRP manifold $\clM$; **(b)** Conversely, for any fRP manifold $\clM$, the fGH flow $\clW_{\KL_p}$ on $\clP(\clM)$ is equivalent to a regular MCMC dynamics targeting $p$ in the coordinate space of $\clM$; **(c)** More precisely, in both cases, the coordinate expressions of the fiber-Riemannian structure $\tgg$ and the Poisson structure $\beta$ of $\clM$ coincide respectively with the diffusion matrix $D$ and the curl matrix $Q$ of the regular MCMC dynamics. The idea of the proof is to show $\pi_q (W) = \clW_{\KL_p} (q)$ ($W$ defined in Lemma \[lem:detmcmc\]) at any $q \in \clP(\clM)$, so that the two vector fields produce the same evolution rule of distributions. Proof details are presented in Appendix A.4. This formulation unifies regular MCMC dynamics and flows on the Wasserstein space, and provides a direct explanation of the behavior of general MCMC dynamics. The fundamental requirement on MCMCs that the target distribution $p$ is kept stationary becomes obvious in our framework: $\clW_{\KL_p}(p) = 0$. The Hamiltonian flow $-\clX_{\KL_p}$ conserves $\KL_p$ (the difference from $p$) while encouraging efficient exploration in the sample space, which yields faster convergence and lower autocorrelation [@betancourt2017geometric]. The fiber-gradient flow $-\operatorname{grad_{fib}}\KL_p$ minimizes $\KL_{p(\cdot|y)}$ on each fiber $\clM_y$, driving $q_t(\cdot|y)$ to $p(\cdot|y)$ and enforcing convergence. Specification of this general behavior is discussed below. Existing MCMCs under the Unified Framework {#sec:flow-ins} ------------------------------------------ We now analyze existing MCMC methods in detail under our unified framework. Depending on the diffusion matrix $D$, they can be categorized into three types.
Each type has a particular fiber structure of the corresponding fRP manifold, and thus a particular behavior of the dynamics. **Type 1:** $D$ is non-singular ($m=0$ in the coordinate decomposition $x = (y, z)$).\ In this case, the corresponding $\clM_0$ degenerates and $\clM$ itself is the unique fiber, so $\clM$ is a Riemannian manifold with structure $(g_{ij}) = D^{-1}$. The fiber-gradient flow on ${{\widetilde \clP}}(\clM)$ becomes the gradient flow on $\clP(\clM)$, so: $$\begin{aligned} \clW_{\KL_p} = -\pi( \grad \KL_p ) - \clX_{\KL_p},\end{aligned}$$ which indicates the convergence of the dynamics: the Hamiltonian flow $-\clX_{\KL_p}$ conserves $\KL_p$ while the gradient flow $-\grad \KL_p$ minimizes $\KL_p$ on $\clP(\clM)$ most steeply, so they jointly minimize $\KL_p$ monotonically, leading to the unique minimizer $p$. This matches the conclusion in @ma2015complete. The Langevin dynamics (LD) [@roberts1996exponential], used in both full-batch [@roberts2002langevin] and stochastic gradient (SG) simulation [@welling2011bayesian], falls into this class. Its curl matrix $Q = 0$ makes its fGH flow comprise purely the gradient flow, allowing a rich study of its behavior [@durmus2016high; @cheng2017convergence; @wibisono2018sampling; @bernton2018langevin; @durmus2018analysis]. Its Riemannian version [@girolami2011riemann] chooses $D$ as the inverse Fisher metric so that $\clM$ is the distribution manifold in information geometry [@amari2016information]. @patterson2013stochastic further explore the simulation with SG. **Type 2:** $D = 0$ ($n=0$ in the coordinate decomposition $x = (y, z)$).\ In this case, $\clM_0 = \clM$ and the fibers degenerate. The fGH flow $\clW_{\KL_p}$ comprises purely the Hamiltonian flow $-\clX_{\KL_p}$, which conserves $\KL_p$ and promotes distant exploration. We note that in this case the decrease of $\KL_p$ is not guaranteed, so care must be taken in simulation. Particularly, this type of dynamics cannot be simulated with parallel chains unless samples initially distribute as $p$, so it is not suitable for ParVI simulation.
The lack of a stabilizing force in the dynamics also explains its vulnerability in the face of SG, where the noisy perturbation is uncontrolled. This generalizes the discussion on HMC by @chen2014stochastic and @betancourt2015fundamental to dynamics of this type. The Hamiltonian dynamics (e.g., @marsden2013introduction, Chap. 2) that HMC simulates is a representative of this kind. To sample from a distribution $p(\theta)$ on a manifold $\clS$ of dimension $\ell$, the variable $\theta$ is augmented to $x=(\theta,r)$ with a vector $r \in \bbR^{\ell}$ called momentum. In our framework, this is to take $\clM$ as the cotangent bundle $T^* \! \clS$, whose canonical Poisson structure corresponds to $Q = (\beta^{ij}) = \begin{pmatrix} 0 & -I_{\ell} \\ I_{\ell} & 0 \end{pmatrix}$. A conditional distribution $p(r|\theta)$ is chosen for an augmented target distribution $p(x) = p(\theta) p(r|\theta)$. HMC produces more effective samples than LD with the help of the Hamiltonian flow [@betancourt2017geometric]. As we mentioned, the dynamics of HMC cannot guarantee convergence, so it relies on the *ergodicity* of its simulation for convergence [@livingstone2016geometric; @betancourt2017conceptual]. It is simulated in a deliberate way: the second-order symplectic leap-frog integrator is employed, and $r$ is successively redrawn from $p(r|\theta)$. HMC considers Euclidean $\clS$ and chooses Gaussian $p(r|\theta) = \clN(0, \Sigma)$, while @zhang2016towards take $p(r|\theta)$ as the monomial Gamma distribution. On Riemannian $(\clS, g)$, $p(r|\theta)$ is chosen as $\clN \big( 0, (g_{ij}(\theta)) \big)$, i.e., the standard Gaussian in the cotangent space $T^*_{\theta} \clS$ [@girolami2011riemann]. @byrne2013geodesic simulate the dynamics for manifolds with no global coordinates, and @lan2015markov take the Lagrangian form for better simulation, which uses the velocity (tangent vector) in place of the momentum (covector).
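The leap-frog simulation mentioned above can be sketched in a few lines; for the separable Hamiltonian $H(\theta, r) = -\log p(\theta) + r^2/2$ with $p = \clN(0, 1)$, the symplectic integrator keeps $H$ nearly constant over a long trajectory (step size and length are our own choices):

```python
# Leapfrog integrator for dθ/dt = r, dr/dt = -∇U(θ) with U = -log p;
# being second-order symplectic, it nearly conserves H over many steps.
def leapfrog(theta, r, grad_U, h, L):
    r = r - 0.5 * h * grad_U(theta)        # initial half kick
    for _ in range(L - 1):
        theta = theta + h * r              # full drift
        r = r - h * grad_U(theta)          # full kick
    theta = theta + h * r
    r = r - 0.5 * h * grad_U(theta)        # final half kick
    return theta, r

grad_U = lambda th: th                     # U(θ) = θ²/2 for p = N(0, 1)
H = lambda th, r: 0.5 * th**2 + 0.5 * r**2
th0, r0 = 1.0, 0.5
th, r = leapfrog(th0, r0, grad_U, h=0.01, L=1000)
print(abs(H(th, r) - H(th0, r0)))          # tiny: H is (almost) conserved
```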
**Type 3:** $D \neq 0$ and $D$ is singular ($m, n \ge 1$ in the coordinate decomposition $x = (y, z)$).\ In this case, both $\clM_0$ and the fibers are non-degenerate. The fiber-gradient flow stabilizes the dynamics only in each fiber $\clM_y$, but this is enough for most SG-MCMCs since SG appears only in the fibers. SGHMC [@chen2014stochastic] is the first instance of this type. Similar to the Hamiltonian dynamics, it takes $\clM = T^* \! \clS$ and shares the same $Q$, but its $D_{2\ell \times 2\ell}$ is in the form of Assumption \[asm:reg\](a) with a constant $C_{\ell \times \ell}$, whose inverse $C^{-1}$ defines a Riemannian structure in every fiber $\clM_y$. Viewed in our framework, this makes the fiber bundle structure of $\clM$ coincide with that of $T^* \! \clS$: $\clM_0 = \clS$, $\clM_y = T^*_{\theta} \clS$, and $x = (y,z) = (\theta,r)$. Using Lemma \[lem:detmcmc\], with a specified $p(r|\theta)$, we derive its equivalent deterministic dynamics: $$\begin{aligned} \!\! \begin{cases} \! \frac{\ud \theta}{\ud t} = - \nabla_r \log p(r|\theta), \\ \! \frac{\ud r}{\ud t} = \nabla_{\theta} \! \log p(\theta) \!+\! \nabla_{\theta} \! \log p(r|\theta) \!+ C \nabla_r \! \log \frac{p(r|\theta)}{q(r|\theta)}. \end{cases} \!\!\!\!\!\!\!\! \label{eqn:sghmc-det}\end{aligned}$$ We note that it adds the dynamics $\frac{\ud r}{\ud t} = C \nabla_r \log \frac{p(r|\theta)}{q(r|\theta)}$ to the Hamiltonian dynamics. This added dynamics is essentially the fiber-gradient flow $- (\operatorname{grad_{fib}}\KL_p) (q)$ on $\clP(\clM)$, or the gradient flow $- ( \grad \KL_{p(\cdot|\theta)} ) (q(\cdot|\theta))$ on the fiber $T^*_{\theta} \clS$, which pushes $q(\cdot|\theta)$ towards $p(\cdot|\theta)$. In the presence of SG, the dynamics for $\theta \in \clS$ is unaffected, but for $r \in T^*_{\theta} \clS$ in each fiber, a fluctuation is introduced due to the noisy estimate of $\nabla_{\theta} \log p(\theta)$, which will mislead $q(\cdot|\theta)$.
The fiber-gradient compensates for this by guiding $q(\cdot|\theta)$ to the correct target, making the dynamics robust to SG. Another well-known example of this kind is the SG Nosé-Hoover thermostats (SGNHT) [@ding2014bayesian]. It further augments $(\theta,r)$ with the thermostats $\xi \in \bbR$ to better balance the SG noise. In terms of our framework, the thermostats $\xi$ augments $\clM_0$, and the fiber is the same as in SGHMC. Both SGHMC and SGNHT choose $p(r|\theta) = \clN(0, \Sigma^{-1})$, while the SG monomial Gamma thermostats (SGMGT) [@zhang2017stochastic] uses the monomial Gamma distribution, and @lu2016relativistic choose $p(r|\theta)$ according to a relativistic energy function to adapt the scale in each dimension. Riemannian extensions of SGHMC and SGNHT on $(\clS, g)$ are explored by @ma2015complete and @liu2016stochastic. Viewed in our framework, they induce a Riemannian structure $\big( \! \sqrt{\! (g^{ij}(\theta))} \trs \! C^{-1} \! \sqrt{\! (g^{ij}(\theta))} \big)_{\ell \times \ell}$ in each fiber $\clM_y = T^*_{\theta} \clS$. **Discussions**   Due to the linearity of the equivalent systems w.r.t. $D$, $Q$, or equivalently $(\tgg^{ij})$, $(\beta^{ij})$, MCMC dynamics can be combined. From the analysis above, SGHMC can be seen as the combination of the Hamiltonian dynamics on the cotangent bundle $T^* \! \clS$ and the LD in each fiber (cotangent space $T^*_{\theta} \clS$). As another example, @zhang2017stochastic combine SGMGT of Type 3 with LD of Type 1, creating a Type 1 method that decreases $\KL_p$ on the entire manifold instead of each fiber. This improves the convergence, which matches their empirical observation. Assumption \[asm:reg\](a) is satisfied by all the mentioned MCMC dynamics, and Assumption \[asm:reg\](b) is also satisfied by all except SGNHT-related dynamics.
On this exception, we note from the derivation of Theorem \[thm:equiv\] that Assumption \[asm:reg\](b) is only required for $\clM$, and thus $\clP(\clM)$, to be a Poisson manifold, but is not used in the deduction afterwards. The definition of a Hamiltonian vector field and its key property could also be established without the assumption, so it is possible to extend the framework under a more general mathematical concept that relaxes Assumption \[asm:reg\](b). Assumption \[asm:reg\](a) could hopefully also be relaxed by an invertible transformation from any positive semi-definite $D$ into the required form, effectively converting the dynamics into an equivalent regular one. We leave further investigations as future work. Simulation as ParVIs {#sec:parvi} ==================== The unified framework (Theorem \[thm:equiv\]) recognizes an MCMC dynamics as an fGH flow on the Wasserstein space $\clP(\clM)$ of an fRP manifold $\clM$, with an explicit expression. Lemma \[lem:detmcmc\] gives another equivalent dynamics that leads to the same flow on $\clP(\clM)$. These findings enable us to simulate these flow-based dynamics for an MCMC method, using existing finite-particle flow simulation methods from the ParVI field. This hybrid of ParVI and MCMC largely extends the ParVI family with various dynamics, and also gives advantages like particle-efficiency to MCMCs. We select the SGHMC dynamics as an example and develop its particle-based simulations. With $p(r|\theta) = \clN(0, \Sigma)$ for a constant $\Sigma$, $r$ and $\theta$ become independent, and the deterministic dynamics from Lemma \[lem:detmcmc\] becomes: $$\begin{aligned} \begin{cases} \frac{\ud \theta}{\ud t} = \Sigma^{-1} r, \\ \frac{\ud r}{\ud t} = \nabla_{\theta} \log p(\theta) - C \Sigma^{-1} r - C \nabla_r \log q(r). \end{cases} \label{eqn:psghmcd}\end{aligned}$$ From the other equivalent dynamics given by the framework (Theorem \[thm:equiv\]), the fGH flow for SGHMC is: $$\begin{aligned} \hspace{-10pt} \begin{cases} \!
\frac{\ud \theta}{\ud t} = \Sigma^{-1} r + \nabla_r \log q(r), \\ \! \frac{\ud r}{\ud t} = \nabla_{\theta}\! \log p(\theta) \!-\! C \Sigma^{-1} r \!-\! C \nabla_r\! \log q(r) \!-\! \nabla_{\theta}\! \log q(\theta). \end{cases} \hspace{-37pt} \label{eqn:psghmcf}\end{aligned}$$ The key problem in simulating these flow-based dynamics with finite particles is that the density $q$ is unknown. @liu2019understanding_a give a summary of the solutions in the ParVI field, and find that they are all based on a smoothing treatment, in the form of either smoothing the density or smoothing functions. Here we adopt the Blob method [@chen2018unified], which smooths the density. With a set of particles $\{ r^{(i)} \}_i$ of $q(r)$, Blob makes the following approximation with a kernel function $K_r$ for $r$: $$\begin{aligned} \!\! - \nabla_{\! r} \! \log q( r^{(i)} \!) \! \approx - \frac{\sum_k \!\! \nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(i,j)}} \!-\! \sum_k \! \frac{\nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(j,k)}}, \label{eqn:blob}\end{aligned}$$ where $K_r^{(i,j)} := K_r(r^{(i)}, r^{(j)})$. The approximation for $- \nabla_{\theta} \log q(\theta)$ can be established in a similar way. The vanilla SGHMC simulates the first dynamics with $- C \nabla_{r} \log q(r) \dd t$ replaced by the injected noise $\clN (0, 2 C \dd t)$, but the second dynamics cannot be simulated in a similar stochastic way. More discussions are provided in Appendix B. We call the ParVI simulations of the two dynamics pSGHMC-det and pSGHMC-fGH, respectively (“p” for “particle”). Compared to the vanilla SGHMC, the proposed methods could converge faster and be more particle-efficient due to the deterministic update rule and the explicit repulsive interaction. On the other hand, SGHMC makes a more efficient exploration and converges faster than LD, so our methods could also speed up over Blob. One may note that pSGHMC-det resembles a direct application of stochastic gradient descent with momentum [@sutskever2013importance] to Blob.
We stress that this application is inappropriate since Blob minimizes $\KL_p$ on the infinite-dimensional manifold $\clP(\clM)$ instead of a function on $\clM$. Moreover, the two methods can be enhanced with advanced techniques from the ParVI field, including the HE bandwidth selection method and the acceleration frameworks of @liu2019understanding_a, and other approximations to $-\nabla \log q$ such as SVGD and GFSD/GFSF [@liu2019understanding_a].

Experiments {#sec:exp}
===========

Detailed experimental settings are provided in Appendix C, and code is available at <https://github.com/chang-ml-thu/FGH-flow>.

Synthetic Experiment
--------------------

![Dynamics simulation results. Rows correspond to Blob, SGHMC, pSGHMC-det, pSGHMC-fGH, respectively. All methods adopt the same step size $0.01$, and SGHMC-related methods share the same $\Sigma^{-1} = 1.0$, $C = 0.5$. In each row, figures are plotted for every 300 iterations, and the last one for 10,000 iterations. The HE method [@liu2019understanding_a] is used for bandwidth selection.](./synth_res_banana/export_LD_Blob_he_gd.pdf "fig:"){width=".47\textwidth"} ![](./synth_res_banana/LD_Blob_he_gd_final.pdf "fig:"){width=".105\textwidth"}\
![](./synth_res_banana/SGHMC-1_LD_he_gd.pdf "fig:"){width=".47\textwidth"} ![](./synth_res_banana/SGHMC-1_LD_he_gd_final.pdf "fig:"){width=".105\textwidth"}\
![](./synth_res_banana/SGHMC-1_Blob_he_gd.pdf "fig:"){width=".47\textwidth"} ![](./synth_res_banana/SGHMC-1_Blob_he_gd_final.pdf "fig:"){width=".105\textwidth"}\
![](./synth_res_banana/SGHMC-2_Blob_he_gd.pdf "fig:"){width=".47\textwidth"} ![](./synth_res_banana/SGHMC-2_Blob_he_gd_final.pdf "fig:"){width=".105\textwidth"}\
\[fig:synth\]

We show in Fig. \[fig:synth\] the equivalence of various dynamics simulations and the advantages of pSGHMC-det and pSGHMC-fGH. We first find that all methods eventually produce properly distributed particles, demonstrating their equivalence. Among ParVI methods, both proposed methods (Rows 3, 4) converge faster than Blob (Row 1), indicating the benefit of using SGHMC dynamics over LD: the momentum accumulates in the vertical direction. For the same SGHMC dynamics, our ParVI versions (Rows 3, 4) converge faster than the vanilla stochastic version (Row 2), due to the deterministic update rule. Moreover, pSGHMC-fGH (Row 4) benefits from the HE bandwidth selection method [@liu2019understanding_a] for ParVIs, which makes the particles neatly and regularly aligned and thus more representative of the distribution. pSGHMC-det (Row 3) does not benefit much from HE since the density on particles, $q(\theta)$, is not directly used in the dynamics .

Latent Dirichlet Allocation (LDA)
---------------------------------

\ \[fig:lda\]

We study the advantages of our pSGHMC methods in the real-world task of posterior inference for LDA. We follow the same settings as @liu2019understanding_a and @chen2014stochastic. Fig. \[fig:lda-lc\] shows saliently faster convergence over Blob, benefiting from the use of SGHMC dynamics in the ParVI field. Particle-efficiency is compared in Fig. \[fig:lda-ptcl\], where pSGHMC methods outperform vanilla SGHMC at the same particle size.
This demonstrates the advantage of ParVI simulation of MCMC dynamics, where particle interaction is directly considered to make full use of a set of particles.

Bayesian Neural Networks (BNNs)
-------------------------------

\ \[fig:bnn\]

We investigate our methods in the supervised task of training BNNs. We follow the settings of @chen2014stochastic, with slight modifications explained in the Appendix. Results in Fig. \[fig:bnn\] are consistent with our claim: pSGHMC methods converge faster than Blob due to the use of SGHMC dynamics. Their slightly better particle-efficiency can also be observed.

Conclusions {#sec:conclusion}
===========

We construct a theoretical framework that connects general MCMC dynamics with flows on the Wasserstein space. By introducing novel concepts, we find that a regular MCMC dynamics corresponds to an fGH flow on an fRP manifold. The framework gives a clear picture of the behavior of various MCMC dynamics, and also enables ParVI simulation of MCMC dynamics. We group existing MCMC dynamics into 3 types under the framework and analyse their behavior, and develop two ParVI methods for the SGHMC dynamics. We empirically demonstrate the faster convergence achieved by more general MCMC dynamics for ParVIs, and the particle-efficiency achieved by ParVI simulation for MCMCs.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, 61621136008, 61571261), Beijing NSF Project (No. L172037), DITD Program JCKY2017204B064, Tiangong Institute for Intelligent Computing, Beijing Academy of Artificial Intelligence (BAAI), NVIDIA NVAIL Program, and the projects from Siemens and Intel.

Appendix {#appendix .unnumbered}
========

A. Proofs {#a.-proofs .unnumbered}
---------

### A.1.
Proof of Lemma \[lem:detmcmc\] {#a.1.-proof-of-lemmalemdetmcmc .unnumbered} Given the dynamics , the distribution curve $(q_t)_t$ is governed by the Fokker-Planck equation (e.g., @risken1996fokker): $$\begin{aligned} \partial_t q_t = - \partial_i (q_t V^i) + \partial_i \partial_j (q_t D^{ij}),\end{aligned}$$ which reduces to: $$\begin{aligned} \partial_t q_t =& - (\partial_i q_t) V^i - q_t (\partial_i V^i) \\ & {} + q_t (\partial_i \partial_j D^{ij}) + (\partial_i \partial_j q_t) D^{ij} \\ & {} + (\partial_i q_t) (\partial_j D^{ij}) + (\partial_j q_t) (\partial_i D^{ij}) \\ =& - (\partial_i q_t) (\partial_j D^{ij} + \partial_j Q^{ij}) - (\partial_i q_t) (D^{ij} + Q^{ij}) \frac{\partial_j p}{p} \\ & {} - q_t \partial_i \partial_j (D^{ij} + Q^{ij}) - q_t (\partial_i D^{ij} + \partial_i Q^{ij}) \frac{\partial_j p}{p} \\ & {} - q_t (D^{ij} + Q^{ij}) ( \frac{\partial_i \partial_j p}{p} - \frac{(\partial_i p) (\partial_j p)}{p^2} ) \\ & {} + q_t (\partial_i \partial_j D^{ij}) + (\partial_i \partial_j q_t) D^{ij} \\ & {} + (\partial_i q_t) (\partial_j D^{ij}) + (\partial_j q_t) (\partial_i D^{ij}) \\ =& \hspace{12pt} (\partial_i q_t - \frac{q_t}{p} \partial_i p) (\partial_j D^{ij} - \partial_j Q^{ij}) \\ & {} - \frac{1}{p} (\partial_i q_t) (\partial_j p) (D^{ij} + Q^{ij}) \\ & {} - \frac{q_t}{p} (\partial_i \partial_j p) D^{ij} + \frac{q_t}{p^2} (\partial_i p) (\partial_j p) D^{ij} + (\partial_i \partial_j q_t) D^{ij},\end{aligned}$$ where we have used the symmetry of $D$ and skew-symmetry of $Q$ in the last equality: $(\partial_j p) (\partial_i D^{ij}) = (\partial_i p) (\partial_j D^{ji}) = (\partial_i p) (\partial_j D^{ij})$ and similarly $(\partial_j p) (\partial_i Q^{ij}) = - (\partial_i p) (\partial_j Q^{ij})$; $\partial_i \partial_j Q^{ij} = \partial_j \partial_i Q^{ji} = -\partial_i \partial_j Q^{ij}$ so $\partial_i \partial_j Q^{ij} = 0$ and similarly $(\partial_i p) (\partial_j p) Q^{ij} = 0$, $(\partial_i \partial_j p) Q^{ij} = 0$.
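As a quick symbolic sanity check of the equivalence this proof establishes (in the illustrative special case $M = 1$, constant $D$, $Q = 0$, i.e. Langevin dynamics; all names here are ours, not the paper's), one can verify that the Fokker-Planck right-hand side with drift $V = D\,\partial_x \log p$ coincides with the continuity-equation right-hand side of the deterministic field $W = D\,\partial_x \log(p/q)$:

```python
import sympy as sp

x = sp.symbols('x')
D = sp.Rational(1, 2)            # constant diffusion coefficient
p = sp.exp(-x**2 / 2)            # unnormalized target density
q = sp.exp(-(x - 1)**2 / 4)      # an arbitrary smooth test density

V = D * sp.diff(sp.log(p), x)                       # MCMC drift (Q = 0, D constant)
fp_rhs = -sp.diff(q * V, x) + D * sp.diff(q, x, 2)  # Fokker-Planck RHS

W = D * sp.diff(sp.log(p / q), x)                   # equivalent deterministic field
cont_rhs = -sp.diff(q * W, x)                       # continuity-equation RHS

assert sp.simplify(fp_rhs - cont_rhs) == 0          # same distribution curve
```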
The deterministic dynamics in the theorem $\ud x = W_t(x) \dd t$ with $W_t(x)$ defined in induces the curve: $$\begin{aligned} \partial_t q_t =& - \partial_i (q_t (W_t)^i) \\ =& - (\partial_i q_t) (W_t)^i - q_t (\partial_i (W_t)^i) \\ =& - (\partial_i q_t) D^{ij} (\frac{\partial_j p}{p} - \frac{\partial_j q_t}{q_t}) \\ & {} - (\partial_i q_t) Q^{ij} (\frac{\partial_j p}{p}) - (\partial_i q_t) (\partial_j Q^{ij}) \\ & {} - q_t (\partial_i D^{ij}) (\frac{\partial_j p}{p} - \frac{\partial_j q_t}{q_t}) \\ & {} - q_t D^{ij} (\frac{\partial_i \partial_j p}{p} - \frac{(\partial_j p) (\partial_i p)}{p^2} - \frac{\partial_i \partial_j q_t}{q_t} + \frac{(\partial_j q_t) (\partial_i q_t)}{q_t^2}) \\ & {} - q_t (\partial_i Q^{ij}) \frac{\partial_j p}{p} - q_t Q^{ij} (\frac{\partial_i \partial_j p}{p} - \frac{(\partial_j p) (\partial_i p)}{p^2}) \\ & {} - q_t (\partial_i \partial_j Q^{ij}) \displaybreak \\ =& \hspace{12pt} (\partial_i q_t - \frac{q_t}{p} \partial_i p) (\partial_j D^{ij} - \partial_j Q^{ij}) \\ & {} - \frac{1}{p} (\partial_i q_t) (\partial_j p) (D^{ij} + Q^{ij}) \\ & {} - \frac{q_t}{p} (\partial_i \partial_j p) D^{ij} + \frac{q_t}{p^2} (\partial_i p) (\partial_j p) D^{ij} + (\partial_i \partial_j q_t) D^{ij},\end{aligned}$$ where we have also applied aforementioned properties in the last equality. Now we see that the two dynamics induce the same distribution curve thus they are equivalent. ### A.2. Derivation of {#a.2.-derivation-of .unnumbered} Barbour’s generator is understood as the directional derivative $(\clA f) (x) = \frac{\ud}{\ud t} F_f (q_t) \Big|_{\substack{q_0 = \delta_x \\ t = 0}}$ on $\clP(\bbR^M)$. 
Due to the definition of gradient, this can be written as $(\clA f) (x) = \lrangle{ \grad F_f, \pi_{q_0}( W_0 ) }_{T_{q_0}\clP} = \lrangle{ \grad F_f, W_0 }_{\clL^2_{q_0}}$, where $\pi_{q_0} (W_0)$ is the tangent vector of the distribution curve $(q_t)_t$ at time $0$ due to Lemma \[lem:detmcmc\], and the last equality holds because $\pi_q$ is the orthogonal projection from $\clL^2_q$ to $T_{q} \clP$ and $\grad F_f \in T_{q_0} \clP$ (see Section \[sec:pre-flow-grad\]). Before going on, we first introduce the notion of *weak derivative* (see @nicolaescu2007lectures, Def. 10.2.1) of a distribution. For a distribution with a smooth density function $q$ and a smooth function $f \in {\clC_c^{\infty}}(\bbR^M)$, the rule of integration by parts tells us: $$\begin{aligned} \int_{\bbR^M} f(x) (\partial_i q(x)) \dd x =& \hspace{10pt} \int_{\bbR^M} \partial_i ( f(x) q(x) ) \dd x \\ & {} - \int_{\bbR^M} (\partial_i f(x)) q(x) \dd x.\end{aligned}$$ Due to Gauss’s theorem (see @abraham2012manifolds, Thm. 8.2.9), $\int_{\bbR^M} \partial_i ( f(x) q(x) ) \dd x = \lim_{R \to +\infty} \int_{\bbS^{M-1}(R)} ( f(y) q(y) ) v_i(y) \dd y$, where $\bbS^{M-1}(R)$ is the $(M-1)$-dimensional sphere in $\bbR^M$ with radius $R$, $y \in \bbS^{M-1}$, and $v_i$ is the $i$-th component of the unit normal vector $v$ (pointing outwards) on $\bbS^{M-1}(R)$.
Since $f$ is compactly supported and $\lim_{\|x\| \to +\infty} q(x) = 0$, we have $f(y) q(y) = 0$ for a sufficiently large $R$, so the integral vanishes, and we have: $$\begin{aligned} \int_{\bbR^M} f(x) (\partial_i q(x)) \dd x = - \int_{\bbR^M} (\partial_i f(x)) q(x) \dd x, \\ \forall f \in {\clC_c^{\infty}}(\bbR^M).\end{aligned}$$ We can use this property as the definition of $\partial_i q$ for non-absolutely-continuous distributions, like the Dirac measure $\delta_{x_0}$: $$\begin{aligned} \int_{\bbR^M} f(x) (\partial_i \delta_{x_0}(x)) \dd x :=& - \int_{\bbR^M} (\partial_i f(x)) \delta_{x_0}(x) \dd x \\ =& - \partial_i f(x_0).\end{aligned}$$ Now we begin the derivation. Using the form in and noting $q_0 = \delta_{x_0}$, we have: $$\begin{aligned} & (\clA f) (x_0) = \lrangle{ \grad F_f, W_0 }_{\clL^2_{q_0}} \\ =& \bbE_{q_0(x)} [ \lrangle{ \grad f(x), W_0(x) }_{\bbR^M} ] = \bbE_{q_0} [ (\partial_i f) W_0^i ] \\ =& \bbE_{q_0} \big[ D^{ij} (\partial_i f) \big( \partial_j \log (p/q_0) \big) + Q^{ij} (\partial_i f) (\partial_j \log p) \\ &\hspace{18pt} + (\partial_j Q^{ij}) (\partial_i f) \big] \\ =&\hspace{14pt} \big[ D^{ij} (\partial_i f) (\partial_j \log p) \big] (x_0) \\ & {} - \int_{\bbR^M} \big( D^{ij} (\partial_i f) \big) (x) (\partial_j q_0) (x) \dd x \\ & {} + \big[ Q^{ij} (\partial_i f) (\partial_j \log p) + (\partial_j Q^{ij}) (\partial_i f) \big] (x_0) \\ =& \left[ D^{ij} (\partial_i f) (\partial_j \log p) + \frac{1}{p} \partial_j ( p Q^{ij} ) (\partial_i f) \right] (x_0) \\ & {} + \int_{\bbR^M} \partial_j \big( D^{ij} (\partial_i f) \big) (x) q_0(x) \dd x \\ =& \left[ D^{ij} (\partial_i f) (\partial_j \log p) + \frac{1}{p} \partial_j ( p Q^{ij} ) (\partial_i f) \right] (x_0) \\ & {} + \left[ \partial_j \big( D^{ij} (\partial_i f) \big) \right] (x_0) \\ =& \bigg[ D^{ij} (\partial_i f) (\partial_j \log p) + \frac{1}{p} \partial_j ( p Q^{ij} ) (\partial_i f) \\ &\hspace{10pt} {} + (\partial_j D^{ij}) (\partial_i f) + D^{ij} (\partial_i \partial_j f) \bigg] (x_0) \\
=& \left[ \frac{1}{p} \partial_j \big( p (D^{ij} + Q^{ij}) \big) (\partial_i f) + D^{ij} (\partial_i \partial_j f) \right] (x_0) \\ =& \left[ \frac{1}{p} \partial_j \big( p (D^{ij} + Q^{ij}) \big) (\partial_i f) + (D^{ij} + Q^{ij}) (\partial_i \partial_j f) \right] (x_0) \\ =& \left[ \frac{1}{p} \partial_j \left[ p \left( D^{ij} + Q^{ij} \right) (\partial_i f) \right] \right] (x_0),\end{aligned}$$ where the second-to-last equality holds due to $Q^{ij} (\partial_i \partial_j f) = 0$ from the skew-symmetry of $Q$. This completes the derivation.

### A.3. Proof of Lemma \[lem:klham\] {#a.3.-proof-of-lemmalemklham .unnumbered}

Noting that the KL divergence $\KL_p(q) = \int_{\clM} \log (q/p) \dd q$ is a non-linear function on $\clP(\clM)$, we first need to find its linearization. We fix a point $q_0 \in \clP(\clM)$. gives its gradient at $q_0$: $\grad \KL_p (q_0) = \grad \log (q_0 / p)$. Consider the linear function on $\clP(\clM)$: $$\begin{aligned} F: q \mapsto \int_{\clM} \log(q_0/p) \dd q.\end{aligned}$$ By standard results (see, e.g., @villani2008optimal, Ex. 15.10; @ambrosio2008gradient, Lem. 10.4.1; @santambrogio2017euclidean, Eq. 4.10), its gradient at $q_0$ is given by: $$\begin{aligned} \big( \grad F \big) (q_0) = \grad \left( \left. \frac{\delta F}{\delta q} \right|_{q=q_0} \right),\end{aligned}$$ where $\frac{\delta F}{\delta q}$ is the first functional variation of $F$, which is $\log(q_0/p)$ at $q=q_0$. Now we find that $\grad F (q_0) = \grad \log(q_0/p) = \grad \KL_p (q_0)$, so $F(q)$ is the linearization of $\KL_p(q)$ at $q=q_0$ and the corresponding $f \in {\clC_c^{\infty}}(\clM)$ in is $\log(q_0/p)$. Then we have: $$\begin{aligned} \clX_{\KL_p} (q_0) = \pi_{q_0}( X_{\log (q_0/p)} ).\end{aligned}$$ Referring to , $X_{\log (q_0/p)} = \beta^{ij} \partial_j \log (q_0/p) \partial_i$. Since $q_0$ is arbitrary, this completes the proof.

### A.4.
Proof of Theorem \[thm:equiv\] {#a.4.-proof-of-theoremthmequiv .unnumbered} For a fixed $q \in \clP(\clM)$, two vector fields on $\clM$ produce the same distribution curve if they have the same projection on $T_q \clP(\clM)$, so showing $\pi_q(W) = \clW_{\KL_p}(q)$ is sufficient for showing the equivalence of the two dynamics. This in turn is equivalent to showing $\pi_q(W - \clW_{\KL_p}(q)) = 0_{\clL^2_q}$, or $\divg \big( q (W - \clW_{\KL_p}(q)) \big) = \divg (q 0_{\clL^2_q}) = 0$ (see Section \[sec:pre-flow-grad\]). We first consider case (b): given an fRP manifold $(\clM, \tgg, \beta)$, we define an MCMC dynamics whose diffusion matrix $D$ and curl matrix $Q$ are the coordinate expressions of the fiber-Riemannian structure $(\tgg^{ij})$ and the Poisson structure $(\beta^{ij})$, respectively. It is regular, as Assumption \[asm:reg\] is satisfied due to properties of $(\tgg^{ij})$ (see ) and $(\beta^{ij})$ (see Section \[sec:pre-flow-ham\]). Its equivalent deterministic dynamics at $q$ (see Lemma \[lem:detmcmc\]) is given by: $$\begin{aligned} W^i = \tgg^{ij} \partial_j \log (p/q) + \beta^{ij} \partial_j \log p + \partial_j \beta^{ij}.\end{aligned}$$ So we have: $$\begin{aligned} & \divg \big( q (W - \clW_{\KL_p}(q)) \big) \\ =& \divg \Big( q \big( \tgg^{ij} \partial_j \log (p/q) + \beta^{ij} \partial_j \log p + \partial_j \beta^{ij} \\ & \hspace{32pt} {} - (\tgg^{ij} + \beta^{ij}) \partial_j \log (p/q) \big) \, \partial_i \Big) \\ =& \divg \big( q ( \partial_j \beta^{ij} + \beta^{ij} \partial_j \log q ) \partial_i \big) \\ =& \divg \big( ( q \partial_j \beta^{ij} + \beta^{ij} \partial_j q ) \partial_i \big) \\ =& \divg \big( \partial_j ( q \beta^{ij} ) \partial_i \big) \\ =& \partial_i \partial_j ( q \beta^{ij} ) \\ =& 0,\end{aligned}$$ where the last equality holds due to the skew-symmetry of $(\beta^{ij})$. This shows that the constructed regular MCMC dynamics is equivalent to the fiber-gradient Hamiltonian flow $\clW_{\KL_p}$ on $\clM$.
For case (a), given any regular MCMC dynamics whose matrices $(D, Q)$ satisfy Assumption \[asm:reg\], we can define an fRP manifold $(\clM, \tgg, \beta)$ whose structures are defined in the coordinate space by the matrices: $\tgg^{ij} := D^{ij}$, $\beta^{ij} := Q^{ij}$. Assumption \[asm:reg\] guarantees that such $\tgg$ is a valid fiber-Riemannian structure and $\beta$ a valid Poisson structure. On this constructed manifold, we follow the above procedure to construct a regular MCMC dynamics equivalent to the fGH flow $\clW_{\KL_p}$ on it, whose equivalent deterministic dynamics is: $$\begin{aligned} W^i = D^{ij} \partial_j \log (p/q) + Q^{ij} \partial_j \log p + \partial_j Q^{ij},\end{aligned}$$ which is exactly that of the original MCMC dynamics. This shows that the original regular MCMC dynamics is equivalent to the fGH flow $\clW_{\KL_p}$ on the constructed fRP manifold. Finally, statement (c) is verified in both cases by the introduced construction. This completes the proof.

B. Details on Flow Simulation of SGHMC Dynamics {#b.-details-on-flow-simulation-of-sghmc-dynamics .unnumbered}
-----------------------------------------------

We first introduce more details on the Blob method, referring to the works of @chen2018unified and @liu2019understanding_a. The key problem in simulating a general flow on the Wasserstein space is to estimate the gradient $u(x) := -\nabla \log q(x)$, where $q(x)$ is the distribution corresponding to the current configuration of the particles. The gradient has to be estimated using the finite particles $\{ x^{(i)} \}_{i=1}^N$ distributed according to $q(x)$. The analysis of @liu2019understanding_a finds that an estimation method has to make a smoothing treatment, in the form of either smoothing the density or smoothing functions.
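For concreteness, the Blob estimate  of $-\nabla \log q$ stated in the main text can be sketched in NumPy. An RBF kernel with a fixed bandwidth is assumed here for simplicity (the paper selects bandwidths with the HE method), and all names are illustrative:

```python
import numpy as np

def blob_neg_grad_log_q(x, bandwidth=1.0):
    """Blob estimate of -grad log q at particles x of shape (N, d),
    using the RBF kernel K(a, b) = exp(-||a - b||^2 / (2 h^2))."""
    diff = x[:, None, :] - x[None, :, :]               # x^(i) - x^(j), shape (N, N, d)
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2))
    gradK = -diff / bandwidth ** 2 * K[:, :, None]     # grad_{x^(i)} K^(i,j)
    # first Blob term: -(sum_k grad K^(i,k)) / sum_j K^(i,j)
    term1 = -gradK.sum(axis=1) / K.sum(axis=1, keepdims=True)
    # second Blob term: -sum_k grad K^(i,k) / sum_j K^(j,k)
    term2 = -(gradK / K.sum(axis=0)[None, :, None]).sum(axis=1)
    return term1 + term2
```

Note the sign convention: the two terms together approximate $-\nabla \log q$, so the estimate pushes particles apart; e.g. for two particles at $\pm 1$ the estimated field points outwards at both.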
The Blob method [@chen2018unified] first reformulates $u(x)$ in a variational form: $$\begin{aligned} u(x) = \nabla \left( -\frac{\delta}{\delta q} \bbE_{q} [\log q] \right),\end{aligned}$$ then with a kernel function $K$, it replaces the density in the $\log q$ term with a smoothed one: $$\begin{aligned} u(x) \approx& \nabla \left( -\frac{\delta}{\delta q} \bbE_{q} [\log (q * K)] \right) \\ =& -\nabla \log (q * K) - \nabla \left( \frac{q}{(q * K)} * K \right),\end{aligned}$$ where “\*” denotes convolution. This form has the benefit of allowing the use of the empirical distribution: take $q(x) = \hqq(x) := \frac{1}{N} \sum_{i=1}^N \delta_{x^{(i)}}(x)$, with $\delta_{x^{(i)}}(x)$ denoting the Dirac measure at $x^{(i)}$. The above formulation then becomes: $$\begin{aligned} u(x^{(i)}) =& - \nabla_{\! x} \! \log q( x^{(i)} \!) \! \\ \approx& - \frac{\sum_k \!\! \nabla_{\! x^{(i)}} \! K^{(i,k)}}{\sum_j \! K^{(i,j)}} \!-\! \sum_k \! \frac{\nabla_{\! x^{(i)}} \! K^{(i,k)}}{\sum_j \! K^{(j,k)}},\end{aligned}$$ where $K^{(i,j)} := K(x^{(i)}, x^{(j)})$. This coincides with . The vanilla SGHMC dynamics replaces the dynamics $\ud r = - C \nabla_r \log q(r) \dd t$ in  with $\ud r = \sqrt{2 C} \dd B_t$, or more intuitively $\ud r = \clN(0, 2 C \dd t)$, where $B_t$ denotes the standard Brownian motion. The equivalence between these two dynamics can also be directly derived from the Fokker-Planck equation: the first one produces a curve by $\partial_t q_t = - \partial_i \big( q_t (- C^{ij} \partial_j \log q_t) \big) = \partial_i ( C^{ij} \partial_j q_t )$, and the second one by $\partial_t q_t = \partial_i \partial_j ( q_t C^{ij} ) = \partial_i ( C^{ij} \partial_j q_t )$ for a constant $C$, so the two curves coincide. But dynamics  cannot be simulated in a stochastic way, since $-\nabla_r \log q(r)$ and $-\nabla_{\theta} \log q(\theta)$ are used to update $\theta$ and $r$, respectively; that is, the correspondence between gradients and variables is switched.
In this case, estimating the gradient cannot be avoided. Finally, we write the explicit update rule of the proposed methods using Blob with particles $\{ (\theta, r)^{(i)} \}_{i=1}^N$. Let $K_{\theta}$, $K_r$ be the kernel functions for $\theta$ and $r$, and $\varepsilon$ be a step size. The update rule for pSGHMC-det in becomes: $$\begin{aligned} \begin{cases} \theta^{(i)} \asn \theta^{(i)} + \varepsilon \Sigma^{-1} r^{(i)}, \\ r^{(i)} \asn r^{(i)} + \varepsilon \nabla_{\theta} \log p(\theta^{(i)}) \\ \hspace{20pt} {} - \varepsilon C \Big( \Sigma^{-1} r^{(i)} + \frac{\sum_k \! \nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(i,j)}} + \sum_k \! \frac{\nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(j,k)}} \Big), \end{cases} \label{eqn:psghmcd-upd}\end{aligned}$$ and for pSGHMC-fGH in : $$\begin{aligned} \begin{cases} \theta^{(i)} \asn \theta^{(i)} \!+\! \varepsilon \Big( \Sigma^{-1} r^{(i)} \!+\! \frac{\sum_k \! \nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(i,j)}} \!+\! \sum_k \! \frac{\nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(j,k)}} \Big), \\ r^{(i)} \asn r^{(i)} + \varepsilon \nabla_{\theta} \! \log p( \theta^{(i)} \!) \\ \hspace{20pt} {} - \varepsilon \Big( \frac{\sum_k \! \nabla_{\! \theta^{(i)}} \! K_{\theta}^{(i,k)}}{\sum_j \! K_{\theta}^{(i,j)}} \!+\! \sum_k \! \frac{\nabla_{\! \theta^{(i)}} \! K_{\theta}^{(i,k)}}{\sum_j \! K_{\theta}^{(j,k)}} \Big) \\ \hspace{20pt} {} - \varepsilon C \Big( \Sigma^{-1} r^{(i)} + \frac{\sum_k \! \nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(i,j)}} + \sum_k \! \frac{\nabla_{\! r^{(i)}} \! K_r^{(i,k)}}{\sum_j \! K_r^{(j,k)}} \Big), \end{cases} \label{eqn:psghmcf-upd}\end{aligned}$$ where $K_{\theta}^{(i,j)} := K_{\theta}(\theta^{(i)}, \theta^{(j)})$ and similarly for $K_r^{(i,j)}$. C. Detailed Settings of Experiments {#c.-detailed-settings-of-experiments .unnumbered} ----------------------------------- ### C.1. 
Detailed Settings of the Synthetic Experiment {#c.1.-detailed-settings-of-the-synthetic-experiment .unnumbered} For the random variable $x = (x_1, x_2)$, the target distribution density $p(x)$ is defined by: $$\begin{aligned} \log p(x) =& -0.01 \times \left( \frac{1}{2} (x_1^2 + x_2^2) + \frac{0.8}{2} (25 x_1 + x_2^2)^2 \right) \\ & {} + \const,\end{aligned}$$ which is inspired by the target distribution used in the work of @girolami2011riemann. We use the exact gradient of the log density instead of a stochastic gradient. Fifty particles are used, initialized from $\clN \big( (-2,-7), 0.5^2 I \big)$. The window range is $(-7,3)$ horizontally and $(-9,9)$ vertically. See the caption of Fig. \[fig:synth\] for other settings.

### C.2. Detailed Settings of the LDA Experiment {#c.2.-detailed-settings-of-the-lda-experiment .unnumbered}

We follow the same settings as @ding2014bayesian, which are also adopted in @liu2019understanding_a. The data set is the ICML data set[^2] developed by @ding2014bayesian. We use 90% of the words in each document to train the topic proportions of the document and the remaining 10% for evaluation. A random 80%-20% train-test split of the data set is conducted in each run. For the LDA model, the parameter of the Dirichlet prior of topics is $\alpha = 0.1$. The mean and standard deviation of the Gaussian prior on the topic proportions are $\beta = 0.1$ and $\sigma = 1.0$. The number of topics is 30 and the batch size is fixed at 100. The number of Gibbs sampling steps in each stochastic gradient evaluation is 50. All the inference methods share the same step size $\varepsilon = 1\e{-3}$. SGHMC-related methods (SGHMC, pSGHMC-det and pSGHMC-fGH) share the same parameters $\Sigma^{-1} = 300$ and $C = 0.1$. ParVI methods (Blob, pSGHMC-det and pSGHMC-fGH) use the HE method for kernel bandwidth selection [@liu2019understanding_a]. To match the fashion of ParVI methods, SGHMC is run with parallel chains and the last sample of each chain is collected.

### C.3.
Detailed Settings of the BNN Experiment {#c.3.-detailed-settings-of-the-bnn-experiment .unnumbered} We use a 784-100-10 feedforward neural network with the sigmoid activation function. The batch size is 500. SGHMC, pSGHMC-det and pSGHMC-fGH share the same parameters $\varepsilon = 5\e{-5}$, $\Sigma^{-1} = 1.0$ and $C = 1.0$, while Blob uses $\varepsilon = 5 \e{-8}$ (a larger $\varepsilon$ leads to divergence). For the ParVI methods, we find that the median method and the HE method for bandwidth selection perform similarly, and we adopt the median method for its faster implementation.

[^1]: We do not consider the compatibility of the Riemannian and Poisson structures, so it is different from a Kähler manifold.

[^2]: <https://cse.buffalo.edu/~changyou/code/SGNHT.zip>