What is the main purpose of your study?
This paper investigates the ways in which commemorations produced after disasters remember the locations, events, and lives of those impacted. By commemoration, we mean any object or act that helps people remember after a disaster: ceremonies, memorials, statues, signage, and so on. We specifically consider how commemorations change the physical and social characteristics of the impacted location, in turn shaping long-term community recovery.

What are the practical, day-to-day implications of your study?
Disaster commemoration during the recovery process shapes the ways in which communities and individuals discuss, think about, and remember tragic events. Ideas about community identity and belonging forged through commemoration guide public sentiment on what should be rebuilt, which aspects of community history matter, and who has a place in the community's future.

How does your study relate to other work on the subject?
We engage with prominent works from the cultural geography tradition on commemoration. However, our study departs from previous work by focusing on memorial texts (i.e., any text that remembers) produced during the long-term recovery process rather than solely on those designated as disaster memorials. Hence, our sample is defined by time rather than by subject.

What are two or three interesting findings that come from your study?
We found that while some memorial texts focused on the disaster event itself (event-based commemoration), other memorials focused on the characteristics of the place where the disaster happened. This place-based commemoration often reflected a shared, yet reimagined, history of the event from the community's perspective. This type of shared, collective memory is powerful: it can both unify and fracture communities during the disaster recovery process.

What might be some of the theoretical implications of this study?
We identify that both event-based and place-based commemoration occur during disaster recovery. By providing a vocabulary and framework, we can consider how memorials that focus on survivor memories through event-based commemoration impel community recovery differently than memorials that reconstruct imagined pasts through place-based commemoration.

How does your research help us think about Geography?
The concept of place is foundational in geographic scholarship. Our work offers an opportunity to consider the role of place in long-term disaster recovery and specifically considers how commemoration reimagines and reinvents places that have experienced disasters.
https://ubique.americangeo.org/article-preview/patterns-of-disaster-commemoration-in-long-term-recovery/
The coronavirus pandemic has exposed South Australia's reliance on overseas migration to fill jobs, slow the ageing of the state's population and drive the economy, according to the University of Adelaide's SA Centre for Economic Studies. In a series of research papers released today, SACES found that the net migration of people moving to South Australia fell from 16,630 in 2019 to 4410 last year, largely due to international border closures. The pandemic-induced border closures also prevented international students from entering South Australia, a number of whom traditionally stay on as residents and workers after graduation. Student visas declined by about 55 per cent in 2020, and the ripple effect on students who would otherwise now be in years two and three of their visas will be felt into the future. The State Government has spruiked the turnaround in net interstate migration into the positive for the first time in 30 years. But this gain – of just 98 people in 2020 – will take years to make up for the 8000 mostly young South Australians lost to interstate moves in 2018 and 2019, and pales in significance compared to the thousands of mainly skilled migrants usually brought in from overseas each year. Net overseas migration in South Australia is dominated by people aged 25 to 44 years, meaning that their absence has accelerated the ageing of the state's population. Fertility rates have been falling in SA since 2008, with the ABS predicting the number of coupled families without children will exceed the number of families with children sometime between 2023 and 2029. The flipside of this is that the ageing population is likely to create plenty of jobs. SACES estimates that from 2010 to 2020, for each 1000-person increase in the population aged 65 and over, an average of 240 jobs were created across the five sectors most closely related to health. "The outlook is for growth of at least another 30,000 jobs in health, aged and disability care, and associated personal and other services, with increased qualifications and higher pay rates, by 2030," it said. SACES Executive Director, Associate Professor Michael O'Neil, said the SACES analysis across the three papers examined the nexus between population movements and the state's economic health. The research was undertaken on behalf of the SACES Independent Research Fund, a group of key private and public sector individuals that sits under the umbrella of SACES. O'Neil said SACES' calls to reopen the Australian economy to international migrants came with caution. "It is important to get the balance right between encouraging overseas migration and supporting our young people because skilled migrants tend to put downward pressure on wages and reduce the responsibility of local businesses to provide training to local staff," he said. "As the world emerges from the COVID crisis, and people start moving again, South Australia's enviable position in terms of healthcare, education, housing value and lifestyle should prove very attractive to potential interstate and overseas migrants." While that sounds promising for the future, employers struggling to find and keep skilled staff are looking for more immediate answers. Unemployment figures released by the ABS last week showed SA's jobless rate in August was 5 per cent, slightly up on July's 10-year low figure of 4.7 per cent.
However, more detailed ABS labour force data released yesterday shows that unemployment in the Central Adelaide and Hills region, which is predominantly made up of skilled workers in the eastern suburbs, was just 2.4 per cent in August. This is compared with 5.7 per cent in the northern and southern suburbs, where there are traditionally more unskilled workers. In country areas, the skills divide appears even greater, with unemployment in the Outback region that takes in the north of the state, Eyre Peninsula and the regional cities of Port Augusta, Whyalla and Port Lincoln hitting 10 per cent in August. SACES says increasing the participation rate could offset labour shortages in the short to medium term, particularly while net overseas migration remains low over the next few years. But it says population growth in South Australia will remain reliant on the resumption of overseas migration, which was the largest source of overall population gains in the 15 years before the pandemic. "Its importance will only grow in the future given the expectation that gains from net natural increase will decline and eventually reverse as deaths rise and births remain broadly static or decline," the report concludes. "In the interim, there needs to be a much closer examination of employment prospects for young people to support retention of the young and qualified in South Australia and much greater attention to reducing the rate of long-term unemployment for all age groups." O'Neil said there were a number of policy initiatives that could encourage population and employment growth as the state emerged from the economic and social challenges imposed by the coronavirus pandemic. "Reversing and maintaining the current trend in net interstate migration is a key public policy priority that can be partially achieved through much faster job creation in high value-added sectors. Paid commencement internships and payroll tax support are possible policy levers in this regard," he said. O'Neil said reducing the impact of the earned income test on Age Pensions could also ease labour shortages. South Australia's total population in 2031 is estimated to be 1,922,855, up from 1,720,000 in 2017.
https://indaily.com.au/news/2021/09/24/closed-borders-put-squeeze-on-labour-market-sa-population-growth/
A: This is an interesting question. I contacted the folks at Pampers and Huggies to get to the "bottom" of this. Pampers let me know that their diapers do not have an expiration date. They said that the only thing that may happen over time is a possible discoloration to a light yellow, but that performance does not go down. Huggies responded that there is no shelf life or expiration date on their diapers.
https://www.parents.com/advice/babies/diapering/do-diapers-expire/
p.m. on 20 April 2021 to 4 p.m. on 20 May 2021. On 23 October 2020, the Company announced that, together with certain of its subsidiaries (the "Note Parties"), the Company had entered into a Forbearance Agreement with the AHG. The forbearance period initially expired at 4 p.m. GMT on 20 December 2020 (the "Initial Expiration Date"), at which time the Initial Expiration Date automatically extended to 4 p.m. GMT on 18 February 2021 and on that date was automatically extended again to 4 p.m. GMT on 20 March 2021. On 20 March 2021, approval was received from all the members of the AHG to extend the expiry of the Forbearance Agreement from 4 p.m. on 20 March 2021 to 4 p.m. on 20 April 2021. The restructuring work is progressing, and the Company will make further announcements when appropriate. LEI: 2138007VWEP4MM3J8B29 Further information For further information please visit www.nog.co.uk Further enquiries Martin Cocker - Chief Financial Officer [email protected] Instinctif Partners - UK Mark Garraway Sarah Hourahane Galyna Kulachek + 44 (0) 207 457 2020 [email protected] Notifying person Thomas Hartnett Company Secretary About Nostrum Oil & Gas Nostrum Oil & Gas PLC is an independent oil and gas company currently engaging in the production, development and exploration of oil and gas in the pre-Caspian Basin. Its shares are listed on the London Stock Exchange (ticker symbol: NOG). The principal producing asset of Nostrum Oil & Gas PLC is the Chinarevskoye field, in which it holds a 100% interest and is the operator through its wholly-owned subsidiary Zhaikmunai LLP. In addition, Nostrum Oil & Gas holds a 100% interest in and is the operator of the Rostoshinskoye oil and gas field through the same subsidiary. Located in the pre-Caspian basin to the north-west of Uralsk, this exploration and development field is situated approximately 100 kilometres from the Chinarevskoye field. Forward-Looking Statements Some of the statements in this document are forward-looking. Forward-looking statements include statements regarding the intent, belief and current expectations of the Company or its officers with respect to various matters. When used in this document, the words "expects", "believes", "anticipates", "plans", "may", "will", "should" and similar expressions, and the negatives thereof, are intended to identify forward-looking statements. Such statements are not promises nor guarantees and are subject to risks and uncertainties that could cause actual outcomes to differ materially from those suggested by any such statements. No part of this announcement constitutes, or shall be taken to constitute, an invitation or inducement to invest in the Company or any other entity, and shareholders of the Company are cautioned not to place undue reliance on the forward-looking statements. Save as required by the Listing Rules and applicable law, the Company does not undertake to update or change any forward-looking statements to reflect events occurring after the date of this announcement. This information is provided by RNS, the news service of the London Stock Exchange. RNS is approved by the Financial Conduct Authority to act as a Primary Information Provider in the United Kingdom. Terms and conditions relating to the use and distribution of this information may apply. For further information, please contact [email protected] or visit www.rns.com. 
https://ca.advfn.com/stock-market/london/NOG/stock-news/84864241/nostrum-oil-gas-plc-extension-of-forbearance-agr
Paycor will hire approximately 250 new Associates in the next 12 months. To accommodate a portion of that growth, the company leased 39,400 square feet of space at the Central Parke Office Building in Norwood. By the spring of 2017, Central Parke will be home to 280 sales team members. Paycor is experiencing outsized growth in its Frisco tech hub as well. In April, the company expanded its presence at Hall Office Park from an existing 11,878 square feet to 37,199 square feet. This expansion will grow Paycor's Frisco staff to 180 Associates in the next one to two years. In June, the company added 7,350 square feet of office space in Fort Collins, Colorado, which will be home to 40 Newton Software team members. Paycor acquired Newton in December 2015. A recent recipient of the Cincinnati Business Courier Fast 55 award, Paycor is growing inside and outside of its hometown, providing hundreds of jobs and boosting the presence of HR technology. The company's growth reflects that U.S. businesses are relying on partners like Paycor at an increasing rate, driven by growing compliance demands and the importance of technology to today's workers. Paycor's solutions and continued focus on customer service are meeting market needs. Prospective employees looking for a career in Client Services are encouraged to visit Paycor's open house on July 25.
https://www.paycor.com/press-releases/paycor-hiring-more-than-250-in-next-12-months
Active for more than three decades, German experimental producer Asmus Tietchens has a huge discography and his work has often been compared to that of pioneering composers such as Stockhausen. Despite an austere and forbidding reputation, his music spans a much wider range than the glacial soundscapes he's most often associated with. He lurks on the outer edges of contemporary composition but also of the industrial scene, and his music is marked by the unease and uncanniness of the strange hinterland he has created for himself. This new release is a cannibalisation, not just of his old material but of what he calls "recycling of recyclings of recyclings…". The obvious question this poses for an artist with such a large and often very impressive back catalogue is when (or whether) to stop a process that could potentially carry on for the rest of his career. The results presented here seem to raise more questions than they answer. Does the strategy of infinite recycling accrue a new aesthetic or degrade the existing one? Can the sounds rise above the process to be more than a technical demonstration of a formalist exercise? Certainly, any musicologists with a deep knowledge of his work will be kept busy trying to identify sources and techniques. Knowing that these are recycled works places a stronger demand on them to prove themselves aesthetically: do they function autonomously, untethered from the works that generated them? Tietchens claims that "To be ready for active listening (opposite to passive hearing) is the basic requirement for analytical perception," which again raises the question of whether this is more than just a test of active listening, of knowledge of his back catalogue, of the listener's playback equipment, or – in the worst cases – their patience. There is no doubt that this subtle, esoteric work is extremely well-crafted and quite possibly a good way to re-set and re-condition hearing saturated by louder forms, but is it more than that? Once more, we have to ask if it is more than a possibly therapeutic exercise in restoring a listener's attention to detail. Perhaps by this point the alert reader will have noted that the sounds themselves have not yet been discussed and begun to intuit why this might be, yet they might also be (partially) mistaken. Like many classic Tietchens works, FmF4 is a watery grave of sounds lost in a cold, shimmering haze. Advancing and receding sonic currents trouble the wreckage of these half-shipwrecked drips and drones. Initially less compelling than much of his work, the watery motion slowly imbues an uncanny aesthetic that is more than the sum of its parts, if still a wreck of his previous works. The aesthetic power it does possess is the fascinating power of a storm-wrecked ship run aground and at the mercy of the tides. L10RC is more like an archaeological excavation site by night, with a pale moon revealing new layers of lost and recovered fragments among the exposed strata. The issue here is that most of this eerie track is marked by a repeated owl-like hooting that seems intrusive and unsubtle compared to the shifting drones in the background. It's only belatedly, in the final section when the hooting ceases, that the more interesting details beneath are given the exposure they deserve. Finally, L10RB shifts the mood. It's an unusually affirmative piece for Tietchens, with rippling tones that might once have been gamelan-like.
There’s a trace of Harold Budd’s work in the brighter tones, but overall it’s much more opaque, existing for itself without a defined “point” and fading without leaving a strong impression.
https://www.trebuchet-magazine.com/a-cannibalisation-not-just-of-his-old-material-but-what-he-calls-recycling-of-recyclings-of-recyclings-asmus-tietchens-fahl/
I am using SU2 in order to test a shape optimization loop, and I need to compute the wave drag and the viscous drag separately, as well as friction drag and pressure drag. But I couldn't find the SU2 capabilities to do so, therefore all I've got so far is the total drag. I would like to know if SU2 provides us with these values, or if we should evaluate them from the flow solution using our own methods. Thanks in advance, Andre

Hi Andre, It's great to hear that you're working with SU2 for shape design. Unfortunately, SU2 does not output a drag breakdown, as it currently only computes the total integrated drag on the specified surface. There are certainly methods for doing so, and if you have some methods of your own, might you be interested in including/sharing these with the SU2 code? Thank you for the feedback, Tom

Hi Tom, Thanks a lot for your reply. As soon as we develop some methods to do so we will be able to share them.
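For readers who want a rough breakdown before such a feature exists in SU2 itself, one option is to post-process the surface solution by hand: pressure drag comes from integrating the pressure coefficient over the surface normals, and friction drag from integrating the skin-friction vector, both projected onto the freestream direction. The Python sketch below assumes a CSV export with per-element normals, areas, pressure coefficient and skin-friction components; the column names (nx, ny, dA, cp, cfx, cfy) are placeholders rather than SU2's actual output headers, and the results still need to follow your reference-area convention.

```python
# Hedged sketch: split drag into pressure and friction parts from a
# surface export. Column names and file layout are assumptions, not
# SU2's actual output format -- adapt them to your solver's files.
import numpy as np
import pandas as pd

def drag_breakdown(surface_csv, alpha_deg=0.0, ref_area=1.0):
    """Return (Cd_pressure, Cd_friction) from a 2D surface data file.

    Assumes per-element columns: outward unit normal (nx, ny), element
    area dA, pressure coefficient cp, skin-friction components cfx, cfy.
    """
    df = pd.read_csv(surface_csv)
    a = np.radians(alpha_deg)
    dx, dy = np.cos(a), np.sin(a)              # freestream (drag) direction

    # Pressure drag: -cp * (n . d) integrated over the surface
    cd_p = float((-df["cp"] * (df["nx"] * dx + df["ny"] * dy) * df["dA"]).sum()) / ref_area
    # Friction drag: (cf . d) integrated over the surface
    cd_f = float(((df["cfx"] * dx + df["cfy"] * dy) * df["dA"]).sum()) / ref_area
    return cd_p, cd_f

# Hypothetical usage:
# cd_p, cd_f = drag_breakdown("surface_flow.csv", alpha_deg=2.0, ref_area=1.0)
# print(f"Cd_pressure = {cd_p:.5f}, Cd_friction = {cd_f:.5f}")
```

Note that wave drag is harder to isolate from a simple surface integral alone; it is usually extracted with far-field or mid-field decomposition methods rather than near-field integration.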
https://www.cfd-online.com/Forums/su2-shape-design/119856-separated-drag-contributions-print.html
Data is a business's best friend. But it can also be a nightmare, especially if it is sub-standard. Fortunately, there are plenty of strategies that can be used to improve data quality and build solid data practice into the fabric of your business.

So, what is data quality? Data quality is the ability of a dataset to serve its purpose. Put simply: if your data is low quality, it won't be as helpful for whatever you want to do with it. This can make the difference between keeping your head above water or sinking. Why? Because poor quality data leads to poor finances.

Data Quality Metrics - Part 1
These six simple criteria can help your company measure data quality. In an ideal world, they are all as important as one another. However, a lot depends on what you intend to use data for, as this can influence which criteria you prioritise. All of the criteria are equal, but some are more equal than others.

1. Homogeneity
Homogeneity is vital. It means ensuring data can be compared and contrasted across different data sets. This can be achieved through universality and consistency. Keep it simple. Keep things clear.

2. Accuracy
Accuracy refers to whether your data is correct. Has it come from the source, and can you prove that it hasn't been changed? If the answer to this question is yes, then you are well on your way to data accuracy.

3. Validity
Not to be confused with accuracy, validity can be used to assess whether your data is the type you wanted, without bias. To work this out, ask yourself: does the data reflect what you want it to reflect? Is it complete, reasonable, and sound?

Data Quality Metrics - Part 2

4. Uniqueness
Uniqueness is knowing how to differentiate one record from another. A data set scores well on uniqueness when duplicate records can be identified and each record represents a single, distinct instance of a business concept that is recognised in the business glossary.

5. Opportuneness
Opportuneness, or timeliness, concerns the age of your data. Older data is more likely to be less relevant to your business as it grows and adapts. Ensure data is updated and monitored regularly. Do not let it languish.

6. Completeness
The final, perhaps most fundamental, criterion asks if data is complete. Does a data set have everything it needs? Where are the holes and how can they be filled?

Whatever path you choose to improve the quality of your data, you must be sure that you also measure the effectiveness of your efforts. This will help you realise whether that time and money is paying off. Find out more about data quality in our Data Quality Course and learn how to implement practices in your business.
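As a concrete illustration, the short Python sketch below (using pandas; the column names and the validity rule are invented for the example) turns three of these criteria into simple 0-to-1 scores for completeness, uniqueness and validity. Scores like these can be recomputed on a schedule so you can tell whether your data quality efforts are actually paying off.

```python
# A minimal sketch of scoring a dataset against several of the criteria
# above. The example columns and the age rule are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame, key_column: str, validity_rules: dict) -> dict:
    """Return simple 0-1 scores for completeness, uniqueness and validity."""
    report = {
        # Completeness: share of cells that are not missing
        "completeness": 1.0 - df.isna().sum().sum() / df.size,
        # Uniqueness: share of rows whose key value appears only once
        "uniqueness": 1.0 - df.duplicated(subset=key_column).mean(),
    }
    # Validity: share of non-missing values in each column passing its rule
    for column, rule in validity_rules.items():
        report[f"validity_{column}"] = df[column].dropna().apply(rule).mean()
    return report

# Illustrative usage with made-up data:
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "age": [34, None, 27, 130],
})
print(quality_report(
    customers,
    key_column="customer_id",
    validity_rules={"age": lambda a: 0 <= a <= 120},
))
```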
https://robinsonryan.com/what-are-the-key-metrics-to-measure-data-quality/
On June 25, 2019, Governor Pritzker signed the Cannabis Regulation and Tax Act (CRTA) which legalizes the sale, possession, and use of cannabis for recreational purposes in limited quantities by persons 21 years and older, beginning January 1, 2020. As with medical cannabis, the Recreational Cannabis Act provides a limited number of licenses that the State will issue for cultivation centers and dispensaries. It is anticipated that the majority of the existing 55 medical cannabis facilities in the state will apply for "early approval" licenses to allow those dispensaries to serve both medical and recreational cannabis markets starting January 1, 2020. In State Police District 34 (DuPage County), there are currently three licensed medical dispensaries, one in Naperville and two in Addison. In addition to the one recreational license that will be available for existing medical cannabis distributors to have at their existing location, those business owners may also apply for a "secondary site" license which could be offsite elsewhere in the region. If allowed, one of these secondary sites may choose to locate in Glen Ellyn and could potentially start operation as early as January 1, 2020. After that, the State will issue licenses on a staggered basis through 2022. Municipalities may regulate or ban cannabis businesses in their jurisdiction but have no control over the licensing process. The Village of Glen Ellyn may prohibit or significantly limit the location of recreational cannabis businesses by ordinance. The Recreational Cannabis Act itself prohibits a new cannabis dispensary from locating within 1,500 feet of another dispensary. If the Village chooses to permit recreational cannabis dispensaries within its borders, then it may designate the zoning districts within which cannabis businesses may be allowed as a permitted or special use and enact reasonable zoning regulations such as standards for off-street parking, signs, separation from residential or other sensitive uses, and hours of operation, among other things. In addition, the Act gives the Village the ability to decide if it wishes to allow other kinds of cannabis businesses (cultivation, distribution centers, etc.) and if the use of cannabis will be allowed at the dispensary (such as a smoking lounge). The Recreational Cannabis Act sets tax rates on both wholesale transactions by cultivators and retail sales by dispensaries. It also authorizes counties and municipalities to impose local taxes on retail sales, including an optional tax of up to 3%. Village staff believe the public should have abundant opportunity for comment and access to information. To this end, a separate webpage was created on the Village website offering information about the CRTA and how it affects the Village. The webpage also provides a form by which the public can submit comments and concerns. A link to this webpage was provided in the Village e-newsletter and distributed on the Village's social media pages. Village staff will also use these channels to promote the meetings in which the Village's approach to recreational cannabis will be discussed.

History/Process
- August 19, 2019: the Village Board discussed the new state legislation passed by Governor Pritzker at a Village Board workshop meeting. The Village Board directed staff to gather more information related to potential benefits and impacts of allowing adult-use cannabis business establishments within the Village.
- September 9, 2019: the Village Board voted to approve Resolution No. 19-12 establishing that the Village Zoning Code does not currently address adult-use cannabis businesses and directed staff to draft potential amendments for consideration within 90 days.
- October 10, 2019: the Plan Commission reviewed potential text amendments that could be adopted if the Village Board determined that Glen Ellyn should allow cannabis retail establishments within the Village limits. There were several members of the public in attendance, of whom 16 spoke; 7 spoke in favor of allowing recreational cannabis sales in the Village, 7 spoke against and 2 were neutral. No members of the public specifically commented on the proposed text amendments. The Plan Commission recommended approval of the proposed text amendments by a vote of six (6) "yes" and two (2) "no". The consensus of the Plan Commission was that the Village should wait until more data becomes available from other communities that have opted to allow the sale of recreational cannabis in their jurisdictions. The Commission felt that a referendum might also be appropriate to bring further confidence that the right choice for the community is made.
- October 28, 2019: the Village Board reviewed the draft text amendments, considered the Plan Commission's recommendation and heard public comment on the topic. The Village Board determined that more information is needed before a decision can be made and a motion was put forth to put a moratorium on the sale of cannabis within the Corporate Limits of Glen Ellyn.
- November 12, 2019: the Village Board adopted Ordinance No. 6732 establishing a moratorium on cannabis business establishments until October 1, 2020.
- August 24, 2020: the Village Board amended Ordinance No. 6732 through the adoption of Ordinance 6799, which extended the moratorium on cannabis business establishments until December 15, 2020 so that the results of the referendum could be received prior to further consideration by the Village Board on the topic.
- November 3, 2020: a referendum was placed on the General Election Ballot requesting voter preference on whether Glen Ellyn should allow the sale of adult-use cannabis within the Village limits.
- December 7, 2020: the Village Board discussed the results of the referendum. With a slight majority of residents voting in favor (51.25%) of the Village allowing the sale of recreational cannabis within Glen Ellyn, staff was directed to bring forward text amendments to allow adult-use cannabis dispensaries.
- December 14, 2020: the Village Board voted to extend the moratorium ordinance on accepting or approving applications for recreational cannabis dispensaries in Glen Ellyn until July 1, 2021. The moratorium ordinance will maintain the status quo by prohibiting recreational cannabis businesses in Glen Ellyn. The Village Board will continue to weigh this decision and will revisit the topic prior to July 2021.

Press Releases/Updates
- April 27, 2020: At the Village Board meeting on Monday, April 27, the Board approved the language for an Advisory Referendum Question that will be added to the November 3, 2020 General Election ballot. The question will ask residents, "Shall the Village permit the sale of recreational cannabis within Village limits?"
- November 14, 2019: Glen Ellyn Village Board Votes to Impose a Moratorium Ordinance on Recreational Cannabis
- October 30, 2019: Glen Ellyn Village Board To Revisit Final Recreational Cannabis Decision in 2020.

Informational Documents

Share your thoughts and opinions on this topic: Please submit your comments on this topic HERE.
(All commentary will be shared with the Glen Ellyn Village Board, Plan Commission and key Village staff. However, individual responses will not be generated by the Village from this form.) Please note that additional public comment opportunities are available at Village Board and Commission Meetings for any member of the public.

Sign-up to receive updates
Sign up through the Village website's Notify Me Tool to receive updates (via email or text) on this topic.

- Is the sale of recreational cannabis permitted in the Village of Glen Ellyn?
- When will the new Illinois Cannabis Regulation and Tax Act (CRTA) law go into effect?
- Who will be allowed to smoke cannabis recreationally in Illinois under the new law?
- How much recreational cannabis can I possess in Illinois under the new law?
- Where can I purchase recreational cannabis in Illinois under the new law?
- Where will I be allowed to consume cannabis in Illinois under the new law?
- Where is consumption prohibited?
- Who will be allowed to possess cannabis in Illinois in a vehicle under the new law?
- Who will be allowed to grow cannabis in Illinois under the new law?
- What is the legal limit of THC blood concentration for a DUI in Illinois?
- Can I remove prior cannabis possession convictions from my criminal record now that cannabis is legal in Illinois?
- Can an employer restrict the use of cannabis?
- Can a landlord or business owner restrict cannabis use?
- Where can I get additional information regarding the Cannabis Regulation and Tax Act?
- What are the penalties for violating cannabis restrictions?
- When was cannabis made legal in Illinois?
- How much cannabis may an individual possess?
- Who can legally purchase and consume cannabis?
- Who can legally grow and sell recreational cannabis?
- Can the consumption/possession of cannabis be banned by a local municipality like Glen Ellyn?
- Will the Village have any regulatory abilities?
- What regulatory abilities, if any, do business owners and landlords have?
- Will cannabis consumption be allowed in public spaces?
- Where will consumption be allowed?
- Does the Act itself restrict the location of cannabis businesses or advertising?
- What is the licensing timeline?
- What will the Village's role be in the licensing process?
- Are there any changes to existing medical cannabis laws?
- Is the sale of medical cannabis currently allowed in Glen Ellyn? If so, where?
- How is cannabis taxed?
- How will the potential tax revenue generated be used?
- How do federal laws affect Illinois' law?
- How does the law affect workplace drug policies?
- Does the Act place limits on cannabis advertising?
- How does recreational cannabis affect criminal records?
http://www.glenellynfire.org/676/Recreational-Cannabis-Information
Human beings are built to connect. Our health and resilience are strengthened by interactions with others, which is why it's natural for us to have a preference for in-person, face-to-face support during times of struggle. However, the realities of COVID-19 and social distancing have forced helpers to adapt how we connect and offer support. This workshop explores the benefits and challenges around supporting others remotely. Practical strategies for how to intentionally engage with others when we are not able to meet in person are provided.

Price: $89.00 CAD per number of viewers
Continuing Education Credit Hours (CEU): 2

Some of the Topics Reviewed
- Types of Remote Support
- Benefits and Challenges of Supporting Remotely
- Using Technology to Build Relationships
- Key Communication Skills for Providing Support
- Special Considerations When Working Remotely

Learning Objectives
At the end of this workshop, participants should be able to:
- Transfer their experience of providing in-person support to a remote-based supportive framework
- Understand the benefits and challenges of providing support remotely
- Strengthen communication skills required for supporting remotely
- Boost confidence in providing support through remote mediums

ABOUT THE TRAINER
John Koop Harder, MSW, RSW
John has been working as a therapist and trainer for almost 20 years. He is a Registered Social Worker who holds a Master of Social Work degree. John is a contributing author of our Counselling Insights and Counselling in Relationships books. Much of John's career has centred on working with children, youth, adults, and families dealing with crisis and trauma. While he has a diverse practice, he has particular interest and specialized experience in working with individuals and families impacted by mental health concerns, violence, post-war trauma recovery, gender/sexuality issues, and sexual abuse recovery. John's work is also informed by his international experiences working with individuals and communities impacted by civil war and ethnic conflicts in Colombia, Albania, and Northern Ireland. John believes people are their own best experts and already have many of the skills, abilities, and competencies that will assist them to address the challenges influencing their lives. John is a warm and engaging facilitator who values interactive learning experiences.

Target Audience
This is an introductory-intermediate level workshop intended for social service and health care professionals, counsellors, social workers, and school personnel.
https://ca.ctrinstitute.com/on-demand-workshops/on-demand-providing-support-remotely/
On April 14, 2017 local time, Vice Premier Zhang Gaoli met with President Borut Pahor of Slovenia in Ljubljana during his invited visit to Slovenia. Zhang Gaoli first conveyed the warm greetings and best wishes from President Xi Jinping. Zhang Gaoli said that China and Slovenia enjoy a profound traditional friendship and are good friends and good partners who trust each other. Since the establishment of diplomatic relations 25 years ago, bilateral relations have maintained healthy and stable development, setting an example of how countries with different political systems, development paths, and cultural backgrounds can get along well with each other. China is willing to join hands with Slovenia to consolidate China-Slovenia traditional friendship, deepen political mutual trust, and enrich the content of bilateral practical cooperation, so as to elevate bilateral relations to new highs. Zhang Gaoli stated that China is willing to work with Slovenia to enhance high-level exchanges and policy communication, improve top-level design, promote the alignment of the two countries' development strategies, and carry out practical cooperation under the framework of the joint construction of the "Belt and Road". China will hold the Belt and Road Forum for International Cooperation in Beijing from May 14 to 15 this year and welcomes important members of the Slovenian government to attend the high-level meetings of the forum. China-Slovenia bilateral trade and mutual investment rank among the highest of the countries in the region. The two sides should give full play to each other's advantages, make use of cooperation platforms in important areas such as the "16+1 Cooperation", and actively explore cooperation in major projects in transportation infrastructure construction and other fields. The two sides should continue to carry out practical cooperation in such fields as aviation, machinery and electronics, and new energy vehicles, and pay more attention to and increase investment in information technology, creativity and innovation, so as to inject new impetus into improving the quality and efficiency of China-Slovenia cooperation in the new situation. Borut Pahor asked Zhang Gaoli to convey his warm greetings and good wishes to President Xi Jinping. He said that since the establishment of bilateral diplomatic relations, Slovenia has always been committed to actively developing its relations with China and has upheld the one-China policy. Highly appreciating President Xi Jinping's speech at the World Economic Forum in Davos, Slovenia advocates trade liberalization, values China's important role in international affairs, and firmly supports the EU in deepening relations with China. Slovenia is satisfied with the high-level mutual trust between the two countries and harbors full confidence in the future of bilateral relations. On the same day, Zhang Gaoli also held talks with Prime Minister Miro Cerar of Slovenia. Zhang Gaoli conveyed the warm greetings and sound wishes of Premier Li Keqiang. He said that China is willing to join hands with Slovenia to take the 25th anniversary of the establishment of bilateral diplomatic relations as an opportunity to enhance high-level exchanges, promote political mutual trust, and push forward practical cooperation.
China stands ready to accelerate negotiations on the signing of an intergovernmental memorandum of understanding on jointly promoting the "Belt and Road" construction, so as to boost connectivity and mutually beneficial, win-win cooperation. China will continue encouraging more enterprises with sound capability and high reputation to invest and start businesses in Slovenia, and will push forward bilateral cooperation in automobiles, aviation, pharmaceuticals, metal processing, food processing and other key fields. Based on the construction and development needs of Slovenian airports and ports, the two sides will strive to launch more major projects as early as possible. China welcomes Slovenian enterprises to explore the Chinese market and expand the export of high-quality agricultural products and other superior products to China. China is willing to intensify exchanges and cooperation with Slovenia in such fields as culture, education, film and television, tourism, and civil aviation, so as to enhance the mutual understanding and friendship between the two peoples. Zhang Gaoli pointed out that Slovenia has always supported and actively participated in the "16+1 Cooperation". China supports Slovenia in playing a greater role in the "16+1 Cooperation" in forestry, tourism, think tanks, people-to-people and cultural exchanges and other fields, based on its own advantages as a leading country of the "16+1 Cooperation" mechanism in forestry. Miro Cerar asked Zhang Gaoli to convey his warm greetings and sound wishes to Premier Li Keqiang. He said that Slovenia-China relations currently maintain a sound momentum of development. Last year, the bilateral trade volume reached the highest level in history, and more and more Chinese tourists are coming to Slovenia for travel. Slovenia is willing to further enhance and explore bilateral cooperation in economy, trade, forestry, agriculture, tourism, infrastructure, winter sports, digital construction and other areas. Slovenia reiterated that it will continue to actively participate in cooperation in all fields under the framework of the "16+1 Cooperation", firmly support the construction of the "Belt and Road", and spare no effort to realize the robust and sustainable development of bilateral relations.
https://www.fmprc.gov.cn/mfa_eng/wjb_663304/zzjg_663340/xos_664404/xwlb_664406/t1454468.shtml
Atmospheric conditions favorable for refraction of the radar beam can produce additional ground clutter return, called anomalously-propagated (AP) return. This AP return is a contaminant within the radar moment data that causes erroneous estimates of rainfall accumulation and false wind shears, and can confound operational users of the data such as air traffic controllers. Within the current Weather Surveillance Radar-1988 Doppler (WSR-88D) (also called "NEXRAD") quality control system, AP clutter return is removed by manual application of additional clutter filters. Automation of clutter filter control is desired. To achieve automation, a recognition algorithm must first determine where the AP ground clutter return is located. The algorithm that performs this task is contained within the Radar Echo Classifier (REC) that is currently being installed within the WSR-88D Open Radar Product Generator (ORPG), Build 2. The ORPG Build 2 will be deployed in September 2002. The Radar Echo Classifier (REC) uses fuzzy-logic techniques to determine the type of scatterer measured by the WSR-88D. Currently, three algorithms have been designed and tested: the AP detection algorithm (APDA) locates anomalously-propagated (AP) ground clutter return, the precipitation detection algorithm (PDA) determines convective and stratiform precipitation regions, and the insect clear air detection algorithm (ICADA) defines return from insects in the boundary layer. These algorithms have been developed using data from WSR-88D systems located across the USA and from various field campaigns of the NCAR S-Pol polarimetric radar. Expert users of the WSR-88D data provided the "truth" data sets used to optimize the algorithms' performance. For the S-Pol data sets, the polarimetric variables are input into a fuzzy-logic polarimetric identification (PID) algorithm to determine the type of radar echo return that is present. The PID output is used as the "truth" field for optimization of algorithm performance. Results will be presented and statistical estimates of performance shown.
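To illustrate the fuzzy-logic idea in miniature: each echo class gets membership functions over a handful of radar-derived features, the membership values are combined into an interest score, and the class with the highest score is chosen. The Python sketch below is a toy version only; the feature names, trapezoid breakpoints and class list are invented for illustration and are not the operational APDA/PDA/ICADA parameters.

```python
# Toy fuzzy-logic echo classifier. All membership breakpoints and feature
# names are illustrative assumptions, not the operational REC values.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps to 1 over [a,b],
    stays 1 on [b,c], ramps back to 0 over [c,d]."""
    x = float(x)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return min(rise, fall)

# Hypothetical membership functions per echo class and feature
MEMBERSHIP = {
    "ap_clutter": {
        "mean_velocity_mps": lambda v: trapezoid(abs(v), 0.0, 0.1, 1.0, 2.3),
        "spectrum_width_mps": lambda w: trapezoid(w, 0.0, 0.1, 1.5, 3.0),
        "reflectivity_texture": lambda t: trapezoid(t, 10, 20, 60, 80),
    },
    "precipitation": {
        "mean_velocity_mps": lambda v: trapezoid(abs(v), 1.0, 3.0, 50, 60),
        "spectrum_width_mps": lambda w: trapezoid(w, 1.0, 2.0, 8.0, 12.0),
        "reflectivity_texture": lambda t: trapezoid(t, 0, 0, 10, 20),
    },
}

def classify(features: dict) -> str:
    """Pick the class with the highest mean interest over its features."""
    scores = {
        cls: np.mean([fn(features[name]) for name, fn in funcs.items()])
        for cls, funcs in MEMBERSHIP.items()
    }
    return max(scores, key=scores.get)

# A gate with near-zero velocity, narrow spectrum width and rough
# reflectivity texture scores highest as AP clutter in this toy setup.
print(classify({"mean_velocity_mps": 0.4,
                "spectrum_width_mps": 0.8,
                "reflectivity_texture": 35.0}))
```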
https://ams.confex.com/ams/annual2003/techprogram/paper_54946.htm
Egg Bhurjee Recipe. Alfred Prasad, head chef at London's Michelin-starred Tamarind, shows how to make his special Egg Bhurjee. This is a spicy cooked egg dish, fantastic with toast or hot chapattis for a classic Indian breakfast. Enjoy our Egg Bhurjee recipe.

Step 1: You will need
- 2 tbsp vegetable oil
- 2 medium onions
- 2 green chillies
- 1 inch piece ginger
- 1/2 tsp turmeric powder
- 3/4 tsp chilli powder
- 2 tomatoes
- 4 eggs
- 110 ml milk
- salt to taste
- 1/2 bunch coriander leaves
- 1 mixing bowl
- 1 whisk
- 1 sharp knife
- 1 non-stick frying pan
- 1 wooden spoon

Step 2: Beat eggs
Get started by breaking 4 eggs, and putting the yolks and whites into a mixing bowl. Add 1/4 of a cup of milk, and whisk. Make sure that the milk and eggs are totally blended.

Step 3: Chop vegetables
Cut the ends off an onion with a sharp knife. Then peel off the skin, cut in half, and chop finely. Next, finely chop 2 green chillies.

Step 4: TOP TIP
Dip your fingers in oil before you chop the chillies. This will stop the juices inside the chillies stinging your fingers. Also, don't forget to wash your hands really well after chopping the chillies. Wash the two tomatoes, and finely chop. Use a sharp knife to finely cut the rough outer skin off the ginger. Then, finely chop. Roughly chop up half a bunch of coriander leaves. Coriander really adds a zingy flavour to the dish, but the exact amount you use depends on your preference.

Step 5: TOP TIP
To really clean coriander and select the freshest leaves, wash it after you've chopped it. Put the leaves into a bowl of cold water, and swirl them around for about 30 seconds. The freshest leaves will float on the top of the water. Remove these and leave to drain on kitchen paper or a strainer.

Step 6: Fry
Heat two tablespoons of vegetable oil in a large non-stick frying pan, over a medium heat. Add the chopped onion and chilli to the pan, and cook until the onion is translucent. This should take about 5 minutes.

Step 7: Spices
Then, add the chopped ginger along with 1/2 teaspoon of turmeric and 3/4 teaspoon of chilli powder. Keep frying for another 5 minutes.

Step 8: Tomatoes
Add the chopped tomatoes. Stir everything together over a medium heat until the tomatoes have turned to pulp, and the mix resembles a sauce. You might need to add a little water along the way, to make this happen. Stir and cover. Leave to simmer for about 10 minutes.

Step 9: Egg
Now add the egg and milk mixture that you whisked earlier. Keep stirring until the egg is cooked. Don't let it burn. The egg will become more solid as it cooks.

Step 10: Coriander
Gently stir in the chopped coriander leaves.

Step 11: Serve
Spoon into a dish. And serve.
http://www.letvc.com/product/362/607/how-to-make-egg-bhurjee
Senate Policy Committee Moves Along Job Killer Bills Over Employer Objections Despite employer objections, the Senate Labor and Industrial Relations Committee this week passed two California Chamber of Commerce-opposed job killer bills. One deals with releasing company pay data and the other with unlawful employment practices. Both bills are opposed by a large coalition of employer groups and local chambers of commerce. - SB 1284 (Jackson; D-Santa Barbara) Disclosure of Company Pay Data. - SB 1300 (Jackson; D-Santa Barbara) Removes Legal Standing and Prohibits Release of Claims. SB 1284: Pay Data Report SB 1284 requires that in 2019, an employer that is incorporated in California with 100 or more employees must submit a pay data report to the Department of Industrial Relations (DIR). CalChamber has identified SB 1284 as a job killer because the bill creates a false impression of wage discrimination or unequal pay where none exists and therefore subjects employers to unfair public criticism, enforcement measures, and significant litigation costs to defend against meritless claims. Just last year, Governor Edmund G. Brown Jr. vetoed AB 1209 (Gonzalez Fletcher; D-San Diego), which was a very similar bill. SB 1284 provides the same uncertainty and ambiguity as AB 1209. The CalChamber and coalition also oppose SB 1284 because it: - Exposes employers to public shaming for wage disparities that are not unlawful. - Allows employers to use the federal Employer Information Report, otherwise known as the EEO-1 Report. - Utilizes data that may be affected by employee choices. SB 1300: Legal Standing/Release of Claims SB 1300 removes the current legal standing requirement for specific Fair Employment and Housing Act (FEHA) claims and limits the use of nondisparagement agreements and general releases. CalChamber has identified SB 1300 as a job killer because these provisions will significantly increase litigation against California employers and limit their ability to invest in their workforce. The CalChamber and coalition also oppose SB 1300 because it: - Removes the current standing requirement and allows anyone to sue a company for specific harassment claims. - Is unnecessary and exposes employers to costly litigation. Sexual harassment prevention is already regulated by the Department of Fair Employment and Housing (DFEH). - Will deter employers from conducting self-audits and providing severance agreements. - Will chill the use of settlement agreements, disadvantaging employers and employees. Key Vote Both SB 1284 and SB 1300 passed Senate Labor and Industrial Relations on April 11 by votes of 4-1: Ayes: Pan (D-Sacramento), Jackson (D-Santa Barbara), Mitchell (D-Los Angeles), Wieckowski (D-Fremont). No: J. Stone (R-Temecula). Action Needed Both SB 1284 and SB 1300 will be considered next by the Senate Judiciary Committee.
https://cajobkillers.com/senate-policy-committee-moves-along-job-killer-bills-over-employer-objections/
In the current state of the robotic art, industrial robots are high-cost machines. Small electrical units range in cost from $30,000 to $50,000, while complete installations can range in cost from $100,000 to $150,000. In addition, such robots are of limited application because they typically operate on dead-reckoning. These robots lack force-sensing ability and force control capabilities which are necessary elements in an adaptive control application. Without adaptive control capabilities, such activities as parts-mating in assembly operations are difficult to perform. Generally, commercially available industrial robots are position-controlled devices which can be commanded to move from one position to another. In applications such as paint spraying, dipping for investment castings, and materials handling in well-structured environments, such a position controlled capability is adequate. However, where it is desired to use a robot for assembly or fitting operations, the presently available commercial robots are generally not capable of sensing and controlling the forces that develop during encounters between the manipulated and the fixed objects. In the past, typical solutions to this problem have involved the use of special jigs and fixtures designed and fabricated to locate parts and to provide the required compliance or "give" to allow the robot to perform the task despite some misalignment of the parts. Such jigs and fixtures typically triple the cost of a robot installation. Another solution involved the use of a compliant element at the wrist of the robot to absorb some of the errors caused by misalignment of the parts. Current efforts in providing force-sensing and control include a D.C. servo approach. In such an approach, a conventional D.C. motor is driven by control circuitry. A tachometer, a position-encoder, and a strain gauge are all positioned on the D.C. motor shaft. From the strain gauge, a torque measurement is obtained, while from the position-encoder, a shaft position measurement is obtained. The tachometer provides a velocity measurement. Based upon these measured quantities, the control circuitry shapes the drive signal provided to the D.C. motor to control the same. There are numerous drawbacks in such a configuration. The D.C. motor, typically having two to four poles, is costly and often requires substantial gear-reduction in order to produce the proper rotational velocity range which is suitable for robot operation. Additionally, brush and commutator wear limit the life of the motor to between 5,000 and 10,000 hours. Moreover, in order to provide position control capabilities, two feedback loops are required. Thus, the conventional D.C. servo approach has a high component count, a substantial gear-reduction requirement, a high motor cost, and reduced motor life. In recent years, stepper motors have found increasing application in driving robotic arms. Customarily, stepper motors are driven by a train of pulses or steps. Each step causes the motor to rotate some fraction of one complete revolution, this fraction being a function of the number of poles in the motor. Thus, the angular position of a stepper motor is determined by the number of steps supplied to it. Among the problems associated with the use of stepper motors in a robotics application are resonance at certain velocities and jerky movement at low rotational velocities. There have been a number of stepper motor drive systems proposed for driving the motor with fractional steps.
Generally, these proposals disclose a drive waveform which varies sinusoidally or trapezoidally and which is formed of a fixed number of fractional steps. One drawback of such drive configurations is that at high rotational velocities, the duration of each fractional step is too short to permit substantial control to be exercised over the motor on a fractional step basis.
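To make the fractional-step idea concrete, the sketch below (Python, with illustrative microstep counts and current scaling) builds a sine/cosine microstepping table for a two-phase stepper: each full step is subdivided into several intermediate current set-points, which smooths low-speed motion and reduces resonance, while at high step rates each entry is held so briefly that the fractional levels give little extra control, which is the drawback noted above.

```python
# Minimal sketch of a sinusoidal microstepping table for a two-phase
# stepper motor. The microstep count, steps per electrical cycle and
# peak current are illustrative assumptions.
import math

def microstep_table(microsteps_per_full_step=8, full_steps_per_cycle=4, i_max=1.0):
    """Return (phase_a, phase_b) current set-points for one electrical cycle."""
    n = microsteps_per_full_step * full_steps_per_cycle
    table = []
    for k in range(n):
        theta = 2 * math.pi * k / n            # electrical angle at microstep k
        table.append((i_max * math.cos(theta),  # phase A current
                      i_max * math.sin(theta))) # phase B current
    return table

# At low commanded velocity the drive steps through the table slowly, so the
# rotor moves in many small increments; at high velocity each entry is held
# so briefly that the drive effectively degenerates back toward full stepping.
for i_a, i_b in microstep_table()[:4]:
    print(f"A: {i_a:+.3f}  B: {i_b:+.3f}")
```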
Local CAMRA Branch Country Pub of the Year 2015 and winner of previous awards, this warm, friendly 17th-century locals' pub has a striking ceiling display of hops, coppers and brasses built up over twenty years. There is a magnificent open fire, and a large dining area to the rear offers locally sourced, home cooked food. Six ales and at least two ciders (and up to five) are served. The pub hosts pool and pétanque teams and is the home of the local Bonfire Society. The large garden has a children's play area.
https://www.southeastsussex.camra.org.uk/viewnode.php?id=15631
Job Summary: Coaches practices and workouts, assists in preparing game plans and works with assigned student athletes, serving as Assistant WBB Coach. Develops a comprehensive knowledge of the sport and of University, ACC and NCAA rules. Ensures athletes' compliance. Collaborates with Athletic Academic Services and monitors student-athletes' academic progress. Participates in public relations.

Assistant WBB Coach – All Duties are Essential

Skills Coaching – 35%: Coaches practice sessions and workouts. Attends Coaches' preparatory meetings and assists in setting up practice and game plans. Implements strategies with assigned student athletes. Maintains knowledge of WBB strategies and skills. Assists with coaching during athletic events.

Professional Development and Compliance – 30%: Develops a comprehensive knowledge of WBB rules and regulations. Stays abreast of all University, ACC and NCAA rules. Engages in continuing education with the Compliance Office. Ensures that student-athletes adhere to rules and regulations concerning conduct, appearance and behavior.

Academics Monitoring – 15%: Collaborates with Athletic Academic Services. Monitors academic progress of student-athletes, enforcing all academic policies. Collaborates with the Director of Vickery Hall to refer athletes to academic resources.

Public Relations – 10%: Participates in University public relations efforts. Assists with verbal and written communications efforts and performs public speaking.

Administrative Support – 10%: Manages team travel arrangements, such as pre-game meals, accommodations and practice times. Serves as tournament director. Manages special events, including parents' and alumni weekends. Keeps records up-to-date.

QUALIFICATIONS: Bachelor's Degree with some experience coaching at the Division 1 level. Preferred: Master's Degree

JEANNE CLERY ACT: The Jeanne Clery Disclosure Act requires institutions of higher education to disclose campus security information including crime statistics for the campus and surrounding areas. As a current or prospective Clemson University employee, you have a right to obtain a copy of this information for this institution. For more information regarding our Employment, Campus Safety and Benefits, please visit the Human Resources – Prospective Employees web page shown below: http://www.clemson.edu/cao/humanresources/prospective/

CLOSING STATEMENT: Clemson University is an AA/EEO employer and does not discriminate against any person or group on the basis of age, color, disability, gender, pregnancy, national origin, race, religion, sexual orientation, veteran status or genetic information. Clemson University is building a culturally diverse faculty and staff committed to working in a multicultural environment and encourages applications from minorities and women.
https://www.hoopcoach.org/job/womens-assistant-clemson-guards-coach/
Zoria: Age of Shattering - Devlog Update: Progress so Far

A new devlog update for Zoria: Age of Shattering shares progress so far.

Devlog: Updates for 2021 so far

Greetings adventurers of Zoria, It's been a long time since our last devlog, but there have also been many changes to the game in that time. The main reason we've been silent is that we've been busy making the game. Because of this, today's devlog is a short and introductory one, with a brief description of all the things that have happened to the game since then.

Alchemy and Cooking: our last devlog was about crafting, but in the meantime we also implemented the Cooking and Alchemy systems. We will be making a separate devlog detailing how the systems work in the following days.

Quest System Update: this update is not something immediately obvious to players, but it's nonetheless a very important update, as it allows us far more control in crafting the quests and interactions in the game. With this update we now have far more freedom in making dialogue choices as well as quest choices that have real impact and consequences in the game.

NPC behavior system: we wanted the world of Zoria to feel alive and inhabited by all kinds of characters, each with their own lives. We achieved this by creating a behavior system for all the humanoid NPCs in the game, friendly or hostile. Each NPC in the game has one or more points of interest and behaviors, triggered by different events in the game.

Classes and Abilities update: this is a big one. We are in the process of updating the skill trees for all the classes in the game, with new abilities and a completely revamped tree. The update is intended to give more agency to the player in customizing all the followers, as well as making the combat more interesting with all kinds of new abilities. We will return in the following weeks with detailed presentations for each class, the skill tree and all the abilities available.

Content: together with all these updates we have been working on the content, with the first big area in the game, Rithvale, almost complete at this point in terms of main area, dungeons and encounters. There is still work being done on the main quest, side quests and all the small details that make a game good. As with all the other elements described in this post, we will return with a more detailed post about Rithvale. We also completely reworked some of the scenes from the Prologue (Thallamar, for example) and increased the visual quality overall.

Until further details, I leave you with the new Main Screen background.
https://www.rpgwatch.com/news/zoria--age-of-shattering--devlog-update--progress-so-far-45570.html
Throughout Germany, you can find incredible destinations full of amazing natural beauty, historical attractions, religious landmarks and interesting culture. Although hotspots like Munich and Frankfurt are definitely worth exploring, you won't want to miss the many great cities found further north. Northern Germany boasts an array of lesser-known destinations like the port city of Lübeck, the bustling city of Hamburg and the Gothic city of Stralsund. Enjoy your time in Deutschland by exploring these fantastic and unforgettable destinations in Northern Germany.
1. Lübeck
2. Hamburg
3. Schwerin
4. Stralsund
5. Wismar
Source: www.touropia.com
https://jpsartre.org/5-most-amazing-destinations-in-northern-germany-europe-love-is-vacation-2/
This call-to-arms resonated with me profoundly when I read it as a young ballet teacher at the end of the 1980s. It not only shaped my subsequent career, but continues to drive me to constantly question and seek answers. Disillusioned at the time by ballet training systems and methodologies, I began my own reflective journey that led to many questions. Despite sports professionals eagerly acknowledging and applying an abundance of new findings to their work – especially in how we learn – the teaching of classical ballet seemed sacrosanct, stuck and resistant to change. There was a real dearth of available research and a resulting void of any practical advice for teachers or students, so I set out to find answers for myself. As my research grew, the answers came, and I found others on the same journey.

Dance teaching is imbued with tradition; accepted practices and standards are passed down from one generation to the next. Dance teachers often teach as they themselves were taught, and this loyalty to tradition can unfortunately be the enemy of change. Considering change may mean a fundamental examination of the roots of what and how you are teaching, and in turn could cause teachers to fear a loss of identity. This may, in part, explain our profession's hesitancy in acknowledging scientific advances in the study of teaching and learning. Thirty years on, and with an ever-growing body of evidence to support the need for change, we are still, frustratingly, a long way from the truth.

Content vs pedagogy

There is more to a dance education than passing exams. Over the years, there have been an increasing number of teacher training courses available for private dance teachers focusing on content knowledge (the steps of classical ballet) rather than pedagogical knowledge (the science of teaching). Content knowledge courses, offered by many dance societies, often focus on assessment content and training exercises for a particular graded exam. The concern with such courses is the implication of a hierarchy of 'what to teach' over 'how to teach'. Teachers can falsely presume that the syllabus content is a training programme. A narrow focus on syllabus settings will not develop diverse and skilled dancers. The exercises, steps and variations within a syllabus are the tools for an assessment, not the recipe for a dance training. Studying to sit syllabus exams can be rewarding but, as is often now heard in many contexts, can be to the detriment of broader and deeper learning: especially so where any learning revolves solely around a syllabus. Syllabus knowledge may help students pass an exam but will not help them learn and fully understand how to dance. High marks in exams do not necessarily equate to a well-rounded, skilled dancer.

Pedagogy, by contrast, is the study of how teachers' actions and interactions affect student learning. Pedagogical study includes subjects such as child development and learning; communication; motor skill acquisition and reflective practice. However, regardless of the subject taught, it is a well-established principle in teacher training that effective teaching and learning rely on both content and pedagogical knowledge and the resulting ability to apply both in context. The lack of teacher training courses providing informed, current and relevant pedagogical information has been detrimental to the advancement of classical ballet education and the art form as a whole.
Only through education can we instigate change, help teachers understand the significance of developments in dance teaching and learning to their practice, and then begin to see a real difference in student learning and performance.

A healthy balance

At The Royal Ballet School we are actively championing and implementing a step-change in teacher training. By providing progressive and innovative teacher training we can make a real difference to student learning in this country. At the forefront of teacher training and education, we deliver and embed our philosophy to a wide cross-section of teachers: recreational, educational and vocational teachers, and of course to our own staff, through regular INSET sessions. Adapted to context, our message is the same regardless: we deliver the 'what', the 'how' and the 'why' of classical ballet teaching, ensuring that both content and pedagogy are given equal weighting.

What to teach is driven by the associated psychomotor, cognitive and affective skills aligned with the developmental and learning needs of students: skills such as placement, turnout, proprioception, coordination and interpretation. Without the foundation and associated skills in place, the 'steps' will lack depth of clarity and quality. There must be a balance between developing the 'feeling before the form' and the 'form within the feeling' – learning from the inside out.

Pedagogical study will enable teachers to understand the 'how' and 'why' of student learning, helping them to adapt and apply strategies to advance the performance skills of their students. For example, an area of dance pedagogy that has seen rapid progression is within the field of motor skill acquisition. Appreciation of motor learning can enable teachers to rationally plan structure, content and methodology, helping students to avoid performance barriers and develop into autonomous learners. Within this context of motor skill acquisition, Christopher Powney's concerns over competitions, published last November, can be fully understood at a scientific level. Inhibiting what is learned, as a result of either excessive competition practice or of learning limited to syllabus settings, can result in students becoming one-trick ponies, failing to reach their dynamic movement potential. Fast-tracking skills at the expense of developing clean technique can result in embedding ineffective neural pathways and bad habits that are impossible to eradicate. Perhaps short-term glory should be sacrificed for long-term gain?

The dance world has been painstakingly slow in accepting the need for change, but I believe we are on the brink of transition. We need to empower teachers with the skills that are required, not just to teach the students of today but those of tomorrow, in an ever-challenging and changing world. Not only will we see more children engage positively with the art form, but we will develop intelligent, dynamic dancers who in time may teach the next generation to eagerly embrace change and not ever shy from the truth.
https://www.royalballetschool.org.uk/2019/05/16/out-of-step-the-need-for-change-in-ballet-teacher-training/
Today, John Swinney, Cabinet Secretary for Education and Lifelong Learning and Deputy First Minister, launched Respect for All in the Scottish Parliament. Respect for All is Scotland's national approach to anti-bullying. LGBT Youth Scotland is pleased to announce the launch of Addressing Inclusion: Effectively Challenging Homophobia, Biphobia and Transphobia, which sits alongside Respect for All. Developed in partnership with respectme, Addressing Inclusion aims to help primary and secondary education staff to challenge incidents of homophobic, biphobic and transphobic bullying in their schools. Research shows that 69% of LGBT young people had experienced homophobic or biphobic bullying in school and 77% of transgender young people had experienced homophobic, biphobic or transphobic bullying in school. This guidance will support teachers to ensure that they are able to recognise and respond to incidents of bullying. Additionally, it features practical advice on how to be proactive in ensuring that their schools are inclusive learning environments and that learners understand and respect LGBT identities. This will help schools to prevent bullying before it starts; we believe prevention is the best method of responding to bullying.

"Research with young people tells us that homophobic, biphobic and transphobic bullying is still a significant problem in Scottish schools. At LGBT Youth Scotland, we know the effect that can have on young people's mental health and wellbeing and the resulting impact on educational attainment and achievement. This is an up-to-date and practical resource to help teachers and other professionals working with young people ensure LGBT learners are safe, supported and included." – Fergus McMillan, Chief Executive of LGBT Youth Scotland

LGBT Youth Scotland has also recently launched Supporting Transgender Young People: Guidance for Schools in Scotland in partnership with Scottish Trans Alliance and funded by the Scottish Government. You can access this guidance, as well as other educational resources here.
https://lgbtyouth.org.uk/news/2017/november/addressing-inclusion-effectively-challenging-bullying/
Changes in spine morphology may underlie memory formation, but the molecular mechanisms that subserve such alterations are poorly understood. Here we show that fear conditioning in rats leads to the movement of profilin, an actin polymerization-regulatory protein, into dendritic spines in the lateral amygdala and that these spines undergo enlargements in their postsynaptic densities (PSDs). A greater proportion of profilin-containing spines with enlarged PSDs could contribute to the enhancement of associatively induced synaptic responses in the lateral amygdala following fear learning.
Original language: English
Pages (from-to): 481-483
Number of pages: 3
Journal: Nature Neuroscience
Volume: 9
Issue number: 4
State: Published - Apr 2006
Bibliographical note / Funding Information: We thank D. Bush for his helpful discussions about this work. This research was supported in part by National Institutes of Health grants MH58911, MH46516, MH38774 and MH067048.
https://cris.haifa.ac.il/en/publications/fear-conditioning-drives-profilin-into-amygdala-dendritic-spines
Weddings, funerals, baby showers – they are momentous occasions, but what happens to the beautiful floral arrangements after the attendees scatter? Most are thrown away, but others are donated – along with flowers slightly past their prime from local markets – to The Petal Connection, a Rocklin nonprofit that repurposes the roses, orchids and daisies into smaller bouquets that are delivered to hospices throughout Placer County. Most months, about 750 bouquets are delivered. In months with flowery occasions, like Valentine's or Mother's Day, volunteers make closer to 1,000, said Jennifer Arey, founder of the nonprofit.

Recently a group of nine ladies, scissors in hand, gathered around a large table inside a storefront in Rocklin. Behind them was a long line of bright flowers donated by Raley's, Trader Joe's, Whole Foods and Nugget Market. Volunteers chatted and laughed as they pulled out the wilted petals, sorted the flowers and reassembled them into posies. The bouquets are placed in donated vases to be picked up and delivered later by hospice volunteers. "It's very meaningful and humbling to enter into these families' lives at this time," said Marilyn Bell, a Meadow Vista resident who's volunteered at Sutter Auburn Faith Hospice since 1996.

Arey's mother was a florist, and after Arey's grandmother passed away, she had a new understanding of how flowers could brighten a room and change a mood, but it wasn't until her son went to college that she decided she wanted to fill her days with flowers. The Petal Connection started in her garage in October 2013, with one hospice facility recipient: Bristol Hospice in Roseville. It bloomed from there. Soon she had close to 25 volunteers and a list of hospice recipients throughout Placer County, including Vitas, Green Valley and Sutter Auburn Faith. In an effort to make the bouquets as beautiful as possible, Arey now has a volunteer florist who trains new volunteers once a month. With new floral-arranging skills, volunteers have branched out with their bouquets, sometimes repurposing decorated soup cans or tea cups as vases. Every week is a new puzzle, Arey said, as she never knows how many vases and flower donations she'll receive or what kind of flowers they'll be, but she always makes it work. She never shortchanges a hospice center after she's promised a delivery.

One of her favorite parts of her volunteer work is the response she receives from the recipients. One nurse told Arey the story of a very quiet elderly gentleman she'd been caring for. It was difficult to start up a conversation with him, but when she walked into his room with roses from Petal Connection, he lit up. He told the nurse he'd had 160 rose bushes in his yard, and he just wanted to smell them again. Most of the time, Arey said, it's the families and caregivers who are grateful for the fresh bouquets. They often tell her the flowers brightened up a really difficult day. "It's a gift for me personally that someone is going to see God's beauty at the end and know this world is good," Arey said.
https://thepetalconnection.org/auburn-journal
Olivia weighed in at 9lbs, 3 oz. Michelle (4th year medical student) said that Olivia is growing better than they could hope for a heart baby. She’s in the 50th to 70th percentile for weight!!! Breast milk seems to agree with this kid. Oxygen levels were in the mid 70’s. Heart and pulse rates were where they need to be and “very good”. She did an echo and the shunt looks great.
https://www.olivia-ann.com/wordpress/2004/08/
What if we could actually capture a measure that lies beyond engagement – the moments spent working in the FLOW where we are fully immersed and engaged in the work we are doing?

It's not uncommon for me to suddenly realize that I've been gliding across the water effortlessly, unaware of the energy my arms were exuding or how far I'd traveled. Flow occurs when we experience a deeply satisfying connection to our activity, intrinsically motivating us to go further and making even challenging tasks seem effortless. Time passes quickly and unnoticed as we're completely engulfed in the moment. That's not to say that flow can occur only in the comfort zone. On the contrary – many people (including me, I'll admit it) can become easily bored when they're too calm or cozy; when there's no challenge, no adrenaline to sharpen the senses. Boredom leads to complacency and frustration. Flow, however, involves a highly engaged state of being, one where we're focused on activities that utilize our best skills and allow us to perform at our peak.
- Provide challenging opportunities that allow employees to utilize their best skills.
- Connect the organization's mission and values to individuals' passions and beliefs.
- Create a physical environment that supports collaboration and commitment to results.
Let's look beyond engagement. Let's strive to build organizational cultures where employees can experience the sublime state of flow as a daily work experience. When, and under what conditions, have you experienced the state of flow in the workplace?
http://www.thrioconsulting.com/blog/what-lies-beyond-engagement
New Doc Captures Oral Histories of South Williamsburg

Andrew Parsons and Laurie Sumiye's "Of Memory and Los Sures" screens at the IFC Center on Thursday as part of DOC NYC in the program Then & Now. The short film interviews longtime residents of Williamsburg's Southside against archival material and present-day imagery to create a portrait of the rapidly changing neighborhood. We spoke to them about the project.

What makes South Williamsburg unique?

Andrew: I think part of what makes Los Sures unique is how it persists in memory. When you walk up and down the street you see social justice murals from the 80s and 90s as well as portraits of those caught up in gang violence. At the same time you have these ghost signs and old ads from businesses from the 50s or so. It's a bit of a museum—but it really comes out when you talk to longtime residents. There's an interesting tension between the history of the neighborhood and how it exists in memory. It's a neighborhood that has a history of ethnic diversity: Germans pre-1950s, Puerto Ricans after the Great Migration, and Dominicans more recently. So in memory there's a nostalgia for this great working-class neighborhood cohesion that existed in all of those phases. But at the same time, it has a history of gang violence and was hit hard by the crack epidemic, so memories are often marred by trauma and conscious of not wanting to romanticize those aspects as well.

Laurie: I think South Williamsburg is unique because it has retained its Latin culture and flavor, even in the face of gentrification. The community is well-defined in a geographical area of North Brooklyn with homegrown organizations that still actively support it, like El Puente, Southside United HDFC (Los Sures), Williamsburg Charter Schools, to name a few. Visually, the social justice murals by Los Muralistas de El Puente that dot the neighborhood give it a specific identity and personality. You feel a specific youthful energy, hope and local culture in a way that you don't see in other parts of the city.

How is its history different from that of the Northside?

Andrew: The main difference is that the Northside was never a Hispanic neighborhood; through most of the 20th century, it was primarily Italian and Polish. Williamsburg in general is a neighborhood of immigrants, and its shared history is one of many different immigrant groups. While making this film, Luis Gardenia Acosta told us that Latinos all over the US knew of "Los Sures" because it was such a dense Puerto Rican population in New York City. But to give a real view of its cultural value in context to the rest of Northwest Brooklyn, he'd take visitors on a walking tour from the North to the South. "I would actually do this United Nation's mile start in the Northside," he told us, "where Polish was freely spoken on the streets, come to the Southside on Bedford… where the voices and the language would change as you cross Metropolitan Avenue and all of a sudden you're hearing only Spanish. And then you would go into the Hasidic neighborhood where you would hear Yiddish mostly." He said what made it unique is you get your idea of American norms challenged but "at the same time, it still has that sense of hometown America." So since the great migration of Caribbean Hispanics in the 1950s, there's been a real cultural diversity to the neighborhood and its history that's a real point of pride.
What's driving the transformation of the neighborhood now?

Andrew: It's a blend of high-value real estate, because of how safe the neighborhood is and how close it is to Manhattan, and also the long-time residents who want to maintain the vibrant past and the stability that they've achieved in the neighborhood, too. Everyone wants to have a fair share and take part in the ongoing development of the neighborhood.

Laurie: The neighborhood has also become desirable for newcomers coming to live in New York; Williamsburg has an international reputation now. I regularly see young and old tourists coming from Manhattan to shop, dine and drink here.
PanARMENIAN.Net - President Serzh Sargsyan on Friday, November 24 blasted Azerbaijan for its unfounded accusations against Armenia over the Nagorno Karabakh settlement process. Addressing the Eastern Partnership Summit in Brussels, Sargsyan said that no matter how many times Azerbaijan may attempt to distort the essence of the Karabakh issue and the peaceful settlement process, the conflict is nonetheless based on the three principles of international law. The president cited non-use of force or threat of force, territorial integrity, and the equal rights and self-determination of peoples as the principles in question, proposed by the OSCE Minsk Group, the only body mandated to tackle the peaceful settlement of the conflict. "The international community's position on the matter is expressed in the declarations adopted by the leaders of the Minsk Group co-chairing countries. In this context, settlement without the Karabakh people's right to self-determination is simply impossible," the president said. "One more thing: the United Nations has never adopted any resolution concerning the settlement of the conflict. The four resolutions adopted by the UN in 1993 were aimed at the cessation of hostilities in Karabakh, which Azerbaijan rejected then." Also, the president assured that Armenia will remain committed to its peace-building and sustainable development vision.
http://www.panarmenian.net/eng/news/249193/
The ovarian cancers in geriatric population: the validity of inflammatory markers, malignancy risk indices 1, 2, 3, 4, and CA-125 levels in malignancy discrimination of adnexal masses.

To investigate the predictive value of the Risk of Malignancy Index (RMI), CA-125, and inflammatory markers in discriminating ovarian cancers (OCs). Postmenopausal (PM) women (n = 139) with adnexal masses who underwent surgery were included. The predictive value of CA-125, RMI (1, 2, 3, and 4) and inflammatory markers [neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR)] was calculated in geriatric (G) and non-geriatric women. OCs had significantly increased NLR and PLR. RMI models were highly reliable in PM women (kappa: 0.642-0.715; AUC: 0.907-0.934). CA-125 measurement alone had good accuracy and moderate reliability in PM women (kappa: 0.507-0.587), and excellent accuracy and moderate reliability in G women. NLR and PLR, in predicting OCs, showed fair agreement in the PM group, while PLR had moderate agreement in the G group. RMI algorithms were the best models for malignancy prediction. However, the rise of PLR and CA-125 levels in a G population may be used for referring adnexal masses to gynecologic oncologists.
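As a rough illustration of how these markers are computed (not taken from the paper itself), here is a hedged Python sketch: NLR and PLR are simple ratios from a blood count, and the commonly cited RMI-1 of Jacobs et al. multiplies an ultrasound score, a menopausal score and the CA-125 level. The exact RMI variants (1-4) and cut-offs used in the study may differ, and all example values below are invented.

```python
# Hedged sketch: the inflammatory ratios and the classic RMI-1 score.
# The formula shown is the commonly cited Jacobs RMI-1 (ultrasound score x
# menopausal score x CA-125); the variants and cut-offs used in the paper
# may differ. All example values are invented.

def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio."""
    return neutrophils / lymphocytes

def plr(platelets: float, lymphocytes: float) -> float:
    """Platelet-to-lymphocyte ratio."""
    return platelets / lymphocytes

def rmi1(ultrasound_score: int, menopausal_score: int, ca125: float) -> float:
    """Risk of Malignancy Index, variant 1: U x M x CA-125."""
    return ultrasound_score * menopausal_score * ca125

if __name__ == "__main__":
    # Invented postmenopausal example: cell counts in 10^9/L, CA-125 in U/mL.
    print(round(nlr(5.2, 1.3), 2))   # 4.0
    print(round(plr(320, 1.3), 1))   # 246.2
    print(rmi1(ultrasound_score=3, menopausal_score=3, ca125=85.0))  # 765.0
```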
Many HSE mental health teams are accessed through GP referral. This is particularly important for people with eating disorders, as medical monitoring of their physical health is part of safe care, and the GP and mental health team work in partnership in this.

Your Core Treating Team

HSE Regional Eating Disorder Teams
These are the new community eating disorder specialist teams being developed under the HSE Model of Care for Eating Disorders. They will provide outpatient and day programme care. Clinicians and therapists on these teams deliver a variety of therapies that are recommended for the treatment of eating disorders. See our Treatment Guide section for more information. HSE eating disorder teams include Consultant Psychiatrists or Child Psychiatrists, Clinical Psychologists, Nurses, Registered Dietitians, Social Workers, Family Therapists, Occupational Therapists and Creative Therapists. As of January 2020, four of these regional eating disorder services have commenced their clinical services.
- Linn Dara Community eating disorder service.
https://ncped.selfcareapp.mobi/seeking-help-1/c/0/i/36562680/hse-services
Abstract: CEP discussion paper
Risk and Evidence of Bias in Randomized Controlled Trials in Economics
Peter Boone, Alex Eble and Diana Elbourne
September 2013
Paper No. CEPDP1240: Full Paper
The randomized controlled trial (RCT) has been a heavily utilized research tool in medicine for over 60 years. Since the early 2000s, large-scale RCTs have been used in increasingly large numbers in the social sciences to evaluate questions of both policy and theory. The early economics literature on RCTs invokes the medical literature, but seems to ignore a large body of this literature which studies the past mistakes of medical trialists and links poor trial design, conduct and reporting to exaggerated estimates of treatment effects. Using a few consensus documents on these issues from the medical literature, we design a tool to evaluate adequacy of reporting and risk of bias in RCT reports. We then use this tool to evaluate 54 reports of RCTs published in a set of 52 major economics journals between 2001 and 2011, alongside a sample of reports of 54 RCTs published in medical journals over the same time period. We find that economics RCTs fall far short of the recommendations for reporting and conduct put forth in the medical literature, while medical trials stick fairly close to them, suggesting risk of exaggerated treatment effects in the economics literature.
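To make the idea of a checklist-style assessment tool concrete, here is a hypothetical Python sketch of scoring a trial report against a reporting and risk-of-bias checklist. The criteria names are illustrative, loosely in the spirit of CONSORT-style items; they are not the actual instrument used by Boone, Eble and Elbourne.

```python
# Hypothetical checklist scorer for RCT reports; the items below are
# illustrative examples, not the instrument used in the paper.

CRITERIA = [
    "random sequence generation described",
    "allocation concealment described",
    "blinding of outcome assessment reported",
    "sample size or power calculation reported",
    "attrition and exclusions reported",
    "primary outcome pre-specified",
]

def adequacy_score(report: dict[str, bool]) -> float:
    """Fraction of checklist items that a trial report satisfies."""
    met = sum(1 for item in CRITERIA if report.get(item, False))
    return met / len(CRITERIA)

if __name__ == "__main__":
    example_report = {
        "primary outcome pre-specified": True,
        "attrition and exclusions reported": True,
    }
    print(f"Adequacy score: {adequacy_score(example_report):.2f}")  # 0.33
```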
https://cver.lse.ac.uk/publications/abstract.asp?index=4310
Over the decades, countless students at The University of Texas at Austin have lived along East Riverside Drive, located southeast of campus. Rich in culture, assets, and history, this neighborhood is also home to a disproportionate amount of violent and property crime, accounting for approximately 4% of all crime in the City of Austin. David Springer, Director of the RGK Center and a Distinguished Teaching Professor in the LBJ School of Public Affairs at UT Austin, is leading the research team on the Riverside Togetherness Project. This three-year, $1 million grant from the Department of Justice, Bureau of Justice Assistance, aims to reduce violent and property crime in hot spots, increase community safety and engagement, build trust between residents and police, and revitalize the targeted neighborhood (see map above). Riverside Togetherness is a cross-sector collaboration involving a number of key stakeholders, including the Austin Police Department, the RGK Center and LBJ School, DOJ's Community Based Crime Reduction (CBCR) program, private businesses, local nonprofits, residents, community leaders, and volunteers. For more detail on this community-university collaboration, check out the video below. The challenges being addressed in the two-square-mile grant area include gun violence, domestic violence, homelessness, prostitution, aggravated assault, and burglary of vehicles. These are not problems that we can simply arrest our way out of. Springer's team is working with law enforcement, nonprofits, and community leaders to implement a number of evidence-based strategies that are smart on crime rather than tough on crime. The research team has created a community-based asset inventory, identifying formal and informal assets that serve the neighborhood. Engaging the nonprofit and philanthropic sector to move the dial on complex social issues is central to revitalizing the Riverside area. The long-term vision is to create sustainable solutions to improve the quality of life for all residents who live and work in this vibrant neighborhood.
https://rgkcenter.org/news/cross-sector-collaboration-and-neighborhood-revitalization-austin
Development of a culturally relevant consumer health information website for Harlem, New York. The process of creating a geographically tailored health information website with ongoing feedback from community members is one of inquiry and discovery, frustration and triumph, and development and reevaluation. This article reviews the development and implementation of GetHealthyHarlem.org, a health literacy level-appropriate consumer health information website tailored to consumers in Harlem, New York City. From 2004 to 2009, the Harlem Health Promotion Center, one of 37 Prevention Research Centers in the United States, sought to determine the use and seeking of online health information in Harlem, New York City in order to further explore the possibility of providing online health information to this community. Specifically, this article details how we sought to identify gaps, concerns, and uses of online health information and health care seeking in this local, predominantly racial and ethnic minority population. We review how we identified and addressed the multitude of variables that play a role in determining the degree of success in finding and using online health information, and include discussions about the genesis of the website and our successes and challenges in the development and implementation stages.
At Bilingual Behavioral Services we use a comprehensive approach in working with children and families. We attend to all areas of development in the young child and design specific objectives and goals.
- Behavior: Exhibits and maintains appropriate social interactions with family members and at school; engages in meaningful activities with others without maladaptive behaviors
- Receptive Communication: Responding to name by turning towards others; following one- and two-step routines; pointing to named body parts; identifying by pointing; attending and joining others for 5 to 10 minutes during activities
- Expressive Communication: Vocalizing with intent; pointing to indicate choice between objects; producing single words; tacting/labeling items; saying no; manding/asking questions
- Social Skills: Attending to other people; engaging in parallel play; responding to greetings; gaining others' attention; playing motor games; sharing toys when requested; identifying affect in others
- Imitation: Imitating steps, motor actions and facial expressions; imitating vocal sounds, animal sounds, single words and sentences
- Play: Demonstrating appropriate play behaviors with a variety of toys according to age and development; completing tasks and putting materials away; occupying self with varied materials with occasional adult guidance; engaging in pretend play and role play; following others' lead in play
- Cognition: Matching, sorting, searching, requesting, categorizing, counting.
https://www.bilingualbehavioralservices.com/our-approach
Pari Shams graduated with distinction in Medicine, Surgery, Clinical Pharmacology and Therapeutics from Guy's, King's and St Thomas' School of Medicine in London in 1999. She also graduated with a Bachelor of Science degree with first-class honours in the field of Anatomy and Basic Medical Sciences and was awarded the Roger-Warwick Prize for achievement in her BSc. She received several other awards and prizes during her time as a medical student.

Miss Shams undertook her early medical training at Guy's and St Thomas' Hospital, the National Hospital for Neurology and Neurosurgery and the Hammersmith Hospital London, becoming a member of the Royal College of Physicians of London in 2002. Her ophthalmic surgical training was undertaken in London at Moorfields Eye Hospital, The Royal Free Hospital, The Chelsea and Westminster Hospital and The Western Eye Hospital, where she was trained by many of the best ophthalmologists in the world. She became a fellow of the Royal College of Ophthalmologists, London in 2011.

Miss Shams' expertise in functional and cosmetic eyelid surgery, lacrimal and orbital surgery was developed during three years (2010-2014) of advanced sub-speciality surgical training at world-leading institutions for oculoplastic, lacrimal and orbital surgery: The University of British Columbia, Vancouver, Canada; The South Australian Institute of Ophthalmology, Adelaide, Australia; The Chelsea and Westminster Hospital, London; and University of Iowa Hospitals and Clinics, Iowa City, U.S.A.

In 2014 Miss Shams was appointed as a full-time Consultant Ophthalmologist to the Adnexal Service at Moorfields Eye Hospital London, where she offers a comprehensive cosmetic surgery service in all aspects of eyelid, lacrimal and orbital disease. Miss Shams led the development of an entirely new Adnexal service at Moorfields South satellite in Croydon. Through the Moorfields Eye Hospital charity and its generous donors she secured funding for a state-of-the-art endoscopic lacrimal service in 2014. Miss Shams practices as part of a large multidisciplinary team including dermatologists and maxillo-facial surgeons to deliver the highest quality of care, patient experience, outcomes and safety to all her patients.

Miss Shams is a full member of the British Oculoplastic Surgery Society (BOPSS), the American Society of Ophthalmic Plastic and Reconstructive Surgery (ASOPRS) and the International Thyroid Eye Disease Society (ITEDS). She specialises in the treatment of conditions that affect the eyelids, brow, cheek, eye socket and tear duct system.

Miss Shams has been actively involved in clinical research over the last 15 years and has contributed numerous peer-reviewed publications to the field of oculoplastic, lacrimal and orbital disease. She has presented her research at over 30 national and international conferences and has received numerous awards and prizes. Miss Shams has been involved in teaching and training of physicians and surgeons for the last 10 years, and has been involved in numerous educational courses for GPs, physicians and ophthalmologists. She is currently Clinical and Educational Supervisor and College Tutor for a group of ophthalmic trainees at Moorfields Eye Hospital.
https://www.parishams.com/dr-shams/
In food production, sustainability is a big issue. According to the UK Government's Food and Farming Report, many current global food production methods are unsustainable in the long term.1 As a consequence, innovation is leading to new methods of producing and growing food for both humans and animals, and the businesses involved often have more in common with process industries than traditional agriculture. This has profound implications when it comes to site and location selection. For innovative food producers, the best location may well be a site more usually associated with chemical and process manufacturing.

Global factors are driving the growth of innovation in food production. Forecasters suggest that there will be a 70% increase in calorific demand by 2050 due to growth in world population2. At the same time, countries are committing to decreasing the output of greenhouse gases, more than 25% of which come from agriculture, forestry, and land-use change2. Additionally, in developed nations there is demand for alternative proteins led by a swing towards plant-based vegetarian and vegan diets, with research showing that about 30% of UK adults plan to eat more meat-free products in 2021 than in 20203.

Investment and innovation in the sector is essential if we are to meet future demand for lower-carbon, sustainable alternatives to traditional agriculture. Innovative food production methods don't only increase the amount of food that can be grown per square metre, but also enable indoor production in areas that would not previously have been suitable for agriculture. A single vertical farm, for example, can grow four hectares (the area of 5 Olympic-size swimming pools) worth of food in less than half a hectare of land. Some companies are using what were previously regarded as waste products to produce animal feed. Others are utilising industrial processes to grow proteins as an alternative to meat. Rather than basing themselves in traditional food production regions, these businesses are looking to industrial sites and locations to meet their needs. Different methods of food production inevitably have different requirements, but low-cost energy and heat, together with specific industrial gases on tap, are necessary for many companies operating in the sector.

Scott Taylor, AVP Business Development at Sembcorp Energy UK, explained why their Wilton International site in Teesside, which is usually associated with the chemical and process sectors, is attracting wider interest from companies in this new sector.

The advantages of a 'process industries' site for food production

"Increasingly we're seeing interest from innovative food production companies," Scott said. "Wilton International offers 'plug and play' infrastructure and services to all our clients, as well as available development land. There's a strong alignment between the site and firms interested in setting up scaled production in the food sector.

"We can provide lower-cost electricity through our private wires network – an essential for companies who are high users of round-the-clock lighting, for example. We're also able to provide heat at various grades through our steam network. Various processes in this industry need heat, whether to keep the ambient temperature suitable for fermentation, cell growth or other food production methods, or to sterilise equipment and ingredients."

Another valuable asset is Wilton International's pipeline network from BOC to facilities on the site.
The benefits of locating within an industrial cluster

"Because we're a part of the Teesside industrial cluster - the largest in the UK - it's possible for what is effectively waste from our customers' industrial processes to be used as feedstock for innovative food production," Scott explained. "We're always keen to find ways of increasing the sustainability and circularity of businesses on the site, which makes it an excellent location for food products grown in vertical farms or by other novel agricultural methods."

"The Teesside cluster is very welcoming to new, incoming businesses," Scott said. "The area's well-established expertise in chemicals, process and general engineering leads to great flexibility within the workforce, and provides a strong knowledge base for innovating businesses. For example, the Centre for Process Innovation (CPI), located in the Wilton Centre right next to Wilton International, partners with industry for R&D and is a centre of excellence for innovative foods and agri-technology."

Freight connectivity is an additional benefit of locating within the industrial cluster, according to Scott.
https://www.wiltoninternational.com/posts/in-the-food-production-sector-the-drive-for-sustainability-is-creating-innovative-businesses-with-very-different-site-and-location-requirements/
As we work remotely and isolate ourselves from friends and colleagues as best we can, the impacts of COVID-19 will continue to hit the technology sector in terms of output and innovation. According to analysis released this week from recruitment firm Robert Walters, the UK's tech industry remains the fastest growing sector in the UK and will remain resilient, as permanent job vacancies in the sector increased by 32.56%, and contract tech roles increased by 48.27%, in comparison to the same period last year.

Ahsan Iqbal, director of technology at Robert Walters, said that as digital infrastructure becomes the focal point for many internal business discussions, he does not anticipate a cancellation or slowdown in tech projects. "In fact, there will be a revised focus on firms' digital offering, with particular attention on improving e-comms channels through better CRM systems, upgraded website capabilities, improved security and enhanced accessibility and use of data.

"As pressure mounts in the coming weeks and months on IT departments to help support remote working capabilities as well as business continuity plans, firms will look to strengthen their teams with contract staff who have prior experience of in-house systems and will be able to hit the ground running."

Despite this optimism, could it be the case that the cybersecurity sector will be critically hit in the short term (from now and for the next six months)? Rick Holland, CISO and VP of strategy at Digital Shadows, said: "Historically, cybersecurity is a sector of the economy where spending still occurs even in economic downturns. There are risks to smaller and emerging firms, but sales revenue and the amount of capital raised provides resilience."

Wim Van Campen, senior director of EMEA Business at Lookout, said that it is "probable that the current disruption will shake out weaker organizations leaving strong businesses in an even greater market position" and any firm that does not have the funds to perform essential business functions such as marketing, customer-facing services or trial offers will be at a significant disadvantage.

Is there a negative outlook for businesses in the cybersecurity industry, despite the optimism of market predictions? Steve Durbin, managing director of the Information Security Forum, disagreed, saying that in the short term, he doubted this would be the case. "If anything, we are seeing a light being shone on the already much talked about skills shortage: it is more likely that businesses will be exposed because they neither have in-house, nor external, access to the necessary skills to deliver their business operations with a remote workforce."

Durbin said he did not see a short-term altering of budgets, but clearly this will come for many organizations as the crisis continues. "It would be an extremely short-sighted business leader who reduced cybersecurity staff at a time when the majority of the workforce is critically dependent on cyber to function."

Etay Maor, CSO at IntSights, added that it is not a matter of the company size, but the value it provides its customers. "Companies will need to grow out of their siloed approach and show their value add in the likes of integration to other products in the security stack and providing professional services," he said.
"Even before the current situation, I heard CISOs talking about consolidation and integration of security offerings – they don't want analysts sitting in front of eight different product screens and then working on tying the data they analyzed – they want less screens with more capabilities and integrations."

It may be the case that budgets are reallocated within businesses. After all, with a reduced travel budget for the 2020-2021 financial year, could this mean more money for IT, as more IT support will be needed? David Greene, chief revenue officer of Fortanix, said that many businesses will have difficult budget decisions to make in the coming months, and hoped that companies "can see where small investments in cybersecurity can remove one set of worries and protect against adding to their list of challenges."

Richard Hughes, head of the technical cybersecurity division at A&O Cybersecurity, explained that with considerable financial challenges to be faced over the next few weeks or months, there will be a reduction in spending in some areas to protect the business as a whole. "We will almost certainly see budgets reduced across the board and I do not doubt that some companies with less mature information security programs may well consider that a reduction in their cybersecurity spend would be without consequence," he said. "Businesses will be looking to spend in areas where they can expect the greatest returns and whilst this is unlikely to be cybersecurity, those tasked with such decisions must consider that although cybersecurity programs rarely increase revenue, they almost certainly protect it."

What about smaller firms and startups, who are just emerging into the daylight of the cybersecurity space? "With a booming global cybersecurity market, it is no surprise to see numerous smaller startup cybersecurity firms vying for a slice of the cake, but these companies could be amongst the worst hit by the challenges faced due to COVID-19," he said. "Without an established customer base and repeat business to help weather the storm, smaller and emerging cybersecurity firms must seek new business to survive, and herein lies the problem."

The cybersecurity industry requires a certain amount of trust, Hughes added, and with a lack of reputation this will be difficult to build without a physical presence, whether that is a facility where you can host a client or a visit to the client's own offices; neither is compatible with the guidelines on travel or social distancing, or with the readiness of individuals to put themselves at increased risk of infection. "Additionally, having activated business continuity plans, potential clients will likely want to focus on keeping the ship afloat with little appetite to enter into new business relationships. It is highly likely we will see some cybersecurity startups fail as a result of such difficulties."

Among the people Infosecurity spoke to about this, there was a real mixture of perspectives on how businesses can survive, and how many would actually be able to survive.
Dave Weinstein, CSO at Claroty, argued that regardless of how long it takes us to get through this global crisis, "businesses that take proactive measures to endure it will be in a much better place than those who continue to deny or ignore its severity."

It is too easy to say in hindsight what steps could and should have been taken by businesses, and many will likely wish that greater investments had been made in IT, remote working capability, BYOD and potentially even concepts like zero trust and software-defined perimeters. However, this may be the normal situation for the next few months, and for those businesses that were prepared and that can be flexible enough to adapt, now is the time to act.

Infosecurity Magazine will be discussing the impact of COVID-19 on the information security industry in this upcoming webinar. Register now for free!
https://www.infosecurity-magazine.com/news-features/short-impact-covid19-industry/
Quantum computing in the newz

Update (10/10). In case anyone is interested, here's a comment I posted over at Cosmic Variance, responding to a question about the relevance of Haroche and Wineland's work for the interpretation of quantum mechanics.

The experiments of Haroche and Wineland, phenomenal as they are, have zero implications one way or the other for the MWI/Copenhagen debate (nor, for that matter, for third-party candidates like Bohm 🙂 ). In other words, while doing these experiments is a tremendous challenge requiring lots of new ideas, no sane proponent of any interpretation would have made predictions for their outcomes other than the ones that were observed. To do an experiment about which the proponents of different interpretations might conceivably diverge, it would be necessary to try to demonstrate quantum interference in a much, much larger system — for example, a brain or an artificially-intelligent quantum computer. And even then, the different interpretations arguably don't make differing predictions about what the published results of such an experiment would be. If they differ at all, it's in what they claim, or refuse to claim, about the experiences of the subject of the experiment, while the experiment is underway. But if quantum mechanics is right, then the subject would necessarily have forgotten those experiences by the end of the experiment — since otherwise, no interference could be observed! So, yeah, barring any change to the framework of quantum mechanics itself, it seems likely that people will be arguing about its interpretation forever. Sorry about that. 🙂

Where is he? So many wild claims being leveled, so many opportunities to set the record straight, and yet he completely fails to respond. Where's the passion he showed just four years ago? Doesn't he realize that having the facts on his side isn't enough, has never been enough? It's as if his mind is off somewhere else, or as if he's tired of his role as a public communicator and no longer feels like performing it. Is his silence part of some devious master plan? Is he simply suffering from a lack of oxygen in the brain? What's going on?

Yeah, yeah, I know. I should blog more. I'll have more coming soon, but for now, two big announcements related to quantum computing.

Today the 2012 Nobel Prize in Physics was awarded jointly to Serge Haroche and David Wineland "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems." I'm not very familiar with Haroche's work, but I've known of Wineland for a long time as possibly the top quantum computing experimentalist in the business, setting one record after another in trapped-ion experiments. In awarding this prize, the Swedes have recognized the phenomenal advances in atomic, molecular, and optical physics that have already happened over the last two decades, largely motivated by the goal of building a scalable quantum computer (along with other, not entirely unrelated goals, like more accurate atomic clocks). In so doing, they've given what's arguably the first-ever "Nobel Prize for quantum computing research," without violating their policy to reward only work that's been directly confirmed by experiment. Huge congratulations to Haroche and Wineland!!

In other quantum computing developments: yes, I'm aware of the latest news from D-Wave, which includes millions of dollars in new funding from Jeff Bezos (the founder of Amazon.com, recipients of a large fraction of my salary).
Despite having officially retired as Chief D-Wave Skeptic, I posted a comment on Tom Simonite’s article in MIT Technology Review, and also sent the following email to a journalist. I’m probably not a good person to comment on the “business” aspects of D-Wave. They’ve been extremely successful raising money in the past, so it’s not surprising to me that they continue to be successful. For me, three crucial points to keep in mind are: (1) D-Wave still hasn’t demonstrated 2-qubit entanglement, which I see as one of the non-negotiable “sanity checks” for scalable quantum computing. In other words: if you’re producing entanglement, then you might or might not be getting quantum speedups, but if you’re not producing entanglement, then our current understanding fails to explain how you could possibly be getting quantum speedups. (2) Unfortunately, the fact that D-Wave’s machine solves some particular problem in some amount of time, and a specific classical computer running (say) simulated annealing took more time, is not (by itself) good evidence that D-Wave was achieving the speedup because of quantum effects. Keep in mind that D-Wave has now spent ~$100 million and ~10 years of effort on a highly-optimized, special-purpose computer for solving one specific optimization problem. So, as I like to put it, quantum effects could be playing the role of “the stone in a stone soup”: attracting interest, investment, talented people, etc. to build a device that performs quite well at its specialized task, but not ultimately because of quantum coherence in that device. (3) The quantum algorithm on which D-Wave’s business model is based — namely, the quantum adiabatic algorithm — has the property that it “degrades gracefully” to classical simulated annealing when the decoherence rate goes up. This, fundamentally, is the thing that makes it difficult to know what role, if any, quantum coherence is playing in the performance of their device. If they were trying to use Shor’s algorithm to factor numbers, the situation would be much more clear-cut: a decoherent version of Shor’s algorithm just gives you random garbage. But a decoherent version of the adiabatic algorithm still gives you a pretty good (but now essentially “classical”) algorithm, and that’s what makes it hard to understand what’s going on here. As I’ve said before, I no longer feel like playing an adversarial role. I really, genuinely hope D-Wave succeeds. But the burden is on them to demonstrate that their device uses quantum effects to obtain a speedup, and they still haven’t met that burden. When and if the situation changes, I’ll be happy to say so. Until then, though, I seem to have the unenviable task of repeating the same observation over and over, for 6+ years, and confirming that, no, the latest sale, VC round, announcement of another “application” (which, once again, might or might not exploit quantum effects), etc., hasn’t changed the truth of that observation. Best,
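Since the email leans on the claim that the quantum adiabatic algorithm "degrades gracefully" to classical simulated annealing as the decoherence rate goes up, here is a minimal, self-contained sketch of that classical baseline on a toy Ising-style problem. The problem instance, cooling schedule and parameters are invented for illustration; this is not D-Wave's algorithm or code.

```python
# Minimal sketch of classical simulated annealing on a toy Ising-style problem.
# The instance (an antiferromagnetic ring) and the cooling schedule are
# illustrative choices, not anything from D-Wave or the post.

import math
import random

def energy(spins, couplings):
    """Ising-style cost: sum of J_ij * s_i * s_j over the coupled pairs."""
    return sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def simulated_annealing(n, couplings, steps=20000, t_start=5.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        i = random.randrange(n)
        spins[i] *= -1                                       # propose a spin flip
        e_new = energy(spins, couplings)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                                        # accept the move
        else:
            spins[i] *= -1                                   # reject: undo the flip
    return spins, e

if __name__ == "__main__":
    random.seed(0)
    n = 8
    # Antiferromagnetic ring: ground state has alternating spins, energy -8.
    couplings = {(i, (i + 1) % n): 1.0 for i in range(n)}
    best_spins, best_energy = simulated_annealing(n, couplings)
    print(best_spins, best_energy)
```

On a more frustrated coupling graph, the same loop illustrates why annealing, whether classical or quantum, can get stuck in local minima, which is exactly where any genuinely quantum speedup would have to show itself.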
https://scottaaronson.blog/?p=1136
2. (n.) Disinclination to action or labor; sluggishness; laziness; idleness. 3. (n.) Any one of several species of arboreal edentates constituting the family Bradypodidae, and the suborder Tardigrada. They have long limbs and long prehensile claws. Both jaws are furnished with teeth (see Illust. of Edentata), and the ears and tail are rudimentary. They inhabit South and Central America and Mexico. 4. (v. i.) To be idle.
https://bibleapps.com/s/sloth.htm
Ulysses S. Grant served as U.S. general and commander of the Union armies during the late years of the American Civil War, later becoming the 18th U.S. president.

Who Was Ulysses S. Grant?

Ulysses S. Grant was entrusted with the command of all U.S. armies in 1864 and relentlessly pursued the enemy during the Civil War. In 1869, at age 46, Grant became the youngest president in U.S. history to that point. Though Grant was highly scrupulous, his administration was tainted with scandal. After leaving the presidency, he commissioned Mark Twain to publish his best-selling memoirs.

Early Years

Grant was born Hiram Ulysses Grant on April 27, 1822, in Point Pleasant, Ohio, near the mouth of the Big Indian Creek at the Ohio River. His famous moniker, "U.S. Grant," came after he joined the military. He was the first son of Jesse Root Grant, a tanner and businessman, and Hannah Simpson Grant. A year after Grant was born, his family moved to Georgetown, Ohio, where he had what he described as an "uneventful" childhood. He did, however, show great aptitude as a horseman in his youth.

Grant was not a standout in his youth. Shy and reserved, he took after his mother rather than his outgoing father. He hated the idea of working in his father's tannery business—a fact that his father begrudgingly acknowledged. When Grant was 17, his father arranged for him to enter the United States Military Academy at West Point. A clerical error had listed him as Ulysses S. Grant. Not wanting to be rejected by the school, he changed his name on the spot.

Grant didn't excel at West Point, earning average grades and receiving several demerits for slovenly dress and tardiness, and ultimately decided that the academy "had no charms" for him. He did well in mathematics and geology and excelled in horsemanship. In 1843, he graduated 21st out of 39 and was glad to be out. He planned to resign from the military after he served his mandatory four years of duty.

Early Career

After graduation, Lieutenant Grant was stationed in St. Louis, Missouri, where he met his future wife, Julia Dent. Grant proposed marriage in 1844, and Dent accepted. Before the couple could wed, however, he was shipped off for duty. During the Mexican-American War, Grant served as quartermaster, efficiently overseeing the movement of supplies. Serving under General Zachary Taylor and later under General Winfield Scott, he closely observed their military tactics and leadership skills. After getting the opportunity to lead a company into combat, Grant was credited for his bravery under fire. He also developed strong feelings that the war was wrong, and that it was being waged only to increase America's territory for the spread of slavery.

After a four-year engagement, Grant and Dent were finally married in 1848. Over the next six years, the couple had four children, and Grant was assigned to several posts. In 1852, he was sent to Fort Vancouver, in what is now Washington State. He missed Dent and his two sons — the second of whom he had not yet seen at this time — and thus became involved in several failed business ventures in an attempt to get his family to the coast, closer to him. He began to drink, and a reputation was forged that dogged him all through his military career. In the summer of 1853, Grant was promoted to captain and transferred to Fort Humboldt on the Northern California coast, where he had a run-in with the fort's commanding officer, Lieutenant Colonel Robert C. Buchanan.
On July 31, 1854, Grant resigned from the Army amid allegations of heavy drinking and warnings of disciplinary action. That same year, Grant moved his family back to Missouri, but the return to civilian life led him to a low point. He tried to farm land that had been given to him by his father-in-law, but this venture proved to be unsuccessful after a few years. Grant then failed to find success with a real estate venture and was denied employment as an engineer and clerk in St. Louis. To support his family, he was reduced to selling firewood on a St. Louis street. Finally, in 1860, he humbled himself and went to work in his father's tannery business as a clerk, supervised by his two younger brothers. American Civil War On April 12, 1861, Confederate troops attacked Fort Sumter in Charleston Harbor, South Carolina. This act of rebellion sparked Grant's patriotism, and he volunteered his military services. Again he was initially rejected for appointments, but with the aid of an Illinois congressman, he was appointed to command an unruly 21st Illinois volunteer regiment. Applying lessons that he'd learned from his commanders during the Mexican-American War, Grant saw that the regiment was combat-ready by September 1861. When Kentucky's fragile neutrality fell apart in the fall of 1861, Grant and his volunteers took the small town of Paducah, Kentucky, at the mouth of the Tennessee River. In February 1862, in a joint operation with the U.S. Navy, Grant's ground forces applied pressure on Fort Henry and Fort Donelson, taking them both — these battles are credited as the earliest significant Union victories of the American Civil War. After the assault on Fort Donelson, Grant earned the moniker "Unconditional Surrender Grant" and was promoted to major general of volunteers. Battle of Shiloh, Vicksburg Siege and the Battle for Chattanooga In April 1862, Grant moved his army cautiously into enemy territory in Tennessee, in what would later become known as the Battle of Shiloh (or the Battle of Pittsburg Landing), one of the bloodiest battles of the Civil War. Confederate commanders Albert Sidney Johnston and P.G.T. Beauregard led a surprise attack against Grant's forces, with fierce fighting occurring at an area known as the "Hornets' Nest" during the first wave of assault. Confederate General Johnston was mortally wounded, and his second-in-command, General Beauregard, decided against a night assault on Grant's forces. Reinforcements finally arrived, and Grant was able to defeat the Confederates during the second day of battle. The Battle of Shiloh proved to be a watershed for the American military and a near disaster for Grant. Though he was supported by President Abraham Lincoln, Grant faced heavy criticism from members of Congress and the military brass for the high casualties, and for a time, he was demoted. A war department investigation led to his reinstatement. Union war strategy called for taking control of the Mississippi River and cutting the Confederacy in half. In December 1862, Grant moved overland to take Vicksburg — a key fortress city of the Confederacy — but his attack was stalled by Confederate cavalry raider Nathan Bedford Forrest and by his forces getting bogged down in the bayous north of Vicksburg. In his second attempt, Grant cut some, but not all, of his supply lines, moved his men down the western bank of the Mississippi River and crossed south of Vicksburg.
Failing to take the city after several assaults, he settled into a long siege, and Vicksburg finally surrendered on July 4, 1863. Though Vicksburg marked both Grant's greatest achievement thus far and a morale boost for the Union, rumors of Grant's heavy drinking followed him through the rest of the Western Campaign. Grant suffered from intense migraine headaches due to stress, which nearly disabled him and only helped to spread rumors of his drinking, as many chalked up his migraines to frequent hangovers. However, his closest associates said that he was sober and polite and that he displayed deep concentration, even in the midst of a battle. In October 1863, Grant took command at Chattanooga, Tennessee. The following month, from November 22 to November 25, Union forces routed Confederate troops in Tennessee at the battles of Lookout Mountain and Missionary Ridge, known collectively as the Battle of Chattanooga. The victories forced the Confederates to retreat into Georgia, ending the siege of the vital railroad junction of Chattanooga — and ultimately paving the way for Union General William Tecumseh Sherman's Atlanta campaign and march to Savannah, Georgia, in 1864. Union Victory
The term quite literally means “barrier”. - disk partition - A segment of disk storage capacity. Often a partition is dedicated to store a filesystem volume. Since the process of creating a filesystem volume will often use every bit of space available on a disk or partition, smaller volumes can be created by splitting the disk into smaller segments. Generally, the term “partition” means a segment of a disk. However, a BSDlabel/disklabel “partition” may refer to a segment of a disk which exists entirely within another partition such as an MBR-style partition. In environments (such as BSD operating systems on x86 platforms) where such ambiguity exists, it is a great idea to try to explicitly clarify which sort of partition is meant when using the term “partition”. (This may be discussed further by different types of partition terminology.) A “partition scheme” generally refers to how partitions are laid out on a disk. This will include the size of the partition, and where each partition starts. Other details about partitions, such as the value of a “type” identifier, might also be considered to be part of the partition scheme for that disk. More details, about creating a partition scheme/layout on a disk, are available in the section about disk layouts. - cubicle partition - A “partitioning wall” (or “partition wall”) piece may be a segment of a wall that is about 1.75 meters tall. (There is roughly 4 to 5 feet, using “customary” measurements.) They are often approximately as wide as they are tall, giving them a rather square shape. Multiple of these pieces may be attached, creating walls that are relatively easy to move if there's a desire to do so. They often have a cloth surface, and may have a plexiglass window so a portion of them may be seen through. This is commonly used to allow a large area (room) in a building to be inexpensively segmented, so individual staff members may have an area to work that is seperated from other workers. In addition to providing a small sense of individual space (compared to if there were no cubicle walls), they often serve to help reduce how much sound spreads, such as the voices of people who are talking into a telephone (and not necessarily desiring that other people, especially other telephones, end up hearing the sound). - [#patabbr]: PAT - See: port address translation - PC - Personal computer. This may generically be used to refer to the platform of computers compatible with an IBM PC, including 32-bit x86 computers. The term has also been used to refer to x86-compatible platforms, including 64-bit computers which are largely compatible with 32-bit x86 applications even though 64-bit systems may not be as compatible with the 16-bit IBM PC. The term may also be used to reference an x86-compatible workstation, contrasting such a computer to upper-end server-class hardware, since the technology of many common workstations is technology fundamentally similar to personal computers. - [#periphrl]: peripheral - A peripheral device generally refers to an add-on device that may be less critical for basic computer operation. In the days when MS-DOS was the most common OS being used on a generic-brand PC, the term “peripheral device” could include speakers, microphones, and even a mouse, while the monitor would be considered a key part of the base system. This was before mice started to be commonly used for a large variety of tasks. 
Some older text books correctly identified the term “peripheral” as generally being an abbreviation referring to the longer phrase “peripheral device” or a “peripheral card”. A sound card, for example, was generally considered to be a luxary add-on. (Since then, sound cards started to get to be built into motherboards. So whether a specific type of device is considered to be peripheral may change over time.) The term may still refer to concepts of being less necessary, or being on the “outside” edge. (In life, beyond the realm of computers, the term is probably most commonly used when referring to “peripheral vision”.) - [#pid]: PID (“process ID”) - [#pixel]: pixel - This is said to be an abbreviation for “picture element”. Many people understand the pixel to be the smallest unit that a monitor can display. This can be confusing when people then try to learn about parts of monitors (especially CRT monitors) such as phosphors and shadow masks, and hear about different sized pixels. To be clear, a pixel may not be the smallest element: a pixel may be made up of a group of phosphors. However, the pixel is the smallest element which can be uniquely identified by hardware such as the video card, and by software which controls that hardware. An individual monitor may implement each pixel by combining multiple small phosphors that are affected by the electrons that the monitor is using. The amount of phosphors used per pixel (which would affect the actual size of the pixel) may vary between monitors and be affected by things like the video screen resolution, but this isn't something that is available to the video card or any other circuitry in the computer or any software that is using that circuitry. - [#pointdvc]: pointing device - See: rodent. - [#policy]: - - Technical term: policy - This may refer to settings that are designed to help rules be automatically enforced. One example is Microsoft Windows having “local policy”. Perhaps even more famous is the “group policy” functionality of Microsoft Windows, which basically involves client machines asking a server for policy settings and then using those settings to affect the “local policy” settings. SearchSecurity TechTarget: Differentiating between policies, standards, procedures and technical controls seems to identify this concept with the term “technical control”. A control is used to help enforce the rules of administration's policies. - [#polcyrul]: Administrative term: policy - The term “policy” may be distinct from some other terms, such as standard, procedures, and guidelines. Basically, a policy refers to a set of rules about what is expected by administrators/management. - [#pop]: POP - - Post Office Protocol - Most commonly implemented as POP3, this refers to a protocol that E-Mail clients have used to communicate with E-Mail servers to check for new mail. This is an older, and basically inferior, option to IMAP4 which provides end users with support for multiple E-Mail folders. However, POP3 is a bit simpler in some ways, and so may be an easier protocol for a software programmer to start implementing. - Point-of-Presence - A term quite often used when discussing a telecommunications company. This location provides services to additional organizations. A demarc point may be directly connected to the POP, or might even be located at the POP. (The common location for a point of demarcation may vary in different locations, such as different nations.) - [#port]: port - - A hardware port - A connector that a plug gets placed into. 
Also known as a “jack”. - Software ports - Software designed for one platform may be “ported”, which means that support is then provided for a different platform. The resulting software may be called a “port”. For OpenBSD, the term “port” may refer to a software program/collection. Specifically, the term “port” refers to the collection of files needed to make a release of the software, so the software's source code is certainly part of the port. (The term “package” is a specific term in OpenBSD jargon, and refers to compiled software that was created using the port.) - Hardware platform - NetBSD's introduction to ports says “a supported architecture” (a.k.a. a platform of computer hardware) is called a “port”. - [#prtadrtr]: port address translation (“PAT”) - Port Address Translation. RFC 3022 (info about NAT) describes this as “Network Address Port Translation, or NAPT”, although it is often referred to as “Port Address Translation (PAT)” (as can be seen by RFC 4925 section 2.3). Commonly, this involves a firewall allowing outgoing traffic. The traffic is then modified so that the “source” IP address is translated, probably most commonly made to appear as though the traffic is coming from the device that is doing the translation. The device that performed the translation also keeps track of where it received the traffic from, probably by storing this information in a section of memory that may be called a “table”. Then, reply traffic will go to the device that performed the translation. The device that performed the translation can then look in its memory to see where the reply traffic should be forwarded to. - [#post]: POST - Power-on Self Test. When the computer first turns on, some typical behaviors include verifying that it seems like detected hardware is responsive, that critical hardware exists, and that a simple memory check completes okay. For most common desktop PCs, “critical hardware” refers to a CPU, RAM, and a video output display. If there is no RAM or there is no functioning video display, the motherboard may try to use audio equipment to emit “beep codes”. The memory check is not a very thorough test. Mainly, the computer is just trying to detect that it seems like memory is responsive at certain addresses. If the memory seems to respond, the test might not have any reason to report errors, even if the memory does not store information very reliably at all. More thorough memory testing can be performed by using other software, as mentioned in the sections on RAM testing (perhaps more commonly just called “memory testing”), and the other sections on warning about hardware damage and multiple component testing. - [#pots]: POTS - “Plain ol' telephone service” refers to using standard landline phone service, without the broadband data speeds offered by newer technologies such as ISDN or newer (notably DSL). This service was able to provide the needed technology for voice communications, dial-up modems, and fax machines. Regulatory limits on electrical output power were blamed for preventing modems from getting faster than about 56 kbps when POTS was used, which led to demand for new services that could provide faster data rates. Telecommunications companies were able to offer these services for higher prices than just standard phone service. Related details may be found at: Glossary for PSTN, POTS Modem, download POTS speed increases. - [#ppid]: PPID - Parent Process ID. This is the PID of a parent process.
When a “process” (a running program) runs another program, the first program may be called a “parent”. Each program may be assigned a PID (“process ID”). The PID of the parent process may be referred to as the PPID of the child process. - [#procedur]: procedure - A set of steps. - Programming term: procedure - To get a definition for, and to compare, the terms procedure, function, routine, method, and program, see: Functions/Procedures/Methods/Routines. - Administrative term: procedure - In contrast to other terms (like procedures, standards, and guidelines), the term “procedure” may refer to a series of steps. For example, a document that describes how to tell a computer to shut down “cleanly” could be a procedure. This is not really a series of expectations (like a policy or a standard), but rather, is more of a technical how-to document. A procedure describes how to accomplish a task. For example, a procedure might specify some directions on how to quickly obtain the information needed to create a report. This is different than a standard, which might specify how much detail is required for the result to be considered “sufficient” quality. This is also different from a policy, which specifies what is actually expected. - [#process]: process - - running process - At least sometimes, this term refers to the concept of a running instance of a program. If an operating system runs a text editing program, that is a process. If the user then starts up a second copy of the text editing program in a multitasking operating system, then the second copy of the same program is another process. This term was used heavily in Unix environments, where the command was used to show “process information” (info about one or more processes). Microsoft Windows has started to have some software that can show a unique “Process ID” number that gets assigned to each process, so the term has also become more relevant for administrators of Microsoft Windows machines. ps - set of procedural steps - (e.g. in contrast to a policy/procedure) - protocol - The term “protocol” refers to a set of rules, or perhaps (in some cases) conventions. In the realm of diplomacy, diplomats may be expected to observe certain protocols so that they please, rather than offend, people from a different culture. In the realm of network communications, the term protocol is often referring to a certain set of standards that need to be followed. Devices that send information are expected to follow certain rules when assembling and transmitting groups of information, so that the receiver of the data can correctly interpret the information that has been sent. This includes situations where the direct receiver of the data may be some sort of network infrastructure (such as a router, or even a switch) which receives the data just for the purpose of then relaying the information to another device. To accomplish this, sending and receiving devices tend to follow commonly-accepted rules so that the data may be interpreted the same way. A collection of such rules (which are often very detailed, nitpicky rules) is often referred to as a “protocol”. - [#pdu]: protocol data unit (“PDU”) - The term PDU refers to a unit/group/bunch/collection of data that is used when devices communicate. Most/all common protocols that operate on Layer 3 of the OSI Model (the “Network” layer) will use a PDU called a “packet”. Most/all common protocols that operate on Layer 2 of the OSI Model (the “Data Link” layer) will use a PDU called a “frame”. 
The concepts behind a packet and a frame are the same, but using these different terms helps people to realize what layer of the OSI Model is being used. This way, when people talk about frames (even if discussing a relatively unfamiliar technology), other people can understand that the devices being used are likely processing MAC-48 addresses, are not paying attention to IP addresses, and that the unit of data is not likely to be routed to a different subnet. (Instead, if the payload of that data needs to be routed, a new frame will get created.) For these types of reasons, using the correct terminology is preferred. - [#ps2]: PS2 - For people who play video game consoles, this may be an abbreviation for “Playstation 2”. Computer gamers will often be familiar with that abbreviation from video gaming jargon. The logo for the original “Playstation 2” gaming console looked like the three characters PS2, and the Playstation 2 Slimline was more well known as a PS2 Slimline or PS2 Slim. PS2 may also be a misspelling of PS/2. For computer gamers, the term PS2 may cause some confusion from ambiguity caused by the homonymity with the term PS2. - [#persys2]: PS/2 - The term PS/2 refers to the IBM Personal System/2 computers, which were newer than the PS/1 systems that were made by IBM. The PS/2 computer systems included several new standards, such as VGA, but may be most well known for the PS/2 keyboard ports and the PS/2 mouse ports. - [#ps2port]: PS/2 ports - See: PS/2 ports. For computer gamers, the phrase PS/2 may cause some confusion from ambiguity caused by the homonymity with the term PS2. - PSTN - “public switched telephone network”. This refers to telephone service provided by a landline/wired telephone company used by the general public. See: Communications hardware: POTS Modem for details related to PSTN (much of which may be details specific to using computer equipment with the PSTN). See also: POTS. - [#pupsftwr]: PUP - Officially, the term stands for “potentially unwanted program”, although in many cases the term “probably unwanted program” is just as accurate and more precise. This refers to software which is typically not desired to be used, but which is often run anyway, possibly as the result of being bundled with an installer that also installs other software. Some anti-malware software has been known to check for PUPs. The reason that such software is called a PUP is probably due to an effort by anti-malware software vendors to not get sued for defamation (negative statements), libel (inaccurate negative written statements), or slander (inaccurate negative spoken statements). A number that is assigned whenever a software program is started. The number is unique to the computer, so every program running on the computer will have its own unique PID. If there are multiple copies of the program running, then the program has started multiple times, and each copy received a PID when it was started. Note that multiple programs may simply be individual pieces of another, larger program. In other words, a (larger) program may have multiple pieces, and these pieces may be separate individual programs. In this case, each piece may be considered to be a separate process, so there may be multiple PIDs when a user starts up a single program.
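The PID/PPID entries above lend themselves to a short, concrete illustration. The following Python sketch is not part of the original glossary; it simply demonstrates, assuming a system where the standard library's process functions are available, how a process can read its own process ID and its parent's, and how a child it spawns sees the same relationship from the other side. The printed numbers are assigned by the operating system and will differ on every run.

```python
import os
import subprocess
import sys

# Every running process is assigned its own PID by the operating system.
print("my PID:", os.getpid())
# The PPID is simply the PID of the process that started this one.
print("my parent's PID (PPID):", os.getppid())

# Spawn a child process: from the child's point of view, our PID is its PPID.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import os; print('child PID:', os.getpid(), '| child PPID:', os.getppid())"]
)
child.wait()

# The parent also learns the child's PID from the handle it gets back.
print("child PID as seen by the parent:", child.pid)
```

On a Unix-like system the same parent/child relationship can be inspected with the ps command mentioned in the process entry above.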
https://cyberpillar.com/dirsver/1/mainsite/glossary/glossryp.htm
Olivia Carreti is interested in understanding how organisms interact with changing habitats, especially in light of climate change and increased anthropogenic influences in the marine environment. She received her BA from St. Mary’s College of Maryland where she studied settlement patterns of blue crabs in shifting SAV communities in the Chesapeake Bay. Olivia now comes to us from the Dauphin Island Sea Lab where she studied the potential impacts of exotic tiger shrimp on native shrimp species in the Gulf of Mexico. For her MS at NC State, Olivia will be studying the underwater soundscapes and fish production of heavily fished oyster reefs in Pamlico Sound to understand their total contribution to the estuarine soundscape and fish community. Kayelyn Simmons Kayelyn Simmons received her MS degree from Nova Southeastern University where she researched the parasite diversity within the lionfish complex, Pterois volitans/miles, in the Western Atlantic, Caribbean, and the Gulf of Mexico. For her doctoral research at NC State University, Kayelyn will be exploring the impacts of sport diver harvest behavior on lionfish populations and the surrounding reef habitat under three different marine sanctuary zones. Erin Voigt Erin Voigt is interested in community and landscape ecology as well as biogenic habitat conservation. She received her BA from St. Mary’s College of Maryland where she studied the effects of coastal acidification on juvenile oyster calcification and her MS from San Diego State University where she studied the effects of mesograzer biodiversity and structural complexity on eelgrass ecosystem function. For her PhD research she is planning to examine how coastal landscape mosaics affect biogeochemical cycling in relation to oyster health. Daniel Bowling Daniel Bowling earned his BS in Marine Biology and Environmental Science at the University of North Carolina at Wilmington. Following this, he spent many years working and conducting research abroad in Australia and Fiji where he worked with local communities, NGOs, government, universities, and tourism to address local and regional issues, protect critical habitats, study endangered and vulnerable species, and conduct education and community outreach. His interests are in applied ecology where inter-disciplinary science can investigate and develop real-world and practical solutions. For his PhD., he is working with local oystermen, The Nature Conservancy, North Carolina Division of Marine Fisheries, and regional university colleagues to design and test a statistically robust, cost-effective fishery-independent population survey methodology for subtidal and intertidal oysters in North Carolina. This new sample design aims to fill some of the data gaps currently surrounding the fishery, which will allow for the future development of a stock assessment. Ian Grace Ian Grace is interested in population dispersal and connectivity. He received his BS degree in Marine Science from North Carolina State University where he researched methane seep habitat composition and species diversity. For his MS research, he is using LA-ICPMS to assess elemental composition in mussel shells and identify spatial, bathymetric, and temporal patterns in composition that may infer dispersal among deep-sea methane seeps in the Gulf of Mexico and on the West Atlantic Margin. Melissa LaCroce Melissa LaCroce is interested in community and benthic ecology as well as habitat conservation. 
She received her BA from Manhattanville College where she studied the distribution of predator species, Hemigrapsus sanguineus and prey species, Littorina littorea in western Long Island Sound rocky intertidal zones. After working as an environmental educator aboard traditional sailing vessels on the Long Island Sound and Chesapeake Bay she obtained her MS from the University of North Carolina Wilmington. There she studied the seasonality and recruitment of the epibenthic community on an Onslow Bay, North Carolina hard bottom. As the technician in the Marine Ecology and Conservation Lab she plans to assist all current and future students with their projects and continue to be a life-long learner. Melissa is looking forward to learning more about drone mapping and soundscape metrics.
https://cmast.ncsu.edu/eggleston/eggleston-people/
Happy almost the end of October, y’all! For this week’s Cookbook Sunday dinner, I made a modified version of Sean Brock’s recipe for crispy fried farm eggs with fresh cheese, pickled mushrooms, watercress, and red-eye vinaigrette. Like the previous recipes, it’s a long one, with many steps and a delicious ending. Thankfully, many of the steps can be completed ahead of time, making the actual execution a little quicker than you might think on first glance. It looks like a doozy of recipe but the pickled mushrooms are done at least a week early; the farmer’s cheese can be done the day before; the eggs can be soft-boiled and peeled the day before, and stored in the refrigerator; and the dressing can be made the day before. In addition to cutting the recipe in half, I made quite a few modifications to the recipe in Heritage. The original recipe specified chanterelles–we had cremini and shiitakes, so those were the mushrooms that were pickled. The original also used rendered ham fat; I’m mostly a vegetarian, minus occasional seafood, so, the ham fat was out. I added a little smoked sea salt instead, and a little more oil. I also modified how I made the farmer’s cheese, following the recipe from Kenji Lopez at Serious Eats that’s pretty fool-proof and delicious. The other substitution was for the honey vinegar. This is a vinegar made by taking equal parts honey and water and aging for 2 years; it’s not readily available and it’s a little expensive (~$49 for 16.9 oz). So, I made a fly-by-the-seat-of-my-pants-kind of substitution using equal parts honey and Champagne vinegar; this was actually delicious all on its own. Now, I’ve never fried an egg like this, so, it was an adventure, and the play on texture was great with the thin crispy crust, the just-set egg white, and the velvety yolk that coated the greens when the egg was cut open. Admittedly, I wasn’t a big fan of the watercress that I used, and I don’t know if it’s because I don’t like watercress (too much stem) or I just bought the wrong stuff, or it’s just the wrong time of year for watercress. I would make this again but probably with different greens, like baby arugula, thinly sliced kale, thinly sliced escarole, or some mix of all of it. The eggs and the pickled mushrooms were the big winners for this dish, taste-wise, and it plated beautifully. Apologies for not measuring the weights for this one; I blame the weirdest Seahawks game ever. Crispy fried farm eggs with fresh cheese, pickled mushrooms, watercress, and red-eye vinaigrette Original recipe served 6; modification is for 3 Heritage by Sean Brock Pickled Mushrooms 1 1/2 lbs Mushrooms (chantarelles, cremini, shiitake, etc.) 3/4 C Rice wine vinegar 1/4 C Apple cider vinegar 3/4 C Sugar 1/4 C Honey 1 Tbsp + 2 tsp Dijon mustard 1 Tbsp + 2 tsp Whole-grain mustard 2 tsp kosher salt 2-3 Thyme sprigs 1 Fresh bay leaf Fresh Farmer’s Cheese 2 C Whole milk 1/4 C Heavy cream 1 Tbsp + 1 tsp White vinegar 1/8 tsp Salt Pinch freshly ground pepper Vinaigrette 2/3 C Cider Vinegar 2 Tbsp Honey Vinegar* 1 Tbsp Smoked Salt* 2/3 C Grapeseed oil 1 1/2 tsp Sugar 2 tsp Lemon juice 2 tsp Instant coffee granules Crispy Fried Eggs 4 Large eggs 2 C Canola oil 1 C All-purpose flour 2 C Panko bread crumbs* Kosher salt Freshly ground black pepper Salad 2 C Watercress, washed and dried 1/4 C Red onion (1/2 small red onion), thinly sliced Details: Pickled Mushrooms - Pickle the mushrooms at least 1 week in advance! - Lightly rinse the mushrooms—do not soak them. 
- Dry the mushrooms, cut into bite-sized pieces, and put in a clean glass container. - Combine the vinegars, sugar, honey, Dijon mustard, whole grain mustard, salt, thyme, and bay leaf in a small stainless steel saucepan, stir well, and bring to a boil over medium-high heat. - Once the sugar and honey are dissolved, pour the brine over the mushrooms, and cool to room temperature. - Cover and refrigerate for at least 1 week before eating. - Mushrooms will keep up to 2 weeks in the refrigerator. Fresh Cheese - Line a fine strainer with cheesecloth or a couple of paper towels. - Combine milk, cream, vinegar, and salt in a microwave-safe container. - Whisk to combine. - Heat on high for 3-5 minutes—curds should form. - Pour the hot mixture into the lined strainer; let sit for at least 4 hours at room temperature, or preferably overnight in the refrigerator. - Once the cheese is firm, transfer from the cheesecloth (or paper towels) to a bowl and add the pepper—stir to combine. The whey that drained out of the cheese can be used for cooking grits, adding to pancakes, or to smoothies—refrigerate for up to 5 days or freeze for up to 3 months. - Cheese will keep, tightly covered, in the refrigerator for up to 3 days. Vinaigrette - Combine the vinegars in a bowl (or large-mouth jar with a tight-fitting lid). - Whisk in the remaining ingredients, or combine with the vinegars in the large-mouth jar, screw the lid on tightly, and shake vigorously to combine. - Sean Brock notes that this will be a broken vinaigrette and won't be smooth. - Keep refrigerated for up to 3 days. Crispy Fried Eggs - Fill a medium saucepan with water and bring to a boil over high heat; add 1 Tbsp salt. - Make an ice bath in a medium bowl with equal parts ice and water. - Once the water is boiling, use a slotted spoon to add the 3 eggs to the water. Cover and boil for exactly 5 minutes and 15 seconds, then transfer to the ice bath. - As soon as the eggs are cool enough to handle, peel the eggs while still in the ice bath, then remove and drain well. Note, be patient with peeling the eggs—these are very softly boiled! - Set up a breading station with flour (lightly salted) in one bowl, one egg whipped with about a teaspoon of water in a second bowl, and the bread crumbs (lightly salted) in the last bowl. - Heat the canola oil to 350°F. - Bread the eggs by first dredging in the flour, then the egg, and finally the bread crumbs. - Fry until golden (about 2 minutes)—the egg yolks should be runny when the eggs are cut into. To complete - Combine the watercress (or baby greens), onion, and 1-2 tablespoons vinaigrette, or more to your taste—toss to combine. - Divide the salad between the plates. - Crumble 2 Tbsp cheese over each plate, top with the fried egg and 5-6 pickled mushrooms. *Notes: – The original recipe called for honey vinegar–but, as I said earlier, I had none of this on hand and just sort of winged it! – The smoked salt was my addition and was originally 1/4 C plus 1 Tbsp rendered ham fat. – For the Panko, the recipe said to finely grind the crumbs in a processor; I didn't do this and it worked great!
https://wellandcrafted.com/2016/10/25/cookbook-sunday-dinner-heritage-week-3/
Audio recording of discussion event on 17 March 2011 The orthodoxies, role and value of libraries - and the place of paper in an increasingly digital age - were discussed at this National Library of Scotland panel event. The panel consisted of: - Martyn Wade FRSA (National Librarian) - Chris Banks FRSA (University Librarian, University of Aberdeen) - Karen Cunningham (Head of Libraries, Glasgow City Council). They considered topics such as: - What physical designs should 21st-century libraries have, and what role? - With immediately accessible information online, do libraries remain important? The discussion was chaired by Ann Packard of the Royal Society of Arts. It was followed by a lively question-and-answer session that touched on many different opinions. Recording © National Library of Scotland. Opinions expressed in this recording do not necessarily reflect those of the National Library of Scotland.
https://www.nls.uk/events/audio-recordings/21st-century-libraries/
Migrants: Sport and integration, Coni's project in Catanzaro Catanzaro - It's called 'Sport and integration' and it's a project approved and financed by Coni's regional committee, an initiative by the provincial delegation of Catanzaro, to contribute to the integration of young migrants, especially minors, in Calabria. The proposal was approved during a meeting of the Olympic committee, during which the 2017 budget was reorganized and the required funds were identified. A note says 'the project, conceived by provincial delegate Giampaolo Latella, aims at promoting the social integration of foreigners through sport, by countering racial discrimination and intolerance and, especially, by promoting an idea of society open to multiple and multi-ethnic cultures'. 'Sport and integration', it says, is intended for young migrants living in shelters in the province of Catanzaro, and involves holding sport introduction classes, competitive training lessons, and technical and regular training. It is a program during which attendees will be assisted by technical and professional trainers (not exclusively from the Olympic world), who will be in charge of helping youngsters through the integration process, teaching them different skills, including general rules of behavior, assistance, hygiene and safety. In Catanzaro, Coni 'has put a lot of faith in this project', which is of 'great social value for a region, such as Calabria, that stands out for the quality of the hospitality offered to migrants'. Considering the importance it attributes to these issues, Coni's objective in the region's capital city is to get the best shelter facilities involved, based mainly on the principles of transparency and legality. The spirit of the 'Sport and integration' project, it says, 'aims at highlighting the power of sport when it comes to creating a sense of community: nothing is better than practicing sports to get young people with different ethnic origins to meet and bond'. Spending time together to get through adversity and reach a common objective is the spirit of both sports activities and social life. This is why, Coni explains, the project 'Sport and Integration' can play 'a significant role against frequent forms of xenophobia, helping young people to share a system of rules, develop the culture of respect and living together and, especially, build a common sense of belonging with people of the same age who come from a foreign context'.
https://international.agi.it/news/migrants_sport_and_integration_coni_s_project_in_catanzaro-2100734/news/2017-08-29/
Develop sport, culture and education across the Asian continent through OCA's continuous efforts. Excellence Being your best: taking part, making progress and enjoying a healthy combination of body and mind matter as much as winning. Respect Learning to compete peacefully under one unified set of sporting rules and regulations leads to respect for your competitors and for society. Friendship Sport is an instrument for mutual understanding between people all over the world. Fair Play The concept of sport is equity and balance, which can also be applied in many different ways and contexts beyond the field of play. It can lead to the development and reinforcement of similar behaviour in one's everyday life. Faster, Higher, Stronger The Olympic “Citius, Altius, Fortius” motto is a call to scale the heights, broaden horizons, reset standards, beat the clock, and better the best.
https://ocasia.world/
A geographic overview of Porcupine Seabight and the mound provinces is shown on Figure F1. Three distinct mound provinces have been identified: the Hovland, Magellan, and Belgica mound provinces. Hovland mounds The Hovland mounds are the first mound occurrences reported from industrial data on the northern slope of Porcupine Basin (Hovland et al., 1994) that led to the unveiling of a complex setting with large multiphased contourite deposits and high-energy sediment fills, topped by a set of outcropping mounds or elongated mound clusters as high as 250 m (Henriet et al., 1998; De Mol et al., 2002). Magellan mounds The Hovland mounds are flanked to the north and west by the crescent-shaped, well-delineated Magellan mound province, which is characterized by a very high density of buried medium-sized mounds (1 mound/km2; average height = 60–80 m) (Huvenne et al., 2003). High-resolution seismic data combined with 3-D industrial seismic data (Huvenne et al., 2002) has shed some light on the presence of a past slope failure that partly underlies the mound cluster. Belgica mounds On the eastern margin of Porcupine Basin, a 45 km long range of large mounds towers from a strongly eroded surface (Fig. F2). The mounds partly root on an enigmatic, deeply incised, very faintly stratified seismic facies (Unit P2) (Fig. F3) (De Mol et al., 2002; Van Rooij et al., 2003) that De Mol (2002) interpreted as a nannofossil ooze of Pliocene age analogous to the similar seismic facies of ODP Site 980 in the southwestern Rockall Trough (Unit P1) (Jansen, Raymo, Blum, et al., 1996) and partly on a layered sequence capped by a set of short-wavelength, sigmoidal depositional units (De Mol et al., 2002, 2005b; Van Rooij et al., 2003). The Belgica mound province consists of 66 conical mounds (single or in elongated clusters) in water depths ranging from 550 to 1025 m. The mounds are partly enclosed in contourite deposits (Van Rooij et al., 2003). Mounds typically trap sediment on their upslope flank, which is consequently buried, whereas their seaward side is well exposed and forms a steep step in bathymetry. Average slope angles range from 10° to 33°. The largest mounds have a height of ~170 m. In the deeper part (>900 m water depth) of the Belgica mound province (Beyer et al., 2003), an extremely “lively” mound was discovered in 1998 on the basis of a very diffuse surface acoustic response. This mound, known as Thérèse Mound, was selected as a special target site to study processes involved in mound development for European Union (EU) Fifth Framework research projects. Video imaging revealed that Thérèse Mound, jointly with its closest neighbor, Galway Mound, might be one of the richest cold-water coral environments in Porcupine Seabight, remarkably in the middle of otherwise barren mounds. Challenger Mound, to the southwest, also shares the acoustic properties of Thérèse Mound but is only covered by dead coral rubble (Foubert et al., 2005; De Mol et al., 2005a; Huvenne et al., 2005; Wheeler et al., 2005). Geologic setting Porcupine Seabight forms an inverted triangle opening to the Porcupine Abyssal Plain through a narrow gap of 50 km at a water depth of 2000 m at its southwest apex between the southern and western tips of the Porcupine Bank and terraced Goban Spur, respectively (Fig. F1). It gradually widens and shoals to depths of 500 m to the east on the Irish continental shelf and north to Slyne Ridge. Porcupine Seabight is the surface expression of the underlying deep sedimentary Porcupine Basin (Fig. 
F4), which is a failed rift of the proto-North Atlantic Ocean and is filled with a 10 km thick series of Mesozoic and Cenozoic sediments (Shannon, 1991). Basin evolution can be summarized in three major steps: a Paleozoic synrift phase, a predominantly Jurassic rifting episode, and a Late Cretaceous–Holocene thermal subsidence period. Basin development and synrift sedimentation The basement of Porcupine Basin is composed of Precambrian and lower Paleozoic metamorphic rocks forming continental crust ~30 km thick (Johnston et al., 2001). The prerift succession probably commences with Devonian clastic sediments overlain by lower Carboniferous carbonates and clastics. The upper Carboniferous rocks feature deltaic to shallow-marine deposits with Westphalian coal-bearing sandstones and shales and possibly Stephanian redbed sandstones (Shannon, 1991; Moore and Shannon, 1995). Permian and lowermost Mesozoic deposits are early rift valley continental sediments which can be >2 km thick. During Permian times, predominantly fluvial and lacustrine sedimentation took place with nonmarine mixed clastic deposits and evaporites. Triassic sediments contain nonmarine to marine facies (Ziegler, 1982; Shannon, 1991). Lower Jurassic deposits are not found over the entire basin but, where present, could comprise limestones and rare organic-rich shales with sandstones. Jurassic rifting phase The middle Kimmerian rifting phase marked an increase in tectonic events in the Arctic, Atlantic, and Tethys rift systems. This major tectonic event was apparently accompanied by a renewed eustatic lowering of sea level and is likely responsible for erosion of a large part of the Triassic and Jurassic deposits (Ziegler, 1982). Middle Jurassic fluvial claystones and minor sandstones might lie unconformably above earlier deposited strata and can be considered to be products of this major rifting episode. During the Late Jurassic, differential subsidence was responsible for the transition from a continental to a shallow-marine sedimentary environment in Porcupine Basin. Cretaceous subsidence and Paleogene–Neogene sedimentation Porcupine Basin began at the start of the Cretaceous as a failed rift structure with a typical steer’s head profile (Moore and Shannon, 1991). A major rifting pulse during the Early Cretaceous, associated with the Late Kimmerian orogeny, was accompanied by a significant eustatic sea level fall and gave rise to a regional unconformity that is largely of a submarine nature (Ziegler, 1982; Moore and Shannon, 1995). This undulatory unconformity marks the base of the Cretaceous, where marine strata onlap Jurassic sequences (Shannon, 1991). The onset of the Late Cretaceous was characterized by a further relative sea level rise, featuring offshore sandstone bars, followed by a northward thinning and onlapping outer shelf to slope sequence of pelagic carbonates (chalk). Along the southwestern and southeastern margins of the basin, Moore and Shannon (1995) recognized the presence of biohermal reef buildups. The transition from Late Cretaceous to early Paleocene sedimentation is characterized by a high-amplitude seismic reflector marking the change from carbonate to clastic deposition (Shannon, 1991). Most of the Paleogene postrift sediments are dominantly sandstones and shales, influenced by frequent sea level fluctuations. In general, the Paleocene succession is more mud-dominated, whereas the main coarse clastic input occurred in the middle Eocene to earliest late Eocene (McDonnell and Shannon, 2001). 
The Paleocene–Eocene is subdivided into five sequences characterized by southerly prograding complex deltaic events overlain by marine transgressive deposits (Naylor and Shannon, 1982; Moore and Shannon, 1995). The controls on the relative rises and falls in sea level are dominantly due to the North Atlantic plate tectonic regime. During the late Paleogene and Neogene, passive uplift of the Norwegian, British, and Irish landmasses was very important in shaping the present-day Atlantic margin. Although the origin of this uplift remains unclear, it probably resulted in enhancement of contour currents, causing local erosion, deposition, and an increased probability of sedimentary slides and slumps. Therefore, overall Oligocene and Miocene sedimentation is characterized by along-slope transport and redepositional processes yielding contourite siltstones and mudstones and hemipelagic–pelagic deep-marine sediments, caused by a combination of differential basin subsidence and regional sea level and paleoclimate changes. The youngest unconformity mapped in Porcupine Basin is correlated with an early Pliocene erosion event in Rockall Basin and is considered to be a nucleation site for present-day cold-water coral mounds (e.g., McDonnell and Shannon, 2001; De Mol et al., 2002; Van Rooij et al., 2003). Pleistocene and Holocene sedimentation Recent sedimentation is mainly pelagic to hemipelagic, although foraminiferal sands (probably reworked) can be found on the upper slope of the eastern continental margin. The main sediment supply zone is probably located on the Irish and Celtic shelves, whereas input from Porcupine Bank seems to be rather limited (Rice et al., 1991). In contrast to the slopes of the Celtic and Armorican margins, which are characterized by a multitude of canyons and deep-sea fans, the east-west-oriented Gollum channels are the only major downslope sediment transfer system located on the southeastern margin of the seabight (Kenyon et al., 1978; Tudhope and Scoffin, 1995), which discharges directly onto the Porcupine Abyssal Plain. Rice et al. (1991) suggest that the present-day channels are inactive. According to Games (2001), the upper slope of northern Porcupine Seabight bears predominantly north-south-trending plough marks on several levels within the Quaternary sedimentary succession. Smaller plough marks are also observed and interpreted as Quaternary abrasion of the continental shelf caused by floating ice grounding on the seabed. An abundance of pockmarks is also apparent on the seabed in this area (Games, 2001). Seismic studies/site survey data Studies carried out during the past seven years under various EU Fourth and Fifth Framework programs, European Science Foundation programs, UNESCO Training Through Research Program, and various European national programs have gathered substantial information from the area of interest, including box cores, long gravity cores, piston cores, high-resolution seismics (surface and deep towed), side-scan sonar (surface and deep-towed) at various frequencies and elevations over the seabed, surface multibeam coverage, and ultra high resolution swath bathymetry (using a remotely operated vehicle [ROV]) and video mosaicking (using ROV). High-resolution seismic data (penetration = ~350 m; resolution = 1–3 m) have been acquired over the Belgica mound province (1125 km of seismic lines over a 1666 km2 area). All drill sites are located on high-quality cross lines. 
Side-scan sonar data have been acquired at various resolutions and elevations: deep-tow 100 kHz side-scan sonar and 3.5 kHz profiler, resolution = 0.4 m (95 km2 in the Belgica mound province), high-resolution Makanchi acoustic imaging data, and towed ocean bottom instrument side-scan sonar (30 kHz). A multibeam survey was completed in June 2000 (Polarstern), and the area was covered again by the Irish Seabed Program. The ROV VICTOR (Institut Francais de Recherche pour l'Exploration de la Mer) was employed twice (Atalante and Polarstern) to video survey different mounds in the Belgica mound province (Thérèse Mound and a transect from Challenger Mound to Galway Mound). Previous subbottom sampling includes more than 40 gravity and piston cores in the Belgica mound province (penetration = 1.5–29 m), numerous box cores, and ~1.5 tons of television-controlled grab samples. Three main seismostratigraphic units, separated by two regional discontinuities, can be identified in the Belgica mound area (Van Rooij et al., 2003) (Fig. F3). The lowermost Unit P1 is characterized by gentle basinward-dipping, continuous parallel strata with moderate to locally high amplitude reflectors. A clinoform pattern formed by a number of superposed sigmoid reflectors is encountered in the upper strata of Unit P1 below and adjacent to Challenger Mound (Fig. F5). These clinoforms are frequently characterized by a high-amplitude top sigmoid reflector. They appear to reflect high-energy slope deposits and may, based on reversals of signal polarity, contain traces of gas. An alternative explanation for the phenomenon is a contrast in lithology between the top of the clinoforms and the overlying sediments in combination with the geometry of the unit, which enhances the amplitude of the reflection (De Mol et al., 2005b). This seismic facies is interpreted as migrating drift bodies of Miocene age (Van Rooij et al., 2003; De Mol et al., 2005b). The upper boundary of Unit P1 is an erosional unconformity which strongly incises the underlying strata. Unit P2 is characterized by a nearly transparent acoustic facies on top of the erosional unconformity bounding Units P1 and P2. Only a few sets of continuous, relatively high amplitude reflectors are observed within Unit P2. De Mol (2002) interpreted this seismic facies as a nannofossil ooze of Pliocene age analogous to the similar seismic facies of ODP Site 980 in the southwestern Rockall Trough (Jansen, Raymo, Blum, et al., 1996). The uppermost seismic Unit P3, characterized by slightly upslope migrating wavy parallel reflectors, represents Quaternary drift deposits partly enclosing the mounds. The reflectors of Unit P3 onlap the mounds, suggesting that the mounds were already present before deposition of the most recent drift. Scouring and moat features around the mounds suggest that they affect the intensity of the currents and the deposition of the enclosing sediments (De Mol et al., 2002; Van Rooij et al., 2003). Challenger Mound roots on the regional erosional unconformity separating Units P1 and P3 (Figs. F3, F5). The mound appears on seismic profiles as an almost acoustically transparent dome-shaped structure. The mound is bounded by diffraction hyperbolae originating at the summit of the mound. Inside the mound, no internal reflectors have been recognized, indicating a uniform facies without any large acoustic impedance differences.
The mound acoustic facies might also be interpreted as a loss of seismic energy due to scattering or absorption by the rough seabed and internal structure of the mound. However, an important observation is that the reflectors underneath the mounds show reduced amplitudes, although the reflectors have not completely disappeared. This indicates that not all the seismic energy is absorbed or dispersed inside the mound facies (De Mol et al., 2005b). The internal structure of the mounds is derived from the observation of shallow cores (Foubert et al., in press) and seismic velocity analyses (De Mol, 2002). The seismic facies of the coral banks is homogeneous and transparent, with an estimated internal velocity of 1850 ± 50 m/s based on velocity pull-ups in single-channel seismic data. This velocity suggests carbonate-rich sediment (velocity = 2300 m/s) intermixed with terrigenous material (velocity = 1700 m/s), as ground-truthed by the surficial sediment samples.
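The velocity figures quoted above invite a quick back-of-the-envelope check. The Python sketch below is not from the source; it assumes, purely for illustration, that the mound's internal velocity can be treated as a two-component mix of the quoted carbonate and terrigenous end-member velocities, and asks what carbonate fraction that would imply under a simple volume-weighted average and under a slowness (time-average) mix. The actual lithology is more complicated, so the numbers are only indicative.

```python
# Hypothetical end-member mixing estimate; the velocities come from the text,
# but both mixing models are assumptions made only for illustration.
v_mound = 1850.0        # m/s, estimated internal velocity of the mound facies
v_carbonate = 2300.0    # m/s, carbonate-rich end-member
v_terrigenous = 1700.0  # m/s, terrigenous end-member

# Linear (volume-weighted) mixing: v_mound = f*v_carbonate + (1 - f)*v_terrigenous
f_linear = (v_mound - v_terrigenous) / (v_carbonate - v_terrigenous)

# Slowness (time-average) mixing: 1/v_mound = f/v_carbonate + (1 - f)/v_terrigenous
f_slowness = (1 / v_mound - 1 / v_terrigenous) / (1 / v_carbonate - 1 / v_terrigenous)

print(f"implied carbonate fraction (linear mix):   {f_linear:.2f}")   # ~0.25
print(f"implied carbonate fraction (slowness mix): {f_slowness:.2f}") # ~0.31
```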
http://publications.iodp.org/proceedings/307/101/101_3.htm
Free! “Architecture is not created, it is discovered—the hand will find solutions before the mind can even comprehend them.” – Glenn Murcutt Behind many great architecture projects sits the humble model. The model, with its tactile form and shape—its physicality—is intrinsic to the act of making. This model-making workshop for adults and kids alike—brought to you by the Australian Institute of Architects’ Emerging Architects + Graduates Network—invites you to explore the methodologies of design through model-making, using materials inspired by the bamboo structure of the MPavilion. In the heart of the Queen Victoria Gardens, a crew of architects will be on hand to assist with physical model-making, share their experience of why it matters, and help you discover form and ideas through the act of making.
https://2016.mpavilion.org/program/emerging-architects-graduates-network-model-making-workshop/
Among the world's oldest national parks, Banff National Park serves as a World Heritage-listed mecca for cyclists, hikers, skiers, bird-watchers, mountain climbers, and anyone else with a passion for exploring vast untouched wildernesses. Get a sense of the local culture at Cave and Basin National Historic Site and Burgess Shale. Museum-lovers will appreciate Whyte Museum of the Canadian Rockies and Banff Park Museum. Explore the numerous day-trip ideas around Banff National Park: Yoho National Park (Natural Bridge & Wapta Falls). There's much more to do: stroll through Bow Glacier Falls, hike along Plain of Six Glaciers, head outdoors with Via Ferrata, and take a memorable drive along Icefields Parkway. For traveler tips, where to stay, other places to visit, and tourist information, go to the Banff National Park online trip itinerary builder. September in Banff National Park sees daily highs of 15°C and lows of 2°C at night. You'll set off for home on the 18th (Wed).
https://planner.makemytrip.com/trip/8-days-in-banff-national-park-itinerary-with-culture-outdoors-relaxing-historic-sites-museums-and-wildlife-a43a62873-0e2e-449c-b384-c1b98d2f9546
This course qualifies as an ethics course. "The information presented in this course will be very helpful with my documentation of psychotherapy and enlightened me about potential risk and legal issues. The presentation was clear and he gave real life examples to illustrate what he was saying. I learned the difference between a subpoena and a court order, and specifically what goes into effective documentation of psychotherapy notes." - Claire H., Social Worker, Vermont This webinar explains the goals and potential benefits of effective clinical documentation as well as the ethical and legal requirements for doing so. Documentation is addressed from a clinical and risk management perspective. Specific recommendations are made for how to document the clinical services provided in a competent and effective manner. Additionally, common pitfalls to avoid are addressed. How to store and maintain records is addressed for both paper and electronic records. Precautions to take to protect and preserve records are described in detail, along with how and when to dispose of them. Guidance is provided for following HIPAA and other laws and regulations relevant to documentation and record keeping. Participants in this webinar will receive practical guidance that can be integrated into daily practice to document more effectively, to better achieve the goals of thoughtfully created treatment records, and to meet and exceed professional standards and practice guidelines relevant to documentation and record keeping.
https://tzkseminars.com/Custom/TZKSeminars/Pages/WebinarDetails.aspx?id=5096&Documentation-and-Record-Keeping:-Essentials-for-Mental-Health-Professionals-Home-Study--3-CEs-
Danish beer brand owner Carlsberg has recalled 500,000 bottles of its Tuborg brand. The product is being recalled from shelves due to fears that the bottles could contain small pieces of glass. Speaking to Sky News, Adam Withrington, Communications Manager for Carlsberg UK, said, "Investigations by our bottle manufacturer found that at the time of production there had been an isolated problem on their bottling line. "The recall only affects 275ml bottles but crucially not all 275ml bottles. "We are only recalling a specific batch so the majority of Tuborg on sale across the UK is unaffected." The bottles concerned are sold in packs of 8 or 15. The fault only affects bottles with best before dates of July 22nd, 23rd and 24th 2011. No other Carlsberg-owned brands are believed to be affected, and the company is placing adverts in the national press today to advise the public of the move. Customers will be entitled to a 100% refund.
https://news.sky.com/story/tuborg-beer-bottles-recalled-over-glass-fears-10491160
Methanolysis of thioamide promoted by a simple palladacycle is accelerated by a factor of 10^8 over the methoxide-catalyzed reaction. Palladacycle 1 catalyzes the methanolytic cleavage of N-methyl-N-(4-nitrophenyl)thiobenzamide (4) via a mechanism involving formation of a Pd-bound tetrahedral intermediate (TI). The rate constant for decomposition of the complex formed between 1, methoxide, and 4 is 9.3 s^-1 at 25 °C; this reaction produces methyl thiobenzoate and N-methyl-4-nitroaniline. The ratio of the second-order rate constant for the catalyzed reaction, given as k_cat/K_d, relative to that of the methoxide-promoted reaction is 3 × 10^8, representing a very large catalysis of thioamide bond cleavage by a synthetic metal complex.
Life As Art (February 24, 2014) So many times our lives feel like they’ve been reduced to a to-do list we’re forever trying to finish. We tear through our weeks, striving to find a balance between doing and being, giving to others and taking care of ourselves. Even a happy life can be reduced to a black-and-white list of things accomplished. What if we think of life in a different way? What if we think about our days as blank canvases, waiting for us to paint them? What if we turn our lives into an art form, picturing each of our activities as a color? Most of us spend a good deal of time working for the benefit of others, or to support ourselves financially. Even if we don’t especially enjoy our jobs, there is beauty in them, in the benefits they bring to us and others. We can think of them as the base color of our canvases, and picture those hours painted a favorite color. Our free time gives us a chance to add accent colors to our base color. Just as each artist has her own vision for her art, each person will have her own vision for her life’s canvas: some people will want theirs primarily filled with one color, and others will want a canvas splashed with multiple colors. Some will gleefully spatter their canvases with bright tones, while others will choose a more muted, serene palette. I like variety, so I’m happiest when my paintings have multiple colors. My ideal canvas would have plenty of purple and blue, the colors I associate with reading and writing. I’d also have strokes of red for physical activity, green for working for my family, even some yellow for doing nothing. (I’m not sure how a literal painting like this would look, but my imaginary painting looks great!) At the end of each day, when we look at our finished canvases, what do we notice? Is our free time primarily filled with things we value? Have we let too much work take over? Or too much mindless entertainment? What about self-care, or acts of kindness? Do they appear? What does a week of canvases look like? A month? A year? We are the artists of our own lives—why don’t we paint some masterpieces? (For more parallels between art and life, see “Artful Living: Applying the Five Es”.) If your day was a painting, what colors would you fill your canvas with, and what would they represent?
http://www.catchinghappiness.com/2014/02/life-as-art.html
--- abstract: 'We consider an estimator for the location of a shift in the mean of long-range dependent sequences. The estimation is based on the two-sample Wilcoxon statistic. Consistency and the rate of convergence for the estimated change point are established. In the case of a constant shift height, the $1/n$ convergence rate (with $n$ denoting the number of observations), which is typical under the assumption of independent observations, is also achieved for long memory sequences. It is proved that if the change point height decreases to $0$ with a certain rate, the suitably standardized estimator converges in distribution to a functional of a fractional Brownian motion. The estimator is tested on two well-known data sets. Finite sample behaviors are investigated in a Monte Carlo simulation study.' address: | Faculty of Mathematics\ Ruhr-Universit[ä]{}t Bochum\ 44780 Bochum, Germany\ author: - bibliography: - 'PaperCPE.bib' title: 'Change point estimation based on Wilcoxon tests in the presence of long-range dependence ' --- Introduction ============ Suppose that the observations $X_1, \ldots, X_n$ are generated by a stochastic process $\left(X_i\right)_{i\geq 1}$ $$\begin{aligned} X_i=\mu_i+Y_i,\end{aligned}$$ where $(\mu_i)_{i\geq 1}$ are unknown constants and where $(Y_i)_{i\geq 1}$ is a stationary, long-range dependent (LRD, in short) process with mean zero. A stationary process $(Y_i)_{i\geq 1}$ is called “long-range dependent” if its autocovariance function $\rho$, $\rho(k):={{\operatorname{Cov}}}(Y_1, Y_{k+1})$, satisfies $$\begin{aligned} \label{autocovariances} \rho(k)\sim k^{-D}L(k), \ \text{as } k\rightarrow \infty, \end{aligned}$$ where $0<D< 1$ (referred to as long-range dependence (LRD) parameter) and where $L$ is a slowly varying function. Furthermore, we assume that there is a change point in the mean of the observations, that is $$\begin{aligned} \mu_i=\begin{cases} \mu, & \text{for} \ i=1, \ldots, k_0, \\ \mu + h_n, & \text{for} \ i=k_0+1, \ldots, n, \end{cases}\end{aligned}$$ where $k_0=\lfloor n\tau\rfloor$ denotes the change point location and $h_n$ is the height of the level-shift. In the following we differentiate between fixed and local changes. Under fixed changes we assume that $h_n=h$ for some $h\neq 0$. Local changes are characterized by a sequence $h_n$, $n\in \mathbb{N}$, with $h_n\longrightarrow 0$ as $n\longrightarrow\infty$; in other words, in a model where the height of the jump decreases with increasing sample size $n$. In order to test the hypothesis $$\begin{aligned} H: \mu_1=\ldots =\mu_n\end{aligned}$$ against the alternative $$\begin{aligned} A: \mu_1=\ldots =\mu_k\neq \mu_{k+1}=\ldots =\mu_n \ \text{for some $k\in \left\{1, \ldots, n-1\right\}$}\end{aligned}$$ the Wilcoxon change point test can be applied. It rejects the hypothesis for large values of the Wilcoxon test statistic defined by $$\begin{aligned} W_n:=\max\limits_{1\leq k\leq n-1}\left|W_{k, n}\right|, \ \text{where} \ W_{k, n}:=\sum\limits_{i=1}^k\sum\limits_{j=k+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\end{aligned}$$ (see [@DehlingRoochTaqqu2013a]). Under the assumption that there is a change point in the mean in $k_0$ we expect the absolute value of $W_{k_0, n}$ to exceed the absolute value of $W_{l, n}$ for any $l\neq k_0$. 
Therefore, it seems natural to define an estimator of $k_0$ by $$\begin{aligned} \hat{k}_W=\hat{k}_W(n):=\min \left\{k : \left|W_{k, n}\right|=\max\limits_{1\leq i\leq n-1}\left|W_{i, n}\right|\right\}.\end{aligned}$$ Preceding papers that address the problem of estimating change point locations in dependent observations $X_1, \ldots, X_n$ with a shift in mean often refer to a family of estimators based on the CUSUM change point test statistics $C_{n}(\gamma):=\max_{1\leq k\leq n-1}|C_{k, n}(\gamma)|$, where $$\begin{aligned} C_{k, n}(\gamma):=\left(\frac{k(n-k)}{n}\right)^{1-\gamma}\left(\frac{1}{k}\sum\limits_{i=1}^kX_i-\frac{1}{n-k}\sum\limits_{i=k+1}^n X_i\right)\end{aligned}$$ with parameter $0\leq \gamma<1$. The corresponding change point estimator is defined by $$\begin{aligned} \label{eq:CUSUM_estimator} \hat{k}_{C, \gamma}=\hat{k}_{C, \gamma}(n):=\min \left\{k : \left|C_{k, n}(\gamma)\right|=\max\limits_{1\leq i\leq n-1}\left|C_{i, n}(\gamma)\right|\right\}.\end{aligned}$$ For long-range dependent Gaussian processes [@HorvathKokoszka1997] derive the asymptotic distribution of the estimator $\hat{k}_{C, \gamma}$ under the assumption of a decreasing jump height $h_n$, i.e. under the assumption that $h_n$ approaches $0$ as the sample size $n$ increases. Under non-restrictive constraints on the dependence structure of the data-generating process (including long-range dependent time series) [@KokoszkaLeipus1998] prove consistency of $\hat{k}_{C, \gamma}$ under the assumption of fixed as well as decreasing jump heights. Furthermore, they establish the convergence rate of the change point estimator as a function of the intensity of dependence in the data if the jump height is constant. [@HarizWylie2005] show that under a similar assumption on the decay of the autocovariances the convergence rate that is achieved in the case of independent observations can be obtained for short- and long-range dependent data, as well. Furthermore, it is shown in their paper that for a decreasing jump height the convergence rate derived by [@HorvathKokoszka1997] under the assumption of gaussianity can also be established under more general assumptions on the data-generating sequences. [@Bai1994] establishes an estimator for the location of a shift in the mean by the method of least squares. He proves consistency, determines the rate of convergence of the change point estimator and derives its asymptotic distribution. These results are shown to hold for weakly dependent observations that satisfy a linear model and cover, for example, ARMA($p$, $q$)-processes. Bai extended these results to the estimation of the location of a parameter change in multiple regression models that also allow for lagged dependent variables and trending regressors (see [@Bai1997]). A generalization of these results to possibly long-range dependent data-generating processes (including fractionally integrated processes) is given in [@KuanHsu1998] and [@LavielleMoulines2000]. Under the assumption of independent data [@Darkhovskh1976] establishes an estimator for the location of a change in distribution based on the two-sample Mann-Whitney test statistic. He obtains a convergence rate that has order $\frac{1}{n}$, where $n$ is the number of observations. Allowing for strong dependence in the data [@GiraitisLeipusSurgailis1996] consider Kolmogorov-Smirnov and Cramér-von-Mises-type test statistics for the detection of a change in the marginal distribution of the random variables that underlie the observed data. 
Consistency of the corresponding change point estimators is proved under the assumption that the jump height approaches $0$. A change point estimator based on a self-normalized CUSUM test statistic has been applied in [@Shao2011] to real data sets. Although Shao takes the validity of the estimator for granted, the article does not provide a formal proof of its consistency. Furthermore, it has been noted by [@ShaoZhang2010] that even under the assumption of short-range dependence it seems difficult to obtain the asymptotic distribution of the estimate. In this paper we briefly address the issue of estimating the change point location on the basis of the self-normalized Wilcoxon test statistic proposed in [@Betken2016]. In order to construct the self-normalized Wilcoxon test statistic, we have to consider the ranks $R_i$, $i=1,\ldots,n$, of the observations $X_1, \ldots, X_n$. These are defined by $R_i:={{\operatorname{rank}}}(X_i)=\sum_{j=1}^n1_{\{X_j\leq X_i\}}$ for $i=1,\ldots,n$. The self-normalized two-sample test statistic is defined by $$SW_{k, n} =\frac{\sum_{i=1}^kR_i-\frac{k}{n}\sum_{i=1}^nR_i}{\bigg\{\frac{1}{n}\sum_{t=1}^k S_t^2(1,k)+\frac{1}{n}\sum_{t=k+1}^n S_t^2(k+1,n)\bigg\}^{1/2}},$$ where $$S_{t}(j, k):=\sum\limits_{h=j}^t\left(R_h-\bar{R}_{j, k}\right)\ \ \text{with }\bar{R}_{j, k}:=\frac{1}{k-j+1}\sum\limits_{t=j}^kR_t.$$ The self-normalized Wilcoxon change point test for the test problem $(H, A)$ rejects the hypothesis for large values of $T_n(\tau_1, \tau_2)=\max_{k\in \left\{\lfloor n\tau_1\rfloor, \ldots, \lfloor n\tau_2\rfloor\right\}}\left|SW_{k, n}\right|$, where $0< \tau_1 <\tau_2 <1$. Note that the proportion of the data that is included in the calculation of the supremum is restricted by $\tau_1$ and $\tau_2$. A common choice for these parameters is $\tau_1= 1-\tau_2=0.15$; see [@Andrews1993]. A natural change point estimator that results from the self-normalized Wilcoxon test statistic is $$\begin{aligned} \hat{k}_{SW}=\hat{k}_{SW}(n):=\min \left\{k : \left| SW_{k, n}\right|=\max\limits_{\lfloor n\tau_1\rfloor\leq i\leq \lfloor n\tau_2\rfloor}\left|SW_{i, n}\right|\right\}.\end{aligned}$$ We will prove consistency of the estimator $\hat{k}_{SW}$ under fixed changes and under local changes whose height converges to $0$ with a rate depending on the intensity of dependence in the data. Nonetheless, the main aim of this paper is to characterize the asymptotic behavior of the change point estimator $\hat{k}_W$. In Section \[Main Results\] we establish consistency of $\hat{k}_W$ and $\hat{k}_{SW}$, derive the optimal convergence rate of $\hat{k}_W$ and finally consider its asymptotic distribution. Applications to two well-known data sets can be found in Section \[Applications\]. The finite sample properties of the estimators are investigated by simulations in Section \[Simulations\]. Proofs of the theoretical results are given in Section \[Proofs\].
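Although the theoretical analysis is the focus of this paper, the estimators themselves are simple to compute from the ranks of the observations. The following R sketch is ours and not part of the original paper; the function names `wilcoxon_cpe` and `cusum_cpe` are ad hoc. It computes $\hat{k}_W$ and, for comparison, the CUSUM-based estimator $\hat{k}_{C, 0}$, using the identity $W_{k, n}=k(n+1)/2-\sum_{i\leq k}R_i$, which holds when there are no ties (almost surely the case for a continuous distribution function $F$); the self-normalized estimator $\hat{k}_{SW}$ can be implemented along the same lines from the ranks $R_i$.

```r
## Sketch (not the authors' code): change point estimators for a shift in the mean.
## Assumes a continuous marginal distribution, so that ties occur with probability zero.

wilcoxon_cpe <- function(x) {
  n <- length(x)
  R <- rank(x)                               # ranks R_i of the observations
  k <- 1:(n - 1)
  ## W_{k,n} = sum_{i<=k} sum_{j>k} (1{x_i <= x_j} - 1/2) = k(n+1)/2 - sum_{i<=k} R_i
  W <- k * (n + 1) / 2 - cumsum(R)[k]
  list(k_hat = which.max(abs(W)), W = W)     # which.max returns the smallest maximizer
}

cusum_cpe <- function(x) {                   # CUSUM estimator with gamma = 0
  n <- length(x)
  k <- 1:(n - 1)
  S <- cumsum(x)[k]
  C <- (k * (n - k) / n) * (S / k - (sum(x) - S) / (n - k))
  list(k_hat = which.max(abs(C)), C = C)
}

## Example: the annual Nile flow series shipped with R (analyzed in the Applications section).
est <- wilcoxon_cpe(as.numeric(Nile))
1871 + est$k_hat - 1   # calendar year of the last observation before the estimated change
```

Applied to the Nile discharge series, this estimator locates the change in 1898, in line with the Applications section below.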
Main Results {#Main Results} ============ Recall that for fixed $x$, $x \in \mathbb{R}$, the Hermite expansion of $1_{\left\{G(\xi_i)\leq x\right\}}-F(x)$ is given by $$\begin{aligned} 1_{\left\{G(\xi_i)\leq x\right\}}-F(x)=\sum\limits_{q=1}^{\infty}\frac{J_q(x)}{q !}H_q(\xi_i),\end{aligned}$$ where $H_q$ denotes the $q$-th order Hermite polynomial and where $$\begin{aligned} J_q(x)={{ \operatorname E}}\left(1_{\left\{G(\xi_i)\leq x\right\}}H_q(\xi_i)\right).\end{aligned}$$ \[ass:subordination\] Let $Y_i=G(\xi_i)$, where $\left(\xi_i\right)_{i\geq 1}$ is a stationary, long-range dependent Gaussian process with mean $0$, variance $1$ and LRD parameter $D$. We assume that $0<D <\frac{1}{r}$, where $r$ denotes the Hermite rank of the class of functions $1_{\left\{G(\xi_i)\leq x\right\}}-F(x)$, $x \in \mathbb{R}$, defined by $$\begin{aligned} r:=\min \left\{q\geq 1: J_q(x)\neq 0 \ \text{for some} \ x\in \mathbb{R}\right\}.\end{aligned}$$ Moreover, we assume that $G:\mathbb{R}\longrightarrow \mathbb{R}$ is a measurable function and that $\left( Y_i\right)_{i\geq 1}$ has a continuous distribution function $F$. Let $$\begin{aligned} g_{D, r}(t):=t^{\frac{rD}{2}}L^{-\frac{r}{2}}(t)\end{aligned}$$ and define $$\begin{aligned} d_{n, r} :=\frac{n}{g_{D, r}(n)}c_{r}, \ \text{where} \ c_{r}:=\sqrt{\frac{2 r !}{(1-Dr)(2-Dr)}}.\end{aligned}$$ Since $g_{D, r}$ is a regularly varying function, there exists a function $g_{D, r}^{-}$ such that $$\begin{aligned} g_{D, r}( g_{D, r}^{-}(t))\sim g_{D, r}^{-}( g_{D, r}(t))\sim t, \text{ as $t\rightarrow\infty$,}\end{aligned}$$ (see Theorem 1.5.12 in [@Bingham1987]). We refer to $g_{D, r}^{-}$ as the asymptotic inverse of $g_{D, r}$. The following result states that $\frac{\hat{k}_W}{n}$ and $\frac{\hat{k}_{SW}}{n}$ are consistent estimators for the change point location under fixed as well as certain local changes. \[Prop:consistency\] Suppose that Assumption \[ass:subordination\] holds. Under fixed changes, $\frac{\hat{k}_W}{n}$ and $\frac{\hat{k}_{SW}}{n}$ are consistent estimators for the change point location. The estimators are also consistent under local changes if $h_n^{-1}=o\left(\frac{n}{d_{n, r}}\right)$ and if $F$ has a bounded density $f$. In other words, we have $$\begin{aligned} \frac{\hat{k}_W}{n}\overset{P}{\longrightarrow}\tau, \qquad \frac{\hat{k}_{SW}}{n}\overset{P}{\longrightarrow}\tau\end{aligned}$$ in both situations. Furthermore, it follows that the Wilcoxon test is consistent under these assumptions (in the sense that $\frac{1}{nd_{n, r}}\max_{1 \leq k\leq n-1}|W_{k, n}|\overset{P}{\longrightarrow}\infty$). The following theorem establishes a convergence rate for the change point estimator $\hat{k}_W$. Note that only under local changes the convergence rate depends on the intensity of dependence in the data. \[convergence rate\] Suppose that Assumption \[ass:subordination\] holds and let $m_n:=g_{D, r}^{-}(h_n^{-1})$. Then, we have $$\begin{aligned} \left|\hat{k}_W-k_0\right|=\mathcal{O}_P(m_n)\end{aligned}$$ if either - $h_n = h$ with $h\neq 0$ or - $\lim_{n\rightarrow \infty}h_n=0$ with $h_n^{-1}=o\left(\frac{n}{d_{n, r}}\right)$ and $F$ has a bounded density $f$. \[remark:convergence\_rate\] 1. Under fixed changes $m_n$ is constant. As a consequence, $|\hat{k}_W-k_0|=\mathcal{O}_P(1)$. This result corresponds to the convergence rates obtained by [@HarizWylie2005] for the CUSUM-test based change point estimator and by [@LavielleMoulines2000] for the least-squares estimate of the change point location. 
Surprisingly, in this case the rate of convergence is independent of the intensity of dependence in the data characterized by the value of the LRD parameter $D$. An explanation for this phenomenon might be the occurrence of two opposing effects: increasing values of the LRD parameter $D$ go along with a slower convergence of the test statistic $W_{k, n}$ (making estimation more difficult), but a more regular behavior of the random component (making estimation easier) (see [@HarizWylie2005]). 2. Note that if $h_n^{-1}=o\left(\frac{n}{d_{n, r}}\right)$ and $m_n=g_{D, r}^{-}(h_n^{-1})$, it holds that - $m_n\longrightarrow \infty$, - $\frac{m_n}{n}\longrightarrow 0$, - $ \frac{d_{m_n, r}}{m_n} \sim h_n$, as $n\longrightarrow \infty$. Based on the previous results it is possible to derive the asymptotic distribution of the change point estimator $\hat{k}_W$: \[thm:asymp\_distr\] Suppose that Assumption \[ass:subordination\] holds with $r=1$ and assume that $F$ has a bounded density $f$. Let $m_n:=g_{D, 1}^{-}(h_n^{-1})$, let $B_H$ denote a fractional Brownian motion process and define $h(s; \tau)$ by $$\begin{aligned} h(s; \tau)= \begin{cases} s(1-\tau)\int f^2(x)dx &\text{if $s\leq 0$}\\ -s\tau \int f^2(x)dx &\text{if $s> 0$} \end{cases}.\end{aligned}$$. If $h_n^{-1}=o\left(\frac{n}{d_{n, 1}}\right)$, then, for all $M>0$, $$\begin{aligned} \frac{1}{e_n}\left(W_{k_0+\lfloor m_n s\rfloor, n}^2-W_{k_0, n}^2\right), \ -M\leq s\leq M,\end{aligned}$$ with $e_n=n^3h_nd_{m_n, 1}$, converges in distribution to $$\begin{aligned} 2\tau(1-\tau)\int f^2(x)dx\left({{\operatorname{sign}}}(s)B_H(s)\int J_1(x)dF(x)+h(s; \tau)\right), \ -M\leq s\leq M,\end{aligned}$$ in the Skorohod space $D\left[-M, M\right]$. Furthermore, it follows that $m_n^{-1}(\hat{k}_W-k_0)$ converges in distribution to $$\begin{aligned} \label{eq:limit_cpe} {{\operatorname{argmax}}}_{-\infty < s <\infty}\left({{\operatorname{sign}}}(s)B_H(s)\int J_1(x)dF(x)+h(s; \tau)\right).\end{aligned}$$ 1. Under local changes the assumption on $h_n$ is equivalent to Assumption C.5 (i) in [@HorvathKokoszka1997]. Moreover, the limit distribution closely resembles the limit distribution of the CUSUM-based change point estimator considered in that paper. 2. The proof of Theorem \[thm:asymp\_distr\] is mainly based on the empirical process non-central limit theorem for subordinated Gaussian sequences in [@DehlingTaqqu1989]. The sequential empirical process has also been studied by many other authors in the context of different models. See, among many others, the following: [@Muller1970] and [@Kiefer1972] for independent and identically distributed data, [@BerkesPhillip1977] and [@PhilippPinzur1980] for strongly mixing processes, [@BerkesHoermannSchauer2009] for S-mixing processes, [@GiraitisSurgailis1999] for long memory linear (or moving average) processes, [@DehlingDurieuTusche2014] for multiple mixing processes. Presumably, in these situations the asymptotic distribution of $\hat{k}_W$ can be derived by the same argument as in the proof of Theorem \[thm:asymp\_distr\] for subordinated Gaussian processes. In particular, Theorem 1 in [@GiraitisSurgailis1999] can be considered as a generalization of Theorem 1.1 in [@DehlingTaqqu1989], i.e. with an appropriate normalization the change point estimator $\hat{k}_W$, computed with respect to long-range dependent linear processes as defined in [@GiraitisSurgailis1999], should converge in distribution to a limit that corresponds to (up to multiplicative constants). 
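To make the normalizing sequences concrete, consider the case of a constant slowly varying function, $L\equiv 1$ (as for the fractional Gaussian noise used in Section \[Simulations\]). Then $g_{D, r}(t)=t^{rD/2}$, its asymptotic inverse is $g_{D, r}^{-}(t)=t^{2/(rD)}$, and hence $d_{n, r}=c_r n^{1-rD/2}=c_r n^{H}$ and, under local changes, $m_n=h_n^{-2/(rD)}$. The following small R helper is our own sketch of these quantities, not part of the paper:

```r
## Sketch: normalizing sequences of the Main Results section for L(t) = 1.
## c_r = sqrt(2 * r! / ((1 - D*r) * (2 - D*r))),  d_{n,r} = c_r * n^(1 - r*D/2),
## m_n = g^-_{D,r}(1 / h_n) = h_n^(-2 / (r*D))  (relevant under local changes).
norming <- function(n, h_n, D, r = 1) {
  stopifnot(D > 0, D < 1 / r)                # the assumption requires 0 < D < 1/r
  c_r  <- sqrt(2 * factorial(r) / ((1 - D * r) * (2 - D * r)))
  d_nr <- c_r * n^(1 - r * D / 2)
  m_n  <- h_n^(-2 / (r * D))
  c(c_r = c_r, d_nr = d_nr, m_n = m_n)
}
norming(n = 600, h_n = 0.5, D = 0.4)         # D = 0.4 and r = 1 correspond to H = 0.8
```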
Applications {#Applications} ============ We consider two well-known data sets which have been analyzed before. We compute the estimator $\hat{k}_W$ based on the given observations and put our results into context with the findings and conclusions of other authors. ![Measurements of the annual discharge of the river Nile at Aswan in $10^8 m^3$ for the years 1871-1970. The dotted line indicates the potential change point estimated by $\hat{k}_{\text{W}}$; the dashed lines designate the sample means for the pre-break and post-break samples.[]{data-label="Nile"}](Nile) The plot in Figure \[Nile\] depicts the annual volume of discharge from the Nile river at Aswan in $10^8 m^3$ for the years 1871 to 1970. The data set is included in any standard distribution of `R`. Amongst others, [@Cobb1978], [@MacNeill1991], [@WuZhao2007], [@Shao2011] and [@BetkenWendler2016] provide statistically significant evidence for a decrease of the Nile’s annual discharge towards the end of the 19th century. The construction of the Aswan Low Dam between 1898 and 1902 serves as a popular explanation for an abrupt change in the data around the turn of the century. Yet, Cobb gave another explanation for the decrease in water volume by citing rainfall records which suggest a decline of tropical rainfall at that time. In fact, an application of the change point estimator $\hat{k}_W$ identifies a change in 1898. This result seems to be in good accordance with the estimated change point locations suggested by other authors: Cobb’s analysis of the Nile data leads to the conjecture of a significant decrease in discharge volume in 1898. Moreover, computation of the CUSUM-based change point estimator $\hat{k}_{C, 0}$ considered in [@HorvathKokoszka1997] indicates a change in 1898. [@Balke1993] and [@WuZhao2007] suggest that the change occurred in 1899. ![Monthly temperature of the Northern hemisphere for the years 1854-1989 from the data base held at the Climate Research Unit of the University of East Anglia, Norwich, England. The temperature anomalies (in degrees C) are calculated with respect to the reference period 1950-1979. The dotted line indicates the location of the potential change point; the dashed lines designate the sample means for the pre-break and post-break samples.[]{data-label="NemiTemp"}](NhemiTemp) The second data set consists of the seasonally adjusted monthly deviations of the temperature (degrees C) for the Northern hemisphere during the years 1854 to 1989 from the monthly averages over the period 1950 to 1979. The data has been taken from the `longmemo` package in `R`. It results from spatial averaging of temperatures measured over land and sea. In view of the plot in Figure \[NemiTemp\] it seems natural to assume that the data generating process is non-stationary. Previous analysis of this data offers different explanations for the irregular behavior of the time series. [@DeoHurvich1998] fitted a linear trend to the data, thereby providing statistical evidence for global warming during the last decades. However, the consideration of a more general stochastic model by the assumption of so-called semiparametric fractional autoregressive (SEMIFAR) processes in [@BeranFeng2002] does not confirm the conjecture of a trend-like behavior. Neither does the investigation of the global temperature data in [@Wang2007] support the hypothesis of an increasing trend. 
It is pointed out by Wang that the trend-like behavior of the Northern hemisphere temperature data may have been generated by stationary long-range dependent processes. Yet, it is shown in [@Shao2011] and also in [@BetkenWendler2016] that under model assumptions that include long-range dependence an application of change point tests leads to a rejection of the hypothesis that the time series is stationary. According to [@Shao2011] an estimation based on a self-normalized CUSUM test statistic suggests a change around October 1924. Computation of the change point estimator $\hat{k}_W$ corresponds to a change point located around June 1924. The same change point location results from an application of the previously mentioned estimator $\hat{k}_{C, 0}$ considered in [@HorvathKokoszka1997]. In this regard estimation by $\hat{k}_W$ seems to be in good accordance with the results of alternative change point estimators. Simulations {#Simulations} =========== We will now investigate the finite sample performance of the change point estimator $\hat{k}_W$ and compare it to corresponding simulation results for the estimators $\hat{k}_{SW}$ (based on the self-normalized Wilcoxon test statistic) and $\hat{k}_{\text{C}, 0}$ (based on the CUSUM test statistic with parameter $\gamma = 0$) . For this purpose, we consider two different scenarios: 1. Normal margins: We generate fractional Gaussian noise time series $(\xi_i)_{i\geq 1}$ and choose $G(t) = t$ in Assumption \[ass:subordination\]. As a result, the simulated observations $\left(Y_i\right)_{i\geq 1}$ are Gaussian with autocovariance function $\rho$ satisfying $$\begin{aligned} \rho(k)\sim \left(1-\frac{D}{2}\right)\left(1-D\right)k^{-D}.\end{aligned}$$ Note that in this case the Hermite coefficient $J_1(x)$ is not equal to $0$ for all $x\in \mathbb{R}$ (see [@DehlingRoochTaqqu2013a]) so that $m = 1$, where $m$ denotes the Hermite rank of $1_{\left\{G(\xi_i)\leq x\right\}}-F(x), x\in \mathbb{R}$. Therefore, Assumption \[ass:subordination\] holds for all values of $D\in\left(0, 1\right)$. 2. Pareto margins: In order to get standardized Pareto-distributed data which has a representation as a functional of a Gaussian process, we consider the transformation $$\begin{aligned} G(t)=\left(\frac{\beta k^2}{(\beta -1)^2(\beta-2)}\right)^{-\frac{1}{2}}\left(k(\Phi(t))^{-\frac{1}{\beta}} -\frac{\beta k}{\beta -1}\right) \end{aligned}$$ with parameters $k, \beta>0$ and with $\Phi$ denoting the standard normal distribution function. Since $G$ is a strictly decreasing function, it follows by Theorem 2 in [@DehlingRoochTaqqu2013a] that the Hermite rank of $1_{\left\{G(\xi_i)\leq x\right\}}-F(x), x\in \mathbb{R}$, is $m = 1$ so that Assumption \[ass:subordination\] holds for all values of $D\in\left(0, 1\right)$. To analyze the behavior of the estimators we simulated $500$ time series of length $600$ and added a level shift of height $h$ after a proportion $\tau$ of the data. We have done so for several choices of $h$ and $\tau$. The descriptive statistics, i.e. mean, sample standard deviation (S.D.) and quartiles, are reported in Tables \[sampling\_distribution\_Wilcoxon\], \[sampling\_distribution\_SN\_Wilcoxon\], and \[sampling\_distribution\_CUSUM\] for the three change point estimators $\hat{k}_W$, $\hat{k}_{SW}$ and $\hat{k}_{C, 0}$. 
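Before summarizing the results, we give a brief sketch (ours, not the authors' code) of how a single parameter combination of scenario 1 can be reproduced; the tables then report mean, standard deviation and quartiles over $500$ such replications. We assume that the `longmemo` package, already used above for the temperature data, is available and that its fractional Gaussian noise generator `simFGN0` is used; `wilcoxon_cpe` refers to the sketch given at the end of the Introduction.

```r
## Sketch of simulation scenario 1 (normal margins); not the authors' code.
## Assumes longmemo::simFGN0(n, H) for fractional Gaussian noise.
library(longmemo)

set.seed(1)
n <- 600; tau <- 0.25; h <- 1; H <- 0.8      # one parameter combination from the tables
k0 <- floor(n * tau)

one_estimate <- function() {
  y <- simFGN0(n, H)                         # stationary long-range dependent noise
  x <- y + h * (seq_len(n) > k0)             # level shift of height h after k0
  wilcoxon_cpe(x)$k_hat                      # estimator from the sketch in the Introduction
}

khat <- replicate(500, one_estimate())
c(mean = mean(khat), sd = sd(khat), quantile(khat, c(0.25, 0.5, 0.75)))
```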
The following observations, made on the basis of Tables \[sampling\_distribution\_Wilcoxon\], \[sampling\_distribution\_SN\_Wilcoxon\], and \[sampling\_distribution\_CUSUM\], correspond to the expected behavior of consistent change point estimators: - Bias and variance of the estimated change point location decrease when the height of the level shift increases. - Estimation of the time of change is more accurate for breakpoints located in the middle of the sample than estimation of change point locations that lie close to the boundary of the testing region. - High values of $H$ go along with an increase of bias and variance. This seems natural since when there is very strong dependence, i.e. $H$ is large, the variance of the series increases, so that it becomes harder to accurately estimate the location of a level shift. A comparison of the descriptive statistics of the estimator $\hat{k}_W$ (based on the Wilcoxon statistic) and $\hat{k}_{SW}$ (based on the self-normalized Wilcoxon statistic) shows that: - In most cases the estimator $\hat{k}_{SW}$ has a smaller bias, especially for an early change point location. Nevertheless, the difference between the biases of $\hat{k}_{SW}$ and $\hat{k}_W$ is not big. - In general the sample standard deviation of $\hat{k}_W$ is smaller than that of $\hat{k}_{SW}$. Indeed, it is only slightly better for $\tau=0.25$, but there is a clear difference for $\tau=0.5$. All in all, our simulations do not give rise to choosing $\hat{k}_{SW}$ over $\hat{k}_W$. In particular, better standard deviations of $\hat{k}_W$ compensate for smaller biases of $\hat{k}_{SW}$. Comparing the finite sample performance of $\hat{k}_W$ and the CUSUM-based change point estimator $\hat{k}_{C, 0}$ we make the following observations: - For fractional Gaussian noise time series bias and variance of $\hat{k}_{C, 0}$ tend to be slightly better, at least when $\tau=0.25$ and especially for relatively high level shifts. Nonetheless, the deviations are in most cases negligible. - If the change happens in the middle of a sample with normal margins, bias and variance of $\hat{k}_W$ tend to be smaller, especially for relatively high level shifts. Again, in most cases the deviations are negligible. - For Pareto($3$, $1$) time series $\hat{k}_W$ clearly outperforms $\hat{k}_{C, 0}$ by yielding smaller biases and decisively smaller variances for almost every combination of parameters that has been considered. The performance of the estimator $\hat{k}_{C, 0}$ surpasses the performance of $\hat{k}_W$ only for high values of the jump height $h$. It is well-known that the Wilcoxon change point test is more robust against outliers in data sets than the CUSUM-like change point tests, i.e. the Wilcoxon test outperforms CUSUM-like tests if heavy-tailed time series are considered. Our simulations confirm that this observation is also reflected by the finite sample behavior of the corresponding change point estimators. ![The MAE of $\hat{k}_W$ for different values of $H$.[]{data-label="mean_absolute_error"}](mean_absolute_error_5000) As noted in Remark \[remark:convergence\_rate\], $\hat{k}_W-k_0=\mathcal{O}_P(1)$ under the assumption of a constant change point height $h$. 
This observation is illustrated by simulations of the mean absolute error $$\begin{aligned} \text{MAE}=\frac{1}{m}\sum\limits_{i=1}^{m}\left|\hat{k}_{W, i}-k_0\right|,\end{aligned}$$ where $\hat{k}_{W, i}$, $i=1, \ldots, m$, denote the estimates for $k_0$, computed on the basis of $m=5000$ different sequences of fractional Gaussian noise time series. Figure \[mean\_absolute\_error\] depicts a plot of $\text{MAE}$ against the sample size $n$ with $n$ varying between $1000$ and $20000$. Since $\hat{k}_W-k_0=\mathcal{O}_P(1)$ due to Theorem \[convergence rate\], we expect $\text{MAE}$ to approach a constant as $n$ tends to infinity. This can be clearly seen in Figure \[mean\_absolute\_error\] for $H\in\left\{0.6, 0.7, 0.8\right\}$. For a high intensity of dependence in the data (characterized by $H=0.9$) convergence becomes slower. This is due to a slower convergence of the test statistic $W_{n}(k)$ which, in finite samples, is not canceled out by the effect of a more regular behavior of the sample paths of the limit process. margins $\tau$ $h$ $H= 0.6$ $H = 0.7$ $H = 0.8$ $H = 0.9$ --------------- -------- ------- ------------- ---------------------- ------------------------- ----------------------- ----------------------- normal $0.25$ $0.5$ mean (S.D.) [193.840]{} (64.020) [227.590]{} (99.788) [252.408]{} (110.084) [270.646]{} (113.720) quartiles (150, 168, 217.25) (150, 191, 284.25) (157, 226.5, 335.25) (172.75, 250, 353) $1$ mean (S.D.) 164.244 (27.156) 176.362 (42.059) [188.328]{} (63.751) [215.108]{} (88.621) quartiles (150, 153.5, 167) (150, 158, 190) (150, 159.5, 206.25) (150 176 256) $2$ mean (S.D.) [153.604]{} (8.255) [156.656]{} (12.393) [164.338]{} (29.570) [173.610]{} (41.514) quartiles (150, 151, 154) (150, 151, 158) (150, 151, 164) (150, 152, 180.25) $0.5$ $0.5$ mean (S.D.) [299.506]{} (30.586) [301.870]{} (61.392) [300.774]{} (82.610) [298.930]{} (98.368) quartiles (291, 300, 309) (274.75, 300.5, 320.25) (264, 299, 339.25) (233, 299, 353) $1$ mean (S.D.) [300.014]{} (9.141) [300.438]{} (18.695) [302.592]{} (42.213) [300.902]{} (50.487) quartiles (298, 300, 302) (297, 300, 304) (293, 300 307) (290, 300, 311) $2$ mean (S.D.) [300.064]{} (1.294) [299.922]{} (3.215) [299.504]{} (5.520) [300.282]{} (7.494) quartiles (300, 300, 300) (300, 300, 300) (300, 300, 300) (300, 300, 300) Pareto$(3,1)$ $0.25$ $0.5$ mean (S.D.) [158.166]{} (17.762) [164.080]{} (31.219) [179.512]{} (58.871) [194.126]{} (74.767) quartiles (150, 151, 159.25) (150, 152, 168) (150, 154, 191.25) (150, 159, 218.25) $1$ mean (S.D.) [154.160]{} (8.765) [156.090]{} (13.516) [164.712]{} (28.774) [178.174]{} (54.429) quartiles (150, 151, 155) (150, 151, 157) (150, 152, 168) (150, 152, 186) $2$ mean (S.D.) [152.256]{} (4.852) [155.592]{} (11.092) [160.686]{} (24.599) [169.374]{} (38.197) quartiles (150, 150, 152) (150, 151, 155.25) (150, 151, 159) (150, 150, 172) $0.5$ $0.5$ mean (S.D.) [298.072]{} (6.008) [296.432]{} (13.441) [293.060]{} (26.221) [289.946]{} (45.739) quartiles (297, 300, 300) (296, 300, 300) (294, 300, 301) (291, 300, 301) $1$ mean (S.D.) [299.178]{} (2.712) [298.744]{} (4.587) [296.674]{} (11.585) [296.168]{} (20.424) quartiles (299, 300, 300) (299, 300, 300) (298, 300, 300) (300, 300, 300) $2$ mean (S.D.) 
299.798 (1.008) 299.716 (1.543) 299.384 (3.070) 298.896 (6.560) quartiles (300, 300, 300) (300, 300, 300) (300, 300, 300) (300, 300, 300) : Descriptive statistics of the sampling distribution of $\hat{k}_W$ for a change in the mean based on $500$ fractional Gaussian noise and Pareto time series of length $600$ with Hurst parameter $H$ and a change in mean in $\tau$ of height $h$. \[sampling\_distribution\_Wilcoxon\] margins $\tau$ $h$ $H= 0.6$ $H = 0.7$ $H = 0.8$ $H = 0.9$ ---------------- -------- ------- ------------- ---------------------- ----------------------- ----------------------- ------------------------- normal $0.25$ $0.5$ mean (S.D.) [172.288]{} (63.639) [216.934]{} (110.934) [242.202]{} (119.655) [268.878]{} (122.615) quartiles (135, 153, 183.25) (138, 171, 272.5) (143, 207.5, 333.5) (157, 243.5, 370.25) $1$ mean (S.D.) [152.406]{} (24.840) [160.618]{} (39.834) [174.424]{} (70.673) [204.906]{} (99.648) quartiles (140, 149, 158) (139, 150.5, 172.25) (136, 150, 188.25) (139.75, 161.5, 243.75) $2$ mean (S.D.) [148.836]{} (9.007) [150.208]{} (13.575) [153.194]{} (28.251) [160.026]{} (40.979) quartiles (144, 150, 152) (142.75, 150, 154) (138, 150, 158) (137.75, 150, 165) $0.5$ $0.5$ mean (S.D.) [297.712]{} (43.291) [302.204]{} (77.719) [302.866]{} (96.511) [297.662]{} (110.175) quartiles (277, 297, 320) (262, 300, 337) (248, 298.5, 369.5) (215, 301, 369.5) $1$ mean (S.D.) [299.052]{} (16.132) [299.910]{} (28.907) [302.386]{} (55.267) [300.956]{} (62.821) quartiles (290, 299, 308) (288, 300, 313) (277, 300, 324.25) (270, 300, 329) $2$ mean (S.D.) [300.010]{} (6.054) [299.612]{} (10.079) [298.844]{} (14.059) [301.424]{} (21.022) quartiles (297, 300, 303.25) (294, 300, 305) (291, 300, 307) (289, 300, 312) Pareto$(3, 1)$ $0.25$ $0.5$ mean (S.D.) [151.562]{} (18.392) [155.034]{} (32.505) [165.260]{} (58.363) [182.706]{} (83.268) quartiles (142, 150, 157) (140, 150, 163) (136, 150, 173) (136.75, 150, 196.25) $1$ mean (S.D.) [150.206]{} (9.116) [150.272]{} (15.405) [152.824]{} (25.074) [166.602]{} (58.982) quartiles (145, 150, 154) (143, 150, 156) (140, 150, 159.25) (136, 150, 174.25) $2$ mean (S.D.) [149.210]{} (6.201) [149.934]{} (11.821) [151.946]{} (21.426) [156.836]{} (39.311) quartiles (146, 150, 152) (143, 150, 153) (140, 150, 156) (136, 150, 160.25) $0.5$ $0.5$ mean (S.D.) [300.524]{} (11.841) [299.488]{} (21.317) [299.664]{} (37.136) [295.048]{} (55.000) quartiles (294, 300, 307) (290, 300, 310) (287, 300, 317) (280.75, 300, 318) $1$ mean (S.D.) [300.498]{} (6.600) [300.560]{} (10.383) [299.520]{} (18.862) [297.766]{} (28.308) quartiles (297, 300, 304) (296, 300, 306) (292, 300, 309.25) (289, 300, 312.25) $2$ mean (S.D.) [300.444]{} (4.411) [300.234]{} (7.517) [300.524]{} (11.122) [298.840]{} (16.004) quartiles (298, 300, 303) (296, 300, 304) (295.75, 300, 307) (292, 300, 308) : Descriptive statistics of the sampling distribution of $\hat{k}_{SW}$ for a change in the mean based on $500$ replications of fractional Gaussian noise and Pareto time series of length $600$ with Hurst parameter $H$ and a change in mean in $\tau$ of height $h$. \[sampling\_distribution\_SN\_Wilcoxon\] margins $\tau$ $h$ $H= 0.6$ $H = 0.7$ $H = 0.8$ $H = 0.9$ ------------------ -------- ------- ------------- ---------------------- ----------------------- ----------------------- ------------------------- normal $0.25$ $0.5$ mean (S.D.) 
[193.060]{} (64.917) [228.948]{} (101.442) [253.114]{} (111.182) [271.380]{} (114.590) quartiles (150, 166.5, 222) (151, 191.5, 286.75) (156.75, 226, 341.5) (172.75, 249.5, 354.25) $1$ mean (S.D.) [162.028]{} (22.948) [173.838]{} (39.845) [187.386]{} (63.865) [213.114]{} (87.356) quartiles (150, 153, 164) (150, 156.5, 187.25) (150, 158, 206) (150, 173, 254.25) $2$ mean (S.D.) [152.374]{} (6.249) [154.878]{} (10.395) [159.700]{} (22.064) [165.940]{} (33.124) quartiles (150, 150, 152) (150, 150, 156) (150, 151, 158) (150, 150, 165) $0.5$ $0.5$ mean (S.D.) [297.840]{} (30.249) [302.060]{} (63.878) [300.246]{} (84.346) [298.910]{} (97.904) quartiles (290, 299, 308) (276, 301, 322) (261.75, 300, 340) (236.25, 299, 353.25) $1$ mean(S.D.) [299.870]{} (9.356) [299.662]{} (21.281) [303.646]{} (42.245) [299.762]{} (52.492) quartiles (298, 300, 302) (297, 300, 304) (293, 300, 307) (290, 300, 311) $2$ mean (S.D.) [300.060]{} (1.473) [299.916]{} (3.199) [299.442]{} (5.234) [300.460]{} (8.179) quartiles (300, 300, 300) (300, 300, 300) (300, 300, 300) (300, 300, 300) Pareto($3$, $1$) $0.25$ $0.5$ mean (S.D.) [175.632]{} (48.517) [198.452]{} (79.303) [205.506]{} (88.482) [210.444]{}(93.831) quartiles (150, 159, 185) (150, 168, 223.75) (150, 173, 251.25) (150, 167, 259.5) $1$ mean (S.D.) [156.586]{} (14.133) [160.350]{} (27.204) [170.278]{} (45.402) [177.278]{} (66.661) quartiles (150, 152, 159) (150, 152, 161) (150, 153, 171) (150, 150, 174) $2$ mean (S.D.) [150.314]{} (1.349) [150.566]{} (3.984) [152.474]{} (18.578) [155.496]{} (29.408) quartiles (150, 150, 150) (150, 150, 150) (150, 150, 150) (150, 150, 150) $0.5$ $0.5$ mean (S.D.) [296.260]{} (22.306) [292.904]{} (43.471) [289.192]{} (64.033) [287.966]{} (64.827) quartiles (292, 300, 303.25) (288.75, 300, 305) (273.75, 300, 308.25) (285, 300, 303) $1$ mean (S.D.) [298.240]{} (6.104) [297.306]{} (9.361) [293.116]{} (26.614) [292.864]{} (37.601) quartiles (299, 300, 300) (299, 300, 300) (298, 300, 300) (300, 300, 300) $2$ mean (S.D.) [299.604]{} (1.843) [299.228]{} (3.385) [298.350]{} (8.354) [297.632]{} (14.525) quartiles (300, 300, 300) (300, 300, 300) (300, 300, 300) (300, 300, 300) : Descriptive statistics of the sampling distribution of $\hat{k}_{\text{C}, 0}$ for a change in the mean based on $500$ replications of fractional Gaussian noise and Pareto time series of length $600$ with Hurst parameter $H$ and a change in mean in $\tau$ of height $h$. \[sampling\_distribution\_CUSUM\] Proofs {#Proofs} ====== In the following let $F_k$ and $F_{k+1, n}$ denote the empirical distribution functions of the first $k$ and last $n-k$ realizations of $Y_1, \ldots, Y_n$, i.e. $$\begin{aligned} &F_k(x):=\frac{1}{k}\sum\limits_{i=1}^k1_{\left\{Y_i\leq x\right\}},\\ &F_{k+1, n}(x):=\frac{1}{n-k}\sum\limits_{i=k+1}^n1_{\left\{Y_i\leq x\right\}}.\end{aligned}$$ For notational convenience we write $W_n(k)$ instead of $W_{k, n}$ and $SW_{n}(k)$ instead of $SW_{k, n}$. The proofs in this section as well as the proofs in the appendix are partially influenced by arguments that have been established in [@HorvathKokoszka1997], [@Bai1994] and [@DehlingRoochTaqqu2013a]. 
In particular, some arguments are based on the empirical process non-central limit theorem of [@DehlingTaqqu1989] which states that $$\begin{aligned} d_{n, r}^{-1}\lfloor n\lambda\rfloor (F_{\lfloor n\lambda\rfloor}(x)-F(x)) \overset{\mathcal{D}}{\longrightarrow}\frac{1}{r!}J_r(x)Z_H^{(r)}(\lambda), \end{aligned}$$ where $r$ is the Hermite rank defined in Assumption \[ass:subordination\], $Z_H^{(r)}$ is an $r$-th order Hermite process[^1], $H=1-\frac{rD}{2}\in \left(\frac{1}{2}, 1\right)$, and “$\overset{\mathcal{D}}{\longrightarrow}$” denotes convergence in distribution with respect to the $\sigma$-field generated by the open balls in $D\left(\left[-\infty, \infty\right]\times \left[0, 1\right]\right)$, equipped with the supremum norm. The Dudley-Wichura version of Skorohod’s representation theorem (see [@ShorackWellner1986], Theorem 2.3.4) implies that, for our purposes, we may assume without loss of generality that $$\begin{aligned} \sup\limits_{\lambda\in\left[0, 1\right], x\in \mathbb{R}}\left|d_{n, r}^{-1}\lfloor n\lambda\rfloor\left(F_{\lfloor n\lambda\rfloor}(x)-F(x)\right)-\frac{1}{r!}J_r(x)Z_H^{(r)}(\lambda)\right|\longrightarrow 0\end{aligned}$$ almost surely. The proof of Proposition \[Prop:consistency\] is based on an application of Lemma \[Lem:W\_process\_under\_A\] in the appendix. According to Lemma \[Lem:W\_process\_under\_A\] it holds that, under the assumptions of Proposition \[Prop:consistency\], $$\begin{aligned} \frac{1}{n^2 h_n}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor }\sum\limits_{j=\lfloor n\lambda\rfloor +1}^{n}\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right) \overset{P}{\longrightarrow} C\delta_{\tau}(\lambda), \ 0\leq \lambda \leq 1,\end{aligned}$$ where $\delta_{\tau}:[0, 1]\longrightarrow \mathbb{R}$ is defined by $$\begin{aligned} \delta_{\tau}(\lambda)= \begin{cases} \lambda(1-\tau) &\text{for} \ \lambda\leq \tau\\ (1-\lambda)\tau &\text{for} \ \lambda\geq \tau \end{cases}\end{aligned}$$ and $C$ denotes some non-zero constant. It directly follows that $\frac{1}{nd_{n, r}}\max_{1 \leq k\leq n-1}|W_{n}(k)|\overset{P}{\longrightarrow}\infty$. Furthermore, $$\begin{aligned} \frac{1}{n^2h_n}\max\limits_{1\leq k\leq \lfloor n(\tau-\varepsilon)\rfloor}\left|\sum\limits_{i=1}^k\sum\limits_{j=k+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\right| \intertext{converges in probability to} C\sup\limits_{0\leq \lambda\leq \tau-\varepsilon}\delta_{\tau}(\lambda) =C(\tau-\varepsilon)(1-\tau)\end{aligned}$$ for any $0\leq \varepsilon <\tau$. For $\varepsilon>0$ define $$\begin{aligned} Z_{n, \varepsilon}:=\frac{1}{n^2h_n}\max\limits_{1\leq k\leq \lfloor n\tau\rfloor}\left|W_n(k)\right|-\frac{1}{n^2h_n}\max\limits_{1\leq k\leq \lfloor n(\tau-\varepsilon)\rfloor}\left|W_n(k)\right|.\end{aligned}$$ As $Z_{n, \varepsilon}\overset{P}{\longrightarrow} C(1-\tau)\varepsilon$, it follows that $P(\hat{k}_W<\lfloor n(\tau-\varepsilon) \rfloor)=P(Z_{n, \varepsilon}= 0)\longrightarrow 0$. An analogous line of argument yields $$\begin{aligned} P(\hat{k}_W>\lfloor n(\tau+\varepsilon)\rfloor)\longrightarrow 0.\end{aligned}$$ All in all, it follows that for any $\varepsilon >0$ $$\begin{aligned} &\lim\limits_{n\longrightarrow \infty}P\left(\left|\frac{\hat{k}_W}{n}-\tau\right|> \varepsilon \right)=0.\end{aligned}$$ This proves consistency of the change point estimator which is based on the Wilcoxon test statistic. In the following it is shown that $\frac{1}{n}\hat{k}_{SW}$ is a consistent estimator, too. 
For this purpose, we consider the process $SW_n(\lfloor n\lambda\rfloor)$, $0\leq \lambda\leq 1$. According to [@Betken2016] the limit of the self-normalized Wilcoxon test statistic can be obtained by an application of the continuous mapping theorem to the process $$\begin{aligned} \frac{1}{a_n} \sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right), \ 0\leq \lambda \leq 1, \end{aligned}$$ where $a_n$ denotes an appropriate normalization. Therefore, it follows by the corresponding argument in [@Betken2016] that $$\begin{aligned} SW_n(\lfloor n\lambda\rfloor)\overset{P}{\longrightarrow}\frac{\left|\delta_{\tau}(\lambda)\right|}{\left\{\int_0^{\lambda}\left(\delta_{\tau}(t)-\frac{t}{\lambda}\delta_{\tau}(\lambda)\right)^2dt+\int_{\lambda}^1\left(\delta_{\tau}(t)-\frac{1-t}{1-\lambda} \delta_{\tau}(\lambda)\right)^2dt \right\}^{\frac{1}{2}}}\end{aligned}$$ uniformly in $\lambda \in [0, 1]$. Elementary calculations yield $$\begin{aligned} &\sup\limits_{\lfloor n\tau_1\rfloor \leq k\leq k_0-n\varepsilon}SW_n(k) \overset{P}{\longrightarrow}\sup\limits_{\tau_1\leq \lambda\leq \tau-\varepsilon}\frac{\sqrt{3}\lambda\sqrt{1-\lambda}}{(\tau-\lambda)},\\ &\sup\limits_{k_0+n\varepsilon \leq k\leq \lfloor n\tau_2\rfloor}SW_n(k) \overset{P}{\longrightarrow}\sup\limits_{\tau +\varepsilon\leq \lambda\leq \tau_2}\frac{\sqrt{3}\sqrt{\lambda}(1-\lambda)}{(\tau-\lambda)}.\end{aligned}$$ As $SW_n(k_0)\overset{P}{\longrightarrow}\infty$ due to Theorem 2 in [@Betken2016], we conclude that $P(\hat{k}_{SW}>k_0+ n\varepsilon)$ and $P(\hat{k}_{SW}<k_0-n\varepsilon)$ converge to $0$ in probability. This proves $\frac{1}{n}\hat{k}_{SW}\overset{P}{\longrightarrow}\tau$. In the following we write $\hat{k}$ instead of $\hat{k}_W$. For convenience, we assume that $h>0$ under fixed changes, and that for some $n_0\in \mathbb{N}$ $h_n>0$ for all $n\geq n_0$ under local changes, respectively. Furthermore, we subsume both changes under the general assumption that $\lim_{n\rightarrow\infty}h_n=h$ (under fixed changes $h_n=h$ for all $n\in \mathbb{N}$, under local changes $h=0$). In order to prove Theorem \[convergence rate\], we need to show that for all $\varepsilon>0$ there exists an $n(\varepsilon)\in \mathbb{N}$ and an $M>0$ such that $$\begin{aligned} P\left(\left|\hat{k}-k_0\right|>M m_n\right)<\varepsilon\end{aligned}$$ for all $n\geq n(\varepsilon)$. For $M\in \mathbb{R}^{+}$ define $D_{n, M}:=\left\{k\in \left\{1, \ldots, n-1\right\}\left|\right.\left|k-k_0\right|>Mm_n\right\}$. We have $$\begin{aligned} P\left(\left|\hat{k}-k_0\right|>M m_n\right) \leq P\left(\sup\limits_{k\in D_{n, M}}\left|W_n(k)\right|\geq |W_{n}(k_0)|\right)\leq P_1+P_2\end{aligned}$$ with $$\begin{aligned} &P_1:=P\left(\sup\limits_{k\in D_{n, M}}\left(W_n(k)- W_{n}(k_0)\right)\geq 0\right), \\ &P_2:=P\left(\sup\limits_{k\in D_{n, M}}\left(-W_n(k)-W_{n}(k_0)\right)\geq 0\right).\end{aligned}$$ Note that $D_{n, M}=D_{n, M}(1)\cup D_{n, M}(2)$, where $$\begin{aligned} &D_{n, M}(1):=\left\{k\in \left\{1, \ldots, n-1\right\}\left|\right.k_0-k>Mm_n\right\}, \\ &D_{n, M}(2):=\left\{k\in \left\{1, \ldots, n-1\right\}\left|\right.k-k_0>Mm_n\right\}.\end{aligned}$$ Therefore, $P_2\leq P_{2, 1}+P_{2, 2}$, where $$\begin{aligned} &P_{2,1}:= P\left(\sup\limits_{k\in D_{n, M}(1)}\left(-W_n(k)-W_{n}(k_0)\right)\geq 0\right),\\ &P_{2, 2}:=P\left(\sup\limits_{k\in D_{n, M}(2)}\left(-W_n(k)-W_{n}(k_0)\right)\geq 0\right). 
\end{aligned}$$ In the following we will consider the first summand only. (For the second summand analogous implications result from the same argument.) For this, we define $$\begin{aligned} \widehat{W}_n(k):=\delta_n(k)\Delta(h_n),\end{aligned}$$ where $$\begin{aligned} \delta_n(k):=\begin{cases} k(n-k_0), & k\leq k_0\\ k_0(n-k), & k> k_0 \end{cases}\end{aligned}$$ and $$\begin{aligned} \Delta(h_n):=\int \left(F(x+h_n)-F(x)\right)dF(x).\end{aligned}$$ Note that $$\begin{aligned} P_{2, 1} &\leq P\left(\sup\limits_{k\in D_{n, M}(1)}\left( \widehat{W}_n(k) -W_n(k)+\widehat{W}_n(k_0)-W_{n}(k_0)\right) \geq \widehat{W}_n(k_0)\right)\\ &\leq P\left(2\sup\limits_{\lambda \in \left[0, \tau\right]}\left|W_n(\lfloor n\lambda\rfloor)- \widehat{W}_n(\lfloor n\lambda\rfloor) \right| \geq k_0(n-k_0)\Delta(h_n)\right).\end{aligned}$$ We have $$\begin{aligned} &\sup\limits_{\lambda \in \left[0, \tau\right]}\left|W_n(\lfloor n\lambda\rfloor)- \widehat{W}_n(\lfloor n\lambda\rfloor) \right|\\ &=\sup\limits_{\lambda \in \left[0, \tau\right]}\Biggl|\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\tau\rfloor+1}^n\left(1_{\left\{Y_i\leq Y_j+h_n\right\}}-\int F(x+h_n)dF(x)\right)\\ &\quad +\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\lambda\rfloor+1}^{\lfloor n\tau\rfloor}\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right)\Biggr|.\end{aligned}$$ Due to Lemma \[Lem\] in the appendix and Theorem 1.1 in [@DehlingRoochTaqqu2013a] $$\begin{aligned} 2\sup_{\lambda \in \left[0, \tau\right]}\left|W_n(\lfloor n\lambda\rfloor)- \widehat{W}_n(\lfloor n\lambda\rfloor) \right|=\mathcal{O}_P\left(nd_{n, r}\right),\end{aligned}$$ i.e. for all $\varepsilon >0$ there exists a $K>0$ such that $$\begin{aligned} P\left(2\sup_{\lambda \in \left[0, \tau\right]}\left|W_n(\lfloor n\lambda\rfloor)- \widehat{W}_n(\lfloor n\lambda\rfloor) \right|\geq Knd_{n, r}\right)<\varepsilon \end{aligned}$$ for all $n$. Furthermore, $k_0(n-k_0)\Delta(h_n)\sim Cn^2h_n$ for some constant $C$. Note that $Knd_{n, r}\leq k_0(n-k_0)\Delta(h_n)$ if and only if $$\begin{aligned} K\leq\frac{k_0}{n}\frac{n-k_0}{n}\frac{\Delta(h_n)}{h_n}\frac{nh_n}{d_{n, r}}. \end{aligned}$$ The right hand side of the above inequality diverges if $h_n=h$ is fixed or if $h_n^{-1}=o\left(\frac{n}{d_{n, r}}\right)$. Therefore, it is possible to find an $n(\varepsilon)\in \mathbb{N}$ such that $$\begin{aligned} P_{2, 1}&\leq P\left(2\sup\limits_{\lambda \in \left[0, \tau\right]}\left|W_n(\lfloor n\lambda\rfloor)- \widehat{W}_n(\lfloor n\lambda\rfloor) \right| \geq k_0(n-k_0)\Delta(h_n)\right)\\ &\leq P\left(2\sup\limits_{\lambda \in \left[0, \tau\right]}\left|W_n(\lfloor n\lambda\rfloor)- \widehat{W}_n(\lfloor n\lambda\rfloor) \right| \geq K nd_{n, r}\right)\\ &<\varepsilon\end{aligned}$$ for all $n\geq n(\varepsilon)$. We will now turn to the summand $P_1$. We have $P_{1}\leq P_{1,1}+P_{1,2}$, where $$\begin{aligned} &P_{1, 1}:=P\left(\sup\limits_{k\in D_{n, M}(1)}W_n(k)- W_{n}(k_0)\geq 0\right),\\ &P_{1, 2}:=P\left(\sup\limits_{k\in D_{n, M}(2)}W_n(k)- W_{n}(k_0)\geq 0\right).\end{aligned}$$ In the following we will consider the first summand only. (For the second summand analogous implications result from the same argument.) 
We define a random sequence $k_n$, $n \in \mathbb{N}$, by choosing $k_n\in D_{n, M}(1)$ such that $$\begin{aligned} &\sup\limits_{k\in D_{n, M}(1)}\left(W_n(k)-\widehat{W}_n(k)+\widehat{W}_n(k_0)-W_{n}(k_0)\right)\\ &=W_n(k_n)-\widehat{W}_n(k_n)+\widehat{W}_n(k_0)-W_{n}(k_0).\end{aligned}$$ Note that for any sequence $k_n$, $n\in\mathbb{N}$, with $k_n\in D_{n, M}(1)$ $$\begin{aligned} \widehat{W}_n(k_0)-\widehat{W}_n(k_n) =(n-k_0) l_n\Delta(h_n)\end{aligned}$$ where $l_n:=k_0-k_n$. Since $k_n\in D_{n, M}(1)$ and $m_n\longrightarrow \infty$ we have $$\begin{aligned} \frac{l_n}{d_{l_n, r}}=l_n^{1-H}L^{-\frac{r}{2}}(l_n)\geq (Mm_n)^{1-H}L^{-\frac{r}{2}}(Mm_n)\end{aligned}$$ for $n$ sufficiently large. Thus, we have $$\begin{aligned} \frac{1}{nd_{l_n, r}}\left(\widehat{W}_n(k_0)-\widehat{W}_n(k_n)\right)&\geq\frac{n-k_0}{n} \frac{m_n}{d_{m_n, r}}M^{1-H}\frac{L^{\frac{r}{2}}(m_n)}{L^{\frac{r}{2}}(Mm_n)}\Delta(h_n).\end{aligned}$$ If $h_n$ is fixed, the right hand side of the inequality diverges. Under local changes the right hand side asymptotically behaves like $$\begin{aligned} (1-\tau)M^{1-H}\int f^2(x)dx,\end{aligned}$$ since, in this case, $h_n\sim \frac{d_{m_n, r}}{m_n}$ due to the assumptions of Theorem \[convergence rate\]. In any case, for $\delta>0$ it is possible to find an $n_0\in \mathbb{N}$ such that $$\begin{aligned} \frac{1}{nd_{l_n, r}}\left(\widehat{W}_n(k_0)-\widehat{W}_n(k_n)\right)\geq M^{1-H} (1-\tau)\int f^2(x)dx -\delta\end{aligned}$$ for all $n\geq n_0$. All in all, the previous considerations show that there exists an $n_0\in \mathbb{N}$ and a constant $K$ such that for all $n\geq n_0$ $$\begin{aligned} P_{1,1}\leq P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k,r}}\left(W_n(k)-\widehat{W}_n(k)+\widehat{W}_n(k_0)-W_{n}(k_0)\right) \geq b(M)\right)\end{aligned}$$ where $b(M):=K M^{1-H}-\delta$ with $\delta>0$ fixed. Some elementary calculations show that for $k\leq k_0$ $$\begin{aligned} W_n(k)-\widehat{W}_n(k)+\widehat{W}_n(k_0)-W_n(k_0) =A_{n, 1}(k)+A_{n, 2}(k)+A_{n, 3}(k)+A_{n, 4}(k),\end{aligned}$$ where $$\begin{aligned} &A_{n, 1}(k):=-(n-k_0)(k_0-k)\int \left(F_{k, k_0}(x+h_n)-F(x+h_n)\right)d F_{k_0, n}(x),\\ &A_{n, 2}(k):=-(n-k_0)(k_0-k)\int \left( F_{k_0, n}(x)- F(x)\right)d F(x+h_n),\\ &A_{n, 3}(k):=(k_0-k)k\int \left(F_{k}(x)-F(x)\right)d F_{k, k_0}(x),\\ &A_{n, 4}(k):=-k (k_0-k)\int \left( F_{k, k_0}(x)- F(x)\right)dF(x).\end{aligned}$$ Thus, for $n\geq n_0$ $$\begin{aligned} P_{1, 1} &\leq P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\sum\limits_{i=1}^4\left|A_{n, i}(k)\right|\geq b(M)\right)\\ &\leq \sum\limits_{i=1}^{4} P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, i}(k)\right|\geq \frac{1}{4}b(M)\right).\end{aligned}$$ For each $i\in \left\{1, \ldots, 4\right\}$ it will be shown that $$\begin{aligned} P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, i}(k)\right|\geq \frac{1}{4}b(M)\right)<\frac{\varepsilon}{4} \end{aligned}$$ for $n$ and $M$ sufficiently large. 1. 
Note that $$\begin{aligned} &\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 1}(k)\right|\\ &\leq \sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k, k_0}(x)-F(x)\right)\right|.\end{aligned}$$ Due to stationarity $$\begin{aligned} &\sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k, k_0}(x)-F(x)\right)\right|\\ &\overset{\mathcal{D}}{=}\sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k_0-k}(x)-F(x)\right)\right|.\end{aligned}$$ Note that $$\begin{aligned} &\sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k_0-k}(x)-F(x)\right)\right|\\ &\leq\sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k_0-k}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1)J_r(x)\right|\\ &\quad+\frac{1}{r!}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|.\end{aligned}$$ Since $$\begin{aligned} \sup\limits_{x \in \mathbb{R}}\left|d_{n, r}^{-1} n\left(F_{n}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1)J_r(x)\right|\longrightarrow 0 \ a.s.\end{aligned}$$ if $n\longrightarrow \infty$, and as $k_0-k\geq M m_n$ with $m_n\longrightarrow \infty$, it follows that $$\begin{aligned} \sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k_0-k}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1)J_r(x)\right|\end{aligned}$$ converges to $0$ almost surely. Therefore, $$\begin{aligned} &P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 1}(k)\right|\geq \frac{1}{4}b(M)\right)\\ &\leq P\left(\sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x \in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left(F_{k, k_0}(x)-F(x)\right)\right| \geq \frac{1}{4}b(M)\right)\\ &\leq P\left(\frac{1}{r!}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right| \geq \frac{1}{4}b(M)\right)+\frac{\varepsilon}{8}.\end{aligned}$$ for $n$ sufficiently large. Note that $\sup_{x\in \mathbb{R}}\left|J_r(x)\right|<\infty$. Furthermore, it is well-known that all moments of Hermite processes are finite. As a result, it follows by Markov’s inequality that for some $M_0\in \mathbb{R}$ $$\begin{aligned} &P\left(\frac{1}{r!}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right| \geq \frac{1}{4}b(M)\right)\leq {{ \operatorname E}}\left|Z_H^{(r)}(1)\right|\frac{4 r!}{\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|b(M)}<\frac{\varepsilon}{8}\end{aligned}$$ for all $M\geq M_0$. 2. We have $$\begin{aligned} &\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 2}(k)\right|\\ &\leq \left|d_{n, r}^{-1}(n-k_0)\int \left( F_{k_0, n}(x)- F(x)\right)d F(x+h_n)\right|\end{aligned}$$ for $n$ sufficiently large. 
As a result, $$\begin{aligned} \sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 2}(k)\right| \leq \sup\limits_{x\in\mathbb{R}}\left|d_{n, r}^{-1}(n-k_0) \left( F_{k_0, n}(x)- F(x)\right)\right|.\end{aligned}$$ Due to the empirical process non-central limit theorem of [@DehlingTaqqu1989] we have $$\begin{aligned} \sup\limits_{x\in \mathbb{R}}\left|d_{n, r}^{-1}(n-k_0)\left( F_{k_0, n}(x)- F(x)\right)\right|\overset{\mathcal{D}}{\longrightarrow} \frac{1}{r!}\left|Z_H^{(r)}(1)-Z_H^{(r)}(\tau))\right|\sup\limits_{x\in \mathbb{R}} \left|J_r(x)\right|.\end{aligned}$$ Moreover, $$\begin{aligned} \frac{1}{r!}\left|Z_H^{(r)}(1)-Z_H^{(r)}(\tau)\right|\sup\limits_{x\in \mathbb{R}} \left|J_r(x)\right|\overset{\mathcal{D}}{=} \frac{1}{r!} (1-\tau)^{H}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|\end{aligned}$$ since $Z_H^{(r)}$ is a $H$-self-similar process with stationary increments. Thus, we have $$\begin{aligned} &P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 2}(k)\right|\geq \frac{1}{4}b(M)\right)\\ &\leq P\left( \frac{1}{r!} (1-\tau)^{H}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|\geq \frac{1}{4}b(M)\right)+\frac{\varepsilon}{8}\end{aligned}$$ for $n$ sufficiently large. Again, it follows by Markov’s inequality that $$\begin{aligned} P\left( \frac{1}{r!} (1-\tau)^{H}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|\geq \frac{1}{4}b(M)\right) < \frac{\varepsilon}{8}\end{aligned}$$ for $M$ sufficiently large. 3. Note that $$\begin{aligned} \frac{1}{n d_{k_0-k, r}}\left|A_{n, 3}(k)\right| \leq\left|d_{n, r}^{-1}k\int \left(F_{k}(x)-F(x)\right) d F_{k, k_0}(x)\right|\end{aligned}$$ for $n$ sufficiently large. Therefore, $$\begin{aligned} \sup\limits_{k\in D_{n, M}(1)}\frac{1}{n d_{k_0-k, r}}\left|A_{n, 3}(k)\right| \leq \sup\limits_{x\in \mathbb{R}, 0\leq \lambda\leq 1}\left|d_{n, r}^{-1}\lfloor n\lambda\rfloor \left(F_{\lfloor n\lambda\rfloor}(x)-F(x)\right)\right|.\end{aligned}$$ The expression on the right hand side of the inequality converges in distribution to $$\begin{aligned} \frac{1}{r!}\sup\limits_{0\leq \lambda\leq 1}\left|Z_H^{(r)}(\lambda)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|\end{aligned}$$ due to the empirical process non-central limit theorem. Since $$\begin{aligned} \left\{Z_H^{(r)}(\lambda), \ 0\leq \lambda\leq 1\right\} \overset{\mathcal{D}}{=}\left\{\lambda^H Z_H^{(r)}(1), \ 0\leq \lambda\leq 1\right\},\end{aligned}$$ we have $$\begin{aligned} \sup\limits_{0\leq \lambda\leq 1}\left|Z_H^{(r)}(\lambda)\right| \overset{\mathcal{D}}{=}|Z_H^{(r)}(1)|.\end{aligned}$$ As a result, the aforementioned argument yields $$\begin{aligned} &P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 3}(k)\right|\geq \frac{1}{4}b(M)\right)\\ &\leq P\left(\frac{1}{r!}\left|Z_H^{(r)}(1)\right|\sup\limits_{x\in \mathbb{R}}\left|J_r(x)\right|\geq \frac{1}{4}b(M)\right)+\frac{\varepsilon}{8}\\ &< \frac{\varepsilon}{4}\end{aligned}$$ for $n$ and $M$ sufficiently large. 4. 
We have $$\begin{aligned} &\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 4}(k)\right|\\ &\leq \sup\limits_{k\in D_{n, M}(1)}\sup\limits_{x\in \mathbb{R}}\left|d_{k_0-k, r}^{-1} (k_0-k)\left( F_{k, k_0}(x)- F(x)\right)\right|.\end{aligned}$$ Hence, the same argument that has been used to obtain an analogous result for $A_{n, 1}$ can be applied to conclude that $$\begin{aligned} &P\left(\sup\limits_{k\in D_{n, M}(1)}\frac{1}{nd_{k_0-k, r}}\left|A_{n, 4}(k)\right|\geq \frac{1}{4}b(M)\right)<\frac{\varepsilon}{4}\\\end{aligned}$$ for $n$ and $M$ sufficiently large. All in all, it follows that for all $\varepsilon>0$ there exists an $n(\varepsilon)\in \mathbb{N}$ and an $M>0$ such that $$\begin{aligned} P\left(\left|\hat{k}-k_0\right|>M m_n\right)<\varepsilon\end{aligned}$$ for all $n\geq n(\varepsilon)$. This proves Theorem \[convergence rate\]. Note that $$\begin{aligned} &W_n^2(k_0+\lfloor m_n s\rfloor)-W_n^2(k_0)\\ &=\left(W_n(k_0+\lfloor m_n s\rfloor)-W_n(k_0)\right)\left(W_n(k_0+\lfloor m_n s\rfloor)+W_n(k_0)\right).\end{aligned}$$ We will show that (with an appropriate normalization) $W_n(k_0+\lfloor m_n s\rfloor)-W_n(k_0)$ converges in distribution to a non-deterministic limit process whereas $W_n(k_0+\lfloor m_n s\rfloor)+W_n(k_0)$ (with stronger normalization) converges in probability to a deterministic expression. For notational convenience we write $d_{m_n}$ instead of $d_{m_n, 1}$, $J$ instead of $J_1$, $\hat{k}$ instead of $\hat{k}_W$ and we define $l_n(s):=k_0+\lfloor m_n s\rfloor$. We have $$\begin{aligned} W_n(k_0+\lfloor m_n s\rfloor)-W_n(k_0)=\tilde{V}_n(l_n(s))+V_n(l_n(s)),\end{aligned}$$ where $$\begin{aligned} \tilde{V}_n(l)&=\begin{cases} -\sum\limits_{i=l+1}^{k_0}\sum\limits_{j=k_0+1}^n\left(1_{\left\{Y_i\leq Y_j+h_n\right\}}-1_{\left\{Y_i\leq Y_j\right\}}\right) &\text{if $s<0$}\\ - \sum\limits_{i=1}^{k_0}\sum\limits_{j=k_0+1}^{l}\left(1_{\left\{Y_i\leq Y_j+h_n\right\}}-1_{\left\{Y_i\leq Y_j\right\}}\right) &\text{if $s>0$}\\ \end{cases}\end{aligned}$$ and $$\begin{aligned} V_n(l)=\begin{cases} \sum\limits_{i=1}^{l}\sum\limits_{j=l+1}^{k_0}\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right)-\sum\limits_{i=l+1}^{k_0}\sum\limits_{j=k_0+1}^n\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right) &\text{if $s<0$}\\ \sum\limits_{i=k_0+1}^{l}\sum\limits_{j=l+1}^{n}\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right)-\sum\limits_{i=1}^{k_0}\sum\limits_{j=k_0+1}^{l}\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right) &\text{if $s>0$} \end{cases}.\end{aligned}$$ We will show that $\frac{1}{nd_{m_n}}\tilde{V}_n(l_n(s))$ converges to $h(s; \tau)$ in probability and that $\frac{1}{nd_{m_n}}V_n(l_n(s))$ converges in distribution to ${{\operatorname{sign}}}(s)B_H(s)\int J(x)dF(x)$ in $D\left[-M, M\right]$. We rewrite $\tilde{V}_n(l_n(s))$ in the following way: $$\begin{aligned} &\tilde{V}_n(l_n(s)) =-(k_0-l_n(s))(n-k_0) \int \left(F_{l_n(s), k_0}(x+h_n)-F_{l_n(s), k_0}(x)\right)dF_{k_0, n}(x) \intertext{if $s<0$,} &\tilde{V}_n(l_n(s)) =- k_0(l_n(s)-k_0) \int \left(F_{k_0}(x+h_n)-F_{k_0}(x)\right)dF_{k_0, l_n(s)}(x) \end{aligned}$$ if $s>0$. For $s<0$ the limit of $\frac{1}{nd_{m_n}}\tilde{V}_n(l_n(s))$ corresponds to the limit of $$\begin{aligned} -(1-\tau)d_{m_n}^{-1}(k_0-l_n(s)) \int \left(F(x+h_n)-F(x)\right)dF(x) \end{aligned}$$ due to Lemma \[Lem:int\_sq\_dens\] and stationarity of the random sequence $Y_i$, $i\geq 1$. 
Note that $$\begin{aligned} &d_{m_n}^{-1}(k_0-l_n(s)) \int \left(F(x+h_n)-F(x)\right)dF(x)\\ &=-d_{m_n}^{-1}\lfloor m_ns\rfloor h_n \int \frac{1}{h_n}\left(F(x+h_n)-F(x)\right)dF(x).\end{aligned}$$ The above expression converges to $-s\int f^2(x)dx$, since $h_n\sim \frac{d_{m_n}}{m_n}$. For $s>0$ the limit of $\frac{1}{nd_{m_n}}\tilde{V}_n(l_n(s))$ corresponds to the limit of $$\begin{aligned} - \tau d_{m_n}^{-1}(l_n(s)-k_0) \int \left(F(x+h_n)-F(x)\right)dF(x)\end{aligned}$$ due to Lemma \[Lem:int\_sq\_dens\] and stationarity of the random sequence $Y_i$, $i\geq 1$. Note that $$\begin{aligned} &d_{m_n}^{-1}(l_n(s)-k_0) \int \left(F(x+h_n)-F(x)\right)dF(x)\\ &=d_{m_n}^{-1}\lfloor m_ns\rfloor h_n\int \frac{1}{h_n}\left(F(x+h_n)-F(x)\right)dF(x).\end{aligned}$$ The above expression converges to $s\int f^2(x)dx$, since $h_n\sim \frac{d_{m_n}}{m_n}$. All in all, it follows that $\frac{1}{nd_{m_n}}\tilde{V}_n(l_n(s))$ converges to $h(s; \tau)$ defined by $$\begin{aligned} h(s; \tau):=\begin{cases} (1-\tau)s\int f^2(x)dx &\text{if $s< 0$}\\ -\tau s\int f^2(x)dx &\text{if $s\geq 0$} \end{cases}.\end{aligned}$$ In the following it is shown that $\frac{1}{nd_{m_n}}V_n(l_n(s))$ converges in distribution to $$\begin{aligned} {{\operatorname{sign}}}(s)B_H(s)\int J(x)dF(x), \ -M\leq s\leq M.\end{aligned}$$ Note that if $s<0$, $$\begin{aligned} V_n(l_n(s)) =&-l_n(s)(k_0-l_n(s))\int \left(F_{l_n(s), k_0}(x)-F(x)\right)dF_{l_n(s)}(x)\\ &-(k_0-l_n(s))(n-k_0)\int \left(F_{l_n(s), k_0}(x)-F(x)\right)dF_{k_0, n}(x)\\ &+l_n(s)(k_0-l_n(s))\int (F_{l_n(s)}(x)-F(x))dF(x)\\ &+(k_0-l_n(s))(n-k_0)\int \left(F_{k_0, n}(x)-F(x)\right)dF(x).\end{aligned}$$ If $s>0$, we have $$\begin{aligned} V_n(l_n(s)) =&(l_n(s)-k_0)(n-l_n(s))\int \left(F_{k_0, l_n(s)}(x)-F(x)\right)dF_{l_n(s), n}(x)\\ &+k_0(l_n(s)-k_0)\int\left(F_{k_0, l_n(s)}(x)-F(x)\right)dF_{k_0}(x)\\ &-(l_n(s)-k_0)(n-l_n(s))\int \left(F_{l_n(s), n}(x)-F(x)\right)dF(x)\\ &-k_0(l_n(s)-k_0)\int \left(F_{k_0}(x)- F(x)\right)dF(x).\end{aligned}$$ The arguments that appear in the proof of Lemma \[Lem:int\_sq\_dens\] can also be applied to show that the limit of $\frac{1}{nd_{m_n}}V_n(l_n(s))$ corresponds to the limit of $$\begin{aligned} \frac{1}{nd_{m_n}}\left(A_{1, n}(s)+A_{2, n}(s)+A_{3, n}(s)\right),\end{aligned}$$ where $$\begin{aligned} &A_{1, n}(s):= (-l_n(s)-n+k_0)(k_0-l_n(s))\int \left(F_{l_n(s), k_0}(x)-F(x)\right)dF(x) \intertext{if $s<0$,} &A_{1, n}(s):=(n-l_n(s)+k_0)(l_n(s)-k_0)\int \left(F_{k_0, l_n(s)}(x)-F(x)\right)dF(x) \intertext{if $s>0$,} &A_{2, n}(s):= \begin{cases} (k_0-l_n(s))l_n(s)\int (F_{l_n(s)}(x)-F(x))dF(x) &\text{if $s<0$}\\ -(l_n(s)-k_0)(n-l_n(s))\int\left(F_{l_n(s), n}(x)-F(x)\right)dF(x) &\text{if $s>0$} \end{cases},\\ &A_{3, n}(s):= \begin{cases} (k_0-l_n(s))(n-k_0)\int \left(F_{k_0, n}(x)-F(x)\right)dF(x) &\text{if $s<0$}\\ -(l_n(s)-k_0)k_0\int \left(F_{k_0}(x)-F(x)\right)dF(x) &\text{if $s>0$} \end{cases}.\end{aligned}$$ Note that for $s<0$ $$\begin{aligned} \frac{1}{nd_{m_n}}A_{2, n}(s)=-\frac{1}{nd_{m_n}}\lfloor m_ns\rfloor l_n(s)\int (F_{l_n(s)}(x)-F(x))dF(x). \end{aligned}$$ The above expression converges to $0$ uniformly in $s$, since $\frac{m_n}{d_{m_n}}=o(\frac{n}{d_n})$ and since $$\begin{aligned} &\sup\limits_{-M\leq s\leq 0}\left|d_n^{-1}l_n(s)\int (F_{l_n(s)}(x)-F(x))dF(x)\right|\\ &\leq \sup\limits_{x, \lambda}\left| d_n^{-1}\lfloor n\lambda\rfloor(F_{\lfloor n\lambda\rfloor}(x)-F(x))- B_H(\lambda) J(x)\right|\\ & \quad+\sup\limits_{0\leq\lambda\leq1}\left| B_H(\lambda)\right| \left|\int J(x)dF(x)\right|,\end{aligned}$$ i.e.
$\sup_{-M\leq s\leq 0}\left|d_n^{-1}l_n(s)\int (F_{l_n(s)}(x)-F(x))dF(x)\right|$ is bounded in probability. An analogous argument shows that $\frac{1}{nd_{m_n}}A_{3, n}(s)$ vanishes if $n$ tends to $\infty$. Therefore, it remains to show that $\frac{1}{nd_{m_n}}A_{1, n}(s)$ converges in distribution to a non-deterministic expression. Due to stationarity $$\begin{aligned} \frac{1}{nd_{m_n}}A_{1, n}(s)\overset{\mathcal{D}}{=} \frac{n+\lfloor m_ns\rfloor}{n}d_{m_n}^{-1}\lfloor m_ns\rfloor\int \left(F_{-\lfloor m_ns\rfloor}(x)-F(x)\right)dF(x)\end{aligned}$$ for $s<0$. As a result, $ \frac{1}{nd_{m_n}}A_{1, n}(s)$ converges in distribution to $-B_H(s)\int J(x)dF(x)$. If $s>0$, an application of the previous arguments shows that $\frac{1}{nd_{m_n}}A_{2, n}(s)$ and $\frac{1}{nd_{m_n}}A_{3, n}(s)$ converge to $0$ whereas $\frac{1}{nd_{m_n}}A_{1, n}(s)$ converges in distribution to $B_H(s)\int J(x)dF(x)$. All in all, it follows that $$\begin{aligned} \frac{1}{nd_{m_n}}\left(W_n(k_0+\lfloor m_n s\rfloor)-W_n(k_0)\right)\overset{\mathcal{D}}{\longrightarrow} {{\operatorname{sign}}}(s)B_H(s)\int J(x)dF(x)+h(s; \tau)\end{aligned}$$ in $D[-M, M]$. Furthermore, it follows that with the stronger normalization $h_nn^2$ the limit of $\frac{1}{h_n n^2}W_n(k_0+\lfloor m_n s\rfloor)$ corresponds to the limit of $\frac{1}{h_n n^2}W_n(k_0)$. We have $$\begin{aligned} \frac{1}{h_nn^2}W_n(k_0) &=\frac{1}{h_nn^2}k_0(n-k_0)\int\left(F_{k_0}(x+h_n)-F_{k_0}(x)\right)dF_{k_0, n}(x)\\ &\quad +\frac{1}{h_nn^2}\sum\limits_{i=1}^{k_0}\sum\limits_{j=k_0+1}^n\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right).\end{aligned}$$ The second summand on the right hand side vanishes as $n$ tends to $\infty$, since $h_n^{-1}=o\left(n/d_n\right)$. Due to Lemma \[Lem:int\_sq\_dens\] the limit of $d_n^{-1}k_0\int\left(F_{k_0}(x+h_n)-F_{k_0}(x)\right)dF_{k_0, n}(x)$ corresponds to the limit of $d_n^{-1}k_0\int\left(F(x+h_n)-F(x)\right)dF(x)$. Therefore, $$\begin{aligned} h_n^{-1}\int\left(F_{k_0}(x+h_n)-F_{k_0}(x)\right)dF_{k_0, n}(x)\longrightarrow \int f^2(x)dx \quad a.s.\end{aligned}$$ In addition, $\frac{k_0}{n}\frac{(n-k_0)}{n}\longrightarrow \tau(1-\tau)$. From this we can conclude that $$\begin{aligned} \frac{1}{h_nn^2}\left(W_n(k_0+m_ns)+W_n(k_0)\right)\overset{P}{\longrightarrow}2\tau(1-\tau)\int f^2(x)dx\end{aligned}$$ in $D[-M, M]$. This completes the proof of the first assertion in Theorem \[thm:asymp\_distr\]. In order to show that $$\begin{aligned} &m_n^{-1}(\hat{k}-k_0)\overset{\mathcal{D}}{\longrightarrow}{{\operatorname{argmax}}}_{-\infty < s <\infty}\left({{\operatorname{sign}}}(s)B_H(s)\int J(x)dF(x)+h(s; \tau)\right),\end{aligned}$$ we make use of Lemma \[Lem:sargmax\]. For this purpose, we note that according to Lifshits’ criterion for unimodality of Gaussian processes (see Theorem 1.1 in [@Ferger1999]) the random function $G_{H, \tau}(s)={{\operatorname{sign}}}(s)B_H(s)\int J(x)dF(x)+h(s; \tau)$ attains its maximal value in $[-M, M]$ at a unique point with probability $1$ for every $M>0$. Hence, an application of Lemma \[Lem:sargmax\] in the appendix yields $$\begin{aligned} {{\operatorname{sargmax}}}_{s\in [-M, M]}\frac{1}{e_n}\left(W_n^2(k_0+\lfloor m_n s\rfloor)-W_n^2(k_0)\right)\overset{\mathcal{D}}{\longrightarrow}{{\operatorname{argmax}}}_{s\in [-M, M]}G_{H, \tau}(s).\end{aligned}$$ It remains to be shown that instead of considering the ${{\operatorname{sargmax}}}$ in $[-M, M]$ we may as well consider the smallest ${{\operatorname{argmax}}}$ in $\mathbb{R}$. 
By the law of the iterated logarithm for fractional Brownian motions we have $\lim_{|s|\rightarrow\infty}\frac{B_H(s)}{s}=0$ a.s. so that ${{\operatorname{sign}}}(s)B_H(s)\int J(x)dF(x)+h(s; \tau)\longrightarrow -\infty$ a.s. if $|s|\rightarrow \infty$. Therefore, the limit corresponds to ${{\operatorname{argmax}}}_{s\in (-\infty, \infty)}G_{H, \tau}(s)$ if $M$ is sufficiently large. For $M>0$ define $$\begin{aligned} \hat{\hat{k}}(M):=\min\left\{k: \left|k_0- k\right|\leq Mm_n, \ \left|W_{n}(k)\right|=\max\limits_{|k_0-i|\leq Mm_n}\left|W_{n}(i)\right|\right\}.\end{aligned}$$ Note that $$\begin{aligned} &\Bigl|{{\operatorname{sargmax}}}_{s\in [-M, M]}\left(W_n^2(k_0+\lfloor m_n s\rfloor)-W_n^2(k_0)\right)\\ &-{{\operatorname{sargmax}}}_{s\in (-\infty, \infty)}\left(W_n^2(k_0+\lfloor m_n s\rfloor)-W_n^2(k_0)\right)\Bigr|\\ &=m_n^{-1}\left|\hat{\hat{k}}(M)-\hat{k}\right| + \mathcal{O}_P(1).\end{aligned}$$ Therefore, we have to show that for some $M\in \mathbb{R}$ $$\begin{aligned} m_n^{-1}\left|\hat{\hat{k}}(M)-\hat{k}\right| \overset{P}{\longrightarrow}0\end{aligned}$$ as $n$ tends to infinity. Note that $$\begin{aligned} P\left(\hat{k}=\hat{\hat{k}}(M)\right) &= P\left(\left|\hat{k}-k_0\right|\leq Mm_n\right)\\ &=1-P\left(\left|\hat{k}-k_0\right|> Mm_n\right).\end{aligned}$$ Furthermore, we have $$\begin{aligned} &\lim\limits_{M\rightarrow\infty}\liminf_{n\rightarrow \infty}\left(1-P\left(|\hat{k}-k_0|>Mm_n\right)\right)\\ &=1-\lim\limits_{M\rightarrow\infty}\limsup_{n\rightarrow \infty}P\left(|\hat{k}-k_0|>Mm_n\right)\\ &=1\end{aligned}$$ because $|\hat{k}-k_0|=O_P(m_n)$ by Theorem \[convergence rate\]. As a result, we have $$\begin{aligned} \lim\limits_{M\rightarrow\infty}\liminf_{n\rightarrow \infty}P\left(\hat{k}=\hat{\hat{k}}(M)\right)=1.\end{aligned}$$ Hence, for all $\varepsilon>0$ there is an $M_0\in \mathbb{R}$ and an $n_0\in \mathbb{N}$ such that $$\begin{aligned} P\left(\hat{k}\neq \hat{\hat{k}}(M)\right)<\varepsilon\end{aligned}$$ for all $n\geq n_0$ and all $M\geq M_0$. This concludes the proof of Theorem \[thm:asymp\_distr\]. Auxiliary Results ================= In the following we prove some Lemmas that are needed for the proofs of our main results. Lemma \[Lem:W\_process\_under\_A\] characterizes the asymptotic behavior of the Wilcoxon process under the assumption of a change-point in the mean. It is used to prove consistency of the change-point estimators $\hat{k}_{\text{W}}$ and $\hat{k}_{SW}$. \[Lem:W\_process\_under\_A\] Define $\delta_{\tau}:[0, 1]\longrightarrow \mathbb{R}$ by $$\begin{aligned} \delta_{\tau}(\lambda):=\begin{cases} \lambda(1-\tau) &\text{if $0\leq \lambda\leq \tau$}\\ \tau(1-\lambda) &\text{if $\tau<\lambda\leq 1$} \end{cases}.\end{aligned}$$ Assume that Assumption \[ass:subordination\] holds and that either 1. $h_n = h$ with $h\neq 0$, or 2. $\lim_{n\rightarrow \infty}h_n=0$ with $h_n^{-1}=\hbox{o}\left(\frac{n}{d_{n, r}}\right)$ and $F$ has a bounded density $f$. Then, we have $$\begin{aligned} \frac{1}{n^2h_n}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\overset{P}{\longrightarrow} C\delta_{\tau}(\lambda), \ 0\leq \lambda \leq 1,\end{aligned}$$ where $$\begin{aligned} C:=\begin{cases} \frac{1}{h}\int \left(F(x+h)-F(x)\right)dF(x) &\text{if } h_n = h, \ h\neq 0,\\ \int f^2(x)dx &\text{if } \lim_{n\rightarrow \infty}h_n= 0 \text{ and } h_n^{-1}=\hbox{o}\left(\frac{n}{d_{n, r}}\right) \end{cases}.\notag\end{aligned}$$ First, consider the case $h_n=h$ with $h\neq 0$.
For $\lfloor n\lambda\rfloor\leq \lfloor n\tau\rfloor$ we have $$\begin{aligned} &\frac{1}{n^2}\sum \limits_{i=1}^{\lfloor n\lambda\rfloor}\sum \limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\\ &=\frac{1}{n^2}\sum \limits_{i=1}^{\lfloor n\lambda\rfloor}\sum \limits_{j=\lfloor n\tau\rfloor+1}^n\left(1_{\left\{Y_i\leq Y_j+h\right\}}-\frac{1}{2}\right) +\frac{1}{n^2}\sum \limits_{i=1}^{\lfloor n\lambda\rfloor}\sum \limits_{j=\lfloor n\lambda\rfloor+1}^{\lfloor n\tau\rfloor}\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right).\end{aligned}$$ By Lemma 1 in [@Betken2016] the first summand on the right hand side of the equation converges in probability to $\lambda(1-\tau)\int\left(F(x+h)-F(x)\right)dF(x)$ uniformly in $\lambda\leq \tau$. The second summand vanishes as $n$ tends to $\infty$. If $\lfloor n\lambda\rfloor> \lfloor n\tau\rfloor$, $$\begin{aligned} &\frac{1}{n^2}\sum \limits_{i=1}^{\lfloor n\lambda\rfloor}\sum \limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\\ &=\frac{1}{n^2}\sum \limits_{i=1}^{\lfloor n\tau\rfloor}\sum \limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{Y_i\leq Y_j+h\right\}}-\frac{1}{2}\right)+\frac{1}{n^2}\sum \limits_{i=\lfloor n\tau\rfloor +1}^{\lfloor n\lambda\rfloor}\sum \limits_{j=\lfloor n\lambda\rfloor+1}^{n}\left(1_{\left\{Y_i\leq Y_j\right\}}-\frac{1}{2}\right).\end{aligned}$$ In this case, the first summand on the right hand side of the equation converges in probability to $(1-\lambda)\tau\int\left(F(x+h)-F(x)\right)dF(x)$ uniformly in $\lambda\geq\tau$ due to Lemma 1 in [@Betken2016] while the second summand converges in probability to zero. All in all, it follows that $$\begin{aligned} \frac{1}{n^2}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor }\sum\limits_{j=\lfloor n\lambda\rfloor +1}^{n}\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right) \overset{P}{\longrightarrow}\delta_{\tau}(\lambda)\int\left(F(x+h)-F(x)\right)dF(x)\end{aligned}$$ uniformly in $\lambda\in[0,1]$. If $\lim_{n\rightarrow \infty}h_n= 0$, the process $$\begin{aligned} &\frac{1}{nd_{n, r}}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\\ &-\frac{n}{d_{n, r}}\delta_{\tau}(\lambda)\int\left(F(x+h_n)-F(x)\right)dF(x), \ 0\leq \lambda \leq 1,\end{aligned}$$ converges in distribution to $$\begin{aligned} \frac{1}{r!}\int J_r(x)dF(x)\left(Z_H^{(r)}(\lambda)-\lambda Z_H^{(r)}(1)\right), \ 0\leq\lambda \leq 1,\end{aligned}$$ due to Theorem 3.1 in [@DehlingRoochTaqqu2013b]. By assumption $h_n^{-1}=o\left(\frac{n}{d_{n, r}}\right)$, so that $$\begin{aligned} \frac{1}{n^2h_n}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{X_i\leq X_j\right\}}-\frac{1}{2}\right)\overset{P}{\longrightarrow} \delta_{\tau}(\lambda)\int f^2(x)dx, \ 0\leq\lambda \leq 1.\end{aligned}$$ The proof of Theorem \[convergence rate\], which establishes a convergence rate for the estimator $\hat{k}_{\text{W}}$, requires the following result: \[Lem\] Suppose that Assumption \[ass:subordination\] holds and let $h_n$, $n\in \mathbb{N}$, be a sequence of real numbers with $\lim_{n\rightarrow \infty}h_n=h$. 1. 
The process $$\begin{aligned} \frac{1}{nd_{n, r}}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\tau\rfloor+1}^n\left(1_{\left\{Y_i\leq Y_j+h_n\right\}}-\int F(x+h_n)dF(x)\right), \ 0\leq\lambda \leq \tau,\end{aligned}$$ converges in distribution to $$\begin{aligned} &(1-\tau)\frac{1}{r!}Z_H^{(r)}(\lambda)\int J_r(x+h)dF(x)\\ &-\lambda\frac{1}{r!}\left(Z_H^{(r)}(1)-Z_H^{(r)}(\tau)\right)\int J_r(x)dF(x+h)\end{aligned}$$ uniformly in $\lambda\leq \tau$. 2. The process $$\begin{aligned} \frac{1}{nd_{n, r}}\sum\limits_{i=1}^{\lfloor n\tau\rfloor}\sum\limits_{j=\lfloor n\lambda\rfloor+1}^n\left(1_{\left\{Y_i\leq Y_j+h_n\right\}}-\int F(x+h_n)dF(x)\right), \ \tau\leq\lambda \leq 1,\end{aligned}$$ converges in distribution to $$\begin{aligned} &(1-\lambda)\frac{1}{r!}Z_H^{(r)}(\tau)\int J_r(x+h)dF(x)\\ &-\tau\frac{1}{r!}\left(Z_H^{(r)}(1)-Z_H^{(r)}(\lambda)\right)\int J_r(x)dF(x+h)\end{aligned}$$ uniformly in $\lambda\geq \tau$. We give a proof for the first assertion only as the convergence of the second term follows by an analogous argument. The steps in this proof correspond to the argument that proves Theorem 1.1 in [@DehlingRoochTaqqu2013a]. For $\lambda\leq \tau$ it follows that $$\begin{aligned} \sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\tau\rfloor+1}^n1_{\left\{Y_i\leq Y_j+h_n\right\}} &=\left(n-\lfloor n\tau\rfloor\right)\lfloor n\lambda\rfloor \int F_{\lfloor n\lambda\rfloor}( x+h_n) dF_{\lfloor n \tau \rfloor +1, n}(x).\end{aligned}$$ This yields the following decomposition: $$\begin{aligned} \label{decomposition} &\frac{1}{nd_{n, r}}\sum\limits_{i=1}^{\lfloor n\lambda\rfloor}\sum\limits_{j=\lfloor n\tau\rfloor+1}^n\left(1_{\left\{Y_i\leq Y_j+h_n\right\}}-\int F(x+h_n)dF(x)\right)\\ &=\frac{n-\lfloor n\tau\rfloor}{n}d_{n, r}^{-1}\lfloor n\lambda\rfloor \int\left( F_{\lfloor n\lambda\rfloor}( x+h_n) -F(x+h_n)\right)dF_{\lfloor n \tau \rfloor +1, n}(x)\notag\\ &\quad +\frac{n-\lfloor n\tau\rfloor}{n}d_{n, r}^{-1}\lfloor n\lambda\rfloor \int F(x+h_n) d\left(F_{\lfloor n \tau \rfloor +1, n}-F\right)(x).\notag\end{aligned}$$ For the first summand we have $$\begin{aligned} &\sup\limits_{0\leq\lambda\leq \tau}\Bigl|d_{n, r}^{-1}\lfloor n\lambda\rfloor \int \left(F_{\lfloor n\lambda\rfloor}( x+h_n) -F(x+h_n)\right)dF_{\lfloor n \tau \rfloor +1, n}(x)\\ &\qquad \ \ \ -\frac{1}{r!}Z_H^{(r)}(\lambda)\int J_r(x+h)dF(x)\Bigr|\\ &\leq\sup\limits_{0\leq\lambda\leq \tau}\Bigl|\int d_{n, r}^{-1}\lfloor n\lambda\rfloor\left(F_{\lfloor n\lambda\rfloor}( x+h_n) -F(x+h_n)\right)\\ &\qquad\quad \ \ \ -\frac{1}{r!}Z_H^{(r)}(\lambda)J_r(x+h_n)dF_{\lfloor n \tau \rfloor +1, n}(x)\Bigr|\\ &\quad +\frac{1}{r!}\sup\limits_{0\leq\lambda\leq \tau}\left|Z_H^{(r)}(\lambda)\right|\left|\int\left(J_r(x+h_n)-J_r(x+h)\right)dF_{\lfloor n \tau \rfloor +1, n}(x)\right|\\ &\quad +\frac{1}{r!}\sup\limits_{0\leq\lambda\leq \tau}\left|Z_H^{(r)}(\lambda)\right|\left|\int J_r(x+h)d\left(F_{\lfloor n \tau \rfloor +1, n}-F\right)(x)\right|.\end{aligned}$$ We will show that each of the summands on the right hand side converges to $0$. The first summand converges to $0$ because of the empirical non-central limit theorem of [@DehlingTaqqu1989]. In order to show convergence of the second and third summand, note that $\sup_{0\leq\lambda\leq \tau}|Z_H^{(r)}(\lambda)|<\infty$ a.s. since the sample paths of the Hermite processes are almost surely continuous. 
Furthermore, we have $$\begin{aligned} \int J_r(x+h)dF_{\lfloor n \tau \rfloor +1, n}(x) &=-\int \int 1_{\left\{x+h\leq G(y)\right\}}H_r(y)\varphi(y)dy dF_{\lfloor n \tau \rfloor +1, n}(x)\\ &=-\int \int 1_{\left\{x\leq G(y)-h\right\}}dF_{\lfloor n \tau \rfloor +1, n}(x)H_r(y)\varphi(y)dy \\ &=-\int F_{\lfloor n \tau \rfloor +1, n}(G(y)-h) H_r(y)\varphi(y)dy.\end{aligned}$$ Analogously, it follows that $$\begin{aligned} \int J_r(x+h_n)dF_{\lfloor n \tau \rfloor +1, n}(x)=-\int F_{\lfloor n \tau \rfloor +1, n}(G(y)-h_n) H_r(y)\varphi(y)dy.\end{aligned}$$ Therefore, we may conclude that $$\begin{aligned} &\left|\int \left(J_r(x+h_n)-J_r(x+h)\right)dF_{\lfloor n \tau \rfloor +1, n}(x)\right|\\ &\leq 2\sup\limits_{x\in\mathbb{R}}\left|F_{\lfloor n \tau \rfloor +1, n}(x)-F(x)\right|\int \left|H_r(y)\right|\varphi(y)dy\\ &\quad + \int \left|F(G(y)-h_n)-F(G(y)-h)\right|\left|H_r(y)\right|\varphi(y)dy.\end{aligned}$$ The first expression on the right hand side converges to $0$ by the Glivenko-Cantelli theorem and the fact that $\int \left|H_r(y)\right|\varphi(y)dy<\infty$; the second expression converges to $0$ due to continuity of $F$ and the dominated convergence theorem. To show convergence of the third summand note that $$\begin{aligned} &\left|\int J_r(x+h)d\left(F_{\lfloor n \tau \rfloor +1, n}(x)-F(x)\right)\right|\\ &=\frac{1}{n-\lfloor n\tau\rfloor}\left|\sum\limits_{i=\lfloor n\tau\rfloor +1}^n\left(J_r(Y_i+h)-{{ \operatorname E}}\ J_r(Y_i+h)\right)\right|\\ &\leq\frac{n}{n-\lfloor n\tau\rfloor}\frac{1}{n}\left|\sum\limits_{i=1}^n\left(J_r(Y_i+h)-{{ \operatorname E}}\ J_r(Y_i+h) \right)\right|\\ &\quad+\frac{\lfloor n\tau\rfloor}{n-\lfloor n\tau\rfloor}\frac{1}{\lfloor n\tau\rfloor}\left|\sum\limits_{i=1}^{\lfloor n\tau\rfloor}\left(J_r(Y_i+h)-{{ \operatorname E}}\ J_r(Y_i+h)\right)\right|.\end{aligned}$$ For both summands on the right hand side of the above inequality the ergodic theorem implies almost sure convergence to $0$. For the second summand in we have $$\begin{aligned} &\frac{n-\lfloor n\tau\rfloor}{n}d_{n, r}^{-1}\lfloor n\lambda\rfloor \int F(x+h_n) d\left(F_{\lfloor n \tau \rfloor +1, n}-F\right)(x)\\ &=-\frac{\lfloor n\lambda\rfloor}{n}d_{n, r}^{-1}(n-\lfloor n\tau\rfloor) \int \left(F_{\lfloor n \tau \rfloor +1, n}(x)-F(x)\right)dF(x+h_n).\end{aligned}$$ Since $\frac{\lfloor n\lambda\rfloor}{n}\longrightarrow \lambda$ uniformly in $\lambda$, consider $$\begin{aligned} &\Biggl|d_{n, r}^{-1}(n-\lfloor n\tau\rfloor) \int \left(F_{\lfloor n \tau \rfloor +1, n}(x)-F(x)\right)dF(x+h_n)\\ & \ -\frac{1}{r!}(Z_H^{(r)}(1)-Z_H^{(r)}(\tau))\int J_r(x)dF(x+h_n)\Biggr|\\ &\leq \left|\int d_{n, r}^{-1}n\left(F_{n}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1) J_r(x)dF(x+h)\right|\\ &\quad +\left|\int d_{n, r}^{-1}\lfloor n\tau\rfloor\left(F_{\lfloor n\tau\rfloor}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(\tau) J_r(x)dF(x+h_n)\right|\\ &\quad +\frac{1}{r!}\left|Z_H^{(r)}(1)-Z_H^{(r)}(\tau)\right|\left|\int J_r(x)d\left(F(x+h_n)-F(x+h)\right)\right| .\end{aligned}$$ The first and second summand on the right hand side converge to $0$ because of the empirical process non-central limit theorem. For the third summand we have $$\begin{aligned} \left|\int J_r(x)d\left(F(x+h_n)-F(x+h)\right)\right| =\left|\int \left(J_r(x-h_n)-J_r(x-h)\right)dF(x)\right|.\end{aligned}$$ As shown before in this proof, convergence to $0$ follows by the Glivenko-Cantelli theorem and the dominated convergence theorem. Lemma \[Lem:int\_sq\_dens\] and Lemma \[Lem:sargmax\] are needed for the proof of Theorem \[thm:asymp\_distr\]. 
\[Lem:int\_sq\_dens\] Suppose that Assumption \[ass:subordination\] holds and let $l_n$, $n\in \mathbb{N}$, and $h_n$, $n\in \mathbb{N}$, be two sequences with $l_n\rightarrow \infty$, $\lim_{n\rightarrow \infty}h_n =h$ and $l_n=\mathcal{O}(n)$. Then, it holds that $$\begin{aligned} \sup\limits_{0\leq s \leq 1}\Biggl|& d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F_{\lfloor l_ns\rfloor}(x+h_n)-F_{\lfloor l_ns\rfloor}(x+h)\right)dF_{n}(x)\notag\\ &-d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F(x+h_n)-F(x+h)\right)dF(x)\Biggr|\label{Lem3:1} \intertext{and} \sup\limits_{0\leq s \leq 1}\Biggl| &d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F_{n}(x+h_n)-F_{n}(x+h)\right)dF_{\lfloor l_ns\rfloor}(x)\notag\\ &-d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F(x+h_n)-F(x+h)\right)dF(x)\Biggr|\label{Lem3:2}\end{aligned}$$ converge to $0$ almost surely. For the expression the triangle inequality yields $$\begin{aligned} &\sup\limits_{0\leq s \leq 1}\Biggl| d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F_{\lfloor l_ns\rfloor}(x+h_n)-F_{\lfloor l_ns\rfloor}(x+h)\right)dF_n(x)\\ &\qquad \ \ \ -d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F(x+h_n)-F(x+h)\right)dF(x)\Biggr|\\ &\leq 2\sup\limits_{s\in \left[0, 1\right], x\in \mathbb{R}}\left| d_{l_n, r}^{-1}\lfloor l_ns\rfloor \left(F_{\lfloor l_ns\rfloor}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(s)J_r(x)\right|\\ &\quad+\frac{1}{r!}\sup\limits_{0\leq s \leq 1}\left|Z_H^{(r)}(s)\right| \left| \int (J_r(x+h_n)-J_r(x+h))dF_{n}(x)\right|\\ &\quad+\left|d_{l_n, r}^{-1}l_n \int\left (F(x+h_n)-F(x+h)\right)d\left(F_{n}-F\right)(x)\right|.\end{aligned}$$ The first summand converges to $0$ because of the empirical non-central limit theorem. Moreover, $\sup_{0\leq s \leq 1}\left|Z_H^{(r)}(s)\right|<\infty$ a.s. due to the fact that $Z_H^{(r)}$ is continuous with probability $1$. It is shown in the proof of Lemma \[Lem\] that $\left| \int (J_r(x+h_n)-J_r(x+h))dF_{n}(x)\right|\longrightarrow 0$. As a result, the second summand vanishes as $n$ tends to $\infty$. Furthermore, note that $$\begin{aligned} &\left|d_{l_n, r}^{-1}l_n \int\left (F(x+h_n)-F(x+h)\right)d\left(F_{n}-F\right)(x)\right| \\ &\leq K\left| \int \left(d_{n, r}^{-1}n\left(F_{n}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1)J_r(x)\right)dF(x+h_n)\right|\\ &\quad + K\left| \int \left(d_{n, r}^{-1}n \left(F_{n}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1)J_r(x)\right)dF(x+h)\right| \\ &\quad+K\frac{1}{r!}\left|Z_H^{(r)}(1)\right|\left|\int J_r(x)d\left(F(x+h_n)-F(x+h)\right)\right|\end{aligned}$$ for some constant $K$ and $n$ sufficiently large, since $l_n=\mathcal{O}(n)$. The first and second summand on the right hand side of the above inequality converge to $0$ due to the empirical process non-central limit theorem. In addition, we have $$\begin{aligned} \left|\int J_r(x)d\left(F(x+h_n)-F(x+h)\right)\right| =\left|\int \left(J_r(x-h_n)-J_r(x-h)\right)dF(x)\right|\end{aligned}$$ Therefore, it follows by the same argument as in the proof of Lemma \[Lem\] that the third summand converges to $0$. 
Considering the term in , note that $$\begin{aligned} &\sup\limits_{0\leq s \leq 1}\Biggl| d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F_{n}(x+h_n)-F_{n}(x+h)\right)dF_{\lfloor l_ns\rfloor}(x)\\ &\qquad \ \ -d_{l_n, r}^{-1}\lfloor l_ns\rfloor \int\left (F(x+h_n)-F(x+h)\right)dF(x)\Biggr|\\ &\leq 2\sup\limits_{0\leq s \leq 1, x\in \mathbb{R}}\left| d_{l_n, r}^{-1}\lfloor l_ns\rfloor \left(F_{\lfloor l_ns\rfloor}(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(s)J_r(x)\right|\\ & \quad+\frac{1}{r!}\sup\limits_{0\leq s \leq 1}\left| Z_H^{(r)}(s)\right| \left|\int J_r(x)d\left(F_n(x+h_n)-F_n(x+h)\right)\right|\\ &\quad+2K\sup\limits_{ x\in \mathbb{R}}\left| d_{n, r}^{-1}n \left (F_n(x)-F(x)\right)-\frac{1}{r!}Z_H^{(r)}(1)J_r(x)\right|\\ &\quad+\frac{1}{r!}\left| Z_H^{(r)}(1)\right|\int\left|J_r(x+h_n)-J_r(x+h)\right|dF(x)\end{aligned}$$ for some constant $K$ and $n$ sufficiently large. The first and third summand on the right hand side of the above inequality converge to $0$ due to the empirical process non-central limit theorem. The last summand converges to $0$ due to the corresponding argument in the proof of Lemma \[Lem\]. It holds that $$\begin{aligned} & \left|\int J_r(x)d\left(F_n(x+h_n)-F_n(x+h)\right)\right|\\ &=\left|\int \left(F_n(G(y)-h_n)-F_n(G(y)-h)\right)H_r(y)\varphi(y)dy\right|\\ &\leq \left(2\sup\limits_{x \in \mathbb{R}}\left|F_n(x)-F(x)\right|+\sup\limits_{x \in \mathbb{R}}\left|F(x-h_n)-F(x-h)\right|\right)\int \left|H_r(y)\right|\varphi(y)dy.\end{aligned}$$ The right hand side of the above inequality converges to $0$ almost surely due to the Glivenko-Cantelli theorem and because $F$ is uniformly continuous. As a result, the second summand converges to $0$, as well. Lemma \[Lem:sargmax\] establishes a condition under which convergence in distribution of a sequence of random variables entails convergence of the smallest argmax of the sequence. \[Lem:sargmax\] Let $K$ be a compact interval and denote by $D(K)$ the corresponding Skorohod space, i.e. the collection of all functions $f:K\longrightarrow\mathbb{R}$ which are right-continuous with left limits. Assume that $Z_n$, $n\in \mathbb{N}$, are random variables taking values in $D(K)$ and that $Z_n\overset{\mathcal{D}}{\longrightarrow}Z$, where (with probability $1$) $Z$ is continuous and $Z$ has a unique maximizer. Then ${{\operatorname{sargmax}}}(Z_n)\overset{\mathcal{D}}{\longrightarrow}{{\operatorname{sargmax}}}(Z)$. Due to Skorohod’s representation theorem there exist random variables $\tilde{Z}_n$ and $\tilde{Z}$ defined on a common probability space $(\tilde{\Omega}, \tilde{F}, \tilde{P})$, such that $\tilde{Z}_n\overset{\mathcal{D}}{=}Z_n$, $\tilde{Z}\overset{\mathcal{D}}{=}Z$ and $\tilde{Z}_n\overset{a.s.}{\longrightarrow}\tilde{Z}$. Due to Lemma 2.9 in [@SeijoSen2011] the smallest argmax functional is continuous at $W$ (with respect to the Skorohod-metric and the sup-norm metric) if $W\in D(K)$ is a continuous function which has a unique maximizer. Since (with probability $1$) $Z$ is continuous with unique maximizer, ${{\operatorname{sargmax}}}(\tilde{Z}_n)\overset{a.s.}{\longrightarrow}{{\operatorname{sargmax}}}(\tilde{Z})$. As almost sure convergence implies convergence in distribution, we have ${{\operatorname{sargmax}}}(\tilde{Z}_n)\overset{\mathcal{D}}{\longrightarrow}{{\operatorname{sargmax}}}(\tilde{Z})$ and therefore ${{\operatorname{sargmax}}}(Z_n)\overset{\mathcal{D}}{\longrightarrow}{{\operatorname{sargmax}}}(Z)$. 
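The objects analyzed in these proofs, the Wilcoxon process $W_n$ and the estimator $\hat{k}_W$, are defined in the main part of the paper and are not reproduced in this excerpt. Purely as an illustration, the following Python sketch computes statistics of the assumed form $W_n(k)=\sum_{i\leq k}\sum_{j>k}\left(1_{\{X_i\leq X_j\}}-\frac{1}{2}\right)$ and takes the smallest maximizer of $|W_n(k)|$ as the change-point estimate; the function name and the toy data are ours, not the paper's.

```python
import numpy as np

def wilcoxon_changepoint(x):
    """Sketch: W_n(k) = sum_{i<=k} sum_{j>k} (1{x_i <= x_j} - 1/2), k = 1, ..., n-1,
    together with the smallest maximizer of |W_n(k)| as change-point estimate."""
    x = np.asarray(x, dtype=float)
    n = x.size
    w = np.empty(n - 1)
    for k in range(1, n):
        # count pairs with i <= k < j and x_i <= x_j, then center by k*(n-k)/2
        w[k - 1] = np.sum(x[:k, None] <= x[None, k:]) - 0.5 * k * (n - k)
    k_hat = int(np.argmax(np.abs(w))) + 1  # np.argmax returns the smallest maximizer
    return w, k_hat

# toy usage: level shift of size 1 after observation 150 in an i.i.d. series of length 300
# (i.i.d. noise is used here only for illustration; the paper studies long-range dependent data)
rng = np.random.default_rng(0)
y = rng.standard_normal(300)
y[150:] += 1.0
_, k_hat = wilcoxon_changepoint(y)
print(k_hat)
```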
[^1]: If $r=1$, the Hermite process equals a standard fractional Brownian motion process with Hurst parameter $H=1-\frac{D}{2}$. We refer to [@Taqqu1979] for a general definition of Hermite processes.
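Since the footnote identifies the $r=1$ Hermite process with a fractional Brownian motion with Hurst parameter $H=1-\frac{D}{2}$, test data for this setting can be generated from fractional Gaussian noise. The following sketch is ours, not the paper's: it simulates fractional Gaussian noise exactly via a Cholesky factor of its autocovariance and adds a level shift; the parameter values and the choice $G(t)=t$ (Hermite rank $1$) are illustrative assumptions. The resulting series can then be fed to a statistic such as the one sketched above.

```python
import numpy as np

def fractional_gaussian_noise(n, hurst, rng=None):
    """Exact (O(n^3)) simulation of fractional Gaussian noise via a Cholesky factor of the
    autocovariance gamma(k) = 0.5*(|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1.0) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1.0) ** (2 * hurst))
    cov = gamma[np.abs(np.subtract.outer(k, k))]  # Toeplitz covariance matrix
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

# D = 0.4 gives H = 1 - D/2 = 0.8; G(t) = t has Hermite rank r = 1,
# and a level shift of height 1 is added after k_0 = 350 (all values illustrative)
D = 0.4
xi = fractional_gaussian_noise(500, hurst=1 - D / 2, rng=np.random.default_rng(1))
x = xi + np.where(np.arange(500) >= 350, 1.0, 0.0)
```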
Warehouse layout and design directly affect the efficiency of any business operation, from manufacturing and assembly to order fulfillment. Whether you’re planning a shipping operation or designing your space around manufacturing or assembly, a sound warehouse floor plan will help you minimize costs and maximize productivity. What to Know Before Creating Your Warehouse Layout Before launching into your warehouse design and layout planning process, you need to consider the order fulfillment methods you plan to use. Plus, you need to think through the various needs you have—from space utilization and storage options to aisle layout and production area workflows. You also need to be well-informed regarding the many warehouse storage and shelving options available, as well as equipment that will help boost warehouse productivity and efficiency. Finally, you need to keep your business inventory management systems in mind, as your layout will impact your ability to manage inventory effectively. These are the five steps that a warehouse layout and design process must include: - Create a warehouse layout schematic - Plan your warehouse layout for efficient space utilization - Understand your warehouse storage & work area equipment options - Use efficient warehouse floor plan traffic flow strategies - Test your warehouse traffic flow plan Let’s look at the first step in warehouse layout planning, which is creating a warehouse layout schematic. 1. Create a Warehouse Layout Schematic Your new warehouse space is a blank slate. Your goal is to transform it into a productive workspace that accomplishes your business goals. A good warehouse layout always starts with putting it all down on paper first, no matter the size of your space. The easiest way to do this is to use a copy of your warehouse blueprint, especially if your space is large or not a standard rectangle shape. If you’re renting, your landlord might be able to provide a blueprint you can use. If you can’t get your hands on a blueprint, it’s easy to draw up your own warehouse schematic on grid paper. When drawing your layout, plan as though one square on the grid paper equals 1 square foot in your warehouse. That way, the spatial relationships on your plan will match your actual warehouse space. When using a paper schematic, attach it to a piece of poster board or foam core so you’ll have a sturdy platform on which to design your warehouse layout. Then, overlay a piece of tracing paper. This lets you sketch and play with different shelving and equipment arrangements without marking up your original. You can use paper cutouts to represent shelving and work tables and move them around to test different layouts. You can also use layout software to create your warehouse floor plan schematic. The grid-based layout shown in the images above were created using Inkscape, a free graphic design program with an optional grid background. If your budget allows, you can use an online layout tool that offers specific options for warehouse design, such as SmartDraw. A single user plan with SmartDraw costs $297 and includes unlimited use. The upside with an online space planning and layout tool is that you can easily experiment with different layout approaches, as online tools allow you to move elements around on your screen with ease. Whether you choose to design your warehouse layout on grid paper or with an online layout tool, it’s important to ensure that the warehouse measurements you’re using are accurate. 
This means measuring your warehouse interior spaces yourself. In warehouse space planning, which we’ll discuss in the next section, you need to take every inch into account. Failure to do so can lead to disaster once you start bringing in shelving and warehouse equipment, which may not fit if your warehouse measurements are inaccurate. You don’t want to be making last-minute warehouse layout changes that can be avoided easily with proper planning. So, pull out a distance tape measure or rolling tape measure to take accurate measurements from the start. Once you have a printed or online schematic with measurements drawn to scale, note any stationary features such as columns or supports, office area buildouts, sloping floors, stairways, installed equipment, and overhead doors. These areas will place restrictions on your warehouse floor plan, so you want to note them on your warehouse layout schematic accurately. Many warehouse operations set aside some space for offices. In the example below, the office buildout takes a chunk out of the middle. A rough block-out of space is all you need, with one exception. Be sure to note when office doors open out into the warehouse as, if you omit this fact, you might accidentally block door access. In the example below, you can also see that the receiving and shipping pick up doors have been noted on the warehouse layout. Most warehouses require special areas for receiving and shipping out inventory, and hence, be sure to include these entrances and exits on your design schematic. Once you have noted major features on your warehouse design schematic, you’re ready for the next step. It’s time to start planning your warehouse layout. 2. Plan Your Warehouse for Efficient Space Utilization If you want to create an efficient warehouse floor plan, you must begin with a thorough consideration of how you plan to use your warehouse. You might be designing a warehouse layout suitable for a manufacturing or light product assembly operation. Perhaps you are planning a warehouse layout for a product storage and shipping facility, a common warehouse design for ecommerce businesses. Your business needs will dictate how you allocate your warehouse space and configure your warehouse layout. Plan Space for Warehouse Equipment & Surrounding Workspace In planning your warehouse layout, your first step is identifying your key units. These are the things that take up most of your space and/or are the center of your production zones. For example, if you are an ecommerce company that stocks and ships goods, your key units would be pallet rack and metal shelving. You can see what this layout looks like in the image below. A business’s key warehouse units will vary based on the primary goals of the warehouse. Your key units might be equipment or workstations. Whatever they are, you need to identify and place these elements on your plan first. If manufacturing is your business, then your primary concern is designing your space around equipment and adjacent production workspace. Storage spaces, while important, are secondary in your plan, and will be dependent on where you place your equipment. Most ecommerce companies’ warehouses focus on accepting, storing, picking, packing, and shipping items. In this instance, stock storage units are the primary equipment, as shown above. Storage units used are typically either shelves or bins. The variety in size, shape, and weight of these storage units vary greatly. 
For ecommerce companies, other activities that impact the overall warehouse floor plan include order packing and shipping as well as receiving stock. It’s important to provide ample space around your various warehouse work centers so that employees can perform their tasks effectively and so that any equipment used—from hand trucks to forklifts—can navigate the warehouse aisles easily. If you do light assembly paired with some shipping, assembly stations or light manufacturing equipment are likely to be a significant focus. After that, you’ll need to address storage space for parts and finished goods, plus adequate packaging, packing, and shipping areas. You must conduct a thorough review of your needs before embarking on any warehouse floor planning process. Failure to consider the full nature of your needs could result in ineffective warehouse design. Create Warehouse Production Zones & Workflow Areas After addressing primary units like equipment, stock shelving, and assembly stations, the next step is thinking about how workers, materials, and goods move in and around your key elements. You also need to consider the space needed for your production work to safely occur. Safety needs to be a prime consideration in all warehouses, although it may be more complex in manufacturing, where materials movement occurs around equipment. The Occupational Safety and Health Administration (OSHA) offers detailed publications that you should review in planning your warehouse safety initiatives. Safe workflows apply to all types of operations, so it’s important to include adequate production zones and workflow areas on any warehouse layout plan. In manufacturing, you need to allocate space for workbenches, bins, tools, and safety stations needed for production. Plus, you need to reserve adequate production zones around equipment for workers to move materials and safely produce goods. There are no one-size-fits-all rules on what’s considered adequate space that applies across all manufacturing equipment and production processes. Pay close attention to equipment manufacturing instructions, as each piece of equipment will come with complete directions for safe operation. For a stock and ship operation, one primary work area is the aisle space between shelving units, as shown below. This is where you or your employees need adequate space to stock received goods and pick items for orders. You’ll also need to allocate workspace for employees to move goods into, around, and out of the production zones, which are your packing, shipping, and receiving areas. Assembly operations often combine the space needs of manufacturing and stock and ship. Assembly stations and related equipment make up the heart of your production zone. These can include workbenches or specialized stations, plus any needed bins for parts and finished goods. Like manufacturing, you need to allocate ample production space around these areas. Then, like stock and ship, you need to reserve space to efficiently package, pack, and ship finished goods. Establish Warehouse Storage Areas Storage is another key factor to consider in your warehouse layout. In fact, for pack and ship and some assembly operations, efficient arrangement of storage areas is your prime concern. Storage is important for manufacturing too, but usually secondary to equipment needs. To determine the storage space you need, and the shelving or other storage units you’ll use, you first need to consider what you’re storing. 
Your warehouse storage needs may take many forms, including: - Small assembly items housed in bins on light-duty shelving - Pallets with machinery parts - Boxed goods for pick, pack, and ship - Overstock items - Large raw materials for manufacturing What you’re storing dictates the type of storage you need to plan for in your warehouse layout. It also dictates the space you need to allow in and around storage areas, like aisle widths between shelving and clearance areas for moving goods in and out of storage. How you move materials and/or goods around in your warehouse dictates aisle spacing. If you use a pallet jack or forklift to move pallets or equipment in your storage areas, you’ll need generous space between shelves or around other units. Pallet jacks need a minimum aisle width of 4 to 5 feet to navigate between shelving. Forklifts require much more open aisle space. If you plan on using a forklift in your warehouse, your required aisle width will need to be between 11 and 13 feet, depending on the type of forklift you plan to use. Before using forklifts in your warehouse operation, make sure you thoroughly review all manufacturer recommendations for forklifts you procure. Different machines have different use requirements. Also, before operating a forklift, familiarize yourself with OSHA’s rules regarding forklift use, and follow all mandated forklift training requirements. If your warehouse plans involve hand-stocking small boxes for assembly or pack-and-ship, handheld bins or rolling carts are all you need to stock and pull stored goods. In that case, your shelving aisles will need to range between 3.6 to 4 feet wide in most cases. In creating your warehouse floor plan, don’t forget your overhead spaces. Most small warehouses easily accommodate shelving that is 8 feet tall or higher. Larger warehouses can house shelving 12 feet tall and higher. If you need overstock areas for large stock purchases or materials storage, using high shelves is a great way to preserve your warehouse floor space for production activities. 3. Choose Your Warehouse Storage & Work Area Equipment Most small business warehouse operations, whether manufacturing, assembly, pick-pack-and-ship, or a combination of all three, need some form of storage and workspace equipment, such as assembly tables or packing stations. Here you have many options, and the storage you need greatly depends on what it is you do. When planning your warehouse layout, the size and type of storage, shelving, and workspace equipment all come into play. Pallet racks, heavy-duty and light-duty shelving, cantilever racks, and all types of bins are common warehouse solutions. While you can track down each of your warehouse equipment and supply needs from various sellers, you can get most of them at significantly lower prices on Alibaba. As one of the largest global marketplaces, Alibaba lets you buy warehouse necessities such as shelves, racks, bins, scales, stock carts, pallet jacks, conveyors, and work desks directly from hundreds of manufacturers. Popular Warehouse Storage & Shelving Options Before buying any storage or shelving units for your warehouse, it’s important to understand your options. It’s also helpful to know which solutions best meet your unique needs and will work well in your warehouse design plans. 
Here are the main options, what each is best for, and the sizes and space to allow in your warehouse layout:
- Pallet rack: best for midweight to heavyweight storage needs; allow 4' deep x 8' long per unit.
- Heavy-duty shelving: best for lightweight to midweight storage needs; allow 3' to 4' deep x 6' to 8' long per unit.
- Light-duty shelving: best for lightweight storage needs; allow 18" to 2' deep x 4' long per unit.
- Cantilever rack & specialty shelving: best for specific storage needs for oversized items; space varies by need.
- Bins, boxes, and hoppers: best for loose parts and materials storage; space varies, and a common allowance is pallet size (40" x 48").
- Small parts & assembly bins: best for storing small items in limited space; no floor space needed, as they are usually used on shelves, carts, and/or workstations.
When to Use Pallet Racks It's called a pallet rack because it's designed to store pallets of goods, but it's also used for stocking all sorts of products and materials, large and small. Pallet rack is best for midweight to heavyweight storage needs like boxed stock, work materials, and finished goods. Pallet rack is available in various sizes, most commonly in sections 4 feet deep by 8 feet long by 8 to 12 feet in height. Costs vary significantly depending on how much racking you intend to use. Expect to pay between $120 and $350 per set for new heavy-duty warehouse racking; you can save up to 50% by ordering these in bulk on Alibaba. You can also sometimes save by contacting used warehouse dealers to find deals on used pallet racks and other warehouse storage items. Pallet racking is assembled using end units called uprights, adjustable crossbars called rails, and heavy-duty particleboard or metal wire grid shelves called decks. You can have many shelves or just a few on each unit. Pallet rack can be freestanding, though it's designed to interconnect for long shelving runs. Used this way, it's the most cost-effective shelving solution for large warehouse storage areas. If you have storage space of 1,000 square feet―around 20 feet x 50 feet―or more, two long rows of pallet rack can provide ample storage at a reasonable cost. When to Use Heavy-duty Shelving Heavy-duty (HD) shelving is pallet rack's baby brother. The name is a bit deceiving as pallet rack generally holds more weight than HD shelving, but HD shelving is a cost-effective solution in many warehouse designs. Heavy-duty shelving is best for light to midweight storage in smaller warehouse spaces, storage units, and garages. These types of shelving units come in various sizes, usually 3 to 4 feet deep, 6 to 8 feet long, and 6 to 8 feet high. Pay attention to the weight ratings on the shelves you purchase. For safety reasons, it's important to adhere to weight stipulations assigned by the shelving unit's manufacturer. You can expect to pay $75 to $200 for an HD shelving unit, though you can save from 30% to 50% by ordering units in bulk through Alibaba. If you need a few units, you can also buy these shelves anywhere shelving is sold, including Home Depot, Amazon, and Lowe's. When to Use Light-duty Shelving Light-duty (LD) shelving is commonly used in garages, small retail storerooms, and residential storage areas like utility and craft rooms, but there are times when it's the right choice for your warehouse needs. Light-duty storage is an inexpensive choice for small warehouse spaces and storage units. Sizes vary, although 18 inches to 2 feet deep by 4 feet long is common. If you want to maximize the height of your warehouse for extra storage space, you won't be able to do that with LD shelving, as these units are usually only 6 to 7 feet high.
A big plus with LD shelves, though, is that most units come with five or six adjustable shelves, which gives you versatility if you're storing items with significantly different dimensions. Light-duty shelving also works well with stacked parts bins, discussed below, for stocking small items and assembly parts. You can buy light-duty shelving anywhere shelves are sold, including Home Depot, Lowe's, and your local hardware store. Expect to pay between $40 and $100 per shelving unit; you can save up to 50% if you buy 10 or more of the LD shelves shown above on Alibaba. When installing LD shelving units, be sure to secure the shelving appropriately as directed. In most cases, LD shelves are to be secured up against a wall or another unit for safety reasons. When to Use Cantilever Racks Cantilever rack, shown above, can handle your pipe, lumber, panels, and oversize material storage needs. Cantilever rack sizes and costs vary by need and type of material stored, so you'll need to contact a used warehouse dealer or online vendors, like Alibaba or Shelving.com, to get a quote if you feel cantilever rack storage is an appropriate option for your warehouse. If you have a storage need that regular shelves or cantilever racks can't handle or need a unique size or wall-mounted solution, contact a used warehouse shelving dealer. Most have an amazing array of unique storage solutions, and some can custom-cut shelving to fit specific needs. When to Use Warehouse-caliber Boxes, Hoppers, & Barrels Metal and heavyweight plastic storage boxes, hoppers, and barrels are common in manufacturing and assembly operations. Many businesses move these on pallets using pallet jacks, but some bins and hoppers are wheeled. You can purchase these in various sizes and in various materials that are capable of holding even heavy items. Expect to pay $100 to $200 for the caliber of the box illustrated above, but as usual with most warehouse materials, you can save up to 50% when you buy in bulk on Alibaba. When to Use Small Parts & Assembly Bins These handy stackable bins are ideal for storing small items for all sorts of needs, including materials for manufacturing, parts for assembly, and small goods for pack and ship. Plus, their easy-access design makes them an efficient alternative to stocking small goods in closed boxes. Expect to pay $1 to $10 each for small parts and assembly bins. Popular Workspace Equipment Options for Warehouses We've covered a full range of storage options that suit most businesses' warehouse storage needs. Now let's look at some work area equipment that you might need in your warehouse as well as the space you'll need to allocate for this equipment during the warehouse layout planning process. Popular Workspace Equipment Options Here are the common choices, what each is best for, and the sizes to allow in warehouse space planning:
- Multi-use tables & workbenches: best for manufacturing, assembly, picking & packing; sizes vary, commonly 3' deep x 5' to 8' long.
- Specialty manufacturing assembly stations: best for manufacturing & assembly needs; sizes vary, commonly 2' to 3' deep x 5' to 8' long.
- Dedicated packing stations: best for daily shipping needs; common size is 3' deep x 6' to 8' long.
- Pallet packing freight scale station: best for operations shipping truck freight regularly; allow 4' x 4', or 4' deep x 6' long.
- Dedicated shipping station table: best for operations shipping parcels regularly; sizes vary, commonly 3' deep x 5' to 8' long.
- Stock carts & pallet jacks: best for operations that move goods within the warehouse; allow around 3' wide x 5' long for storage.
- Rolling staircases: best for operations that store volume stock on shelves over 8' in height; allow approx. 4' wide x 8' long.
- Conveyors: best for operations performing light assembly; sizes vary, with widths of 18" to 30" and lengths of 2' to 24' being common.
You may not need all of the equipment listed above, but be sure to give careful consideration to the various workstations you need in your warehouse and what types of tables or equipment will be required for those stations to operate effectively. You also must think through how you'll move stock and materials around in your warehouse and secure the appropriate equipment necessary to transport the items you warehouse. 4. Use Efficient Warehouse Design Traffic Flow Strategies You now have a good idea of the types of equipment and storage solutions you will use for your warehouse space, which is essential for cost-effective warehouse floor plans. You also have a sense of where everything will fit in your warehouse layout. It's now time to drill into your warehouse schematic to arrange every element to create an efficient, productivity-boosting traffic flow. You need to think about your operation by exploring the following warehouse usage needs:
- Consider how much time you and your employees will spend in various locations in your warehouse.
- Determine around which elements—manufacturing equipment, storage areas, or work tables—most work will center.
- Explore different needs you and your employees will have regarding how to move within the warehouse, how items will be gathered from various warehouse locations, and what items need to be close by to complete daily tasks.
As you become more aware of what needs to be done, by whom, using what methods, you will more easily be able to lay out work areas and predict traffic patterns within your warehouse. Remember, every business need is different, so while you can learn from other warehouse layouts, you must keep your needs foremost in mind. Here is an example of an ecommerce warehouse floor plan with typical equipment, storage, and operational functions fully considered. Ecommerce Warehouse Floor Plan Example: Aisle Pattern In the ecommerce pick-pack-and-ship warehouse layout below, notice where the aisles (A) for product storage are placed. You can see how various elements were brought into the warehouse floor plan to facilitate efficiency in this warehouse model. The busiest production zone, the packing area, is centrally located between stock shelves, with two aisles that directly feed into the packing zone. This warehouse layout allows staff to quickly access or "pick" the product on either side of the packing tables. Plus, each employee is assigned a specific section to pick and maintain, which keeps them from bumping into each other. All of this culminates in effective and efficient traffic flow. Stock storage areas are maximized by using a 12-foot tall pallet rack that allows ample overstock space on upper shelves, out of the path of daily workflows. Hand-carried bins and small carts are used for restocking and order picking tasks among the shelves. Aisle widths of 4 feet suit this warehouse's box and cart-moving needs. Ample space is left for pallet movement along the central aisles since the warehouse receives and ships palletized freight too.
Shelving is not used against the end walls, and instead, this warehouse runs 2 feet deep shelving along the perimeter for smaller items. This enables pickers to move from aisle to aisle without backtracking and to pick small items along the way as needed. Ecommerce Warehouse Floor Plan Example: Packing & Shipping Workspace Packing and shipping is the primary goal of this ecommerce operation, so ample space is dedicated to these tasks. In the central packing area (B), the warehouse layout includes a mix of 8-foot and 6-foot utility tables that can be moved and rearranged as packing needs dictate. This lets warehouse employees handle daily parcel packing with room to spare, accommodates holiday volumes easily, and allows staff to pack pallets for large freight orders. As a pack-and-ship operation, this ecommerce warehouse stores shipping boxes and packing materials in easy reach of the packing tables. Once parcels are packed, they are moved quickly to the nearby shipping station table for weighing, sealing, and labeling. If you plan on shipping daily, allocating space for a dedicated shipping station is a real time-saver. Order fulfillment and shipping can be a bit tricky. If you’ve not done this before, make sure you visit other warehouses and take a look at how others perform fulfillment and shipping cost-effectively. Doing your homework will save both money and hours of frustration. Ecommerce Warehouse Floor Plan Example: Generous Receiving & Shipping Areas Continuing with the ecommerce warehouse design example, you can see ample room is available for shipping and receiving thanks to the large overhead doors (C). As a pack-and-ship ecommerce operation, this company receives numerous freight and parcel stock shipments daily. Allowing room to store received stock prior to unpacking is essential. Plus, it’s helpful to keep receivables separate from daily outbound parcels to prevent confusion and carrier pick-up mistakes. Ecommerce Warehouse Floor Plan Example: Warehouse Equipment Storage Layout This warehouse uses two rolling staircases to safely store and retrieve large numbers of lightweight overstock boxes from its 12-foot shelves. If you plan to use high shelves in your warehouse, be sure to develop a way to access items that are overhead securely. In this example, rolling staircases work just fine. In other warehouses, heavier equipment, such as forklifts, are used to both store and access items stored overhead. Since the rolling staircases take up warehouse floor space, their storage must be considered in the warehouse layout. The spaces marked (D) near the receiving and shipping areas are used to store the rolling staircases. Rolling ladders, moving conveyors, and pallet jacks are things to keep in mind when planning your warehouse layout. If you don’t have them now, but think you might down the line, allocate warehouse space for these items now. Once you get your heavy equipment situated or rows of shelving securely installed, you don’t want to move them to make space for pallet jacks and other large items you had not considered. 5. Test Your Warehouse Traffic Flow Plan The last step before you start installing equipment, shelves, and tables is to walk your finished plan. To do this, measure off the space and apply masking tape on the floor to mark the positioning of your main units, be they equipment, tables, or shelves. You don’t need to do this for every piece but do it in your key workflow and production zone areas. 
Then, walk the space as though you’re conducting key tasks that will be performed in the warehouse. Practice Performing Work Functions in the Planned Layout Carry boxes, tools, or materials while you test your warehouse design. Make sure you have plenty of clearance in all directions. Roll carts or pallet jacks through the warehouse layout to ensure items navigate easily along the planned paths—even when heavily loaded down. Get Employees to Test Your Warehouse Floor Plan If you have employees, get them involved in acting out work processes. If you don’t have employees yet, enlist some family or friends to help role-play key warehouse processes. Make sure your staff has ample room to conduct the tasks they will be required to perform. Check Hard-to-Change Layout Areas Multiple Times If you have large spaces within your warehouse layout that will house heavy equipment or large shelving units, test these areas multiple times. You do not want to move these heavy fixtures and equipment once they are installed. It’s far better to make traffic flow corrections at this stage while changes are easily made. Your business’ needs likely differ somewhat from the ecommerce warehouse examples shared here, but the principles of effective warehouse floor plan design remain the same. Make sure you put considerable thought into your planning and testing process, and you’ll be rewarded with a cost-effective, efficient, productive space, no matter your size or operation. Bottom Line Effective warehouse design starts with identifying your needs, including the tasks to be performed within your warehouse and the equipment and fixtures that will best support those tasks. When you take the time and effort to create an efficient warehouse layout, you pave the way for saving time, money, and hassles for years to come.
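One last, optional step: before you buy equipment, it can help to sanity-check that everything you plan to install actually fits once circulation space is added. The short Python sketch below is one way to rough out that arithmetic; the footprint figures echo the equipment chart earlier in this guide, while the circulation allowance, example quantities, and room size are illustrative assumptions only, not recommendations.

```python
# Rough floor-space check for a draft warehouse plan (illustrative only).
EQUIPMENT_FOOTPRINTS_SQFT = {
    "packing table (3' x 8')": 3 * 8,
    "stock cart / pallet jack parking (3' x 5')": 3 * 5,
    "rolling staircase (4' x 8')": 4 * 8,
    "conveyor section (2.5' x 24')": 2.5 * 24,
}

def required_floor_area(counts, circulation_factor=0.40):
    """Sum equipment footprints, then add an assumed allowance for aisles and walkways."""
    base = sum(EQUIPMENT_FOOTPRINTS_SQFT[name] * qty for name, qty in counts.items())
    return base * (1 + circulation_factor)

plan = {
    "packing table (3' x 8')": 4,
    "stock cart / pallet jack parking (3' x 5')": 2,
    "rolling staircase (4' x 8')": 2,
    "conveyor section (2.5' x 24')": 1,
}

needed = required_floor_area(plan)
available = 2400  # hypothetical 40' x 60' floor, in square feet
print(f"Equipment plus circulation: {needed:.0f} sq ft of {available} sq ft available")
```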
https://fitsmallbusiness.com/warehouse-layout/
Should You Drink Green Tea before Exercising The first time I got a deep impression on green tea was from my grandfather and his notebook about tea and life. And I really got hooked on his experience and... How Do You Know If You Have a Caffeine Addiction How do you know if you have caffeine addiction? Before reading, answer me these questions please: How frequent do you have caffeinated drinks? How much coffee do you consume... Why Does Tea Make Me Nauseous We get the habits that enjoy a sip of tea after meals, and then welcome a boost for digestion in our body. And we may have heard suggestions like: not... How Many Cups of Green Tea A Day to Lose Weight As one of the most-consumed drinks in the world, tea has many health benefits. Among the many types of tea, green tea is the most popular one in the... What’s the Right Age for Kids to Start Drinking Coffee Think about the first time you were served with a cup of coffee. 16 years old in the Starbucks coffee shop or only 8 at home consuming the cup... Why Should You Avoid Drinking Water Immediately After Meals We have been taught many times not to drink water while eating since we were little kids with limited cognition of healthy drinking habits. It is because that the... Should You Drink A Cup of Coffee Before Workout Coffee is known as refreshing our brain and providing us with enough energy. Perhaps some regular drinkers wondering is it good to drink coffee before workout? The answer is... Can You Drink Tea That Has Been Left Out Overnight Can you drink tea that has been left out overnight? Is it bad for you health? The overnight tea actually means the tea that has been relatively sit out long.... How to Sleep after Drinking Too Much Coffee Have you ever drunk a cup of coffee during night? I guess most of you have. Perhaps you have to finish one more presentation for tomorrow’s early morning meeting... Coffee 101: Does Decaf Coffee Wake You Up It is estimated that one thirds of Americans regard certain amounts of coffee intake daily during work as an accompanying essential. Therefore, staff with heavy tasks can continue on...
https://www.ecooe.com/ecooe-life/tag/health-tips-en/page/2/
# Alchornea ilicifolia Alchornea ilicifolia, commonly known as the native holly is a bush of eastern Australia. Growing in or on the edges of the drier rainforests, from Jamberoo, New South Wales to Atherton, Queensland. ## Taxonomy The botanist John Smith originally described this species as Caelebogyne ilicifolia in 1839, from three specimens collected by Allan Cunningham in 1829. The Swiss botanist Johann Müller gave it its current name in 1865. The generic name Alchornea honours the English botanist Stanesby Alchorne. Ilicifolia refers to the holly like leaves (Ilex).The leaves are food for the larvae of the Common Albatross butterfly. ## Description A shrub or rarely a small tree up to 6 meters tall and with a stem diameter of 10 cm. The trunk is usually crooked, with pale grey smooth bark, with some pustules and lenticels. Small branches greenish or fawn in color, with paler lenticels. Leaves holly-like in appearance, 2 to 8 cm long, 2 to 5 cm wide. Ovate or rhomboidal in shape with three or four teeth on each side of the leaf. Leaf tip and teeth sharp and pointed. Leaves stiff, hairless and pale on the underside. Leaf stalks around 3 mm long. Leaf venation evident on both leaf sides. Three or four lateral leaf veins nearly at right angles to the midrib, ending in a sharp point. ### Flowers and fruit Greenish flowers appear in November, on racemes. Male and female flowers on separate plants. The fruit is a dark brown capsule about 6 mm in diameter, usually with three lobes. With one seed in each cell. Fruit ripe from September to November, but may also occur at other times of the year. Difficult to regenerate from seed, though cuttings strike well. ## Uses An attractive garden shrub. Excellent slow growing hedge.
https://en.wikipedia.org/wiki/Alchornea_ilicifolia
I’ve heard from various students that they are missing the weekly class, and somehow they do not find the motivation to paint at home on their own. This seems to be a fairly universal problem for aspiring artists. Here are some thoughts from one of my favorite artists, Alain Picard. “I know what happens when artists like you start showing up every day to practice your creativity. Beauty flourishes. Confidence builds. Dreams come true.” “Creativity is not a skill or a result of tapping into the “muse.” It’s a habit you develop, a learned behavior that allows your ideas to become manifest. The muse is attracted to those who are busy practicing their creative work.” ~~Alain Picard Blue Blossom Symphony: Acrylic skin collage, 16″x20″ In my personal experience, creativity is like a muscle. The more you exercise it, the stronger it gets and the more it expands. Every painting and every experiment, whether successful or not, leads to a slew of new ideas. That level of ongoing creativity is a very exciting and fulfilling state to live in. But like any other muscle, when it comes to creativity you need to use it or lose it. Seriously. Have you let your creative muscles atrophy? Do you find you simply cannot make yourself get into your studio or you just have no ideas or inspiration? Taking the time to immerse yourself in art on a regular basis gets that inspiration and creativity flowing again… it doesn’t even have to be your art! Here are some ideas. - Look at other artists’ work online. If you like any Old Master’s, Google their art and keep a file on your computer of your favorite works. If you like contemporary landscapes, Google that. If you like abstracts, find some that make your heart beat a bit faster and make a folder for those. I have many folders on my laptop with paintings that inspire me; I look through them and add to them on a regular basis. I find it exciting to come across wonderful new (to me) artists I never knew existed, and there are hundreds if not thousands of them. - If you’re looking for ideas of things to paint, Google photographs, choose the ones that excite you, and keep them in a separate. I have a folder for clouds, ocean scenes, trees, etc! - Follow other artists you like on Instagram or Pinterest. When you look at your feed, you’ll see a constant flow of inspiration. Both of those platforms will suggest other artists for you to check out as well. I find a lot of cool workshop opportunities, freebies, and amazing different styles of artwork in this way. - Share your work on your social media. The positive and supportive feedback will really get you going! - Watch artist tutorials on YouTube. This is one of my favorite, completely free ways to learn new things and get fresh inspiration. I’d rather watch an artist at the top of their game teaching me a new technique than the evening news any day of the week! There are many high caliber artists teaching on YouTube these days, and many host live streams. You can paint along, just watch, or even ask them questions in the live chat. - Paint something small, say 6”x 8” or 5”x 7” on a daily basis. Have your surface prepared and ready to go each day so you can just step up to the easel and get going. Give yourself anywhere from 20-40 minutes (or even less if you’re very busy), and when the time is up, you’re done. 
This is a very different practice than working on a painting for hours and hours, struggling to get it “right.” You’ll be amazed at how your skills, your artist’s eye, and your color sense will progress if you do something like this.(Daily painting is an actual movement in the artist community. Check out the work of Lisa Daria Kennedy, who has been doing a painting a day literally for years! She’s up to painting #4,629 as of the time of this writing!!!!!) - Keep your favorite work on display, so you can enjoy the fruits of your creativity and remember how much fun and how rewarding it is! Keep your least favorite work on display, and notice how far you’ve come or if you suddenly know how to make that piece so much better. Make your home into a gallery! - Go to local art shows. Here are two that are currently on at the time of this writing: - Littleton Fine Arts Guild presents the Contrast Show, 1/25 – 3/26 at the Town Hall Arts Center - Foothills Art Center’s annual Members Show, 1/29 – 4/24 How do you immerse in art and get those all-important creative juices flowing? I’d love to hear any other ideas that you have!
https://www.lesliemillerartworks.com/2022/01/26/has-your-creative-muscle-atrophied-some-solutions/
St. James Hospital is a small facility that can accommodate 70-80 patients. Medical staff provide patient treatment, surgery, diagnosis, and first aid for acute conditions, while nurses ensure care and routine procedures such as testing, giving medication, and assisting with hygiene procedures. The hospital administration is responsible for resource allocation, communication with suppliers, and the organizational and technical aspects of hospital operations. The hospital receives part of its funds from patients or their insurance companies and part from government funding and sponsors. The hospital has two current, interrelated problems: a shortage of nursing staff and frequent patient re-admission. Older patients in particular require more attentive and constant care, as, due to the symptoms of aging and their diseases, they are often unable to take care of themselves. For this reason, nurses are forced to pay attention to only the most essential aspects of each patient’s health condition in order to have time to provide treatment for all of them. Consequently, some subsequent treatment details, such as medication or diet, may not be sufficiently discussed, understood, and remembered by patients. In addition, the lack of nurses can also force staff to discharge patients earlier to reduce the workload or provide a bed for another patient. Thus, patients are re-admitted to the hospital in less than 30 days due to incomplete treatment in the hospital or self-care mistakes at home caused by a lack of education. Re-admission of patients also leads to a reduction in resources and a lack of staff in the hospital. According to the Hospital Readmission Reduction Program, funding is received by the best-performing hospitals, while high re-admission rates reduce the institution’s financial assistance (“Hospital Readmissions,” 2020). St. James Hospital is a small institution, so funding cuts are dramatically affecting its resources. Thus, the administration cannot afford to hire more nurses, since the shortage in the labor market forces hospitals to offer better terms and salaries to attract professionals, and St. James Hospital cannot compete with them. Consequently, the problem of re-admission leads to an increase in the problem of staff shortages. However, introducing changes to the hospital’s work system can help to eliminate these problems. The main actions to improve hospital operations are to devote more attention and time to patient education and to create a system of follow-up calls to discharged patients by nurses. The introduction of these approaches is likely to affect the operation of the hospital in two stages, because the staff will need to adapt to the new conditions. In the first phase, the workload of nurses and managers will increase, as their responsibilities will also include follow-up calls and the provision of patient education. If the administration hires or appoints one or two staff members to provide education for patients, it will reduce the burden on the nurses slightly. Once the goal of reducing readmissions has been achieved, the organization of work in the hospital will change for the better. First, the administration will receive funds to hire more nurses, which will reduce the shortage. 
Second, the number of patients with worsening conditions due to inappropriate treatment will decrease, which will reduce the workload of staff and allow them to improve the quality of care (Vernon et al., 2019). Better quality of services will also affect the hospital’s profits through government aid, higher hospital ratings, and sponsors. Consequently, while expanding the system with more nursing staff would have been a quicker solution, it is not possible due to limited resources, and it would change the hospital’s situation only marginally. At the same time, completely replacing the system is counterintuitive, since although it has disadvantages, it works effectively in general. For this reason, the addition of elements such as a follow-up call system and an increased emphasis on education complements and expands the system and helps to improve the hospital’s quality of care. Implementing new changes requires several phases and stages, such as analysis, preparation, implementation, and evaluation. These stages correspond to the system of setting SMART goals and the system of step-by-step implementation of processes in the organization (“Concepts,” n.d.). In the first phase, the administration needs to analyze the level of knowledge of its patients, the rates of re-admission, and the resources available for use. In the second phase, managers should hire someone to be responsible for patient education, possibly part-time, schedule follow-up calls, and prepare work phones that nurses can use to communicate with patients. The third phase involves the implementation of the plan, that is, educational conversations with patients and follow-up calls. The fourth phase includes collecting information about the knowledge of patients, their satisfaction with the service, re-admission, and feedback from the staff. The fourth phase should be carried out at the final deadline for achieving the set goals, and every 3 months to track progress and make changes if necessary. Thus, the organization of work focused exclusively on the necessary physical and psychological needs of patients during their stay in the hospital will be replaced by concern for their health in general.
References
Concepts: Implementing a process in an organization. (n.d.). Web.
Hospital Readmissions Reduction Program (HRRP). (2020). Web.
Vernon, D., Brown, J. E., Griffiths, E., Nevill, A. M., & Pinkney, M. (2019). Reducing readmission rates through a discharge follow-up service. Future Healthcare Journal, 6(2), 114–117. Web.
https://nerdyroo.com/system-analysis-of-the-st-james-hospital-activity/
--- abstract: 'We have studied photon motion around axially symmetric rotating Kerr black hole in the presence of plasma with radial power-law density. It is shown that in the presence of plasma the observed shape and size of shadow changes depending on i) plasma parameters, ii) black hole spin and iii) inclination angle between observer plane and axis of rotation of black hole. In order to extract pure effect of plasma influence on black hole image the particular case of the Schwarzschild black hole has also been investigated and it has been shown that i) the photon sphere around the spherical symmetric black hole is left unchanged under the plasma influence, ii) however the Schwarzschild black hole shadow size in plasma is reduced due to the refraction of the electromagnetic radiation in plasma environment of black hole. The study of the energy emission from the black hole in plasma shows that in the presence of plasma the maximal energy emission rate from the black hole decreases.' author: - 'Farruh Atamurotov$^{1,2}$' - 'Bobomurat Ahmedov$^{1,3}$' - 'Ahmadjon Abdujabbarov$^{1,3}$' bibliography: - '/hp/ahmadjon\_hp/Nauka/gravreferences/gravreferences.bib' title: 'Optical properties of black hole in the presence of plasma: shadow' --- Introduction ============ The study of astrophysical processes in plasma medium surrounding black hole becomes very interesting and important due to the evidence for the presence of black holes at the centres of the galaxies  [@Eatough13; @Falcke00; @Falcke01]. For example, the gravitational lensing in inhomogeneous and homogeneous plasma around black holes has been recently studied in  [@Bisnovatyi2010; @Tsupko12; @Morozova13; @Perlick15; @Er14] as extension of vacuum studies (see, e.g.  [@Schee09; @Schee15]). From the literature it is known that the black hole shadow is appeared by the gravitational lensing effect, see, e.g.  [@Amarilla12; @Tsukamoto14; @Atamurotov13b; @Falcke00]. If black hole is placed between a bright source and far observer, dark zone is created in the source image by photon fall inside black hole which is commonly called shadow of black hole. Recently, this effect is investigated by many authors for the different black holes (see, e.g.  [@Hioki09; @Grenzebach2014; @Amarilla10; @Atamurotov13]). The silhouette shape of an extremely rotating black hole has been investigated by Bardeen  [@Bardeen73]. Our previous studies on the shadow of black hole are related to the non-Kerr [@Atamurotov13b], Hořava-Lifshitz [@Atamurotov13], Kerr-Taub-NUT [@Abdujabbarov13c] and Myers-Perry [@Papnoi14] black holes. A new coordinate-independent formalism for characterization of a black-hole shadow has been recently developed in  [@Abdujabbarov15]. Shape of black hole is determined through boundary of the shadow which can be studied by application of the null geodesic equations. The presence of plasma in the vicinity of black holes changes the equations of motion of photons which may lead to the modification of black hole shadow by the influence of plasma. In this paper our main goal is to consider silhouette of shadow of axially symmetric black hole using the equations of motion for photons in plasma with radial power-law density. We would like to underline that very recently, influence of a non-magnetized cold plasma with the radially dependent density to black hole shadow has been studied in [@Perlick15] using the different alternate approach. In addition @Rogers15 has studied the photon motion around black hole surrounded by plasma. 
The paper is arranged as following. In Sect. \[geodesics\], we consider the equations of motion of photons around axially symmetric black hole in the presence of plasma. In Sect. \[bh-shadow\] we study the shadow of the axial-symmetric black hole in the presence of plasma. As particular case in subsections \[nonrot\] and \[emission\] we study the shadow and the energy emission from the sphericaly symmetric black hole. Finally, in Sect. \[conclusion\] we briefly summarize the results found. Throughout the paper, we use a system of geometric units in which $G = 1 = c$. Greek indices run from $0$ to $3$. Photon motion around the black hole in the presence of plasma {#geodesics} ============================================================= The rotating black hole is described by the space-time metric, which in the standard Boyer-Lindquist coordinates, can be written in the form $$ds^2 = g_{\alpha \beta}dx^\alpha dx^\beta\ , \label{metric}$$ with  [@chandra98] $$\begin{aligned} g_{00}&=&-\left(1-\frac{2 M r}{\Sigma}\right)\ , \nonumber \\ g_{11}&=&\frac{\Sigma}{\Delta}\ , \nonumber \\ g_{22}&=&\Sigma\ , \nonumber \\ g_{33}&=&\left[(r^2+a^2)+\frac{2 a^2 M r \sin^2 \theta}{\Sigma}\right]\sin^2 \theta\ , \nonumber \\ g_{03}&=&-\frac{2 M a r \sin^2 \theta}{\Sigma}\ ,\label{1}\end{aligned}$$ $$\Delta = r^2 + a^2 -2 M r , \;\;\;\;\; \Sigma=r^2+a^2 \cos^2\theta \ , \nonumber \label{p2}$$ where as usual $M$ and $a$ are the total mass and the spin parameter of the black hole. In this paper we will consider the plasma surrounding the central axially symmetric black hole. The refraction index of the plasma will be $n=n(x^{i}, \omega)$ where the photon frequency measured by observer with velocity $u^\alpha $ is $\omega$. In this case the effective energy of photon has the form $\hbar \omega= - p_\alpha u^\alpha$. The refraction index of the plasma as a function of the photon four-momentum has been obtained in [@Synge60] and has the following form: $$n^2=1+\frac{p_\alpha p^\alpha}{\left( p_\beta u^\beta \right)^2} ,$$ and for the vacuum case one has the relation $n=1$. The Hamiltonian for the photon around an arbitrary black hole surrounded by plasma has the following form $$H(x^\alpha, p_\alpha)=\frac{1}{2}\left[ g^{\alpha \beta} p_\alpha p_\beta + (n^2-1)\left( p_\beta u^\beta \right)^2 \right]=0. \label{generalHamiltonian}$$ Following to the derivation of a gravitational redshift discussed in [@Rezzolla04] we will assume that the spacetime stationarity allows existence of a timelike Killing vector $\xi^\alpha$ obeying to the Killing equations $$\label{killing-eq} \xi_{\alpha ;\beta}+\xi_{\beta;\alpha}=0\ .$$ Then one can introduce two frequencies of electromagnetic waves using null wave-vector $k^\alpha$ the first one is the frequency measured by an observer with four-velocity $u^{\alpha}$ and defined as $$\label{freq_ob} \omega \equiv -k^\alpha u_\alpha \ ,$$ while the second one is the frequency associated with the timelike Killing vector $\xi^{\alpha}$ and defined as $$\label{freq_kil} \omega_\xi \equiv -k^\alpha\xi_{\alpha} \ .$$ The frequency (\[freq\_ob\]) depends on the observer chosen and is therefore a function of position, while the frequency (\[freq\_kil\]) is a conserved quantity that remains unchanged along the trajectory followed by the electromagnetic wave. One can apply this property to measure how the frequency changes with the radial position and is redshifted in the spacetime. 
Assume the Killing vector to have components $$\label{killing} \xi^{\alpha}\equiv \bigg(1,0,0,0\bigg) \ ; \qquad \xi_{\alpha}\equiv g_{00} \bigg(- 1,0,0,0 \bigg) \ ,$$ so that $\omega_{\xi}=k_0=$const. The frequency of an electromagnetic wave emitted at radial position $r$ and measured by an observer with four-velocity $u^{\alpha}\{1/\sqrt{-g_{00}},0,0,0\}$ parallel to $\xi^{\alpha}$ (i.e. a static observer) will be governed by the following equation $$\label{rs} \sqrt{-g_{00}}\omega(r)= \omega_\xi={\rm const} \ .$$ One may introduce a specific form for the plasma frequency for analytic processing, assuming that the refractive index has the general form $$n^2=1- \frac{\omega_e^2}{\omega^2}, \label{nFreq}$$ where $\omega_e$ is usually called the plasma frequency. Now we use the Hamilton-Jacobi equation, which defines the equations of motion of the photons for a given space-time geometry  [@Synge60; @Rogers15; @Bisnovatyi2010]: $$\frac{\partial S}{\partial \sigma}=-\frac{1}{2}\Big[g^{\alpha\beta}p_{\alpha}p_{\beta}-(n^2-1)(p_{0} \sqrt{-g^{00}})^{2}\Big]\ , \label{p3}$$ where $p_{\alpha}=\partial S/\partial x^\alpha$. Using the method of separation of variables, the Jacobi action S can be written as [@chandra98; @Atamurotov13b]: $$S=\frac{1}{2}m^2 \sigma - {\cal E} t + {\cal L} \phi + S_{r}(r)+S_{\theta}(\theta)\ , \label{p4}$$ where ${\cal L}$ and ${\cal E}$ are the conserved angular momentum and energy of the test particle. For the trajectories of the photons we have the following set of equations: $$\begin{aligned} \Sigma\frac{dt}{d\sigma}&=&a ({\cal L} - n^2 {\cal E} a \sin^2\theta)\nonumber\\&&+ \frac{r^2+a^2}{\Delta}\left[(r^2+a^2)n^2 {\cal E} -a {\cal L} \right], \label{teqn} \\ \Sigma\frac{d\phi}{d\sigma}&=&\left(\frac{{\cal L}}{\sin^2\theta} -a {\cal E}\right)+\frac{a}{\Delta}\left[(r^2+a^2) {\cal E} -a {\cal L} \right], \label{pheqn} \\ \Sigma\frac{dr}{d\sigma}&=&\sqrt{\mathcal{R}}, \label{reqn} \\ \Sigma\frac{d\theta}{d\sigma}&=&\sqrt{\Theta}, \label{theteqn}\end{aligned}$$ which can be derived from the Hamilton-Jacobi equation, where the functions $\mathcal{R}(r)$ and $\Theta(\theta)$ are introduced as $$\begin{aligned} \mathcal{R}&=&\left[(r^2+a^2) {\cal E} -a {\cal L} \right]^2+(r^2+a^2)^2(n^2-1){\cal E}^2 \nonumber \\ && -\Delta\left[\mathcal{K}+({\cal L} -a {\cal E})^2\right]\ , \label{9} \\ \Theta&=&\mathcal{K}+\cos^2\theta\left(a^2 {{\cal E}^2}-\frac{{\cal L}^2}{\sin^2\theta}\right) \nonumber\\&& -(n^2-1) a^2 {\cal E}^2 \sin^2\theta\ , \label{10}\end{aligned}$$ and the Carter constant is denoted as $ \mathcal{K} $. For calculation examples one needs the analytical expression of the plasma frequency $\omega_e$, which for the electron plasma has the following form $$\omega_e^2=\frac{4 \pi e^2 N(r)}{m_e} \label{plasmaFreqDef}$$ where $e$ and $m_e$ are the electron charge and mass respectively, and $N(r)$ is the plasma number density. Following the work by @Rogers15 here we consider a radial power-law density $$N(r)=\frac{N_0}{r^h}, \label{powerLawDensity}$$ where $h \geq 0$, such that $$\omega_e^2=\frac{k}{r^{h}}. \label{omegaN}$$ As an example, here we take the value of the power $h$ to be 1 [@Rogers15]. 
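For readers who want to see how the plasma term scales, here is a minimal numerical sketch of the refractive index implied by Eqs. (\[nFreq\]) and (\[omegaN\]) for the choice $h=1$. It works in geometrized units with $M=1$, treats the photon frequency as a free input, and neglects the gravitational redshift of the frequency, so it illustrates the radial profile only and is not a reproduction of the figures below; the value of $k$ is an arbitrary assumption.

```python
import numpy as np

def plasma_frequency_sq(r, k=0.5, h=1):
    """omega_e^2 = k / r^h for a radial power-law electron density (Eq. omegaN)."""
    return k / r**h

def refractive_index(r, omega, k=0.5, h=1):
    """n^2 = 1 - omega_e^2 / omega^2 (Eq. nFreq); redshift of omega is neglected here."""
    n_sq = 1.0 - plasma_frequency_sq(r, k, h) / omega**2
    return np.sqrt(np.clip(n_sq, 0.0, None))  # photons do not propagate where n^2 < 0

r = np.linspace(1.5, 20.0, 5)           # radii in units of M
print(refractive_index(r, omega=1.0))   # n -> 1 far from the black hole, n < 1 close in
```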
For this value we plot the radial dependence of the effective potential $V_{\rm eff}$ of the radial motion of the photons defined as $$\begin{aligned} \left(\frac{dr}{d\sigma}\right)^2+V_{\rm eff}=1\ .\end{aligned}$$ ![image](effpot01.pdf){width="0.31\linewidth"} ![image](effpot05.pdf){width="0.31\linewidth"} ![image](effpot09.pdf){width="0.31\linewidth"} The radial dependence of the effective potential for different values of plasma refraction $n$ and black hole spin $a$ has been presented in Fig. \[effpot\]. In the Fig.\[effpot\] the left plot corresponds to the case when refraction parameter of the plasma is $n^2= 0.2\ ,\ 0.44\ ,\ 0.89$ (dotted, dashed and solid lines, respectively) at the position $r=3M$; middle plot corresponds to the case when the refraction parameter is $n^2= 0.19\ ,\ 0.42\ ,\ 0.88$ corresponding to dotted, dashed and solid lines, respectively, at the position $r=3M$; right plot represents the radial dependence of the effective potential when refraction parameter is $n^2= 0.14\ ,\ 0.39\ ,\ 0.88$ corresponding to dotted, dashed and solid lines, respectively, at the position $r=3M$. Shadow of black hole in the presence of the plasma {#bh-shadow} =================================================== ![image](fig2a.pdf){width="0.222\linewidth"} ![image](fig2b.pdf){width="0.24\linewidth"} ![image](fig2c.pdf){width="0.24\linewidth"} ![image](fig2d.pdf){width="0.24\linewidth"} ![image](fig2e.pdf){width="0.222\linewidth"} ![image](fig2f.pdf){width="0.24\linewidth"} ![image](fig2g.pdf){width="0.24\linewidth"} ![image](fig2h.pdf){width="0.24\linewidth"} ![image](fig2i.pdf){width="0.222\linewidth"} ![image](fig2j.pdf){width="0.24\linewidth"} ![image](fig2k.pdf){width="0.24\linewidth"} ![image](fig2l.pdf){width="0.24\linewidth"} ![image](fig2m.pdf){width="0.24\linewidth"} ![image](fig2n.pdf){width="0.24\linewidth"} ![image](fig2o.pdf){width="0.258\linewidth"} ![image](fig2p.pdf){width="0.24\linewidth"} In this section we consider the shadow cast by black hole surrounded by plasma. If black hole surrounded by plasma originated between the light source and the observer, then the latter can observe the black spot on the bright background. The observer at the infinity can only observe the light beam scattered away and due to capturing of the photons by the black hole the shaded area on the sky would be appeared. This spot corresponds to the shadow of the black hole and its boundary can be defined using the equation of motion of photons given by expressions (\[teqn\])–(\[theteqn\]) around black hole surrounded by plasma. In order to describe the apparent shape of the the black hole surrounded by plasma we need to consider the closed orbits around it. Since the equations of motion depend on conserved quantities ${\cal E}$, ${\cal L}$ and the Carter constant ${\cal K}$, it is convenient to parametrize them using the normalised parameters $ \xi={\cal L/E} $ and $\eta= {\cal K/E}^2$. 
The silhouette of the black hole shadow in the presence of the plasma can be found using the conditions $${\cal R}(r)=0=\partial {\cal R}(r)/\partial r .$$ Using these equations one can easily find the expressions for the parameters $\xi$ and $\eta$ in the form $$\begin{aligned} \xi &=& \frac{\cal {B}}{{\cal A}} +\sqrt{\frac{{\cal B}^2}{{\cal A}^2} -\frac{{\cal C}}{{\cal A}}}\ , \label{xiexp}\\ \eta&=& \frac{(r^2+a^2-a\xi)^2 +(r^2+a^2)^2 (n^2-1)}{\Delta} \nonumber\\&& -(\xi-a)^2 \label{etaexp}\end{aligned}$$ where we have used the following notations $$\begin{aligned} {\cal A}&=& \frac{a^2}{\Delta} \ ,\\ {\cal B}&=& \frac{a^2-r^2}{M-r}\frac{Ma }{\Delta} \ ,\\ {\cal C}&=& n^2 \frac{(r^2+a^2)^2}{\Delta}\nonumber \\&&+\frac{2r (r^2+a^2)n^2 +(r^2+a^2)^2 n n'}{M-r} \ ,\end{aligned}$$ and prime denotes the differentiation with respect to radial coordinate $r$. The boundary of the black hole’s shadow can be fully determined through the expessions (\[xiexp\])-(\[etaexp\]). However, the shadow will be observed at ’observer’s sky’, which can be referenced by the celestial coordinates related to the real astronomical measurements. The celestial coordinates are defined as $$\begin{aligned} \label{alpha1} \alpha&=&\lim_{r_{0}\rightarrow \infty}\left( -r_{0}^{2}\sin\theta_{0}\frac{d\phi}{dr}\right)\ , \\ \label{beta1} \beta&=&\lim_{r_{0}\rightarrow \infty}r_{0}^{2}\frac{d\theta}{dr}\ .\end{aligned}$$ Using the equations of motion (\[teqn\])-(\[theteqn\]) one can easily find the relations for the celestial coordinates in the form $$\begin{aligned} \alpha&=& -\frac{\xi}{n\sin\theta}\, \label{alpha}\ ,\\ \beta&=&\frac{\sqrt{\eta+a^2-n^2a^2\sin^2\theta-\xi^2\cot^2\theta }}{n} \label{beta}\ ,\end{aligned}$$ for the case when black hole is surrounded by plasma. In Fig \[shadow1\] the shadow of the rotating black hole for the different values of black hole rotation parameter $a$, inclination angle $\theta_0$ between the observer and the axis of the rotation is represented. In this figures we choose the plasma frequency in the form $\omega_e/\omega_\xi=k/r$. From the Fig. \[shadow1\] one can observe the change of the size and shape of the rotating black hole surrounded by plasma. Physical reason for this is due to gravitational redshift of photons in the gravitational field of the black hole. The frequency change due to gravitational redshift affects on the plasma refraction index. \[nonrot\]Shadow of non-rotating black hole ------------------------------------------- Now in order to extract pure plasma effects we will concentrate at the special case when the black hole is non-rotating and the size of the black hole shadow can be observed (see, e.g. [@Perlick15]). In the case of the static black hole the shape of the black hole is circle and the radius of the shadow will be changed by plasma effects. Using the expressions (\[alpha\]) and (\[beta\]) one can easily find the radius of the shadow of static black hole surrounded by plasma in the form: $$\begin{aligned} R_{sh}&=& \frac{1}{n(r-M)}\Bigg[2 r^3 (r-M) n^2 +r^4 nn'(r-M)\nonumber\\&&-2 r^2 M^2 + 2M r^2 \bigg\{n r^2 (n+r n') - (4 n +3r n') \nonumber\\&&\times n Mr+M^2 (1+3 n^2+2r nn')\bigg\}^{1/2}\Bigg]^{1/2}\ ,\label{shadrad}\end{aligned}$$ here $r$ is the unstable circular orbits of photons defined by $dr/d\sigma = 0$ and $\partial V_{\rm eff}/\partial r =0$. In the absence of the plasma one has the standard value of the photon sphere radius as $r=3M$ and for shadow radius $R_{sh}=3\sqrt{3}$ [@Virbhadra00; @Claudel01]. 
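As a quick consistency check of Eq. (\[shadrad\]), the short Python sketch below evaluates the shadow radius for a given photon-orbit radius $r$ and refractive index $n(r)$, $n'(r)$, and reproduces the vacuum limit $R_{sh}=3\sqrt{3}M$ at $r=3M$. It is only an illustration: for a real plasma profile one would first have to solve for the shifted photon orbit from $dr/d\sigma = 0$ and $\partial V_{\rm eff}/\partial r =0$ before inserting $r$ here.

```python
import math

def shadow_radius(r, n, n_prime, M=1.0):
    """Schwarzschild shadow radius from Eq. (shadrad); r is the photon-orbit radius,
    n and n_prime are the plasma refractive index and its radial derivative at r."""
    inner = (n * r**2 * (n + r * n_prime)
             - (4 * n + 3 * r * n_prime) * n * M * r
             + M**2 * (1 + 3 * n**2 + 2 * r * n * n_prime))
    bracket = (2 * r**3 * (r - M) * n**2
               + r**4 * n * n_prime * (r - M)
               - 2 * r**2 * M**2
               + 2 * M * r**2 * math.sqrt(inner))
    return math.sqrt(bracket) / (n * (r - M))

# Vacuum check: n = 1, n' = 0 at the photon sphere r = 3M.
print(shadow_radius(3.0, 1.0, 0.0))   # 5.196... = 3*sqrt(3)
print(3 * math.sqrt(3))
```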
In the presence of the plasma we will have a different value for the photon sphere radius and, consequently, a different radius for the boundary of the black hole shadow. In Fig. \[fig3\] the dependence of the shadow radius of the static black hole on the plasma parameters is presented, which shows that the radius of the shadow of a black hole surrounded by inhomogeneous plasma decreases. This is similar to the results of the paper [@Perlick15]. ![ The dependence of the Schwarzschild black hole shadow on the plasma frequency parameter. Here we take the plasma frequency as $\omega_e/\omega_\xi=k/r$. \[fig3\]](fig3.pdf){width="0.9\linewidth"} Emission energy of black holes in plasma \[emission\] ----------------------------------------------------- ![ Energy emission from the black hole for different values of $k/M$: the solid line corresponds to the vacuum case ($k/M=0$), the dashed line to the case when $k/M=0.4$, the dot-dashed line to the case when $k/M=0.6$, and the dotted line to the case when $k/M=0.8$. Here we take the plasma frequency as $\omega_e/\omega_\xi=k/r$ and $\tilde{\Omega}$ is normalised to $T$. \[energy\]](fig4.pdf){width="0.9\linewidth"} For the completeness of our study, here we evaluate the rate of the energy emission from the black hole in plasma using the expression for the Hawking radiation at the frequency $\Omega$ as [@Wei; @Atamurotov15] $$\frac{d^2E(\Omega)}{d\Omega dt}= \frac{2 \pi^2 \sigma_{lim}}{\exp{(\Omega/T)}-1}\Omega^3\ ,$$ where $T=\kappa/2\pi$ is the Hawking temperature, and $\kappa$ is the surface gravity. Here, for simplicity, we consider the special case when the black hole is non-rotating, and the background spacetime is spherically symmetric. At the horizon the temperature $T$ of the black hole is $$\begin{aligned} T&=& \frac{1}{4\pi r_{+}}\ . \label{temp}\end{aligned}$$ The limiting constant $\sigma_{lim}$ $$\sigma_{lim} \approx \pi R_{sh}^2\ \nonumber$$ defines the value of the absorption cross section vibration for a spherically symmetric black hole, and $R_{sh}$ is given by expression (\[shadrad\]). Consequently, one can get $$\begin{aligned} \frac{d^2E(\Omega)}{d\Omega dt}=\frac{2\pi^3 R_{sh}^2}{e^{\Omega/T}-1}\Omega^3\ \nonumber\end{aligned}$$ so that the energy of radiation of the black hole in plasma depends on the size of its shadow. The dependence of the energy emission rate on the frequency for different values of the plasma parameter $\omega_{e}$ is shown in Fig. \[energy\]. One can see that with increasing plasma parameter $\omega_{e}$ the maximum value of the energy emission rate decreases, caused by the decrease of the shadow radius. Conclusions {#conclusion} =========== In this paper we have studied the shadow and emission rate of an axially symmetric black hole in the presence of plasma with a radial power-law density. The obtained results can be summarized as follows. - In the presence of plasma the observed shape and size of the shadow change depending on i) the plasma parameters, ii) the black hole spin and iii) the inclination angle between the observer plane and the axis of rotation of the black hole. - In order to extract the pure effect of the plasma influence on the black hole image, the particular case of the Schwarzschild black hole has also been investigated. It is shown that under the influence of plasma the observed size of the shadow of the spherically symmetric black hole becomes smaller than that in the vacuum case. 
So it has been shown that i) the photon sphere around the spherical symmetric black hole is left unchanged under the plasma influence, ii) however the Schwarzschild black hole shadow size in plasma is reduced due to the refraction of the electromagnetic radiation in plasma environment of black hole. - The study of the energy emission from the black hole in plasma has shown that with the increase of the dimensionless plasma parameter the maximum value of energy emission rate from the black hole decreases due to the decrease of the size of black hole shadow. In the future work we plan to study shadow and related optical properties of different types of gravitational compact objects in the presence of plasma in more detail and in more astrophysically relevant cases. Acknowledgments {#acknowledgments .unnumbered} =============== This research was partially supported by the Volkswagen Stiftung (Grant 86 866), by the project F2-FA-F113 of the UzAS and by the ICTP through the projects OEA-NET-76, OEA-PRJ-29. Warm hospitality that has facilitated this work to A.A. and B.A. by the Goethe University, Frankfurt am Main, Germany and the IUCAA, Pune is thankfully acknowledged. A.A. and B.A. acknowledge the TWAS associateship grant. We would like to thank Volker Perlick and anonymous referee for the careful reading and important comments which essentially improved the paper.
The announcement of EODY for the new cases of coronavirus Today we announce 358 new cases of the new virus in the country, of which 56 are associated with known outbreaks and 43 were detected following checks at the country’s gateways. The total number of cases is 16,286 , of which 55.8% are men. 2,696 (16.6%) are considered to be related to travel from abroad and 6,808 (41.8%) are related to an already known case. 73 of our fellow citizens are being treated by intubation. Their median age is 68 years. 20 (27.4%) are women and the rest are men. 89.0% of intubated patients have an underlying disease or are 70 years of age or older. 191 patients have been discharged from the ICU. Finally, we have 5 more recorded deaths and 357 deaths in total in the country. 132 (37.0%) women and the rest men. The median age of our dying fellow citizens was 78 years and 96.9% had some underlying disease and / or age 70 years and over. It is noted that a meeting was held on Wednesday morning at the Ministry of Health for the ICUs in Attica . At noon of the same day, Professor Anastasia Kotanidou announced on behalf of the Hellenic Society of Intensive Care the number of available ICU beds in Attica. According to her statement, 337 ICU beds are available in Attica in state and military hospitals. Of these, 107 have been allocated for the needs of patients with COVID-19, while in the coming days another 42 will be added to NSS hospitals. Today, 23/09/20, 30% of the ICU beds in Attica are free for disposal, and in particular 40% of the ICU-COVID beds are free.
https://kefaloniapulse.homeinkefalonia.properties/greek-coronavirus-daily-update-358-new-cases-today-with-43-detected-at-borders
Things to Keep in Mind While Writing an Assignment Why Assignment writing is important? An assignment is a piece task that is allotted to the students as a part of their academics. Students are provided with the opportunity to learn, practice as well as demonstrate whether they have achieved their learning goals. If the students are able to perform effectively on their assignments it indicates that they have been capable of achieving their learning objectives. Writing an assignment helps develop knowledge, improves the speed and quality of Academic writing, and is advantageous for uplifting one’s academic background. Research skills of individuals can also be improved when an individual has to write their assignment and they can learn to organise their ideas and convey them to others. Best assignment tips - Clarification of the task: The assignment task that has been allotted to you needs to be clarified initially. Any questions relating to the task need to be resolved and there should be no procrastination concerning the same. - Conduct necessary research: The topic of assignment that has been assigned to an individual needs to be researched upon. This can help the accumulation of reliable and relevant information concerning the topic. Collection of appropriate material will allow writing the assignment topic and improve the quality of your work. One can search for library and online sources as well as consult with experts in the field - Planning the task: Planning the ways you will write the assignment can increase your focus on the task and this might help enhance the ability to correctly answer the assignment task. It is possible to present the excellent quality of assignment because of planning, as work can be organised better and the ideas can be put to words more effectively. - Writing: Initially, a draft of the work needs to be made where the most important points need to be identified. It is important to ensure that a formal style of writing should be followed and all aspects of the assignment are addressed. It is important to ensure that the assignment being written is relevant and makes sense with the topic that has been assigned for the assignment. The writing should be informative, specific and brisk so that it is easily understood by others - Allow time for revising: When writing an assignment is complete, it is always essential to review the task. Proofreading the task and editing the same based on the needs of the task will help meet the objectives of the assignment task. It will help recognise minor mistakes and the clarity of the work can be confirmed. - Checking for accuracy: Lastly, the assignment should be checked for ensuring that proper citations and quotations have been added. If some aspects seem to be imperfect or not adequate one should try to enhance or improve the same in the next assignment as it is not possible to be flawless. Tips to complete an assignment in less time Completion of Assignments on time enables the students actively participate in their education. It keeps one responsible for aspects that are being learned and allows achieving more in terms of academics. Additionally, it is necessary as students can practice managing their workload which might not only be beneficial for their academics but for their future as well. - Making a list: The basic requirements of the assignment need to be recognised and the task needs to be broken into subparts. 
Thus, making a list of the several tasks will help you prioritise which work needs to be completed first and which items are more important. - Estimating the time required for each task: Each item or sub-part should be allotted an estimated time. This will help maintain the speed of the work and finish each sub-part of the assignment within a set time range. - Gathering all the requirements: Once time has been estimated, gather the information and material required to complete the task, and do this quickly. - Reduce Distraction: Finding a place that is free from distraction and helps you maintain focus on the task can help you complete the assignment on time. To reduce distraction, keep mobile phones and tablets in silent mode until it is time for a break. - Timing oneself: Setting a target to complete each sub-part of the assignment within a stipulated time can boost your speed in completing it. - Staying on the task: To complete an assignment on time it is essential to stay engaged with the task. Staying on task helps you gain more knowledge of the topic and finish the work quickly. - Take necessary breaks: Taking adequate breaks between tasks keeps your energy up. Without a necessary break, you may feel tired, so breaks are vital for maintaining dedication and motivation toward the work. - Reward yourself: Completing an assignment on time should be followed by rewarding yourself; this keeps you motivated to work with equal dedication and finish work on time in the future. College assignment tips College assignments are allotted to students so that they can develop knowledge of the curriculum syllabus. - Make use of all sources of information available: An increasing number of resources are used for teaching students in college, and students should not overlook them. Besides making use of the notes and resources available on the college Moodle, students should write down all information that is put on the blackboard. Additionally, as a student, you should research online and make use of library resources such as books and articles, as well as Google Scholar, when you are assigned college assignments. - Reference the Work: One important aspect of college assignments is that they should be adequately referenced. Using others’ words without acknowledging them is considered an offence by universities and is known as plagiarism. This is a form of cheating, so college assignments are considered best when they have been appropriately referenced. Using a referencing style such as APA, Chicago, Harvard or Vancouver acknowledges authors working in a similar field and improves assignment quality. One can avoid plagiarism by paraphrasing the words of the authors. - Planning the assignment before writing: Planning a college assignment helps it to be completed in an organised manner. Planning helps you understand which information should be placed where and arrange the assignment so that it can be completed efficiently and on time. - Selection of correct words: Technical and formal terms are required for the completion of college assignments. 
Academic terms should be used, as they help make an assignment more presentable. Using brisk and clear statements is necessary when writing college assignments and will make them excellent in terms of quality. - Proofread the work and edit: The assignment should be checked at the end to identify any mistakes and to make sure they are eliminated. One can check for grammatical mistakes at this point and make the necessary corrections. Therefore, planning an assignment is essential for completing it effectively and on time. Following the above-mentioned suggestions will enable you to improve your assignments and, therefore, your academic performance.
https://planetoverclock.com/things-to-keep-in-mind-while-writing-an-assignment/
In the expression, notice that the numerator and denominator are the same except for the signs of the terms. To reduce the expression, we first factor -1 out of the numerator. Here, a and b are real numbers and a ≠ b. Step 1 Factor the numerator and denominator. For the numerator, find two numbers whose product is -27 and whose sum is 6. These numbers are -3 and 9. The denominator is the difference of two squares: 3² - x². Step 2 Cancel all pairs of factors common to the numerator and denominator. Since (x - 3)/(3 - x) has the form (a - b)/(b - a), it reduces to -1. The answer may be written in several ways.
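The displayed expressions from the original page did not survive extraction, so here is a worked example reconstructed from the clues above (two numbers with product -27 and sum 6, and a denominator of 3² - x²); the specific starting expression is an assumption based on those clues.

$$\frac{x^2 + 6x - 27}{3^2 - x^2}
= \frac{(x - 3)(x + 9)}{(3 - x)(3 + x)}
= \underbrace{\frac{x - 3}{3 - x}}_{=\,-1}\cdot\frac{x + 9}{3 + x}
= -\frac{x + 9}{x + 3}, \qquad x \neq \pm 3 ,$$

using the general fact that $\dfrac{a - b}{b - a} = \dfrac{-(b - a)}{b - a} = -1$ whenever $a \neq b$.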
https://www.algebra-tutoring.com/reducing-rational-expressions-2.htm
This past week was a great week for bike riding in Austin. I was able to get out on a few weekdays, including a ride to work on Friday for ride your bike to work day. Unfortunately my route did not take me by any free breakfast stations that were offered. Not that riding to work is out of the ordinary for me, I typically make the 11 mile round trip commute by bike 2 – 3 times per week. On Saturday, May 19th I started my ride bright and early at 8 am, hoping to avoid any wind that might pick up. I cruised north on Parmer Lane for 21 miles before making a u-turn and heading home. The day was quite beautiful: By the time I hit the u-turn the wind was in full force however, making it a little more difficult to enjoy the ride. Stopped at the used car dealership that sets out water containers for cyclists to get a picture of the flags flapping furiously in the wind. My total ride distance was 42 miles. On Sunday, May 20th I set out on a shorter ride in the afternoon. Oh, and without wind to annoy me. On shorter rides I sometimes just take a tour of neighborhoods near my home. Saw this house off Duval that had great metal yard art scattered throughout the front and back yards and some nice native plants out front. Total distance for this ride was 21 miles. I’m training for the 2012 Pedal to the Point Bike MS ride in northern Ohio. Consider supporting my participation by making a donation today!
https://lauramakes.com/2012/05/20/
ADS Santa Fun Run
Congratulations to all the St Paul's runners at the Santa Fun Run on Sunday 25th November. With 46 members, the St Paul's team won the prize for the largest team entry for a second year in succession. A donation of £250 to the school fund was received. The pupils who participated in the 5k run will decide what the money should be spent on. The Fun Run is organised by Alzheimers Dementia Support, a local charity that supports people suffering from the disease and their families.
http://www.stpaulsschool.co.uk/news/detail/ads-santa-fun-run/
Challenge: Down syndrome continues to be the most common chromosomal disorder — and it is also the least-funded major genetic condition by the National Institutes of Health. The disorder affects one out of every 700 babies born, for a total of 6,000 individuals each year. According to the Canadian Down Syndrome Society, Down syndrome births in the U.S. have fallen 30 percent below projections as a result of genetic screening. Official estimates of the number can be difficult, as there has never been a coordinated tracking of births and deaths of those with Down syndrome; however, the previous estimate was recently revised from 400,000 to approximately 250,000, as of 2008. The fewer the individuals with Down syndrome, the fewer to advocate for the community’s interests. Recent advocacy efforts emphasize allowing members of the community to speak to their specific needs on their own, serving as their own best advocate. Members of a disabled community shouldn’t need the efforts of the able-bodied population to receive recognition and assistance. How do you raise awareness for a cause when your most important representatives are disappearing? Key insight Why can’t people be endangered? The idea: Lions, tigers and bears…oh my! Maybe even a rhino. The brainchild of another Canadian Down Syndrome Society (CDSS) and an FCB Toronto venture, “Endangered Syndrome” depicts members of the Down community outfitted in endangered species garb via a series of online videos and print advertisements. The group is applying to be the first people on the International Union for Conservation of Nature (IUCN)'s Red List of Threatened Species. The campaign hopes to raise awareness of the need for support for housing, employment and education — and the overall decline in support for this community. This comparison begs the attention-grabbing question: Does society treat animals better than its people? In an effort to leverage the global Endangered Species List more than just symbolically, the main call-to-action of the campaign urges the public to sign a tangible, online petition for the inclusion of Down syndrome individuals on the global registry. According to the standards set by IUCN, the Down syndrome community qualifies as a threatened species in many parts of the world. Biserki Livaja, the mother of one of the video’s actors, Dylan Harman, says: “People do get behind endangered species, and sometimes people get behind those sorts of issues more than issues that deal with other people.” All efforts coincide with World Down Syndrome Day (March 21, 2019), the same day on which CDSS will physically present its case to the United Nations. Even if this proposition fails — again, no human group or community has ever been placed on the list — the hope is that this campaign sparks a much-needed conversation about future funding. What they said: “It may seem to be a dramatic way of getting our point across [regarding the endangered species theme],” agency co-creative Jeff Hilts concedes. “But the fact is, we need a dramatic shift in awareness and attitudes for the public to understand the seriousness of funding shortfalls if we are to provide a meaningful and inclusive life for the Down syndrome community.” Lessons to be learned: Don’t always rinse and repeat. — This isn’t the first tag-team effort by CDSS and FCB aimed at creating a powerful, award-winning campaign designed to foster empathy and understanding about the Down syndrome community. 
In 2016, young people with the condition answered commonly asked Google questions about their everyday lives. In 2017, the world was informed that “sorry” is the one word you should never say to parents of kids born with a development disorder. It would have been easy for either organization to follow these same templates; however, they choose to move the needle with something totally new — keenly, recasting some of the same talent that we previously celebrated. Several of the stars of the campaign are returning for their third consecutive CDSS campaign. Start a movement. — FCB could have stopped at just the commercials because the connection between endangered animals and the Down syndrome community was clearly articulated, but knowing that a series of :30 ads would be hard-pressed to start a movement, the agency decided to start one of its own. While the audacity to actually present the petition to the UN as a legitimate case speaks volumes about the importance of the issue at hand, the notion of not being afraid to fail for something you believe in is invaluable. Stretch dollars further. — Nonprofits are synonymous with shoestring budgets. FCB overcame this hurdle by turning to Etsy for its costuming. After finding a seller that created incredible downloadable PDFs of DIY printable masks, a talented and patient designer modified everything to be the perfect complement to the body pieces. Raise awareness for reality. — Attitudes around disabilities are changing. Individual and group advocacy efforts are changing the public perception from one of disadvantage and inability to one of strength and admiration. Advertising campaigns, such as recent ones from Nike and Tommy Hilfiger, emphasize the normalcy and ability of individuals with disabilities; however, these efforts can detract from the support that is still necessary to assist these individuals with their needs. Children with Down syndrome maintain a higher risk for a number of health complications, such as an assortment of diseases, as well as congenital heart defects.
https://www.deconstructedbrief.com/endangered-syndrome/
Inspiration and Strategic Change Jean Pierre Simard As the need for developing business models and manufacturing processes that sustain environmental health has become crucial to long-term success, many companies are reinventing themselves and searching for new ways of doing things. Inspiration and Strategic Change on the Path to Sustainability by Jean Pierre Simard IS Continuing Education Series Learning Outcomes I&S Continuing Education Series articles allow design practitioners to earn continuing education unit credits through the pages of the magazine. Use the following learning objectives to focus your study while reading this issue's article. To receive credit, see the series of questions and follow the instructions. After reading this article, you should be able to: Explain ways in which a company's environmental policy can become integrated into its corporate culture. Describe the differences between a technical nutrient and a biological nutrient. Describe the key elements of the Cradle-to-Cradle Design Paradigm. Identify four key areas of sustainable business activity and how they offer opportunities to develop strategic initiatives. In today's fast-changing business environment, forging continuity between vision, strategy and daily operations has never been more important. This is especially true in light of the recent revolution in sustainable design. As the need for developing business models and manufacturing processes that sustain environmental health has become crucial to long-term success, many companies are reinventing themselves and searching for new ways of doing things. This is an important first step. But to be a truly smart, agile, 21st-century company, sustainability must become a core business strategy. Rather than simply reducing waste in a narrowly defined sector of business, an innovative company can work toward making every decision reflect its commitment to sustainability. And that commitment can be expressed in positive terms. Your company, for example, might want to commit itself to designing products that benefit people and the environment, that enrich quality of life in every phase of their production and use, and that grow value and competitive advantage. As we have pursued similar goals, we've learned a few things about strategic change—ideas that we are sharing here—that might help your company move toward a new vision of quality and performance. Building Company Culture When environmental policy becomes a part of company culture it provides a reliable framework for integrating sustainability into strategic-decision making. Change begins with inspiration, but changing strategically requires more. Integrating a commitment to sustainability into company culture is key. All employees, from manufacturing staff to management, must be familiar with the company's environmental policy. They must also see themselves as members of a team committed to a common mission, and interaction between all departments must constantly be encouraged. In this atmosphere of open communication, an environmental policy can become a familiar part of a company's culture, providing a framework for good, strategic decision-making. Environmental policy influences design and manufacturing in a variety of ways. An Environmental Task Group, for example, might establish objectives and targets, which could range from increasing the quantity of optimized green materials used in products to generating energy savings through the use of renewable sources. 
This same task group can also serve to help implement an Environmental Management System, one that is ideally built in conformance with the ISO 14001 standard. Meeting this internationally recognized standard is not merely about compliance; it establishes a foundation from which a company can continue to be an innovative, environmentally responsible business devoted to enriching quality of life. As positive change moves through a company, an ethic takes root. In the world of industry, a lean manufacturing ethic can make each small change contribute to overall quality and environmental performance. For a textile mill, thinking lean means eliminating waste in the manufacturing process; it means designing fabrics to minimize the need for backings and coatings; it means spinning processes that save natural resources and eliminate the need for lubricants. And when lean thinking is well-integrated—mapped throughout the entire organization—every process and system becomes more effective, generating value for customers while strengthening a company's commitment to sustainability.

An Effective Product Development Process
A collaborative product development process with well-established criteria assures stability and agility: quality and consistency go hand-in-hand with cutting-edge design and innovation. The architect and designer William McDonough often says, "Design is the first signal of human intention." That commitment to environmental quality should be rooted in the design process. To stay on course on the path to sustainability, and to further integrate intentions into what a company does day in and day out, a product development process should be created to bring new products to market. A sound product development process allows a company to be sure that it is developing smart, innovative designs that meet a variety of criteria. Before moving ahead in the development process, each product must meet criteria within categories such as:
- manufacturing feasibility
- material quality and construction
- marketability
- pricing
- design and aesthetics
- sustainability
These and other criteria can then be evaluated at a series of "gate" meetings involving each department, including research and development, design, marketing and sales. The evaluations, and the synergy between departments, assure quality and consistency while also shifting most of the development time to the beginning of the design process, resulting in faster, smoother, more economical production. One of the prime benefits of the process is agility in the marketplace. While it might seem that the demanding gate meetings could throw obstacles in the path of designers, they actually provide a dependable framework in which fast-paced, cutting-edge creativity can flourish. For small companies built on innovation, that presents a huge competitive advantage.

First Fruits
A smart product development process can generate a new definition of quality. When strategic change is your game, agility can take you far. In fact, an openness to innovation oftentimes forges partnerships with key players in the field. One such player we have found is McDonough Braungart Design Chemistry (MBDC), a scientific consultancy that develops sustainable "eco-effective" products and practices for global clients. Working with MBDC, we developed a technologically advanced polyester fabric designed for sustainability from start to finish. Based on MBDC protocols, the new fabric is produced with materials and manufacturing practices that are optimized for health and safety.
(See sidebar on page 14 for MBDC environmental health and safety criteria.) Why is this important? How can a synthetic, man-made textile be considered ecologically friendly anyway? Well, most polyester is manufactured using a chemical catalyst called antimony, which is a known carcinogen. Long-term inhalation of antimony trioxide, a by-product of polymer production, can cause chronic bronchitis and emphysema. The polyester fabric we developed with MBDC, however, is made with dyestuffs, auxiliary chemicals and a safe, antimony-free catalyst that meet strict human and environmental health criteria. Unlike conventional polyester, which is often made with materials or backings that make recycling unsafe or ineffective, the fabric is also designed for perpetual recycling. Additionally, industry leaders are exploring ways to establish a take-back program for large-scale polyester recycling. These changes are occurring cost-effectively, with real products in real markets, which suggests the efficacy of sustainable design across a wide spectrum of business activity. Using the design of textiles as an example, here are some of the objectives a company in search of sustainability might set in the product development process. An environmentally sound, high-quality textile is: made from a fully optimized fiber using a new, environmentally safe catalyst; designed to outperform traditional polyester in the dyeing process, using dyestuffs and energy more efficiently; designed with optimized dyestuffs and chemicals, which replace harmful chemicals such as chlorine, or heavy metals such as antimony; produced in a facility where a significant percent of the energy used comes from renewable energy sources that do not contribute to climate change; designed to be safely recycled into new fabric at the end of its life, with no hazardous by-products; designed for optimal value recovery within closed loop systems. In MBDC parlance, this new textile would be a technical nutrient, a synthetic material carefully designed for recovery and reuse throughout multiple product life cycles, which can be continually and safely recycled into new fabric after it is used. Along with biological nutrients—materials designed to safely biodegrade after use—technical nutrients are the centerpiece of the regenerative, ecologically intelligent approach to design articulated by the pioneering thinkers architect William McDonough and chemist Michael Braungart. The MBDC Protocol is based on McDonough and Braungart's work, laying out the step-by-step process of designing biological and technical nutrients for closed-loop, cradle-to-cradle material flows. Toward a New Design Paradigm Designs that benefit the environment at every phase of their use "transform the making of things into a positive, regenerative force." Developing ecologically intelligent products and cradle-to-cradle material flows is a decisive step for industry. As McDonough and Braungart have pointed out, the cradle-to-grave material flows that characterize conventional industry create a "one-way trip to the landfill" that generates pollution, wastes energy and uses up natural resources. They are quick to add, however, that "the destructive qualities of today's cradle-to-grave system are fundamentally a deeply ingrained design problem, not an inevitable outcome of human activity." In fact, "good design can transform the making of things into a positive, regenerative force." 
What McDonough and Braungart call "good design" is based on the laws of nature, which can be applied to the design of both natural and synthetic materials. "Just as in the natural world, in which one organism's 'waste' cycles through an ecosystem to provide nourishment for other living things, cradle-to-cradle materials circulate in closed-loop cycles, providing nutrients for nature and industry. The cradle-to-cradle model recognizes two metabolisms within which materials flow as healthy nutrients. Nature's nutrient cycles comprise the biological metabolism. Materials designed to flow optimally in the biological metabolism, which we call biological nutrients, can be safely returned to the environment after use to nourish living systems. The technical metabolism, designed to mirror the earth's cradle-to-cradle cycles, is a closed-loop system in which valuable, high-tech synthetics and mineral resources—technical nutrients—circulate in a perpetual cycle of production, recovery and remanufacture." McDonough and Braungart's philosophy and MBDC's Cradle-to-Cradle Protocol is a positive, principled way of thinking about not only the textile industry but many other markets and industries as well. It easily dovetails with any company's efforts to design products that benefit the environment in every phase of their production and use, and to make sure those products can be safely returned to nature or recycled into valuable new products. Idealistic? Maybe. But it's an idea whose time has come. (See Chart 1). So what's the next step? Beyond the introduction of a single product, how might a company integrate sustainability into everything it does? We've found that developing a platform for strategic change, a concept we call Eco-Intelligence Initiatives®, allows us to focus on key areas in the design process—partnerships, products, processes and people—that offer every company opportunities for making sustainability a cornerstone of business success. Partnerships: This is especially critical for small companies whose goal is not necessarily to grow big, but to work with partners to do big things. By sharing knowledge and expertise, a company can expand its reach, influence and impact, developing products that benefit the entire industry and help set up conditions for long-term success. Products: On the heels of developing its first eco-friendly product, a company can build on its momentum with the introduction of new products that expand the line. These new introductions benefit from, and build on, the success of the first, growing visibility and public awareness. While serving as a public launch, the first product also sets internal standards for a whole new series of environmental initiatives. Processes: The way you make a product is just as important as what you make. Manufacturers should continually look for cleaner, greener ways to make their products. While eliminating toxic elements from the manufacturing process, also use renewable energy sources to power facilities. In addition, track energy and raw material use and continue to reduce waste at all levels of operation. People: Stay attuned to community, inside and outside your doors. Consider everyone who works with your company a "partner" who contributes to your success. Only people bring passion to the table and only passion generates commitment. We include in our community our suppliers, customers and neighbors, and the goal of our initiatives is to deliver high quality products that benefit their health, safety and quality of life. 
People are the cornerstone of business success. Staying on the Cutting Edge There's no need to sacrifice color, choice, beauty, performance or customer satisfaction when designing with the environment in mind. Sustainable design is innovative design. One might think that being passionate about the environment might cause a company to miss a beat in the fast moving worlds of style and technology, but this need not be so. In fact, challenging designers to deliver fabrics valued for their eco-friendliness as well as their aesthetics and performance can stimulate cutting-edge thinking. Working from the synergistic foundation of a smart product development process, designers are free to think outside the box to bring freshness and energy to the design process. For example, the design process might begin with an "Inspiration Day," which draws on the creative currents surging through your city's galleries, museums, art studios and fashion runways. In a step-by-step process, inspiration can fuel the development of a design concept, and a concept can become an artwork, a textile pattern at the forefront of color and style trends that works within a cohesive collection. Working closely with the R&D team, the focus would turn to selecting fabric, mixing yarns and pushing the limits of construction. And throughout the process, designers might work closely with customers, tuning designs to their performance criteria and aesthetic needs. Considering human and environmental health criteria does not dull customer satisfaction nor the passion and innovative energy of the design studio. Indeed, what it does is foster great design—and none of the guilt. Marketing Sustainability and Strategic Change Leadership provides the stories that let the world know your business is making the world a better place. None of the guilt. That's one effective way to highlight the benefits of sustainable design and manufacturing. There are many. Saving resources. Protecting the environment. Improving product quality. Generating quality of life. All sound, true statements of fact. All things of which a company can be proud. What a company cannot be proud of is greenwashing—claims that go beyond what it has accomplished; claims that disguise shortcomings; claims that are patently false have no place in the toolbox of marketers. Developing sustainable products and practices, making a commitment to strategic change—these are undertaken in good faith, and good faith goes well rewarded in the workplace and the marketplace. Bill McDonough often talks about "doing well by doing good." A company that is truly committed to strategic change will discover that its journey and its leadership will provide it with many stories—examples that can genuinely demonstrate how it is helping make the world a better place. These "big ideas" are finding a bigger and more receptive audience each and every day. And that's a good business strategy not just today, but also for long-term successes in our contemporary economic climate. Human and Environmental Health Criteria Victor Innovatex, working with McDonough Braungart Design Chemistry, introduced a polyester fabric made with dyestuffs, auxiliary chemicals and a safe, antimony-free catalyst, which meets stringent human and environmental health criteria. On a five-point scale, the fabric met MBDC's Level Four protocol for products "designed for optimal value recovery within closed loop systems." 
Following the MBDC Design Protocol, materials' human and ecological health are characterized according to a set of criteria. MBDC criteria also include value recovery potential, such as the technical feasibility of recycling a material, and an energy profile, which evaluates the use of renewable sources of energy in the creation, distribution, use and value recovery process of a product. Source: www.greenblue.org

Technical nutrient and Cradle-to-Cradle Design Protocol names are trademarks of McDonough Braungart Design Chemistry.

Jean Pierre Simard is director of marketing for Victor Innovatex. Since its start as a woolen mill in 1947, Victor Innovatex has grown to become a leading fabric design and manufacturing company serving the contract industry. Its new, environmentally sound, technical nutrient polyester is the first introduction in the company's Eco Intelligence Initiatives®, a new way of thinking that is guiding Victor Innovatex on its path toward integrating sustainability as a core business strategy. For more information visit www.victor-innovatex.com/ecointelligence.
The Toyota Verso-S is a Japanese-market model in the M class. The car is offered with diesel and gasoline engines. The most powerful version has a 1.3-liter gasoline engine (99 hp) paired with a 6-speed gearbox. With this engine, gasoline consumption is 5.5 liters per hundred kilometers in the city, 4.8 liters on the highway and 6 liters on a mixed cycle. The fuel tank capacity is 42 liters. The car accelerates to 100 km/h in 13.3 seconds, and the maximum speed for the Toyota Verso-S is 170 km/h. The front suspension is independent. The car has ventilated disc brakes on the front wheels and disc brakes at the rear.
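As a rough illustration of what those consumption figures imply, the Python sketch below estimates the driving range per full tank from the numbers quoted above (range in km = tank litres / litres per 100 km × 100). The figures are taken as quoted; real-world range will of course vary with driving conditions.

```python
# Rough range-per-tank estimate from the quoted consumption figures.
tank_litres = 42
consumption_l_per_100km = {"city": 5.5, "highway": 4.8, "mixed": 6.0}

for mode, cons in consumption_l_per_100km.items():
    print(f"{mode}: ~{tank_litres / cons * 100:.0f} km per tank")
# city: ~764 km, highway: ~875 km, mixed: ~700 km
```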
https://carsot.com/toyota/verso-s/toyota-verso-s-2010-now-hatchback-5-door.html
10/170" - Sunday 10/180" - Monday 10/190" - Tuesday 10/200" - Wednesday 10/210" - Today 10/220" - Friday 10/230" - Saturday 10/240" - Sunday 10/250" Weather - Summit | 1569ft43°FE 9mph - Base | 1060ft43°FE 9mph Snowfall History Advertisement Caberfae Peaks Lift Hours |Day||Daytime Hours||Nighttime Hours| |Mon||10:00 AM-5:00 PM||N/A| |Tue||10:00 AM-8:00 PM||4:00 PM-8:00 PM| |Wed||10:00 AM-8:00 PM||4:00 PM-8:00 PM| |Thu||10:00 AM-8:00 PM||4:00 PM-8:00 PM| |Fri||10:00 AM-10:00 PM||4:00 PM-10:00 PM| |Sat||10:00 AM-9:00 PM||4:00 PM-9:00 PM| |Sun||10:00 AM-8:00 PM||4:00 PM-8:00 PM| Caberfae Peaks Snow & Ski Conditions How much snow did Caberfae Peaks get today? Find the latest snow report for Caberfae Peaks, with ski conditions, recent snow totals and snowfall in the weather forecast. Scroll left to see the most recently recorded Caberfae Peaks snow totals for the last five days or scroll right to see the Caberfae Peaks snow forecast for the next three days. Read the snow reporter comments (if provided) for more details on skiing at Caberfae Peaks for the day. See base depth recorded at the upper mountain, mid mountain and lower mountain stations, along with the current weather at the summit and base elevations, including wind mph and direction. Click through for a full weather forecast. How many lifts are open and how many runs are open at Caberfae Peaks? Check out the Caberfae Peaks ski report, including number of lifts open, acres open and runs open, as well as the terrain park status. Click Add to Compare to see a side-by-side comparison of Caberfae Peaks vs. other ski resorts (up to 10 total). Caberfae Peaks snow reports are sourced directly from the ski resorts and are only recorded during the official ski season's opening to closing dates. Terrain Acres Open N/A of 200ac Parks & Pipes Terrain Parks 2 Park Last Reshaped 1/29 Nordic Total Tracks 9.3mi Advertisement All languages Pretty good coverage everywhere, only a couple of spots near the bottom where it’s thin. Yesterday by 4 there were a lot of slush moguls. New Snow | Spring Snow Sunday: great conditions! Probably even better now. Enough snow for backcountry/tree lines. South peak wasn't open yet. Get ou there and shred, y'all!! 4" New Snow | Powder Latest News Discover Michigan Skiing this JanuaryMichigan ski areas across the state have teamed up with McDonald’s restaurants to offer a popular and very affordab... Slopes & Trails Abound in Michigan’s Upper PeninsulaPowder glade skiing, uncrowded lift lines, scenic trails and terrain parks for every skier ability level. If this s...
https://www.onthesnow.com/michigan/caberfae-peaks-ski-golf-resort/skireport.html?offset=4&numRows=2
How do you find the number of electrons in an atom? The number of protons, neutrons, and electrons in an atom can be determined from a set of simple rules.
- The number of protons in the nucleus of the atom is equal to the atomic number (Z).
- The number of electrons in a neutral atom is equal to the number of protons.

How many electrons are there in this atom? If we have an electrically neutral atom, and there are 8 positively charged, massive particles (i.e. protons), then by definition there must be 8 electrons in the neutral atom, and these electrons are conceived to whizz about the nuclear core in all sorts of orbits, with various energies.

How many electrons are in a krypton atom? Krypton's electrons per shell: 2, 8, 18, 8.

Who discovered the electron? J.J. Thomson. Although J.J. Thomson is credited with the discovery of the electron on the basis of his experiments with cathode rays in 1897, various physicists who had also conducted cathode ray experiments, including William Crookes, Arthur Schuster, Philipp Lenard and others, claimed that they deserved the credit.

Where is krypton found? Earth's atmosphere. Although traces are present in meteorites and minerals, krypton is more plentiful in Earth's atmosphere, which contains 1.14 parts per million by volume of krypton. The element was discovered in 1898 by the British chemists Sir William Ramsay and Morris W.

How many total electrons does element 15 have? How many total and valence electrons are in a neutral phosphorus atom? A neutral phosphorus atom has 15 total electrons. Two electrons go into the first shell, eight into the second shell, and the remaining five into the third shell. The third shell is the outer valence shell, so phosphorus has 5 valence electrons.

Are protons and electrons equal? An atom contains equal numbers of protons and electrons. Since protons and electrons have equal and opposite charges, this means that atoms are neutral overall.

Which atom contains exactly 16 neutrons? If you look at the periodic table, phosphorus has 15 electrons and protons, and 16 neutrons.

Who found the neutron? Chadwick. In 1927 he was elected a Fellow of the Royal Society. In 1932, Chadwick made a fundamental discovery in the domain of nuclear science: he proved the existence of neutrons – elementary particles devoid of any electrical charge.

Can we see an electron? Now it is possible to see a movie of an electron. Previously it was impossible to photograph electrons, since their extremely high velocities produced blurry pictures. In order to capture these rapid events, extremely short flashes of light are necessary, but such flashes were not previously available.

Is krypton poisonous? Krypton is a non-toxic asphyxiant that has narcotic effects on the human body. Krypton-85 is highly toxic and may cause cancers, thyroid disease, and skin, liver or kidney disorders.
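For readers who want to experiment with the simple shell-counting idea described above, here is a minimal Python sketch. It assumes the naive 2n² shell-capacity rule, which reproduces the examples quoted in the answers (phosphorus 2, 8, 5 and krypton 2, 8, 18, 8) but is not reliable for heavier elements, where real electron configurations deviate from simple shell filling.

```python
def simple_shells(atomic_number):
    """Distribute the electrons of a neutral atom over shells using the
    naive 2*n^2 capacity rule (2, 8, 18, 32, ...).

    Matches the examples above (P -> [2, 8, 5], Kr -> [2, 8, 18, 8]) but
    real configurations differ from shell 4 onward (e.g. potassium).
    """
    remaining, n, shells = atomic_number, 1, []
    while remaining > 0:
        capacity = 2 * n * n          # maximum electrons the n-th shell can hold
        shells.append(min(capacity, remaining))
        remaining -= shells[-1]
        n += 1
    return shells

print(simple_shells(15))  # phosphorus: [2, 8, 5] -> 5 valence electrons
print(simple_shells(36))  # krypton:    [2, 8, 18, 8]
```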
https://massinitiative.org/how-do-you-find-the-number-of-electrons-in-an-atom/
Most any packaged food that involves boiling (like boxed macaroni-and-cheese dinners) will have "high altitude" cooking instructions, and the box of Hamburger Helper I have here is no exception. The reason foods have these instructions is that the boiling point of water changes with altitude. As you go higher, the boiling temperature decreases. At sea level, the boiling point of water is 212 degrees F (100 degrees C). As a general rule, the temperature decreases by 1 degree F for every 540 feet of altitude (0.56 degrees C for every 165 meters). On top of Pike's Peak, at 14,000 feet, the boiling point of water is 187 degrees F (86 degrees C). So pasta or potatoes cooked at sea level are seeing 25 degrees more heat than pasta or potatoes cooked on Pike's Peak. The lower heat means a longer cooking time is needed. Pressure cookers work in the opposite direction. A pressure cooker raises the pressure so that the water boils at a higher temperature. A typical pressure cooker applies 15 pounds of pressure, so the boiling point of water rises to 250 degrees F (121 degrees C) at sea level. The higher temperature means that foods take less time to cook.
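The rule of thumb above is easy to turn into a quick calculation. This minimal Python sketch assumes the 1 °F per 540 ft figure quoted in the article; at 14,000 ft it gives roughly 186 °F, in line with the 187 °F quoted for Pike's Peak.

```python
def boiling_point_f(altitude_ft):
    """Approximate boiling point of water in deg F, using the article's
    rule of thumb: 212 F at sea level, minus 1 F per 540 ft of elevation."""
    return 212 - altitude_ft / 540.0

for alt in (0, 5000, 10000, 14000):
    print(f"{alt:>6} ft: ~{boiling_point_f(alt):.0f} F")
# 14,000 ft comes out to ~186 F, close to the 187 F the article quotes for Pike's Peak.
```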
https://recipes.howstuffworks.com/tools-and-techniques/question63.htm
Russian Honey Cake. You can make Russian Honey Cake using 6 ingredients and 3 steps. Here is how to do it.
Ingredients of Russian Honey Cake
- 50 g of honey
- 250 g of flour
- 3 eggs
- 100 g of butter
- 50 g of sugar
- Cream cheese for topping, to taste
Russian Honey Cake instructions
- Mix the butter with the honey and sugar. When the mixture has cooled, add the eggs, then the flour. Divide the batter into 5 round cake pans.
- Each layer takes 5 minutes to bake in the oven.
- To assemble the cake, cover each layer with cream cheese. Garnish with crumbs of the crusty cake.
https://eastlondontemple.com/615-2020-russian-honey-cake/
This invention relates to improvements in closures for openings of various types and, more particularly, to a closure for the open top of the bed of a pickup truck or the front opening of a cabinet for containing electrical equipment.
BACKGROUND OF THE INVENTION
Closure or cover apparatus of multiple panels of various types have been proposed and used for the open top of the bed of a pickup truck. Disclosures relating to this general subject matter are found in the following U.S. Pat. Nos. 2,997,330, 3,069,199, 3,858,744, 4,284,303, 4,313,636, 4,418,954, 4,550,945, 4,615,557, 4,695,087, 5,009,457, and 5,011,214. For the most part, the mounting of panels on the closures of these patents is quite complex in construction, and the closures have had limited use as to the way in which the panels can be selectively opened and closed. Moreover, the closures of conventional designs are generally not waterproof, and rain water can penetrate the closures. Thus, goods carried in the truck bed while the closure is in place can often be damaged. Because of these limitations of conventional multiple panel closures for the open tops of the beds of pickup trucks, a continuing need exists for improvements in such closures. The present invention provides a solution to the complexity problem. In the use of access doors for electrical equipment cabinets or housings, a relatively large housing requires that the door be generally a single, relatively massive panel. This requires a large amount of space to open the door since the cabinet itself is quite large to be able to house a great amount of electrical equipment therewithin. Cabinet doors of electrical equipment typically swing on vertical hinges and are of one-piece construction. As a result, the doors are relatively large and difficult to open and close. A need, therefore, continues to exist for closures for large openings of cabinets for electrical equipment to simplify the way in which the interiors of such cabinets are accessed. The present invention satisfies this need as well.
SUMMARY OF THE INVENTION
The present invention is directed to an improved multiple panel closure which can be used for covering the open top of the bed of a pickup truck. The closure can also be used as an access door for the front opening of a cabinet for housing electrical equipment. In either case, the closure of the present invention permits any one of four panels to be opened independently of the other panels, or the panels can be moved to side positions in which the entire top opening or front opening of the pickup truck and the cabinet, respectively, can be exposed. The advantages of the present invention when used with the pickup truck include the providing of a low profile when the panels are in their side board positions. The pickup truck also has folding brackets and subframes to provide superior outward strength. Also, there is a unique double hinge which provides for the movement of closure panels from closed, coplanar positions to fully open, side positions. The structure of the closure of the pickup truck provides for a number of panels to be moved into positions in which heavy materials, such as lumber, can be carried on racks or removable crosspieces when the panels are in the side board positions. Moreover, provision is made for flexibility in hauling items larger than the height, length and width of the closure in the closed position. Theft resistance of the closure is superior because subframe hinge pins for mounting the panels can be locked under the closure.
The center panels are locked by way of a cam lock or hasp which also locks the tail gate. Access to the interior of the truck bed can be had from the sides of the bed and at the rear of the bed when the tailgate is in a down position. A further important feature is the fact that the center pair of panels of the closure of the present invention can be opened and access can still be maintained as the two side openings remain closed by the outer side panels. In the housing for electrical equipment, the panel cover of the present invention allows quick, convenient access to operate the equipment in the cabinet. When the doors or closure panels are fully open, such as for wire pulling and/or testing, they block approximately 50% less access space to other certain electrical components in the housing, especially when other components are located across from one another which is common in the electrical equipment industry. The closure of the present invention also maintains strength and rainproof integrity and, with the addition of gaskets in appropriate areas, the system may be constructed dust tight. This would allow design engineers to make more efficient use of the interior space of the cabinet. The primary object of the present invention is to provide an improved closure for a framed unit, such as the top opening of a bed of a pickup truck or the front opening of a cabinet housing for electrical equipment wherein multiple panels of the closure can be selectively opened or closed yet the panels themselves can be moved into standby side positions to achieve full or partial open conditions for the opening. Other objects of this invention will become apparent as the following specification progresses, reference being had to the accompanying drawings for an illustration of the invention. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a perspective view of a pickup truck having four panels thereon for closing the top opening of the bed of the truck, the panels extending fore and aft and being generally parallel with each other; FIG. 2 is a schematic, rear elevational view of the pickup truck, showing the four panels in their closed positions and further showing a pair of braces at the rear of the pickup truck for supporting the panels in their open position; FIG. 3 is a view similar to FIG. 2 but showing the outer two panels in their open, vertical positions; FIG. 4 is an enlarged, fragmentary, perspective view of the bed of the pickup truck showing the rain gutters for the closure panels and a three point latch for releasably holding the panels in closed positions; FIG. 5 is a fragmentary, schematic view of the three point latch system used with the panels of FIG. 4; FIG. 6 is a view similar to FIGS. 2 and 3 but showing all four panels raised to full, vertical sideboard positions and with a cross bar in place; FIG. 6A is a view similar to FIG. 6 but showing an increase in the sideboard height with the addition of sliding brackets; FIG. 6B is a perspective view of the truck showing the sideboard position to provide accessibility to the vehicle simply by opening the panels on either side of the vehicle; FIG. 7 is a view similar to FIG. 6 showing the way in which a cross bar of FIG. 6 is put into place; FIG. 8 is a view similar to FIG. 6 but showing the right-hand pair of panels in a side board position and a brace at the front end of the bed of the pickup truck; FIG. 
8A is a schematic rear elevational view of the left-hand closure panels with the outer panel being partially raised and in dashed lines; FIG. 8B is a view similar to FIG. 8A but showing the right-hand closure panels; FIG. 9 is a fragmentary, perspective view, parts being broken away, of the hardware for hinging a pair of right-hand panels of the pickup truck of FIG. 1 to a base structure on the truck bed; FIG. 10 is a perspective view of an electrical cabinet having four doors which are hinged about vertical hinge lines and can be opened and closed in the same manner as the panels of the pickup truck of FIG. 1; FIG. 11 is a cross-sectional view of the cabinet doors when the outer panels are in their closed position; FIG. 12 is a schematic view of a three point latch system for releasably locking an end panel of the set of door panels of FIG. 10 on the cabinet; FIG. 13 is a view similar to FIG. 10 but showing the way in which the center pair of door panels of the cabinet can be opened while the outer doors remain closed; and FIG. 14 is a view similar to FIGS. 10 and 13, but showing the way in which all four door panels of the cabinet can be opened to gain access to the interior of the cabinet. DETAILED DESCRIPTION OF THE DRAWINGS The preferred embodiment of the closure assembly of the present invention is broadly denoted by the 10 and is adapted for use on a fixed structure or a moving structure. For purposes of illustration, assembly 10 is shown in FIGS. 1-8 as being applicable for closing the open top of the bed of a pickup truck 12 having front wheels 14, rear wheels 16 and a swingable tailgate 18 hinged for rotation about a horizontal axis 20 so that the tailgate can drop from a closed position as shown in FIG. 1 to an open position. Closure 10 has a set of four panels denoted by the numerals 22, 24, 26 and 28. The panels are parallel with each other and extend fore and aft of the truck bed as shown in FIG. 1. The closure assembly 10 has hinge means (hereinafter described) which allows the panels 22, 24, 26 and 28 to pivot relative to the bed and relative to adjacent panels so as to selectively open the bed by opening one or more panels without having to open all of the panels. For instance, outer panels 22 and 28 can be pivoted into upright positions, as shown in FIG. 3, from the horizontal, closed positions shown in FIG. 2. Moreover, outer panel 22 can be pivoted into its vertical position while panel 28 remains in or is pivoted out of its horizontal position as shown in FIG. 2. Similarly, inner panels 24 and 26 can individually be opened while the outer panels 22 and 28 remain closed or horizontal. FIG. 6 shows that panels 22, 24, 26 and 28 can be pivoted from the closed positions of FIG. 1 to open, side board positions of FIG. 6. In this view, a cross bar 90 can be provided at each of several locations on the bed for hauling lumber on the cross bars, if desired. All of the foregoing positions of panels 22, 24, 26 and 28 are achieved by virtue of pivot hardware of the type shown in FIGS. 8A and 8B and with reference to FIG. 4. Hinging of the panels 22, 24, 26 and 28 occurs as follows: panel 22 pivots about left-hand hinge pin 58 (FIG. 8A); panel 24 pivots about left-hand hinge pin 58 (FIG. 8A); panel 26 pivots about right-hand hinge pin 58a. (FIG. 8B); and panel 28 pivots about hinge pin 58a. Panels 22 and 24 pivot as a unit about left-hand hinge pin 58 (FIG. 8A) when they are moved together into their side board positions shown in FIG. 6. 
Similarly, panels 26 and 28 pivot as a unit about right- hand hinge pin 38 (FIG. 8B) when they are moved together into their side board positions of FIG. 6. Panels 22, 24, 26 and 28 are coupled together to form closure assembly 10 in the following manner: a pair of C-shaped members 30 is provided for the left and right side margins of the truck bed, respectively, each C- shaped member 30 being shown in FIGS. 8A and 8B. One flange 32 (FIG. 9) of each member 30 is secured by fasteners, such as machine screws or the like, to the upper surface 34 (FIGS. 8A, 8B and 9) of a rigid bar or strap 36 which extends across each of the front and back portions, respectively, of the truck bed as shown in FIGS. 8A and 8B, the front strap 36 not being shown. Strap 36 can be a crosspiece 90 (FIG. 6) used to haul lumber on the truck bed. While fasteners, such as machine screws, have been described herein to secure members 30 to the truck, a frame that bolts together could be provided in place of member 30. Such frame will be attached primarily by set screw clamps to the upper lip 34 of the truck. The only bolts that require drilling holes in the truck are fasteners 77 (FIG. 3). Hinge pin 38 is carried at one end thereof on another flange 41 of each C-shaped member 30 as shown in FIG. 8B. The hinge pin 38 passes through the sleeve 42 (FIG. 9) of a hinge plate 44 coupled by a transversely U-shaped subframe member 46 which extends laterally from the hinge pin 38 and is integral with one side 48 of a transversely U- shaped channel member 50 having a rigid, L-shaped hinge support element 52 rigidly secured to a bottom web 54 of element 50. Member 50 extends fore and aft between the bars 36 at the front and rear ends of the bed of the vehicle. Moreover, members 30 extend from the rear end of the bed of the pickup vehicle to the front end thereof as shown in FIG. 4. Hinge support element 52 has a plurality of hinge leaves 56 thereon and the hinge leaves alternate with hinge leaves 57 and are pivotal relative to each other and to the hinge pin 58 (FIG. 9). Panel 22 is provided with an inner side flange 60 (FIG. 8A) which is coupled to alternate hinge leaves 56 (FIG. 9) so that the panel 22 can move from the full line position of FIG. 8A through the dashed line position of FIG. 3 to a generally upright position (not shown). Thus, this allows opening of the space 63 (FIG. 4) normally covered by panel 22 so that access can be gained to the interior of the truck or vehicle bed with only panel 22 in an open position as shown in FIG. 3. A gas spring or spring bias means 65 in the form of a piston and cylinder assembly is pivotally mounted at one end 66 (FIG. 8A) to a subframe member 46 and at the opposite end to the panel 22. Gas spring 65 effectively supports and releasably holds the panel 22 in a generally upright position as shown in FIG. 3. In its closed position, panel 22 has an overlapping lip 67 (FIG. 8A) which directs water downwardly along a generally vertical path and onto the ground in event of rain falling on the top surface of panels 22, 24, 26 and 28 of assembly 10. Moreover, each of the C-shaped member 30 (FIG. 8A) channels water fore and aft to keep the system 10 water-tight, the ends of members 30 being open. Panel 24 is pivotally mounted to the adjacent hinge pin 58 by being connected with the remaining hinge leaves 57 which are not connected to the flange 60 of panel 22. To this end, panel 24 has a flange 69 (FIG. 8A) which is connected to alternate hinge leaves 57 (FIG. 
9); thus, while panel 22 is in a closed position as shown in FIG. 2, panel 24 can be opened to permit access to the space denoted by the numeral 63a (FIG. 4) which is normally covered by panel 24. Some means can be provided to stabilize and keep the panel 24 in a generally upright position. Also, panel 24 has an inner side lip 71 (FIGS. 8A and 8B) which extends between the front and rear ends of the bed of the truck and which is also received within a U-shaped trough-like element 72 on the adjacent inner side of panel 26 (FIG. 8B). Element 72 channels rain water to the front or back of the bed and onto the ground from the open ends of element 72. Referring once again to panels 22 and 24, the entire portion of the space 63 and the space 63a which is normally covered by panels 22 and 24 can be opened by first swinging panel 24 about the axis of adjacent left- hand hinge pin 58 such that the panel 24 lies on panel 22 (FIG. 8); then, subframe 46 (FIG. 9) is lifted and pivoted about the axis of the adjacent left-hand hinge pin 38 (FIG. 8A) so that the panels 22 and 24 will essentially be in a vertical, side board position as shown in FIG. 6. Similarly, panels 26 and 28 are moved in the same manner into their side board position as shown in FIG. 6. To support panels 22 and 24 while in their upright positions, a pair of tubular members 75 (FIG. 6) are secured in upright positions by fasteners 77 (FIG. 6) to the bed 79 of the vehicle. A pivot pin 78 interconnects a bar 80 with the upper end of each member 75 as shown in FIG. 6. The member 75 is cut away from the upper end thereof so that the upper end of shaft 80 can pivot into the generally horizontal position shown in FIG. 2 when the corresponding panel 22 is in its closed position as shown in FIG. 2. Shaft 80 is received telescopically within a tubular extension 82 (FIG. 6) which is pivotally mounted by a pin 84 to the underside of the adjacent U-shaped member 50 (FIG. 8A). Thus, each member 75, shaft 80 and extension 82 are coupled at the ends thereof to bed 79 and member 50, respectively so as to stabilize the panels 22 and 24 and to allow the panels 22 and 24 to be raised as a unit to its full sideboard position with the removable support bar 90 raised and inserted in sideboard supports 75. FIG. 8B shows the same structure as FIG. 8A except that FIG. 8B shows a hinge structure for right-hand panels 26 and 28. Since the structures of FIGS. 8A and 8B are the same, the panels 26 and 28 can be moved from their closed positions as shown in FIG. 2 into their full sideboard positions shown in FIG. 6. Moreover, FIG. 6 shows the removable support bar 90 (FIG. 6) raised and inserted in the sideboard support 75 in the same manner as that described above with respect to the left-hand panels 22 and 24. A pair of cross bars 90 having end legs 92 (FIGS. 6 and 7) can be provided for the open upper ends of removable support bars 82 as shown in FIG. 6 and 7. The closure 10 has a hasp 96 (FIG. 1) which interconnects panels 24 and 26. Before panel 24 can be shifted into an open position to open the corresponding space 63a, the hasp 96 must be released or unlocked. In operation, the panels of assembly 10 typically are in closed positions shown in FIG. 2 in which each shaft 80 and its extension 82 are generally horizontal and extend laterally from the corresponding upright member 75 in underlying relationship to the corresponding member 46. 
Thus, there is a substantially maximum amount of space in the bed of the truck in underlying relationship to the panels to haul all kinds of goods. When it is decided to open or gain access to one side space, such as space 63 (FIG. 4) of the bed, the corresponding panel, such as panel 22, typically is unlocked, if it is locked at all. Generally, the outer panels 22 and 28 will each have a three point latch of the type shown in FIG. 5 wherein a pair of rods 100 are coupled to a rotating member 102 having an axis of rotation 104 and a handle 106 which is accessible to the user at the side of the closure 10 as shown in FIG. 4. The outer ends of rods extend through holes 108 in subframe members 46 (FIG. 9) so that, to unlock the corresponding panel, such as panel 22, a lock on handle 106 is unlocked, following which the handle 106 is rotated to retract the ends of the rods 100 from holes 108 (FIG. 5). Then, the adjacent panel 22 or 28 can be pivoted into its upright position as shown in FIG. 3 to gain access to the corresponding space 63 or 63a of the bed of the truck. FIG. 3 shows the open portions of the bed of the truck when panels 24 and 26 are in their open positions. FIG. 8 shows the front of the vehicle bed near the pickup cab, there being a gas spring 93 extending downwardly and inwardly between the extension 82 and a crosspiece 89, a feature which is not objectionable because there is no space available adjacent to the cab and rearwardly of the cab as in the case of the rear portion of the truck bed in which gas springs 65 are used. With the addition of sliding side panels or brackets 150 (FIG. 6A) to panels 24 and 26, sideboard height can easily be increased by 50% for hauling or extra height for temporary canvas covered camper shell. It is possible to still maintain excellent strength due to rigid design of subframe and folding brackets. The principles of the present application can be applicable to a fixed object in the form of a cabinet of metal or the like, such as an electrical cabinet for containing high voltage electrical equipment. Such a cabinet is shown in FIGS. 10, 13 and 14 in which the cabinet 120 includes a housing 122 having a top 124, a bottom 126, a rear wall 128, a pair of side walls 129 and four doors 130, 132, 134 and 136 as shown in FIGS. 10, 13 and 14. The doors are provided with pivot structure of the type shown and described with respect to FIG. 4, 8A, 8B and 9. To this end, the bottoms and the top portions of the doors will be provided with the structure shown in FIG. 11 including hinge pins 38, hinge pins 58, and optionally a gas spring 65. The doors will be free to open in the manner shown in FIGS. 10, 13 and 14. Either or both left-hand doors 130 and 132 can be opened to gain access to the respective space covered by these doors normally. FIG. 13 shows the inner doors 132 and 134 which can be opened as the outer doors 130 and 136 remain closed. FIG. 14 shows doors 130, 132, 134 and 136 in their full open or side board positions. In such a case, the entire opening of the housing 122 is accessible to a workman. FIG. 12 shows a three-point latching system similar to the system of FIG. 5. In operation, the doors are opened or closed in the same manner as that mentioned above with respect to panels 22, 24, 26 and 28 except that doors 130, 132, 134 and 136 are pivotal on generally vertical hinge axes. However, the trough or gutter features of the present invention are still available for use with the doors of the cabinets shown in FIGS. 10, 13 and 14. 
Any rain striking the top or front panels faces of the cabinet when the doors are closed will be channeled downwardly to the ground or to a trough which channels water away from the cabinet. The cover of the present invention can be used on a variety of enclosures. For example, the cover can be used with hydraulic control panels, storage cabinets, and any enclosure that has limited frontal access space or that would be desirable to access from left, right and center but still be able to open for 100% access. The sideboard position accessibility is gained simply by opening covers on either side of the vehicle.
Donated food is accepted at our warehouses in Newark and Milford weekdays between 8:30 a.m. and 4:00 p.m.
Newark Headquarters: 222 Lake Drive, Newark, DE 19702. We have a contact-free donation drop-off zone located by the barn on the side of our building. You may park in the cul-de-sac. If you have a perishable donation, please bring it around back. Park in the spots by the loading ramp and someone will come out to your car. (If it's past 3:00 p.m., please knock on the glass door by the ramp.)
Milford Branch: 1040 Mattlind Way, Milford, DE 19963. Please enter the door on the side of the building that says donation drop-offs.
Please note, the Food Bank of Delaware cannot accept hard candy, lollipops, soda in cans and bottles, chocolate bars or pieces, gum and soft candy such as marshmallows, caramels, taffy, licorice and gummy items; savory snacks such as chips, puffed cheese snacks and pork rinds; and sweet snacks including cakes, cookies, donuts, ice cream, pastries and popsicles.
https://www.fbd.org/drop-off-a-food-donation/
Technicians of the Company will get an average salary increase of Rs 25,200 per month. According to a wage settlement agreement entered into by Hyundai Motor India with the workers of its Chennai plant, the technicians will receive an average salary hike of Rs 25,200 per month, spread over a period of three years. The settlement will be implemented with retrospective effect from April 2018 and will stay effective until March 2021. The technicians will get 55 per cent of the hiked salary in the first year, 25 per cent in the second year and 20 per cent in the third year. The agreement was signed between the Hyundai management and the United Union of Hyundai Employees (UUHE). In other words, every month the technicians will get a raise of Rs 13,860 in salary in the first year, Rs 6,300 in the second year and Rs 5,040 in the third year. Hyundai Motor exports its cars to more than 91 countries worldwide, including Africa, the Middle East, Latin America and Asia Pacific. Earlier this year, workers of Hyundai Motor's Chennai plant had held a day's token hunger strike in protest of delayed salary disbursement and inconclusive wage settlement discussions with the management. The permanent workers, under the recognised union, had even abstained from overtime operations, accusing the Company of using trainees to do the work of the skilled and permanent workers. The management, of course, denied these allegations, asserting that it only allowed skilled workers on the production lines, and that trainees were not onboarded without a year's intensive training and supervision by seniors. At the Company's Sriperumbudur plant, there are said to be more than 2,200 permanent workers, approx. 3,500 trainees and another 5,000 contract workers.
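The staggered payout described above is straightforward to verify. The short Python sketch below simply splits the quoted Rs 25,200 monthly increase by the 55/25/20 per cent schedule and reproduces the per-year figures given in the article.

```python
# Split the quoted average monthly hike across the three-year schedule.
total_hike = 25_200          # Rs per month, as quoted
schedule = {"year 1": 0.55, "year 2": 0.25, "year 3": 0.20}

for year, share in schedule.items():
    print(f"{year}: Rs {total_hike * share:,.0f} per month")
# year 1: Rs 13,860 | year 2: Rs 6,300 | year 3: Rs 5,040 (matches the article)
```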
https://www.hrkatha.com/news/negotiations/hyundai-signs-wage-settlement-agreement/
PROBLEM TO BE SOLVED: To provide a negative electrode mixture which enables an improvement in cycle performance, and a lithium ion battery having a high cycle performance. SOLUTION: A negative electrode mixture comprises: a negative electrode active material; and a halogen-containing sulfide solid electrolyte glass ceramic. The negative electrode active material is graphite. The halogen-containing sulfide solid electrolyte glass ceramic is expressed by the following formula (1): L_aM_bP_cS_dX_eO_f (1) (where L represents an alkali metal; M represents B, Al, Si, Ge, As, Se, Sn, Sb, Te, Pb, Bi, or a combination thereof; X represents I, Cl, Br, F, or a combination thereof; and a to f satisfy 0<a≤12, 0≤b≤0.2, c=1, 0<d≤9, 0<e≤9, 0≤f≤9, respectively). COPYRIGHT: (C)2015,JPO&INPIT
This paper demonstrates lasing of the whispering gallery modes in polymer coated optofluidic capillaries and their application to refractive index sensing. The laser gain medium used here is fluorescent Nile Red dye, which is embedded inside the high refractive index polymer coating. We investigate the refractometric sensing properties of these devices for different coating thicknesses, revealing that the high Q factors required to achieve low lasing thresholds can only be realized for relatively thick polymer coatings (in this case ≥ 800 nm). Lasing capillaries therefore tend to have a lower refractive index sensitivity, compared to non-lasing capillaries which can have a thinner polymer coating, due to the stronger WGM confinement within the polymer layer. However, we find that the large improvement in signal-to-noise ratio realized for lasing capillaries more than compensates for the decreased sensitivity and results in an order-of-magnitude improvement in the detection limit for refractive index sensing. © 2016 Optical Society of America

1. Introduction
Whispering gallery modes (WGMs) are resonances that occur when light is trapped by total internal reflection (TIR) within a structure presenting at least one axis of revolution. Common WGM resonators include spheres [1,2], disks and toroids [3,4], optical fibers [5–7], and capillaries [8,9]. WGM resonators have attracted considerable interest within the sensing community, largely due to their potential use in refractometry-based fluid sensing and label-free biosensing applications. While there are many examples of remarkable sensing performance using such optical resonators [3,4,11], the interrogation strategies typically used are cumbersome and impractical. The typical approach, involving evanescent coupling to excite and probe the WGMs, relies on the use of a tapered optical fiber or prism that is phase-matched to the resonator. In such a setup the coupling efficiency is, however, never perfectly stable, and the coupler can cause undesired resonance wavelength shifts or Q-spoiling due to slight movements and vibrations. In contrast, fluorescence-based resonators do not have the same practical limitations. Rather than using a waveguide or prism to evanescently couple light into and out of the device, a fluorophore is used to excite the resonances indirectly. This permits remote excitation and detection of a WGM-modulated fluorescence spectrum. The performance of fluorescence-based microresonators is, however, in general far worse than for their passive counterparts. This is due in part to the significantly lower achievable Q factors resulting from absorption, scattering, and the intrinsically broad linewidth of the emitters, as well as intrinsic resonator asphericity. One particular strategy that can improve the performance is to induce lasing of the WGMs. It has been shown previously that lasing of the WGMs in dye-doped microspheres enables a twofold increase in Q factor, resulting in improved detection limits for refractive index sensing compared with microspheres operated below the lasing threshold. Among the different resonator geometries supporting WGMs, capillaries have the unique property of having the evanescent fields extend into and sample the medium inside the resonator, which is particularly interesting because the resonator itself serves as a microfluidic channel.
Fluorescent capillary resonators have previously been demonstrated [9, 16–19], in essence building on the considerable body of literature on fluorescent microspheres going back many years. In this context, the gain medium can be introduced through a thin high-refractive-index coating deposited onto the inner surface of the capillary. Examples of such coatings include dye-doped polymers and semiconductor quantum dots. The resonator itself then provides the optofluidic channel through which a liquid solution can be delivered and sampled simultaneously. Previously, lasing has been exploited in microspheres to decrease the resonance linewidths (i.e. increase the Q factors) and therefore improve the detection limit, or to enable alternative sensing modalities involving, for instance, mode-splitting for the detection of nanoparticles. The first demonstration of any type of lasing WGMs in capillaries was performed by Knight et al., in which a thin-walled capillary was simply filled with a rhodamine 6G solution. Subsequently, several variations of the same concept were reported [25–27], although filling the entire capillary with a gain medium is not readily compatible with the main purpose of using these structures as optofluidic refractive index sensors. Although lasing has previously been shown in high refractive index polymer coated capillaries [28–30], these demonstrations have been restricted to the use of very high refractive index polymer (n>1.67) and thick polymer coatings (1.9 µm), making these lasing microcapillaries unsuitable for refractive index sensing applications. Here we investigate the lasing behavior of optofluidic capillary resonators and their application to refractive index sensing. We demonstrate the first channel-coated lasing microcapillary suitable for refractive index sensing and investigate the conditions under which lasing occurs. We analyze the refractometric sensing capacity of these devices below and above the lasing threshold, determining whether lasing provides a significant benefit to the overall performance.

2. Materials and methods
2.1 Preparation of active optofluidic resonators
Commercial silica capillaries (Beckman Coulter) with inner and outer diameters of 50 and 360 µm, respectively, were processed as follows. First, the polyimide coating was removed from the outer surface of the capillary using a blowtorch and the residue was wiped off with ethanol. Different solutions of poly(benzyl methacrylate) (PBZMA; Polysciences) in tetrahydrofuran (THF) were then prepared with various concentrations (25, 50, 75 and 100 mg/mL) for coating the inner surface of the capillary. Another solution was also prepared, consisting of a fluorescent dye (Nile Red) dissolved in THF up to its solubility limit. The dye was the gain medium to be doped within the deposited polymer coating. This particular organic dye was chosen for its excellent lasing properties and convenient emission spectrum between 590 and 630 nm. Subsequently, 200 µL of each polymer solution was combined with 10 µL of the gain medium solution, and each resulting mixture was allowed to fill a capillary by capillary forces. Once the capillaries were filled with the fluorescent polymer solutions described above, they were placed in an oven at 70°C for 1 h, allowing the solvent to evaporate. During this process, as the solvent evaporates, the meniscus leaves behind a thin layer of polymer on the channel surface as it retreats down the length of the capillary.
The thickness t of the resulting polymer layer was then measured by cleaving several sections of the given polymer coated capillary and examining them end-on using a field emission scanning electron microscope (FEI Quanta 450 FEG, 10 kV) in secondary electron mode. Figure 1 shows typical SEM images for the four different polymer coated capillaries. The SEM images permit the characterization of the relationship between the thickness of the deposited polymer coating and the polymer solution concentration (Table 1). The thickness proved highly controllable, with a near-linear relationship between polymer solution concentration and coating thickness. The uniformity of the layer along the length of the capillary, however, was impossible to assess due to the enclosed nature of the resonator.

Fig. 1

Once the relationship between the polymer solution concentration and the resulting thickness of the deposited layer was established, a new batch of capillaries was prepared, this time adjusting the amount of dye in each sample to match the optimum lasing concentration of 5 µg/mL. This concentration enables one to reach the maximum gain of the fluorescent dye, thereby reducing the lasing power threshold. Increasing the dye concentration beyond this point results in self-quenching due to non-radiative energy transfer between nearby dye molecules.

2.2 Optical setup

The optical setup shown in Fig. 2 was used for measuring the WGM spectra of the dye-doped polymer coated capillaries. The excitation of the WGMs was achieved using either a 532 nm CW laser (JDS Uniphase) or, for the case of characterizing the lasing behavior, a 532 nm frequency-doubled Nd:YAG laser (Quanta-Ray INDI, Spectra Physics, 10 Hz, 5 ns pulses). Neutral density filters were used to adjust the excitation power in the latter case, while an energy meter (Gentec-EO M link) was used for measuring the absolute pulse energy delivered to the capillary. A 10× microscope objective was used for focusing the laser (spot size ~400 µm in diameter) onto the capillary, which was mounted on a 3-axis translation stage (Thorlabs NanoMax). A second microscope objective (20×) was used to collect the WGM-modulated fluorescence signal from the capillaries. A longpass filter (λ = 550 nm) was used to remove the scattered pump laser light, and an analyzer was used for selecting the WGM polarization. Here the TE polarization (electric field parallel to the capillary axis) was selected to increase mode visibility, since the TE modes tend to exhibit higher Q factors than the TM modes, and hence also have lower lasing thresholds. We note, however, that the refractive index sensitivity of the TE modes is marginally smaller than that of the TM modes.

Fig. 2

The fluorescence signal was spectrally resolved using a UV-VIS spectrometer (Horiba iHR 550) equipped with a 1200 lines/mm grating and a 2048-pixel CCD camera (Horiba Synapse). This system has a nominal spectral resolution of 18 pm. One end of the capillary was then connected to a syringe pump, allowing liquids with different refractive indices to be pumped through the capillary for refractometric measurements. The solutions used were different concentrations of glucose in water, since this system is simple to work with and its refractive index as a function of concentration is well known. Fluid that flowed out of the ejection end of the capillary was simply allowed to collect inside a small container.
2.3 Optical constants of PBZMA

The refractive index of PBZMA is nominally 1.568 at a wavelength of 589.3 nm. Polymers can, however, have significant dispersion across the broad gain bands associated with fluorescent dyes. Since no data are available in the literature for the wavelength dependence of the optical constants of PBZMA, they had to be measured using variable angle spectroscopic ellipsometry (VASE). First, a polymer film was fabricated by spin coating PBZMA in either toluene or tetrahydrofuran (THF) at a rate of 6100 rev/minute onto a standard silicon wafer. The reflection spectra were measured on a J.A. Woollam model M-2000V spectroscopic ellipsometer, and the measured ellipsometric ψ and Δ values (relating to the reflected amplitude and phase) were least-squares fit using a standard Fresnel model to extract the optical constants, Cauchy coefficients, and the thickness of the polymer. The PBZMA films deposited from toluene were found to be smooth over large areas, as inferred from the uniformity of the film interference colors over a range of ~1 cm2, whereas those deposited from THF had observable surface ripples on a millimeter scale. The smoother polymer film yielded smaller uncertainties in the values of the Cauchy coefficients in the model fit. The resulting optical constants for the polymer film are shown in Fig. 3, with the coefficients of the Cauchy model m(λ) = A + B/λ² + C/λ⁴ found to be A = 1.5420 ± 0.0003, B = 0.00628 ± 0.00003 and C = 0.000240 ± 0.000004, where m is the refractive index. The refractive index obtained at the reference wavelength of λ = 589.3 nm was found to be slightly lower than the one quoted by the manufacturer (1.563 vs. 1.568, respectively).

Fig. 3

3. Results and discussion

3.1 Whispering gallery mode structure

To model the whispering gallery modes of the coated capillaries, estimate the Q factor and refractometric sensitivity, and verify the experimental observations, the electric field profiles of the modes were first modeled following the method described in Ref. [35], which is based on an earlier theory for spherical resonators. Accordingly, the refractive index profile of the coated capillary is taken to be piecewise constant across the fluid channel, the polymer coating, the silica capillary wall, and the surrounding medium (Eq. (1)), and the field in each region is written in terms of cylindrical Bessel and Hankel functions (Eq. (2)). In Eq. (2), the definitions are as usual: Jl(R) is the cylindrical Bessel function of the first kind and Hl(1,2)(R) are the cylindrical Hankel functions given by Hl(1)(R) = Jl(R) + iYl(R) and Hl(2)(R) = Jl(R) − iYl(R), respectively, with Yl(R) being the cylindrical Bessel function of the second kind and the argument R = mk0r. The relative field intensity is found by setting I proportional to |ES|². Setting the electric field and its derivatives equal at the boundaries, as appropriate for the TE polarization (E field parallel to the cylinder axis), one obtains Eqs. (3) and (4). By numerically solving Eqs. (3) and (4) for the complex roots k0, the quality factor is then simply Q = Re[k0]/(2 Im[k0]) and the resonance wavelength is λ0 = 2π/Re[k0]. The electric field intensity profiles are calculated by first solving for the constants Al and Dl that satisfy the continuous boundary conditions for the TE polarization. The results are shown in Fig. 4 for 800, 600, 400 and 200 nm polymer coatings. Each of the intensity profiles shown is for the resonance closest to 610 nm, in accordance with the experimental results in Fig. 5. Finally, the refractive index sensitivity of the capillary channel is given by Eq. (5), obtained from the fraction of the mode intensity residing in the channel, found by taking the field of Eq. (2) and squaring the result. As shown later in Table 1, the Q factors computed using this model agree reasonably closely with the experimental results for the non-lasing capillaries. Moreover, the free spectral ranges predicted by Eqs. (1)–(5) are found to be virtually identical to the measured values.
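The polymer refractive index entering this mode model comes from the Cauchy fit of Section 2.3. As a quick numerical check (not part of the original paper), the short Python sketch below evaluates that fit, assuming the conventional choice that λ is expressed in micrometres (so B is in µm² and C in µm⁴); with the quoted coefficients it returns approximately 1.562 at 589.3 nm, consistent with the stated value of 1.563 to within rounding.

def pbzma_index(wavelength_um, A=1.5420, B=0.00628, C=0.000240):
    # Cauchy dispersion m(lambda) = A + B/lambda^2 + C/lambda^4, lambda in micrometres
    return A + B / wavelength_um**2 + C / wavelength_um**4

print(pbzma_index(0.5893))   # ~1.562, close to the quoted 1.563 at 589.3 nm
print(pbzma_index(0.610))    # ~1.561 near the WGM emission wavelength of ~610 nm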
Fig. 4

Fig. 5

3.2 WGMs in polymer coated capillaries

The capillaries were then filled with Millipore water using the connected syringe pump, and the fluorescent WGMs of each capillary were excited with the 532 nm CW laser as pump source (~1 mW, 0.1 s acquisition). Typical WGM spectra for the capillaries with different coating thicknesses are shown in Fig. 5. The thinnest coating (200 nm) exhibited only background fluorescence and no modes were observed, although the numerical calculation of the electric field shown in Fig. 4(D) predicts the existence of very low-Q WGMs. At a polymer thickness of 400 nm, the modes were clearly visible (Q ~460) despite the considerable background noise. The WGM signal-to-noise ratio and Q factor increased further for the 600 and 800 nm polymer layer thicknesses. For the 800 nm polymer coating a Q factor as high as ~2800 was observed. Additionally, a periodic modulation of the WGM spectrum was apparent for the thicker films, in which every second mode was distinctly more intense. This effect probably arises from interference associated with reflections from the outer walls of the capillary.

3.3 Lasing WGMs in coated capillaries

The lasing capability of the capillaries supporting WGMs was investigated by replacing the CW pump laser with the frequency-doubled Nd:YAG laser. The WGM spectra of the different capillaries were then acquired as a function of the incident pulse energy. Figures 6(A)–6(C) show the typical behavior observed as the pump energy is increased. As reported for other fluorescent resonators, such as microspheres, a clear transition or threshold exists at which some modes start lasing. The fluorescence intensity of the resonance modes and their Q factors exhibit two different regimes depending on the pulse energy, i.e., the fluorescence regime and the stimulated emission regime, as shown in Fig. 6(D). For the various capillaries tested, only those with a polymer coating thickness of 600 or 800 nm exhibited this lasing behavior, with lasing thresholds of 16 ± 2 µJ and 1.2 ± 0.1 µJ, respectively. The large error on the lasing threshold is predominantly due to the fluctuation of the laser pulse energy (±11%).

Fig. 6

For both of the lasing capillaries, significant increases in the Q factor were observed upon transitioning into the lasing regime. The 600 nm thick polymer coated capillaries exhibited a Q factor of 1800 below and 4000 above the lasing threshold, while those with the 800 nm coating exhibited a Q factor of 2800 below and 6000 above the lasing threshold. In both cases the increase in Q factor was by approximately a factor of 2, which is comparable to values for lasing WGMs in dye-doped microspheres. Surprisingly, the resonance position was also found to vary with the pump energy, as seen in Fig. 6(E), although this behavior has not been reported for other polymer-based WGM lasers (the vast majority being made of polystyrene). Whereas the thermo-optic constants for polystyrene can be found in the literature, the corresponding values are not available for PBZMA. Thus, whilst an intensity-induced redshift is probably consistent with heating effects in the polymer, capillary, and channel region, the lack of any data on PBZMA makes an unambiguous interpretation of this phenomenon difficult. This behavior highlights the need to keep the pump energy constant to avoid any drift in the resonance position in refractive index sensing applications.
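The threshold values quoted in this section are typically extracted from the kink in the output-versus-pump-energy curve of Fig. 6(D). A minimal sketch of one common way to do this, a straight-line fit to the above-threshold points extrapolated back to zero output, is given below; the arrays are purely illustrative stand-ins rather than the measured data, and the 1.3 µJ split point is likewise an assumption made only for this toy data set.

import numpy as np

# Illustrative (not measured) pump energies (uJ) and peak output intensities (a.u.)
pump = np.array([0.4, 0.8, 1.0, 1.2, 1.6, 2.0, 2.5, 3.0])
out = np.array([0.02, 0.04, 0.05, 0.06, 0.40, 0.85, 1.40, 1.95])

above = pump > 1.3                      # crude split into the stimulated-emission regime
slope, intercept = np.polyfit(pump[above], out[above], 1)
threshold = -intercept / slope          # pump energy where the above-threshold line reaches zero
print(f"estimated lasing threshold ~ {threshold:.2f} uJ")   # ~1.2 uJ for this toy data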
3.4 Refractive index sensing with lasing and non-lasing capillaries

Refractive index sensing measurements were performed with both the 800 nm thick polymer coated lasing capillary and the 400 nm polymer coated non-lasing capillary. Although the 600 nm polymer coated capillary also exhibited lasing behavior, it proved impossible to sustain lasing over the timeframe required to complete the refractive index sensing measurements, due to rapid photobleaching of the gain medium at the higher pump power required. Increasing the pump energy further was avoided, as this affects the resonance wavelength as noted above. We therefore limited the refractive index sensing measurements to the following two scenarios: (i) the non-lasing capillary with presumably the highest refractive index sensitivity (i.e., the 400 nm thick polymer coating), and (ii) the capillary with the highest Q factor and stable lasing operation (i.e., the 800 nm thick polymer coating). To measure the refractometric sensitivity, solutions of glucose in water at different concentrations were pumped into each capillary. The typical behavior of two consecutive first order resonances (see the complete spectrum in Fig. 7(C)) of the lasing capillary with the 800 nm thick polymer coating is shown in Figs. 7(A) and 7(B) as the concentration of glucose increases. The central positions of the resonances were determined by fitting them with Lorentzian functions. The resonance positions are shown in Fig. 7(D) as a function of the refractive index of the fluid inside the capillary. Under these conditions, a refractive index sensitivity of 2.9 ± 0.4 nm/RIU was measured with the lasing capillary for the most intense peak, and the second, lower-intensity mode considered (e.g., Fig. 7(B)) exhibited a similar sensitivity (3.0 ± 0.6 nm/RIU). The sensitivities were also measured below the lasing threshold and were found to agree within the errors stated above. The non-lasing capillary (400 nm polymer coating) tested under the same conditions had a refractive index sensitivity of 23.0 ± 0.8 nm/RIU. Note that the experimental results consistently show a sensitivity about a factor of 2 higher than the predicted values. While the origin of this discrepancy remains uncertain, it has been observed before in non-lasing fluorescent capillaries, where it was tentatively attributed to interference effects caused by reflection of a portion of the emitted fluorescence at the external interface. Alternatively, a fluctuation of the polymer coating thickness inside the capillary, which is inherently difficult to measure, could be partially responsible for the higher-than-expected refractive index sensitivities.

Fig. 7
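The resonance positions and sensitivities reported above follow from two routine fitting steps: a Lorentzian fit to each resonance to locate its centre, and a linear fit of that centre wavelength against the channel refractive index. The Python sketch below illustrates both steps; the function and variable names are not from the paper, and real spectra would replace the arrays passed in.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(wl, wl0, fwhm, amp, offset):
    # Lorentzian line shape used to locate the resonance centre wl0
    return offset + amp * (fwhm / 2)**2 / ((wl - wl0)**2 + (fwhm / 2)**2)

def resonance_centre(wl, counts):
    # Fit a single resonance and return its centre wavelength (nm)
    p0 = [wl[np.argmax(counts)], 0.2, counts.max() - counts.min(), counts.min()]
    popt, _ = curve_fit(lorentzian, wl, counts, p0=p0)
    return popt[0]

def sensitivity(ref_indices, centres):
    # Refractive index sensitivity S = d(lambda)/d(n), in nm/RIU, from a linear fit
    slope, _ = np.polyfit(ref_indices, centres, 1)
    return slope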
It becomes apparent that there is a tradeoff in achieving lasing in optofluidic capillaries. A thicker polymer coating is required to sustain the high Q factors necessary to achieve lasing and to minimize photobleaching by ensuring a low lasing threshold. The thicker polymer coating, however, reduces the refractive index sensitivity. The sensitivity by itself can nevertheless be a rather misleading parameter for evaluating the performance of a given resonator. The detection limit (DL), which represents the smallest refractive index change that is measurable, is more relevant. The detection limit of the resonator is given by the ratio of the sensor resolution (R) and the resonator sensitivity (S), DL = R/S. Several formalisms have been developed for calculating the (wavelength shift) resolution R. For example, White et al. calculate the resolution based on the accuracy with which resonance maxima can be determined, whereas Silverstone et al. base the resolution on model fits and Fourier analysis. Using the approach given by White et al., the sensor resolution is taken as 3σ of the noise in the system and can be approximated from the resonance linewidth and the signal-to-noise ratio (SNR) [40]. We find that the 800 nm polymer coated lasing capillaries (SNR = 50 dB) have a wavelength shift resolution approximately 127 times finer than the 400 nm polymer coated non-lasing capillaries (SNR = 10 dB). The refractive index sensitivity of the lasing capillary is, however, 8 times lower than that of the non-lasing capillary. Despite this, the detection limit is in fact an order of magnitude better (a 13-fold improvement) for the lasing capillary (1.2 × 10−3 RIU for the 800 nm lasing capillary versus 1.6 × 10−2 RIU for the 400 nm non-lasing capillary). Alternatively, using the formalism of Silverstone et al., the wavelength shift resolution is given by Eq. (8); applying this resolution and factoring in the lower sensitivity, one obtains an overall detection limit 16 times smaller than for the non-lasing capillaries. Both formalisms for calculating the wavelength shift resolution therefore indicate significantly improved detection limits for the lasing capillary, by around an order of magnitude. The improvement in wavelength shift resolution outweighs the decreased sensitivity imposed by the thicker polymer coatings needed to achieve stable lasing in the microcapillaries.

4. Conclusion

In this paper we demonstrated, for the first time, lasing of the whispering gallery modes in polymer coated optofluidic capillaries for refractive index sensing. The lasing operation requires the introduction of a gain medium such as Nile Red dye into the high-refractive-index polymer coating. The coating also has to be of sufficient thickness to support modes with high Q factors, permitting lasing thresholds low enough to avoid the issue of photobleaching. The refractive index sensitivities of optofluidic resonators with different polymer coating thicknesses were characterized, revealing that high-Q lasing capillaries (requiring thicker coatings) exhibit significantly lower refractive index sensitivities. However, the detection limit, given by the ratio of sensor resolution and sensitivity, improves significantly upon lasing. This is mainly because of the dramatic increase in signal-to-noise ratio and Q factor when the sensors are pumped above their lasing thresholds. While other hollow resonators, such as microcapillaries and microbubbles, have been used for refractive index sensing with even lower detection limits [42, 43], our platform does not require the use of a fiber taper for probing the WGM resonances. Instead our approach relies on the gain medium inside the resonator for free-space excitation and collection of the WGMs, making the setup simpler to use and implement. Further improvements could be made to the sensing properties by using a more efficient gain medium, which would allow for a reduction in the lasing threshold or a relaxation of the Q factor requirements. The latter would make it possible to achieve lasing with thinner polymer coatings.
Inducing lasing in a 400 nm thick polymer coating would, for example, allow a detection limit in the range of 10−5 RIU to be reached.

Acknowledgments

The authors acknowledge the support of T. M. M.'s ARC Georgina Sweet Laureate Fellowship and a University of Adelaide Priority Partner Grant. The authors also thank Dr. Georgios Tsiminis for his technical support and Dr. Kristopher J. Rowland for his advice. NSERC and AITF provided support for this work.

References and links

1. F. Vollmer and S. Roy, "Optical resonator based biomolecular sensors and logic devices," J. Indian Inst. Sci. 92, 233–251 (2012).
2. V. Lefevre-Seguin, "Whispering gallery mode lasers with doped silica microspheres," Opt. Mater. 11(2-3), 153–165 (1999). [CrossRef]
3. L. He, S. K. Ozdemir, J. Zhu, W. Kim, and L. Yang, "Detecting single viruses and nanoparticles using whispering gallery microlasers," Nat. Nanotechnol. 6(7), 428–432 (2011). [CrossRef] [PubMed]
4. J. Zhu, S. K. Ozdemir, Y.-F. Xiao, L. Li, L. He, D.-R. Chen, and L. Yang, "On-chip single nanoparticle detection and sizing by mode splitting in an ultrahigh-Q microresonator," Nat. Photonics 4(1), 46–49 (2010). [CrossRef]
5. J. Wang, T. Zhan, G. Huang, P. K. Chu, and Y. Mei, "Optical microcavities with tubular geometry: properties and applications," Laser Photonics Rev. 8(4), 521–547 (2014). [CrossRef]
6. C. M. Hessel, M. A. Summers, A. Meldrum, M. Malac, and J. G. C. Veinot, "Direct patterning, conformal coating, and erbium doping of luminescent nc-Si/SiO2 thin films from solution processable hydrogen silsesquioxane," Adv. Mater. 19(21), 3513–3516 (2007). [CrossRef]
7. P. Bianucci, J. R. Rodríguez, F. C. Lenz, J. G. C. Veinot, and A. Meldrum, "Mode structure in the luminescence of Si-nc in cylindrical microcavities," Physica E 41(6), 1107–1110 (2009). [CrossRef]
8. I. White, H. Zhu, J. Suter, X. Fan, and M. Zourob, "Label-free detection with the liquid core optical ring resonator sensing platform," in Biosensors and Biodetection, A. Rasooly and K. Herold, eds. (Humana, 2009), pp. 139–165.
9. C. P. K. Manchee, V. Zamora, J. W. Silverstone, J. G. C. Veinot, and A. Meldrum, "Refractometric sensing with fluorescent-core microcapillaries," Opt. Express 19(22), 21540–21551 (2011). [CrossRef] [PubMed]
10. M. R. Foreman, J. D. Swaim, and F. Vollmer, "Whispering gallery mode sensors," Adv. Opt. Photonics 7(2), 168–240 (2015). [PubMed]
11. M. D. Baaske, M. R. Foreman, and F. Vollmer, "Single-molecule nucleic acid interactions monitored on a label-free microcavity biosensor platform," Nat. Nanotechnol. 9(11), 933–939 (2014). [CrossRef] [PubMed]
12. Z. Guo, H. Quan, and S. Pau, "Near-field gap effects on small microcavity whispering-gallery mode resonators," J. Phys. D Appl. Phys. 39(24), 5133–5136 (2006). [CrossRef]
13. Y. Zhi, J. Valenta, and A. Meldrum, "Structure of whispering gallery mode spectrum of microspheres coated with fluorescent silicon quantum dots," J. Opt. Soc. Am. B 30(11), 3079–3085 (2013). [CrossRef]
14. N. Riesen, T. Reynolds, A. François, M. R. Henderson, and T. M. Monro, "Q-factor limits for far-field detection of whispering gallery modes in active microspheres," Opt. Express 23(22), 28896–28904 (2015). [CrossRef] [PubMed]
15. A. François, T. Reynolds, and T. M. Monro, "A fiber-tip label-free biological sensing platform: A practical approach toward in-vivo sensing," Sensors (Basel) 15(1), 1168–1181 (2015). [CrossRef] [PubMed]
16. G. Huang, V. A. Bolaños Quiñones, F. Ding, S. Kiravittaya, Y. Mei, and O. G.
Schmidt, "Rolled-up optical microcavities with subwavelength wall thicknesses for enhanced liquid sensing applications," ACS Nano 4(6), 3123–3130 (2010). [CrossRef] [PubMed]
17. S. Lane, J. Chan, T. Thiessen, and A. Meldrum, "Whispering gallery mode structure and refractometric sensitivity of fluorescent capillary-type sensors," Sens. Actuators B Chem. 190, 752–759 (2014).
https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-24-12-12466&id=344147
3 EUR to GYD

3 EUR = 625.81 GYD. €1 = $208.60, -$0.36 (-0.17%) at the rate on 2022-10-05. The cost of 3 Euros in Guyana Dollars today is $625.81 according to the "Open Exchange Rates"; compared to yesterday, the exchange rate decreased by 0.17% (by -$0.36). The chart shows the exchange rate of the Euro against the Guyana Dollar, together with a table of the change in value as a percentage over the day, week, month and year.
https://exchangerate.guru/eur/gyd/3/
03/04/2019 / By Mary Miller If a thief were to break into your home right now, how protected would your valuables be? Do you simply place them in your bedside drawer at night, or do you lock them away in a sturdy safe? Most people tend to place their valuables in one of these locations, which makes them the areas that thieves would most likely target first. A locked drawer can easily be picked, while a locked safe can be stolen entirely and forcefully opened somewhere else. Such hiding spots attract attention, but if you were to hide your belongings someplace less interesting, a thief would be more likely to overlook them than if they were stored in a secure but conspicuous place. Keep your valuable items safely hidden in plain sight by keeping them in these secret hiding places. (h/t to MDCreekmore.com). Rather than throwing away empty food cans, why not repurpose them as makeshift storage containers for jewelry, money, and other important items? First, buy a cheap can of whatever food item you don’t normally eat. You’re just after the can, not what’s inside it, so be sure to choose the can based on the size that you want. Next, you should remove the can’s label. Trace the label on a piece of firm but flexible cardboard and cut out the pattern. You should now have a piece of cardboard that is identical in size to the label. Then, take the can and carefully cut it in half with a hacksaw. Once you have emptied the can of its contents, you should wash it clean and place the piece of cardboard inside the bottom half of the can. This will serve as the inner support for your storage container. Fill the lower half of the can with your valuables, and cover it with the upper half. Seal the can with tape and glue the label back on. Keep your new container in your pantry, hidden alongside other cans. If you have an old vacuum cleaner that no longer works, you can use its bag as another convenient hiding place for important belongings. Of course, you should make sure to first thoroughly clean the bag before placing anything in, otherwise you might end up with dusty and grimy jewelry. If you don’t have an old or broken vacuum cleaner, you might be able to find a second-hand one at a thrift store. For convenient access, you will want a vacuum cleaner with a bag that opens up easily. If you have any family members living with you, be sure to inform them that the vacuum cleaner is not for actual cleaning use. (Related: Prepping tip: How to hide your valuables in plain sight.) This is a clever trick used in many movies, but despite its prevalence in pop culture and its common practice in real life, this hiding spot should not be underestimated. There is a reason why it works so well. If you have a large library of books at home, your hollowed-out book can easily blend in with the rest of your collection. A potential thief wouldn’t waste his time inspecting each and every book. The trick is to make the spine of your book look as uninteresting as possible. If you want to take it a step further, you can hollow out a thick stack of old magazines that have been tightly bound together using twine. A thief is even less likely to take any interest in such a mundane hiding place. Learn more ways to protect your valuables by going to Preparedness.news. Sources include:
https://homesteading.news/2019-03-04-hidden-in-plain-sight-secret-hiding-spaces-for-your-valuables.html
Tag: Nancy Biagini

After two meetings and hours of debate, the Santa Clara Planning Commission has made recommendations to the City Council concerning SB9. As discussed at the last Planning Commission meeting,...

A plan to help Santa Clara deal with the implementation of SB 9, the new state law that allows single-family homeowners to...

Santa Clara's Planning Commission has pushed the Climate Action Plan update to the City Council. The current Climate Action Plan was put...

What was supposed to be a routine item concerning a development proposal for two Santa Clara properties led to a larger discussion...

Thursday night the Santa Clara City Council dismissed City Manager Deanna Santana in a 4-2 vote, with Santa Clara Mayor Lisa Gillmor...

With no planning items on the Planning Commission's schedule, the commissioners used the Feb. 16 meeting to start looking at Santa Clara's...

Santa Clara's City Council will soon be asked to approve the Patrick Henry Drive Specific Plan. The City's Planning Commission voted on...

Santa Clara's Jan. 10 Planning Commission meeting dealt with the Patrick Henry Drive Specific Plan, state housing laws implemented in 2021 and...

City Politics

November 2018 Election Results: Santa Clara Mayor, City Clerk, Council Members Decided

Thousands of Santa Clara voters cast their vote in the November 2018 elections. As of Wednesday, Nov. 7, 100 percent of all...

Campaign donations have accelerated since the September campaign finance reports, with plenty of treats from real estate and union interests for mayoral...
https://www.svvoice.com/tag/nancy-biagini/
---
author:
- 'Daniel M. Kane'
title: 'Unary Subset-Sum is in Logspace'
---

Introduction
============

In this paper we consider the Unary Subset-Sum problem, which is defined as follows: given integers $m_1,\ldots,m_n$ and $B$ (written in unary), determine whether or not there exists an $S\subseteq [n]$ so that $\sum_{i\in S} m_i = B$ (note that for this problem the $m_i$ are often assumed to be non-negative). Let $C=|B| + \sum_{i=1}^n |m_i|+1$. This problem can be solved using a standard dynamic program using space $O(C)$ and time $O(Cn)$. The dynamic program makes fundamental use of this large space and it is interesting to ask whether this requirement can be removed. Unary Subset-Sum has been studied in small-space models of computation as early as 1980 in [@book], where it was shown to be in $NL$. Since then the problem was studied in [@comp], where Cho and Huynh devised a complexity class between $L$ and $NL$ containing Unary Subset-Sum, as supporting evidence that it is not $NL$-complete. The problem was listed again in [@overview], where it was stated to be open whether or not it is in $L$. It was recently shown in [@alternate] that this problem is in Logspace. Unfortunately, their algorithm is somewhat complicated. We provide a simple algorithm solving this problem in Logspace.

Our Algorithm
=============

The basic idea of our algorithm will be to make use of the generating function $\prod_{i=1}^n (1+x^{m_i}) = \sum_{S\subseteq[n]} x^{\sum_{i\in S} m_i}$ to compute the number of solutions to our problem modulo $p$ for a number of different primes $p$ (we show how to do this in Lemma \[mainLemma\]). Pseudocode for our algorithm is as follows:

$c:=0$\
$p := \textrm{NextPrime}(C)$\
$\textrm{While}(c \leq n)$\
    If $\sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \not\equiv 0 \pmod{p}$\
        Return True\
    $c:=c + \lfloor \log_2(p)\rfloor$\
    $p := \textrm{NextPrime}(p)$\
End While\
Return False

Space Complexity
----------------

There are several things that must be noted to show that this algorithm runs in Logspace. First we claim that $p$ is never more than polynomial in size. This is because standard facts about prime numbers imply that there are at least $n$ primes between $C$ and $\textrm{poly}(C,n)$, and each of these primes causes $c$ to increase by at least 1. We also note that $\sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i})$ can be computed modulo $p$ in Logspace. This is because we can just keep track of the value of $x$ and the current running total (modulo $p$), along with the space necessary to compute the next term. The product is computed again by keeping track of $i$ and the current running product (modulo $p$) and whatever is necessary to compute the next term. The exponents are computed in the obvious way. Finally, primality testing of poly-sized numbers can be done by repeated trial divisions in Logspace, and hence the NextPrime function can also be computed in Logspace.

Correctness
-----------

We now have to prove correctness of the algorithm. Let $A$ be the number of subsets $S\subseteq [n]$ so that $\sum_{i\in S} m_i = B$.

\[mainLemma\] Let $p$ be a prime number with $p> C$. Then $$\sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \equiv -A \pmod{p},$$ where again $A$ is the number of subsets $S\subseteq [n]$ so that $\sum_{i\in S} m_i = B$.
Note that $$x^{-B}\prod_{i=1}^n (1+x^{m_i})=\sum_{S\subseteq [n]} x^{\sum_{i\in S} m_i - B}.$$ The idea of our proof will be to interchange the order of summation and show that the terms for which $\sum_{i\in S} m_i \neq B$ cancel out. Notice that each exponent in this sum has absolute value less than $p-1$. Interchanging the sums on the right hand side, we find that $$\sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) = \sum_{S\subseteq [n]} \sum_{x=1}^{p-1}x^{\sum_{i\in S} m_i - B}.$$ We note that: $$\sum_{x=1}^{p-1} x^k \pmod{p} \equiv \begin{cases} -1 \ & \textrm{if} \ k \equiv 0 \pmod{p-1} \\ 0 \ & \textrm{else} \end{cases}.$$ If $k$ is a multiple of $p-1$, then all terms in the sum are 1 modulo $p$ and the result follows. Otherwise, we let $g$ be a primitive root mod $p$ and note that instead of summing over $x=1$ to $p-1$ we may sum over $x=g^\ell$ for $\ell=0$ to $p-2$. Then $$\sum_{x=1}^{p-1} x^k \equiv \sum_{\ell=0}^{p-2} g^{k\ell} \equiv \frac{1-g^{k(p-1)}}{1-g^k} \equiv \frac{1-1}{1-g^k} \equiv 0.$$ Hence $$\sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) = \sum_{S\subseteq [n]} \sum_{x=1}^{p-1}x^{\sum_{i\in S} m_i - B} \equiv \sum_{\substack{S\subseteq [n] \\ \sum_{i\in S} m_i \equiv B \pmod{p-1}}} -1.$$ Since $p-1 \geq C$ and each exponent has absolute value less than $C$, we have $\sum_{i\in S} m_i \equiv B \pmod{p-1}$ if and only if $\sum_{i\in S} m_i = B$. Hence this sum contributes $-1$ for each such $S$, and so the final sum is $-A$.

We are now ready to prove correctness. If $\sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \not\equiv 0 \pmod{p}$ for some $p>C$, then by our Lemma this means that $A\not\equiv 0 \pmod{p}$. In particular, this means that $A\neq 0$, and that therefore there is some such $S$. Consider the integer $d$ equal to the product of the primes $p$ that have been checked so far. Then $d$ is a product of distinct primes $p$ so that $-A \equiv \sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \equiv 0 \pmod{p}$. Therefore $d|A$. Furthermore it is the case that $d\geq 2^c$. It is clear from the definition of $A$ that $0\leq A \leq 2^n$. Therefore if $c>n$, then $d>2^n$ and $d|A$, which implies that $A=0$, and that therefore there are no solutions. Hence our algorithm always outputs correctly.
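To make the procedure concrete, here is a short Python sketch of the algorithm analysed above (a plain transcription for clarity, with no attempt to respect the logspace bound, since ordinary Python arithmetic and loops are used). The exponents are reduced modulo p-1, which is valid because x^(p-1) is congruent to 1 for x in 1..p-1.

def next_prime(n):
    # Smallest prime strictly greater than n, by trial division.
    p = n + 1
    while True:
        if p > 1 and all(p % d for d in range(2, int(p**0.5) + 1)):
            return p
        p += 1

def unary_subset_sum(ms, B):
    # Decide whether some subset of ms sums to B, via the sums of the Lemma.
    C = abs(B) + sum(abs(m) for m in ms) + 1
    n, c = len(ms), 0
    p = next_prime(C)
    while c <= n:
        total = 0
        for x in range(1, p):
            term = pow(x, (-B) % (p - 1), p)              # x^{-B} mod p
            for m in ms:
                term = term * (1 + pow(x, m % (p - 1), p)) % p
            total = (total + term) % p
        if total != 0:            # A is nonzero mod p, so a solution exists
            return True
        c += p.bit_length() - 1   # floor(log2 p)
        p = next_prime(p)
    # the product of the checked primes exceeds 2^n and divides A, so A = 0
    return False

# Example: unary_subset_sum([3, 5, 7], 12) is True; unary_subset_sum([3, 5, 7], 11) is False.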
Extensions
==========

There are some relatively simple extensions of this algorithm. For one thing, our algorithm does more than tell us whether or not $A$ is equal to 0; it also gives congruential information about $A$. We can in fact obtain more refined congruential information than is apparent from our Lemma. We can also use this along with the Chinese Remainder Theorem to compute a numerical approximation of $A$. Finally, a slight generalization of these techniques allows us to work with vector-valued rather than integer-valued $m_i$.

Computing Congruences
---------------------

We show above how to compute $A$ modulo $p$ for $p$ a prime larger than $C$. But in fact if $p$ is any prime and $k>1$ any integer, $A$ can be computed modulo $p^k$ in $O(\log(p^k))$ space. If $p>C$, then we have that $$A \equiv \frac{1}{p-1} \sum_{x=1}^{p-1}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \pmod{p}.$$ On the other hand, if $p\leq C$, the above expression will only count the number of subsets that give the correct sum modulo $p-1$. We can fix this by letting $q=p^\ell$ for some integer $\ell$ so that $q>C$. Then, for the same reasons that the above is true, it will be the case that $$A \equiv \frac{1}{q-1} \sum_{x\in \mathbb{F}_q^*}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \pmod{p},$$ where ${\mathbb{F}}_q$ is the finite field of order $q$. If we have $k>1$ and $p>C$, we note that, again for the same reasons, $$A \equiv \frac{1}{p-1} \sum_{x\in \mu_{p-1}}x^{-B}\prod_{i=1}^n (1+x^{m_i}) \pmod{p^k},$$ where $\mu_{p-1}$ is the set of $(p-1)^{st}$ roots of unity in ${\mathbb{Z}}/p^k$. This computation can be performed without difficulty in ${\mathbb{Z}}/p^k$. We again run into difficulty if $p\leq C$. We can let $r=(p-1)p^\ell$ for some integer $\ell$ so that $r>C$. It will then be the case that $$rA \equiv \sum_{x\in \mu_r} x^{-B}\prod_{i=1}^n (1+x^{m_i}) \pmod{p^{k+\ell}}.$$ The right hand side can easily be computed in ${\mathbb{Z}}/p^{k+\ell}$, and dividing by $r$ gives $A\pmod{p^k}$.

Approximating the Number of Solutions
-------------------------------------

It is also possible in Logspace to approximate the number of solutions, $A$, computing logarithmically many significant bits. This can be done using the Chinese Remainder Theorem. Suppose that $p_1,\ldots,p_k$ are distinct primes. By the above we can compute $A$ modulo $p_i$ for each $i$. Let $N=\prod_{i=1}^k p_i$, and $N_i=\frac{N}{p_i}$. The Chinese Remainder Theorem tells us that $$A \equiv \sum_{i=1}^k N_i \left( A \pmod{p_i} \right) \left( N_i^{-1} \pmod{p_i}\right) \pmod{N}.$$ Or in other words, $$\frac{A}{N} \equiv \sum_{i=1}^k \left( \frac{1}{p_i}\right)\left( A \pmod{p_i} \right) \left( N_i^{-1} \pmod{p_i}\right) \pmod{1}.$$ Now we can compute $A$ modulo $p_i$ by the above. We can also compute $N_i^{-1} \equiv \prod_{j\neq i} p_j^{-1} \pmod{p_i}$. Hence we can compute each term in the sum to logarithmically many bits. Hence in Logspace we can compute $$\frac{A}{N} \pmod{1}$$ to logarithmically many bits of precision. If $2A>N>A$, this allows us to compute logarithmically many significant bits of $A$. We can find such an $N$ by starting with an $N>2^n\geq A$ and repeatedly trying an $N$ at least half as big as the previous $N$ until $N<2A$ (we can find our next $N$ by either removing the prime 2 from $N$ or replacing the smallest prime dividing $N$ by one at least half as big, which exists by Bertrand's postulate). It should also be noted that this ability to approximately count solutions in Logspace allows us to approximately uniformly sample from the space of solutions in Randomized Logspace. This is done by deciding whether or not each element is in $S$ one-by-one and putting it in with probability nearly equal to the proportion of the remaining solutions that have that element in $S$.

Vector-Valued Inputs
--------------------

We consider the slightly modified subset sum problem where now $m_i$ and $B$ lie in ${\mathbb{Z}}^k$, and again we wish to determine whether or not there exists an $S$ so that $\sum_{i\in S} m_i = B$. If we let $C$ be one more than the sum of the absolute values of the coefficients of the $m_i$ plus the absolute values of the coefficients of $B$, a slight modification of our algorithm allows us to solve this problem in $O(k\log(C))$ space and $C^{O(k)}$ time (in particular, if $k=O(1)$, this runs in $O(\log(C))$ space and $C^{O(1)}$ time). There are two ways to do this. One is simply to treat our vectors as base-$C$ expansions of integers and reduce this to our previous algorithm. Another technique involves a slight generalization of our Lemma. In either case we let $m_i=(m_{i,1},\ldots,m_{i,k})$, $B=(B_1,\ldots,B_k)$.
For the first algorithm, we let $m_i' = \sum_{j=1}^k C^{j-1}m_{i,j}$ and $B'=\sum_{j=1}^k C^{j-1}B_{j}.$ We claim that for any $S\subseteq[n]$, $\sum_{i\in S}m_i = B$ if and only if $\sum_{i\in S}m_i' = B'$, thus reducing this to an instance of our original problem. The claim holds because $$\sum_{i\in S} m_i'-B' = \sum_{j=1}^k C^{j-1} \left(\sum_{i\in S} m_{i,j} - B_j \right) = \sum_{j=1}^k C^{j-1} e_j.$$ Since the $e_j$ are all integers of absolute value less than $C$, this sum is 0 if and only if each of the $e_j$ is 0. Hence $\sum_{i\in S}m_i = B$ if and only if $\sum_{i\in S}m_i' = B'$. Another way to do this is by generalizing our Lemma. In particular, it can be shown using similar techniques that if $A$ is the number of subsets $S$ that work, and if $p$ is a prime bigger than $C$, then $$-A \equiv \sum_{x_1,\ldots,x_k=1}^{p-1} \left(\prod_{i=1}^k x_i^{-B_i} \right)\left(\prod_{i=1}^n \left(1+\prod_{j=1}^k x_j^{m_{i,j}}\right) \right)\pmod{p}.$$ Given this, there is a natural generalization of our algorithm. It should also be noted that both of these techniques allow us to use the above-stated generalizations of our algorithm in the vector-valued context. This generalization also allows us to solve some related problems, such as the Unary 0-1 Knapsack problem. This problem is defined as follows: you are given a list of integer weights $w_1,\ldots,w_n$, a list of integer values $v_1,\ldots,v_n$, and an integer bound $B$. The objective is to find a subset $S\subseteq [n]$ so that $\sum_{i\in S} v_i$ is as large as possible, subject to the restriction that $\sum_{i\in S}w_i \leq B$. We do this by determining all possible pairs $(\sum_{i\in S}w_i,\sum_{i\in S} v_i)$, by applying our algorithm to $m_i=(w_i,v_i)$ and $B=(w,v)$ for all $|w|\leq \sum_{i=1}^n |w_i|, |v|\leq \sum_{i=1}^n |v_i|$. Of the pairs $(w,v)$ for which there is a solution, we keep track of the largest $v$ that corresponds to a $w\leq B$. From this pair it is also not hard to use our algorithm to find a subset $S$ which achieves this bound.

Alvarez, C. and Greenlaw, R., *A Compendium of Problems Complete for Symmetric Logarithmic Space*, Computational Complexity, Vol. 9(2), 2000.
S. Cho and D. Huynh, *On a complexity hierarchy between L and NL*, Inform. Process. Lett. 29, 1988, 177–182.
M. Elberfeld, A. Jakoby, and T. Tantau, *Logspace Versions of the Theorems of Bodlaender and Courcelle*, Foundations of Computer Science, 2010.
B. Monien and I. H. Sudborough, *Formal language theory*, in Formal Language Theory, ed. R. V. Book, Academic Press, 1980.
Q: Why does this not work? Program in C

The program I am writing is in C. It is supposed to have the user input a distance, and then calculate the angle at which you would need to shoot a projectile to reach that said distance, at a set velocity:

#include <stdio.h>
#include <math.h>

#define MuzzleVelocity 1000
#define PI (3.141592653589793)

int main (int argc, const char * argv[])
{
    float G, meters, tf, angle, range;

    G = 9.8; /* M/sec^2 */

    printf("Enter the range of Enemy Sub in Meters>");
    scanf("%f", &meters);
    printf("\nTo hit the sub at %f meters", meters);

    angle = (180/PI)* .5* asin(range / ((pow(MuzzleVelocity, 2))/G));

    printf ("\n Use the angle %f degrees", angle);
    return 0;
}

If anyone could explain the line beginning with int main as well, that would be extremely helpful; I mainly edited the beginning from a previous problem and substituted the variables I thought I needed to input.

The correct output should be:

Enter the range of Enemy sub in Meters> XXX.XX
To hit the sub at XXX.XX meters
Use the angle XX.XX

A: The problem (though you never actually tell us what 'not working' means) is likely that you are assigning the user input to meters:

printf("Enter the range of Enemy Sub in Meters>");
scanf("%f", &meters);

But then in your equation you use the value of range, which has never been set and will thus hold an indeterminate (garbage) value:

angle = (180/PI)* .5* asin(range / ((pow(MuzzleVelocity, 2))/G));

Replace range with meters in the equation, or vice versa in the scanf.
When will you be adding a ludeme for a 3D Game Board cube with NxNxN cells? Each cell can be either empty or occupied by one of the players' game play pieces (e.g., circle, square or triangle if there are 3 players). Adjacency is defined as sharing a 1x1x1 cell face with a neighboring cell. Also, the game I want to implement in Ludii is Qua, with BGG entry: https://boardgamegeek.com/boardgame/312939/qua. In addition to neighboring cell adjacency left-right, front-back, and above-below, there is game board cube face adjacency, where each game player has two opposite game board cube faces assigned to them. These cube faces are adjacent to the NxN cells of the game board cube that are on the surface of that cube face.

RE: 3D Game Board Cube - cambolbro - 07-18-2020

Hi, 3D game boards are in the pipeline and should be provided in a version later this year, once we work out a suitable way to unambiguously view 3D structures and allow unambiguous selection of every playable site in the GUI. This probably won't be for at least a couple of months, but we'll keep your request in mind. How are 3D connections handled in Qua, does it use "overpasses cut underpasses"? i.e. if player A has a connection along the board level and player B builds a bridge of connected pieces across it, such that the lower pieces are buried and Player A's lower connection is no longer visible, does that connection still count? Regards, Cameron

RE: 3D Game Board Cube - QuaGamer - 08-24-2020

(07-18-2020, 07:45 AM) cambolbro Wrote: Hi, ...

In Qua, the 3D cube game board is a simple 3D matrix of cells = NxNxN cells, where N is the number of cells along each edge of the cube. Each cell is either empty or contains exactly one player's piece. Pieces don't move. When a player occupies an empty cell on their turn, that cell remains filled with that player's piece for the rest of the game. This is similar to the 4x4x4 3D game board for Qubic: https://www.boardgamegeek.com/boardgame/13714/qubic. So, no, Qua does not use overpasses cut underpasses. It does not build up from a 2D board starting point. Each of the 3 players owns two opposite game board cube faces, which are adjacent to all the cells on those faces. Connections are based on cell face to cell face adjacency, or cell face to game board cube face adjacency. In Qua, all three players can conceivably connect their two game board cube faces without blocking each other. This is actually the objective in the "Cooperative Qua" game play variant. For more information about Qua, please read: http://www.abstractgames.org/qua.html

RE: 3D Game Board Cube - QuaGamer - 08-29-2020

Note: 3D chess games, like 3D XYZ Chess: https://boardgamegeek.com/boardgame/317984/3d-xyz-chess, also use this type of game board.

RE: 3D Game Board Cube - QuaGamer - 09-12-2020

I specifically would like a board ludeme called "cube" that extends the "square" ludeme. It takes one integer parameter, which is the length of the game board in places in 3 orthogonal directions. It has the same four regions as a square: N and S, W and E. It has two additional regions U and D for up and down in the third orthogonal direction. Each place has the same four orthogonally adjacent places in the N, E, W, and S directions plus two new orthogonally adjacent places in the U and D directions, for a total of 6 orthogonally adjacent places. Each place in the cube has a triple as its coordinate. The first two are the same as for the square: a letter followed by a number. The new, third element of the coordinate will need its own count identifier.
For displaying a cube game board, I suggest an offset perspective view of N layers of NxN squares. Here is a notional example for a 5x5x5 cube: https://drive.google.com/file/d/1t0QmRE9SvE6FF3snCLMW4D-17Q0hXRrX/view?usp=sharing

RE: 3D Game Board Cube - QuaGamer - 09-13-2020

Nevermind. I found the square.class in the Ludii-1.0.5.jar file.

RE: 3D Game Board Cube - QuaGamer - 09-25-2020

I've looked at the Java code for ludemes. It is more complicated than what I am willing to learn right now. I will wait for a team member to write the cube ludeme.

RE: 3D Game Board Cube - QuaGamer - 10-13-2020

Has anyone on the development team started work on a "cube" ludeme? I am willing to test code and provide suggestions. For example, start simple and implement a draft face-adjacency cube ludeme to work out the graphics presentation challenges. You may want to wait and add edge-diagonal and corner-diagonal adjacency connections later. In looking at the Java code, I realize the cube ludeme probably needs its own 3D parent Java modules and cannot extend the square ludeme directly.

Comment: Directional language issues are important to sort out. North, South, East, West, Up, Down makes sense in 3D. However, you have already set a 2D precedent for Top=North, Bottom=South, Left=West, Right=East. I suggest that when you implement the Cube ludeme (or sooner, in anticipation), you replace Top with Back and Bottom with Front, so that Top can be used with Up and Bottom can be used with Down. I also suggest Row, Column and Stack language for multiple "places" (i.e., locations/cells/vertices) and "layers" adjacent to each other in the 3rd dimension. For example, a row of layers in the West-East direction, a column of layers in the North-South direction and a stack of layers in the Up-Down direction.

RE: 3D Game Board Cube - Eric Piette - 10-14-2020

Hi Woody,

No, we are not working on 3D games right now. We are focused on the ERC project (Digital Ludeme Project) around ancient games. And of course, no 3D games in the ancient times ;) But we will let you know when we start working on it, even if I do not think that will be in the next few months.

Regards,
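Purely as an illustration of the adjacency rules discussed in this thread (face-sharing cell neighbours plus each player's two assigned board faces), here is a small Python sketch; the function names and the W/E/S/N/D/U labelling convention are just one possible choice and are not anything from Ludii itself.

def cube_neighbours(x, y, z, n):
    # Orthogonal (face-sharing) neighbours of cell (x, y, z) on an n x n x n board.
    deltas = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [(x + dx, y + dy, z + dz)
            for dx, dy, dz in deltas
            if 0 <= x + dx < n and 0 <= y + dy < n and 0 <= z + dz < n]

def board_face_cells(face, n):
    # Cells adjacent to one of the six board faces, labelled 'W', 'E', 'S', 'N', 'D', 'U'.
    axis, coord = {'W': (0, 0), 'E': (0, n - 1),
                   'S': (1, 0), 'N': (1, n - 1),
                   'D': (2, 0), 'U': (2, n - 1)}[face]
    return [(x, y, z) for x in range(n) for y in range(n) for z in range(n)
            if (x, y, z)[axis] == coord]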
https://ludii.games/forums/printthread.php?tid=88
Fu, Y., Yu, H., Yeh, C.-K., Zhang, J. J. and Lee, T.-Y., 2019. High Relief from Brush Painting. IEEE Transactions on Visualization and Computer Graphics, 25 (9), 2763-2776.

Full text available as: PDF, 08419282.pdf - Accepted Version (11MB), available under License Creative Commons Attribution Non-commercial No Derivatives.

Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via [email protected]. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.

Official URL: Volume 25, Issue 9, Sept. 1 2019. DOI: 10.1109/TVCG.2018.2860004

Abstract

Relief is an art form part way between 3D sculpture and 2D painting. We present a novel approach for generating a texture-mapped high-relief model from a single brush painting. Our aim is to extract the brushstrokes from a painting and generate the corresponding individual relief proxies, rather than recovering the exact depth map from the painting, which is a tricky computer vision problem requiring assumptions that are rarely satisfied. The relief proxies of the brushstrokes are then combined together to form a 2.5D high-relief model. To extract brushstrokes from 2D paintings, we apply layer decomposition and stroke segmentation by imposing boundary constraints. The segmented brushstrokes preserve the style of the input painting. Through inflation and a displacement map of each brushstroke, the features of the brushstrokes are preserved in the resulting high-relief model of the painting. We demonstrate that our approach is able to produce convincing high reliefs from a variety of paintings (with humans, animals, flowers, etc.). As a secondary application, we show how our brushstroke extraction algorithm could be used for image editing. Note that our brushstroke extraction algorithm is specifically geared towards paintings with each brushstroke drawn very purposefully, such as Chinese paintings and rosemaling paintings.
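To give a flavour of the inflation-and-displacement idea mentioned in the abstract, the toy Python sketch below turns a binary brushstroke mask into a rounded height field and composes several strokes into a single relief. It is only a generic illustration (distance-transform inflation and pointwise-max composition) with invented function names, and is not the authors' actual pipeline.

import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate_stroke(mask, height=1.0):
    # Toy 'inflation': height grows with distance from the stroke boundary.
    d = distance_transform_edt(mask)
    if d.max() > 0:
        d = d / d.max()                  # normalise to [0, 1]
    return height * np.sqrt(d)           # rounded, cushion-like profile

def compose_relief(stroke_masks, heights):
    # Combine per-stroke height fields into one relief by taking the pointwise maximum.
    relief = np.zeros(stroke_masks[0].shape, dtype=float)
    for mask, h in zip(stroke_masks, heights):
        relief = np.maximum(relief, inflate_stroke(mask, h))
    return relief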
https://eprints.bournemouth.ac.uk/31215/
RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 16/448,474 filed 21 Jun. 2019, the content of which as filed is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

The present invention relates to a computer vision system and method, employing machine learning and in particular deep neural networks, for classifying (and monitoring changes in) an image (such as a medical image), based on structural or material segmentation. Possible medical imaging applications include Computed Tomography (CT), Magnetic Resonance (MR), Ultrasound, HRpQCT (High-Resolution peripheral Quantitative Computed Tomography), and Pathology Scanner imaging.

BACKGROUND

Computer vision and image processing techniques have been applied to medical image analysis. Some computer-aided systems achieve the analysis with two steps: segmentation and quantitative calculation. Segmentation is the process of segmenting (or differentiating) structures or objects in an image, such as a medical image, from one another by differentiating pixels (in a 2D image) or voxels (in a 3D image) in the image. Based on the segmentation, quantitative features such as volume, shape, and density are calculated. For example, lesion size and shape are calculated after the lesion has been segmented in a brain CT or MRI scan; the bone mineral density may be calculated after the femoral neck is segmented in a hip DXA (dual-energy x-ray absorptiometry) scan. A doctor may make a diagnosis or treatment decision after he or she has compared such calculated values with healthy reference data. For example, a T-score is the standard score of a patient's bone mineral density compared to the young normal reference mean. The WHO (World Health Organization) defines osteoporosis as a T-score of −2.5 or lower, that is, a bone density that is two and a half standard deviations or more below the mean of a 30-year-old healthy man/woman.

The segmentation may be achieved manually, semi-manually, or automatically. In an example of manual segmentation, a user operates a computer to move a rectangular box over a hip DXA scan and thereby select the region of the femoral neck. Semi-manual segmentation may be performed by an image processing program employing a user's initialisation or input. For example, a user may operate a computer to draw an approximate bone boundary on a wrist CT scan; the program then adjusts the approximate boundary into a contour that segments bone from the surrounding tissues. Automatic segmentation may be performed by utilizing the features of the object of interest, such as intensity values, edges and shapes. In one existing example, a voxel-value based thresholding method is used to segment bone from the surrounding tissues in CT scans. Some other programs use machine learning algorithms to train a classifier to segment abnormal tissues in medical images. For example, a feature-based machine learning algorithm, such as a support vector machine or a decision tree, may be used as a classifier by using tumour images and normal images as training data. The trained classifier slides through the new image "window" by "window" to segment any image regions of tumour tissues. Machine learning algorithms have shown promising accuracy and efficiency in this field. However, it is a significant challenge both to collect sufficient training data and to annotate the training data. The training images must be annotated by experts, which is a tedious and time-consuming process.
Moreover, in some applications, it may be very difficult or nearly impossible to accurately annotate the training images, even for experts. For example, in bone quality assessment, a transitional zone exists in any sample composed of both cortical and trabecular bone. The transitional zone comprises the inner cortex adjacent to the medullary canal and the trabeculae abutting against the cortex contiguous with the endocortical surface. The transitional zone is a site of vigorous bone remodelling. It is important to identify and segment this region in bone microstructure assessment but, owing to limitations in image resolution, it is essentially impossible for an expert to annotate this region both accurately and consistently. Without annotated images as training data, the segmentation model cannot be trained.

In the last few years, deep learning or deep neural networks have outperformed humans in many visual recognition tasks such as natural image classification. In an exemplary CNN (Convolutional Neural Network) implementation, the network comprises an input layer, hidden layers, and an output layer. An image is fed into the network through the input layer. The image is sampled and convolutional operations are applied to generate the hidden layers. The output of each layer is used as input to the next layer in the network. The output layer is fully connected and outputs a classification result. The training data are images with classification labels. The training process obtains the parameters of the neural network. After the training is finished, a new image will be processed by the neural network with the obtained parameters to generate a classification result. For example, a deep neural network algorithm may be used to train a model to determine the condition (for example, none, mild, moderate, or severe) of diabetic retinopathy from OCT (Optical Coherence Tomography) images. However, this end-to-end solution brings two problems in clinical practice. First, the end-to-end solution is a black box: the input is the medical image, and the output is the classification of diseases or conditions. It is difficult to interpret the process whereby the neural network makes its decision, so it is difficult for the user to assess the reliability of the classification results. Secondly, this solution requires a substantial amount of training data. As discussed above, in medical applications annotating or labelling the training data is a tedious and time-consuming process. Collecting enough training data for each category of each type of classification result thus poses a significant challenge.

SUMMARY
According to a first aspect of the invention, there is provided a system for classifying a structure or material in an image of a subject, comprising:

a segmenter configured to form one or more segmentations of a structure or material in an image (comprising, for example, a medical image) and generate from the segmentations one or more segmentation maps of the image including categorizations of pixels or voxels of the segmentation maps assigned from one or more respective predefined sets of categories;

a classifier that implements a classification machine learning model configured to generate, based on the segmentation maps, one or more classifications and to assign to the classifications respective scores indicative of a likelihood that the structure or material, or the subject, falls into the respective classifications; and

an output for outputting a result indicative of the classifications and scores.

In an embodiment, the classifier generates the one or more classifications based on the segmentation maps and non-image data pertaining to the subject. The system may be configured to train the classification machine learning model.

In an embodiment, the segmenter comprises:

i) a structure segmenter configured to generate structure segmentation maps including categorizations of the pixels or voxels assigned from a predefined set of structure categories,

ii) a material segmenter configured to generate material segmentation maps including categorizations of the pixels or voxels assigned from a predefined set of material categories, and/or

iii) an abnormality segmenter configured to generate abnormality segmentation maps including categorizations of the pixels or voxels assigned from a predefined set of abnormality or normality categories.

In an example, the structure segmenter is configured to employ a structure segmentation machine learning model to generate the structure segmentation maps, the material segmenter is configured to employ a material segmentation machine learning model to generate the material segmentation maps, and the abnormality segmenter is configured to employ an abnormality segmentation model to generate the abnormality segmentation maps. The structure segmenter may be configured to train the structure segmentation machine learning model, the material segmenter to train the material segmentation machine learning model, and/or the abnormality segmenter to train the abnormality segmentation model.

In an embodiment, the system further comprises a segmentation map processor configured to process the segmentation maps before the segmentation maps are input by the classifier. In an example, the segmentation map processor is configured to down-sample the segmentation maps.

In an embodiment, the classification machine learning model comprises a neural network, a support vector machine, a decision tree, or a combination thereof. For example, the classification machine learning model may comprise a neural network that includes convolutional neural network layers and fully-connected neural network layers.

In an embodiment, the image is a medical image, and the classifications correspond to probabilities that the structure or material, or the subject, will sustain a specified condition or symptom in respective timeframes. In an example, the timeframes include a shorter-term timeframe, a longer-term timeframe, and at least one intermediate-term timeframe intermediate the shorter-term timeframe and the longer-term timeframe. In another example, the condition or symptom is bone fracture.

In an embodiment, the image is a medical image, and the classifications correspond to probabilities that the structure or material, or the subject, will sustain respective conditions or symptoms. In an example, the conditions or symptoms are bone conditions.

In an embodiment, the image is a medical image, and the classifications correspond to probabilities of respective rates of disease or pathology progression. For example, the classifications may comprise classifications corresponding to any one or more of: stable, modest deterioration, and accelerated deterioration.

In an embodiment, the image is a medical image, and the classifications correspond to probabilities of efficacy of respective treatment options.
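As a rough illustration of the classifier described above (convolutional layers applied to the stacked segmentation maps, followed by fully-connected layers, with non-image data concatenated in), a minimal PyTorch-style sketch is given below. The layer sizes, names and the choice of PyTorch are assumptions made for the example and are not taken from the patent.

import torch
import torch.nn as nn

class SegmentationMapClassifier(nn.Module):
    # Toy classifier: segmentation maps (as channels) in, per-class scores out.
    def __init__(self, n_map_channels, n_nonimage_features, n_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_map_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * 8 * 8 + n_nonimage_features, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, seg_maps, nonimage):
        # seg_maps: (batch, channels, H, W); nonimage: (batch, n_nonimage_features)
        x = self.conv(seg_maps).flatten(1)
        x = torch.cat([x, nonimage], dim=1)
        return self.fc(x)   # raw scores; a softmax converts them to per-class likelihoods

# e.g. model = SegmentationMapClassifier(n_map_channels=3, n_nonimage_features=5, n_classes=4)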
For example, the treatment options may include an antiresorptive treatment and/or an anabolic treatment.

In an embodiment, the image is a medical image, and the classifications correspond to respective medical conditions. For example, the medical conditions may include any one or more of: osteomalacia, tumour, osteonecrosis and infection.

In an embodiment, the classification machine learning model is a model trained with image data and non-image data relating to training subjects, and generates the respective scores based on image data (typically constituting one or more images) and non-image data relating to the subject.

According to a second aspect of the invention, there is provided a computer-implemented method for classifying a structure or material in an image of a subject, comprising: forming one or more segmentations of a structure or material in an image; generating from the segmentations one or more segmentation maps of the image including categorizations of pixels or voxels of the segmentation maps assigned from respective predefined sets of categories of the structure or material; using a classification machine learning model to generate, based on the segmentation maps, one or more classifications and to assign to the classifications respective scores indicative of a likelihood that the structure or material, or the subject, falls into the respective classifications; and outputting a result indicative of the classifications and scores.

In an embodiment, the classification machine learning model is used to generate the one or more classifications based on the segmentation maps and non-image data pertaining to the subject. The method may include training the classification machine learning model.

In an embodiment, forming the one or more segmentations comprises: i) generating structure segmentation maps including categorizations of the pixels or voxels assigned from a predefined set of structure categories, ii) generating material segmentation maps including categorizations of the pixels or voxels assigned from a predefined set of material categories, and/or iii) generating abnormality segmentation maps including categorizations of the pixels or voxels assigned from a predefined set of abnormality or normality categories.

For example, the method may include employing a structure segmentation machine learning model to generate the structure segmentation maps, a material segmentation machine learning model to generate the material segmentation maps, and an abnormality segmentation model to generate the abnormality segmentation maps. In particular, the method may include training the structure segmentation machine learning model, the material segmentation machine learning model, and/or the abnormality segmentation model.

According to this aspect, there is also provided a classification of a structure or material in an image of a subject, generated according to the method of this aspect.

According to a third aspect of the invention, there is provided a computer-implemented diagnostic method, comprising the method of the second aspect.
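To make the kind of classification machine learning model contemplated in these aspects more concrete, here is a minimal, purely illustrative sketch in PyTorch of a neural network with convolutional layers followed by fully-connected layers that emits one score per classification. It is not the invention's implementation; the layer sizes, the three-channel input (one channel per segmentation map) and the four output classes are assumptions:

import torch.nn as nn

class SegMapClassifier(nn.Module):
    # Convolutional feature extraction over 3D segmentation maps, then
    # fully-connected layers that output one score per classification.
    def __init__(self, in_channels=3, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4))
        self.classify = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, segmentation_maps):
        return self.classify(self.features(segmentation_maps))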
According to a fourth aspect of the invention, there is provided a computer-implemented method for training a classification machine learning model for classifying a structure or material in an image of a subject, the method comprising: dividing annotated segmentation maps and annotated non-image data into a training set and a testing set (such that, as a result, the training set and the testing set each include some annotated segmentation maps and some annotated non-image data), the annotated segmentation maps obtained by segmenting one or more images; implementing a classification machine learning model, including initializing parameters of the classification machine learning model; updating the parameters of the classification machine learning model by running a learning algorithm on the training data; testing the classification machine learning model on the testing data; evaluating whether the classification machine learning model has satisfactory performance; and when the performance is found to be satisfactory, outputting the classification machine learning model for deployment or flagging the classification machine learning model as fit for deployment.

This aspect may be used in conjunction or in combination with (or as a part of) the second aspect, such as to train the classification machine learning model of the second aspect. The method may include segmenting the one or more images (such as in the course of generating the annotated segmentation maps).

In an embodiment, the method includes, when the performance is found to be unsatisfactory, collecting more image and non-image data for training the classification machine learning model.

The classification model can be trained by various machine learning algorithms, so may comprise—for example—a neural network, a support vector machine, a decision tree, or a combination thereof. Thus, in one embodiment, the classification machine learning model comprises a neural network having a plurality of layers comprising artificial neurons, wherein the parameters comprise layer numbers, neuron numbers, neuron weights, and neuron function parameters; and testing the classification machine learning model includes testing the classification machine learning model on the testing data.

In an embodiment, updating the parameters includes determining a gradient of a loss function. In an embodiment, the images are medical images and the non-image data comprise clinical records.

In an embodiment, the method includes dividing the annotated segmentation maps and the annotated non-image data into the training set, a development set and the testing set, and using the development data to investigate the learning procedure and to tune the parameters (and, when the classification machine learning model comprises a neural network, tune the layers).

According to a fifth aspect of the invention, there is provided a computer program comprising program code configured, when executed by one or more computing devices, to implement the method of any one or more of the second to fourth aspects. According to this aspect, there is also provided a computer-readable medium, comprising such a computer program.

It should be noted that any of the various individual features of each of the above aspects of the invention, and any of the various individual features of the embodiments described herein including in the claims, can be combined as suitable and desired.
DRAWINGS

In order that the invention may be more clearly ascertained, embodiments will now be described by way of example with reference to the following drawings, in which:

FIG. 1 is a schematic view of a classification system according to an embodiment of the present invention;
FIG. 2 is a high-level schematic diagram illustrating the operation of the classification system of FIG. 1 in calculating bone fracture risk of a subject from a medical imaging scan;
FIG. 3 is a schematic view of the operation of the segmenter of the classification system of FIG. 1;
FIG. 4 is a schematic illustration of the operation of the classifier of the classification system of FIG. 1; and
FIG. 5 is a flow diagram of the training of the classification neural network of the classification system of FIG. 1.

DETAILED DESCRIPTION

FIG. 1 is a schematic view of a classification system 10 for classifying a structure or material in a medical image (based on structural and material segmentation), according to an embodiment of the present invention.

Referring to FIG. 1, system 10 comprises a classification controller 12 and a user interface 14 (including a GUI 16). User interface 14 is provided for representing information to a user and for receiving input (including feedback) from a user; it typically comprises one or more displays (on one or more of which may be displayed the GUI 16), a web browser, a keyboard and a mouse, and optionally a printer. Classification controller 12 includes at least one processor 18 and a memory 20. System 10 may be implemented, for example, as a combination of software and hardware on a computer (such as a personal computer or mobile computing device), or as a dedicated image segmentation system. System 10 may optionally be distributed; for example, some or all of the components of memory 20 may be located remotely from processor 18; user interface 14 may be located remotely from memory 20 and/or from processor 18. For example, system 10 may be implemented in a service-oriented architecture, with its components communicating with each other over a communication network such as a LAN (local area network), WAN (wide area network) or the internet. System 10 may be deployed in the cloud, and its use shared by users at different locations.

In certain other embodiments, system 10 is implemented as a standalone system (of software and hardware) or as standalone software executable by a computer, and deployed in one location; for example, system 10 may be deployed in a hospital, medical clinic or other clinical setting.

Memory 20 is in data communication with processor 18, and typically comprises both volatile and non-volatile memory (and may include more than one of each type of memory), including RAM (Random Access Memory), ROM and one or more mass storage devices.

As is discussed in greater detail below, processor 18 includes a segmenter 22 (which includes a structure segmenter 24a, a material segmenter 24b, and an abnormal segmenter in the form of an abnormal material segmenter 24c), a segmentation map processor 26, and a non-image data processor 28. Processor 18 also includes a classifier 30, an I/O interface 32 and a results output 34.
Memory 20 includes program code 36, image data 38, non-image data 40, segmentation models 42 (including, in this example, structure segmentation models 44, material segmentation models 46, and abnormality segmentation models in the form of abnormal material segmentation models 48), and segmentation maps 50 (including, in this example, structure segmentation maps 52, material segmentation maps 54, and abnormality segmentation maps in the form of abnormal material segmentation maps 56). Structure segmenter 24a, material segmenter 24b and abnormal material segmenter 24c train the respective segmentation models 44, 46, 48, and use segmentation models 44, 46, 48 to perform segmentation on incoming images and generate structure segmentation maps 52, material segmentation maps 54, and abnormal material segmentation maps 56, respectively.

Memory 20 also includes a classification machine learning model in the form of a classification neural network 58, which is trained and used by classifier 30 to perform classification by using segmentation maps 50 and non-image data 40. Classification controller 12 is implemented, at least in part (and in some embodiments entirely), by processor 18 executing program code 36 from memory 20.

It should be noted that, as the present embodiment relates to the classifying of structures and/or materials in a medical image, abnormal material segmenter 24c may also be referred to as an abnormal tissue segmenter, and abnormal material segmentation maps 56 may also be referred to as abnormal tissue segmentation maps.

In broad terms, I/O interface 32 is configured to read or receive medical image data and non-image data pertaining to a subject, and to store these data as image data 38 and non-image data 40 of memory 20 for processing. Image data 38 is typically in the form, in this embodiment, of a medical image of—for example—a region of the body of a subject. Non-image data 40 typically includes subject or patient information from various structured and non-structured data sources, collected throughout a subject's medical consultations, treatment and follow-up consultations.

Subject structured data may include basic subject information such as sex, age, weight, height; laboratory test results such as blood test results and DNA test results; treatment data such as the types of medication and dosage; and questionnaire data such as smoking and drinking habits and fracture history. Subject unstructured data may include text documents of laboratory results, doctors' notes, and radiological reports. Non-image data 40 may be in a variety of formats, such as numerical data, text, voice, and video.

Segmenter 22 processes image data 38 (constituting one or more medical images) and uses structure segmentation models 44, material segmentation models 46 and abnormal material segmentation models 48 to generate—from image data 38—structure segmentation maps 52, material segmentation maps 54 and abnormal material segmentation maps 56, respectively, which characterize image data 38 in different ways. Classifier 30 then inputs the resulting segmentation maps 50 and non-image data 40, and generates therefrom results in the form of a classification output. The classification output is, in this embodiment, presented to users or used for further analysis via I/O interface 32 and at either results output 34 and/or user interface 14.
The classification output of classifier 30 (in this embodiment, generated using classification neural network 58) comprises a respective condition score for each of one or more classifications (and preferably for each of a plurality of possible classifications). Each score represents a predicted likelihood that the subject falls into the corresponding classification. In the present example of bone fragility assessment, the classifications are "negligible fracture risk", "imminent fracture risk", "intermediate-term fracture risk", and "long-term fracture risk". The classification output is described in more detail below.

In an alternative embodiment, the classifier outputs a respective disease progression score for each of one or more condition progression states. Each score represents a predicted likelihood that a current condition will progress to another condition. For example, in bone fragility assessment, the disease progressions may include "stable", "modest deterioration", and "accelerated deterioration".

In still another embodiment, the classifier outputs a respective treatment score for each of multiple treatment options. Each score represents a predicted likelihood that the treatment is the most efficient for the patient. For example, in a bone fragility assessment, the treatment options may include "antiresorptive", "anabolic", and "antiresorptive + anabolic".

In a further embodiment, the classification output comprises a score for each of one or more possible classifications corresponding to known medical conditions or pathologies. For example, in a bone fragility assessment, these classifications could be "osteomalacia", "tumour", "osteonecrosis" and "infection". In such an embodiment, the resulting scores represent the degree to which the (e.g. bone) sample of the subject conforms to that classification/condition. If only one classification has a significant score, or one classification has a score that is significantly greater than all other scores, that classification may be regarded as a diagnosis, or suggested diagnosis, of the corresponding condition or pathology. In certain embodiments, the classification output comprises two or more sets of such scores (selected from the aforementioned examples or otherwise).

Returning to FIG. 1, as will be appreciated by the skilled person in this art, image data 38—constituting one or more medical images—comprises data generated by one or more of a variety of medical image modalities (such as HRpQCT, or High-Resolution peripheral Quantitative Computed Tomography) implemented by one or more medical imaging devices (such as an HRpQCT scanner). Each of these devices scans a sample (whether in vivo or in vitro) and creates a visual representation, generally of a portion of the interior of a subject's body. The medical images may depict, for example, a part of a body or a whole body of a subject (e.g. the brain, the hip or the wrist). The medical images might be acquired by scanning the same sample or body part using different imaging modalities, as different imaging modalities may reveal different characteristics of the same sample or body part. The medical images might be acquired by scanning different body parts using the same imaging modality, as different body parts of the same patient might provide different insights towards a better diagnosis of diseases or conditions.
For example, in bone fragility assessment, both the wrist and the leg of a patient may be scanned by an HRpQCT scanner (or indeed acquired by scanning the different samples or body parts using different imaging modalities) to provide information for use in assessing a subject's bone quality.

The image data 38 may constitute a 2D (two-dimensional) image that may be represented as a 2D array of pixels, or a 3D (three-dimensional) image that may be represented as a 3D array of voxels. For convenience, the medical images described below are 3D images that may be represented as a 3D array of voxels.

As mentioned above, the one or more received medical images, stored in image data 38, are segmented by segmenter 22, using trained segmentation models 42, into respective segmentation maps 50. Each segmentation map 52, 54, 56 characterizes the respective medical image differently. A structure segmentation map 52 represents the medical image as one or more different anatomical structures from a predefined set of structures. For example, a wrist CT scan may be segmented into compact cortex, transitional zone, and trabecular region. A material segmentation map 54 represents the medical image as multiple different materials from a predefined set of materials. For example, a wrist CT scan might be segmented into mineralized material, fully mineralized material, red marrow in the trabecular region, and yellow marrow in the trabecular region. An abnormal material segmentation map 56 represents the medical image as normal material and abnormal material (or, in this example, normal tissue and abnormal tissue). For example, a tumour or fracture might be segmented from a wrist CT scan and represented in an abnormal material segmentation map 56 as 'abnormal'.

Segmentation maps 50 are inputted into classifier 30, in combination with non-image data 40. Classifier 30 generates one or more classification outputs based on segmentation maps 50 and the non-image data 40. The input data of classifier 30 is generally multi-dimensional, so classifier 30 is implemented with machine learning algorithms, such as a neural network, support vector machine, decision tree, or a combination thereof.

In this embodiment, classifier 30 employs or is implemented as classification neural network 58 (though in other embodiments, other machine learning algorithms may be acceptable), including—in this example—convolutional neural network layers and fully-connected neural network layers. Classification neural network 58 is trained with training data, as is described below.

As mentioned above, the ultimate classification output is outputted by system 10 to a user via results output 34 or user interface 14. The classification output may optionally include a visual presentation of one or more of the corresponding segmentation maps 50. Segmentation maps 50 may be presented in case they can assist a user in interpreting the classification output, such as in assessing the reliability of the results.

FIG. 2 is a high-level schematic diagram, at 60, illustrating the operation of system 10 in calculating bone fracture risk of a subject from a medical imaging scan, in this example a wrist HRpQCT scan 62 (also shown in negative at 62′). As shown in FIG. 2, system 10 receives the wrist HRpQCT scan 62 comprising a plurality of slices. (As will be appreciated by the skilled person, an HRpQCT scan can comprise 100 or more slices, but four slices are depicted in the figure for simplicity.)
22 62 52 22 62 54 62 64 52 54 56 40 30 52 54 66 66 68 68 68 68 66 a b c d Segmenter segments scan into a structure segmentation map in which the scan is segmented into compact cortex, outer transitional zone, inner transitional zone, and trabecular region. Segmenter segments scan into material segmentation map , in which scan is segmented into surrounding muscle, surrounding fat, yellow marrow adipose, and red marrow adipose. Data comprising segmentation maps , , abnormal material segmentation maps and non-image data (e.g. clinical factors including sex and age) are processed by trained classifier to generate classification outputs. The classification outputs include segmentation maps , and a table or report . Table or report includes, in numerical and/or graphical form, fracture probabilities in each category of fracture risk: imminent fracture risk (fracture within two years: t<2 y), intermediate-term fracture risk (fracture within two to five years: 2≤t<5 y), long-term fracture risk (fracture in five to ten years, 5≤t≤10 y), and negligible fracture risk . In the illustrated example, the probability that the wrist is at risk of fracture within two years is 95.6%, that the wrist is at a risk of fracture in two to five years 2.4%, that the wrist is at a risk of fracture in five to 10 years 1.6%, and that the wrist is at negligible risk of fracture is 0.3%. In other words, the probability that the subject will not have a wrist fracture in the next five years (either because the wrist has negligible risk of fracture or because there is only a long-term fracture risk) is only 4.4%. Table or report does not include a diagnosis (e.g. that the subject has osteoporosis), but it will be appreciated that these probabilities may be of great value, including—for example—to prompt the subject to pursue a diagnosis, such as by undergoing medical examination or consultation. FIG. 3 70 22 22 38 50 is a schematic view at of the operation of segmenter . Segmenter is configured to receive input includes one or more medical images (from image data ) and to process the images so as to generate segmentation maps . The medical images might be acquired using the same imaging modality by scanning different body parts of a patient. For example, in some applications of assessing bone quality, both wrist and leg of a patient might be scanned by an HRpQCT scanner for the assessment. The medical images might be acquired using different imaging modalities by scanning the same or different body parts of a patient. For example, in some other applications of assessing bone quality, both the wrist HRpQCT scan and the hip DXA scan of a patient are acquired for the assessment (though again bearing in mind that the medical images may be acquired by scanning the different samples or body parts using other imaging modalities). FIG. 3 22 72 72 38 38 38 38 74 74 72 72 36 50 1 n 1 n 1 n 1 n 1 n Referring to , segmenter implements one or more processing branches 1 to n, (labelled , . . . , in the figure) corresponding to medical images 1 to n of the subject (labelled , . . . , ). In the case of plural processing branches, medical images , . . . , may be due to—for example—different imaging modalities (labelled , . . . , ), as is the case in the illustrated example, different body parts, different scans of a single body part, or a combination two or more of these. Respective segmentation branches , . . . 
, are configured to receive an image, to segment the image according to image type (such as with different program code ), and to generate the branch output (comprising the segmentation maps of the input image). 22 22 To process a respective input medical image, segmenter is configured first to select a processing branch of processing branches 1 to n according to the type of the input image. Segmenter ascertains the type of image according to the sample (e.g. scanned body part) and imaging modality, information that can be determined from the respective image, including from metadata stored in a header file of the medical image and/or from the file type. For example, the information of scanned body part and imaging modality may be accessed from the metadata. 24 24 24 52 54 56 22 24 24 24 24 24 24 24 24 24 24 24 24 a b c a b c a a b b c c a b c 1 n 1 n 1 n Each input medical image 1 to n is processed by one or more of three segmentation programs (viz. structure segmenter , material segmenter , and abnormal material segmenter ) into the corresponding segmentation maps , , . Segmenter thus employs up to n instances each of segmenters , , (labelled , . . . , , , . . . , , and , . . . , , respectively), either in parallel or sequentially, though the number of such instances of each segmenter , , (being from 0 to n in each case) may differ. 24 24 24 72 72 24 24 24 52 52 54 54 56 56 24 24 24 a b c a b c a b c 1 n 1 n 1 n 1 n FIG. 3 Structure segmenter , material segmenter , and abnormal material segmenter may generate respective segmentation maps in each processing branch , . . . , . In , for example, structure segmenter , material segmenter , and abnormal material segmenter generate respective segmentation maps corresponding to medical imaging modalities 1 to n; the resulting structure segmentation maps, material segmentation maps and abnormal tissue segmentation maps are correspondingly labelled structure segmentation maps , . . . , , the material segmentation maps , . . . , and the abnormal tissue segmentation maps , . . . , . It should be noted, however, that in some applications it may not be possible or desirable to generate all three types of segmentation map. This may be due, for example, to the limitations of the images, of the imaging modalities, or of segmenters , , (arising, for example, from limitations in segmenter training data). 24 24 24 50 24 52 a b c a Structure segmenter , material segmenter , and abnormal material segmenter assign to each voxel of these segmentation maps one or more ‘types’ (or ‘categories’) from respective predetermined sets of types (or categories). Thus, in this embodiment structure segmenter assigns a respective structure type (from a predefined set of structure types) to each voxel in the scan. For example, a wrist HRpQCT scan is segmented into a structure segmentation map in which each voxel in the scan is assigned a structure type (or category) from the set of “surrounding tissues”, “compact cortex”, “transitional zone”, and “trabecular region”. 24 24 54 b b Material segmenter assigns a respective material type (from a predefined set of material types) to each voxel. For example, in this embodiment, material segmenter segments a wrist HRpQCT scan into a material segmentation map in which each voxel in the scan is assigned a material type from the set of “mineralised material”, “fully mineralised material”, “red marrow adipose”, and “yellow marrow adipose”. 
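As a toy illustration of these per-voxel type assignments (the structure and material types just described, plus the abnormality types described next), the three aligned category maps for a scan might be represented as follows. This is illustrative only; the integer codes, array shape and exact category names are not taken from the embodiment:

import numpy as np

STRUCTURE = {0: "surrounding tissues", 1: "compact cortex", 2: "transitional zone", 3: "trabecular region"}
MATERIAL = {0: "mineralised material", 1: "fully mineralised material", 2: "red marrow adipose", 3: "yellow marrow adipose"}
ABNORMALITY = {0: "normal", 1: "abnormal"}

shape = (16, 16, 16)                                    # a tiny 3D volume of voxels
structure_map = np.random.randint(0, 4, shape, dtype=np.uint8)
material_map = np.random.randint(0, 4, shape, dtype=np.uint8)
abnormality_map = np.random.randint(0, 2, shape, dtype=np.uint8)

# Each voxel of the scan now carries three aligned categorizations.
v = (3, 7, 11)
print(STRUCTURE[int(structure_map[v])], MATERIAL[int(material_map[v])], ABNORMALITY[int(abnormality_map[v])])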
24 24 56 24 24 c c c c Abnormal material segmenter assigns a respective abnormality or normality type (from a predefined set of abnormalities or normality types, such as a set comprising “normal” and “abnormal”) to each voxel. For example, in this embodiment, abnormal material segmenter segments a wrist HRpQCT scan into an abnormal tissue segmentation map in which each voxel in the scan is assigned either “normal” or “abnormal”. Optionally, in certain other embodiments, abnormal material segmenter can distinguish different types of abnormality, and the predefined set of abnormality or normality types may include—in addition to “normal”—and one or more specific abnormalities particular to the sample type under examination; if the sample is bone, these may include, for example, “fracture crack” or “bone tumour”. In such an embodiment, the set may optionally include “abnormal” for cases in which abnormal material segmenter cannot determine a specific type of abnormality. 24 24 24 24 24 24 24 a b c a b c a In some implementations, structure segmenter , material segmenter , and abnormal material segmenter assign respective types with confidence limits or probabilities to each voxel in the medical image. In some other implementations, structure segmenter , material segmenter , and abnormal material segmenter may assign a plurality of types (each optionally with a confidence limit or probability) to each voxel in the medical image. For example, structure segmenter —when segmenting a wrist HRpQCT scan—may assign both “transitional zone” and “trabecular region” to ambiguous voxels, but with respective (and typically different) confidence limits or probabilities. 22 50 42 Segmenter generates segmentation maps by using the trained segmentation models (including structure segmentation models, material segmentation models and abnormal material segmentation models). The segmentation models are trained using machine learning algorithms (such as a neural network, a support vector machine, a decision tree, or a combination thereof). In this embodiments, the segmentation models are trained using deep neural networks that comprises multiple layers include convolutional neural network layers, fully connected neural network layers, normalisation layers, and multiplicative layers. 22 22 Segmenter may also (or alternatively) perform segmentation using non-machine learning based methods, such as a method based on the location of edges, corners, and transitional slopes, or on global features such as histogram and intensity values of the image. For example, U.S. Patent Application Publication No. 2012/0232375 A1 (“Method and System for Image Analysis of Selected Tissue Structures”) discloses a method for segmenting the transitional zone between the compact cortical and trabecular region from a wrist CT scan, based on local and global features of a bone: in many applications, it would be suitable to implement this method in segmenter . FIG. 4 80 30 30 50 22 40 is a schematic illustration of the operation of classifier . Classifier is configured to receive input that includes segmentation maps (generated by segmenter ) and non-image subject data , and to process that input so as to generate one or more classification results. 50 40 30 26 28 50 30 26 26 However, both the segmentation maps and non-image subject data are processed before being passed to classifier by segmentation map processor and non-image data processor , respectively. 
For example, in some implementations, it may be expedient to down-sample segmentation maps into lower resolution maps, such as to allow faster image processing by classifier ; such processing, if desired or required, is performed by segmentation map processor . In some implementations, segmentation map processor sets the type of any voxels (in a particular segmentation map) that have been assigned more than one type (though typically with different confidence limits or probabilities), such as by assigning to the voxel the type that has the higher or highest probability. 28 28 Non-image data may include structured and unstructured data. Non-image data processor is configured to employ a variety of techniques to process any structured data by extracting features from it, in each case according to the structure and form of the data. For example, structured data are typically stored and maintained in structured data storage such as database tables, .json files, .xml files and .csv files. Non-image data processor extracts features from structured data by querying the required parameters and attributes from the data's respective sources. 28 28 28 28 28 Non-image data processor processes unstructured data in two steps: firstly by converting it into structured data, then by extracting features from the converted data. The conversion method employed by non-image data processor is specific to each source. For example, to convert a doctor's notes into structured data, non-image data processor employs a trained model of optical character recognition (OCR) to convert the notes into text recognisable by a computer. Non-image data processor then parses the converted text using keywords such as, in this example, “fractures”, “pain”, “fall”, etc. Once the unstructured data has been converted into structured data, non-image data processor then extracts features from the now structured data. 40 22 30 82 84 86 The processed non-image data and segmentation maps are passed to classifier , which uses these inputs to generate a classification outputs comprising classification score (such as disease condition scores , disease progression scores , and/or treatment scores ). FIG. 5 90 58 30 92 94 22 is a flow diagram of the training of classification neural network of classifier with a deep neural network, according to an embodiment of the present invention. As shown in the figure, at step , data—including medical image data and non-image data—are collected or input for training and testing. At step , the collected images are segmented by segmenter so as to generate segmentation maps (as described above). 96 38 40 At step , the image data and non-image data are annotated with labels provided by qualified experts with domain knowledge. In a medical application, the training classification outputs may be determined based on subject clinical records. For example, if the classification output is to include a fracture probability score, then the training output is the timing (post-scan) of a fracture, ascertained from the subject's medical history—and, where no fracture is apparent, recorded as “negligible risk” (or some equivalent designation). If the classification output is to include a score for categories that correspond to known medical conditions, then the training output is the actual medical condition of the subject, also ascertained from the subject's clinical records. 
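The non-image side of this pipeline, which queries structured records and turns OCR-converted free text such as doctors' notes into features usable alongside the clinical-record labels, might be sketched roughly as follows. The keyword list and field names are invented for illustration, not taken from the embodiment:

import re

KEYWORDS = ["fracture", "pain", "fall", "osteoporosis"]   # illustrative keyword list

def keyword_features(note_text):
    # Turn free text (e.g. the OCR output of a doctor's note) into simple count features.
    text = note_text.lower()
    return {kw: len(re.findall(r"\b" + kw + r"\w*", text)) for kw in KEYWORDS}

def structured_features(record):
    # Query the fields of interest from an already-structured record (database row, .json, .csv).
    return {"age": record.get("age"), "sex": record.get("sex"), "prior_fractures": record.get("prior_fractures", 0)}

note = "Patient reports wrist pain after a fall; previous fractures noted."
features = {**structured_features({"age": 71, "sex": "F", "prior_fractures": 1}), **keyword_features(note)}
print(features)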
At step 98, the data (comprising segmentation maps 50 and non-image data 40) is split into a training set, a development or 'dev' set (which may be omitted in some implementations), and a testing set, each for a different use. The training set is the data on which the learning algorithm is to be run; the dev set is the data used to tune the parameters; the testing set is the data to be used to evaluate the performance of the trained model.

At step 100, the layers of the deep neural network are implemented. Each layer consists of artificial neurons. An artificial neuron is a mathematical function that receives one or more inputs and sums them to produce an output. Usually, each input is separately weighted, and the sum is passed through a non-linear function. As the neural network learns, the weights of the model are adjusted in response to the error (the difference between the network output and the annotations) it produces, until the error cannot be reduced further.

At step 102, the parameters of classification neural network 58—including layer numbers, neuron numbers, neuron weights, and neuron function parameters, etc.—are initialized. At step 104, the learning algorithm runs on the training data set to update the parameters of classification neural network 58. For example, the parameters might be updated by determining a gradient of a loss function. The loss function is calculated from the labelled classification output and the output generated by classification neural network 58. The dev data may be used to investigate the learning procedure and tune the layers and parameters.

At step 106, classifier 30—provided with classification neural network 58—is tested on the testing data. At step 108, an evaluation is made as to whether the performance of classifier 30 is satisfactory. If the performance is unsatisfactory, processing returns to step 92, where more training data are collected.

If, at step 108, the performance is found to be satisfactory, processing continues at step 110, where the trained classifier 30 is outputted or flagged for deployment, or released for use. Processing then ends.

It will be understood by persons skilled in the art of the invention that many modifications may be made without departing from the scope of the invention; in particular, it will be apparent that certain features of embodiments of the invention can be employed to form further embodiments.

It is to be understood that, if any prior art is referred to herein, such reference does not constitute an admission that the prior art forms a part of the common general knowledge in the art in any country.

In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.
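The training flow of steps 98 to 110 can be pictured with a short, generic sketch. This is not the embodiment's code (the feature sizes, the four classes, the optimizer and the loop length are placeholder assumptions), but it shows the split, initialization, gradient-based update and evaluation stages in order:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Toy stand-ins for (flattened) segmentation-map features plus non-image features, and labels.
X = torch.randn(1000, 32)
y = torch.randint(0, 4, (1000,))
train_set, dev_set, test_set = random_split(TensorDataset(X, y), [700, 150, 150])   # step 98

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))   # steps 100, 102
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                                                  # step 104
    for xb, yb in DataLoader(train_set, batch_size=64, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                    # gradient of the loss function
        optimizer.step()

correct = total = 0                                                      # steps 106, 108
with torch.no_grad():
    for xb, yb in DataLoader(test_set, batch_size=256):
        correct += (model(xb).argmax(dim=1) == yb).sum().item()
        total += yb.numel()
print("test accuracy:", correct / total)   # deploy if satisfactory (step 110), else gather more data (step 92)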
Q: Impossible to make a scalogram of a signal with 262144 samples
I have a signal with 262144 samples, sampled at 200 Msps. I want to use the CWT (Continuous Wavelet Transform) to make a time-frequency scalogram. I have worked on some scripts in Python and Matlab, but the computation is very hard on the machine and exhausts the computer's memory with 262144 samples. The code examples I have run only work with 1024 or 512 samples, which the computer handles easily. But for 262144 samples of a 1D signal, it is impossible to process the data and output a good scalogram (which is the goal). The signal has frequencies around 10 kHz, 20 kHz ... 200 kHz, 1 MHz ... 2 MHz. I'm most comfortable with Python, which is powerful and great for data mining (Matlab seems less memory-efficient to me), so any suggestions, code or toolboxes in Python would be welcome.
A: Usually this is done by chopping a long signal into shorter windows, doing the transform on each window, then recombining the results. For finding 10 kHz signals from 200 Msps data, you will need either longer windows than 1k samples, or to downsample the data before doing the transform.
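One possible way to act on that answer in Python (a rough sketch only, assuming PyWavelets and SciPy are installed; the decimation factors, window length, wavelet and scales are placeholder choices to tune for the actual signal):

import numpy as np
import pywt
from scipy.signal import decimate

fs = 200e6                               # 200 Msps
x = np.random.randn(262144)              # placeholder for the real 262144-sample signal

# Downsample first: for content up to ~2 MHz, a much lower rate is enough.
# Decimating by 10 and then by 4 gives fs/40 = 5 Msps (Nyquist 2.5 MHz).
y = decimate(decimate(x, 10), 4)
fs_y = fs / 40

# Transform in overlapping windows and stitch the magnitude scalograms together.
win, hop = 4096, 2048
scales = np.geomspace(2, 512, num=64)    # coarse, log-spaced scales
pieces = []
for start in range(0, len(y) - win + 1, hop):
    seg = y[start:start + win]
    coefs, freqs = pywt.cwt(seg, scales, 'morl', sampling_period=1 / fs_y)
    pieces.append(np.abs(coefs[:, hop // 2: hop // 2 + hop]))   # keep the centre, drop edge effects
scalogram = np.concatenate(pieces, axis=1)
print(scalogram.shape, freqs.min(), freqs.max())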
Q: Eliminating functions from system of PDE in Mathematica Mathematica users, Let's see an example of three PDEs: How can I eliminate variables q1[x,y,z] and q2[x,y,z] from a system of three PDEs in Mathematica. In final form, I need just one equation in function of p1[x,y,z]. Why this code is working well when I want to eliminate q1 and q2 from a1, a2 and a3, but doesn't work when I want to eliminate q1 and q2 from a1, a2, a3 and a4? a1=A1*D[p1[x,y,z],{z,2}]-A2*(D[p1[x,y,z],{x,2}]+ D[p1[x,y,z],{y,2}]+D[q1[x,y,z], {x,1}]+D[q2[x,y,z],{y,1}])==0; a2=A9*(A2*D[(D[q1[x,y,z],{x,1}]+D[q2[x,y,z],{y,1}]),{x,1}]+A2*(D[q1[x,y,z],{x,2}]+ D[q1[x,y,z],{y,2}]))-A4*(D[p1[x,y,z],{x,1}]+q1[x,y,z])-A8*D[q1[x,y,z],{z,2}]==0; a3=A9*(A6*D[(D[q1[x,y,z],{x,1}]+D[q1[x,y,z],{y,1}]),{y,1}]+A7*(D[q2[x,y,z],{x,2}]+ D[q1[x,y,z],{y,2}]))-A1*(D[p1[x,y,z],{y,1}]+q2[x,y,z])-A4*D[q2[x,y,z],{z,2}]==0; a4=A09*A006*D[(D[q2[x,y,z],{x,1}]+D[q2[x,y,z],{y,1}]),{y,1}]+A004*D[q1[x,y,z],{z,2}]==0; rule=Flatten[{#[x,y,z]->#,Derivative[n_,m_,o_][#][x_,y_,z_]-># x^n y^m z^o}&/@{p1,q1,q2}]; invRule=(x_->z_):>z Derivative[Sequence@@x][p1]; ww1=Eliminate[{a1,a2,a4}/.rule,{q1,q2}]/.(x_==y_)->(x-y); ww2=CoefficientRules[ww1,{x,y,z}]; t=Total[ww2/.p1->1/.invRule]==0; t A: We start with the given differential polynomials. a1 = A1*D[p1[x, y, z], {z, 2}] - A2*(D[p1[x, y, z], {x, 2}] + D[p1[x, y, z], {y, 2}] + D[q1[x, y, z], {x, 1}] + D[q2[x, y, z], {y, 1}]); a2 = A9*(A2* D[(D[q1[x, y, z], {x, 1}] + D[q2[x, y, z], {y, 1}]), {x, 1}] + A2*(D[q1[x, y, z], {x, 2}] + D[q1[x, y, z], {y, 2}])) - A4*(D[p1[x, y, z], {x, 1}] + q1[x, y, z]) - A8*D[q1[x, y, z], {z, 2}]; a3 = A9*(A6* D[(D[q1[x, y, z], {x, 1}] + D[q1[x, y, z], {y, 1}]), {y, 1}] + A7*(D[q2[x, y, z], {x, 2}] + D[q1[x, y, z], {y, 2}])) - A1*(D[p1[x, y, z], {y, 1}] + q2[x, y, z]) - A4*D[q2[x, y, z], {z, 2}]; a4 = A09*A006* D[(D[q2[x, y, z], {x, 1}] + D[q2[x, y, z], {y, 1}]), {y, 1}] + A004*D[q1[x, y, z], {z, 2}]; Take three prolongations (that is, derivatives with respect to each variable in turn). diffpolys = {a1, a2, a3, a4}; vars = {x, y, z}; d2 = Join[diffpolys, Flatten[Outer[D, diffpolys, vars]]]; d3 = Union[Join[d2, Flatten[Outer[D, d2, vars]]]]; d4 = Union[Join[d3, Flatten[Outer[D, d3, vars]]]]; pvars = Cases[Variables[d4], Derivative[__][p1][__]]; qvars = Cases[Variables[d4], Derivative[__][q1 | q2][__]]; Length[d4] (* Out[135]= 80 *) Length[qvars] (* Out[136]= 110 *) 110 unknowns in 80 polynomials is not promising. Undaunted, we forge ahead. Timing[ gb = GroebnerBasis[d4, pvars, qvars, MonomialOrder -> EliminationOrder, CoefficientDomain -> RationalFunctions];] (* Out[137]= {7.540000, Null} *) So here it is. 
gb {A006*A09*A1*A4*Derivative[0, 1, 2][p1][x, y, z] + A006*A09*A1*A8*Derivative[0, 1, 4][p1][x, y, z] - A006*A09*A2*A4*Derivative[0, 3, 0][p1][x, y, z] + ((-A006)*A09*A2*A8 - A006*A09*A1*A2*A9)*Derivative[0, 3, 2][p1][ x, y, z] + A006*A09*A2^2*A9*Derivative[0, 5, 0][p1][x, y, z] + (A006*A09*A1*A4 - A004*A2*A4)* Derivative[1, 0, 2][p1][x, y, z] + (A006*A09*A1*A8 + A004*A1*A2*A9)*Derivative[1, 0, 4][p1][x, y, z] - A006*A09*A2*A4*Derivative[1, 2, 0][p1][x, y, z] + ((-A006)*A09*A2*A8 - A006*A09*A1*A2*A9 - A004*A2^2*A9)* Derivative[1, 2, 2][p1][x, y, z] + A006*A09*A2^2*A9* Derivative[1, 4, 0][p1][x, y, z] + ((-A006)*A09*A2*A8 - 2*A006*A09*A1*A2*A9)* Derivative[2, 1, 2][p1][x, y, z] + 3*A006*A09*A2^2*A9* Derivative[2, 3, 0][p1][x, y, z] + ((-A006)*A09*A2*A8 - 2*A006*A09*A1*A2*A9 - A004*A2^2*A9)* Derivative[3, 0, 2][p1][x, y, z] + 3*A006*A09*A2^2*A9* Derivative[3, 2, 0][p1][x, y, z] + 2*A006*A09*A2^2*A9* Derivative[4, 1, 0][p1][x, y, z] + 2*A006*A09*A2^2*A9* Derivative[5, 0, 0][p1][x, y, z]} This does seem to match your result. The fact that I do not understand why your method works is no indication of shortcoming (other than perhaps on my part).
What is birding? Simply put, birding (or birdwatching) is the act of enjoying wild birds. If you’ve ever watched a bird at a backyard feeder, photographed a bird in a beautiful setting, or stopped to listen to a bird singing along your favorite trail, then you are already a birder! Birding is enjoyed by many people in many ways around the world. What do I need to get started? You really don’t need much equipment to get started in birding. You can use binoculars or a spotting scope to see birds better at a distance. A good field guide to birds of your region will help you identify the species that you encounter. Binoculars and a field guide make finding and identifying birds a little easier, but even without them, you can enjoy the birds around you. How do I identify a bird? You don’t need to identify every bird you see or hear, but figuring out the names of the birds you find can be a big part of the fun. It can also make it easier to learn more about the species, such as migration patterns and conservation needs, by looking the bird up in a book or online once you’ve identified it. When you encounter an unfamiliar bird, ask yourself a few questions. What is the general size and shape of the bird? (For example, is it a small, sparrow-like perching bird or a medium-sized duck?) What are the main colors and markings? (Is it bright yellow with black wings or brown with a spotted breast?) What is the bird doing? (Is it climbing down a tree trunk or soaring overhead?) And in what type of habitat is the bird? (Is it in an open grassland or in the middle of a forest?) Birds are incredibly diverse, with thousands of species across the globe, and the answers to these types of questions can really clue you in to what type of bird you have found. It is also helpful to remember that birding is more than just watching: Many birds produce unique songs and calls, and paying attention to these sounds will help you find and identify more birds. A good field guide, which can either be a book or a smartphone app, will really help you know what birds to expect in your area and how to identify them. Visit your local bookstore or library, ask a teacher, or search online to find a field guide to birds of your region. How do I connect with other birders? Birding is a fun activity to do on your own but is even better when shared with others. Join a bird walk at a park or refuge, connect with a local Audubon Society chapter, visit a World Migratory Bird Day event, or find and follow social media pages and groups to meet other birders, discover new places to explore, and learn more about the birds in your region. Connecting with the birding community is one of the best things about being a birder!
https://www.migratorybirdday.org/business-1/30941-2/
Q: RxJS: debounceTime on Rx.Observable.interval is not working as expected In the following line of code I am expecting the printing of here every 2 seconds. But nothing is being printed: Rx.Observable.interval(1000).debounceTime(2000).subscribe(x => console.log('here')) However in the following line of code, here is printed every 2 seconds, as expected: Rx.Observable.interval(2000).debounceTime(1000).subscribe(x => console.log('here')) In the first case I am expecting an event stream of 1 second period to be debounced to 2 seconds period. This does not seem to work. And in the second case I am expecting an event stream of 2 seconds period to be debounced to 1 second period. This seems to work. Why is the first case not working as expected? Is there something wrong in my expectation? A: You may confuse debounce with throttle. debounceTime For every item, wait X ms until no other item is emitted, and only then emits the item. Rx.Observable.interval(1000).debounceTime(2000).subscribe(x => console.log('here')) All items are dropped since an item will always be emitted within 2000 ms. throttleTime Emits an item if no other items were emitted during the last X ms. Otherwise, the item is dropped. Rx.Observable.interval(1000).throttleTime(2000).subscribe(x => console.log('here')) Prints an item every 2000 ms.
Q: Nesting a dictionary within another dictionary, grouping by values in a Pandas Dataframe In this previous question: Nesting a counter within another dictionary where keys are dataframe columns , @Jezrael showed me how to nest a counter within another dictionary. My dataframe has another column which is effectively a superset of the ID, and is not named in a way which allows for the SuperID to be logically derived from an ID. SuperID ID Code E1 E1023 a E1 E1023 b E1 E1023 b E1 E1023 b E1 E1024 b E1 E1024 c E1 E1024 c E2 E1025 a E2 E1025 a E2 E1026 b Using the dictionary which was produced in the last stage, d = {k: v.value_counts().to_dict() for k, v in df.groupby('ID')['Code']} print (d) {'E1023': {'b': 3, 'a': 1}, 'E1024': {'c': 2, 'b': 1}, 'E1025 : {'a' : 2}, 'E1026 : {'b' : 2}} I would like to perform another level of nesting, where the SuperID is the key of the outer dictionary with the inner dictionary being the dictionary produced above, with IDs grouped by SuperID. So the dictionary should effectively be of the format: new_d = {k: v for k in df.SuperID, v in df.groupby('SuperID')[ID FROM d]} {'E1': {'E1023': {'b':3, 'a':1}, 'E1024' : {'c':2, 'b': 1}...} 'E2': {'E1025: {'a' : 2}...}} I would like to keep the original dictionary, produced by @Jezrael to allow me to perform an easy lookup by ID which I will need to do at a latter stage. A: Use nested dictionary comprehension: d = {k: {k1: v1.value_counts().to_dict() for k1, v1 in v.groupby('ID')['Code']} for k, v in df.groupby('SuperID')} print (d) {'E1': {'E1023': {'b': 3, 'a': 1}, 'E1024': {'c': 2, 'b': 1}}, 'E2': {'E1025': {'a': 2}, 'E1026': {'b': 1}}}
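A quick usage note: with the nested result you can drill down by SuperID and then by ID (and you can of course keep the flat per-ID dictionary from the earlier question around as well, if you still need the direct lookup):

print(d['E1']['E1023'])        # {'b': 3, 'a': 1}
print(d['E2']['E1025']['a'])   # 2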
If you’re up-to-date with progress in natural language processing research, you’ve probably heard of word vectors in word2vec. Word2vec is a neural network configuration that ingests sentences to learn word embeddings, or vectors of continuous numbers representing individual words. [Related Article: Watch: State of the Art Natural Language Understanding at Scale] The neural network accepts a word, which is first mapped to a one-hot vector (all 0s, except for a single 1). That vector is used as input to a fully-connected hidden layer with linear activation functions (e.g. no feature transformation) called an embedding layer. The embedding layer is followed by a softmax output layer with one neuron for each word in our corpus. The objective is to accurately predict the probability the word appears in the same sentence as each other word. Put simply, we’re attempting to model the word’s context accurately. Once performance is deemed satisfactory, the output layer is detached. All that are left are the input and dense embedding layers. The latter gives us access to what we want — the learned word vectors. Representing a word by a bunch of numbers can be baffling, as can determining why we’d even want to. There’s no perfect explanation for why this works. However, some intuition can explain why a word vector is acceptable and useful to represent this information. Why Word Vectors? Word vectors’ primary use is to intuit something about what they mean by how they relate to each other. That can be weird to think about, but it helps us solve very sophisticated tasks in natural language processing. For example, think about the basic setup for an analogy: A is to B as X is to Y. For example, husband is to wife as king is to queen. We need to understand something about the relationship between a husband and wife to understand there’s a similar relationship between a king and queen. You could give someone collections of texts on the subject and let them slowly start to figure out what the commonality is. This is very complex for a machine to understand. How do you encode information about how a husband and wife are related? How do we discover that commonality with royalty? And how do we determine the difference between regular and royal couples? Word vectors offer an intuitive (albeit imperfect) solution. Using a vector representation in N dimensions, we can analyze the vectors’ orientations relative to one another and make comparisons. The husband and wife word vectors will be oriented in a very specific way, with a given angle and distance from each other. We might observe a similar orientation between the king and queen word vectors, within a given tolerance. This is purely heuristic, but it seems to work pretty well. Plus, it’s the closest we have to capturing the concept of an analogy in a statistical way. A similarly elementary operation will allow us to understand how concepts ‘mesh’ to create new ones. For example, we might add the vectors for man and royalty together. You can probably guess that man + royalty = king. Learned word vectors can arrive at the same conclusion. Word Vectors In Practice [Related Article: Watch: Understanding Unstructured Data with Language Models] Naturally, the quality of our derived word vectors depends on the amount of training data we have, instability during training and other factors. These are difficult to account for when building a model that is, ultimately, heuristic in nature. 
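The analogy arithmetic described above is easy to try for yourself. A rough sketch, assuming gensim is installed and a pretrained word2vec file is available locally (the filename below is only a placeholder):

from gensim.models import KeyedVectors

# Load pretrained vectors, e.g. a word2vec binary you have downloaded.
wv = KeyedVectors.load_word2vec_format("pretrained-vectors.bin", binary=True)

# "husband is to wife as king is to ?"  ->  vector arithmetic: king - husband + wife
print(wv.most_similar(positive=["king", "wife"], negative=["husband"], topn=3))

# "man + royalty" style composition
print(wv.most_similar(positive=["man", "royalty"], topn=3))

# How closely are two word vectors oriented?
print(wv.similarity("king", "queen"))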
With that said, word vectors are a novel way to represent natural language information that might be lost in other encodings. The extremely active field of natural language computing will undoubtedly find new ways to exploit the approach soon. Editor’s note: Want to learn more about NLP in-person? Attend ODSC East 2020 in Boston this April 13-17 and learn from the experts directly!
https://opendatascience.com/why-word-vectors-make-sense-in-natural-language-processing/
Aberystwyth University has put forward proposals to generate its own wind energy under a £2.5m project. The scheme would include three turbines and would help protect the institution from future increases in energy costs. If constructed, the three 250kw turbines could generate enough electricity to meet up to 10% of the university’s current annual electricity needs and save a significant predicted amount in running costs over the 20-year life of the turbines. The initiative, dubbed the Aberystwyth University Sustainable Future project, would also assist in meeting targets to reduce carbon emissions and provide a teaching resource for students to access as part of their studies. The university is undertaking a feasibility study to establish what the planning and business requirements would be for developing the scheme. University pro vice-chancellor Rebecca Davies said: “Our requirement for energy and the price of electricity are both rising. Generating our own renewable, carbon-neutral energy is a fantastic way of helping the university meet both of these challenges over the coming years. It makes economical and environmental sense. “The turbines would form part of a wider commitment from the university to save energy and reduce our carbon emissions. This includes investment in insulation, energy-efficient boilers and heating systems, low-energy lighting, photovoltaic panels and solar thermal technology.” If constructed the turbines would be located in two sites owned by the university; one on land next to the A487 at the Penglais Student Village development approaching Aberystwyth and two on land opposite Gogerddan Campus next to the A4159. The turbines would measure 30m in height to the hub, with a total height of 45m to the tip of each blade. Members of the public are invited to find out more about the project at the foyer of the university’s Old College site from January 28 to February 6, at the Students’ Union from February 9 to 13, and at the foyer of IBERS on the Gogerddan Campus from February 16 to 25. A member of the Aberystwyth University Sustainable Future team will be available to talk to members of the public about the project from between noon and 2pm at each location. Ms Davies added: “We want to engage with people from the student community, Aberystwyth town, and the wider Ceredigion area on this proposal.” The university may submit planning applications for the two sites in the spring. A decision from Ceredigion council’s planning committee is anticipated in the autumn, and if positive, construction could begin in spring next year. This article is the work of the source indicated. Any opinions expressed in it are not necessarily those of National Wind Watch. The copyright of this article resides with the author or publisher indicated. As part of its noncommercial effort to present the environmental, social, scientific, and economic issues of large-scale wind power development to a global audience seeking such information, National Wind Watch endeavors to observe “fair use” as provided for in section 107 of U.S. Copyright Law and similar “fair dealing” provisions of the copyright laws of other nations. Send requests to excerpt, general inquiries, and comments via e-mail. |Wind Watch relies entirely | on User Funding Share:
https://www.wind-watch.org/news/2015/01/21/ambitious-plans-for-45m-high-wind-turbines-costing-2-5m-at-aberystwyth-university-unveiled/
Since sin²(x) + cos²(x) = 1, cos²(x) = 1 - 9/25 = (25-9)/25 = 16/25. Therefore, cos(x) = ±4/5. Since sin^(-1)(3/5) is in the first quadrant, the cosine must also be positive, and we have cos(x) = 4/5 as an exact answer, and, as a bonus, no calculator was used.

The "-1" in sin^(-1)(x), as in w = sin^(-1)(x), does not indicate a power of sin(x); the expression is a notation for a function called the inverse sine (or invsin or arcsin) of x. The inverse sine of x is the acute angle whose sine is x, so the equation above implies that x = sin(w). If x is negative, its inverse sine is negative.

Tell me, hatcher777, do you have a textbook which explains the basics of trigonometry (and also mathematical notation in general), including what I've just explained? I ask because your main problem seems to be that you are completely unfamiliar with some of the concepts. I am willing to explain things here, but it will take time. You could proceed much faster using a good introductory textbook.

The inverse tangent works rather like the inverse sine, but the inverse cosine is a bit different because cosine is an even function, whereas sine and tangent are odd functions.
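For reference, the same computation in general form (a short summary in LaTeX of the identity used above, under the usual principal-value convention):

\text{For } x \in [-1, 1],\quad \theta = \sin^{-1}(x) \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right]
\ \Rightarrow\ \cos\theta \ge 0,\quad
\cos\!\big(\sin^{-1}(x)\big) = \sqrt{1 - \sin^{2}\theta} = \sqrt{1 - x^{2}},
\qquad \text{e.g. } \cos\!\big(\sin^{-1}\tfrac{3}{5}\big) = \sqrt{1 - \tfrac{9}{25}} = \tfrac{4}{5}.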
http://mymathforum.com/algebra/247-exact-values-2.html
BACKGROUND DETAILED DESCRIPTION It is common to share documents between users to allow for review of, or remote collaboration in creation of, the documents. The document may be a word processing document that is initially authored by first user. The first user may then provide a copy of the document to a second user for the second user to review and possibly modify. This review and modification process may involve one or more additional users. Also, the first user may modify the document. A result of the modification(s) made to the multiple copies of the documents may be that a particular part of the multiple copies of the documents may have been modified in different ways. In other words, there may be multiple versions of the particular part of the document. When a merge of the multiple versions of the document is later attempted, some conventional mechanisms present markers to a user indicating existence of a conflict in the different versions of the document. A user would then manually resolve the conflict, which may be tedious and time-consuming. Documents can be in modular form to provide for enhanced flexibility and improved efficiency. A “modular document” refers to a document that is composed of separately identifiable component documents. A “document” refers to a container of data that is identifiable. The component documents of a source document are combined to allow for proper presentation (e.g., viewing or listening) of the modular document. Combining the component documents also achieves a desired behavior of the modular document (e.g., load appropriate objects such as images or text, retrieve variable data from a selected source or sources, etc.). Some component documents can be shared by multiple modular documents, and any one of the component documents of a modular document can be modified or replaced. A modular document generally refers to any data structure container that has references to other documents that in combination make up the modular document. Under certain scenarios, it may be desirable to share a modular document among multiple users. To enable such sharing, one or more instances of the modular document can be provided to one or more corresponding users. An “instance” of a modular document refers to either a modified version or unmodified copy of the modular document. The original modular document can be referred to as a “first” instance of the modular, while a copy or different version of the modular document can be referred to as a “second” instance of the modular document. Modifications can be made by users to the first instance of a modular document and/or the one or more second instances of the modular document. At a later point in time, an attempt can be made to merge the first instance with one or more of the second instances. In performing the merge, it may be the case that there may be conflicts between different instances of the modular document, where the different instances have become different versions by having different modifications performed on them. For example, a first user may have modified a particular part of one of the instances in a first manner, while another user has modified the same part in another instance of the modular document in a second manner. In such a scenario, conflict resolution is performed when merging different instances of the same modular document. 
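For concreteness, a modular document of this kind could be modelled roughly as follows (an illustrative Python sketch only; the class and field names are invented here, not taken from the description):

from dataclasses import dataclass, field
from typing import List

@dataclass
class ComponentRef:
    doc_id: str            # identifier of the referenced component document
    part: str = "all"      # optionally, which part of it to use (data, style, image, ...)

@dataclass
class ModularDocument:
    doc_id: str
    components: List[ComponentRef] = field(default_factory=list)
    version: int = 1

    def instance(self):
        # A second instance starts out as an unmodified copy of the first.
        return ModularDocument(self.doc_id, list(self.components), self.version)

brochure = ModularDocument("brochure", [ComponentRef("logo"), ComponentRef("legal"), ComponentRef("product", part="images")])
copy_for_review = brochure.instance()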
Some conventional techniques involve presenting a marker of a conflict existing in different versions of a document to a user for the user to manually perform the conflict resolution. In a large modular document where there can be many conflicts between different versions of the modular document, such a manual process of conflict resolution is not efficient. In accordance with some embodiments, to perform conflict resolution between two or more different instances of a modular document, a merge definition can be provided in the modular document. The merge definition provides information regarding how conflicts are to be resolved. For example, the merge definition can indicate that a newer version of a part of any document replaces an older version of the same part. Whether a version is newer or older can be based on timestamps associated with the versions. Alternatively, the merge definition can specify that a change made by a higher-level user takes precedence over a change made by a lower-level user. The levels of the user can be defined by the hierarchy of an enterprise, (e.g., company, government agency, educational organization and so forth), where an example of a higher-level user is a supervisor, manager, director, or officer. Other types of merge definitions can be defined in other scenarios. By incorporating a merge definition into the modular document, conflict resolution can be defined on a per-modular document basis, such that a generic conflict resolution policy does not have to be defined across all modular documents. A generic conflict resolution policy that applies to all documents may not take into account various scenarios or specifications. During a merge process of multiple instances of the modular document, the merge definition of the modular document is accessed to determine how any conflicts between the different instances are resolved. Merging of the multiple instances of the modular document that are different versions of the modular document causes a single instance that is the merged document to be created. In one example, a modular document can include the following component documents: a component document containing branding information (such as company logo) that is added to an output of the modular document; a component document containing style information related to the modular document; a component document containing legal notices; a component document containing product images; a component document containing descriptions of a product; a component document containing terms and conditions associated with the sale of a product; and so forth. By using a modular document that is composed of multiple component documents, it is possible to modify the modular document by changing just one or more of the component documents. This enhances efficiency since the entire modular document does not have to be changed in response to a change in the component document. As discussed above, a modular document is made up of component documents. In alternative embodiments, each document (a modular document or a component document) can include parts. For example, the parts of a document can include a data part, a style part, an image part, and so forth. A document may be constructed from parts of another document. For example, one document may refer to and re-use just the style and image parts of another document. 
As a result, in these embodiments, a dependency reference can identify both a component document plus the part of the component document to be used in the construction. More generally, a “part” of a document refers to a portion less than the entirety of the document. One example type of a part of a document is a component document as discussed above. The ensuing discussion refers to merge processes for merging multiple instances of modular documents containing component documents—however, it is noted that the same or similar techniques are applicable for merging multiple instances of modular documents containing parts. FIG. 1 is a block diagram of an exemplary arrangement that includes a source computer 100 (or other source electronic device) and one or more destination computers 102 (or other destination electronic devices) that are coupled over a data network 104 to the source computer 100. Examples of the data network 104 include any one or more of a local area network (LAN), a wide area network (WAN), the Internet, and so forth (whether wired or wireless). The source computer 100 includes storage media 106, which can be implemented with one or more disk-based storage devices and/or one or more integrated circuit (IC) or semiconductor memory devices. As shown in the example in FIG. 1, the storage media 106 contains a modular document 108 that references multiple component documents 110, 112. Although just one modular document 108 is shown, it is noted that the storage media 106 can contain more than one modular document. It is also noted that in some applications, at least one of the component documents 112 can itself be a modular document that references other component documents. Thus, generally, a first modular document can reference component documents, where it is possible that at least one of the component documents is a second modular document that in turn references additional component documents. Moreover, it is also possible that at least one of the additional component documents is a third modular document that references further component documents. This hierarchical referencing of modular documents can be performed to any depth. The storage media 106 also stores destination computer document information 115, which contains information identifying documents that are stored at a particular one of the destination computers 102. For example, the destination computer document information 115 can indicate which component documents are available at the particular destination computer 102. The destination computer document information 115 can be used to determine which component documents of the modular document 108 are to be sent to the particular destination computer 102 when sending a copy of the modular document 108 to the particular destination computer 102. The source computer 100 further includes a processor 114 connected to the storage media 106. The processor 114 is connected to a network interface 116 that allows the source computer 100 to communicate over the data network 104 with the destination computers 102. In addition, the source computer 100 includes a modular document merge software 117 that manages merging of conflicting versions of a modular document in accordance with some embodiments. As further shown in FIG. 1, a destination computer 102 includes a storage media 120 that contains one or more modular documents 122, which can be instances of modular documents at the source computer 100.
The storage media 120 is connected to a processor 124, which is connected to a network interface 126 to allow the destination computer 102 to communicate over the data network 104. The destination computer 102 also includes application software 128 that is executable on the processor 124. The application software 128 can be a word processing software, a web browser, or any other software that is able to process and present modular documents. The destination computer 102 further has a modular document merge software 129 that is executable on the processor 124 to merge different versions of a modular document. FIG. 2 illustrates an example of the modular document 108. The modular document 108 contains references 202, 204 to corresponding component documents 206 and 208. In the example of FIG. 2, the first reference 202 contains a document identifier (which uniquely identifies the component document 206). The second reference 204 contains a document identifier for the component document 208. In accordance with some embodiments, a merge definition 210 is contained in the modular document 108, and merge definitions 212 and 214 can also be contained in component documents 206 and 208, respectively. When a second instance of the modular document 108 is created, the merge definitions 210, 212, and 214 can be copied to the second instance. It is possible that the merge definitions associated with modular document 108 (first instance) and the merge definitions associated with the second instance are subjected to different modifications, which can cause such merge definitions to also have conflicts. To address this issue (as discussed further below), additional merge definitions at different hierarchical levels can be defined to resolve merge definition conflicts. Each component document 206 and 208 further includes respective version information 216 and 218, to identify the version of the respective component document. FIGS. 2A and 2B illustrate an example based on the structure of FIG. 2. In FIG. 2A, a first instance and a second instance of modular document A are provided, where at the point in time represented by FIG. 2A, the first and second instances of modular document A are identical copies, since both instances refer to version W of component document 1 and to version B of component document 2. At a later point in time, as represented by FIG. 2B, users may have modified component document 1 for each instance of modular document A such that component document 1 of the first instance has been modified to version X, and component document 1 of the second instance has been modified to version Y. Both versions X and Y of component document 1 are successors to version W of component document 1. Note however that component document 2 remains unchanged—it remains at version B. When the first and second instances of modular document A are later brought together (such as at either the source or destination computers or at some other location) and merged, the merge process will create a new version Z of component document 1 that is combined according to a merge definition. Thus, modular document A will have component document 1 with versions W, X, Y, and Z, with Z being a successor to all of the other three versions W, X, and Y and therefore the “latest” and the one that is used when component document 1 without reference to a specific version is requested. In some embodiments, modular documents are defined using the Extensible Markup Language (XML).
With XML, it is possible to add into a document the merge definition to define how different versions of the document are to be merged. In a more specific example, the merge definition can be expressed as an XSLT (eXtensible Stylesheet Language Transformations) transform that can be embedded into a document. In other embodiments, the modular documents and merge definitions can be according to other formats. FIG. 3 is a flow diagram of a process of merging multiple instances of the modular document. The merging of multiple instances of a modular document can be performed at a document processing system. For example, an instance of a modular document can be loaded into the document processing system that already has another instance of the modular document. The loading of instances of a modular document into a document processing system can be part of a business process or workflow, such as due to a modular document being returned to an author after review. The document processing system can be represented by the source computer 100, one of the destination computers 102, or some other computer. The procedure of FIG. 3 can be performed by the merge module 117 or 129 of the source computer or destination computer, or by a similar merge module executed on another computer. The merge module receives (at 302) a request to merge multiple instances of a modular document, where the request can be automatically generated as a result of detection of loading of a second instance of the modular document where a first instance already is present. Alternatively, the request can be received from a user, for example, through a graphical user interface (GUI). In response to the request, the merge module retrieves (at 304) the multiple instances of the modular document. If the multiple instances of the modular document are determined (at 305) to be identical, then the merge process exits as the merge process does not have to be performed. On the other hand, if the multiple instances are different versions, then the merge process continues. To merge the multiple instances of the modular document, a merge definition is accessed (at 306). In one embodiment, the merge definition that is accessed is the merge definition in the first instance of the modular document. In a different embodiment, the merge definitions found in each of the first instance of the modular document and second instance(s) of the modular document are accessed to determine whether there are conflicts between the merge definitions found in the first instance of the modular document and in the second instance(s). A procedure to resolve conflicts in merge definitions is discussed further below. In a modular document with multiple component documents, the component documents themselves can have merge definitions, which can also be accessed for potentially resolving any conflicts within the component documents. Next, the merge module resolves (at 308) conflicts in the multiple instances of the modular document (including conflicts within different instances of component documents) according to the accessed merge definition(s). A conflict can be due to the first instance of the modular document having a first version of a particular component document, and a second instance of the modular document having a different version of the particular component document. The merge definition can specify which version of the particular component document is to be selected for use.
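As a rough illustration of what such merge definitions might look like in practice (this is invented for the example and is not the patent's implementation; the field names, the integer user levels, and the policy names are assumptions), the "newer version wins" and "higher-level user wins" rules described earlier could be expressed as small per-document policy functions:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class PartVersion:
    # One version of a part of a modular document (hypothetical structure).
    content: str
    modified_at: datetime   # timestamp of the modification
    author_level: int       # e.g. 0 = staff, 1 = manager, 2 = director

def newest_wins(a: PartVersion, b: PartVersion) -> PartVersion:
    # Merge definition: the version with the later timestamp replaces the older one.
    return a if a.modified_at >= b.modified_at else b

def higher_level_wins(a: PartVersion, b: PartVersion) -> PartVersion:
    # Merge definition: a change made by a higher-level user takes precedence;
    # a tie falls back to the timestamp rule.
    if a.author_level != b.author_level:
        return a if a.author_level > b.author_level else b
    return newest_wins(a, b)

# The merge definition carried inside a given modular document would name which
# of these policies to apply when two instances of the document conflict.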
In some embodiments, the merge definition allows production of a merge processing entity based on the definitions contained in the merge definition to merge the different versions of the modular document. A merged document is then produced (at 310). FIG. 4 illustrates a process of resolving conflicts in merge definitions of the different instances of the modular document, according to a further embodiment. The process of FIG. 4 can also be performed by the merge module 117 or 129 in the source computer 100 or destination computer 102, or by a merge module in another computer. The merge definitions found in the multiple instances of the modular document are compared (at 402). Next, it is determined (at 404) if a conflict exists. If no conflict exists, then the merge definition is output (at 405) for use, and the merge definition conflict resolution process ends. However, if a conflict in the merge definitions exists, then merge definitions at a next hierarchical level are accessed (at 406). A hierarchical arrangement of merge definitions can be defined. A first level of the merge definitions relate to resolving conflicts between different instances of the modular document. A second level of merge definitions specify how conflicts in the first level merge definitions are to be resolved. It is possible that the second level merge definitions can also have conflicts, in which case a third level of merge definitions are accessed to resolve conflicts in the second level merge definitions. This procedure can recursively proceed up the hierarchical arrangement of merge definitions. Ultimately, a final fallback conflict resolution policy exists to deal with cases where conflicts in the multiple levels of merge definitions cannot be resolved. Alternatively, if conflict resolution of conflict(s) in the merge definitions at multiple hierarchical levels is not possible, the conflicts can be presented to a user (such as through a GUI) to allow the user to select how conflict resolution is to be performed between merge definitions. Next, the next level merge definitions are compared (at 408) to identify any conflicts in the next level merge definitions. If a conflict is present (as determined at 410) then the process proceeds to task 406 to recursively access the next level of merge definitions. The tasks of 406, 408, and 410 are repeated until no conflicts are detected, in which case any conflicts in the lower level merge definitions are recursively resolved (at 412). After conflicts in the lower level merge definitions have been resolved, a selected merge definition is output (at 405). In alternative embodiments, conflict resolution can be defined using embedded programs provided in the modular documents. For example, as shown in FIG. 5, a modular document 500 includes an embedded program 502, which can include one or more methods (software routines) 504 and one or more policies 506 designed specifically for the corresponding modular document. The policy 506 associated with the embedded program 502 controls how the embedded program resolves conflicts during merging of modular documents. In this embodiment, the policy 506 associated with the embedded program 502 can be considered the merge definition. A request to merge modular documents would then be the stimulus to cause the embedded programs in the modular documents to be invoked for conflict resolution to perform merging of the modular documents.
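A very schematic sketch of the hierarchical idea behind FIG. 4 (hypothetical code, not taken from the patent): the merge definitions of two instances are compared level by level, a higher level is used to settle a conflict at the level below it, and a final fallback policy applies when every available level conflicts.

def resolve_merge_definition(defs_a, defs_b, fallback_choice, level=0):
    # defs_a / defs_b: per-level merge definitions of the two instances, ordered
    # from the document-level definition upward. Definitions above the first
    # level are assumed to be "choosers": callables taking two conflicting
    # lower-level definitions and returning the one to keep.
    a, b = defs_a[level], defs_b[level]
    if a == b:
        return a  # no conflict at this level
    if level + 1 >= min(len(defs_a), len(defs_b)):
        return fallback_choice(a, b)  # no higher level left: final fallback policy
    # Resolve the next level first (it may itself be in conflict), then let the
    # resolved higher-level definition choose between a and b.
    chooser = resolve_merge_definition(defs_a, defs_b, fallback_choice, level + 1)
    return chooser(a, b)

In the patent's terms, fallback_choice plays the role of the final fallback conflict resolution policy, and presenting the conflict to a user through a GUI would be an alternative way of implementing it.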
Instructions of software described above (including the merge software 117 and merge software 129 of FIG. 1) are loaded for execution on a processor (such as processor 114 or 124 in FIG. 1). The processor includes microprocessors, microcontrollers, processor modules or subsystems (including one or more microprocessors or microcontrollers), or other control or computing devices. As used here, a “processor” can refer to a single component or to plural components (e.g., one or more CPUs in one or more computers). Note that the instructions of the software discussed above can be provided on one computer-readable or computer-usable storage medium, or alternatively, can be provided on multiple computer-readable or computer-usable storage media distributed in a large system having possibly plural nodes. Such computer-readable or computer-usable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. Data and instructions (of the software) are stored in respective storage devices, which are implemented as one or more computer-readable or computer-usable storage media. The storage media include different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; and optical media such as compact disks (CDs) or digital video disks (DVDs). In the foregoing description, numerous details are set forth to provide an understanding of the present invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these details. While the invention has been disclosed with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover such modifications and variations as fall within the true spirit and scope of the invention. BRIEF DESCRIPTION OF THE DRAWINGS Some embodiments of the invention are described with respect to the following figures: FIG. 1 is a block diagram of an exemplary arrangement that includes multiple computers, in which an embodiment of the invention can be incorporated; FIG. 2 is a schematic diagram of a modular document containing references to component documents, according to an embodiment; FIGS. 2A-2B illustrate an example in which a component document of a modular document has been modified differently in different instances, for which a merge process according to an embodiment can be applied; FIG. 3 is a flow diagram of a process of merging multiple modular documents, in accordance with an embodiment; FIG. 4 is a flow diagram of resolving conflicts in merge definitions, according to a further embodiment; and FIG. 5 illustrates a modular document with an embedded program, according to an alternative embodiment.
Forecasting the daily stock price is an important task in the financial time series area, and it is known that singular value decomposition (SVD) entropy has predictive power for the stock market. This study attempts to develop various models and compare their performances in predicting the daily KOSPI200 index. The models are based on a singular value decomposition process combined with various correlation and entropy methods: Pearson correlation, Kendall correlation, Shannon entropy, Renyi entropy, Max-entropy, and Min-entropy. Input variables are moving time window SVD entropy series formed from the combinations of the two correlations and four entropies. Support vector regression is used to predict the daily KOSPI200 index, and model performance is evaluated using accuracy measures such as MAE, MAPE, and RMSE of the forecast values. As a result of its application, investors may obtain guidance for their trading strategies. References: Caraiani, P. (2014). The predictive power of singular value decomposition entropy for stock market dynamics. Physica A: Statistical Mechanics and its Applications, 393, 571-578. Gu, R., Xiong, W., & Li, X. (2015). Does the singular value decomposition entropy have predictive power for stock market?—Evidence from the Shenzhen stock market. Physica A: Statistical Mechanics and its Applications, 439, 103-113. Gu, R., & Shao, Y. (2016). How long the singular value decomposed entropy predicts the stock market?—Evidence from the Dow Jones Industrial Average Index. Physica A: Statistical Mechanics and its Applications, 453, 150-161. Maasoumi, E., & Racine, J. (2002). Entropy and predictability of stock market returns. Journal of Econometrics, 107(1), 291-312.
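A rough sketch of the moving time window SVD entropy feature described in the abstract (my own illustration, not code from the study; the window length, the use of a Pearson correlation matrix, and the Shannon entropy variant are assumptions chosen for concreteness):

import numpy as np

def svd_entropy(corr):
    # Shannon entropy of the normalized singular values of a correlation matrix.
    s = np.linalg.svd(corr, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def rolling_svd_entropy(returns, window=30):
    # Moving-window series: Pearson correlation of each window, then its SVD entropy.
    # returns: 2-D array, rows = days, columns = constituent stocks.
    out = []
    for t in range(window, len(returns) + 1):
        corr = np.corrcoef(returns[t - window:t], rowvar=False)
        out.append(svd_entropy(corr))
    return np.array(out)

# Example with simulated data: 500 days of returns for 10 stocks.
rng = np.random.default_rng(0)
fake_returns = rng.normal(0.0, 0.01, size=(500, 10))
entropy_series = rolling_svd_entropy(fake_returns, window=30)

A series like this, repeated for each correlation/entropy combination, would then form the input variables of the support vector regression that predicts the next day's index value.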
http://www.ec2017.org/kim/
JustCoding News: Inpatient, December 7, 2011 As the weather cools, the heat is on coders to properly report the high number of pneumonia cases they tend to see during the winter months. It's not always easy, considering the changing face of pneumonia testing and treatment and the number of documentation requirements for coding. In particular, cases "without a smoking gun," such as pneumonia without a positive chest x-ray, can be particularly challenging for clinician and coder alike, said Lolita M. Jones, RHIA, CCS, sole principal of Lolita M. Jones Consulting Services in Fort Washington, MD. Jones spoke along with Joy J. King, RHIA, CCS, CCDS, principal of Joy King Consulting, LLC, in Birmingham, AL, during HCPro's September 8 audio conference, "Top ICD-9-CM Trouble Spots: Master Clinical Background and Coding Guidelines for Accurate Coding." However, a solid understanding of both the clinical aspects and the guidelines for pneumonia coding can help coders correctly report pneumonia during the long winter months and beyond. Diagnosing pneumonia "Research is showing that we shouldn't be surprised to see more and more clinical diagnoses in the absence of positive chest x-rays," Jones said. There are a number of reasons why. For example, Jones noted that even when looking at the same x-rays, radiologists couldn't always agree whether an infiltrate was present, according to an article by Edward Doyle in the February 2006 issue of Today's Hospitalist. Doyle also found that CT scans may actually be a better tool for diagnosing clinical pneumonia. "A chest x-ray is probably still the first line of defense, the first thing a physician orders to figure out if [pneumonia] is present, but we may be getting to a point when a CT scan of the chest may soon become the prevailing test," Jones said. That said, the chest x-ray for infiltrates may remain the go-to diagnosis tool simply because of a number of quality care initiatives that require physicians to treat pneumonia patients within a certain number of hours after admission. Those initiatives "are looking for the physicians to say, for example, ‘I treated this particular condition because I found infiltrates on the x-ray so I know I'm treating pneumonia,' " Jones said, explaining that a physician knowing in his or her gut that a patient has pneumonia regardless of a negative chest x-ray doesn't always cut it with external review organizations. "Unfortunately, the quality guidelines and parameters out there haven't caught up with the fact that there are a number of conditions out there that can be treated based on signs and symptoms even if diagnostic tests are negative, and pneumonia is one of them." Another challenge for pneumonia diagnoses is the rise of drug-resistant organisms. Prescribed antibiotics may not work the first or even second time. Drug resistance was fairly uncommon in the past, according to Jones, but that's no longer the case. "We need to be sensitive to the fact that you can't say it wasn't pneumonia because the first round of antibiotics didn't work ... the patients may have a clinical pneumonia that is due to a bacteria or virus, for which the patient is drug-resistant," she said. As the medical industry adapts, coders may see other alternative testing methods documented more frequently in the record. For example, physicians may order C-reactive protein (CRP) tests to test for bacterial pneumonia. 
This is a finger stick test that measures the patient's level of CRP, which is stimulated by bacteria and rises in the presence of an infection. A very high level can indicate pneumonia, according to Doyle. "[Instead of] the tests that we're used to seeing and the treatment protocols that we're used to seeing, I think we really are moving into the next generation. And what we've become used to seeing in the past—what a lot of our queries have been based around, along with tests, documentation training, in-services with physicians—a lot of it needs to begin to move forward according to what's really being seen out there," Jones said. "So much of it is changing. We have to look at what is going on right now and how the practice of medicine is changing." Pneumonia coding guidelines Despite the changing clinical preferences for diagnosing pneumonia, the positive chest x-ray is still considered the standard by recovery audit contractors (RAC), the Office of Inspector General, and other auditors, King said. As such, coders need to watch for it in documentation. Coders should also look for indication that fluids were provided to the patient, particularly prior to a chest x-ray. If a patient presents with dehydration, he or she would usually need fluids before an infiltrate would show up on a chest x-ray regardless of the presence of pneumonia, King explained. Without a positive chest x-ray, coders should look for other signs and symptoms documented in the record, she said. These could include a heart rate above 100 bpm or a respiratory rate above 25, rales, crackles, rhonchi, a dullness to percussion, or decreased breath sounds. "Certainly, if you have a physician advisor or champion, or a [clinical documentation improvement] program, communication with the attending physicians about the importance of documenting more about their clinical diagnosis of pneumonia when they don't have that infiltrate is going to be increasingly important," King said. Another issue coders should watch for is hypoxemia with pneumonia. Unlike with respiratory failure, hypoxemia is not inherent to pneumonia. Per Coding Clinic, Second Quarter 2006, if a physician documents hypoxemia in the record, coders need to report it separately from the code for the pneumonia diagnosis, according to King. Finally, there is the ongoing issue of assuming causal organisms based on sputum cultures. As discussed in Coding Clinic, Second Quarter 1998, coders simply cannot do it, King explained. "Sputum cultures are often misleading or negative," she said. "The physician must actually document a link between the results on the culture and the pneumonia itself in order for coders to link them. This continues to be something that coders struggle with." ICD-10 coding for pneumonia Documentation requirements shouldn't change much with the switch to ICD-10-CM/PCS, according to Jones. "We still need a definitive diagnosis of pneumonia, or at the very least a statement on the discharge summary that the pneumonia was not ruled out so that we can know what we're dealing with from a coding standpoint." The codes, however, will certainly change; there are numerous codes for pneumonia in the new system, including the following:
- J18.0 (bronchopneumonia, unspecified organism)
- J18.1 (lobar pneumonia, unspecified organism)
- J18.2 (hypostatic pneumonia, unspecified organism)
- J18.8 (other pneumonia, unspecified organism)
- J18.9 (pneumonia, unspecified organism)
"This is just a sampling of the numerous codes out there for ICD-10," Jones explained. "There's a completely different batch of codes for when you have an organism specified." In particular, coders may want to note ICD-10-CM code J18.9, which is essentially the replacement code for the ICD-9-CM code 486 (pneumonia, unspecified organism), she said. Additional clinical pneumonia coding guidelines Need more guidance on coding for clinical pneumonia? Coding Clinic may well have the answers you need; the topic has been addressed in the publication numerous times. King notes the following Coding Clinic issues:
- Fourth Quarter 2010, p. 135
- First Quarter 2010, pp. 3, 12
- Third Quarter 2009, p. 16
- Fourth Quarter 2008, pp. 69, 140
- Second Quarter 2006, pp. 20, 24
- Second Quarter 2003, pp. 21–22
- Fourth Quarter 1999, p. 6
- Third Quarter 1998, p. 7
- Second Quarter 1998, pp. 3–5, 7
- First Quarter 1998, p. 8
- Third Quarter 1997, p. 9
- Fourth Quarter 1995, p. 52
- Third Quarter 1994, p. 10
- First Quarter 1994, pp. 17–18
- Third Quarter 1993, p. 9
- First Quarter 1993, p. 9
- First Quarter 1992, pp. 17–18
- First Quarter 1991, p. 13
- Third Quarter 1988, pp. 11, 13
- M-A 1985, p. 6
This list can be a very valuable tool as you develop guidelines for coding and querying for your coding staff, King says.
Q: Automatically reading values off a line graph in R
I have a line graph which I have generated in R from a dataset, but I'd now like to be able to read off the Discharge values at each hour so I can add these to another dataset for later use. As you can see, there are only 3 values in the graph for discharge (Y-axis): 0, 1.606, 0. The X-axis values corresponding to the given Y-axis values are: 0, 0.65 and 4.05. I now need to get the Y-axis values for hours 0, 1, 2, 3, 4 (approx. 0, 1.4, 0.97, 0.48, 0), and automatically generate a list of the values. I am using my custom function to generate the graph:
PlotSyntheticUnitHydrograph <- function(Qp, Tp, Tb) {
  dataPlotsX <- c(0, Tp, Tb)
  dataPlotsY <- c(0, Qp, 0)
  yRange <- range(0, c(0, Qp))
  plot(dataPlotsX, dataPlotsY, type="o", col="blue", ylim=yRange, ann=FALSE)
  title(xlab="Time (hours)", col.lab="blue")
  title(ylab="Discharge (m^3/s per 10mm)", col.lab="blue")
}
R code to run the function and generate this graph:
PlotSyntheticUnitHydrograph(1.606, 0.6509467, 4.04712)
For reference: Qp = Peak Discharge (the Y-axis peak on the graph); Tp = Time to Peak Discharge; Tb = Base time, i.e. the time taken to reduce from peak discharge to 0 (4.04 hours in this case). Is this possible to do within R? Thanks
A: It's not clear that the plot itself has anything to do with this question. Rather, you have a function defined on a couple of points, and you would like to interpolate its value. For that, there is a function approx that will take the original function and return the linear interpolated value over a set of points.
x <- c(0, .6509467, 4.04712)
y <- c(0, 1.606, 0)
Then we can do
> approx(x, y, 1:4)
$x
[1] 1 2 3 4
$y
[1] 1.44093787 0.96805270 0.49516752 0.02228235
Inner and outer planets are what our solar system is divided into based on a few differences. Humans have always been interested in the worlds beyond our own and have always tried to gather as much information as possible. Now we know more about the solar system than ever before and can easily distinguish between the two types of planets. Inner vs Outer Planets The main difference between inner and outer planets is that the inner planets lie much closer to the sun, and therefore receive more sunlight, while the outer planets lie much further away from the sun, receiving less sunlight, which makes the whole orbital region much colder. Inner planets are the planets present between the sun and the asteroid belt. This closer association with the sun gives the planets fairly high temperatures, making some of them not suitable for the survival of organisms. This temperature determines many other factors on all of the inner planets, such as the terrain and the way each planet has formed. Outer planets are placed at a much larger distance from the sun and beyond the great asteroid belt. These planets have undergone a great drop in temperature due to their distance from the sun. Such a drop in temperature affects the general environment of each planet, creating a huge difference in the planet’s terrain and atmospheric development. What is Inner Planets? The inner planets of the solar system are Mercury, Venus, Earth, and Mars. Inner planets are all placed in orbits around the sun and are present in the space between the sun and the asteroid belt. This means that their position in the solar system is very close to the sun when compared to most other things in space. Being this close to the sun gives the planets a great advantage in many cases. The higher temperature has led to the formation of a habitable environment on our planet Earth. The greater temperature has also helped in creating the different surface type that is unique to each of the four inner planets. All the inner planets have a rocky terrain accompanied by the gradual formation of mountain or hill-like structures and valleys. The major composition of the inner planets is minerals and unreactive metals like silver and platinum. The surface of most of the inner planets is rich in silicon and iron, thereby helping scientists come up with a certain speculation. This speculative thinking is that all the inner planets have a core that contains iron in the form of molten iron. This has given the inner planets another name: terrestrial planets. Their periods of revolution around the sun are short, as they are close to the sun. The planets tend to have a higher density, as the major component in all the planets is rocky in structure. The orbits that the terrestrial planets follow are closed and form a complete ellipse. The most fascinating thing about the inner planets is that despite being similar in all these properties, the general environment of each planet is unique. The surface of each planet, along with the atmospheric thickness and composition, is different for each of the inner planets. While Earth plays host to water, no traces of it have been found on any other inner planet as of now. The spin of each inner planet around its axis tends to be slower. The inner planets possess very few moons, with Earth having a single moon. What is Outer Planets?
The outer planets are Jupiter, Saturn, Uranus, and Neptune. The outer planets lie well beyond the asteroid belt, at a much greater distance from the sun. Due to their large distance from the sun, the outer planets are very cold and the atmospheric temperature is usually below zero. These planets are mainly composed of gases such as hydrogen and helium. This gaseous composition affects the general environment of these planets. Some planets have a higher acidic content and can therefore be dangerous for survival if not protected against extensively. The period of revolution of the outer planets tends to be much longer, as their distance from the sun is correspondingly large. The planets are all in general very low in density, and their densities can be comparable to, or in Saturn’s case less than, that of water. Outer planets are also commonly called Jovian planets. This name comes from the largest of the outer planets, Jupiter. The orbits of the Jovian planets are less regular, and the planets do not always trace a perfect ellipse while revolving around the sun. The outer planets do not all have the same composition, as they are made up of different mixtures of gases. These planets have strong magnetic fields, and their atmospheres host great storm systems. The low temperature and the strong gravitational pull of the planets keep the atmosphere intact, as it is gaseous. The planets often experience huge, hurricane-like storms, shaped by the Coriolis effect, that can be seen from Earth. The Great Red Spot on Jupiter and the Great Dark Spot on Neptune are two clear examples of such storms. Main Differences Between Inner and Outer Planets
- While the inner planets are found between the sun and the asteroid belt, the outer planets are well beyond the asteroid belt and at a greater distance from the sun.
- The outer planets have a greater number of moons of varying sizes, while the number is small for the inner planets.
- The orbits of the inner planets are complete ellipses, which gives them a shorter period of revolution, while the orbits of the outer planets are less regular.
- The general composition of all the inner planets is similar and rocky, while the outer planets are all gaseous but differ in the mix of gases.
- The density of the inner planets is much greater than that of the outer planets.
Conclusion All the planets in the solar system are composed of either gases or rocky terrain-like structures. Irrespective of whether a planet is inner or outer, it follows a specific orbit around the sun. The difference in densities is related to the general composition of the planets that fall into each category. Not all planets have a mineral composition, nor do all have a gaseous composition. Venus and Mercury are too close to the sun to be considered suitable for life. Other planets in the inner planets category are still being studied for any characteristics that may indicate living beings. Saturn is the only planet as of now that has a density less than that of water.
https://askanydifference.com/difference-between-inner-and-outer-planets/
Q: Is $\zeta=\frac{x dy \wedge dz+y dz \wedge dx+z dx \wedge dy}{r^3}$ exact in the complement of every line through the origin? $r=\sqrt{x^2+y^2+z^2}$ of course. If the line is the $z$ axis, it is given in the book (Rudin) that $\zeta=d \left( -\dfrac{z}{r} \dfrac{xdy-ydx}{x^2+y^2} \right)$ I've managed to figure out 2 similar cases by myself: $x$ axis: $\zeta=d \left( -\dfrac{x}{r} \dfrac{ydz-zdy}{y^2+z^2} \right)$ $y$ axis: $\zeta=d \left( -\frac{y}{r} \dfrac{zdx-xdz}{x^2+z^2} \right)$ I have a strong feeling that it's true for any line, but I'm having great difficulties of proving so. Any suggestions? A: Let $A,B,C$ be an orthonormal (and positively oriented/ordered) basis of $\mathbb R^3$ with $A$ in the direction of your line. Given position $p = (x,y,z),$ define new coordinate functions $a = p \cdot A, \; b = p \cdot B, \; c = p \cdot C.$ Then you get $da, \; db, \; dc.$ Then try $$ d \left( - \frac{a}{r} \; \frac{b dc - c db}{b^2 + c^2} \right) $$
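A brief sketch of why the same recipe works for every line (my own justification, not part of the quoted answer): the form $\zeta$ is the solid-angle form and is invariant under rotations of $\mathbb{R}^3$, so in the rotated orthonormal coordinates $a,b,c$ defined in the answer it keeps the same expression,
$$\zeta = \frac{a\, db \wedge dc + b\, dc \wedge da + c\, da \wedge db}{r^3}, \qquad r^2 = a^2 + b^2 + c^2 .$$
The complement of the chosen line is exactly the set where $b^2 + c^2 \neq 0$, so Rudin's $z$-axis primitive applies verbatim with $(x,y,z)$ replaced by $(a,b,c)$, which is the potential proposed above.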
Press release from Messukeskus According to the holiday travel forecast, which surveys Finns’ travel plans, 2019 was a busy travel year for Finns. On average, Finns made four trips in Finland and three trips abroad in 2019. “Domestic travel increased somewhat during 2019, and there seems to be a continuing interest in domestic travel, especially in city destinations. Finns also feel the pull of destinations abroad, as only 6 per cent of the respondents said that they would not travel abroad in 2020”, says Anne Lahtinen, research manager at Kantar. More than half of the respondents said that they intend to visit city destinations abroad. The trip is likely to be made to a destination in Southern Europe, where 30% of the respondents plan to travel, or to Estonia, which is favoured by 25% of the respondents. In 2020, in addition to cities, people will travel abroad for beach holidays and far-off destinations. “The changeable summer weather and the long and rainy autumn in Finland in 2019 probably at least partly explain why as many as 44% of the survey respondents are planning a beach holiday abroad. Compared to last year, there are more Finns with a yearning for sunshine and warm weather who are setting their sights on far-off destinations”, says Heli Mäki-Fränti, managing director at the Association of Finnish Travel Agents, AFTA. In contrast to long-haul travel, more and more Finns find holidaying in Finland in their own or rented holiday home attractive. More money will be spent on travel in 2020 One in three respondents is planning to spend more money on travel in 2020 than in the previous year. More than half will spend at least as much money on trips as in 2019. In the past year, money was spent especially on accommodation in Finland, as well as on flying and travelling by train. “When asked where they would travel if money was no object, the vast majority of the respondents said they would head to Australia, Japan, the United States, the Maldives or New Zealand. In Europe, the top two dream destinations are Italy and Iceland. It will be interesting to see where Finns will direct their increased travel spending. Will there be an increase in the popularity of destinations in Finland and the surrounding region, or will Finns fulfil their dreams of travelling to far-off destinations?”, says Anne Lahtinen, research manager at Kantar. Experts trusted when booking trips When booking a holiday trip, Finns increasingly rely on travel service providers and travel agencies. Some 68% book their trips through the websites of airlines, ship companies, railway companies, bus companies or other travel service providers, while 44% use the websites of travel agencies to book their trips. Most accommodation is booked through sales or comparison shopping websites for hotels and other travel services, or, increasingly, directly from hotel websites. More information and survey results: Matka Nordic Travel Fair, communications manager Eva Kiviranta, tel. +358 40 775 6609, [email protected] Kantar, research manager Anne Lahtinen, tel. +358 50 560 3186, [email protected] Tourism Survey 2020 by the Matka Nordic Travel Fair. Implemented by Kantar. The survey target group consisted of over-18-year-old people from Finland, excluding the Åland Islands. A total of 1,061 people answered the survey in November 2019.
https://www.tidende.dk/tidende/indland/2020/01/15/prm-2020-holiday-travel-forecast-by-matka-nordic-travel-fair-expect-to-find-a-finnish-tourist-on-a-beach-or-city-holiday-abroad-or-in-the-middle-of-nature-in-finland/
BACKGROUND OF THE INVENTION A common routine for drummers during practice sessions and when warming up on their drums is to play with two drum sticks in each hand. The added weight allows drummers to strengthen their wrist muscles. When the weights are removed from the sticks, the drummer notices an increase in control, speed and agility while playing. This practice has several drawbacks, however. First, extra drum sticks are bulky and can be difficult for the drummer to maneuver and control. The extra sticks are also inconvenient for the drummer to carry. Further, the added drum sticks are awkward and bulky to manipulate and can easily slip around in the drummer's hands while practicing. Moreover, additional drum sticks distort the true shape and feel of the drum sticks in the musicians' hands. There is therefore a need in the art for a more convenient and efficient means of improving drum playing skills which allows drummers to strengthen their wrist muscles without distorting the normal size and feel of the drum sticks. Accordingly, it is a primary objective of the present invention to provide a method and means of improving drum playing skills which eliminates the need for the drummer to practice using two sticks in each hand. It is another objective of the present invention to provide a method and means of improving drum playing skills which is convenient for the drummer to practice. It is a further objective of the present invention to provide a method and means of improving drum playing skills which does not distort the normal feel of the drum sticks. It is yet a further objective of the present invention to provide a method and means of improving drum playing skills which is not bulky or awkward to use. It is another objective of the present invention to provide a method and means of improving drum playing skills which is economical. The method and means of accomplishing each of the above objectives as well as others will become apparent from the detailed description of the invention which follows hereafter. SUMMARY OF THE INVENTION The invention describes a method and means for improving drum playing skills which also strengthens the drummer's wrist muscles. The method includes the application of at least one weight to one or both drum sticks during warm-up or practice. The weights are small and easy to carry and are not bulky when placed on the drum sticks. Further, the weights conform to the shape of the drumstick, and therefore do not distort the true shape and feel of the drum sticks in the musicians' hands. DESCRIPTION OF THE DRAWINGS FIG. 1 is a perspective view of a preferred embodiment of a drum stick weight in accordance with the present invention. FIG. 2 is a sectional view of a preferred embodiment of a drum stick weight in accordance with the present invention taken along lines 2- -2 of FIG. 1. FIG. 3 is an elevational view of a preferred embodiment of a drum stick weight in accordance with the present invention as shown on a drum stick. FIG. 4 is an elevational view of an alternative embodiment of drum stick weights in accordance with the present invention as shown on a pair of drum sticks. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT The drum stick weights of the present invention are generally designated in the drawings by the reference numeral 10. Each weight 10 generally includes weighted material 20 and may also have a means of attaching 30 the weighted material 20 to the drum stick 40. 
The weighted material 20 used in the drum stick weight 10 can be manufactured from a variety of materials that are conventionally used in making weights for other purposes, including metals such as lead, iron, graphite, and steel and various other materials including stone and wood. These materials can also be used in combination. The only requirement for the weighted material 20 is that it be sufficiently heavy so that it does not require large quantities of the material to add the requisite amount of weight to the drum sticks 40. The weighted material 20 should generally be included in an amount that adds from about one to seven ounces of total weight to the drum stick 40. The preferred weight is about two ounces. More or less weight can be added or subtracted from the drum stick weight 10 depending on the personal needs and preferences of the individual drummer. However, if too much weight is added, it may be too difficult for the drummer to maneuver the drum sticks 40. Conversely, if insufficient weight is added, the drummer will not derive a benefit from using the drum stick weights 10. The drum stick weight 10 may include a means of attaching 30 the weighted material 20 to the drum stick 40. The only requirement for the means for attaching 30 is that it somewhat conform to the shape of the drum stick 40 so that it is not overly bulky and/or distort the feeling of the drum stick 40 in the drummer's hand. A preferred means of attaching the weighted material 20 to the drum stick 40 is through the use of a wrap-around attachment 30 which evenly distributes the weight around the drum stick 40. Such wrap-around means of attachment 30 is shown in a preferred embodiment of the drum stick weights 10 in FIGS. 1-3. The attachment means 30 can generally be made of any type of material that is sturdy enough to hold the weighted material 20. Such materials include cotton, rayon, leather, burlap, nylon, plastic, etc. The attachment means 30 can be molded in one piece or made of two pieces of separate material which can be used to "sandwich" the weights 20. The attachment means 30 may be formed in a "hoop" so that it can simply be slid onto the drum stick 40. The attachment means 30 can also be made to include at least one fastener 32 on at least one end 34 to enclose the drum stick weight 10 around the drum stick 40. The fastener 32 can be any type of conventional fastener, such as Velcro, snaps, buttons, tape, glue, etc. Velcro is preferred since it allows the user to easily adjust the tightness of the fit of the weight 10 on the drum stick 40 and also compensates for drum sticks 40 of different widths. The attachment means 30 can also be made of a flexible or accordion pleated material that can be stretched to snugly fit around the drum stick 40. The weighted material 20 itself can also be curved so that it conforms to the shape of the drum stick 40. Thus, the weighted material 20 can then be slipped directly onto the drum stick 40 without the need for a separate attachment means 30. Furthermore, a liner can be placed along the inside part of the curved weighted material 20 which contacts the drum stick 40, so that the weighted material 20 better grips the drum stick 40 to prevent the drum stick weight 10 from sliding up and down the stick 40 or completely slipping off. Such liners can be made of plastic, rubber, or other material that is capable of creating a frictional surface between the weighted material 20 and the drum stick 40. FIG.
1 shows a preferred embodiment of the drum stick weights 10 wherein the weighted materials 20 are elongated and placed in pockets 36 in the center portion 38 of the attachment means 30. The pockets 36 serve a number of functions including preventing the weighted material 20 from falling out of the attachment means 30, protecting the drummer's hands and drums from impact with the weighted material 20, and likewise protecting the weighted material 20 from damage. The pockets 36 also position the weighted material 20 so that it is evenly distributed around the circumference of the drum stick 40. The pockets 36 may optionally have openings so that one or more weighted material 20 may be removed or added to the drum stick weight 10 so that the overall weight of the drum stick 40 may be easily varied. The drum stick weight 10 shown in FIG. 1 is generally oval in shape. However, it can also be rectangular, round, square, or any other shape capable of wrapping around the drum stick 40. Oval is preferred. Each end 34 of the attachment means 30 is preferably reinforced with a heavy material, such as leather. Here, the attachment means 30 is shown with Velcro fasteners 32 on each end 34. While the drum stick weight 10 does not have to be of any particular length or width, as a practical matter it should have a horizontal length of between about 3 to 8 inches and a vertical width of between about 1 to 4 inches. If the length is more than 8 inches, the drummer's hands may contact the weight 10, thereby distorting the feeling of the drum sticks 40. If the drum stick weight 10 is wider than 4 inches, it may add too much bulk to the drum stick 40. Another embodiment of the drum stick weights 10 is shown in FIG. 4 and is designated as 10a. Here, at least one barbell shaped weight 20a is secured to the drum sticks 40 using elastic bands 30a. The weighted material 20a can also be similarly secured using non-elastic fasteners, such as string. There may also be a pad 22 positioned between the weighted material 20a and the drum stick 40 to prevent the weighted material 20a from sliding or scratching the drum stick 40. In practice, when the drummer begins warming up, he/she takes one or more drum stick weights 10 and slides it on or wraps it around the drum stick(s) 40. The weight 10 is placed on the drum stick 40 so that it preferably avoids contacting the drummer's hand while playing. While the drummer is playing, the increased weight increases the amount of strength necessary for the drummer to navigate the drum sticks 40 around the drums and provides increased resistance. Once the practice session or warm-up is over, the drum stick weights 10 are removed from the drum sticks 40. Once removed, the drummer will immediately notice that the suddenly lighter drum sticks 40 are much easier to maneuver while playing. With prolonged use, the drummer will notice increased muscle strength in the wrists and arms, which will contribute to an increased ability to play. Drummers can easily transport the compact drum stick weights 10 to and from practice sessions and gigs by slipping them in their pockets, drum stick holders, or drum cases. The invention has been shown and described above in connection with the preferred embodiment, and it is understood that many modifications, substitutions, and additions may be made which are within the intended broad scope of the invention. From the foregoing, it can be seen that the present invention accomplishes at least all of the stated objectives.