Travel above Dubrovnik by cable car, a breathtaking way to admire the city from on high, before immersing yourself in its history and discovering its most fascinating monuments on foot. Our excursion begins with a coach transfer to the lower station of the recently restored cable car. The ride takes just 4 minutes to reach the summit of Mount Srd, where we can enjoy a magnificent view of the city of Dubrovnik. The original Dubrovnik cable car service, inaugurated in 1969, was used by visitors and locals to enjoy the splendid view of Dubrovnik's old city and the surrounding area. The cable car system was destroyed during the war of independence that began in 1991 and remained out of service until its recent complete restoration. After a brief orientation tour at the top of the hill, we can take some photographs of this breathtaking panorama. A short walk takes us to the nearby Museum of the Croatian War of Independence, situated in a wing of the Imperial Fortress on Mount Srd, a symbol of the defence of Dubrovnik from 1991 to 1995. Photos, prints, arms, war maps, military equipment and objects from the daily lives of the residents and defenders of Dubrovnik during the siege are on display in the museum, in addition to original sound and video recordings and military battle flags. The return cable car journey takes us back to the lower station, just a ten-minute walk from Ploce Gate, the eastern entrance to Dubrovnik's old town near the old port, where our walking tour begins. We stroll along the Stradun, Dubrovnik's main street, passing the Church of Saint Blaise, the Onofrio Fountain, the Cathedral, Sponza Palace, the drawbridge and Pile Gate. Guests can return to the ship on the shuttle buses provided. The cable car operates only when weather conditions allow.
https://www.costacruises.com/excursions/21/2180.html
Amid a crisis like the COVID-19 pandemic, corporations are confronting the dramatic adverse implications of this outbreak. A team of IN'P lawyers produced this document to support corporations in withstanding these challenging impacts whilst providing a healthy, sustainable and productive work environment. The purpose of this document is to provide some legal guidance and insights to corporations, including: How UAE Employers Can Support their Business (Section II); Update re: How UAE Employers Can Support their Business (Section III); Remote Services Provided by Dubai Notary & Dubai Economy (Section IV); COVID-19 Effect on the Education Business: Know Your Rights (Section V); and Immigration Alert: Expired Residency and Visitors’ Visas Extended Until End of December (Section VI). How UAE Employers Can Support their Business The novel coronavirus (“COVID-19”) was declared a pandemic by the World Health Organization in March 2020 and, with person-to-person transmission, reported cases have continued to grow exponentially. As COVID-19 continues to unfold, employers are facing an avalanche of uncertainty over what they must do and what they can do. As governments and public health authorities adjust their policies to respond to the challenge, companies need guidance on how best to address the situation with regard to their employees. Accordingly, we have listed key points on employers’ rights under UAE Federal Law No. 8 of 1980 (“UAE Labour Law”) and recommended safety measures followed by global businesses. Employers’ Rights under UAE Labour Law Sick Leave: The UAE Labour Law entitles all employees who are genuinely sick to a maximum of 90 calendar days of sick leave in any 12-month period. The salary of sick employees during this period is paid as follows: the first 15 days are payable at full pay; the next 30 at half pay; and the remaining 45 days are unpaid. In light of the current COVID-19 situation, where most countries around the globe have announced the closure of their airspace and governments have been taking precautionary measures to apply social distancing, employees have to abide by these precautionary measures in order to reduce any chance of contamination. If an employee recklessly disregarded (i) the country’s travel restrictions and/or (ii) the employer’s internal policy in relation to the current situation, and consequently became infected, it is our view that sick leave as organised under the UAE Labour Law would not apply, because the sickness of the employee in question occurred as a result of his or her reckless behaviour. However, employers may choose to class the absence as sickness and apply normal sick pay rules as a precautionary measure to protect their business. Annual Leave: In the UAE, employees are entitled to a minimum annual leave of 30 calendar days. An employer has the right to determine the time and periods of annual leave and, therefore, employees could be required to take paid annual leave provided, of course, that they have sufficient leave balance available. In a situation where the annual leave entitlement is exhausted and the employee cannot return to work due to COVID-19, employers may place them on unpaid leave. However, employees cannot be forced to accept unpaid leave; this has to be mutually agreed in writing between both parties. Salary Reduction: As a general rule under the UAE Labour Law, employers cannot compel employees to accept a salary reduction.
However, they can discuss a salary reduction plan and its terms with employees, and an employee might want to accept the cut for the greater good of the company. Any agreement to reduce salaries must be accepted by both parties (employer and employee) and recorded in writing (please consider amending the employee’s contract with the Ministry or free-zone authority, as the case may be). Although the cost-cutting reflex is understandable, employers are obligated to make responsible decisions to keep their business afloat. Accordingly, employers should consider calculating pay cut percentages based on a hierarchy system. We believe that cost cutting should follow a top-down approach, starting with senior leaders/officers, as these cuts will cover the salaries of employees at the bottom of the pyramid (those handling day-to-day operations). Another tip in relation to cost cutting: it is more favourable to the employee if the reduction applies to allowances rather than to the basic salary, as this will have no effect on his or her end-of-service benefits (EOSB) later on. Unpaid Leave: Similar to salary reductions, an employer cannot force an employee to take unpaid leave. This arrangement is not regulated under the UAE Labour Law and has never been tested in practice. Therefore, whilst this is an option for employers to reduce the running cost of salaries, it has to be done by way of a written agreement with the employee. Unpaid leave may take the form of a specific period of, for example, one, two or three months, or unpaid leave of one week per month, with the understanding that the employee would work only three weeks each month. Layoffs: Employers are not allowed to terminate employees’ contracts without cause and, in such an event, employers will likely face claims for arbitrary dismissal under the UAE Labour Law. Further, as per recent publications, the Ministry of Human Resources and Emiratisation (“MOHRE”) has confirmed that its stance thus far is that the COVID-19 outbreak has not been declared a force majeure event and, in the absence of special provisions relating to the exceptional measures taken as a result of the outbreak, the status quo remains and both employers and employees will be bound by their rights and obligations as outlined under the UAE Labour Law and their relevant agreements. Thus, we believe that forced terminations will be treated as a breach of contract, and applicable precedents as to unlawful termination shall apply. Hiring and Termination of Employment Contracts before Commencement: Unfortunately, due to the COVID-19 outbreak, many employers have had to withdraw employment offers given to potential employees before the outbreak started, as these became a financial burden to the business. However, this may be considered a breach of contract causing the potential employee material damage, particularly where he or she has already resigned from a current position. Furthermore, the MOHRE has not declared COVID-19 a force majeure event. Should your company be in a similar situation where it needs to withdraw an offer that has already been accepted by the employee, we strongly encourage you to review all relevant employment agreements and offers and consult with your legal counsel to further understand your rights. How to Protect your Business Employers should review and update their business continuity plans to ensure the sustainability of their business, cash flow and capital. Setting people up to work from home is certainly the most logical, safest and most popular route for an employer.
It is therefore critical to ensure that you have the proper infrastructure in place, which includes making sure there are sufficient laptops, mobile phones, printers and any job-specific technology or facilities available for all employees who may need them. But that is not the only thing to consider: the duty to protect the health and safety of workers extends to home working. Employers should be carrying out risk assessments, which will vary depending on the type of work and how long the employee is expected to work at home. How to Protect your Employees Employers owe their staff a duty to protect their health and safety. As part of this, employers should communicate with employees to assure them that the employer is aware of the issues, is monitoring the situation and is taking steps to plan for disruption. The communication should also relay the current government advice in relation to infection prevention, including hygiene. As well as communicating advice, employers should make sure there are adequate hand-washing facilities available at the workplace. The advice from government and health professionals is that regular, proper hand washing is the best way to protect ourselves against infection. Providing alcohol-based hand gel and tissues is also sensible, not only for staff but also for members of the public who may use facilities such as hotels, shops and cafes. UPDATE Re: How UAE Employers Can Support their Business On 26 March 2020, in view of the declared nationwide state of emergency due to COVID-19, the UAE Ministry of Human Resources and Emiratisation (“MOHRE”) issued Ministerial Resolution No. (279) of 2020 on Employment Stability in the Private Sector during the Period of Application of Precautionary Measures to Curb the Spread of the Novel Coronavirus (the “New Decree”). The New Decree’s primary purpose is to help business owners in the UAE protect their business and develop a contingency plan to prepare for a range of eventualities regarding the business impact of COVID-19, by providing further guidance to the private sector on certain topics already regulated under UAE Federal Law No. 8 of 1980 (“UAE Labour Law”). The New Decree is valid only during the period of the precautionary measures taken to overcome this pandemic and protect the future of the country. The New Decree does not apply to UAE nationals, but rather to foreign employees only. It primarily allows private sector employers and business units that are affected by the precautionary measures to curb the spread of COVID-19 to apply gradual and progressive measures, in conjunction with their employees, to address and tackle the negative economic impact on their businesses. These measures are as follows: implementing a remote work system; granting employees paid leave; granting employees unpaid leave; temporarily reducing salaries during the aforementioned period; and permanently reducing salaries. Further, as business owners in the UAE are facing unprecedented challenges amidst the ongoing COVID-19 crisis, with no certainty as to when this crisis will end, the New Decree provides some relief by enforcing the above actions while taking into consideration the following: Temporary Salary Reduction: Under article 5 of the New Decree, businesses seeking to temporarily reduce the salary of an employee during the said period shall sign an additional annexure provided by the MOHRE.
The validity of this temporary annexure is limited to the agreed term, or for as long as the New Decree remains valid, whichever is earlier. Permanent Salary Reduction: Under article 6 of the New Decree, businesses seeking to permanently reduce the salary of an employee should first obtain the MOHRE’s approval by applying for the “employment contract details amendment” service. Redundant Employees: The New Decree does not openly address the redundancy option, nor does it introduce new regulations or obligations on employers who wish to make employees redundant. However, the New Decree directs employers who have more staff than needed (implicitly, those who want to make employees redundant) to register the additional staff on the virtual job market, so that they can be recruited by other businesses. Meanwhile, these employers will remain obliged to provide such employees with accommodation, healthcare insurance and other employment commitments, except for salary, as long as they are in the country or until they are hired by other employers. While some businesses may think that this is a reliable and economical option for their operations during this critical period (allowing them to make partial redundancies), we believe that the regulator’s view of this provision is in favour of the employee for the following reasons: the employer will let go of the employee yet continue to pay for his or her accommodation and other living allowances, which will be sufficient for the employee to survive during this critical period; the employer does not know for sure how long the period of application of precautionary measures, during which it keeps paying the allowances, will last, and so cannot calculate its continuing loss; and it is in the best interest of the employer not to make the employee redundant and instead pursue one of the options under article 2 of the New Decree. New Hires: Under article 4 of the New Decree, businesses seeking to recruit during the suspension of overseas hiring shall find reasonably suitable replacements locally and apply for internal work permits through the MOHRE online system. The main purpose behind this recommendation is to allow sectors that have a high demand for hiring now to secure their needs from other sectors that have been affected by the COVID-19 financial distress. Job Seekers: Under article 7 of the New Decree, job seekers in the UAE shall register with the virtual job market and apply for available opportunities based on their qualifications and experience. In conclusion, UAE employer strategies to address the crisis may implicate a variety of employment laws and regulations, which could lead to a flood of employment cases before the UAE courts. Accordingly, business owners need to consider reasonable measures to protect their businesses and minimise losses. Contrary to the wider community’s interpretation of the legal effect of this New Decree on employees in the UAE, we believe that it aims to provide several creative solutions to employers that may reduce redundancy decisions as a first option, rather than facilitating or allowing employers to take extreme measures against their employees. Please note that the above is only a summary of the New Decree and not legal advice, as we expect further guidance to be issued by the MOHRE in relation to the application of this New Decree and the obstacles that will arise once it is applied.
Remote Services Provided by Dubai Notary & Dubai Economy As a result of the declared nationwide state of emergency due to COVID-19, the Dubai courts have initiated the following remote services for the Dubai Notary and Dubai Economy. Dubai Notary Services: The Notary Public offices in the UAE are now closed as a result of the COVID-19 pandemic. The government has introduced remote notarial services until 9 April 2020 in order to continue to provide companies and individuals with the minimum services necessary for their continued operation. The following limited notarial services are currently being provided remotely: all types of powers of attorney; acknowledgments; legal notices; agreements related to civil companies for the purpose of their incorporation and introducing any amendments to their memoranda of association; and local agency service agreements. The remote Notary Public services will be conducted via video call between 8am and 4pm, Sunday to Thursday. The requirements for translated documents, supporting documents and the like still apply to remote notarization. We will continue to provide our usual services to assist with the remote notarization process for the limited notarizations listed above (including translation services). Dubai Economy Services: Dubai Economy has asked the public to avoid its Happiness Centers for a period of two weeks, but they are still available to answer any queries by phone or e-mail. We will continue to provide our usual services in connection with Dubai Economy. In conclusion, in response to the COVID-19 situation, Dubai has put in place remote notarial and corporate services for certain transactions in order to assist companies in continuing their operations. Please note that the above is only a summary of the new measures that have been put in place and not legal advice. COVID-19 Effect on the Education Business: Know Your Rights The COVID-19 pandemic has had unprecedented effects on individuals and corporations all around the world, causing major societal shifts and resulting in unforeseeable changes to many businesses worldwide. This article focuses on the effect of COVID-19 on a very sensitive industry that, in one way or another, is relevant to all of society. No matter where you are, what sector of business you are in or even how old you are, you may somehow be connected to the education sector. Whether you are a parent, a student or even a school owner/manager, this article will help you understand your rights and obligations as a result of the COVID-19 outbreak. Since the beginning of March 2020, after the WHO announced that COVID-19 is a global pandemic, schools, colleges and universities have been scrambling to deliver their curricula as smoothly as possible through 21st-century technology under the concept of “Distance Learning/Online Education”. Whilst the decision of schools to shift their service from “Live Education” to “Online Education” was due to the UAE government’s direction to close schools as a measure to fight the spread of COVID-19, this is legally considered a unilateral change to the schools’ obligation, whilst the parents’ obligations (mainly payment of fees) are expected to remain the same. There are no explicit provisions in the UAE’s education laws that account for this type of situation, therefore we must turn to the principles of contract law. To examine this situation further, the concepts of force majeure, good faith and hardship must be considered.
In short, force majeure is a mechanism by which parties to a contract, each of whom is under a different set of obligations, are either fully or partially released from their obligations in response to a force majeure event. Typically these events cover acts of God, such as hurricanes or tsunamis, and they may be extended to pandemics such as COVID-19. Article 273 of the UAE Civil Transactions Law No. 5 of 1985 (“Civil Transactions Law”) describes two situations of force majeure: either the performance of the contract by a party is impossible, so that party is discharged, in which case the corresponding obligation of the other party is also discharged and the contract is automatically cancelled; or the performance of the contract by a party is partially impossible, in which case that impossible part is discharged and the corresponding part for the other party is also discharged. This may also be for a temporary, not necessarily permanent, period. The parties would then have the option to cancel the contract. Returning to the contractual obligations of both the schools and the parents, we note that schools have essentially invoked this concept of partial impossibility under force majeure to modify their obligations and provide Online Education, instead of what they offered the parents at the beginning of the relationship, namely Live Education. Whether Online Education is a sufficient replacement for Live Education is heavily debated. In other countries, the concept of Online Education has been known for decades and is subject to its own set of regulations, rules, overseeing bodies and systems. However, that is not the situation we are all encountering at present. While Online Education provides flexibility, availability and self-direction, and it is clearly the only way that education may continue during these trying times, it comes with many pitfalls, for example: little accountability to, or supervision by, teachers; limited interaction with other students and teachers; technological hurdles; and parents who are not trained professionals able to oversee this process (especially in the higher grades). In summary, this is not what Online Education, as promoted elsewhere, has ever looked like. As the nature of the schools’ and education providers’ obligations has changed, this may trigger a corresponding change in the parents’ obligations, namely payment of reduced fees, on the basis that students are no longer using the institutions’ utilities, transportation, meal options, social activities and so on. Online Education must be supervised by parents who are facing potential pay cuts or redundancies and are juggling their family life with their work and professional commitments at a completely new level. The benefits that schools are gaining from the reduction in their costs should ideally be reflected back to the parents by way of a discount or refund on tuition. As of the date of this article, only one school group within the UAE has announced a reduction in school fees, of 20%. Other schools may follow suit, but thus far this has not been the case. Although the invocation of force majeure should work both ways, education providers are changing their obligations without allowing the other side, the parents who are paying the full fees, to benefit from the force majeure mechanism too. Article 246 of the UAE Civil Transactions Law imposes a degree of good faith on contracts and places them in the wider context of observed customs and the nature of the transaction.
What this means for our purposes is that the interpretation of the schools’ and the parents’ obligations should be re-examined in light of current events. Both parties need to exercise a degree of good faith in how they perform their obligations and in what they accept from the other side. Education providers should take into account the circumstances of the parents if they expect the parents to accept the alternative ways in which they are performing their obligations, so as not to destroy the mutual benefits that were originally agreed upon. Under Article 249 of the UAE Civil Transactions Law, a party to a contract who is facing partial impossibility in performing its obligations may ask the court to amend the parties’ obligations to restore the contract’s economic balance. To put it simply, the court may find that the only legally reasonable response to the effect of the pandemic, for the purposes of continuing education, was to switch to Online Education rather than Live Education, and correspondingly to reduce the financial burden on the parents. This is the hardship approach, in which a judge, at his discretion, may alter obligations to re-balance them in the interests of justice. Conclusion Education providers have amended their obligation to deliver Live Education as a result of the partial impossibility caused by the COVID-19 outbreak. This is entirely legal on the basis of force majeure and the partial impossibility of delivering their original services; however, the takeaway from all of this is clear: the decrease in the use of the education providers’ resources should be reflected in what the parents pay in annual fees. Some examples of compromises education providers can offer include: a carried-over discount on transportation, uniforms or tuition fees for next year; a refund on pre-paid transportation, tuition or activities fees; payment and instalment schemes for the various fees; specific discounts on fees for parents who confirm they have also been affected financially by COVID-19; or exchanging the value of fees already paid for other or future school services and activities. Lastly, the best way in which education providers and parents can reach an understanding is for the education providers to open up a line of communication to hear the parents’ reasonable suggestions, complaints and worries, and to be creative and flexible. This approach would go a long way towards showing good faith on both sides. Immigration Alert: Expired Residency and Visitors’ Visas Extended Until End of December 2020 As there is tremendous uncertainty about when “normal life” will resume, i.e. when the novel coronavirus (“COVID-19”) crisis will end, the UAE government has been working continuously to support its residents and mitigate the impact of COVID-19. The COVID-19 outbreak was declared a pandemic by the World Health Organization a few months ago. As has happened with other recent global health emergencies, governments in many jurisdictions have implemented a range of temporary immigration-related measures in order to contain the spread of COVID-19. Accordingly, the UAE has continued its protective measures by implementing travel restrictions; however, it has provided some relief for its residents as follows: Extension of UAE residence visas until 31 December 2020, applicable to visas that expired from 1 March 2020 onwards. This extension applies to residents both inside and outside the country.
Extension of visitor visas until 31 December 2020, applicable to visitors inside the UAE whose visas expired from 1 March 2020 onwards. Extension of UAE Nationals’ Identification Cards until 31 December 2020, applicable to ID Cards that expired from 1 March 2020 onwards; this extension applies to nationals both inside and outside the country. Extension of government services expiring from 1 March 2020 onwards, for a renewable period of three months starting from 1 April 2020; this applies to all federal government services, including documents, permits, licenses and commercial registers. Although the new governmental updates did not explicitly mention cancelled visas, we expect cancelled visa holders to be given a grace period, without incurring any penalties for overstaying. This would be in line with the overall policy and approach implemented by the UAE government during the COVID-19 pandemic. With the safety of our dedicated lawyers and clients as our top priority, we stand ready to help clients through the unprecedented times ahead of us. For further support or queries, please contact us. Stay safe and healthy. Sincerely yours,
https://www.iflr1000.com/NewsAndAnalysis/matouk-bassiouny-and-ibrahim-the-novel-coronavirus-covid-19-guidance-document-uae/Index/10706
After breakfast, we set off on a well-paced London historical walking tour, taking in sights including Buckingham Palace, the Houses of Parliament, Westminster Abbey and Tower Bridge. After breakfast, it is time to visit the Royal Observatory, situated on a hill in Greenwich Park overlooking the River Thames. It played a major role in the history of astronomy and navigation, and is best known as the location of the prime meridian, which gave its name to Greenwich Mean Time. On our way to the City, we pass Tower Bridge and the Tower of London, a historic castle on the north bank of the River Thames in central London. It has played a prominent role in English history, serving as a palace, arsenal, treasury and even a prison. It once housed the Royal Mint, is home to the Crown Jewels of England, and has survived many bloody battles and outlived many British monarchs. In the morning, we drive straight to the Lake District in the north-west of England. On our way, we will stop by Dove Cottage, the first family home of one of Britain’s greatest poets, William Wordsworth. The tour will help you explore this traditional Lakeland cottage and discover what life was like at the turn of the 19th century. We then continue on to the Lake District. The Lake District National Park is the most cherished and popular national park in the United Kingdom. Today’s special experience is a traditional English high tea in the World of Beatrix Potter. No lunch. After breakfast, we head directly to Oxford. The university town of Oxford has a crucial place in the history of Britain’s education and countless notable figures have studied here over the centuries! Have a look at our choice of Oxford walking tours and walk the pretty cobbled streets for yourself. Oxford’s amazing architecture and absorbing history make this city a must-visit location. After lunch, we head to Heathrow for the flight back to Singapore.
http://zoeyzhou.com/%E7%AC%AC%E4%B8%80%E6%AC%A1%E5%86%99%E8%8B%B1%E6%96%87%E6%8A%A5%E4%BB%B7/
Strengthening the capacities and competitiveness of member companies is one of the major efforts of the Economic Chamber, the oldest and largest business association in the country. Through these efforts, we implement projects in the interest of the business community to support specific economic sectors and companies, helping them to improve their work processes, develop their technological and labour capacities, and create innovative products and services that are needed and that meet the requirements for sale on the European market. Construction is one of the drivers of the Macedonian economy, accounting for 7.2% of the active business entities and 6.7% of the employees in the country. Apart from its direct impact on the economy, construction engages roughly twenty other business sectors. Projects such as SEEtheSkills, which target the construction sector and aim to stimulate the demand for energy efficiency skills, contribute to the development of sustainable and competitive construction that delivers many benefits to the economy, the state and society. The SEEtheSkills project will be carried out over the following three years in partnership with 10 entities from 5 countries: North Macedonia, Slovakia, Slovenia, Spain and the Netherlands, with the Economic Chamber of North Macedonia as the main implementation partner. The project aims to achieve better visibility, access and recognition of energy skills among the partner countries through cross-validation based on learning outcomes, while applying a novel 3V approach (visible, validated, valuable) to directly stimulate market demand for energy efficiency skills in the construction sector. The current legal framework that regulates and defines the term “energy efficient construction” in our country is the Law on Energy Efficiency (Official Gazette of the RNM No. 32/2020 and 110/2021), which was adopted last year, transposing the provisions of Directive 2012/27/EU on energy efficiency and Directive 2010/31/EU on the energy performance of buildings. The Law itself defines energy efficiency as the ratio between the achieved useful performance and the energy input required to achieve that performance. The Law also defines the terms “energy performance of buildings”, “energy performance certificate” and “energy control”, and moreover distinguishes between 1) energy efficiency in the public sector, 2) energy efficiency during transmission, distribution and energy supply, and 3) energy efficiency of buildings. However, the Law does not define the term “energy efficient construction.” We can therefore note that the current regulations lack a specific definition of the term energy efficient construction; in practice it is linked to building construction, one of the construction segments, that is, to buildings as structures from the building construction segment, while the requirements for building efficiency are regulated in detail by the Rulebook on Energy Performance of Buildings. The energy efficiency policy, according to current regulations, is a constituent part of the policies in the areas of energy, economy, sustainable development and environmental protection, and it is implemented via measures and activities concerning efficient energy use. Consequently, energy efficient construction is currently part of energy policy, and the competent ministry in this sector is the Ministry of Economy.
Even during the development of this significant Law, the Association of Construction, Building Materials and Non-metal Industries indicated that the regulations covering the energy efficiency of buildings/structures, including their installations, should fall within the purview of the Law on Construction. According to the business sector, the Law on Energy Efficiency should determine the legal framework and the course at a strategic level. In this regard, the Law should regulate and impose obligations primarily on public sector entities; on entities that sell, distribute and supply energy, manufacture or sell products that consume energy, or provide services that improve energy efficiency; on “large enterprises”, irrespective of the sector they come from, considering that they are believed to bear particular social responsibility, including for saving and efficient use of energy; and on state bodies, in terms of adopting strategies, documents and measures to stimulate energy efficiency and exercising control over their implementation. The energy efficiency of structures that are being designed, constructed or reconstructed, in turn, regardless of whether they are residential or industrial and commercial, should, according to the business sector, be regulated by the Law on Construction, following the example of certain regional countries (Croatia), as an issue within the competence of the Ministry of Transport and Communications. The details should be regulated through bylaws (rulebooks), considering that the procedures for adopting and amending bylaws have been simplified in line with requirements and the situation in the field, and are within the competence of the ministries. The reason for this is the fact that design, audit, supervision and construction are regulated in the Law on Construction, and control over the operation of the participants in construction is carried out by the appropriate inspection services of the Ministry of Transport and Communications and the units of local self-government. Drawing on the above, and considering that the adoption of a new Law on Construction has been announced for some time and an initial draft version has been prepared, the first recommendation for the institutions would be to examine, in consultation with the business community, the possibility of appropriately defining and treating the issue of energy efficient construction in this essential regulation and in the bylaws for its implementation, with the aim of systematising every issue covering the operation of construction companies into a single legislative act, ensuring its consistent application and thereby stimulating the demand for energy efficient construction.
https://www.mchamber.mk/Default.aspx?mId=3&evid=69794&lng=2
A trifecta of extreme heat's implications for food systems – on farmworker health, crop yields, and the resulting nutritional intake of individuals – requires the attention of policymakers, community-based organizations, and the public so that we can mitigate, prepare for, and respond to heat hazards in a coordinated effort. This article explores the consequences of extreme heat on food systems. It provides examples of tools in the "toolbox of resilience solutions" we can implement to immediately safeguard farmers and the production of high-quality and ample food. Extreme heat's impact, from farm to fork The implications climate change will have on our global food system – from what's produced on our farms to what's put on our forks – are significant and can act as a threat multiplier. As global average temperatures continue to rise, the amount of food produced and the nutritional quality of that food will shift, even as more frequent heatwaves directly impact the health of farmworkers and damage crops. The effects of today's rising temperatures on our food system will touch everyone, albeit to widely varying degrees. The countless stories of berries and fruit scorched and ruined by the June-July heatwave in the Pacific Northwest are a warning sign of what is yet to come. This trifecta of implications – on farmworker health, on crop yields, and on the resulting nutritional intake of individuals – requires the attention of policymakers, community-based organizations, and the public so that we can mitigate, prepare for, and respond to heat hazards in a coordinated effort. Farmers, under the sun There are already deep inequities faced by the essential workers who produce, distribute, and prepare our world's food and by those in our communities who cannot access sufficient healthy food. Rising temperatures threaten to intensify these issues. However, governments, decision-makers, and community members can build resilience solutions that will protect people, economies, and agricultural systems. Farmworkers in the United States, of whom 83% are Hispanic, die from heatstroke at 20 times the rate of all other professions. This is due in part to the nature of their work: they spend the hottest days of the year directly under the sun, where they may feel pushed to forgo shade and water breaks. However, systemic factors, like the continued legal exclusion of farmworkers from nationwide overtime pay protections, make them even more vulnerable to poor working conditions. The acute risks placed on our nation's nearly 3 million farmworkers, those who quite literally help feed America, are both a human health and an equity crisis. Also impacted are pastoralists and fishing communities around the globe, in terms of both their health and their ability to continue earning their incomes in a hotter climate. While this year's heat waves resulted in a renewed push for increased protections for farmworkers in the United States, with Washington State issuing an emergency heat exposure rule on July 9th, more can be done to decrease risk and increase the resilience of our farms and the people who sustain them. Rising temperatures, dwindling yields Looking more closely at heat's impact on agricultural yields, staple crops – like wheat, rice, maize, and soybeans, which provide the bulk of calories consumed in many countries around the world – respond negatively to higher temperatures. Studies have demonstrated estimated yield reductions of upwards of 18% for some crops.
New research from the University of Colorado suggests that "heat waves could cause ten times more crop damage than is now projected." On top of direct yield loss, scientists anticipate increased mortality of livestock – due to the direct impacts of heat as well as feed shortages, increased prevalence of disease, and water scarcity. Projected reductions in agricultural yields are a massive threat, anticipated to push more people into food insecurity, a plight that already affects more than 750 million people globally and over 10% of American households. It's about quantity and (nutritional) quality In terms of the impact on the actual quality of food produced, rising temperatures (together with higher carbon dioxide concentrations) can reduce the mineral and protein content of wheat, rice, and other staple crops. Experiments done by researchers around the globe and compiled by Harvard University found reduced content of essential minerals like zinc, iron, and selenium, which humans require for good health, across crop varieties and geographies. While more research is needed, millions of people already suffer from nutritional deficiencies. If the food we consume contains fewer nutrients in a hotter future, many more will be pushed into malnutrition. Zinc deficiency, for example, raises the risk of premature delivery for pregnant women, lowers a child's ability to fight illness, and affects children's growth and development. An estimated 17% of the global population already suffers from zinc deficiency. Trickle-down economic consequences Beyond pushing people towards hunger, heat's impact on agriculture threatens entire nations' economic growth potential, especially where agriculture is a significant contributor to GDP and employment. We saw what a shock like the COVID-19 pandemic did to economies dependent on tourism, but agriculture is an even larger contributor in many countries. In India, for example, 40% of the population works in agri-business, which contributes approximately 15% of India's national GDP. Furthermore, transporting food safely to communities worldwide requires complex supply chains that can be vulnerable to extreme heat, given the increased need for cold storage, which threatens to increase food waste. Read now: Extreme Heat in the United States - An Assessment of Economic and Social Effects Solutions for More Resilient and Equitable Food Systems As research continues to inform our understanding of heat hazards, we must invest in emerging and innovative solutions while also taking action to apply existing science and institute good practices. Solutions to mitigate and adapt to extreme heat's implications for our food systems include: 1. Expanding policy protections for farmworkers: For starters, over a quarter of the world's population earns their livelihoods through agriculture. Their health and safety are critical to a well-functioning food system. Protections like mandated breaks and increased access to shade, water, and healthcare can be put in place. 2. Introducing new and well-established climate-smart production practices: More climate-resilient crops and diversified farms can help combat increased temperatures and boost the resilience of farms. The World Resources Institute suggests that more diversified, integrated farms that blend aspects of crops, livestock, and forestry can result in more resilient productivity (better soil nutrient content, cooler micro-climates, less environmental degradation). This is essential in our fight against food and nutrition insecurity. 3.
Increasing the use of innovative financial products that allow societies to recover quickly: Financial mechanisms can help buffer yield shortfalls caused by heat waves and other climate-related disasters. Parametric insurance, for example, triggers quick payments based on a pre-defined, transparent parameter such as temperature or rainfall levels. This mechanism could be implemented for agribusinesses, communities, or countries to provide near-immediate financial support after a heatwave, providing the safety net needed to protect livelihoods and our food supply as it moves from farm to fork. Swift and widespread action is needed to increase food system resilience. Policymakers should develop strategies that plan for the heat waves that we know will continue to come, allowing them to 'flip the switch' on protections for communities – and having insurance solutions in place will ensure a way to fund the implementation. The scope of these policies should be reassessed and expanded according to the severity of summer heat over the coming decades. Explore more To learn more about how the consequences of heat stress vary across regions, socioeconomic groups, and economic sectors, read our latest report Extreme Heat in the United States – An Assessment of Economic and Social Effects, which quantifies the impacts of heat in the U.S. under current and future conditions.
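As a concrete illustration of the parametric insurance mechanism described above, here is a minimal Python sketch of a temperature-based trigger. The threshold, payout rate and cap are invented for illustration and do not correspond to any actual insurance product.

    def parametric_heat_payout(observed_max_temp_c, trigger_temp_c=42.0,
                               payout_per_degree=50_000.0, max_payout=500_000.0):
        # Pay nothing until the agreed temperature threshold is crossed,
        # then pay a fixed amount per degree of excess, up to a cap.
        if observed_max_temp_c <= trigger_temp_c:
            return 0.0
        excess = observed_max_temp_c - trigger_temp_c
        return min(excess * payout_per_degree, max_payout)

    # Example: a heatwave peaking at 45 C against a 42 C trigger pays out 150,000.
    print(parametric_heat_payout(45.0))

Because the payout depends only on the measured parameter rather than on a loss assessment, funds can be released almost immediately after the event, which is what makes this mechanism attractive for protecting livelihoods and food supply chains.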
https://www.onebillionresilient.org/post/extreme-heat-pressure-cooking-our-food-system
A new study by Utility Bidder has revealed the countries and industries with the highest levels of corporate debt. Luxembourg was the country with the highest debt, at 391.54% of its GDP, whilst the restaurant/dining industry was the sector with the highest debt as a percentage of its total capital, at 113.82%. Every country has debt in many forms, such as public debt, personal private debt, and of course corporate debt. To judge the level of a country's corporate debt, we have looked at corporate debt as a percentage of the nation's GDP. Countries with the most corporate debt 1- Luxembourg has the highest amount of corporate debt, totalling almost 400% of its GDP. One of the reasons for this is that Luxembourg has around 17,000 corporations registered per square kilometre of the tiny nation, so there is a high level of corporate money and debt compared to GDP. Luxembourg is generally a rich country with one of the highest GDPs per capita of any country in the world. 2- Ireland has the second-highest level of corporate debt, and the Emerald Isle is the only other country with corporate debt levels of more than 300% of its GDP. In fact, the Irish total is around 175% higher than the average of all the countries we studied. Ireland is a European home to many global tech firms such as Apple, Google, and Microsoft. 3- Another small island state comes in third in the rankings, and although Cyprus has a very different climate, it has a similarly high level of corporate debt to Ireland. In fact, its debt is just under 250% of the country's GDP. Although Cyprus is not officially a tax haven, it does offer companies benefits that other European countries do not, and consequently there are a large number of corporations in the country. 4- The minuscule Mediterranean island of Malta has the fourth-highest amount of corporate debt in relation to its GDP. Malta-based companies pay the lowest tax on profits anywhere in the European Union, and consequently the country is very attractive to large multinational companies. Similarly to Luxembourg, because Malta is so small it has only the 129th largest GDP in the world according to the World Bank, and this, combined with a high number of companies, makes it easy for corporate debt to outweigh it. 5- By far the largest nation in terms of size, population, and world stature in the top five of the rankings is France. French corporate debt is equal to 194% of its GDP, and although it is below the 200% mark, the rate is still the fifth highest in the world. This level is remarkable considering that France has the seventh-largest GDP of any country in the world; by comparison, the UK (which has an economy of a similar size) has corporate debt levels under 100% of its GDP. The corporate sectors with the most debt While we have established that corporate debt is rife around the world, which corporate sectors contribute the most to this debt? By looking at data from the United States (the largest economy in the world) we can see which sectors have the highest levels of debt. Overall, the US has a corporate debt level of 142.79% of its estimated GDP of $21.43 trillion, which ranks it as the 21st highest nation. Each sector's level of debt was ranked by its debt-to-capital percentage; this is calculated by taking the industry's interest-bearing debt, dividing it by its total capital (cash and assets), and converting the result to a percentage, as illustrated in the example below.
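To make the debt-to-capital calculation concrete, the following short Python sketch applies the formula to invented figures; the numbers are hypothetical and are not taken from the Utility Bidder study.

    def debt_to_capital_pct(interest_bearing_debt, total_capital):
        # Debt-to-capital ratio expressed as a percentage:
        # interest-bearing debt divided by total capital (cash and assets).
        return 100.0 * interest_bearing_debt / total_capital

    # Hypothetical sector with $570bn of interest-bearing debt and $500bn of total capital.
    print(round(debt_to_capital_pct(570, 500), 2))  # prints 114.0

A ratio above 100% simply means the sector carries more interest-bearing debt than its measured capital base.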
1- The restaurant and dining sector has the highest level of debt in relation to its capital funds, at 114%, which is 3.6% higher than the next closest sector and 59% higher than the average. 2- The tobacco industry has been struggling in recent years as far fewer people in younger generations are smoking. Moreover, inflated taxes and advertising bans have hurt tobacco companies' ability to sell their products. This has resulted in a debt percentage of 110%. 3- Other financial services are any financial services that are not banking or insurance, and these had the third-highest level of corporate debt in the United States. Although their level of 95% is a fair way behind tobacco, it is still over 40% higher than the average.
https://thecorner.eu/news-the-world/world-economy/which-countries-have-the-highest-levels-of-corporate-debt/98441/
Providing migration analysis and research: Welcome to the website of Dr Marcus Engler! I am a social scientist, migration researcher, and consultant. I have been analysing trends, debates, and political developments regarding (forced) migration, integration, and asylum in Germany, Europe, and globally for many years. I have extensive experience, and I work for and with many organisations in Germany and other countries. In recent years, I have worked for the Humboldt University of Berlin, the University of Osnabrück (IMIS), the Expert Council of German Foundations on Integration and Migration (SVR), several (political) foundations, the GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit, German Corporation for International Cooperation), and the UN Refugee Agency (UNHCR). My work encompasses many diverse activities: I conduct research projects and offer consultancy services; I give lectures; I run seminars, workshops, and training programmes tailored to different target audiences; I am an author; and I am available to take part in panel discussions, as either a participant or a facilitator. Lastly, I am a point of contact for media enquiries regarding migration and integration. My interests and expertise focus on the following themes: · International refugee and migration processes and migration routes · Protection of refugees globally, and international sharing of responsibility Public and political debates about migration are, in my view, often marked by the following limitations: · Extremely short-term perspectives which ignore the fact that migration has occurred since man first walked the earth; that migration processes can last several years; and that their concomitant integration processes can sometimes span several generations · Debates shaped by ideology and emotion which often obscure the complexity of the motives underlying migration as well as the complexity of migrants’ life projects and needs · Narrow geographical perspectives which focus on the interests and needs of the host country without considering the complexity of migration processes and the importance of migrants, their countries of origin, and their countries of transit In view of these limitations of public and political discussions, it is my personal goal to: · Keep my work dispassionate and fact-based (although I will acknowledge and consider emotionally charged debates)
NHTSA studies behaviors and attitudes in highway safety, focusing on drivers, passengers, pedestrians, and motorcyclists. We identify and measure behaviors involved in crashes or associated with injuries, and develop and refine countermeasures to deter unsafe behaviors and promote safe alternatives. Our recently published reports and research notes are listed chronologically below, along with additional resources including Traffic Techs.
Reducing Distracted Driving Among Adults: Child-to-Adult Interventions. Distracted driving is a problem for drivers and their passengers. Several programs exist to reduce the distracted driving habits of people who are already drivers. However, there are few programs that teach children, before they become drivers and especially in elementary school, how to intervene with a driver (usually a parent) who is distracted, and none that have been evaluated. Only one program was identified that developed both a lesson to teach elementary school children how to intervene with a distracted driver and an evaluation of the lesson. The COVID-19 pandemic made it necessary to pivot from the classroom to online and to broaden the program to include high school as well as elementary school students. Among high school students, the program produced statistically significant increases in students' knowledge of distracted driving and of what they need to say to get their drivers to refrain from driving distracted, statistically significant increases in the frequency of intervening with parents and passengers (but not friends), and a reported decrease in the distracted driving of their parents and friends. (DOT HS 813 328)
Estimated Contribution of Peak-Hours Non-Commercial Vehicle Traffic to Fatality Rates, Research Note, Traffic Safety Facts. This Traffic Safety Facts Research Note explores the relationship between the decline in vehicle miles traveled (VMT) associated with the COVID-19 pandemic and the increased fatality rate observed for 2020. It hypothesizes that the fatality rate relative to previous years is due in part to a decrease in peak-hours (i.e., 6–9 a.m. and 3–6 p.m.) non-commercial vehicle traffic, that is, a decrease in commuting. To draw comparisons with 2020, the author uses the most recent National Household Travel Survey, the Fatality Analysis Reporting System, and FHWA VMT data to estimate separate peak and non-peak non-commercial vehicle fatality rates for 2017. The estimated peak-hours non-commercial vehicle fatality rate for 2017 was 0.5 per 100 million VMT, while the non-peak-hours non-commercial fatality rate was 1.27 per 100 million VMT. Excluding peak-hours non-commercial vehicle traffic, 2017 had an overall fatality rate of 1.48 per 100 million VMT. The fatality rate for 2020 was 1.34 per 100 million VMT. The author therefore concludes that decreased peak-hours non-commercial vehicle traffic associated with the COVID-19 pandemic, stay-at-home orders, and increases in remote working contributed to 2020's increased fatality rate relative to previous years. (A simple illustration of the per-100-million-VMT rate calculation follows this listing.) (DOT HS 813 340)
Understanding and Using New Pedestrian and Bicycle Facilities. Research has explored the benefits of innovative pedestrian and bicycle facilities, but it is unclear how pedestrians and bicyclists learn to properly use them. This report provides information on new pedestrian and bicycle treatments and (1) the behavior and knowledge of pedestrians, bicyclists, and drivers traversing through, on, and around the new facilities, and (2) law enforcement activity around the facilities.
A systematic literature review, as well as a review of current practices in outreach, was conducted. (DOT HS 813 317)
Safety in Numbers: A Literature Review. In pedestrian and bicyclist safety, Jacobsen's 2003 "Safety in Numbers" (SIN) theory posits an inverse relationship between the extent of walking and bicycling and the probability of motorist collisions. This literature review summarizes SIN research chronologically, identifying the implications of the work that developed the SIN concept and of subsequent work testing and expanding the theory. It considers study fields and areas of practice including engineering, planning and land use, sociology, psychology, education, public health, enforcement, human factors, and others. This breadth was especially important because of the wide audience who may apply the review's results in their future practice, including State Highway Safety Offices, national organizations interested in the SIN topic, constituents from the FHWA, planners, engineers, educators, advocacy groups, policymakers, State DOTs, metropolitan planning organizations, roadway users (motorists, pedestrians, and bicyclists), and law enforcement. (DOT HS 813 279)
Risk Factors for Young Drivers in Fatal and Non-Fatal Crashes: Supplementary Report. This supplementary report accompanies the report titled Risk Factors for Young Drivers in Fatal and Non-Fatal Crashes. (DOT HS 813 303B)
Risk Factors for Young Drivers in Fatal and Non-Fatal Crashes. This report analyzed data on young drivers 16 to 20 years old from the Fatality Analysis Reporting System (FARS) for the years 2013 to 2017, and from the second Strategic Highway Research Program's Naturalistic Driving Study (SHRP2 NDS). The data permitted a comparison of trends between age and amount of driving experience for a similar range of variables. Although young driver risk appeared to decline with increasing age, young drivers were at higher risk than 35-year-olds for most factors. Some situations were particularly risky for young drivers relative to 35-year-olds and relative to other kinds of situations. The results from this study may be useful for developing graduated driver licensing as well as driver education content. (DOT HS 813 303A)
Synthesis of Studies That Relate Amount of Enforcement to Magnitude of Safety Outcomes - Technical Appendix. This is the Technical Appendix for the Synthesis of Studies That Relate Amount of Enforcement to Magnitude of Safety Outcomes report, DOT HS 812 712-A. (DOT HS 813 274-B)
Synthesis of Studies That Relate Amount of Enforcement to Magnitude of Safety Outcomes. The National Cooperative Research and Evaluation Program (NCREP) identifies and funds research and evaluation projects that improve and expand State highway safety countermeasures. One such topic is measuring the impact of various amounts of traffic enforcement on changes in safety outcomes. The project team identified 80 relevant studies for inclusion in the synthesis. The current literature supported findings related only to occupant protection enforcement. No relationship between levels of enforcement and safety outcomes could be identified for distracted driving, alcohol-impaired driving, speeding, or aggressive driving. However, for all targeted behaviors, the enforcement campaigns evaluated were effective in improving safety outcomes, even though the combination of these evaluations could not provide sufficient evidence to establish a relationship between the level of resources used and the magnitude of the safety improvement.
DOT HS 813 274-A

Research on Older Adults' Mobility: 2021 Summary Report
This report summarizes a meeting held in January 2021 to spotlight research on older adults' mobility. The meeting, dubbed ROAM (Research on Older Adult Mobility), provided a forum to share news of completed research, report on the progress of ongoing studies, and highlight priorities for future work. Participants included medical professionals, occupational therapists, State DMV officials, mobility service provider specialists, automated driving system/advanced driver assistance system experts, and other academic and private sector research professionals. The meeting supported equity in traffic safety, as it addressed disparities faced by older adults.
DOT HS 813 317

Visual Scanning Training for Older Drivers
This study examined the effectiveness of a visual scanning training program administered by an occupational therapist as an intervention to improve the visual scanning performance of healthy older drivers. Participants included 89 licensed drivers age 70 and older. The training program consisted of four 1-hour sessions. Participants completed three on-road evaluations: pre-intervention, immediately post-intervention, and 3 months post-intervention. During the evaluations a camera recorded driver-face video to support later analyses of the frequency, duration, and direction of eye glances away from the forward line of sight. Analyses of driving data showed no significant differences between the intervention and control groups on driving or glance measures.
https://www.nhtsa.gov/behavioral-research
Organisations need them and leaders want them, but how do you even create a high-performing team? Well, it's important to first understand what a high-performing team is and how it differs from an ordinary work group. In an ordinary work group, members tend to work independently, focus on their own objectives, distrust colleagues and not participate in decisions that affect the whole group. In contrast, a high-performing team understands that both personal and team goals can be achieved with support from others. These members also participate in decisions affecting the team whilst practicing open and honest communication. So now that's covered, let's look at how you can create your very own high-performing team.

Learn about your team members
It's important to first learn about each team member. You need to know their strengths, their weaknesses, what motivates them and what skills they possess. This will help you create a team structure and allocate the necessary roles. If someone doesn't fit their role then you're unlikely to get the right performance out of them.

Set the direction
It's essential for a high-performing team to know what it's working towards. Setting clear and achievable goals will ensure that everyone in your team is committed to the same direction. When creating a goal, you need to make clear what the team needs to achieve and when they need to achieve it by. When doing this, it's important to check that each team member understands the impact of their work and how it contributes towards the overall team goal. Make sure you understand the type of goal that you're setting. Providing your team with a challenge will increase productivity, but don't go too far! Your team can tell if they've been set unrealistic goals, and the result will be a low level of performance.

Practice open communication
Setting goals shouldn't be the only reason for team communication. A high-performing team needs to stay focused on what it needs to achieve. To avoid any lapses in concentration, you must ensure that all team members are continually informed, updated and on target. Regular individual and team meetings are a great way not only to keep everyone else informed, but also to keep yourself updated. From these meetings you'll be able to understand the team's performance, any concerns they have and any ideas they have. Using online platforms like Trello will also help you and your team understand what everyone is currently working on. There also needs to be trust and transparency within a team. Without trust, you will just have a team full of individuals with no efficiency, innovation or collaboration. This brings us onto the next point...

Adopt an innovative and collaborative environment
Is there any point in having a team that doesn't work with each other? Communication is not just about informing and updating team members. Everyone should be able to share ideas, information and thoughts. Centralising communications through meetings, performance boards and online platforms will help create an environment where team members can share ideas and thoughts on different projects.

Recognise and reward performance
Acknowledging and rewarding performance will not just boost morale, it will also motivate your team to achieve more. A motivated team will strive for further success if they know they will be rewarded for it. There are different ways you can reward the team, ranging from verbal praise to social outings and financial rewards.
It's best to use both individual and team rewards.

Learn specific techniques and skills
Unfortunately, there is only so much you can learn from a blog. Receiving specialist training on this subject will provide you with the full knowledge and guidance on how to build your very own high-performing team. With this in mind, I'm pleased to announce that a new date has been confirmed for our Developing High-Performing Teams CPD-certified training course. Through a combination of interactive workshops and case studies, you will gain the tools, techniques and skills needed to develop a high-performing team. You will have the chance to learn from Hayley Lewis, a chartered psychologist and expert in high-performing teams. With the help of Hayley's expertise, you will leave the day with a personalised action plan which you can take back to your organisation.

Over to you:
If you think this sounds beneficial, either for yourself or for a colleague, get in touch to have a chat or make a booking. We would love to hear from you. If you would like to learn more about this training day or would like to book your place, please call 0800 542 9440 or email [email protected]. Do you have any other team development tips? Are there any specific techniques that you would like to share? Tweet us using #UMGTraining @UModernGov. Do you have a team of staff at your organisation who would benefit from Developing High-Performing Teams training? We also offer this course as a highly flexible in-house training session, delivered direct to your organisation on a date to suit you. Contact our In-House Training team on [email protected] to find out more.
https://www.moderngov.com/2018/06/build-high-performing-team/
Germany's outgoing Chancellor Angela Merkel has said that she has felt differences of opinion with Russian President Vladimir Putin ever since his 2001 speech in the Bundestag. In an interview with the Suddeutsche Zeitung newspaper, Merkel said that "significant differences" had persisted between her and Putin since he spoke in the Bundestag in 2001, according to ANI. Responding to a question about building a relationship of trust with Putin when she took office in 2005, Merkel asserted that she and the Russian President had differed from the start. She underlined that the collapse of the Soviet Union was a tragedy for the Russian President, whereas she and her compatriots welcomed the end of the Cold War. She added that she could not have predicted when she took office that "he would annex Crimea." "It has always been clear to me, even when he spoke in the Bundestag in 2001, that there are significant differences between us. For the Russian President, the collapse of the Soviet Union is a tragic event, we, on the contrary, felt the joy of the end of the Cold War, the joy of German and European unity," ANI quoted Merkel as saying. Moreover, she stated that she could not have imagined that a military conflict would break out in eastern Ukraine, "almost at the border of the European Union." As per the ANI report, tensions between Russia and the West have been rising since 2014, when the conflict broke out in Ukraine.

Putin, Merkel & Macron discuss Ukraine Issue
Last week, Germany's outgoing Chancellor Angela Merkel, Russian President Vladimir Putin and French President Emmanuel Macron discussed the Ukraine issue in a phone conversation. According to a statement by the Kremlin, during their conversation Merkel, Putin and Macron noted the importance of implementing the 2015 Minsk agreements as the only possible basis for a settlement. They also stressed their interest in enhancing the coordination efforts of Russia, Germany and France in the "Normandy format", as per the Kremlin statement. During the conversation, the leaders also discussed other international issues, including those related to the fight against terrorism on the African continent.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a Continuation-in-part of U.S. application Ser. No. 16/352,423, filed Mar. 13, 2019, which Application claims the benefit of U.S. Provisional Application 62/642,324, filed Mar. 13, 2018, the disclosures of which are hereby incorporated herein by reference in their entireties.
BACKGROUND
1. Field
Apparatuses and methods consistent with exemplary embodiments relate to dealing with sensor inaccuracy in inertial navigation systems.
2. Description of the Related Art
Various systems benefit from suitable mechanisms and methods for dealing with sensor inaccuracy. For example, various attitude and heading reference system (AHRS) approaches may benefit from systems and methods for providing multiple strapdown solutions. An inertial strapdown system may use rate sensors and accelerometers to compute, among other things, roll, pitch, and heading, and in some cases position. If one of these sensors fails, or is not accurate enough for any reason, for example due to a noise source, the attitude (roll and pitch) and heading may become either invalid or inaccurate. A Built-In-Test (BIT) can detect sensor failures and mark the output of a sensor as failed when the sensor fails its BIT. However, one shortcoming of such a solution is the loss of attitude/heading when a sensor fails in the field. Furthermore, it is difficult to design a BIT such that the attitude/heading solution of a failing sensor is marked as invalid before it becomes unacceptably inaccurate, yet is not needlessly marked as invalid. In other words, it is hard to balance between trusting inaccurate data (the BIT does not fail when it should) and generating false or spurious reports of failure (the BIT fails when it shouldn't).
SUMMARY
Example embodiments may address at least the above problems and/or disadvantages and other disadvantages not described above. Also, example embodiments are not required to overcome the disadvantages described above, and may not overcome any of the problems described above.
According to an aspect of an example embodiment, an inertial navigation method is provided comprising: receiving, at a controller, an output from each of a plurality of three-axis sensors; determining, by the controller, a plurality of solutions, each of the plurality of solutions based on the output of one of the plurality of three-axis sensors; applying, by the controller, a Gaussian curve to the plurality of solutions; weighting, by the controller, each of the plurality of solutions based on a position on the Gaussian curve of each of the plurality of solutions, thereby determining a plurality of weighted solutions; calculating, by the controller, a roll, a pitch, and a heading of a device based on the plurality of weighted solutions; and iteratively repeating the receiving, determining, applying, weighting, and calculating.
The inertial navigation method may further comprise outputting the roll, the pitch, and the heading of the device at each iteration. The Gaussian curve may be a Probability Distribution Function. The Gaussian curve may be one of a Cauchy Distribution and a Logistic Distribution.
The applying, the weighting, and the calculating may comprise: applying, by the controller, a first Gaussian curve to a first selection of the plurality of solutions related to the roll and pitch of the device; applying, by the controller, a second Gaussian curve to a second selection of the plurality of solutions related to the heading of the device; weighting, by the controller, each of the first selection of the plurality of solutions based on a position on the first Gaussian curve of each of the first selection of the plurality of solutions, thereby determining a first plurality of weighted solutions; weighting, by the controller, each of the second selection of the plurality of solutions based on a position on the second Gaussian curve of each of the second selection of the plurality of solutions, thereby determining a second plurality of weighted solutions; calculating, by the controller, the roll and the pitch of the device based on the first plurality of weighted solutions; and calculating, by the controller, the heading of the device based on the second plurality of weighted solutions. According to an aspect of an example embodiment, an inertial navigation system is provided comprising: a plurality of three-axis sensors, each configured to measure physical quantities from which can be computed a roll, a pitch, and a heading of a device; a controller, operatively coupled to each of the plurality of three-axis sensors, the controller configured to perform operations of: receiving an output from each of the plurality of three-axis sensors; determining a plurality of solutions, each of the plurality of solutions based on the output of one of the plurality of three-axis sensors; applying a Gaussian curve to the plurality of solutions; weighting each of the plurality of solutions based on a position on the Gaussian curve of each of the plurality of solutions, thereby determining a plurality of weighted solutions; calculating the roll, the pitch, and the heading of the device based on the plurality of weighted solutions; and iteratively repeating operations of the receive, the determine, the apply, the weight, and the calculate. According to an aspect of an example embodiment, a non-transitory computer-readable medium is provided, encoded with instructions that, when executed in hardware, perform an inertial navigation process comprising: receiving, at a controller, an output from each of a plurality of three-axis sensors; determining, by the controller, a plurality of solutions, each of the plurality of solutions based on the output of one of the plurality of three-axis sensors; applying, by the controller, a Gaussian curve to the plurality of solutions; weighting, by the controller, each of the plurality of solutions based on a position on the Gaussian curve of each of the plurality of solutions, thereby determining a plurality of weighted solutions; calculating, by the controller, a roll, a pitch, and a heading of a device based on the plurality of weighted solutions; and iteratively repeating the receiving, determining, applying, weighting, and calculating. BRIEF DESCRIPTION OF THE DRAWINGS The above and/or other aspects will become apparent and more readily appreciated from the following description of example embodiments, taken in conjunction with the accompanying drawings, in which: The accompanying drawings are provided for purposes of illustration and not by way of limitation. FIG. 1 illustrates a system according to an example embodiment; FIG. 
2 illustrates a method according to an example embodiment; FIG. 3 illustrates a Probability Distribution Function (PDF) according to an example aspect; FIGS. 4A and 4B illustrate differences in results between applying an average weighting method and applying a Gaussian method to a sample set A and a sample set B, respectively, according to an example embodiment; and FIG. 4C illustrates examples of Gaussian curves with different centers and weights.
DETAILED DESCRIPTION
Reference will now be made in detail to example embodiments which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the example embodiments may have different forms and may not be construed as being limited to the descriptions set forth herein. It will be understood that the terms “include,” “including,” “comprise,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be further understood that, although the terms “first,” “second,” “third,” etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections may not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Various terms are used to refer to particular system components. Different companies may refer to a component by different names; this document does not intend to distinguish between components that differ in name but not function. Matters of these example embodiments that are obvious to those of ordinary skill in the technical field to which these exemplary embodiments pertain may not be described here in detail.
One or more example embodiments may provide additional reliability for inertial strapdown systems using multiple redundant sensors. If multiple sensors are used (for example, multiple three-axis rate sensors, multiple accelerometers, both, or any other desired multiple sensors), and multiple strapdown solutions are calculated, the resulting roll, pitch, heading, and any other desired values can be combined using a desired algorithm, such as a weighted average, to compute a more reliable output than one that depends on just one of each type of sensor. One or more example embodiments may enhance reliability in two ways: (1) the solution may be less affected by noise in the sensors, because the sensors' noise may, to a certain extent, cancel each other out; and (2) a BIT can be done by simply comparing the output roll, pitch, and heading of the multiple sensors, and if one strapdown solution differs from the others by some threshold amount, that solution can be ignored in the combined output. The combined output need not be invalidated just because one or more of the solutions is not used. As long as a sufficient minimum number of solutions agree with each other, the combined output can be considered valid.
In this way, redundancy of multiple inertial systems may be achieved within one inertial product. One or more example embodiments may be used with avionics, space vehicles, guided weapons, such as missiles, hand-held devices, or any other application which may use an inertial strapdown solution for computing one or more of roll, pitch, heading, position (or any other desired parameter).
Sensor Arrangement
One or more example embodiments may implement multiple strapdown solutions in a single Attitude and Heading Reference System (AHRS) product (or, if desired, in multiple AHRS products). One or more example embodiments may provide a product that may contain multiple sets of three rate sensors: x, y, and z. Each set of x, y, and z sensors may be in a single three-axis package, or may include three separate packages arranged orthogonally. The multiple sets of three packages may be arranged to have all of their x axes parallel, y axes parallel, etc. Alternatively, they may be arranged to have the first sensor's positive x axis parallel to the second sensor's positive y axis, the first sensor's positive y axis parallel to the second sensor's negative x axis, and so on. They may even be arranged in a non-parallel manner, for instance, one x axis may point into the middle of the first (or second, or third, etc.) octant of another three-axis triad. In all these cases, the rates generated from the sensors can be mathematically rotated to provide multiple rate vectors in the same three-dimensional Cartesian reference system. The number of x-y-z triads could be increased by combining axes from different sensor triads. For instance, the x axis from sensor 1 with the y and z axes from sensor 2. In this manner, as many as eight different three-dimensional rate vectors could be generated from two three-axis sensor triads. Each three-dimensional rate vector may then be fed into its own strapdown algorithm to compute its own set of roll and pitch (and heading, if a heading reference, such as a magnetometer, is provided).
Combining the Outputs of the Individual Strapdown Solutions when One Sensor, of a Plurality of Sensors, Fails
As long as all of the sensor triads are providing accurate, valid data, the combining of the strapdown outputs (e.g., roll, pitch and heading) can be straightforward: they can simply be averaged. However, if one sensor triad fails, there may be a graceful way to drop this sensor's strapdown solution from the average without causing steps in the output. In other words, the output, roll, pitch and heading, can be smooth as the combined solution transitions from using all of the sensors, to using all except one triad. One or more example embodiments may provide a solution that utilizes a weighted average or another suitable solution. At each iteration of a new output (e.g., roll, pitch and heading), the median of the strapdown outputs may be computed. All solutions that are within some defined limit (e.g., call it limit 1) of the median may receive a weight of 1. As the difference between a particular solution and the median moves from limit 1 to some outer limit (e.g., limit 2), the weight may transition from 1 to zero (e.g., it could go linearly between 1 and zero, or by some non-linear formula or any other desired transition). When the difference is beyond limit 2, that solution may have a weight of zero, and may not be included in the combined solution.
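To make the scheme above concrete, the following Python sketch applies the limit 1 / limit 2 rule: solutions within limit 1 of the median keep full weight, weights fall off linearly out to limit 2, and anything beyond limit 2 drops out of the combined output. This is a minimal illustration of the idea rather than the embodiment's actual code; the limit values, the linear transition, and the simple weighted mean used to combine the surviving solutions are all assumptions made for the example.

```python
import statistics

def threshold_weights(solutions, limit1, limit2):
    """Weight each strapdown output by its distance from the median:
    full weight inside limit1, linear falloff to zero between limit1
    and limit2, zero weight beyond limit2."""
    median = statistics.median(solutions)
    weights = []
    for s in solutions:
        d = abs(s - median)
        if d <= limit1:
            weights.append(1.0)
        elif d >= limit2:
            weights.append(0.0)
        else:
            weights.append((limit2 - d) / (limit2 - limit1))
    return weights

def combine(solutions, weights):
    """Weighted mean of the solutions that still carry weight."""
    total = sum(weights)
    if total == 0:
        return None  # no valid combined output
    return sum(w * s for w, s in zip(weights, solutions)) / total

# Four pitch solutions (degrees); the last sensor is drifting away.
pitches = [2.1, 2.0, 2.3, 5.9]
w = threshold_weights(pitches, limit1=0.5, limit2=1.5)
print(w)                    # [1.0, 1.0, 1.0, 0.0]; the drifting solution is dropped
print(combine(pitches, w))  # ~2.13, unaffected by the drifting sensor
```

Because the weight ramps down gradually between the two limits, a solution that slowly drifts away is phased out without producing a step in the combined roll, pitch, or heading.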
One reason for using the median instead of the mean may be that as one solution begins to move away from the others, the median does not move off with it, whereas the mean does.
Circular Median and Circular Weighted Mean
According to an example aspect, for roll and heading, a circular version of the median and weighted mean may be computed. A circular median may be computed by taking the sine and cosine of all the angles, finding the median of the sines and cosines, and computing the quadrant-specific arctangent using the median sine and cosine. A weighted circular mean may be computed by taking the sines and cosines of all the angles, and computing a weighted mean of the sines and cosines. Then, the quadrant-specific arctangent may be computed from the weighted mean of the sines and the weighted mean of the cosines. For pitch, a simple median and weighted mean can be used, or a circular version, because the range for pitch is only +/−90 degrees.
Output Validity
According to this example embodiment, the attitude validity may be determined not by testing the individual sensors, but by comparing the output attitude/heading solutions. If one solution begins to drift away from the others, it may be dropped, gradually phased out by the weighting scheme described above (or any other suitable scheme). In order for the combined solution to remain valid, a minimum number of solutions can be established, and that minimum number of solutions may be maintained within some minimum range of the median (described above). Heading validity may be established separately from roll/pitch validity, because in some installations there may be no heading reference (such as a magnetometer), so the system outputs only roll and pitch.
FIG. 1 illustrates a system according to an example embodiment. As shown in FIG. 1, a system can include a plurality of three-axis sensors 110, 112, 114. There can be more or fewer than three such sensors. Moreover, the sensors may be other kinds of sensors, such as two-axis sensors or eight-axis sensors. These sensors, therefore, are provided as an illustration and not by way of limitation. Each of sensors 110, 112, 114 may be a strapdown sensor. A strapdown sensor may be a sensor that does not require an inertial platform as a mounting point, but may be strapped down at any desired place on a vehicle. The particular mounting mechanism of strapping with straps is not required. Optionally, sensors 110, 112, 114 may be mounted to a rotating platform. The system can also include controller 120. Controller 120 can be any suitable hardware device, such as an application-specific integrated circuit (ASIC) or central processing unit (CPU). For example, controller 120 may be one or more chips in a line replaceable unit (LRU). According to one or more example aspects, the controller 120 may be packaged together with the sensors 110, 112, 114. Alternatively, or in addition, the controller 120 may be part of a vehicle guidance system of a vehicle. The vehicle may be, for example, an unmanned aerial vehicle (UAV) or other vehicle. Controller 120 may be configured to receive, as inputs, the outputs of sensors 110, 112, 114. The sensors 110, 112, 114 may provide raw outputs or signals representative of roll, pitch, and optionally heading. The sensors 110, 112, 114 may provide roll, pitch, and heading in a coordinate system of the corresponding sensor.
The controller 120 may then be calibrated to interpret the sensor data using a predetermined motion of the device, by comparison to other known values, or in any other desired way. Optionally, the controller 120 may determine each of a plurality of strapdown solutions. Each strapdown solution may provide roll and pitch. Optionally, each strapdown solution may also include heading. The controller 120 may weight each solution, or part thereof, based on a relation between a given output and the other outputs.
There are various ways that this weighting can be done. For example, the controller 120 may determine a median of the solutions, and may weight each solution based on a relation between a given solution and the median. The controller may then determine roll, pitch, and heading for the device based on the weighted plurality of solutions. According to one or more example embodiments, the weighting can take account of the roll and pitch separately, while in other example embodiments, roll and pitch can be weighted together. According to one or more example embodiments, heading may be provided by a different underlying sensor type. Thus, in certain example embodiments, the heading may be weighted separately from pitch and roll, even when pitch and roll are weighted together. The controller can be configured to weight a given solution with a weight of 1 when the given solution is within a first predetermined threshold of the median. The controller can be configured to weight the given solution with a weight that scales from 1 to 0 when the given solution is beyond the first predetermined threshold of the median but within a second predetermined threshold of the median. This scaling can be a linear scale or any other desired scale. The use of a linear scale may permit a relatively smooth and graceful transition from a solution being considered to a solution being eliminated. The controller can be configured to weight the solution with a weight of 0 when the given solution is beyond the second predetermined threshold of the median. This can be a way in which the solution can be considered as invalid. After a predetermined time of a sensor's solution being weighted as zero, that solution can be removed from consideration, even as to determining a median. In some cases, this may mean removing the entire sensor from consideration or merely removing a particular roll, pitch, or heading solution from consideration. According to one or more example embodiments, the heading solution may be removed from consideration while the roll and pitch solutions may continue to be considered. This approach may have an advantage of permitting sensors to continue in partial use.
The median of at least one of the roll, the pitch, or the heading can be determined by computing a circular median or mean, as explained above. This calculation can be performed by the controller 120.
The controller 120 can report the roll, pitch, and heading of the device, for example to a navigation system 130 of the device. The navigation system 130 of the device may, for example, be an autopilot system. The navigation system 130 may include its own memory, processors, computer program instructions, and the like. Alternatively, the navigation system 130 may be integrated with the controller 120 as a single unit, including by way of example only as a single chip.
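The circular median and circular weighted mean referred to above can be sketched as follows. This is a minimal illustration of the sine/cosine-and-arctangent procedure the text describes, not the embodiment's code; working in degrees and wrapping results into [0, 360) are assumptions made for readability.

```python
import math
import statistics

def circular_median(angles_deg):
    """Median of angles computed from the median sine and cosine via a
    quadrant-specific arctangent, so headings straddling 0/360 degrees
    are handled correctly."""
    sines = [math.sin(math.radians(a)) for a in angles_deg]
    cosines = [math.cos(math.radians(a)) for a in angles_deg]
    return math.degrees(math.atan2(statistics.median(sines),
                                   statistics.median(cosines))) % 360.0

def circular_weighted_mean(angles_deg, weights):
    """Weighted mean of angles via weighted sines/cosines and atan2."""
    s = sum(w * math.sin(math.radians(a)) for a, w in zip(angles_deg, weights))
    c = sum(w * math.cos(math.radians(a)) for a, w in zip(angles_deg, weights))
    return math.degrees(math.atan2(s, c)) % 360.0

# Headings straddling north: a plain arithmetic mean of 359, 1 and 2
# degrees would be about 120.7 degrees, which is clearly wrong.
headings = [359.0, 1.0, 2.0]
print(circular_median(headings))                    # ~1.0 degree
print(circular_weighted_mean(headings, [1, 1, 1]))  # ~0.7 degrees
```

A linear median or mean would fail exactly in the case the passage calls out: headings near the 0/360-degree wrap.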
The navigational system 130 may, based on output of the controller, provide commands to control surface(s) 140 of the device and/or provide commands to engine(s) 145 of the device. For example, the navigational system 130 may determine that a roll, pitch, or heading of the device should be altered, and consequently may send a message to a rudder, as an example of control surface(s) 140, to change positions. In a copter-based implementation, such as a quadcopter or any other multirotor helicopter, the engine(s) 145 may similarly have their speed adjusted by the navigation system 130 to correct a roll, pitch, or heading to a desired roll, pitch, or heading based on information provided from the controller 120 based on data sourced by three-axis sensors 110, 112, 114.
The controller 120 may also provide information, based on data sourced by three-axis sensors 110, 112, 114, to at least one user interface 150. The interface 150 may be a user interface of a navigational display either in the device (as shown in FIG. 1) or remote from the device (not pictured). The user interface 150 may have its own graphics card, display, processor, and memory, or may be integrated with the controller 120. The user interface 150 may use the information from controller 120 to display the device and/or the environment of the device in an appropriate attitude. The user interface and navigation system 130 may receive additional information from other units, such as from an altimeter 160, which may be a barometric altimeter.
FIG. 2 illustrates a method according to an example embodiment. The method of FIG. 2 may, for example, be implemented using the system of FIG. 1. The method of FIG. 2 can include, at 210, receiving, at a controller, output of a plurality of three-axis sensors configured to measure physical quantities (e.g., acceleration, rotational rate), from which can be computed roll, pitch, and heading for a device. This may be controller 120 and sensors 110, 112, and 114 in FIG. 1, for example.
The method of FIG. 2 can also include, at 220, determining, by the controller, a plurality of solutions, each solution of the plurality of solutions based on a respective output of the plurality of three-axis sensors. The method can further include, at 230, weighting, by the controller, each of the plurality of solutions based on a relation between a given solution and the other solutions of the plurality of solutions. The weighting can be performed separately for heading.
The method can additionally include, at 240, reporting, by the controller, the roll, pitch, and heading of the device. According to one or more example embodiments, only the roll and pitch may be reported. The reporting can include reporting the roll, pitch, and heading of the device to a navigation system, to a user interface of an aircraft, or to both. Reporting to other devices is also permitted.
The method can include, at 235, obtaining a median of the plurality of solutions. Thus, the relation by which the weighting occurs can be a relation to the median. The median of at least one of the roll, the pitch, or the heading can be determined by computing a circular median or circular mean, as described above. The weighting can include weighting a given solution of the plurality of solutions with a weight of 1 when the given solution is within a first predetermined threshold of the median.
The weighting can also include weighting the given solution of the plurality of solutions with a weight that scales from 1 to 0 when the given solution is beyond the first predetermined threshold of the median but within a second predetermined threshold of the median. The weighting can further include weighting the given solution of the plurality of solutions with a weight of 0 when the given solution is beyond the second predetermined threshold of the median.
Weighting Outputs of Strapdown Solutions Using Application of Gaussian Curve
According to one or more example embodiments, a Gaussian curve may be iteratively determined for the outputs, including one or more of roll, pitch, and heading, of multiple sensors, and a weight may be applied to one or more of the sensor outputs based on the Gaussian curve. In a conventional wavelet denoising approach, noise may be assumed to be additive white Gaussian. A variance of the noise is estimated from the median value of the wavelet coefficients at the first scale. Then, a threshold is determined based on normalizing the noise distribution. Noisy coefficients and the best threshold may be determined. The inverse wavelet transform is then performed, and the resulting signal is the denoised signal. However, according to one or more example embodiments, rather than removing white noise by assuming that the noise has a Gaussian profile, a Gaussian curve may be iteratively applied to the sensor outputs, and the sensor outputs may then be iteratively weighted based on their position on the curve. This iterative application of the Gaussian profile and iterative weighting may avoid a step response in which one sample/sensor output goes from being considered to being omitted entirely. The weighting of samples/sensor outputs may instead be reduced gradually. When the Gaussian curve is applied iteratively, at each iteration the values near the center of the curve may receive full or near-full weighting, while values closer to the tails of the function may receive less, near-zero, or zero weight as they approach the end of the tail.
The Gaussian curve applied may be a Probability Distribution Function (PDF), which is bell-shaped, continuous, and smooth. FIG. 3 illustrates a PDF according to an example aspect. Alternately, any of a variety of other curve types may be used, including, but not limited to, a Cauchy Distribution and a Logistic Distribution.
FIGS. 4A and 4B illustrate the difference in results between applying an average weighting method and applying a Gaussian method according to this example embodiment: FIG. 4A to a sample set A (12.1, 11.5, 10.8, and 16.7) and FIG. 4B to a sample set B (12.1, 11.5, 10.8, and 22.3). In sample set A, the single outlier is 16.7, while in sample set B, the single outlier is 22.3. It is evident that the average output shifted substantially, from 12.775 to 14.175, between sample set A and sample set B, due to the effect of the outlier. However, the Gaussian-weighted output shifted only slightly, from 11.998 to 11.891, discounting the faulty sensor represented by the outlier. In the examples of FIGS. 4A and 4B, the width of the Gaussian distribution (weights) is a linear function of the difference between the maximum and minimum samples. The center of the Gaussian distribution is the average of all the samples.
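The behaviour described for FIGS. 4A and 4B can be reproduced qualitatively with a short sketch. The exact coefficient relating the Gaussian width to the sample range is not given in the text, so the width_scale below is an assumption, and the sketch therefore does not reproduce the reported 11.998 and 11.891 values exactly; it does show the same pattern, with the plain average chasing the outlier while the Gaussian-weighted output barely moves.

```python
import math

def gaussian_weighted_mean(samples, width_scale=0.25):
    """Weight each sample by a Gaussian centred on the sample mean.
    The width (sigma) is a linear function of the sample range, as
    described above; width_scale is an assumed coefficient."""
    center = sum(samples) / len(samples)
    sigma = max(width_scale * (max(samples) - min(samples)), 1e-9)
    weights = [math.exp(-((s - center) ** 2) / (2.0 * sigma ** 2)) for s in samples]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

# Sample sets from FIGS. 4A and 4B; the last value is the outlier.
set_a = [12.1, 11.5, 10.8, 16.7]
set_b = [12.1, 11.5, 10.8, 22.3]

for name, samples in (("A", set_a), ("B", set_b)):
    plain_avg = sum(samples) / len(samples)
    print(name, round(plain_avg, 3), round(gaussian_weighted_mean(samples), 3))
# Plain average: 12.775 for set A vs 14.175 for set B (dragged by the outlier).
# Gaussian-weighted: stays close to 12 in both cases, discounting the
# faulty sensor represented by the outlier.
```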
Therefore, according to an example embodiment of the solution, the width and the center of the weighting curve move with the sample set. These adjustments may be calculated by one of skill in the art. According to one or more example embodiments, a Gaussian curve may be applied to sensor outputs related to the roll and pitch of a device, and a different Gaussian curve may be applied to sensor outputs related to the heading of the device. In this way, the solutions related to the roll and pitch of the device may be weighted differently than the solutions related to the heading of the device. FIG. 4C illustrates examples of Gaussian curves with different centers and weights.
Example embodiments described above may be practically applied in an aircraft, such as a UAV. Nevertheless, one or more example embodiments may be used in any of a variety of manned and unmanned aircraft, including rotorcraft, spacecraft, UAVs, and missiles, in any of a variety of manned and unmanned watercraft, including surface craft, hovercraft, and submarines, and in hand-held devices. Other practical implementations and use cases are also considered.
It may be understood that the exemplary embodiments described herein may be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each exemplary embodiment may be considered as available for other similar features or aspects in other exemplary embodiments. While exemplary embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined by the following claims.
In Berengaut’s (The Estate of Wormwood and Honey, 2012) second novel, two women discuss love, physics, infidelity, polyamory, mathematics, the Holocaust and the importance of family. Imagine if My Dinner with Andre, with its emphasis on dialogue and the nuanced analysis of past adventure and philosophy, took place between two contemporary, highly accomplished women having lunch. Berengaut has accomplished something remarkable: a novel composed entirely of dialogue, with no chapter breaks, that is riveting from beginning to end. Sabine Stern is an academic who is regularly invited to speak at top-tier universities, while Renata Rubinstein is a world-famous, wealthy intellectual married to a genius mathematician named Mark. Ostensibly, Sabine has arranged to meet Renata about a “personal matter,” but the conversation takes on incredible scope and depth, traversing the two women’s vast experience and knowledge, including their ancestors’ time in Nazi Germany and concepts of poetry and sexual fidelity. The narrative begins to take shape when Sabine mentions that she received an email out of the blue from Renata’s husband about a mathematical concept related to her work. Gradually, the women discover that their connection is about more than a simple chance email; the interweaving of their backgrounds, philosophies and approaches toward living has the potential to dramatically alter each of their destinies. The book’s strength is simultaneously its weakness. As Sabine and Renata discuss at one point: “It’s hard to believe that, once upon the time, people used to read philosophy for pleasure.” “It is a rather strange experience, reading those books. You understand the words, you sort of think that you understand the sentences, but the sense of paragraphs—not to say anything about whole chapters—is completely elusive.” Indeed, the engrossing narrative, which twists and turns through a variety of historical anecdotes and personal experiences, has no natural breaks, almost forcing readers to finish it in one sitting. However, the experience is a richly rewarding one, and the surprise ending is poignant without being sentimental. These mature, thoughtful women are unlike almost any others in popular contemporary literature, and their conversation—long, gorgeous, encompassing—is one of the most memorable in literature of the last 10 years. An unusually structured, thoroughly researched and deeply felt work that creates an intimate portrait of two women and the decades they have thoughtfully inhabited.
https://www.kirkusreviews.com/book-reviews/julian-berengaut/isnt-easy-me/print/
Guide to the Process of Writing an Essay
To produce a good-quality essay, it is important that you understand the writing process. Every essay writer should follow these three stages when writing an essay:
Pre-Writing Stage: Brainstorming and Preparation
Writing Stage: Writing the Essay
Post-Writing Stage: Proofreading and Cross-Checking
Each of these stages is explained in detail below.
Pre-Writing Stage: Brainstorming and Preparation
Brainstorming and preparation make up the pre-writing stage. You can also find out more about how to write paper for me. At this stage, you collect your thoughts and the research material required for your essay. It includes the following steps:
Understanding the requirement: You need to understand what your instructor has asked of you. You should understand the question; this may include narrowing the requirements down to the key deliverable points.
Picking a topic: The next step is to pick a suitable topic for your essay. It may be a topic that matches your interests or your knowledge.
Conducting research: After you have picked a topic, you should conduct research. To research means to develop your command of the topic and to collect enough information to support your arguments.
Settling on a thesis statement: A thesis statement is a short explanation of your topic. It gives readers an overview of what they will go on to read in the essay. A thesis statement should be settled at this stage so that you know the direction of the essay.
Creating an outline: The last step of the pre-writing stage is to create an outline for your essay. An outline is a comprehensive blueprint of your essay in point form. It includes a one-line statement for each of your body paragraphs.
Writing Stage: Writing the Essay
This is the stage where you write the essay using all the information from the pre-writing stage. A good essay can be written by keeping the following points in mind:
Write your essay according to the standard structure: An essay should have an introduction, a body, and a conclusion. The introduction is a single paragraph, the first section of your essay, and it presents the topic. The body of the essay should have at least three paragraphs. Each paragraph should follow the format of a topic sentence, supporting detail, evidence, and a closing sentence. Such a paragraph structure will bring clarity and readability to your essay. The conclusion is the last part of your essay and summarizes the whole essay in a single paragraph. It is better if the essay ends on a positive or hopeful note. Alternatively, you can end it with something for the readers to think about. If you need help, you can get it from an online paper writing service.
Start the essay with an attention grabber: Attention grabbers can be famous quotations or any surprising fact about the topic. You can also give your essay an anecdotal start. These attention grabbers draw your reader into the essay.
Give the background information: Make sure that you give an appropriate amount of background information about the topic. This may include the history of the topic or definitions of some fundamental terms.
Place the thesis statement within the introduction: Your thesis statement is most convincing if you place it within the introduction. The thesis statement is ideally positioned toward the end of the introductory paragraph.
Post-Writing Stage: Proofreading and Cross-Checking
After you are done writing your essay, always have it proofread by an English-language expert or by anyone else who knows the subject. Also, cross-check whether you have supported the stance in your thesis statement throughout the essay. Grammatical or logical mistakes in your essay can be eliminated through proofreading and cross-checking. Expert feedback can also help you improve your essay.
These essay writing stages ensure that a writer can produce a good essay without much difficulty. It is important to follow the stages mentioned above, as each stage is a step towards earning good marks.
https://bresdel.com/blogs/100669/Guide-about-Process-of-Writing-an-Essay
Feb 13, 2018 Denver Public Works and the NDCC hosted an open house on January 17 to update community members on plans to build a bicycle/pedestrian bridge in the Elyria-Swansea neighborhood and to show what the bridge will look like. The bridge will span the railroad tracks near the intersection of 47th Avenue and York Street, enabling people to walk safely between the Elyria and Swansea neighborhoods when trains are at the crossing. Construction is expected to begin in early 2019 and be completed in early 2020. At last month's meeting, community members were able to view an animated flyover video of the structure with its stairways and triangle-shaped ramps on either side of the train tracks. Display boards showing various possibilities for bridge aesthetics and public space uses also allowed the community to provide input on elements like concrete textures, railing styles, lighting levels and the use and character of public spaces next to the bridge.
https://www.denvergov.org/content/denvergov/en/north-denver-cornerstone-collaborative/latest-news-from-the-ndcc/2018/47th-and-york-open-house.html
The makeup look I created for the blogger meet-up at the Museum of Decorative Arts and Design. To be honest, for the meet-up I just wanted something bold. I like to play with my makeup looks when attending events, because usually going to the office I can't be bothered to do a full-on face - mascara, some brow gel and maybe a lipstick is usually my go-to. As I'm still playing with my new Morphe Brushes eyeshadow palette, I decided to play with the green eyeshadows. At first I was going for a really dark green smokey eye, but I really liked the bright green shade (the one in the center of the lid) so I decided to brighten up that smokey eye with it. The next post is going to be the #ootd from that day, so stay tuned!!
10 comments on "MOTD | GREEN LANTERN"
OMG!!!! Ammmazzzing! That green eyeshadow is just gorgeous on you!
Yes, they don't state it on their website. The shipping calculation only appears once you start placing an order and enter the country you are ordering to. The last time I checked it was around 19 euros.
https://www.alksne.lv/2015/10/green-lantern.html
How do we combat the “culture of disengagement” (Cech, 2014) in engineering education? How do we effectively prepare students for the sociotechnical nature of engineering practice? As engineering educators, our responses to these questions often emphasize contextualization. Efforts to encourage engagement with public welfare, sociotechnical thinking, or social justice among engineering students often begin - and sometimes end - with illuminating the broader context of engineering practice and problems. For socially-minded engineering educators, contextualization is nearly always a virtue. This paper analyzes and critiques practices of contextualizing engineering. Based on a qualitative content review of recent engineering education literature, we first describe and classify different modes of contextualization. In some cases, contextualizing means adding personal context or alternative perspectives to cultivate empathy with users or stakeholders (e.g. Gupta et al., 2016). In others, contextualization is part of integrating sociotechnical thinking into engineering curriculum (e.g. Claussen et al., 2019). This takes a variety of forms, but often includes examination of the socio-cultural contexts of engineering problems and foregrounding the social aspects of engineering problem definition (e.g. Erickson et al., 2020). A third mode of contextualization is found in social justice-centered approaches to engineering, which contextualize by emphasizing the often obscured power relations that engineering contains and upholds (Riley and Claris, 2003). The first two approaches take contextualization as their primary end. Adding additional context is intended to deepen students’ understanding of a problem, but not necessarily to suggest how they ought to solve it. The third approach, social justice-oriented engineering, takes a stronger normative stance. Contextualization here is a means to help students identify social injustices that engineers can then help to ameliorate (Leydens, Lucena, and Nieusma, 2014). We interpret the results of our content review through our personal experiences as researchers and educators in STS and engineering education. We, like many engineering educators, are wary of overly prescriptive ethics instruction which elides power dynamics and places too much onus on individual actors (Tang and Nieusma, 2017). Contextualization as an end is a tempting solution; however, we also recognize the risks of illuminating complexity without providing direction (Nieusma, 2015). We see flaws in our own balancing act, often defaulting to more contextualization in an effort to render content more acceptable to students and engineering colleagues, or to avoid charges of bias. Ultimately, we argue for a balance of contextualization and normativity. We promote an alternative approach to contextualizing engineering that emphasizes engineers’ civic responsibilities and, crucially, the integration of their intersectional roles as citizens and professionals. This mode of contextualization embraces the idea of sociotechnical thinking but encourages engineers to work towards public welfare as an end goal.
https://peer.asee.org/contextualization-as-virtue-in-engineering-education
The Center For Hellenic Studies collaborates with several partners to promote Hellenic Studies and the electronic publication of scholarly works. Click on the links following each overview to read more about CHS partners.
The Milman Parry Collection
The Milman Parry Collection of Oral Literature is the largest single repository of South Slavic heroic song in the world. It comprises the following separate collections, all of which are currently housed in Harvard University's Widener Library, Room C:
- The texts and recordings of oral literature, including epic, lyric songs, and ballads, some stories, and conversations with singers and others, made by Professor Milman Parry of the Department of the Classics at Harvard University during the summer of 1933 and from June 1934 to September 1935, in Yugoslavia. Over 3,500 double-sided aluminum discs, with a playing time of ca. 4 min. each. Transcriptions of these songs are contained in ninety-five notebooks (14 cm. x 14 cm., 120 sides in each); dictated songs are contained in ca. 800 notebooks (14 cm. x 14 cm., 70 sides in each).
- The Albanian Collection of some one hundred dictated epic texts was made by Lord in the north Albanian mountains in the Fall of 1937. These texts are contained in eleven notebooks (14 cm. x 14 cm., 200 sides in each).
- The Lord Collection consists of epic texts collected by him in Yugoslavia in the summers of 1950, 1951, and 1966. The last of these is little known, but contains Christian songs from the mountain ranges from Niš to Prijepolje. These songs are contained on thirty-five reel tapes (acetate).
- The Lord and Bynum Collection consists of texts collected by Lord and Bynum in Yugoslavia in the summers of 1962-1965 and 1967.
The curators of the collection have received a Library Digital Initiative grant to digitize the entire collection. The digitization of materials in the Collection will have great benefit for researchers working in the fields of Classics, Folklore and Mythology, Comparative Literature and Slavic. It will also represent an invaluable teaching aid. The proposed project involves the digitization of both sound recordings and handwritten texts contained in the Milman Parry Collection of Oral Literature in Widener Library. As noted above, the Collection comprises several large collections of Serbo-Croatian, Bulgarian, and Albanian songs, among other traditions, but by far the largest and most famous of these collections is that made by Milman Parry and Albert Lord (1933-1935) in the former Yugoslavia. Parry and Lord were interested in the live performances of oral epic and made recordings on ca. 3,500 12" aluminum disks. These recordings were later transcribed in ninety-five notebooks. The first part of the project calls for the digitization of these sound recordings and their handwritten transcriptions. Besides the recorded songs, the 1933-1935 collection also contains songs that were taken down in Bosnia, Macedonia, Hercegovina, Croatia and so on by the traditional method of dictation in some 800 notebooks. Digitized versions of these songs will be tagged to records in an already existing electronic database compiled by Matthew Kay (also published in hardcopy as part of the series The Milman Parry Studies in Oral Tradition), which researchers can use to search by singer, song, region, etc.
Digitization will help make this invaluable collection, described by Béla Bartók as "a most important collection, unique of its kind" (The New York Times, June 28, 1942), available to scholars worldwide through the Milman Parry Collection website, which is currently being developed. In addition to the 1933-35 collection, the curators intend to digitize texts belonging to two smaller collections, the Albanian Collection (1937) and the Lord Collection (1950-51).
Visit the Milman Parry Collection Home Page
The Stoa Consortium
The Center for Hellenic Studies is a partner with The Stoa Consortium, a leader in refereeing electronic scholarly publications in the humanities and developing new models of scholarly collaboration. Along with the Stoa Consortium, the Center enthusiastically promotes such models of electronic publication. The CHS offers contributors mutually non-exclusive rights of electronic publication of all material on the web site. That is, authors may reproduce or publish in other venues any material that they contribute to the CHS web site. Once a project, article, or book has been published by the CHS on its web site, it cannot be removed at the request of the author or editor. The content of various projects may at times be updated, revised, and/or redesigned to meet the changing needs of the web site as a whole, as well as to keep up with scholarship in the field. The authorship and editorship of individual contributions will be recognized and acknowledged on the CHS web site, and instructions for the citation of web publications will be provided to readers.
Visit The Stoa Consortium website
The Ilex Foundation
The Center For Hellenic Studies is currently collaborating with The Ilex Foundation on the digitization of The Milman Parry Collection. The Foundation promotes the study of humanistic traditions that derive from the civilizations of the Mediterranean and the Near East, and is dedicated to the dissemination of the research it promotes, seeking new models of publication that allow for a workable coexistence of hard-copy and electronic versions.
Visit The Ilex Foundation website
The Foundation for the Hellenic World
The Center for Hellenic Studies is a partner with The Foundation for the Hellenic World, a not-for-profit cultural institution based in Athens, Greece. The Foundation uses state-of-the-art information and computer technology in its pursuit of the research, awareness and understanding of Hellenic history and culture.
Visit The Foundation for the Hellenic World website
Lexington Books
The Center for Hellenic Studies is a partner with Lexington Books, which publishes the "Greek Studies: Interdisciplinary Approaches" series. Building on the foundations of scholarship within the disciplines of philology, philosophy, history, and archaeology, this series spans the continuum of Greek traditions extending from the second millennium B.C. to the present, from the Archaic and Classical periods to today. The aim is to enhance perspectives by applying various disciplines to problems that have in the past been treated as the exclusive concern of a single given discipline. This series is edited by CHS Director Gregory Nagy.
Visit the Lexington Books website
Athens Dialogues
The Athens Dialogues project is an interdisciplinary program aiming to explore the relationship between Greek culture and the modern world. In its new phase, emphasis is given to the problems that concern modern man and the world that will take shape over the next few decades.
During the academic years 2012, 2013 and 2014, three events took place yearly. Interdisciplinary approaches, stimulating debates, diachronic perspectives and the broad participation of great minds of our times made the Athens Dialogues a unique opportunity not to be missed.
https://chs.harvard.edu/CHS/article/display/1199.partners-landing
Presented by AIA Houston and the National Organization of Minority Architects (NOMA), in partnership with the City of Houston’s Complete Communities Initiative.
Design professionals, creative individuals, and students are invited to engage with residents, partners, and leaders by participating in a design workshop that will help bring ideas from Houston’s historically underserved communities to life. By lending your knowledge, experience, and skills, we can move these projects forward.
Designers are encouraged to sign up, as teams or individuals, for the community design workshop on Saturday, April 10. Individuals will be matched with others to create teams. All teams will receive an anchor project from one of the Complete Communities action plans to charrette ideas. During the community design workshop, teams will meet with Complete Communities neighborhood planners, developers, and community stakeholders to inform design ideas and provide community insight. Final presentations to the community will be on Saturday, April 24. Workshop participants are encouraged to attend the kickoff panel discussion on February 2. Approved for 2 HSW.
Important Dates:
- Tuesday, February 2, 6pm – Kick-Off Panel Discussion
- Friday, February 5 – Deadline for Design Team registration
- Saturday, April 10, 10am – Community Engagement Workshop (New Date)
- Saturday, April 24, 10am – Community Presentation (New Date)
Participating communities include: Acres Homes, Alief-Westwood, Fort Bend, Gulfton, Kashmere Gardens, Magnolia Park - Manchester, Near Northside, Second Ward, Sunnyside and Third Ward.
Registrations:
- Design Team Workshop Registration (each team member must register and enter the same firm/school and team name)
- Residents or Stakeholder Workshop Registration
Deadline to register is Tuesday, April 6.
What is Complete Communities?
Complete Communities works to improve Houston’s historically underserved neighborhoods by enhancing quality services and amenities through attracting private capital investment. The Complete Communities approach involves working closely with the residents of communities to understand their needs, identify new and existing community projects and opportunities, and facilitate opportunities for collaboration with partners across the city.
How will these designs help?
Participants are expected to attend the kickoff panel discussion to learn more about the Complete Communities initiative, the featured communities, community-led design strategies, and the anchor projects. Complete Communities focuses on capital projects and strategic programming that eliminate structural barriers and improve the quality of life for Houston’s most vulnerable populations. The most significant outcome for the major anchor projects is to correct the historic underinvestment (i.e., redlining) that remains evident in Complete Communities neighborhoods to this day. The projects will serve as a catalyst to attract new development and expand resources throughout the area.
These workshops will include community residents from our Complete Communities, individuals with development interest in the communities, as well as design, planning, and development professionals. It is valuable to have many different stakeholders in one room, not only to provide input, but to create the opportunity to listen to each other.
Workshop Guidelines and additional information can be found here.
https://aiahouston.org/v/event-detail/Designing-for-Impact-Community-Engagement-Workshop/1pu/
New Zealand Mud Snails Documented In The Au Sable River
New Zealand Mud Snails have been discovered in the East Branch of the Au Sable River, just downstream of the Harrietta Hills Trout Farm – Grayling. These invasive snails can move throughout a system and damage the fishery. On June 6, 2016, a regional EPA lab confirmed that the snails discovered were indeed New Zealand Mud Snails. Representatives from Anglers of the Au Sable and the Mason-Griffith Founders Chapter of TU will assist Dr. Mark Luttenton of Grand Valley State University, who first found the snails during routine sediment sampling, in immediately conducting additional sampling to determine whether mud snails are found in any other locations of the upper Au Sable River system.
What should anglers do?
The section of river downstream of the fish farm to the confluence with the mainstem Au Sable is now confirmed to be infested. We encourage anglers and others to avoid fishing or wading in this section of river. All anglers fishing anywhere in the Au Sable or nearby rivers should take great care to prevent the spread of this invasive species.
- Gear should be thoroughly cleaned and disinfected between uses.
- Use a stiff brush to remove visible snails from wading shoes and waders, boats, anchors, ropes, and landing nets. Ensure that all mud and debris is removed. Take care to clean around gravel guards on waders.
- Several chemicals have been found effective for killing these snails. One is full-strength Formula 409 cleaner. Equipment should be soaked in this solution for a minimum of 10 minutes, then thoroughly rinsed with clean water and allowed to dry.
- Snails can live for many days out of water, so inspect wading boots inside and out thoroughly after cleaning.
- It may also be time to reconsider using felt soles. They can pick up and hold more snails and are more difficult to clean and disinfect effectively.
If we all follow these simple precautions we may be able to help stop, or at least slow, the progression of this invasive species.
Mud snail identification
New Zealand Mud Snails are small, up to 1/8” in length (up to 50 can fit on the face of a dime). They range from light brown to black in color; have 5-6 whorls of gradually increasing size from one end to the other; and are right-handed snails (meaning that when held with the small tip upward, the opening faces you on your right side). They reproduce prolifically and can reproduce asexually, meaning a single specimen can result in thousands of offspring.
If you think you have found a New Zealand Mud Snail, contact:
https://www.ausableanglers.org/new-zealand-mud-snails-documented-in-the-au-sable-river/
Whether you’re a sentimentalist or downsizer, here are our tips to help you declutter your home. Attempting to organize closet space can be a daunting challenge. We’ve all been there—staring at mounds of clothes piled so high that one wrong move could send them toppling, trying to pair mismatched shoes strewn across the floor, and spending more than 30 minutes trying to untangle necklaces from bracelets from rings. We interviewed Sarah Giller Nelson, owner of Less is More Organizing Services, and asked her advice on how to efficiently organize a messy closet and how to maintain upkeep once it’s organized. From creating a plan to decluttering, here’s our quick step-by-step guide to organizing your closet. Before approaching a messy closet, Sarah recommends coming up with a plan. “Ask yourself: What do you want the end result to look like? What has been working well? What is the problem?” Perhaps you have too much stuff and need more room to add new clothes to your closet or maybe you have way too many shoes taking up space. “If there is too much stuff, the goal should be to downsize. If it is hard to find things, organize so that all of the same clothing type is together,” says Sarah. After coming up with a goal, whether that be wanting more space for new clothes or desiring to see your closet floor, consider your lifestyle habits and fashion sense. According to Sarah, by considering your wardrobe and lifestyle, you’ll be able to envision how your closet should be organized. “If you go to the gym often, make sure to put all of your gym clothes and accessories where you can access them easily. If you only make it there occasionally, put the gym gear further away,” says Sarah. Once you have a plan in place, remove everything from your closet—hangers, clothes, boxes, etc.—and lay them out on a clean sheet to be organized into piles. Sarah recommends approaching the decluttering process by separating clothing into subcategories. For example, set aside shirts in one pile, pants in another. Then, create a donation pile and include clothes that aren’t stained, haven’t been worn in over a year (formal clothes are an exception), are unflattering, or are uncomfortable. Sarah acknowledges that giving away clothes can be difficult, especially if you spent a lot on them. “It is okay to donate clothes you are only keeping because you spent a lot of money on them. The money has been spent—it’s not coming back, but you do need the space to store the clothes you love.” Once you’re finished adding clothes to the donation pile, put the items in a trash bag and consider dropping them off at a Goodwill or local women’s shelter. Review the remaining clothes, taking stock of your favorites and addressing why you like them so much. “If your other shoes, jeans, dresses, etc. measure up to this standard, keep them. Otherwise, donate. It is so easy to get dressed each morning when you are surrounded only by what you love and need,” says Sarah. Before putting clothes and shoes back into your closet, give the space a thorough cleaning. Though an empty closet can be considered “clean,” there’s still room to wipe down shelves and racks and vacuum or mop the closet floor. Get rid of any trash you find during this cleaning process, like clothing tags, store receipts, and empty shopping bags. After you finish cleaning, be intentional about putting the items you use most in easily accessible places. Store your most-used items at eye level, less-used items below, and least-used items on higher shelves. 
For example, if you have a 9–5 job that consumes most of your week, hang work clothes at eye level, casual clothes a level below, and special occasion clothing in the back of your closet. “Use upper shelves and back corners for items that aren’t used frequently. To make your closet feel put together, invest in coordinating hangers, drawer dividers, and labeled bins,” Sarah recommends. You can also buy chic storage baskets to store clothes and make your closet look even more polished. For smaller accessories like jewelry, sunglasses, and scarves, invest in built-in drawers or labeled bins. Just because you’ve successfully organized your closet doesn’t mean you should forget to assess your closet every now and then. Sarah acknowledges that good habits are the key to keeping any space in your home organized. “Keep a ‘Donate’ bag at the bottom of the closet. As you discover clothes that are stained, unflattering, or you no longer want, put [them] in the bag. Once the bag is full, take it to a shelter or thrift store.” You should also spend 15 minutes each month wiping down shelves, removing trash, and cleaning the closet floors for optimal cleanliness.
https://housemethod.com/rooms/step-by-step-guide-to-organizing-closet/
Chandrayaan-1 now in lunar transfer trajectory Yesterday, following a fifth orbit-raising manoeuvre, the Chandrayaan-1 spacecraft successfully settled into a trajectory that will take it to the Moon. After launch on 22 October, the spacecraft was first injected into an elliptical 7-hr orbit around Earth, between 255 km and 22 860 km above our planet. After five engine firings, Chandrayaan-1 spiralled outwards in increasingly elongated ellipses around Earth, until it reached its lunar transfer orbit on 4 November at 00:26 CET (04:56 Indian standard time). In the fifth and last orbit-raising manoeuvre, the spacecraft’s 440 Newton liquid-fuel propelled engine was fired for about two and a half minutes. The lunar transfer orbit’s farthest point from Earth is about 380 000 km. The spacecraft, which is being monitored from the Spacecraft Control Centre at the Indian Space Research Organisation’s ISRO Telemetry, Tracking and Command Network (ISTRAC) in Bangalore, is working very well. Chandrayaan-1’s Terrain Mapping camera (TMC) was successfully tested on 29 October and provided its first images, depicting Earth. Chandrayaan-1 will approach the Moon on 8 November 2008 when the spacecraft’s liquid-fuel propelled engine will be fired again. This manoeuvre, called lunar orbit insertion, will decelerate the spacecraft to allow the Moon’s gravity to capture it into an elliptical lunar orbit. A series of further manoeuvres will then progressively lower the altitude of Chandrayaan-1 around the Moon until it reaches its final 100 km circular orbit. Note for editors: The previous four orbit-raising manoeuvres took place on 23, 25, 26 and 29 October 2008, respectively. Chandrayaan-1, India’s first mission to venture beyond Earth’s orbit, is led by ISRO. ESA has coordinated and supported the provision of the three European instruments on board (C1XS, SARA, SIR-2), and assisted ISRO in areas such as flight dynamics and is supporting data archiving and processing. As a result of the collaboration, ESA and ISRO will share the data from their respective instruments. Other international partners in the mission include Bulgaria and the USA. For more information:
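As a rough cross-check of the orbit figures quoted above, the period of an elliptical Earth orbit follows from Kepler's third law given the perigee and apogee altitudes. The short Python sketch below is purely illustrative (it is not ISRO or ESA flight software); the 255 km perigee assumed for the transfer orbit is an assumption carried over from the initial orbit, since the article quotes only the apogee of about 380 000 km.

```python
# Illustrative sketch: estimate orbital periods from the perigee/apogee altitudes
# quoted in the article, using Kepler's third law and textbook constants.
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_378.0         # km, Earth's equatorial radius

def orbital_period_hours(perigee_alt_km: float, apogee_alt_km: float) -> float:
    """Period of an elliptical Earth orbit defined by its perigee and apogee altitudes."""
    semi_major_axis = R_EARTH + (perigee_alt_km + apogee_alt_km) / 2.0
    period_s = 2.0 * math.pi * math.sqrt(semi_major_axis ** 3 / MU_EARTH)
    return period_s / 3600.0

# Initial orbit quoted in the article: 255 km x 22,860 km -> roughly a 7-hour orbit.
print(f"Initial orbit:  {orbital_period_hours(255, 22_860):.1f} h")

# Lunar transfer orbit with apogee near 380,000 km (perigee assumed unchanged).
print(f"Transfer orbit: {orbital_period_hours(255, 380_000):.0f} h")
```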
http://www.esa.int/Our_Activities/Operations/Chandrayaan-1_now_in_lunar_transfer_trajectory
- What is capacitance?
- Dielectric.
- Permittivity.
- Dielectric strength and maximum working voltage.
- Calculating the charge on a capacitor.
Capacitance
The amount of energy a capacitor can store depends on the value or CAPACITANCE of the capacitor. Capacitance (symbol C) is measured in the basic unit of the FARAD (symbol F). One Farad is the amount of capacitance that can store 1 Coulomb (6.24 x 10^18 electrons) when it is charged to a voltage of 1 volt. The Farad is much too large a unit for use in electronics, however, so the following sub-units of capacitance are more useful.
|Sub unit     |Abbreviation|Standard notation|
|-------------|------------|-----------------|
|micro Farads |µF          |x 10^-6          |
|nano Farads  |nF          |x 10^-9          |
|pico Farads  |pF          |x 10^-12         |
Remember, however, that when working out problems involving capacitance using the formulae, the values must be in the basic units of Farads, Volts etc. Therefore, when entering a value of 0.47nF, for example, into a formula (or your calculator), it should be entered in Farads using the Engineering Notation version of Standard Form as 0.47 x 10^-9 (Download our Maths Tips booklet for more information).
Capacitance depends on four things:
1. The area of the plates
2. The distance between the plates
3. The type of dielectric material
4. Temperature
Of these four, temperature has the least effect in most capacitors. The value of most capacitors is fairly stable over a "normal" range of temperatures.
Capacitor values may be fixed or variable. Most variable capacitors have a very small value (a few tens or hundreds of pF). The value is varied by either:
- Changing the area of the plates.
- Changing the thickness of the dielectric.
Capacitance (C) is DIRECTLY PROPORTIONAL TO THE AREA OF THE TWO PLATES that directly overlap: the greater the overlapping area, the greater the capacitance. Capacitance is INVERSELY PROPORTIONAL TO THE DISTANCE BETWEEN THE PLATES, i.e. if the plates move apart, the capacitance reduces.
The Dielectric
The electrons on one plate of the capacitor affect the electrons on the other plate by causing the orbits of the electrons within the dielectric material (the insulating layer between the plates) to distort. The amount of distortion depends on the nature of the dielectric material, and this is measured by the permittivity of the material.
Permittivity
Permittivity is quoted for any particular material as RELATIVE PERMITTIVITY, which is a measure of how efficient a dielectric material is. It is a number without units which indicates how much greater the permittivity of the material is than the permittivity of air (or a vacuum), which is given a permittivity of 1 (one). For example, if a dielectric material such as mica has a relative permittivity of 6, this means the capacitor will have a permittivity, and so a capacitance, six times that of one whose dimensions are the same, but whose dielectric is air.
Dielectric Strength
Another important aspect of the dielectric is the DIELECTRIC STRENGTH. This indicates the ability of the dielectric to withstand the voltage placed across it when the capacitor is charged. Ideally the dielectric must be as thin as possible, so giving the maximum capacitance for a given size of component. However, the thinner the dielectric layer, the more easily its insulating properties will break down. The dielectric strength therefore governs the maximum working voltage of a capacitor.
Maximum Working Voltage (VDCwkg max)
It is very important when using capacitors that the maximum working voltage indicated by the manufacturer is not exceeded.
Otherwise there will be a great danger of a sudden insulation breakdown within the capacitor. As it is likely that the maximum voltage existed across the capacitor at this time (hence the breakdown), large currents will flow, with a real risk of fire or explosion in some circuits.
Charge on a Capacitor
The charge (Q) on a capacitor depends on a combination of the above factors, which can be given together as the capacitance (C) and the voltage applied (V). For a component of a given capacitance, the relationship between voltage and charge is constant: increasing the applied voltage results in a proportionally increased charge. This relationship can be expressed in the formula:
Q = CV or C = Q/V or V = Q/C
where V is the voltage applied in Volts, C is the capacitance in Farads, and Q is the quantity of charge in Coulombs. So any of these quantities can be found provided the other two are known. The formulae can easily be re-arranged using a simple triangle similar to the one used for calculating Ohm's Law when carrying out resistor calculations.
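To make the unit handling and the Q = CV relationship concrete, here is a minimal Python sketch (not part of the original tutorial; the 0.47 nF and 12 V values are illustrative only). It converts a sub-unit value into basic units before applying the formula, exactly as the text above recommends.

```python
# Minimal sketch: convert capacitor sub-units to Farads, then apply Q = C * V.
SUBUNIT = {"F": 1.0, "mF": 1e-3, "uF": 1e-6, "nF": 1e-9, "pF": 1e-12}

def to_farads(value: float, unit: str) -> float:
    """Convert e.g. 0.47 nF into basic units (Farads) before using any formula."""
    return value * SUBUNIT[unit]

def charge_coulombs(capacitance_f: float, volts: float) -> float:
    """Q = C * V, with C in Farads and V in Volts, giving Q in Coulombs."""
    return capacitance_f * volts

c = to_farads(0.47, "nF")      # 0.47 nF -> 4.7e-10 F, as in the notation example above
q = charge_coulombs(c, 12.0)   # assume a 12 V supply (illustrative)
print(f"C = {c:.2e} F, Q = {q:.2e} C")  # V = Q/C and C = Q/V follow by rearrangement
```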
https://learnabout-electronics.org/ac_theory/capacitors03.php
Photo credit: © Eatwell101.com
Here’s a great recipe for brioche, and the result is truly delicious: a soft bun with a buttery taste… almost perfect for your breakfasts or afternoon tea! Brioche is the richest of all breads: it is usually made from the same ingredients as ordinary bread, but with added fat, sugar and eggs to create its rich and soft texture.
Ingredients list – Bread Bun Recipe
- 1/2 lb (250g) flour
- 1 ounce (30g) sugar
- 1 teaspoon salt
- 1/3 teaspoon (2g) active dry yeast (or 1/3 ounce – 10g fresh yeast)
- 3 eggs + 1 yolk for glazing
- 6 ounces (165g) softened butter
Baking instructions
1. Dissolve the yeast in warm water and let stand for about ten minutes.
2. Put the flour, sugar and salt in a bowl. Make a well in the flour, pour in the diluted yeast (the yeast must not touch the sugar and salt at the start) and the eggs, then mix together.
3. Add the softened butter in small pieces. Knead the dough for about ten minutes; it should no longer stick to your fingers at the end of kneading.
4. Let the dough rise for about two hours at room temperature. Divide the dough into four and form four small balls of dough. Arrange them in a cake pan and again let the dough rise for two hours. Gently brush the brioche with the egg yolk.
5. Preheat your oven to 360°F (180°C) and bake the bread bun for about 20 minutes. It’s ready!
https://www.eatwell101.com/homemade-bread-bun-recipe
“There are as many definitions of personality as there are personality psychologists” is what Sternberg stated about personality (Sternberg, Intelligence and Personality). Unfortunately, this statement isn’t far from the truth: personality is one of the most general and unclearly defined terms in psychology (Eysenck, 1957). This essay evaluates trait theories of personality on the basis of Block, Weiss and Thorne’s (1979) definition of personality: personality refers to “more or less stable internal factors that make one person’s behaviour consistent from one time to another, and different from the behaviour other people would manifest in comparable situations”. To begin with, it will present a general description of trait theories. It then assesses trait theories on several levels of analysis. It begins by looking at the validity and reliability of assessment forms for traits and the resulting predictive value that specific traits have for behaviour. It then evaluates individual and situational factors that affect predictability, that is, the extent to which, and the situations in which, trait theories can be used to predict behaviour. An assessment of the practical application and benefit that the development of trait theories has had in different areas follows. Finally, trait theories of personality are compared to other personality theories. It will conclude that….
Trait theories focus on describing personality by rating people as high or low on a limited number of traits (characteristics or tendencies to behave in a specific kind of way in specific situations) or dimensions (a continuum of possible traits running between two opposite traits). The theories differ from one another mainly in terms of how many traits or dimensions are considered necessary to adequately describe personality. This can, to a large extent, be explained by the fact that the different theories operate at different levels of generality. Cattell, for example, works at the primary factor level (this gives a more detailed picture of personality, but its reliability and separability are questionable). Eysenck, in contrast, works at a second-order level: Cattell’s 16 factors or traits are intercorrelated, so they can be further factor analysed, and when they are, Eysenck’s two traits appear as superfactors. A description of personality that uses more factors or traits will produce a more differentiated picture in which fewer distinctions are lost, whilst a theory that uses fewer, more general traits will yield more stable results that are more likely to recur in other analyses. The most widely accepted trait theory nowadays is McCrae and Costa’s “Five Factor Theory of Personality” (1987). It claims that people’s personalities can be described using five factors, the “Big Five”. Different theories still name and interpret these factors differently; a widely used way to summarize them, however, is the acronym OCEAN (Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism).
The validity and reliability of assessment forms for trait personality tests are influenced by a number of factors. First of all, personality tests are greatly affected by a person’s mood when taking the tests, which makes them less reliable.
Reliability can be determined by comparing a person’s scores if they take the same test twice (test-retest reliability), comparing scores on two different versions of the same test (interform reliability), or comparing scores on two parts of the same test (e.g., even versus odd questions – split-half reliability). Secondly, the reliability and validity of personality tests can also be affected by the inconsistency between self-report and actual behaviour. A person might desire to convey a particular impression to the person scoring the test or using the test results, and give answers that do not reflect what they would truly do or think, but rather what they think the examiner wants to hear, or what they think will make them look best. Thirdly, there is the possibility of a discrepancy between how others view a person and how the person sees her/himself. People are often unaware of the biases they possess about themselves. Through these factors, the main criticism of trait theories can be understood: personality traits often do not predict or correlate with behaviour.
The predictability of behaviour can be affected by individual and situational factors. First of all, people who describe their behaviour as dependent on the situation, luck or the behaviour of powerful others are difficult to predict because they are controlled from without, not within. Secondly, people who are conscious or defensive about socially desirable things to do or be will not behave according to their natural preferences, but according to what they believe will gain more approval. This, again, makes them more difficult to predict. Thirdly, traits are better predictors of behaviour when the social context is familiar, informal and private; when instructions are general or do not really exist; and when there is a broad range of choice in behaviours and responses.
Bearing in mind the points mentioned above that affect behaviour predictability, ways in which the predictive validity of trait ratings can be increased are now considered. First of all, multiple behavioural observations should be made: a combination of various personality assessment forms, such as interviews, rating scales, personality inventories and projective techniques, is likely to yield better predictions of behaviour. Secondly, raters should be very familiar with the person being rated. Thirdly, it is wise to have several people observe the behaviour in order to avoid subjective ratings. Finally, it is important that assessment forms with the most valid available measures of the attributes in question are selected.
In the following section, the benefits of trait theories of personality are discussed. First of all, according to Allport, trait theories of personality provide a means of understanding the uniqueness of people’s styles and behaviour. As traits initiate and direct the individual’s behaviour in unique ways, knowledge of their operation increases the understanding of individuals immensely. Introversion and extroversion traits are found in most trait theories. In Eysenck’s trait theory of personality especially, these traits are found to play a major role in predicting behaviour in various situations. For example, extroverts have a much higher pain tolerance, while introverts are expected to perform more poorly in the presence of music or any other kind of external stimuli; they will take fewer study breaks and be more vigilant in tasks.
Because they consider less information in stressful conditions than extroverts do, they will come to a decision more quickly and are likely to make ill-informed judgements. They are also more likely to base their judgements of others on stereotypic information when pressured, which could result in unfair decisions about job candidates. Introversion versus extroversion also has implications for suggested diets and health issues. It has, for example, been found that the performance of both introverts and extroverts increases with the consumption of moderate levels of drinks containing caffeine. However, the performance of introverts declines considerably with higher levels of caffeinated coffee, whereas the performance of extraverts continues to improve. Neuroticism versus stability traits, such as those represented in Eysenck’s trait theory, also influence behaviour in significant ways. Neurotics experience high levels of fear and anxiety in stressful situations, such as getting involved in very intimate relationships, and are therefore predicted to try to decrease levels of intimacy (Campbell & Rushton, 1978). Introverts learn rules more quickly and efficiently. Because it is more difficult to condition extraverts, they experience less inhibition with respect to antisocial behaviour; as a result, criminals tend to be extroverts. A further criticism is that the testing is based entirely on self-reports and is therefore likely to be heavily influenced by the respondent’s mood at the time. Trait theorists would argue that a person’s behaviour is consistent and predictable in different situations. However, situationists claim that behaviour varies significantly from situation to situation, and interactionists argue that behaviour is defined by the continuous interaction between the person and the situation (p at work). Humanists and existentialists tend to focus on the understanding part: they believe that much of what we are is far too complex and embedded in history and culture to “predict and control”. Besides, they suggest, predicting and controlling people is, to a considerable extent, unethical. Behaviorists and Freudians, on the other hand, prefer to discuss prediction and control: if an idea is useful, if it works, go with it; understanding, to them, is secondary. A large number of psychologists regard the discovery and validation of the Big Five as one of the major breakthroughs of contemporary personality psychology. This essay examines the successes and failures of trait theories on … levels. 1) It examines whether the measuring instruments used to define traits in these theories are valid and reliable. 2) It then examines the differences and similarities between several trait theories and the relevance these have for their credibility. 3) The predictive value of trait theories for real-life behaviour. 4) Individual and behavioural differences in the quality of predictions made by trait theories. Both Eysenck and Cattell made use of questionnaires or rating scales in which the participants were asked to answer carefully phrased questions concerning themselves. The data assembled were then evaluated by intricate statistical techniques to provide scores indicating the strength of each factor in each individual. Results of research in this area have had limited impact on the development of psychology (neither in the laboratory nor in the clinic). The reasons for this are, amongst others, the relatively low reliability (i.e.
reproducibility) of the data; another is the controversial nature of the statistical techniques. It is also very difficult to make predictions about behaviour in the complex social setting of real life from responses made in the miniature situation of a test (Dictionary of the Mind).
Advantages: The first level of analysis should consist of an evaluation of the achievements and strengths of trait theories. First of all, trait theories are empirical and testable, and they have been tested. Secondly, trait theories have provided a technology for scientific research (which many other kinds of personality theory have not), linking studies of individual differences to general psychology. Many things can be criticised about trait theories as well, however. As Mischel (1968) put it, general traits are an illusion. Questions about how important the situation is in determining behaviour have to be asked; studies such as Zimbardo’s prison experiment point towards the fact that situations play a significant role in determining behaviour. It can furthermore be noticed that trait theorists need to pay more attention to personality in its social context. Furthermore, trait theories tend to generalise a great deal and do not yield an in-depth description of the person tested. In Personality: A Psychological Interpretation (1937), Allport reviewed almost 50 different definitions of personality. The statements by these psychologists clearly show that, with so many definitions in use, there will inevitably be many different theories of personality. Amongst these, this essay examines trait theories of personality, which again are numerous, with many different approaches.
Conclusion: Trait theories do not attempt to explain how a person got the personality that he/she has. Allport described personality as open and constantly evolving, changing and becoming. He found situational influences to have an effect, but behaviour is ultimately determined by the individual’s own perception of these influences. This means that behaviour that seems to be controlled by external forces is really controlled by internal forces. He was convinced that the individual is unique in behaviour and thought, and that the traits people seem to share with others are actually also unique or idiosyncratic. Allport strongly pushed what he called idiographic methods — methods that focus on studying one person at a time, such as interviews, observation, analysis of letters or diaries, and so on.
https://primetimeessay.com/critically-evaluate-trait-theories-personality/
The present invention relates to a method for addressing and searching a unit or word to be searched from an assembly of such units or words arranged in order of the Japanese syllabary or of the alphabet as catchwords in a dictionary. In an attempt to form a dictionary or the like into a corresponding information file, for example, the file is generally characterized in that the code length necessary for memory of the head portion and the content portion of each word (i.e., each unit object to be searched) involves a large range of variation depending on the particular word. Therefore, memory spaces must be allotted and maintained in anticipation with due regard to such variation and these memory spaces are accordingly addressed to permit the search for the corresponding words. According to such a procedure, however, the memory spaces would be unreasonably large. The word itself, on the other hand, may be variable from one character word to a dozen character words and, therefore, the length of a word is variable. Additionally, character combinations follow no particular rule and the number of words are thus countless. If the conventional procedure is employed to search the object as set forth above, a large number of bits will be necessary for a large number of words even when each word contained in a group of words (i.e., assembly of unit objects to be searched) is coded by a serial number indicating the order in which each word of said group of words is arranged. As a result, a large number of detector elements and processor mechanisms will be required and the code portion of the file will be large. Accordingly, the speed at which the object is detected or searched would be necessarily low in practical operation. In addition to the problems set forth above in connection with the information capacity, there is another requirement that the device may establish or the operator may know what position in the assembly of arranged and coded unit words is occupied by a particular word. However, it would be difficult in view of the irregularity peculiar to the character combination of each word to realize the former and it would be impossible in view of limited human ability and large number of words contained in a dictionary or the like to realize the latter unless the address of said particular word to be searched is indicated by another dictionary. Here there would no longer be any efficiency of the mechanical search. It is demanded and desired, therefore, that the operation of search be achieved in such a manner that the elementary factors which form a word are successively put in the order of character combination particular to the word by operating members such as depression keys corresponding to 51 characters of the Japanese syllabary or 26 characters of the English alphabet. Although such mechanical input or reading out has been commonly employed in the information transfer system such as Telex, the information search would require a much more bulky structure of the device in view of the required memory capacity. With respect to combinations of the alphabet, for example, each of 26 alphabet characters may be coded by 5 bits so that 100 bits should be allotted for code marks of each word if the maximum length of a single word is given as the length of 20 characters. The device would be highly costly because of the large space required by the identification marks and the complexity of the detector elements and processing mechanisms. 
To overcome the above disadvantages the present invention provides an improved method in which the number of bits which form code marks for identification or designation of unit objects to be searched (i.e. words) which are successively arranged may be substantially reduced by making the best use of the context in said successive arrangement.
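The arithmetic above (26 letters coded in 5 bits each, so a 20-character word costs about 100 bits) also suggests how an alphabetically ordered list can be exploited: successive entries in a sorted dictionary tend to share long prefixes. The Python sketch below is a hedged illustration using front coding, a standard compression technique for sorted word lists; it is not the encoding claimed by the patent, just one concrete instance of "making the best use of the context in the successive arrangement".

```python
# Front coding of a sorted word list: store, for each word, only the length of the
# prefix it shares with the previous word plus the remaining suffix characters.
def front_code(sorted_words):
    encoded, prev = [], ""
    for word in sorted_words:
        shared = 0
        for a, b in zip(prev, word):
            if a != b:
                break
            shared += 1
        encoded.append((shared, word[shared:]))
        prev = word
    return encoded

def decode(encoded):
    words, prev = [], ""
    for shared, suffix in encoded:
        word = prev[:shared] + suffix
        words.append(word)
        prev = word
    return words

words = ["cat", "catalog", "category", "cater", "dog"]   # toy dictionary, already sorted
pairs = front_code(words)
assert decode(pairs) == words
# Naive cost from the text: 20 characters x 5 bits = 100 bits per word.
# Front-coded cost (rough): ~5 bits for the shared-prefix length + 5 bits per suffix char.
for (shared, suffix), w in zip(pairs, words):
    print(f"{w:10s} shared={shared} suffix={suffix!r} ~{5 + 5 * len(suffix)} bits vs 100 bits")
```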
the background of cybersecurity and cyber attacks. What is a cyber threat? A cyber threat is an act intended to steal data or cause some sort of digital harm. The term is exclusively used to describe information security matters nowadays. A cyber attack is set up against digital devices using cyberspace. Cyberspace is a virtual space that is becoming a metaphor to help us understand the digital weaponry that intends to harm us. The intent of the attacker, as well as the potential impact, depends upon the severity of the attack. These attacks can be quite serious, potentially threatening lives. What is the need to protect from cyber threats? Cyber threats are a very serious issue these days. They can cause electrical blackouts, breaches of national security secrets, theft of valuable, sensitive data like medical records. They can damage phone and computer networks or paralyze systems to steal the data. These threats are getting more serious. The definition of cybersecurity states that cybersecurity risks are present in every organization and aren't always under IT's direct control. Organizational leaders are making technology-related choices every day. So we can say that the data security solutions are also increasing with the increased cyber risk. Types of cybersecurity threats Cybersecurity threats are mainly divided into three broad categories based on the gain that can be achieved by the attackers: Virtually every cyber threat falls into one of these three modes. 6 common cyber threats Now, let's discuss the safety measures that an organization or individual should take. Cyber defense for an individual The cyber defense best practices are simple for individuals. In most cases the battle is between the consumer and the hacker. Some of the important safety measures that can be taken are password hygiene and anti-virus software. Cyber defense for businesses As shown in previous paragraphs, cyber threats are one of the greatest dangers to any organization or business. Organizations are taking serious actions to make themselves safe and secure. One step they are taking is hiring highly skilled cybersecurity professionals as shown by recent surveys.
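The article names password hygiene as one of the basic defences for individuals. As one concrete, developer-side illustration of that idea, the sketch below stores only a salted, slow hash of a password rather than the password itself, using nothing but the Python standard library. This is an illustrative example, not something taken from the article; a production system would normally use a vetted password-hashing library (e.g. bcrypt or Argon2) inside a full authentication framework.

```python
# Illustrative password-hygiene sketch: never store plaintext passwords.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a per-user random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess1234", salt, digest))                     # False
```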
https://vmblog.com/archive/2020/02/27/are-you-safe-from-cyber-threats.aspx
20-25 ea. American Lamb shanks 4 ea. Red wine, 750 ml bottle 1½ ea. Red wine vinegar ¼ cup Juniper berries 4 ea. Allspice berries, whole 4 ea. Black peppercorns, whole 2 tsp. Bay leaves 4 ea. Kosher salt as needed Lamb stock or chicken stock 3-4 cups Mashed potatoes as needed Mint, to garnish as needed Method Macerate golden raisins in port overnight. To blanch garlic, start each time with cold water. Bring to a full rolling boil. Drain and repeat the process two more times. The garlic should be tender. Combine red wine, red wine vinegar, juniper berries, allspice berries, peppercorns and bay leaves. Place lamb shanks in marinade for 2 days, turning daily to ensure the meat is marinated evenly. Preheat the oven to 300°F. Drain, reserving marinade, and season shanks with salt and pepper. Place shanks in a pot that will hold them snugly and add marinade and stock. Cook for approximately 3 hours at 300°F, covered. Turn shanks every 45 minutes-the meat should be nice and tender but still hold to the bone. When tender enough, remove the shanks reserving all the braising juices. Keep shanks warm in a low oven. Strain braising juices of spices and skim off fat. This can be done the day before serving. In a saucepot, add port, macerated golden raisins, and blanched garlic to the strained juices, heat and reduce to a sauce consistency. You may thicken if desired with butter and flour. Plate the heated shanks with mashed potatoes, and pour sauce over the top, arranging garlic and raisins over each serving. Garnish with mint.
https://www.ciaprochef.com/americanlamb/redwinelambshanks/
TALLAHASSEE, Fla. – Undeterred by a shaky start from his veterans, Florida State coach Leonard Hamilton turned to a quartet of youngsters for a spark in his 13th basketball season-opener with the Seminoles, Saturday against tournament-tested Manhattan. Freshmen Dayshawn Watkins, Robbie Berwick and Phil Cofer teamed with sophomore big man Jarquez Smith on a 24-8 closing run, wiping out a 10-point deficit and restoring order, as the Seminoles pulled away to an 81-66 victory at the Donald L. Tucker Center.
Readily admitting that coaches generally shy away from opening opponents like Manhattan – a 25-win team with an NCAA Tournament appearance last season – Hamilton didn’t hesitate to go to his bench when the Jaspers knocked down their first four 3-point attempts and built a 23-13 lead in the opening half. “They [Manhattan] create a lot of havoc and I thought we were extremely tentative at the beginning of the game,” Hamilton said. “The success of this team this year will be in direct proportion to how much we are able to develop a rotation that will be able to perform at an efficient level when some of the guys are not playing well, or are in foul trouble, or you have an injury or illness. “Tonight our guys exceeded some of my expectations. I thought Robbie and Dayshawn played a lot better than they’ve practiced…Their ability to come into the game and be calm and execute the things we had in our game plan said an awful lot about them.”
FSU’s youngsters first solved Manhattan’s pesky zone press, then went about attacking the basket and the deficit. A pair of Cofer free throws tied the game at 30-30 and a free throw by Smith with 1:40 remaining put FSU in front to stay. Junior Devon Bookert and Berwick drained 3-pointers down the stretch for a 37-31 halftime lead that was never seriously threatened. The Seminoles held the Jaspers to 29 percent from the field and enjoyed a commanding 32-12 scoring advantage in the paint in a game that was aesthetically marred by a combined 65 fouls, 80 free throws and 42 turnovers. “That affected rhythm,” said Manhattan coach Steve Masiello, regarding the fouls. “I thought we got a little stagnant…Fouling negates your rhythm and hustle. Give Florida State credit. They got out to our shooters and their size is so imposing that when you get in the lane they force you to make tough 2’s. They’re a very good defensive team. They don’t give you anything easy.”
Florida State received plenty of help offensively, as well, with six Seminoles contributing double-figure scoring efforts. Aaron Thomas led the way with 14 points, while Smith and Kiel Turpin contributed 13 each. Montay Brandon added 12 while Bookert and Berwick finished with 11. Smith was very much a part of the game-changing run in the first half, scoring six of his career-high 13 points, highlighted by a dunk off a Bookert dish that provided an emotional lift and trimmed FSU’s deficit to two. Smith said he was merely playing to his teammates, especially the freshman trio that had provided a spark. “Those three guys always bring energy,” Smith said. “What was running through my head is I have to match their energy. When they got me going, they were able to get everyone else going. When people would sub in, everyone was hyped and ready to play even more… “It’s a special feeling to be able to lift the entire team up from where we started.
When our veterans got back in and picked up where we left off, we felt really good.” One year removed from being a role-playing freshman, Smith is now part of the starting lineup, and his teammates have noticed a dramatic change. “He’s grown a lot,” Brandon said of Smith. “Last year, he’d be pouting if he had a turnover. Today, he drove down the sideline and they took the ball from him. Something triggered and after that he played like a grown man. He wouldn’t have done that last year.” The late first half play of the youngsters certainly left an impression on the most veteran of Seminoles, who scored the first eight points of the second half to build a 14-point lead and never really looked back. “They gave us our energy,” Brandon said. “Seeing them break the press and get us our lead, we didn’t want to lose it because they had worked so hard to gain it.” Added Thomas: “Once we regrouped, relaxed and played our game, everything else fell in place.” The Seminoles will return to the court at home Tuesday to face Northeastern, with tipoff set for 7 p.m.
https://seminoles.com/youngsters-spark-noles-to-season-opening-win/
344 S.W.2d 80 (1961) Daisy BROWN, Appellant, v. KROGER COMPANY, a corporation, Respondent. No. 48293. Supreme Court of Missouri, Division No. 1. March 13, 1961. Briney & Welborn, Bloomfield, Kearby & Calvin, Poplar Bluff, for appellant. Henson & Henson, Poplar Bluff, for respondent. COIL, Commissioner. The appellant, Daisy Brown, as plaintiff below, sought to recover $25,000 as damages for injury she allegedly sustained as the result of the negligence of defendant below, Kroger Company, a corporation. At the close of plaintiff's evidence the trial court directed a defendant's verdict. Plaintiff appealed from the ensuing judgment and contends that the trial court erred in so ruling for the reason that her evidence was sufficient to make a submissible case. *81 Defendant operated a supermarket at Dexter. On February 28, 1958, plaintiff and her husband were customers in that store. Plaintiff wished to buy some bottles of Pepsi-Cola and, having often traded there, knew they were located on the bottom shelf of a 3-shelf soda section. She grasped the handle of a cardboard carton containing six bottles of Pepsi-Cola, intending to place it in the self-service cart she was using. Suddenly she realized that a bottle had fallen from the carton to the floor, had broken, and that her leg was wet and bleeding. She then placed the carton on the floor on its side, stepped back and discovered that she had a cut just above the ankle on the inside of her lower limb. Prior to that time there were no empty cartons or broken glass on the floor in that area and there was no sign or notice warning customers of the danger in picking up old, worn cartons. Plaintiff did not strike the carton or its contents against anything as she moved it from the shelf. There was no person other than plaintiff's husband in the vicinity at the time. Mr. Brown was behind his wife and realized that she had picked up an item and then heard something hit the floor. He then noticed that soda was running on the floor and that his wife's leg was bleeding. He saw a broken bottle and a cardboard carton on the floor on its side with other bottles of Pepsi-Cola still in the carton. He noticed that part of the bottom of the carton was torn loose at the corner. A Kroger clerk testified that he was at the cash register, heard a noise, went to the soda section and saw a cardboard carton on the floor "in the middle" of the broken soda bottle and did not notice whether the carton was wet; that it was a "Mack" carton and its bottom had been torn loose. He shoved it under the shelf and later, either the next day (February 28, 1958, was a Friday) or the following Monday, examined it and saw that it looked as though it was an old carton that had been wet and had dried. Defendant's answers to interrogatories established that Max Merriman was the Kroger store manager on the day of the accident and on April 8, 1959, the date when his deposition was taken by plaintiff; that among his duties as store manager were to "see that all displayed merchandise is properly price-tagged or stamped" and to "promote customer and employee safety within the store at all times." 
Over defendant's objection these parts of the store manager's deposition were admitted as admissions to show notice to and knowledge of Kroger as to the danger involved in using soda cartons that theretofore had been wet: that a bottle of Pepsi-Cola was the heaviest bottle of soda Kroger handled except one; that a carton of six full bottles of Pepsi-Cola weighed ten pounds; that there had been prior occasions when soda bottles had fallen through the bottoms of cartons; that a cardboard carton is "not fit for re-use after it has been wet and saturated with water, the bottom comes out"; that there were occasions when "all vendors, on a rainy day, drive up with no tarp on their truck, with an item like that, with a paper carton, and come in and the bottoms be wet and soft"; that on occasions he as store manager had called the "plants" about the wet cartons and had had wet and damp cartons taken off the shelves when he would observe the weakened condition of the carton after it had been placed on the shelf; that there had been previous similar incidents involving Pepsi-Cola cartons and other kinds of cartons. Defendant contends that those portions of the store manager's deposition were not admissible in evidence because the manager was not a party, was admittedly present in the courtroom and available to testify, and the parts offered "went beyond the testimony necessary to show knowledge and included testimony on the issue of negligence and was therefore inadmissible under the hearsay rule." The record shows that the portions of the deposition in question were offered as *82 admissions to show defendant's notice and knowledge of the condition of the cartons. The objection in the trial court was only that the store manager was not a party and was present in the courtroom and available to testify. Indeed defendant specifically stated that he was not objecting "to the admissibility of that type of evidence, it is your manner in offering the evidence at this time. I don't think you are entitled to do it by deposition when the witness is available in the court room, and the only exception would be if he was a party to the lawsuit." It is apparent that defendant did not object on the ground that part of the proffered statements did not tend to show knowledge on the part of defendant but tended to prove negligence. Defendant did not at the trial and does not in its brief point out wherein the parts of the store manager's deposition read in evidence "included testimony on the issue of negligence." It is not apparent to us that any of the admitted statements did not pertain to matters tending to show knowledge on the part of the defendant through its store manager. We need not therefore consider the question whether deposition statements of the store manager pertaining to matters within the scope of his authority at the time of the accident and at the time of the deposition were admissible against his employer as tending to prove negligence on the employer's part, even though the manager was not a party and was available as a trial witness. We consider and rule only the question whether the store manager's deposition statements tending to show notice to and knowledge on the part of his employer, the defendant in the case on trial, were admissible for the purpose of showing such notice and knowledge even though the store manager was not a party to the lawsuit and was in the courtroom at the time the parts of his deposition were offered. 
Plaintiff's averments of defendant's specific negligence made the existence of the knowledge of the Kroger Company a material fact question. The deposition statements made by Kroger's store manager pertained to matters within the scope of the manager's employment both at the time of the accident and at the time he gave his deposition. The statements were offered against Kroger as tending to prove its knowledge as to whether wet soda cartons were supplied and the suitability of such cartons for use as containers for bottles of Pepsi-Cola. Such knowledge could have been acquired by Kroger only through its proper employee or other agent and evidence tending to show that knowledge in the form of deposition statements by its employee whose assigned duties encompassed the necessity that he acquire knowledge on that subject in order to fulfill his duties as store manager was admissible on the question of Kroger's knowledge, even though the manager was not a party and was present and available to testify at the trial. Henry v. First National Bank of Kansas City, 232 Mo.App. 1071, 115 S.W.2d 121, 132[29], 133[32] [33] [34]; State ex rel. S. S. Kresge Co. v. Shain, 340 Mo. 145, 101 S.W.2d 14, 17[2-5]. See also 31 C.J.S. Evidence § 348, pp. 1121, 1122, and Bowyer v. Te-Co., Inc., Mo., 310 S.W.2d 892, 895[1]. Defendant calls attention to Section 492.400 RSMo 1949, V.A.M.S., providing when a deposition may be read in evidence. That statute, of course, does not affect the admissibility of portions of a deposition where they are admissible as admissions against the interest of a party; and that is true even though the deponent is present in court. Henry v. First National Bank of Kansas City, supra, 232 Mo.App. 1071, 115 S.W.2d 133[33]. The parties agree that the principles of law involved in storekeeper-invitee cases generally are applicable to the present facts. Certainly no new question is presented in so far as concerns those principles of law. The decisive questions here pertain to the application of those well-established principles to these particular facts. *83 Defendant owed plaintiff-invitee the duty to exercise ordinary care to keep its premises reasonably safe for her use including the exercise of ordinary care to avoid causing injury to her while she was on defendant's premises properly engaged in the usual activities of a shopper following the procedures and using the facilities provided. The basis for liability in this type of case is a knowledge of the storekeeper of an unsafe condition or of a danger to a shopper, superior to that of the invitee. Consequently, the storekeeper must have actual or constructive knowledge of the danger and he is not liable for injury which results from a danger that is obvious or is or should be as well known to the invitee as to the storekeeper. And it follows, of course, that there is no duty on the storekeeper to warn if the danger is obvious or is or should be as well known to the invitee as to the inviter because a warning would not acquaint the invitee with any fact that he did not or should not know. Stafford v. Fred Wolferman, Inc., Mo., 307 S.W.2d 468, 473[1-3]; Barken v. S. S. Kresge Co., Mo.App., 117 S.W.2d 674, 678[1-3]; 65 C.J.S. Negligence § 45a, p. 524. Defendant contends there was no evidence tending to prove that the carton was defective prior to the incident and, even if so, no evidence that defendant either placed the carton on the shelf or that it had actual or constructive notice of its presence there. 
Defendant, to support its contention of lack of evidence to show that the carton was defective, calls attention to the fact that defendant's clerk testified that when he looked at the carton the next day or the following Monday it looked as though it had been wet and had then faded and dried. Defendant insists that the only reasonable conclusion from the clerk's testimony was that the carton had become wet when it was on the floor at the place where the Pepsi-Cola had spilled and, therefore, the fact that the carton on the following day or the following Monday looked as though it had been wet and had dried was no evidence that it was in that condition when plaintiff lifted it from the soda shelf. We do not agree. In the first place the clerk's testimony does not compel the inference that the carton became wet at the time plaintiff placed the carton on the floor on its side with the remaining Pepsi-Cola bottles still in it. The clerk's testimony in that respect was this: "Q. When you pushed the carton under the shelf, did you notice whether it was wet or not then? A. No, I never noticed then. Q. Where was it with reference to this bottle of soda on the floor? A. Well, I would say it was right in the middle of it." That testimony supports an inference that "right in the middle of it" may have meant in the middle of the broken glass that was on the floor. Furthermore, there was no testimony as to where the Pepsi-Cola that came from the broken bottle went, i. e., whether it spread over a large area, formed a puddle, or what the situation was with respect to the wetness on the floor. But more important is the fact that defendant's contention overlooks the positive testimony that the bottom of the carton gave way and the bottle of Pepsi-Cola fell through a hole in the bottom of the carton prior to the time plaintiff placed the carton on the floor. It seems apparent, therefore, that the carton was defective as it sat on the soda shelf. Consequently, even if the side of the carton got damp or wet as it laid on its side after the accident, it was unlikely that the bottom of the carton then got wet, and, in any event, the conclusion is not compelled that the carton was not defective by reason of having been wet and having dried prior to the time plaintiff attempted to move it from the soda shelf to the self-service cart. To support its contention that defendant had no knowledge, actual or constructive, that the defective carton was on the shelf, defendant suggests that its store manager's admissions that he knew that cartons which had been wet and had then dried were not *84 fit for use and that he knew that soda of various kinds was sometimes delivered to that store in cartons unfit for use, tended only to prove a fact which everyone should have known, viz., that a cardboard carton becomes weakened by use. It may well be, as defendant contends, that plaintiff, as a mature woman, knew or should have known that to reuse pasteboard cartons which had theretofore been wet and had dried might be a dangerous practice. 
The decisive fact, however, is that there was no evidence that plaintiff knew, and certainly she should not be charged with knowledge as a matter of law, that defendant would provide such unfit and dangerous cartons on its soda shelf and invite her and other customers to remove those defective and dangerous cartons and carry them from the store or to a cart; at least in the absence of a warning which would cause plaintiff and other customers to inspect each carton before they removed it from the soda shelf. Defendant contends further that there was no evidence to show actual knowledge of the condition of this particular carton or that it had been on the shelf a sufficient length of time for an employee to have inspected it. The evidence did not show who placed the carton of Pepsi-Cola on the shelf. Inasmuch as there was evidence that one of the duties of the store manager was to see that all displayed merchandise was properly price-tagged or stamped, it would not be unreasonable for a jury to conclude that one of defendant's employees placed the carton on the shelf. Assuming, however, that the various soda suppliers placed the cartons on the shelves, still, in view of the knowledge which the store manager had of prior instances in which wet or damp cartons had been placed on the shelves and of the fact that cartons which had been wet and had dried were unfit for use, the jury reasonably could have found that defendant owed invitees the duty to inspect the cartons either before they were placed on the shelves or before they were made available to customers. Contrary to defendant's above-noted contentions, we are of the opinion that the evidence, considered from a standpoint favorable to plaintiff, was such that the jury reasonably could have found these facts: Just after plaintiff picked up a carton of six full bottles of Pepsi-Cola, a portion of the bottom of the carton gave way and one of the bottles fell to the floor, broke, and, as a result, plaintiff was injured; the reason the portion of the bottom of the carton gave way was that the carton had theretofore been wet and had dried; the carton in question was not fit for use for the purpose it was intended to serve as it sat on defendant's shelf and was dangerous when so used; defendant knew or in the exercise of ordinary care should have known that some of the cardboard cartons, filled with bottles of soda, which were delivered to its store might be defective by reason of those cartons having been wet and soft at the time they were delivered; defendant, in the exercise of ordinary care, reasonably could have foreseen the likelihood of some injury to its invitees by reason of permitting such defective cartons to be and remain on its soda shelves; defendant knew or in the exercise of ordinary care should have known that cartons in such condition constituted a danger to a customer who picked up such a carton containing six bottles of soda weighing ten pounds; the carton of Pepsi-Cola involved was placed on defendant's shelf either by defendant's employees or by the supplier of Pepsi-Cola; plaintiff did not know of the defective condition of the carton which she picked up and did not know and, in the exercise of ordinary care, should not have known, that defective cartons would be found on defendant's soda shelf. 
Inasmuch as the evidence supported the foregoing findings, the jury reasonably could have reached the conclusion that defendant failed to exercise ordinary care to avoid causing injury to plaintiff while she was an invitee on defendant's premises properly engaged in the routine procedure and properly using the facilities defendant *85 had provided for shoppers, in that defendant was negligent in permitting the defective carton to be on its soda shelf for sale to invitees without warning plaintiff of the necessity to inspect each carton for its suitability for the purpose it was intended to serve before removing it. The judgment is reversed and the case is remanded for a new trial. HOLMAN and HOUSER, CC., concur. PER CURIAM. The foregoing opinion by COIL, C., is adopted as the opinion of the court. All concur.
An artist of impeccable elegance and poise, Clara-Jumi Kang has carved an international career performing with the leading orchestras and conductors across Asia and Europe. Winner of the 2010 Indianapolis International Violin Competition, Kang’s other accolades include 1st prizes at the Seoul Violin Competition (2009) and the Sendai Violin Competition (2010). Having made her concerto debut at the age of five with the Hamburg Symphony Orchestra, Kang has since performed with leading European orchestras including the Leipzig Gewandhaus, Cologne Chamber Orchestra, Kremerata Baltica, Rotterdam Philharmonic, Orchestre National de Belgique and the Orchestre de la Suisse Romande. In the USA she has performed with orchestras including the Atlanta, New Jersey, Indianapolis and Santa Fe Symphony Orchestras, whilst elsewhere highlights have included appearances with the Mariinsky Orchestra, NHK Symphony Orchestra, Tokyo Metropolitan Symphony Orchestra, New Japan Philharmonic, Hong Kong Sinfonietta, NCPA Beijing Orchestra, Macao Philharmonic and the Taipei Symphony. A prominent figure in Korea, Clara-Jumi Kang has performed with all of the major Korean orchestras and in 2012 was selected as one of the top 100 “Most promising and influential people of Korea” by major Korean newspaper Dong-A Times. She returns annually to Korea for tours and was awarded the 2012 Daewon Music Award for her outstanding international achievements, as well as being named Kumho Musician of the Year in 2015. She has collaborated with eminent conductors including Valery Gergiev, Lionel Bringuier, Vladimir Fedoseyev, Andrey Boreyko, Christoph Poppen, Vladimir Spivakov, Yuri Temirkanov, Gidon Kremer, Gilbert Varga, Lü Jia, Myung-Whun Chung, Heinz Holliger and Kazuki Yamada. Clara-Jumi Kang’s first solo album, entitled “Modern Solo”, was released on Decca in 2011 and featured works including Schubert’s Erlkönig and Ysaÿe Solo Sonatas. Her second recording for the label, of Brahms and Schumann Violin Sonatas with pianist Yeol-Eum Son, was released in 2016. A devoted chamber musician, Kang is a regular visitor to festivals across Asia and Europe, with recent highlights including the Pyeongchang, Hong Kong, Ishikawa and Marvao Chamber Music Festivals. She is also a member of the Berlin Spectrum Concerts series and has collaborated with artists including Boris Berezovsky, Boris Brovtsyn, Eldar Nebolsin, Gidon Kremer, Guy Braunstein, Julian Rachlin, Maxim Rysanov, Misha Maisky, Sunwook Kim, Vadim Repin and Yeol Eum Son. European concerto highlights of the 2018/19 season include engagements with the Orquesta Sinfónica de Castilla y León/Petrenko, Musikkollegium Winterthur/D. Jurowski, Nordic Chamber Orchestra/Ollikainen, Rheinische Philharmonie/Walker, Deutsche Radio Philharmonie/Delamboye, Dalasinfoniettan/Blendulf, Moscow Soloists/Bashmet and Concerto Budapest/Keller. Further afield, she returns to Japan for performances with the Sapporo Symphony Orchestra/Koizumi, whilst engagements in China take her to the Hangzhou Philharmonic Orchestra/Sinaisky and the Shenzhen Symphony Orchestra. Recital tours take Kang to Italy and Korea in collaboration with pianists Sunwook Kim and Alessio Bax, whilst chamber music performances include the Spectrum Concerts series at the Berlin Philharmonie and the Pyeongchang Chamber Music Festival. Born in Germany to a musical family, Clara-Jumi Kang took up the violin at the age of three and a year later enrolled as the youngest ever student at the Mannheim Musikhochschule. 
She went on to study with Zakhar Bron at the Lübeck Musikhochschule and at the age of seven was awarded a full scholarship to the Juilliard School to study with Dorothy DeLay. She took her Bachelor and Masters degrees at the Korean National University of Arts under Nam-Yun Kim before completing her studies at the Munich Musikhochschule with Christoph Poppen. Clara-Jumi Kang currently plays the 1708 “Ex-Strauss” Stradivarius, generously on loan to her from the Samsung Cultural Foundation Korea. This weekend, Clara-Jumi Kang performs Glazunov’s Violin Concerto in A minor with Nagoya Philharmonic, alongside conductor Kazuhiro Koizumi. Violinist Clara-Jumi Kang performs with the Nordic Chamber Orchestra in three concerts under conductor Eva Ollikainen. On 10th and 11th May, Kang performs Beethoven’s Violin Concerto with Dalasinfoniettan in venues in Rattvik and Falun in Sweden. Kang performs Korngold’s Violin Concerto Op 35 with the orchestra in two concerts in the Symphonic hall, Valladolid, under conductor Vassily Petrenko. This week sees Clara-Jumi Kang return to China, where tonight she performs Sibelius’ Violin Concerto with the Shenzhen Symphony Orchestra under the baton of Christian Ewald. Clara-Jumi Kang will be performing Tchaikovsky’s Violin Concerto with Musikkollegium Winterthur tomorrow (5th January) in Zurich and on 9th January in Winterthur. This week, Clara-Jumi Kang is in China performing with Hangzhou Philharmonic in the Grand Theatre for a performance on Sunday 18th November. Clara-Jumi Kang will perform Bruch’s Violin Concerto in G minor with Sapporo Symphony Orchestra tomorrow evening and on Saturday afternoon, alongside conductor Kazuhiro Koizumi. This week sees Clara-Jumi Kang travelling to Germany to perform Wieniawski’s Violin Concerto No. 2 in two concerts with Deutsche Radio Philharmonie, alongside conductor Enrico Delamboye. This week, Kang performs with the Rheinische Philharmonie as part of the Robeco Summer Nights series in Amsterdam.
https://www.sulivansweetland.co.uk/clara-jumi-kang/
A confirmed booking exists when either written or email confirmation of that booking is accepted by the Landlord or their representative. The guest acknowledges notice that the property is one to which Paragraph 9 of schedule of the Housing Act 1988 applies whereby the guest has the right to occupy the flat for the purpose of his/her holiday only and whereby there will be no security of tenure within the terms of the said Act. The guest agrees to use this property solely for the purpose of a private holiday residence for the maximum of people shown on the booking correspondence and agrees not to: - use the property for any improper, illegal, or immoral purposes - to sub-let the property - cause (nor allow any guests or visitors to cause) any nuisance, annoyance or disturbance to neighbours, or to the Landlord, or to the Landlord’s agent - smoke or allow smoking in the property - keep any pets in the property without prior notice to the Landlord. Failure to comply with the requirements above will result in the agreement being terminated. A non-refundable deposit representative of 1/4 of the total cost of the holiday will be taken at the time of booking. The remainder of the payment is due 6 weeks before arrival. If arrival is less than 6 weeks from booking the full payment is due when the booking is confirmed. If a cancellation is received within 6 weeks of the start date, payment in full is due, unless the property is re-let for the period. Should the property be re-let for a reduced fee, the guest will be liable to pay the difference between the original agreed payment and the reduced fee for the re-let. An administration charge of £25 will be made for cancelled bookings. It is recommended that guests take out cancellation insurance. The property will be available between 3 and 8pm on the start date unless a separate agreement is reached. The way in which the keys will be delivered or collected will be agreed when the final payment is received. The property shall be vacated by 10:00am on the departure date unless a separate agreement is reached. Keys will be returned in accordance with arrangements made when the keys are provided. The property is a no smoking property and shall be left in a clean and tidy state. Rubbish should be disposed of as explained in the instructions within the property. The guest agrees to repair, replace or pay for any items damaged through neglect, misuse, or carelessness on the part of the guest or any visitors to the property. When damage occurs, either by negligence or deliberate action of a guest, the guest agrees to indemnify the Landlord against any associated losses, including lost income and the sourcing of alternative accommodation should that be required. The guest will notify the Landlord of any repairs which are necessary and allow the Landlord or Landlord’s agent access to carry out repairs. If for unforeseen circumstances a property becomes unavailable for a confirmed booking, then the guest must be informed as soon as possible. The landlord, or their representative, must offer the guest similar or higher standard accommodation. If this alternative is unacceptable to the guest, the guest is then entitled to a full refund of monies paid to date. The landlord’s liability is limited to monies paid to date. The terms, conditions, contracts and disputes in connection with this agreement are governed by The Laws of England and Wales.
http://www.durhamholidayhomes.co.uk/bookings/holiday-let-terms-conditions/
In brief: Is the rate of autism rising? If you consider only certain statistics, you might conclude that children in advanced industrial societies are suffering from an epidemic of autism. Figures published by the U.S. Department of Education and based on the number of children receiving special education show an exponential rise in the number of cases during the 1990s. But a critical study published in the journal Pediatrics shows that these numbers are unreliable, and a careful English survey confirms that children born in 1998 are no more likely to develop autistic disorders than those born in 1992. The American data, collected under the Individuals with Disabilities Education Act and presented annually to Congress, indicate that there were 4 cases of autism per 10,000 children in 1993 and 25 cases per 10,000 in 2003 — a sixfold increase. But these findings contain internal inconsistencies. Apparently, as many children are being newly diagnosed at age 15 as at age 8, yet the symptoms of autism appear before age 3, and most studies show that the diagnosis is usually made before age 8. Oddly, the government's figures also show a drop in reports of new cases between ages 11 and 12, just when an increase might be expected because children are making the difficult transition from elementary to middle school.
https://www.health.harvard.edu/newsletter_article/In_brief_Is_the_rate_of_autism_rising
GOVERNMENT RIGHTS
This invention was made with government support under Contract Number DE-AC07-99ID13727 and Contract Number DE-AC07-05ID14517 awarded by the United States Department of Energy. The government has certain rights in the invention.
STATEMENT ACCORDING TO 37 C.F.R. §1.821(c) or (e)—SEQUENCE LISTING SUBMITTED AS TXT AND PDF FILES
Pursuant to 37 C.F.R. §1.821(c) or (e), files containing a TXT version and a PDF version of the Sequence Listing have been submitted concomitant with this application, the contents of which are hereby incorporated by reference.
BACKGROUND
Dilute acid hydrolysis to remove hemicellulose from lignocellulosic materials is one of the most developed pretreatment techniques for lignocellulose and is currently favored (Hamelinck et al., 2005) because it results in fairly high yields of xylose (75% to 90%). Conditions that are typically used range from 0.1 to 1.5% sulfuric acid and temperatures above 160° C. The high temperatures used result in significant levels of thermal decomposition products that inhibit subsequent microbial fermentations (Lavarack et al., 2002). High temperature hydrolysis requires pressurized systems, steam generation, and corrosion-resistant materials in reactor construction due to the more corrosive nature of acid at elevated temperatures. Lower temperature acid hydrolyses are of interest because they have the potential to overcome several of the above shortcomings (Tsao et al., 1987). It has been demonstrated that 90% of hemicellulose can be solubilized as oligomers in a few hours of acid treatment in the temperature range of 80° C. to 100° C. It has also been demonstrated that the sugars produced in low temperature acid hydrolysis are stable under those same conditions for at least 24 hours with no detectable degradation to furfural decomposition products. Finally, the sulfuric acid typically used in pretreatments is not as corrosive at lower temperatures. The use of lower temperature acid pretreatments requires much longer reaction times to achieve acceptable levels of hydrolysis. Although 90% hemicellulose solubilization has been shown (Tsao, 1987), the bulk of the sugars are in the form of oligomers and are not in the monomeric form. The organisms currently favored in subsequent fermentation steps cannot utilize sugar oligomers (Garrote et al., 2001), and the oligomer-containing hydrolysates require further processing to monomers, usually as a second acid or alkaline hydrolysis step (Garrote et al., 2001). Other acidic pretreatment methods include autohydrolysis and hot water washing. In autohydrolysis, biomass is treated with steam at high temperatures (˜240° C.), which cleaves acetyl side chains associated with hemicellulose to produce acetic acid that functions in a similar manner to sulfuric acid in acid hydrolysis. Higher pretreatment temperatures are required as compared to dilute sulfuric acid hydrolysis because acetic acid is a much weaker acid than sulfuric acid. At temperatures below 240° C., the hemicellulose is not completely hydrolyzed to sugar monomers and has high levels of oligomers (Garrote et al., 2001). In hot water washing, biomass is contacted with water (under pressure) at elevated temperatures of 160° C. to 220° C. This process can effectively hydrolyze greater than 90% of the hemicellulose present, and the solubilized hemicellulose was typically over 95% in the form of oligomers (Liu and Wyman, 2003). 
BRIEF SUMMARY
Embodiments relate to a nucleotide sequence of the genome of Alicyclobacillus acidocaldarius, or a homologue or fragment thereof, in combination with at least one sequence that is heterologous to Alicyclobacillus acidocaldarius. In one embodiment, the nucleotide sequence is SEQ ID No. 1 or a homologue or fragment thereof. In another embodiment, the nucleotide sequence has at least 90% sequence identity to SEQ ID No. 1. Embodiments may further relate to an isolated and/or purified nucleic acid sequence comprising a nucleic acid sequence encoding a polypeptide selected from the group consisting of a polypeptide having at least 90% sequence identity to SEQ ID No. 2, the nucleic acid sequence in combination with at least one sequence that is heterologous to Alicyclobacillus acidocaldarius. Embodiments also relate to the use of isolated and/or purified polypeptides encoded by a nucleotide sequence of the genome of Alicyclobacillus acidocaldarius, or a homologue or fragment thereof. In one embodiment, the polypeptide is SEQ ID No. 2 or a homologue or fragment thereof. In another embodiment, the polypeptide has at least 90% sequence identity to SEQ ID No. 2. In these and other embodiments, the polypeptide has activity as an acetylxylan esterase. In embodiments, the polypeptides may be acidophilic and/or thermophilic. In further embodiments, the polypeptides may be glycosylated, pegylated, and/or otherwise post-translationally modified. Embodiments include methods of at least partially degrading, cleaving, or removing polysaccharides, lignocellulose, cellulose, hemicellulose, lignin, starch, chitin, polyhydroxybutyrate, heteroxylans, glycosides, xylan-, glucan-, galactan-, and/or mannan-decorating groups. Such methods may comprise placing a polypeptide having at least 90% sequence identity to SEQ ID No. 2 in fluid contact with a polysaccharide, lignocellulose, cellulose, hemicellulose, lignin, starch, chitin, polyhydroxybutyrate, heteroxylan, glycoside, xylan-, glucan-, galactan-, and/or mannan-decorating group. These and other aspects of the disclosure will become apparent to the skilled artisan in view of the teachings contained herein.
DETAILED DESCRIPTION
Lignocellulose is a highly heterogeneous three-dimensional matrix comprised of cellulose, hemicellulose, and lignin. Many fuels and chemicals can be made from these lignocellulosic materials. To utilize lignocellulosic biomass for production of fuels and chemicals via fermentative processes, it is necessary to convert the plant polysaccharides to simpler sugars, which are then fermented to products using a variety of organisms. Direct hydrolysis of cellulose by mineral acids to monomers is possible at high temperature and pressure, leading to yield losses due to thermal decomposition of the sugars. One strategy to reduce these yield losses is to use cellulases and potentially other enzymes to depolymerize the polysaccharides at moderate temperatures. Addition of acid-stable thermotolerant hydrolytic enzymes such as cellulases and xylanases to the biomass slurry during the pretreatment enables the use of lower temperatures and pressures, as well as cheaper materials of reactor construction, reducing both the capital and energy costs. Another approach is to combine the reduced severity pretreatment with enzymes together with fermentation under the same conditions, using a single organism that produces the enzymes to degrade the material as well as ferment the sugars to the value-added product of choice. 
For commercially available enzymes to be used for this purpose, the pretreatment slurry must be neutralized and cooled to 40° C. to 50° C., adding significant cost to the process. Hence, it would be an improvement in the art to degrade the soluble oligomers produced using acid, autohydrolysis or hot water washing pretreatments, at reduced severity and without neutralization using, for example, thermophilic and/or acidophilic enzymes. Embodiments of the disclosure relate in part to the gene sequences and protein sequences encoded by genes of Alicyclobacillus acidocaldarius. Of particular interest for polysaccharide depolymerization are esterases including acetylxylan esterases. In embodiments, the acetylxylan esterases may be thermophilic and/or acidophilic. The present disclosure relates to isolated and/or purified nucleotide sequences of the genome of Alicyclobacillus acidocaldarius, including SEQ ID No. 1 or fragments thereof. In embodiments, these sequences may be in combination with at least one sequence that is heterologous to Alicyclobacillus acidocaldarius. The present disclosure likewise relates to isolated and/or purified nucleotide sequences, characterized in that they are selected from: a) a nucleotide sequence of a specific fragment of the sequence SEQ ID No. 1; b) a nucleotide sequence homologous to a nucleotide sequence such as defined in a); c) a nucleotide sequence complementary to a nucleotide sequence such as defined in a) or b), and a nucleotide sequence of their corresponding RNA; d) a nucleotide sequence capable of hybridizing under stringent conditions with a sequence such as defined in a), b) or c); e) a nucleotide sequence comprising a sequence such as defined in a), b), c) or d); and f) a nucleotide sequence modified by a nucleotide sequence such as defined in a), b), c), d) or e). Nucleotide, polynucleotide, or nucleic acid sequence will be understood according to the present disclosure as meaning both a double-stranded or single-stranded DNA in the monomeric and dimeric (so-called “in tandem”) forms and the transcription products of said DNAs. In embodiments, the sequences described herein may be in combination with heterologous sequences. As used herein, “heterologous sequence” refers to sequences which are either artificial (not found in nature) as well as sequences that are not found in Alicyclobacillus acidocaldarius directly connected to the sequences from Alicyclobacillus acidocaldarius described herein. Thus, any sequence that, when added to a sequence from Alicyclobacillus acidocaldarius, creates, as a whole, a sequence that is not found in Alicyclobacillus acidocaldarius, is a “heterologous sequence.” Examples of heterologous sequences include, but are not limited to, promoters, enhancers, tags, terminators, and hairpins that are not operatively linked to the sequence from Alicyclobacillus acidocaldarius as found in nature. Aspects of the disclosure relate to nucleotide sequences which it has been possible to isolate, purify or partially purify, starting from separation methods such as, for example, ion-exchange chromatography, by exclusion based on molecular size, or by affinity, or alternatively fractionation techniques based on solubility in different solvents, or starting from methods of genetic engineering such as amplification, cloning, and subcloning, it being possible for the sequences of the invention to be carried by vectors. 
Isolated and/or purified nucleotide sequence fragment according to the disclosure will be understood as designating any nucleotide fragment of the genome of Alicyclobacillus acidocaldarius, and may include, by way of non-limiting example, a length of at least 8, 12, 20, 25, 50, 75, 100, 200, 300, 400, 500, 1000, or more, consecutive nucleotides of the sequence from which it originates. Specific fragment of an isolated and/or purified nucleotide sequence according to the disclosure will be understood as designating any nucleotide fragment of the genome of Alicyclobacillus acidocaldarius, having, after alignment and comparison with the corresponding fragments of genomic sequences of Alicyclobacillus acidocaldarius, at least one nucleotide or base of different nature. Homologous isolated and/or purified nucleotide sequence in the sense of the present disclosure is understood as meaning an isolated and/or purified nucleotide sequence having at least a percentage identity with the bases of a nucleotide sequence according to the invention of at least about 80%, 81%, 82%, 83%, 84%, 85%, 86%, 87%, 88%, 89%, 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99%, 99.5%, 99.6%, or 99.7%, this percentage being purely statistical and it being possible to distribute the differences between the two nucleotide sequences at random and over the whole of their length. Specific homologous nucleotide sequence in the sense of the present disclosure is understood as meaning a homologous nucleotide sequence having at least one nucleotide sequence of a specific fragment, such as defined above. Said “specific” homologous sequences can comprise, for example, the sequences corresponding to the genomic sequence or to the sequences of its fragments representative of variants of the genome of Alicyclobacillus acidocaldarius. These specific homologous sequences can thus correspond to variations linked to mutations within strains of Alicyclobacillus acidocaldarius, and especially correspond to truncations, substitutions, deletions and/or additions of at least one nucleotide. Said homologous sequences can likewise correspond to variations linked to the degeneracy of the genetic code. The term “degree or percentage of sequence homology” refers to “degree or percentage of sequence identity between two sequences after optimal alignment” as defined in the present application. Two amino-acid or nucleotidic sequences are said to be “identical” if the sequence of amino-acid or nucleotidic residues in the two sequences is the same when aligned for maximum correspondence as described below. Sequence comparisons between two (or more) peptides or polynucleotides are typically performed by comparing sequences of two optimally aligned sequences over a segment or “comparison window” to identify and compare local regions of sequence similarity. Optimal alignment of sequences for comparison may be conducted by the local homology algorithm of Smith and Waterman, Ad. App. Math 2: 482 (1981), by the homology alignment algorithm of Needleman and Wunsch, J. Mol. Biol. 48: 443 (1970), by the search for similarity method of Pearson and Lipman, Proc. Natl. Acad. Sci. (U.S.A.) 85: 2444 (1988), by computerized implementation of these algorithms (GAP, BESTFIT, FASTA, and TFASTA in the Wisconsin Genetics Software Package, Genetics Computer Group (GCG), 575 Science Dr., Madison, Wis.), or by visual inspection. 
“Percentage of sequence identity” (or degree of identity) is determined by comparing two optimally aligned sequences over a comparison window, where the portion of the peptide or polynucleotide sequence in the comparison window may comprise additions or deletions (i.e., gaps) as compared to the reference sequence (which does not comprise additions or deletions) for optimal alignment of the two sequences. The percentage is calculated by determining the number of positions at which the identical amino-acid residue or nucleic acid base occurs in both sequences to yield the number of matched positions, dividing the number of matched positions by the total number of positions in the window of comparison and multiplying the result by 100 to yield the percentage of sequence identity. The definition of sequence identity given above is the definition that would be used by one of skill in the art. The definition by itself does not need the help of any algorithm, said algorithms being helpful only to achieve the optimal alignments of sequences, rather than the calculation of sequence identity. From the definition given above, it follows that there is a well-defined and only one value for the sequence identity between two compared sequences which value corresponds to the value obtained for the best or optimal alignment. In the BLAST N or BLAST P “BLAST 2 sequence,” software which is available in the web site www.ncbi.nlm.nih.gov/gorf/bl2.html, and habitually used by the inventors and in general by the skilled person for comparing and determining the identity between two sequences, gap cost which depends on the sequence length to be compared is directly selected by the software (i.e., 11.2 for substitution matrix BLOSUM-62 for length>85). Complementary nucleotide sequence of a sequence of the disclosure is understood as meaning any DNA whose nucleotides are complementary to those of the sequence of the disclosure, and whose orientation is reversed (antisense sequence). Hybridization under conditions of stringency with a nucleotide sequence according to the disclosure is understood as meaning hybridization under conditions of temperature and ionic strength chosen in such a way that they enable the maintenance of the hybridization between two fragments of complementary DNA. By way of illustration, conditions of great stringency of the hybridization step with the aim of defining the nucleotide fragments described above are advantageously the following. The hybridization is carried out at a preferential temperature of 65° C. in the presence of SSC buffer, 1×SSC corresponding to 0.15 M NaCl and 0.05 M Na citrate. The washing steps, for example, can be the following: 2×SSC, at ambient temperature followed by two washes with 2×SSC, 0.5% SDS at 65° C.; 2×0.5×SSC, 0.5% SDS; at 65° C. for 10 minutes each. The conditions of intermediate stringency, using, for example, a temperature of 42° C. in the presence of a 2×SSC buffer, or of less stringency, for example, a temperature of 37° C. in the presence of a 2×SSC buffer, respectively, require a globally less significant complementarity for the hybridization between the two sequences. The stringent hybridization conditions described above for a polynucleotide with a size of approximately 350 bases will be adapted by the person skilled in the art for oligonucleotides of greater or smaller size, according to the teaching of Sambrook et al., 1989. 
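As a purely illustrative aside (not part of the patent text), the percentage-of-identity definition given above reduces to simple counting once an optimal alignment has been produced by one of the tools the passage names (Smith-Waterman, BLAST, GAP, etc.). The minimal sketch below assumes two sequences that have already been aligned to equal length, with gaps marked by "-"; the function name and example sequences are invented for illustration only.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity over a comparison window of two pre-aligned,
    equal-length sequences. Positions introduced as gaps during alignment
    still count toward the window size, per the definition above."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must be the same length")
    # Count positions where the identical residue (not a gap) occurs in both sequences.
    matches = sum(1 for a, b in zip(aligned_a, aligned_b) if a == b and a != "-")
    return 100.0 * matches / len(aligned_a)

# Example: 9 matched positions over a 10-position window -> 90.0% identity.
print(percent_identity("ACGTAC-GTA", "ACGTACGGTA"))
```

This only performs the final arithmetic step; producing the optimal alignment itself is the job of the algorithms and software cited in the passage, not of this sketch.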
Among the isolated and/or purified nucleotide sequences according to the disclosure are those which can be used as a primer or probe in methods enabling the homologous sequences according to the disclosure to be obtained, these methods, such as the polymerase chain reaction (PCR), nucleic acid cloning, and sequencing, being well known to the person skilled in the art. Among said isolated and/or purified nucleotide sequences according to the disclosure, those are again preferred which can be used as a primer or probe in methods enabling the presence of SEQ ID No. 1, one of its fragments, or one of its variants such as defined below to be diagnosed. The nucleotide sequence fragments according to the disclosure can be obtained, for example, by specific amplification, such as PCR, or after digestion with appropriate restriction enzymes of nucleotide sequences according to the invention, these methods in particular being described in the work of Sambrook et al., 1989. Such representative fragments can likewise be obtained by chemical synthesis according to methods well known to persons of ordinary skill in the art. “Modified nucleotide sequence” will be understood as meaning any nucleotide sequence obtained by mutagenesis according to techniques well known to the person skilled in the art, and containing modifications with respect to the normal sequences according to the disclosure, for example, mutations in the regulatory and/or promoter sequences of polypeptide expression, especially leading to a modification of the rate of expression of said polypeptide or to a modulation of the replicative cycle. “Modified nucleotide sequence” will likewise be understood as meaning any nucleotide sequence coding for a modified polypeptide such as defined below. Embodiments of the disclosure likewise relate to isolated and/or purified nucleotide sequences characterized in that they comprise a nucleotide sequence selected from: a) the nucleotide sequence of SEQ ID No. 1, or one of its fragments; b) a nucleotide sequence of a specific fragment of a sequence such as defined in a); c) a homologous nucleotide sequence having at least 90% identity with a sequence such as defined in a) or b); d) a complementary nucleotide sequence or sequence of RNA corresponding to a sequence such as defined in a), b) or c); and e) a nucleotide sequence modified by a sequence such as defined in a), b), c) or d). Among the isolated and/or purified nucleotide sequences according to the disclosure are the nucleotide sequences of SEQ ID Nos. 8-12, or fragments thereof, and any other isolated and/or purified nucleotide sequences which have a homology of at least 80%, 81%, 82%, 83%, 84%, 85%, 86%, 87%, 88%, 89%, 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99%, 99.5%, 99.6%, or 99.7% identity with the sequence SEQ ID No. 1 or fragments thereof. Said homologous sequences can comprise, for example, the sequences corresponding to the genomic sequences of Alicyclobacillus acidocaldarius. In the same manner, these specific homologous sequences can correspond to variations linked to mutations within strains of Alicyclobacillus acidocaldarius and especially correspond to truncations, substitutions, deletions and/or additions of at least one nucleotide. Embodiments of the disclosure comprise the isolated and/or purified polypeptides encoded by a nucleotide sequence according to the disclosure, or fragments thereof, whose sequence is represented by a fragment. 
Amino acid sequences corresponding to the isolated and/or purified polypeptides can be encoded according to one of the three possible reading frames of the sequence of SEQ ID No. 1. Embodiments of the disclosure likewise relate to the isolated and/or purified polypeptides, characterized in that they comprise the polypeptide of SEQ No. 2, or one of its fragments. Among the isolated and/or purified polypeptides, according to embodiments of the disclosure, are the isolated and/or purified polypeptides of the amino acid sequences of SEQ ID Nos. 13-17, or fragments thereof or any other isolated and/or purified polypeptides which have a homology of at least 80%, 81%, 82%, 83%, 84%, 85%, 86%, 87%, 88%, 89%, 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99%, 99.5%, 99.6%, or 99.7% identity with the sequence of SEQ ID No. 2, or fragments thereof. Embodiments of the disclosure also relate to the polypeptides, characterized in that they comprise a polypeptide selected from: a) a specific fragment of at least 5 amino acids of a polypeptide of an amino acid sequence according to the invention; b) a polypeptide homologous to a polypeptide such as defined in a); c) a specific biologically active fragment of a polypeptide such as defined in a) or b); and d) a polypeptide modified by a polypeptide such as defined in a), b) or c). In the present description, the terms polypeptide, peptide and protein are interchangeable. In embodiments of the disclosure, the isolated and/or purified polypeptides according to the disclosure may be glycosylated, pegylated, and/or otherwise post-translationally modified. In further embodiments, glycosylation, pegylation, and/or other post-translational modifications may occur in vivo or in vitro and/or may be performed using chemical techniques. In additional embodiments, any glycosylation, pegylation and/or other post-translational modifications may be N-linked or O-linked. In embodiments of the disclosure any one of the isolated and/or purified polypeptides according to the disclosure may be enzymatically active at temperatures at or above about 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, and/or 95 degrees Celsius and/or may be enzymatically active at a pH at, below, and/or above 7, 6, 5, 4, 3, 2, 1, and/or 0. In further embodiments of the disclosure, glycosylation, pegylation, and/or other post-translational modification may be required for the isolated and/or purified polypeptides according to the disclosure to be enzymatically active at pH at or below 7, 6, 5, 4, 3, 2, 1, and/or 0 or at temperatures at or above about 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, and/or 95 degrees Celsius. Aspects of the disclosure relate to polypeptides that are isolated or obtained by purification from natural sources, or else obtained by genetic recombination, or alternatively by chemical synthesis and that they may thus contain unnatural amino acids, as will be described below. A “polypeptide fragment” according to the embodiments of the disclosure is understood as designating a polypeptide containing at least 5 consecutive amino acids, preferably 10 consecutive amino acids or 15 consecutive amino acids. In the present disclosure, a specific polypeptide fragment is understood as designating the consecutive polypeptide fragment encoded by a specific fragment nucleotide sequence according to the invention. 
“Homologous polypeptide” will be understood as designating the polypeptides having, with respect to the natural polypeptide, certain modifications such as, in particular, a deletion, addition, or substitution of at least one amino acid, a truncation, a prolongation, a chimeric fusion, and/or a mutation. Among the homologous polypeptides, those are preferred whose amino acid sequence has at least 80% or 90%, homology with the sequences of amino acids of polypeptides according to the disclosure. “Specific homologous polypeptide” will be understood as designating the homologous polypeptides such as defined above and having a specific fragment of polypeptide according to the disclosure. In the case of a substitution, one or more consecutive or nonconsecutive amino acids are replaced by “equivalent” amino acids. The expression “equivalent” amino acid is directed here at designating any amino acid capable of being substituted by one of the amino acids of the base structure without, however, essentially modifying the biological activities of the corresponding peptides and such that they will be defined by the following. Examples of such substitutions in the amino acid sequence of SEQ ID No. 2 may include those isolated and/or purified polypeptides of the amino acid sequences of SEQ ID Nos. 13-17. These equivalent amino acids can be determined either by depending on their structural homology with the amino acids which they substitute, or on results of comparative tests of biological activity between the different polypeptides, which are capable of being carried out. By way of nonlimiting example, the possibilities of substitutions capable of being carried out without resulting in an extensive modification of the biological activity of the corresponding modified polypeptides will be mentioned, the replacement, for example, of leucine by valine or isoleucine, of aspartic acid by glutamic acid, of glutamine by asparagine, of arginine by lysine etc., the reverse substitutions naturally being envisageable under the same conditions. In a further embodiment, substitutions are limited to substitutions in amino acids not conserved among other proteins which have similar identified enzymatic activity. For example, the figures herein provide sequence alignments between certain polypeptides of the disclosure and other polypeptides identified as having similar enzymatic activity, with amino acids common to three or more of the sequences aligned as indicated in bold. Thus, according to one embodiment of the disclosure, substitutions or mutations may be made at positions that are not indicated as in bold in figures. Examples of such polypeptides may include, but are not limited to, those found in the amino acid sequences of SEQ ID Nos. 13-17. In a further embodiment, nucleic acid sequences may be mutated or substituted such that the amino acid they encode is unchanged (degenerate substitutions and/or mutations) and/or mutated or substituted such that any resulting amino acid substitutions or mutations are made at positions that are not indicated as in bold in figures. Examples of such nucleic acid sequences may include, but are not limited to, those found in the nucleotide sequences of SEQ ID Nos. 13-17 or fragments thereof. 
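As a purely illustrative aside (not part of the patent disclosure), the handful of “equivalent” substitutions named in the passage above — leucine/valine/isoleucine, aspartic acid/glutamic acid, glutamine/asparagine, arginine/lysine — can be expressed as a small lookup when screening candidate variants of a polypeptide such as SEQ ID No. 2. The grouping below covers only the pairs the passage mentions and is not an exhaustive conservative-substitution matrix; the names used are invented for this sketch.

```python
# Equivalence groups taken from the examples in the passage above.
# One-letter codes: L=Leu, V=Val, I=Ile, D=Asp, E=Glu, Q=Gln, N=Asn, R=Arg, K=Lys.
EQUIVALENT_GROUPS = [{"L", "V", "I"}, {"D", "E"}, {"Q", "N"}, {"R", "K"}]

def is_equivalent_substitution(original: str, replacement: str) -> bool:
    """True if the replacement residue falls in the same equivalence group as
    the original (or is identical), per the examples given above."""
    if original == replacement:
        return True
    return any(original in group and replacement in group for group in EQUIVALENT_GROUPS)

# Example: Asp -> Glu is listed as equivalent; Asp -> Lys is not.
print(is_equivalent_substitution("D", "E"))  # True
print(is_equivalent_substitution("D", "K"))  # False
```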
The specific homologous polypeptides likewise correspond to polypeptides encoded by the specific homologous nucleotide sequences, such as defined above, and thus comprise in the present definition polypeptides which are mutated or correspond to variants which can exist in Alicyclobacillus acidocaldarius, and which especially correspond to truncations, substitutions, deletions, and/or additions of at least one amino acid residue. “Specific biologically active fragment of a polypeptide,” according to an embodiment of the disclosure, will be understood in particular as designating a specific polypeptide fragment, such as defined above, having at least one of the characteristics of polypeptides according to the disclosure. In certain embodiments the peptide is capable of acting as an acetylxylan esterase. The polypeptide fragments according to embodiments of the disclosure can correspond to isolated or purified fragments naturally present in an Alicyclobacillus acidocaldarius or correspond to fragments which can be obtained by cleavage of said polypeptide by a proteolytic enzyme, such as trypsin or chymotrypsin or collagenase, or by a chemical reagent, such as cyanogen bromide (CNBr). Such polypeptide fragments can likewise just as easily be prepared by chemical synthesis, from hosts transformed by an expression vector according to the disclosure containing a nucleic acid enabling the expression of said fragments, placed under the control of appropriate regulation and/or expression elements. “Modified polypeptide” of a polypeptide according to an embodiment of the disclosure is understood as designating a polypeptide obtained by genetic recombination or by chemical synthesis as will be described below, having at least one modification with respect to the normal sequence. These modifications may or may not be able to bear on amino acids at the origin of specificity, and/or of activity, or at the origin of the structural conformation, localization, and of the capacity of membrane insertion of the polypeptide according to the disclosure. It will thus be possible to create polypeptides of equivalent, increased, or decreased activity, and of equivalent, narrower, or wider specificity. Among the modified polypeptides, it is necessary to mention the polypeptides in which up to 5 amino acids can be modified, truncated at the N- or C-terminal end, or even deleted or added. The methods enabling said modulations on eukaryotic or prokaryotic cells to be demonstrated are well known to the person of ordinary skill in the art. It is likewise well understood that it will be possible to use the nucleotide sequences coding for said modified polypeptides for said modulations, for example, through vectors according to the disclosure and described below. The preceding modified polypeptides can be obtained by using combinatorial chemistry, in which it is possible to systematically vary parts of the polypeptide before testing them on models, cell cultures or microorganisms, for example, to select the compounds which are most active or have the properties sought. Chemical synthesis likewise has the advantage of being able to use unnatural amino acids, or nonpeptide bonds. Thus, in order to improve the duration of life of the polypeptides according to the disclosure, it may be of interest to use unnatural amino acids, for example, in D form, or else amino acid analogs, especially sulfur-containing forms, for example. 
Finally, it will be possible to integrate the structure of the polypeptides according to the disclosure, its specific or modified homologous forms, into chemical structures of polypeptide types or others. Thus, it may be of interest to provide at the N- and C-terminal ends compounds not recognized by proteases. The nucleotide sequences coding for a polypeptide according to the disclosure are likewise part of the disclosure. The disclosure likewise relates to nucleotide sequences utilizable as a primer or probe, characterized in that said sequences are selected from the nucleotide sequences according to the disclosure. It is well understood that the present disclosure, in various embodiments, likewise relates to specific polypeptides of Alicyclobacillus acidocaldarius, encoded by nucleotide sequences, capable of being obtained by purification from natural polypeptides, by genetic recombination or by chemical synthesis by procedures well known to the person skilled in the art and such as described in particular below. In the same manner, the labeled or unlabeled mono- or polyclonal antibodies directed against said specific polypeptides encoded by said nucleotide sequences are also encompassed by the disclosure. Embodiments of the disclosure additionally relate to the use of a nucleotide sequence according to the disclosure as a primer or probe for the detection and/or the amplification of nucleic acid sequences. The nucleotide sequences according to embodiments of the disclosure can thus be used to amplify nucleotide sequences, especially by the PCR technique (polymerase chain reaction) (Erlich, 1989; Innis et al., 1990; Rolfs et al., 1991; and White et al., 1997). These oligodeoxyribonucleotide or oligoribonucleotide primers advantageously have a length of at least 8 nucleotides, preferably of at least 12 nucleotides, and even more preferentially at least 20 nucleotides. Other amplification techniques of the target nucleic acid can be advantageously employed as alternatives to PCR. The nucleotide sequences of the disclosure, in particular the primers according to the disclosure, can likewise be employed in other procedures of amplification of a target nucleic acid, such as: the TAS technique (Transcription-based Amplification System), described by Kwoh et al. in 1989; the 3SR technique (Self-Sustained Sequence Replication), described by Guatelli et al. in 1990; the NASBA technique (Nucleic Acid Sequence Based Amplification), described by Kievitis et al. in 1991; the SDA technique (Strand Displacement Amplification) (Walker et al., 1992); the TMA technique (Transcription Mediated Amplification). The polynucleotides of the disclosure can also be employed in techniques of amplification or of modification of the nucleic acid serving as a probe, such as: the LCR technique (Ligase Chain Reaction), described by Landegren et al. in 1988 and improved by Barany et al. in 1991, which employs a thermostable ligase; the RCR technique (Repair Chain Reaction), described by Segev in 1992; the CPR technique (Cycling Probe Reaction), described by Duck et al. in 1990; the amplification technique with Q-beta replicase, described by Miele et al. in 1983 and especially improved by Chu et al. in 1986, Lizardi et al. in 1988, then by Burg et al., as well as by Stone et al. in 1996. 
In the case where the target polynucleotide to be detected is possibly an RNA, for example, an mRNA, it will be possible to use, prior to the employment of an amplification reaction with the aid of at least one primer according to the invention or to the employment of a detection procedure with the aid of at least one probe of the disclosure, an enzyme of reverse transcriptase type in order to obtain a cDNA from the RNA contained in the biological sample. The cDNA obtained will thus serve as a target for the primer(s) or the probe(s) employed in the amplification or detection procedure according to the disclosure. The detection probe will be chosen in such a manner that it hybridizes with the target sequence or the amplicon generated from the target sequence. By way of sequence, such a probe will advantageously have a sequence of at least 12 nucleotides, in particular of at least 20 nucleotides, and preferably of at least 100 nucleotides. Embodiments of the disclosure also comprise the nucleotide sequences utilizable as a probe or primer according to the disclosure, characterized in that they are labeled with a radioactive compound or with a nonradioactive compound. The unlabeled nucleotide sequences can be used directly as probes or primers, although the sequences are generally labeled with a radioactive element (32P, 35S, 3H, 125I) or with a nonradioactive molecule (biotin, acetylaminofluorene, digoxigenin, 5-bromodeoxyuridine, fluorescein) to obtain probes which are utilizable for numerous applications. Examples of nonradioactive labeling of nucleotide sequences are described, for example, in French Patent No. 7810975 or by Urdea et al. or by Sanchez-Pescador et al. in 1988. In the latter case, it will also be possible to use one of the labeling methods described in patents FR-2 422 956 and FR-2 518 755. The hybridization technique can be carried out in various manners (Matthews et al., 1988). The most general method consists in immobilizing the nucleic acid extract of cells on a support (such as nitrocellulose, nylon, polystyrene) and in incubating, under well-defined conditions, the immobilized target nucleic acid with the probe. After hybridization, the excess of probe is eliminated and the hybrid molecules formed are detected by the appropriate method (measurement of the radioactivity, of the fluorescence or of the enzymatic activity linked to the probe). The disclosure, in various embodiments, likewise comprises the nucleotide sequences according to the disclosure, characterized in that they are immobilized on a support, covalently or noncovalently. According to another advantageous mode of employing nucleotide sequences according to the disclosure, the latter can be used immobilized on a support and can thus serve to capture, by specific hybridization, the target nucleic acid obtained from the biological sample to be tested. If necessary, the solid support is separated from the sample and the hybridization complex formed between said capture probe and the target nucleic acid is then detected with the aid of a second probe, a so-called “detection probe,” labeled with an easily detectable element. Another aspect of the present disclosure is a vector for the cloning and/or expression of a sequence, characterized in that it contains a nucleotide sequence according to the invention. 
The vectors according to the disclosure, characterized in that they contain the elements enabling the expression and/or the secretion of said nucleotide sequences in a determined host cell, are likewise part of the disclosure. The vector may then contain a promoter, signals of initiation and termination of translation, as well as appropriate regions of regulation of transcription. It may be able to be maintained stably in the host cell and can optionally have particular signals specifying the secretion of the translated protein. These different elements may be chosen as a function of the host cell used. To this end, the nucleotide sequences according to the disclosure may be inserted into autonomous replication vectors within the chosen host, or integrated vectors of the chosen host. Such vectors will be prepared according to the methods currently used by the person skilled in the art, and it will be possible to introduce the clones resulting therefrom into an appropriate host by standard methods, such as, for example, lipofection, electroporation, and thermal shock. The vectors according to the disclosure are, for example, vectors of plasmid or viral origin. One example of a vector for the expression of polypeptides of the disclosure is baculovirus. These vectors are useful for transforming host cells in order to clone or to express the nucleotide sequences of the disclosure. The invention likewise comprises the host cells transformed by a vector according to the disclosure. These cells can be obtained by the introduction into host cells of a nucleotide sequence inserted into a vector such as defined above, then the culturing of said cells under conditions allowing the replication and/or expression of the transfected nucleotide sequence. The host cell can be selected from prokaryotic or eukaryotic systems, such as, for example, bacterial cells (Olins and Lee, 1993), but likewise yeast cells (Buckholz, 1993), as well as plant cells, such as Arabidopsis sp., and animal cells, in particular the cultures of mammalian cells (Edwards and Aruffo, 1993), for example, Chinese hamster ovary (CHO) cells, but likewise the cells of insects in which it is possible to use procedures employing baculoviruses, for example, Sf9 insect cells (Luckow, 1993). Embodiments of the disclosure likewise relate to organisms comprising one of said transformed cells according to the disclosure. The obtainment of transgenic organisms according to the disclosure overexpressing one or more of the genes of Alicyclobacillus acidocaldarius or part of the genes may be carried out in, for example, rats, mice, or rabbits according to methods well known to the person skilled in the art, such as by viral or nonviral transfections. It will be possible to obtain the transgenic organisms overexpressing one or more of said genes by transfection of multiple copies of said genes under the control of a strong promoter of ubiquitous nature, or selective for one type of tissue. It will likewise be possible to obtain the transgenic organisms by homologous recombination in embryonic cell strains, transfer of these cell strains to embryos, selection of the affected chimeras at the level of the reproductive lines, and growth of said chimeras. The transformed cells as well as the transgenic organisms according to the disclosure are utilizable in procedures for preparation of recombinant polypeptides. 
It is today possible to produce recombinant polypeptides in a relatively large quantity by genetic engineering using the cells transformed by expression vectors according to the disclosure or using transgenic organisms according to the disclosure. The procedures for preparation of a polypeptide of the disclosure in recombinant form, characterized in that they employ a vector and/or a cell transformed by a vector according to the disclosure and/or a transgenic organism comprising one of said transformed cells according to the disclosure, are themselves comprised in the present disclosure. As used herein, “transformation” and “transformed” relate to the introduction of nucleic acids into a cell, whether prokaryotic or eukaryotic. Further, “transformation” and “transformed,” as used herein, need not relate to growth control or growth deregulation. Among said procedures for preparation of a polypeptide of the disclosure in recombinant form, the preparation procedures include employing a vector, and/or a cell transformed by said vector and/or a transgenic organism comprising one of said transformed cells, containing a nucleotide sequence according to the disclosure coding for a polypeptide of Alicyclobacillus acidocaldarius. A variant according to the disclosure may consist of producing a recombinant polypeptide fused to a “carrier” protein (chimeric protein). The advantage of this system is that it may enable stabilization of and/or a decrease in the proteolysis of the recombinant product, an increase in the solubility in the course of renaturation in vitro and/or a simplification of the purification when the fusion partner has an affinity for a specific ligand. More particularly, the disclosure relates to a procedure for preparation of a polypeptide of the invention comprising the following acts: a) culture of transformed cells under conditions allowing the expression of a recombinant polypeptide of a nucleotide sequence according to the disclosure; and b) if need be, recovery of said recombinant polypeptide. When the procedure for preparation of a polypeptide of the disclosure employs a transgenic organism according to the disclosure, the recombinant polypeptide is then extracted from said organism. The disclosure also relates to a polypeptide which is capable of being obtained by a procedure of the disclosure, such as described previously. The disclosure also comprises a procedure for preparation of a synthetic polypeptide, characterized in that it uses a sequence of amino acids of polypeptides according to the disclosure. The disclosure likewise relates to a synthetic polypeptide obtained by a procedure according to the disclosure. The polypeptides according to the disclosure can likewise be prepared by techniques which are conventional in the field of the synthesis of peptides. This synthesis can be carried out in homogeneous solution or in solid phase. For example, recourse can be made to the technique of synthesis in homogeneous solution described by Houben-Weyl in 1974. 
This method of synthesis consists in successively condensing, two-by-two, the successive amino acids in the order required, or in condensing amino acids and fragments formed previously and already containing several amino acids in the appropriate order, or alternatively several fragments previously prepared in this way, it being understood that it will be necessary to protect beforehand all the reactive functions carried by these amino acids or fragments, with the exception of amine functions of one and carboxyls of the other or vice-versa, which must normally be involved in the formation of peptide bonds, especially after activation of the carboxyl function, according to the methods well known in the synthesis of peptides. Recourse may also be made to the technique described by Merrifield. To make a peptide chain according to the Merrifield procedure, recourse is made to a very porous polymeric resin, on which is immobilized the first C-terminal amino acid of the chain. This amino acid is immobilized on a resin through its carboxyl group and its amine function is protected. The amino acids that are going to form the peptide chain are thus immobilized, one after the other, on the amino group, which is deprotected beforehand each time, of the portion of the peptide chain already formed, and which is attached to the resin. When the whole of the desired peptide chain has been formed, the protective groups of the different amino acids forming the peptide chain are eliminated and the peptide is detached from the resin with the aid of an acid. The disclosure additionally relates to hybrid polypeptides having at least one polypeptide according to the disclosure, and a sequence of a polypeptide capable of inducing an immune response in man or animals. Advantageously, the antigenic determinant is such that it is capable of inducing a humoral and/or cellular response. It will be possible for such a determinant to comprise a polypeptide according to the disclosure in a glycosylated, pegylated, and/or otherwise post-translationally modified form used with a view to obtaining immunogenic compositions capable of inducing the synthesis of antibodies directed against multiple epitopes. These hybrid molecules can be formed, in part, of a polypeptide carrier molecule or of fragments thereof according to the disclosure, associated with a possibly immunogenic part, in particular an epitope of the diphtheria toxin, the tetanus toxin, a surface antigen of the hepatitis B virus (patent FR 79 21811), the VP1 antigen of the poliomyelitis virus or any other viral or bacterial toxin or antigen. The procedures for synthesis of hybrid molecules encompass the methods used in genetic engineering for constructing hybrid nucleotide sequences coding for the polypeptide sequences sought. It will be possible, for example, to refer advantageously to the technique for obtainment of genes coding for fusion proteins described by Minton in 1984. Said hybrid nucleotide sequences coding for a hybrid polypeptide as well as the hybrid polypeptides according to the disclosure characterized in that they are recombinant polypeptides obtained by the expression of said hybrid nucleotide sequences are likewise part of the disclosure. The disclosure likewise comprises the vectors characterized in that they contain one of said hybrid nucleotide sequences. 
The host cells transformed by said vectors, the transgenic organisms comprising one of said transformed cells as well as the procedures for preparation of recombinant polypeptides using said vectors, said transformed cells and/or said transgenic organisms are, of course, likewise part of the disclosure. The polypeptides according to the disclosure, the antibodies according to the disclosure described below and the nucleotide sequences according to the disclosure can advantageously be employed in procedures for the detection and/or identification of Alicyclobacillus acidocaldarius in a sample capable of containing them. These procedures, according to the specificity of the polypeptides, the antibodies and the nucleotide sequences according to the invention which will be used, will in particular be able to detect and/or to identify an Alicyclobacillus acidocaldarius. The polypeptides according to the disclosure can advantageously be employed in a procedure for the detection and/or the identification of Alicyclobacillus acidocaldarius in a sample capable of containing them, characterized in that it comprises the following acts: a) contacting of this sample with a polypeptide or one of its fragments according to the disclosure (under conditions allowing an immunological reaction between said polypeptide and the antibodies possibly present in the biological sample); and b) demonstration of the antigen-antibody complexes possibly formed. Any conventional procedure can be employed for carrying out such a detection of the antigen-antibody complexes possibly formed. By way of example, a preferred method brings into play immunoenzymatic processes according to the ELISA technique, by immunofluorescence, or radioimmunological assay processes (RIA) or their equivalent. Thus, the disclosure likewise relates to the polypeptides according to the disclosure, labeled with the aid of an adequate label such as of the enzymatic, fluorescent or radioactive type. Such methods comprise, for example, the following acts: deposition of determined quantities of a polypeptide composition according to the disclosure in the wells of a microtiter plate, introduction into said wells of increasing dilutions of serum, or of a biological sample other than that defined previously, having to be analyzed, incubation of the microplate, introduction into the wells of the microtiter plate of labeled antibodies directed against pig immunoglobulins, the labeling of these antibodies having been carried out with the aid of an enzyme selected from those which are capable of hydrolyzing a substrate by modifying the absorption of the radiation of the latter, at least at a determined wavelength, for example, at 550 nm, detection, by comparison with a control test, of the quantity of hydrolyzed substrate. The polypeptides according to the disclosure enable monoclonal or polyclonal antibodies to be prepared which are characterized in that they specifically recognize the polypeptides according to the disclosure. It will advantageously be possible to prepare the monoclonal antibodies from hybridomas according to the technique described by Köhler and Milstein in 1975.
It will be possible to prepare the polyclonal antibodies, for example, by immunization of an animal, in particular a mouse, with a polypeptide or a DNA, according to the disclosure, associated with an adjuvant of the immune response, and then purification of the specific antibodies contained in the serum of the immunized animals on an affinity column on which the polypeptide which has served as an antigen has previously been immobilized. The polyclonal antibodies according to the disclosure can also be prepared by purification, on an affinity column on which a polypeptide according to the disclosure has previously been immobilized, of the antibodies contained in the serum of an animal immunologically challenged by Alicyclobacillus acidocaldarius, or a polypeptide or fragment according to the disclosure. The disclosure likewise relates to mono- or polyclonal antibodies or their fragments, or chimeric antibodies, characterized in that they are capable of specifically recognizing a polypeptide according to the disclosure. It will likewise be possible for the antibodies of the disclosure to be labeled in the same manner as described previously for the nucleic probes of the disclosure, such as a labeling of enzymatic, fluorescent or radioactive type. The disclosure is additionally directed at a procedure for the detection and/or identification of Alicyclobacillus acidocaldarius in a sample, characterized in that it comprises the following acts: a) contacting of the sample with a mono- or polyclonal antibody according to the disclosure (under conditions allowing an immunological reaction between said antibodies and the polypeptides of Alicyclobacillus acidocaldarius possibly present in the biological sample); and b) demonstration of the antigen-antibody complex possibly formed. The present disclosure likewise relates to a procedure for the detection and/or the identification of Alicyclobacillus acidocaldarius in a sample, characterized in that it employs a nucleotide sequence according to the disclosure. More particularly, the disclosure relates to a procedure for the detection and/or the identification of Alicyclobacillus acidocaldarius in a sample, characterized in that it includes the following acts: a) if need be, isolation of the DNA from the sample to be analyzed; b) specific amplification of the DNA of the sample with the aid of at least one primer, or a pair of primers, according to the disclosure; and c) demonstration of the amplification products. These can be detected, for example, by the technique of molecular hybridization utilizing a nucleic probe according to the invention. This probe will advantageously be labeled with a nonradioactive (cold probe) or radioactive element. For the purposes of the present disclosure, “DNA of the biological sample” or “DNA contained in the biological sample” will be understood as meaning either the DNA present in the biological sample considered, or possibly the cDNA obtained after the action of an enzyme of reverse transcriptase type on the RNA present in said biological sample. A further embodiment of the disclosure comprises a method, characterized in that it comprises the following acts: a) contacting of a nucleotide probe according to the disclosure with a biological sample, the DNA contained in the biological sample having, if need be, previously been made accessible to hybridization under conditions allowing the hybridization of the probe with the DNA of the sample; and b) demonstration of the hybrid formed between the nucleotide probe and the DNA of the biological sample.
The present disclosure also relates to a procedure according to the disclosure, characterized in that it comprises the following acts: a) contacting of a nucleotide probe immobilized on a support according to the disclosure with a biological sample, the DNA of the sample having, if need be, previously been made accessible to hybridization, under conditions allowing the hybridization of the probe with the DNA of the sample; b) contacting of the hybrid formed between the nucleotide probe immobilized on a support and the DNA contained in the biological sample, if need be after elimination of the DNA of the biological sample which has not hybridized with the probe, with a nucleotide probe labeled according to the disclosure; and c) demonstration of the novel hybrid formed in act b). According to an advantageous embodiment of the procedure for detection and/or identification defined previously, this is characterized in that, prior to act a), the DNA of the biological sample is first amplified with the aid of at least one primer according to the disclosure. Further embodiments of the disclosure comprise methods of at least partially degrading, cleaving, and/or removing a polysaccharide, lignocellulose, hemicellulose, lignin, chitin, heteroxylan, and/or xylan-decorating group. Degrading, cleaving, and/or removing these structures have art-recognized utility, such as those described in Mielenz 2001; Jeffries 1996; Shallom and Shoham 2003; Lynd et al. 2002; Vieille and Zeikus 2001; Bertoldo et al. 2004; and/or Malherbe and Cloete 2002. Embodiments of methods include placing a recombinant, purified, and/or isolated polypeptide having at least 90% sequence identity to SEQ ID No. 2 in fluid contact with a polysaccharide, lignocellulose, hemicellulose, lignin, chitin, heteroxylan, and/or xylan-decorating group. Further embodiments of methods include placing a cell producing or encoding a recombinant, purified, and/or isolated polypeptide having at least 90% sequence identity to SEQ ID No. 2 in fluid contact with a polysaccharide, lignocellulose, hemicellulose, lignin, chitin, heteroxylan, and/or xylan-decorating group. As used herein, “partially degrading” relates to the rearrangement or cleavage of chemical bonds in the target structure. In additional embodiments, methods of at least partially degrading, cleaving, and/or removing a polysaccharide, lignocellulose, hemicellulose, lignin, chitin, heteroxylan, and/or xylan-decorating group may take place at temperatures at or above about 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, and/or 95 degrees Celsius and/or at a pH at, below, and/or above 7, 6, 5, 4, 3, 2, 1, and/or 0. Further embodiments of the disclosure may comprise a kit for at least partially degrading, cleaving, and/or removing a polysaccharide, lignocellulose, hemicellulose, lignin, chitin, heteroxylan, and/or xylan-decorating group, the kit comprising a cell producing or encoding a recombinant, purified, and/or isolated polypeptide having at least 90% sequence identity to SEQ ID No. 2 and/or a recombinant, purified, and/or isolated polypeptide having at least 90% sequence identity to SEQ ID No. 2. The disclosure is described in additional detail in the following illustrative example. Although the example may represent only a selected embodiment of the disclosure, it should be understood that the following example is illustrative and not limiting.
In embodiments of the disclosure any one of the isolated and/or purified polypeptides according to the disclosure may be enzymatically active at temperatures at or above about 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, and/or 95 degrees Celsius and/or may be enzymatically active at a pH at, below, and/or above 7, 6, 5, 4, 3, 2, 1, and/or 0. In further embodiments of the disclosure, glycosylation, pegylation, and/or other post-translational modification may be required for the isolated and/or purified polypeptides according to the invention to be enzymatically active at a pH at or below 7, 6, 5, 4, 3, 2, 1, and/or 0 or at temperatures at or above about 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, and/or 95 degrees Celsius. All references, including publications, patents, and patent applications, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. While this disclosure has been described in the context of certain embodiments, the present disclosure can be further modified. This application therefore encompasses any variations, uses, or adaptations of the disclosure using its general principles. Further, this application encompasses such departures from the present disclosure as come within known or customary practice in the art to which this disclosure pertains and which fall within the limits of the appended claims and their legal equivalents. Provided in SEQ ID NO:1 is a nucleotide sequence isolated from Alicyclobacillus acidocaldarius and encoding the polypeptide of SEQ ID NO:2. As can be seen in FIGS. 1A and 1B, SEQ ID NO:2 aligns well with other proteins identified as esterases. Of particular importance, it is noted that where amino acids are conserved in other esterases, those amino acids are generally conserved in SEQ ID NO:2. Thus, the polypeptide provided in SEQ ID NO:2 is properly classified as an acetylxylan esterase. The polypeptides of SEQ ID NOs:13-17 are representative examples of conservative substitutions in the polypeptide of SEQ ID NO:2 and are encoded by nucleotide sequences of SEQ ID NOs:8-12, respectively. The nucleotide sequences of SEQ ID NOs:1 and 8-12 are placed into expression vectors using techniques standard in the art. The vectors are then provided to cells such as bacterial cells or eukaryotic cells such as Sf9 cells or CHO cells. In conjunction with the normal machinery present in the cells, the vectors comprising SEQ ID NOs: 1 and 8-12 produce the polypeptides of SEQ ID NOs: 2 and 13-17. The polypeptides of SEQ ID NOs: 2 and 13-17 are then isolated and/or purified. The isolated and/or purified polypeptides of SEQ ID NOs: 2 and 13-17 are then demonstrated to have activity as acetylxylan esterases. The isolated and/or purified polypeptides of SEQ ID NOs: 2 and 13-17 are challenged with xylan or xylo-oligosaccharide. The isolated and/or purified polypeptides of SEQ ID NOs: 2 and 13-17 are demonstrated to have activity as acetylxylan esterases. FIG. 2 represents the activity of four different preparations of SEQ ID NO: 2. Therein, the activity (units/mg) as an acetylxylan esterase is shown at 10, 15, and 20 minutes. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 depicts a sequence alignment between SEQ ID NO:2 (RAAC02760), an acetylxylan esterase, and gi|944285589, gi|916736932, gi|917405684, gi|954102927, and gi|955294285 (SEQ ID NOs:3-7 respectively) which are all esterases.
Amino acids common to all of the sequences are indicated by a “*”, while amino acids with only conservative substitutions are indicated by “:”. FIG. 2 depicts a representation of the activity of four different preparations (represented as Xs, triangles, squares, and diamonds) of SEQ ID NO: 2. Therein, the activity (units/mg) as an acetylxylan esterase is shown at 10, 15, and 20 minutes.
Last summer one of our local grocery stores carried some locally grown cantaloupe and it was SO good! We kept going back and getting more; we just couldn’t get enough. We were sad thinking that the season was going to end and we wouldn’t have any more fresh cantaloupe to eat. So…we decided to buy a bunch to freeze, thinking that we would be able to thaw and eat them like they were fresh. Well, we were a little disappointed that they didn’t quite live up to the same quality after they had been frozen. So I have had three gallon bags full of cut-up cantaloupe sitting in my freezer since then, not knowing how to use it up…until now! Enter…Cantaloupe Smoothie! We LOVED this recipe (even my kids who don’t necessarily like to eat cantaloupe!) and I hope you enjoy it too!

Cantaloupe Smoothie

• 2 bananas, peeled (fresh or frozen)
• 3 cups cantaloupe, diced (fresh or frozen)
• 2 cups orange juice
• 2 cups plain yogurt
• 2 tsp. vanilla
• 4+ teaspoons honey

Place all the ingredients in a blender and blend until smooth. Enjoy!
https://sherigraham.com/cantaloupe-smoothie/
The Jim Murray Memorial Foundation (JMMF) is a 501 (c) (3) nonprofit organization, established in 1999 to perpetuate the Jim Murray legacy, and his love for and dedication to his extraordinary career in journalism. The JMMF mission is to award five (5) $5,000 scholarships annually to print journalism students through a nationwide essay competition. The Jim Murray Memorial Foundation presents its annual essay competition for a $5,000 scholarship to be awarded to a Trinity College student who meets the following guidelines: ~The Trinity College student must be a Connecticut resident. ~An applicant must be a sophomore intending to declare an English major or a junior who has declared the English major and will graduate in spring of 2015 (or 2016). ~An applicant must have at least a “B” (3.0) grade point average and be enrolled full-time. ~An applicant must show integrity of character, interest in and respect for his/her fellow humans, and the energy to use his/her writing talents to the fullest extent. If you are interested, please contact Margaret Grasso in the English Department, ext. 2455 or via email – [email protected]. Trinity College-Hartford is Jim Murray’s alma mater. The English Department at Trinity College will review essays and declare a finalist to be named a Murray Scholar. All submissions must be delivered to Margaret Grasso no later than Monday, April 14, 2014. Essay Format Guidelines: A. Essay Format: 1. No more than 1000 words, but no less than 750 words. 2. Must be prepared with a one-inch (1”) margin, 12-point font, double-spaced. 3. Applicant’s first and last name and word count in the upper right corner of each page. 4. Multiple pages must be numbered bottom-center. *********************************************************** Write a column that tells the story of an event, incident or person who figures prominently in the sports history of your university. It could be a memorable game, or coach, or player, or even an artifact that has become an indelible part of the story of your college (or maybe town). *********************************************************** The Jim Murray Memorial Foundation will issue a $5,000 scholarship check to Trinity College to be used for the scholarship recipient's 2014-2015 academic needs.
http://calendar.trincoll.edu/PublicMasterCalendar/EventDetails.aspx?data=hHr80o3M7J4xIElysINQ09X1G8dSjiSXI6otHCDEFCpnM9ERz06GGGxn%2BwjJLysq
The expression begins with an opening brace and ends with a closing brace. START and END can be either positive integers or single characters. The START and the END values are mandatory and separated with two dots .., with no space between them. The INCREMENT value is optional. If present, it must be separated from the END value with two dots .., with no space between them. When characters are given, the expression is expanded in lexicographic order. The expression expands to each number or character between START and END, including the provided values. An incorrectly formed expression is left unchanged.

Here’s the expression in action:

echo {0..3}

When no INCREMENT is provided the default increment is 1:

0 1 2 3

You can also use other characters. The example below prints the alphabet:

echo {a..z}

a b c d e f g h i j k l m n o p q r s t u v w x y z

If the START value is greater than END then the expression will create a range that decrements:

for i in {3..0}
do
  echo "Number: $i"
done

Number: 3
Number: 2
Number: 1
Number: 0

When an INCREMENT is given, it is used as the step between each generated item:

for i in {0..20..5}
do
  echo "Number: $i"
done

Each generated number is greater than the preceding number by 5:

Number: 0
Number: 5
Number: 10
Number: 15
Number: 20

When using integers to generate a range, you can add a leading 0 to force each number to have the same length. To pad generated integers with leading zeros, prefix either START or END with a zero:
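The padded example itself appears to be cut off above, so here is a minimal illustration of the behavior just described. These commands and outputs are not from the original page; they simply show standard Bash brace expansion with and without a leading zero:

echo {1..10}

1 2 3 4 5 6 7 8 9 10

echo {01..10}

01 02 03 04 05 06 07 08 09 10

Padding either value is enough: when START or END begins with a zero, Bash pads every generated number to the same width.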
Introduction ============ The interaction of localized magnetic flux (fluxons) with defects (natural or artificial) or impurities in superconductors or junctions has an important effect on the properties of bulk superconductors or the behavior of Josephson junctions correspondingly [@conf]. The flux trapping from defects, which is of major importance in Josephson junctions [@barone], can modify the properties of polycrystalline materials with physical dislocations, for example grain boundary junctions [@dimos]. In this category one can also consider grain boundary junctions in YBa$_2$Cu$_3$O$_7$ [@sarnelli] where the tunneling current is a strongly varying function along the boundary. This strong inhomogeneity makes them good candidates for SQUID type structures [@gross1]. Phenomenologically the current-voltage characteristics of grain boundary junctions are well described [@gross2] by the resistively shunted junction model [@mccumber]. The grain boundary lines often tend to curve, while the junction is very inhomogeneous and contains nonsuperconducting impurities and facets of different length scales[@miller; @ayache]. The linear increase of the critical current with length in grain boundary junctions with high-$T_c$ superconductors, which is a different behavior from the saturation in the inline geometry of a perfect junction, can be explained by the presence of impurities[@fehren]. Therefore it is interesting to study flux trapping in impurities especially when it can be controlled. Modern fabrication techniques can with relative ease engineer any defect configuration in an extremely controlled way. In bulk materials there are several types of defects that can influence the critical current in high temperature superconductors like YBa$_2$Cu$_3$O$_x$ materials. They include 3d inclusions, 2d grain boundaries and twin boundaries, and point defects like dopant substitutions and oxygen vacancies[@conf]. For example the homogeneous precipitation of fine Y$_2$BaCuO$_5$ non-superconducting particles in the melt processing of YBa$_2$Cu$_3$O$_x$ leads to high $J_c$ values due to the particle pinning centers[@oka]. Similar behavior is observed in NdBa$_2$Cu$_3$O$_x$ bulk crystals with Nd$_4$Ba$_2$Cu$_2$O$_{10}$ particles[@takagi]. Of interest is also the case of the peak effect in twin-free $Y123$ with oxygen deficiency. In this case, one sees a linear increase (peak effect) of the critical current at small magnetic fields, when growth is under oxygen reduction[@wolf]. For the fully oxidized crystal one expects a decrease. The peak effect is attributed to flux trapping. Information on the defect density and activation energies can also be obtained from the I-V characteristics, as was the case for several types of defects which were also compared to $Au^+$ irradiated samples with artificial columnar defects[@camerlingo]. These columnar defects also act to trap flux lines in an YBCO film which is considered as a network of intergrain Josephson junctions modulated by the defects. In this case assuming a distribution of contact lengths one finds a plateau in the critical current density vs. the logarithm of the field[@mezzetti]. The study of large-size impurities will give information beyond theories which concern a small amplitude of inhomogeneities [@virocur]. It also allows a direct comparison of the numerical results with experiments in long junctions obtained with electron beam lithography [@kroger].
This is a powerful technique which allows the preparation and control of arrays of pinning centers. Another method is ionic irradiation, which produces a particular kind of disordered array, consisting of nanosized columnar defects [@camerlingo; @beek]. The variation of the critical current density can also occur due to temperature gradients [@krasnov]. The activity in the area of high critical current densities in the presence of a magnetic field is hampered by defects due to the difficulty of having a high quality junction with a very thin intermediate layer. Thus significant activity has been devoted to this area, since, for example, the energy resolution of the SQUID [@ketchen] and the maximum operating frequency of the single flux quantum logic circuit [@likharev], to name a few applications, depend inversely and directly respectively on the plasma frequency $\omega_p$, with $\omega_p \sim J_c^{1/2}$. The fundamental response frequency of Josephson devices, the Josephson frequency $\omega_J$, also depends on the critical current density. On the other hand, a drawback is that high critical current densities lead to large subgap leakage currents [@kleinsasser] and junction characteristics degrade rapidly with increasing $J_c$. Variations in the critical current density also influence the $I-V$ characteristics introducing steps under the influence of both a static bias current and the irradiation with microwaves [@reinisch]. In that case the variation is quite smooth (of $sech$ type), so that the fluxon and its motion could be described by a small number of collective coordinates. Interesting behavior is also seen in both the static and dynamic properties for the case of a spatially modulated $J_c$ with the existence of “supersoliton” excitations [@oboznov; @larsen] and the case of columnar defects [@tinkham; @balents] or disordered defects [@fehren]. The trapping of fluxons can be seen in the $I_{max}(H)$ curves where we also expect important hysteresis phenomena when scanning the external magnetic field. The hysteresis can be due to two reasons: (i) the non-monotonic relation between flux and external magnetic field [@caputo] arising from the induced internal currents, and (ii) the trapping or detrapping of fluxons by defects. The effect of a defect on a fluxon and the strength of the depinning field depend strongly on the size of the defect, the type of defect and the position of the defect. Here we will consider the case where the width of the defects is of the order of the Josephson penetration depth. In this range we expect the strongest coupling between fluxons and defects. We will also consider the case of a few defects in the low magnetic field region where pinning and coercive effects are important. The organization of the paper is as follows. In Sec. 2 the sine-Gordon model for a Josephson junction is presented. In Sec. 3 we present the results of the critical current $I_{max}$ versus the magnetic field of a junction with an asymmetrically positioned defect. The variation of the $I_{max}$ and the flux content $N_f$ with the defect critical current density and the position are presented in sections 4 and 5 respectively. The effect of multiple pinning centers is examined in sections 6 and 7. In Sec. 8 we examine a defect with a smooth variation of the critical current density. In the last section we summarize our results.
The junction geometry ===================== The electrodynamics of a long Josephson junction is characterized by the phase difference $\phi(x)$ of the order parameter in the two superconducting regions. The spatial variation of $\phi(x)$ induces a local magnetic field given by the expression $${\cal H}(x) = \frac{d { \phi}(x)}{dx},~~~\label{phix}$$ in units of $H_0=\frac{\Phi_0}{2\pi d \lambda_J}$, where $\Phi_0$ is the quantum of flux, $d$ is the magnetic thickness and $\lambda_J$ is the Josephson penetration depth. The magnetic thickness is given by $d=2\lambda_L +t$ where $\lambda_L$ is the London penetration depth in the two superconductors and $t$ is the oxide layer thickness. The $\lambda_J$ is also taken as the unit of length. The current transport across the junction is taken to be along the $z$ direction. We describe a 1-D junction with width $w$ (normalized to $\lambda_J$) in the $y$ direction, small compared to unity. The normalized length in the $x-$direction is $\ell$. The superconducting phase difference $\phi(x)$ across the defected junction is then the solution of the sine-Gordon equation $$\frac{d^2 { \phi}(x)}{dx^2} = \widetilde{J}_c(x)\sin[{\phi(x)}],~~~\label{eq01}$$ with the inline boundary condition $$\frac{d { \phi}}{dx}\left|_{x=\pm\frac{\ell}{2}}\right. =\pm \frac{{I}}{2}+H,~~~\label{eq02}$$ where $I$ and $H$ are the normalized bias current and external magnetic field. $\widetilde J_c(x)$ is the local critical current density which is $\widetilde J_c=1$ in the homogeneous part of the junction and $\widetilde J_c=j_d$ in the defect. Thus the spatially varying critical current density is normalized to its value in the undefected part of the junction $J_0$ and the $\lambda_J$ used above is given by $$\lambda_J=\sqrt{\frac{\Phi_0}{2\pi \mu_0 d J_0}},$$ where $\mu_0$ is the free space magnetic permeability. One can also define a spatially dependent Josephson penetration depth by introducing $\widetilde{J}_c(x)$ instead of $J_0$. This is a more useful quantity in the case of weak distributed defects. In the case of overlap boundary conditions Eqs. (\[eq01\]) and (\[eq02\]) are modified as $$\frac{d^2 { \phi}(x)}{dx^2} = \widetilde{J}_c(x)\sin[{\phi(x)}] -I,~~~\label{eq0a}$$ and $$\frac{d { \phi}}{dx}\left|_{x=\pm\frac{\ell}{2}}\right. = H.~~~\label{eq0b}$$ We can classify the different solutions obtained from Eq. (\[eq01\]) with their magnetic flux content $$N_f = \frac{1}{2 \pi} (\phi_R-\phi_L) ,~~~\label{phi}$$ in units of $\Phi_0$, where $\phi_{R(L)}$ is the value of $\phi(x)$ at the right (left) edge of the junction. Knowing the magnetic flux one can also obtain the magnetization from $$M= \frac{2\pi}{\ell} N_f-H. ~~~~~\label{magn}$$ For the perfect junction, a quantity of interest is the critical magnetic field for flux penetration from the edges, denoted by $H_{c1}$. For a long junction it is equal to 2 while for a short one it depends on the junction length. Due to the existence of the defect this value can be modified since we have the possibility of trapping at the defects. For a short junction we have penetration of the external field in the junction length, so that the magnetization approaches zero. For a long junction it is a non-monotonic function of the external field $H$.
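A useful point of reference, not spelled out in the text above, is the standard single-fluxon solution of the unperturbed sine-Gordon equation (i.e., $\widetilde J_c(x)=1$ everywhere) in these normalized units: $$\phi(x) = 4\arctan\left[e^{\,x-x_0}\right], \qquad {\cal H}(x) = \frac{d\phi}{dx} = \frac{2}{\cosh(x-x_0)}.$$ An isolated fluxon therefore carries a flux $N_f = \frac{1}{2\pi}\left[\phi(+\infty)-\phi(-\infty)\right]=1$ and has a maximum local field of $2$ in units of $H_0$, consistent with the value $H_{c1}=2$ quoted above for flux penetration into a long junction.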
To check the stability we consider small perturbations $u(x,t)=v(x)e^{st}$ on the static solution $\phi(x)$, and linearize the time-dependent sine-Gordon equation to obtain:$$\frac{d^2 v(x)}{dx^2} -\widetilde J_c(x)\cos\phi(x) v(x)= \lambda v(x) ,~~~\label{eq10}$$ under the boundary conditions $$\frac{dv(x)}{dx}|_{x=\pm \frac{\ell}{2}}=0,$$ where $\lambda=-s^2$. It is seen that if the eigenvalue equation has a negative eigenvalue the static solution $\phi(x)$ is unstable. There is considerable eigenvalue crossing so that we must monitor several low eigenvalues. This is especially true near the onset of instabilities. Asymmetric defect ================= In the following we will consider the variation of the maximum critical current as a function of the magnetic field for several defect structures. We start with a long ($L>\lambda_J$) junction of normalized length $\ell=10$ with a defect of length $d=2$ which is placed $D=1.4$ from the right edge. Thus the defect is of the order of $\lambda_J$. We plot in Fig. 2a the maximum critical current $I_{max}$ variation with the magnetic field. The different curves correspond to phase distributions for which we have a maximum current at a given value of the magnetic field $H$. The overlapping curves called modes have different flux content as seen in Fig. 2b where we plot the magnetic flux in units of $\Phi_0$ for zero current versus the external field. The magnetic flux is only a weak function of the external current. For the perfect junction there is no overlap in the magnetic flux between the different modes. In fact each mode has flux content between $n\Phi_0$ and $(n+1)\Phi_0$ and therefore is labelled the ($n,n+1$) mode [@caputo]. Here in the case of the defect the range (at zero current) of flux for each mode can be quite different and the labelling is with a single index $n=0,1,2, ...$ corresponding in several cases to the $(0,1)$, ($1,2$), ($2,3$),... modes of the perfect junction. There are in several cases several modes with similar flux. To distinguish them we add a letter following the index $n$. The maximum $I_{max}$ is obtained for mode $1$ and the increase comes from the trapping of flux by the defect. We have to note that the $(d,e)$ part of this mode is a continuation of the $(a,b)$ part of mode $0$. In both cases we have entrance of flux from the no defect part of the junction and the instability in the critical current occurs when $\phi(-\ell/2)=\pi$. Here and in the following we will take this to mean equal to $\pi$ modulo $2\pi$. For the maximum current (at $H < 0$) the equation is $H-I/2=-2$. This can be understood from the pendulum phase diagram, where the $\phi_x=-2$ is the extremum slope, and thus the relation $I_{max}=4+2H$ holds. For the $(b,c)$ part of mode $0$ the flux enters from the right where the defect is. This reduces the critical current compared to the perfect junction $0$ mode [@caputo; @owen]. Note that $0$-mode has its critical current $I_{max}$ peak slightly to the left of $H=0$ in the $I_{max}$ vs $H$ diagram, and to the left of $N_f=0$ in an $I_{max}$ vs $N_f$ diagram (see Fig. 11a). Also in the absence of current, reversing the direction of $H$ only changes the sign of the slope $d\phi/dx$, but the phase difference (in absolute value) at the two ends will be the same. Thus the $0$ mode at $I=0$ extends for $-1.6\le H \le 1.6$. This is not clearly seen due to curve overlapping in the left side. Comparing the $I_{max}$ for the modes $1$ and $-1$ we see that $I_{max}(1)>I_{max}(-1)$. 
In both cases a fluxon (or antifluxon) is trapped in the defect. The major difference in the $I_{max}$ comes mainly from the phase distribution which in the mode $1$ case leads to a large positive net current in the undefected side, while in the $-1$ mode the net current in the undefected side is very small. For the mode $1$, at $H\approx 0$ and zero current the instability happens due to the competition of the slope of the phase at the defect center and at the right edge, while at the other end at $H=2$, the field at the defect center becomes equal to the external field applied at the boundaries and there is no such competition. In this case the instability sets in due to the critical value of the phase at the undefected boundary (i.e. $\phi_x(-\ell/2)=2$). The situation is analogous for the mode $-1$. For $H\approx 0$ the instability sets in due to the depinning of the antifluxon while for $H=-2$ due to the critical value of the phase at the defect-free part of the junction. For the mode $0$ we have no fluxon trapping at the defect, even though the instability at the two extremes with $H=\pm1.6$ at zero current is caused by the tendency to trap a fluxon or antifluxon correspondingly at the defect. At higher values of the magnetic field ($|H|>1.6$) we have stability for a range of non-vanishing current values as will be discussed below. Thus this value can be considered as the minimum value for the introduction of fluxons in the junction. Let us remark that for the perfect junction, or a junction with a centered defect, the corresponding values for fluxon introduction would be equal to $2$. Thus there is a decrease of the critical field as the defect moves away from the center. For the $0$ mode a centered defect would have no influence on the solution. The results for the maximum current are in agreement with the stability analysis. In Fig. 2c we present the lowest eigenvalue $\lambda_1$ for the different modes in zero external current $I=0$ as a function of the magnetic field $H$. The sudden change in slope for the modes $-1, 1$ is because at that point a new eigenvalue becomes lower. The $\lambda_1$ is positive denoting stability and becomes zero at the critical value of the magnetic field, where a mode terminates. The symmetry about zero magnetic field is due to the symmetric boundary conditions for $I=0$. Change of the sign of $H$ changes the sign of the phase distribution, but the $\cos\phi$ in (\[eq10\]) remains unchanged. This symmetry is lost when a finite current is also applied. Also there are solutions (not presented in the figure) for which the stability analysis gives negative eigenvalues, i.e., instability. These solutions may be stabilized when we insert multiple impurities. In Fig. 3a we specifically draw only the $1$ mode, to be discussed in more detail. Here we changed the procedure for searching for the maximum current. Up to now we followed the standard experimental procedure, i.e. we scan the magnetic field and for each value of $H$ we increase the current $I$, starting from $I=0$, until we reach the maximum current. Here we consider the possibility that for $I>0$ there is also a lower bound in the value of the current for some values of the magnetic field. This requires a search where we vary both $H$ and $I$ simultaneously. Thus we see that for $H<0$ there is a lower bound given approximately by the line $H+I/2\approx H_{cl}$, where $H_{cl}\approx 0$ is the critical value of $H$ at $I=0$, for which we have depinning of the trapped fluxon.
Over this curve the slope $\phi_x$ at the right end (near defect) is kept constant and equal to $H_{cl}$ and above this line the fluxon remains pinned and it should be stable. This line ends at $H=-1$, since in that case the extremum value $\phi_x=-2$ is reached in the left end. Increasing now in that range of $H$ the bias current we find that also the $I_{max}$ curve extends further to the left. The equation for this line is approximately given by $H-I/2=-2$, with an extremum at $\phi_x(-\ell/2)=-2$. Thus the instability on this line arises from the left side (far from the defect). It extends up to $H=-1$ for a long junction and joins the other line $H+I/2=H_{cl}$. The above calculations were done for a long junction so that the fields at the two ends do not interfere. For shorter length however the two ends feel each other and in that case the two instabilities are not independent. This means that the tail of the defect free side field will compete with the slope of the trapped field. Then the two lines $H+I/2=H_{cl}$, and $H-I/2=-2$, end before they meet (at $H\approx-1$) at a cutoff magnetic field. Also for short junctions we expect the straight lines to have some curvature. A similar discussion holds for the right end of the $1$ mode. Again there is a lower current (positive) bound given by $H-I/2=2$ due to instability at the left end ($\phi_x(-\frac{\ell}{2})=2$), and an upper bound given by $H+I/2=H_{cr}$, where $H_{cr}\approx 2.8$, due to fluxon depinning. On the same diagram, we show the lower bound for negative currents. Thus we see that there is strong asymmetry for positive and negative currents. Remark that for negative currents the mode $1$ is very similar to the mode (1,2) with no defect [@caputo]. This is because the right boundary is determined by an instability at the undefected side. The left boundary is again very close because $H_{cl}\approx 0$. So an interesting effect of the defect is that we have this strong asymmetry for positive and negative currents. We would get a similar picture if we considered the $-1$ mode. In fact we get the same curves (as for mode $1$) if we put $I\rightarrow -I$ and $H\rightarrow -H$. This is consistent with the $-1$ mode shown in Fig. 2a. The discussion can also be extended to the other modes. In Fig. 3b we show the result of a similar scan for the $0$ mode, but for the sake of shortness we will not discuss the $-1,-2,2$ modes. In any case when the number of fluxons increases one must rely on numerical calculations rather than simple arguments. Variation with the defect critical current ========================================== In the previous section we considered the case of a microresistance defect. With present day masking techniques we can also consider any finite critical current (lower or higher) in the defect. This situation can also arise very often in junctions with high critical current densities, where small variations in the thickness can create strong critical current density variations. Thus for the previous asymmetric defect configurations we will study the effect of the defect critical current density in the magnetic interference pattern $I_{max}(H)$. We will concentrate on the $0$ and $1$ modes. [*(i) mode $0$:*]{} In Fig. 4 we see the $I_{max}(H)$ variation for the mode $0$ for decreasing values of the critical current density from $j_d=2$, to $j_d=0$. Let us discuss first the case for $j_d \le1$. For the perfect junction where $j_d=1$ we have a symmetric distribution about $H=0$. 
As we decrease $j_d$ the flux content of this mode (and the extremum $H$) is symmetrically reduced (see Fig. 2b). It is not apparent from the drawing, due to the superposition of several curves on the left side of the diagram, but as expected the range of the magnetic field is symmetric about $H=0$ at zero current. The corresponding $I_{max}(H)$ curves, however, are not symmetric. The right hand side of the curves is displaced towards smaller critical fields with decreasing $j_d$. This means that the critical field at $I=0$ to introduce a fluxon from the ends is decreased due to the existence of the defect which acts with an attractive force on the fluxon. The curves are linear and can be approximated by the equation $I(H)=4-2(H+\delta H_c)$, where $\delta H_c$ is the decrease in the critical field $H_{c0}$ and depends on $j_d$. A similar decrease happens for negative magnetic fields where the defect tries to pin an antifluxon. Even for higher currents the right side critical field is determined by the tendency of the defect to attract a fluxon. The left hand side, however remains rigid (but is shifted along the line). This is due to the entrance of magnetic flux from that part of the junction where there is no defect. The instability in the critical current occurs when ($\phi(-\ell/2)=\pi$) for every value of $j_d$. From the pendulum phase diagram which is the classical analog of the Josephson junction, the extremum occurs at $\partial_x\phi (-\frac{\ell}{2})=-2$, or $H-I/2=-2$ which is the equation for this triangular side. At near zero current the critical field is influenced from the attractive action of the defect. At low currents and extreme negative magnetic field the $I_{max}$ curve shows a re-entrance behavior so that it is not stable at low and high currents, but only for a finite intermediate range of current values. This way we reconcile the different origins of the instability mechanisms $\phi(-l/2)=\pi$ at high current and the defect influence discussed for the right hand side of the mode. For $j_d>1$ we see an increase in the $I_{max}$, while the critical magnetic field at $I=0$ remains almost constant at about $H_{cr}\approx 1.9$. The instability at that point is due to the trapping of flux in the region between the positive defect and the right edge of the junction. The field for that is expected to be near $H=2$ if the right undefected part is of length of the order of Josephson length. Thus it is the same value for flux penetration from the perfect junction edges. It will vary weakly with $j_d$. [*(ii) mode $1$:*]{} The mode $1$ in the perfect junction has a full fluxon for magnetic field $H=0.07$. The phase distribution is about $\theta=\pi$ where the energy has a minimum. At the end of this mode at $H=2.07$ where two fluxons have entered the junction the phase changes from $\phi(-\ell/2)=-\pi$ to $\phi(\ell/2)=3\pi$. When the defect is inserted this mode is significantly modified due to the flux trapping in the defect. In Fig. 5 we see the magnetic interference pattern for this mode for different values of the defect critical current density. For $0 < j_d < 0.7$ the $I_{max}$ vs $H$ curves are displaced downwards, and a fluxon is trapped in the defect. We notice that all the curves for $j_d<0.7$ have the same critical magnetic field $H=2$ for $I=0$. This is because at this end of the mode, at $I=0$ the instability arises at the side with no defect where the phase reaches the critical value $\phi=\pi$ (modulo $2\pi$). 
Of course as discussed in the previous section we have a reentrant behavior above $H=2$. At the other end for small magnetic field the instability is due to depinning of the trapped fluxon. For $0.7 < j_d \le 1.0$ the defect can trap the flux only for $H<H_{cd}$, where the value of the $H_{cd}$ depends on the defect critical current $j_d$ and in Fig. 5 it is shown for $j_d=0.9$. Notice that for this value of $j_d$ the fluxon is very weakly trapped, and the untrapping process happens slowly over a range of magnetic field values. For $H > H_{cd}$ the fluxon has moved away from the defect, and for this weak defect the junction does not feel it. The critical current goes abruptly close to the curve for the perfect junction. We conclude that the behavior of the junction for values of $j_d$ close to $j_d=1$ is determined by the ability of the defect to trap one fluxon. This can be seen also from the change in the lowest eigenvalue variation with the external field $H$, at values of the critical density $j_d > 0.7$, in Fig. 6. For $j_d>1$ (thin lines in Fig. 5) it has a similar form as for $j_d=1$, i.e. there is no fluxon trapping. Again, as in the $0$ mode, the $H_{cr}$ at $I=0$ stays around $2.0$ and is again due to the trapping of flux in the right edge. In Fig. 7 we present the evolution of $I_{max}$ with the defect critical current density $j_d$ for a magnetic field $H=1.5$. For this value of the magnetic field there are no solutions with trapped fluxons for $j_d > 0.83$. The lowest eigenvalue at $I=0$ becomes zero at this point. For $j_d > 0.83$ and $H > 1.5$ there are solutions which are not trapped. For these solutions the maximum current coincides with the one of the perfect junction and there is a discontinuity in the curves. Notice the point at $j_d=1.0$. In the same figure we also show the magnetic flux at $I=0$ and at $I_{max}$ which is almost constant as a function of $j_d$ as expected, with small difference between the two different current curves. Variation with the defect position ================================== In Fig. 8a we see the evolution of the critical current at zero magnetic field as we move the defect from the right edge of the junction $D=0$ to the left edge where $D=8$. The position is measured from the edge of the junction to the nearest edge of the defect. We examine the several modes separately: [*(i) mode $0$*]{} For this mode and for $I=0$, we are able to find solutions for all the defect positions. As we can see in Fig. 8b the corresponding magnetic flux at $I_{max}$ is slowly changing and equal to zero when the defect is in the junction center. But when the defect is placed close to the ends the magnetic flux at the maximum current deviates from zero. The critical current for this mode is symmetric for defect positions about the junction center, and has its maximum value when the defect is at the center. This is because at that position it does not influence the solution at the edges which is very close to the undefected case, while near the center the phase is almost zero. But when the defect comes close to the junction ends the defect cuts into the area by which the current flows, and the critical current is reduced. For even smaller distances $D=0.2$, and $D=0$ there is a jump to solutions which correspond to a current, which is much higher than that of the $0$ mode for nearby $D$ values. This is because the defect cuts negative current regions and for this position we have an increase of the critical current. 
In fact these solutions (see $++$ symbols in Fig. 8a) are very close to the solutions of a perfect junction within the undefected area, except that now the defect at the edge can give contribution to the flux but no contribution to the current. Thus the flux is much higher than that of the $0$ mode and it approaches that of mode $1$. Nevertheless these points should be considered as a separate mode. In fact they are part of a branch (crosses). In these distances there are no other modes for $H=0$. Similar results were obtained by Chow $et$ $al.$ [@chow] where they attributed this enhancement in the $I_{max}$ for small distances to a self field which was generated by the current, penetrating into the defect and resisted any further penetration of field. To overcome this resistance it was necessary to apply a higher current. But they do not distinguish between modes with different flux content, and their evolution with the defect position. [*(ii) modes $1$, $-1$*]{} For these modes we do not have solutions for all the defect positions at $I=0$ and $H=0$, but only in the range $1.4 < D < 6.6$ as seen in Fig. 8c where the lowest eigenvalue is plotted as a function of the defect position for the different modes. The curves for the 1 and -1 modes coincide, while the 0 mode shows a change of slope corresponding to the last two points ( $D=0 $ and 0.2 discussed above) which belong to another curve. Mode $-1$ has a trapped antifluxon in the defect. When the defect is to the left ($4.0 < D <6.5$), then the instability in the current of mode $-1$ at $H=0$ is created at the right-end of the junction when the phase reaches the critical value $\phi(l/2)=\pi$. This instability occurs for currents which are less than those necessary to unpin the antifluxon. Notice that increasing the current there is no competition with the slope of the antifluxon trapped in the left end. Thus at this point (for $ 4.0 < D < 6.5 $) the maximum current is very close to the undefected junction mode $0$, except that in this case $N_f\approx -1$ is close to an antifluxon. At the other end ($D < 4.0$) the instability for mode $-1$ is caused by the depinning action of the applied current, which takes now much smaller values (close to zero) because of competition with the pinned fluxon. The phase distribution at the defect free end is that expected for $H=0$ and $I$ close to zero. The mode $1$ with a fluxon trapped has a symmetrically reflected (about the center) form in $I_{max}$ vs $D$ and the instability for $D>4.0$ occurs at the left end of the junction, which is the opposite case of mode $-1$. The eigenvalue becomes zero at the positions $D=1.4$ and $D=6.6$. The $\lambda_1(D)$ curve coincides for the modes $1$ and $-1$ due to the fact that the phase distributions for the same $D$ for these modes are symmetric about $x=L/2$, and the $\cos\phi(x)$ that enters the eigenvalue equation is the same. In the rest we examine the variation of the critical value at which the instability sets in, as we scan the magnetic field in the positive (negative) direction $H_{cr} (H_{cl})$ for zero current, for the different modes, as a function of defect position. This instability can be attributed to the pinning, or the depinning field or to the critical value of $\frac{d\phi}{dx}$ at the defect free edge, depending on the particular mode that we are considering. Explicitly for the mode $0$ the instability in the $H_{cl} (H_{cr})$ is due to the pinning of a fluxon (antifluxon), respectively. 
In this mode the defect has no influence for positions close to the center as seen in Figs. 9a and 9b. However as we move the defect close to the edges the pinning field $H_{cl} (H_{cr})$ is reduced in absolute value because it is easier to trap a fluxon (antifluxon). For the mode $1$ the $H_{cr}$ is constant for all defect positions. This is due to the fact that at $I=0$ it is the phase distribution at the undefected edge of the junction that determines the instability. Notice that due to the reentrant character the critical magnetic field takes higher values at larger bias currents which vary with defect position. The $H_{cl}$ curve depends on the phase distribution near the defect and therefore is strongly defect position dependent. For the mode $-1$ the picture is reversed compared with the $1$ mode. In this case the $H_{cl}$ is constant while the $H_{cr}$ varies with position. Note that in this mode the depinning of an antifluxon is the reason that causes the instability at $H_{cr}$. Two symmetric pinning centers ============================= As noted defects (with $j_d<1$) or inhomogeneities in the junction can play the role of pinning centers for a fluxon. In this section we discuss more precisely the effect of multiple pinning centers on the magnetic interference patterns $I_{max}(H)$ and the flux distribution. The pinning effect of the Josephson junction has also been analyzed in [@yamashita1; @yamashita2], by using a simple mechanical analog. The analogies of the mixed state of type II superconductors and the vortex state of the Josephson junction have been discussed in these references. In Fig. 10a we present, as an example, the critical current $I_{max}$ versus the magnetic field for a junction which contains two defects of length $d=2$ placed symmetrically at a distance $D=2$ from the junction’s edges. We examine the following modes grouped according to flux content: [*(i) modes $0$, $0a$*]{} These modes have magnetic flux antisymmetrical around zero field, as seen from Fig. 10b where the magnetic flux is plotted versus the magnetic field. At $I=0$ and magnetic field $H=-0.7$, the $0a$ mode contains one fluxon trapped in the left defect, while an antifluxon exists at the other part of the junction. As $H$ increases towards $0.7$ the picture changes slowly, so that the antifluxon is pinned in the right defect. The stability analysis shows that this mode is unstable. We remark that there are also other unstable modes near zero flux, which we will not present here. For example there is another unstable mode with the same flux as $0a$ but a much higher critical current (the same as the $0$ mode). Mode $0$ has phase distributions which are similar to the corresponding mode of the homogeneous junction since it has no trapped flux in each defect. [*(ii) modes $1l$, $1r$*]{} These modes have magnetic flux close to unity, and are both stable. For the mode $1r$ one fluxon has been trapped in the right defect, and in the mode $1l$, the vortex is trapped in the left defect. Due to the symmetry this mode has the same magnetization as the mode $1r$, but the critical current is reduced. The phase distribution for the modes $1r$, and $1l$, at zero current are related by $\phi_{1l}(x)=2\pi-\phi_{1r}(-x)$. The maximum field $H=1.9$ (at $I=0$) for both modes is determined by an instability at the defect-free side. At the other extreme there is a competition at the fluxon side between the applied field and the field created by the pinned fluxon.
Thus the critical field at $H=-0.62$ can be considered as a coercive field and below this value the fluxon gets unpinned. The two modes have characteristically different currents and this depends on the current through the fluxon free defect, since the pinned fluxon itself gives no major contribution. Thus the maximum current is much larger for the $1r$ mode. The opposite would be true if we look for negative currents. There are also the symmetrically situated modes that correspond to an antifluxon in the left or right defect, which are not shown in Fig. 10a. The respective flux is antisymmetric with $H$ around $H=0$. In Fig. 10a we also show the mode $2$ with flux around $2$ fluxons. Several unstable modes are not shown, for the sake of clarity. Their analysis however, can show the connection between different modes, while a defect in the correct place with proper characteristics can stabilize these solutions. We conclude that depending on the positions where the vortex is trapped we may have modes with the same magnetic flux content, but different critical currents. Also due to soliton localization on the defects, we may have stable states with magnetic flux close to unity, for zero magnetic field. These states together with the one existing in the homogeneous junction form a collection of stable states in a large $H$ interval. We must comment here that states with unit flux, for zero magnetic field ($H=0$) exist in the homogeneous junction, as a continuation of the stable ($1$) mode to negative magnetic fields, but as we found in a previous work [@caputo], are unstable. So we may argue here that the presence of defects stabilizes these states. In comparing the results for one (Fig. 2a) and two defects (Fig. 10a) we see some similarities and differences. In the case of two defects new modes appear but also the region of stability of the equivalent modes is different. This is more clearly seen in Fig. 11 where we plot the $I_{max}$ vs $N_f$ for both cases. This presentation is useful since the $N_f$ is a nonlinear function of $H$. This plot (Fig. 11a) is a combination of Figs. 2a and 2b. We should point out that the maximum peak in the current in both case comes due to the trapping of a fluxon in the defect at the right side. The maximum of $0$-mode is very close in both cases and this happens because this mode does not involve fluxon trapping. The $1r$ mode for the two defect case is very close to the $1$ mode of the single defect, since in both cases there is a fluxon trapped in the same side. In the two defect case we see an enlargement of the region of stability so that the modes overlap. The thin continuation lines in modes $0$ and $1$ for the single defect are in the reentrant region of flux as discussed in section 2. Symmetric distribution of pinning centers ========================================= In this section we study as an example the case where a junction of length $\ell=14.2$ contains three defects of length $d=2$, and the distance between them is $2$. The length was augmented, so that we keep the same width of the defects when we increase the number of the defects, since we saw that the width of the order $d =2$, gives the possibility of fluxon trapping and increased maximum current when the defect is situated asymmetrically. We will study the phase distribution at $I=0$ and try to extract information about the critical field values and magnetization. We find the following modes grouped according to flux content: [*i) modes $0$, $0l$, $0r$, $0c$*]{} In Fig. 
12a we present the critical current versus the magnetic field for the modes with magnetic flux around zero (see Fig. 12d). This is indicated by the $0$ symbol. There are four modes belonging in this category, which are stable. The solutions for the mode $0$ are similar to the homogeneous junction mode $0$, with no flux trapping in the defects. The only difference is that the instability in the critical field occurs when the phase at one edge, reaches a value, which is smaller (due to pinning) than the corresponding value for the undefected junction, which is $\phi(-\ell/2)< \pi$. The same was true for the two defect case. Mode $0c$ has the maximum critical current $I_{max}=5.08$ for $H=0$. One antifluxon is trapped to the leftmost defect, one fluxon to the rightmost, and the phase in the center defect is constant. The trapping at the edge defects leads to a positive current distribution between them, for this particular length, and enlarges the maximum current. The same type of mode was not found for the two defect case (with a shorter junction length), and we conclude that the extra defect along with the increased junction length stabilizes this solution. For the mode $0l$ one fluxon is trapped in the left defect where the phase changes about the value $\phi=\pi$. The antifluxon is distributed at the other two defects, where the phase is about the values $3\pi/2$ (or $\pi/2$), and we have a cancellation of the positive and negative current density in this region. Similar for the mode $0r$ the fluxon is trapped to the right defect, and the current is distributed with opposite sign to the other two defects. These modes are similar to the $0a$ mode for the two defect case. [*ii) modes $1l$, $1c$, $1r$*]{} In Fig. 12b we see the maximum current versus the magnetic field for the modes with magnetic flux around $N_f=1$ (see Fig. 12e). There are three modes with flux close to $N_f=1$ each of which corresponds to the trapping of one fluxon in one defect. In the mode $1c$ the fluxon is trapped in the center defect. In the mode $1l$ ($1r$) it has been trapped in left (right) defect. Due to the symmetry the lowest eigenvalue, and the magnetic flux coincides for these two modes, but as we showed in the previous section, their critical currents are different, depending on the tunneling current distribution in the region with no trapping. The $1r$ mode corresponds to a higher critical current. [*iii) modes $2$, $2a$, $2b$*]{} In Fig. 12c we see the maximum current versus the magnetic field for the modes with magnetic flux around $N_f=2$ (see Fig. 12f). Only the mode $2$ corresponds to stable solutions. There we have two fluxons trapped in the side defects. In mode $2a$ one fluxon is trapped in the right defect, while in mode $2b$ this trapping occurs in the center defect. We conclude that distributed pinning centers are more effective in trapping the vortex, and lead to an increased critical current. Some conclusions will continue to be valid for larger number of defects where we keep the defect width and separation fixed. In that case we also expect the results to change significantly when there is either a periodic array of defects, where we expect higher fluxon modes to give the highest current peak [@tinkham]. 
Defect with a smooth variation of current density ================================================= Up to now we considered defects with abrupt changes in the local critical current density and the question arises whether the abruptness of $J_c$ variation is crucial in the significant change in $I_{max}$ for the $n=1$ mode. We will see that similar effects exist for smoother variation, where again the fluxon pinning is an important feature. For this reason we chose a single defect at the junction center with a smoothly varying critical current density given by $$\widetilde J_c (x)=\tanh ^2\left[ \frac{2}{\mu}(x-x_0) \right],~~~\label{delta}$$ where the defect is centered at $x_0$, and the width is determined by $\mu$. In Fig. 13 we show the results for the case $x_0=7.6$ and $\mu=2$, which can be compared with the results of the asymmetric defect in Fig. 2a. For the modes shown the curves are very similar and thus we see that the main results survive since the defect strengths are similar. Of course there is a quantitative difference. But most of the stability criteria described earlier are still valid. In Fig. 14 we consider the effect of the form of current input and compare the case of inline with overlap for a smooth defect situated at the center of the junction i.e. $x_0=0$ with $\mu=0.5$. In Fig. 14a we present the $I_{max}$ for inline boundary conditions, and we show only the $-1,0,1$ modes. The $0$ mode is not influenced at all from the defect since all the phase variation is at the boundaries. There is a strong similarity with $I_{max}$ for the $1$, and $-1$ modes. The reason is that in these cases there is a trapped fluxon or antifluxon at the center and at zero current and magnetic field the phase variation dies out at the boundaries. Thus when increasing the current at $H=0$ towards $I_{max}$ we have the same situation at the boundaries as for the $0$ mode and the instability happens at close $I_{max}$ values. Of course due to the pinning, the fluxon content is very different from the $0$ mode. The $-1$ and $1$ modes have an enhanced $I_{max}$ and the small difference in $I_{max}$ from the $0$ mode is attributed to the small influence of the trapped fluxon to the boundaries. Let us remark that a similar situation was seen in Fig. 8a for the square well defect, when the defect position is at the center for $H=0$. By comparing with Fig. 9 the $H_{cl}$ and $H_{cr}$ values we see close agreement with the case of $j_d=0$ in the defect. These results could change for a smaller length junction or if we move the defect towards the edges (as seen in Fig. 8a). For the same defect we also investigated the effect of the overlap current input, where the current is distributed along the whole junction. In Fig. 14(b) we present the maximum current per unit junction length versus the magnetic field, and it should be compared to the inline case in Fig. 14(a). We see a significant change for the $-1$ and $1$ modes. Of course at $I=0$ both current inputs give the same solutions, but $I_{max}$ is much smaller for the overlap boundary conditions. This is from the fact that due to the applied current the fluxon is pushed against the pinning barrier until it is overcome at the critical current. In the absence of applied current the phase at the defect center is $\phi(0)=\pi$, while the application of the current pushes the fluxon to the edge of the defect which is taken to be near the point where the curvature of the defect critical current distribution changes sign. 
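To make the preceding discussion more concrete, the following is a minimal numerical sketch (not the code used to produce the figures; the grid, relaxation scheme, junction length and parameter values are all assumptions chosen for illustration) that relaxes the static phase equation $\phi_{xx}=\widetilde J_c(x)\sin\phi$ with the boundary condition $\phi_x(\pm \ell/2)=H$, using the smooth $\tanh^2$ profile above, and checks that a fluxon initially placed on the defect stays pinned there with flux content $N_f\approx 1$ at $H=0$.

```python
import numpy as np

# Illustrative parameters only (not taken from the paper's runs).
L = 10.0                      # junction length in units of lambda_J
N = 401                       # grid points
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

x0, mu = 0.0, 0.5             # defect centre and width of the smooth profile
jc = np.tanh(2.0/mu * (x - x0))**2   # smooth critical-current density J_c(x)

def relax(phi0, H, n_iter=100_000):
    """Damped relaxation of phi_t = phi_xx - jc(x)*sin(phi) at I = 0,
    with Neumann boundary conditions phi_x(+-L/2) = H imposed via ghost points."""
    dt = 0.2 * dx**2          # explicit scheme, stable for dt < dx^2/2
    phi = phi0.copy()
    ext = np.empty(N + 2)
    for _ in range(n_iter):
        ext[1:-1] = phi
        ext[0] = phi[1] - 2*dx*H      # enforce phi_x = H at the left edge
        ext[-1] = phi[-2] + 2*dx*H    # enforce phi_x = H at the right edge
        lap = (ext[2:] - 2*ext[1:-1] + ext[:-2]) / dx**2
        phi += dt * (lap - jc*np.sin(phi))
    return phi

# Start from a sine-Gordon kink centred on the defect (candidate mode 1 at H = 0)
phi1 = relax(4*np.arctan(np.exp(x - x0)), H=0.0)
Nf = (phi1[-1] - phi1[0]) / (2*np.pi)
print(f"flux content of the relaxed state at H = 0: N_f = {Nf:.2f}")  # ~1, pinned fluxon
```

Stepping $H$ in small increments and re-relaxing from the previous state, or adding a small uniform bias term for the overlap geometry and increasing it until the relaxation no longer converges, gives a rough numerical handle on the depinning behaviour discussed in the text.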
In this sense, the maximum current for the overlap case can be considered as a measure of the pinning force. In Fig. 15a we plot the magnetic flux $N_f$ at zero current versus the magnetic field $H$ for the inline case. The lowest eigenvalues for the different modes versus the magnetic field are seen in Fig. 15b. For a homogeneous junction the $0$ mode is the only stable state available at $H=0$. However, in the problem we consider here the mode $1$ ($-1$) also exists and is stable at $H=0$, corresponding to the localization of the soliton (antisoliton) in the inhomogeneity. For these modes we have pinned flux at $H=0$, with $\phi(0)=\pi$, and $\frac{d\phi}{dx}=2$. In Fig. 16 we show the evolution of $\phi$ and $\frac{d\phi}{dx}$, for the mode $1$, as we change the magnetic field at $I=0$. Near $H=-1.9$ the fluxon content is near zero, and for $H<-1.9$ an instability sets in due to the depinning of the fluxon. This is because the slope at the pinned fluxon competes with the opposite slope that the external negative magnetic field tries to impose at the boundaries. At the other end the flux is equal to $2$, and the instability sets in when $\phi$ at the boundaries approaches $\pi$ (or odd multiples of $\pi$). The range of $H$ values for the $1$ mode, when the defect is at the center, is significantly broadened and gives a corresponding flux range of two fluxons. Usually each mode spans about one extra fluxon, and in particular for the perfect junction it contains only one extra fluxon. This is because the defect is at the center, far enough from the edges where the magnetic field is applied, so that even for negative fields there is no significant competition with the field at the defect center. This is especially true when the distance of the defect from the edges is greater than $2\lambda_J$. When, however, the defect is near the edge, the instability sets in before we cross to negative magnetic fields. The maximum current $I_{max}$ for the mode $0$ is greater than for the modes $1$, $-1$, but is reduced compared with the $I_{max}$ for the mode $0$ in the homogeneous junction at zero field. In [@filippov] this reduction was approximated in an analytical calculation using a delta function for the defect potential, giving $\Delta I_{max}=-\mu/2L\approx0.02$. The same authors also derived, by minimizing the fluxon free energy, an analytical result for the maximum overlap current versus the magnetic field $H$ for these modes, which is a good approximation of the numerical solution we consider here in the limit $L \gg 1$. Conclusions =========== In several applications it is desirable to work at an extremum of the current over a region of the magnetic field. This can be achieved by an appropriate distribution of defects, so that the negative lobes of the current distribution in the junction due to the fluxons are trapped in the defect and make no contribution to the current. Of course, if the defect is isolated (far from other defects or the edges) we expect zero contribution to the current. Due to the effect of the applied current and magnetic field at the boundaries, in certain cases we can obtain positive current lobes outside the defect. In several cases in section 3 this was the reason for the increased current. Because the magnetic field is very easy to control compared to other system parameters (like temperature, disorder, etc.), the measurement of its effect on junction behavior provides a convenient probe for the junction.
The calculation of the $I_{max}$ can characterize the quality of the junction or verify the assumed distribution of defects when they are artificially produced. The spatial variation of the critical current density on low $T_c$-layered junctions, and high $T_c$ grain boundary junctions can be directly imaged with a spatial resolution of $1\mu m$ using low temperature scanning electron microscopy $(LTSEM)$ [@huebener; @gerdemann]. Information on smaller scale inhomogeneities has to rely on the magnetic field dependence of the maximum tunneling current $I_{max}$. The purpose of this paper is the consideration of large defects in order to study the interaction between fluxons and defects and give estimates of the coercive field for pinning or depinning of a fluxon from a defect. The region of consideration puts us far from the region of perturbation calculations and is amenable to direct experimental verification since it is easy to design a junction with the above characteristics. The defects influence strongly the low fluxon modes. At high magnetic fields larger than the depinning field of a single fluxon we expect only minor effect and fluxon trapping. Of course for a large number of defects interesting behavior can be obtained.[@oboznov; @larsen] The interaction between fluxons in the few defect case also assists to overcome coercive fields and untrap fluxons. The results of two trapped fluxons in the two defect case show that the fluxons are strongly coupled and one cannot consider an exponential interaction type potential between the fluxons. Also the critical current in a long junction, cannot be calculated as the Fourier transform of the spatial distribution of the critical current density $J_c(x)$, at least for weak magnetic fields. For strong magnetic fields, where we have the field penetrating uniformly the junction, as is the case for short junctions, we recover the diffraction like pattern. In summary we saw that the bounds of the different modes determined by the stability analysis depend on two factors: (i) the instability at the boundaries away from the defect when $\phi_x$ reaches its extremal values equal to $\pm 2$, and (ii) the instability due to the pinning or depinning of a fluxon by the defect. If the junction is near one end then we saw that both criteria play a role in determining the instability, independently in different areas. In general, however, there will be coupling between defects and the edges (surface defects) especially in the case of multiple defects. Defects also introduce hysteresis phenomena which are weaker in the case of smooth defects. We also saw that due to fluxon trapping in general we see a reentrant behavior, i.e. there are regions of magnetic field for which there is both an upper and a lower bound on the maximum current. We also find that due to the pinning of magnetic flux from the defect there exist additional stable states in a large interval of the magnetic field. The abrupt change in the critical current density is not crucial for the trapping. Similar results are expected from smooth defects, with quantitative differences. The above results can be checked experimentally since it is easy to design a junction with a particular defect structure, using masking techniques. In fact a few parameters or characteristics could give at least partial information on defect properties. In particular the measurement of $H_{cr} $ or $H_{cl}$ can give some information of the defects near the edges. 
Also one can imagine the situation where we scan locally with an electron beam affecting thus the local critical current and observe the variation of the $I_{max} $ as we increase the heating. Once a fluxon is trapped we can decrease the heating (or increase $j_d$) and observe the variation of $I_{max}$. Thus one can have pieces of information to put together in guessing the defect structure that might fit the whole $I_{max}$ pattern. The extension to many defects requires considerable numerical work. It is hoped, however, that some of the stability criteria will still be useful. [**Acknowledgements**]{} One of us (N. S.) would like to acknowledge the ESF/FERLIN programme for partial support. Part of this work was done under grant PENED 2028 of the Greek Secretariat for Science and Research. [99]{} See several papers in the Conference Proceedings on [*Vortex Matter in Superconductors at Extreme Scales and Conditions*]{}, Eds. V. V. Molshchakov, P. H. Kes, E. H. Brandt in Physica [**C 332**]{}, Nos. 1-4, May (2000). A. Barone, and G. Paterno, [*Physics and Applications of the Josephson Effect*]{} (Willey, New York, 1982). D. Dimos, P. Chaudhari, and J. Mannhart, Phys. Rev. B [**41**]{}, 4038 (1995). E. Sarnelli, P. Chaudhari, and J. Lacey, Appl. Phys. Lett. [**62**]{}, 777 (1993). R. Gross, P. Chaudhari, M. Kawasaki, M.B. Ketchen, and A. Gupta, Phys. Rev. Lett. [**57**]{}, 727 (1990). R. Gross, P. Chaudhari, D. Dimos, A. Gupta, and G. Kozen, Phys. Rev. Lett. [**64**]{}, 228 (1990). D.E. McCumber, J. Appl. Phys. [**39**]{}, 3113 (1968). D. J. Miller, H. Talvacchio, D. B. Buchholz, R. P. H. Chang, Appl.Phys. Lett. [**66**]{}, 2561 (1995). J. Ayache, A. Torel, J. Lesueur, U. Dahmen, J. Appl. Phys. [**84**]{} 4921 (1998). R. Fehrenbacher, V. B. Geshkenbein and G. Blatter, Phys. Rev. [**B45** ]{},5450 (1992). T. Oka, Y. Itoh, Y. Yanagi, H. Tanaka, S. Takashima, Y. Yamada and U. Mizutani, Physica [**C 200**]{}, 55 (1992). A. Takagi, T. Yamazaki, T. Oka, Y. Yanagi, Y. Itoh, M. Yoshikawa, Y. Yamada and U. Mizutani, Physica [**C 250**]{}, 222 (1995). Th. Wolf, A-C. Bornarel, H. Kupfer, R. Meier-Hirmer, and B. Obst, Phys. Rev. [**B 56**]{}, 6308 (1997). C. Camerlingo, C. Nappi, M. Russo, G. Testa, E. Mezetti, R. Gerbaldo, G. Ghigo and L. Gozzelino, Physica [**C 332**]{}, 93 (2000). E. Mezzetti, A. Chiodoni, R. Gerbaldo, G. Ghigo, L. Gozzelino, B. Minetti, C. Camerlingo and A. Monaco, Physica [ **332**]{}, 115 (2000). M. Virokur, and A.E. Koshelev, JETP [**70**]{}, 547 (1990). K. Kroger, L.N. Smith, and D.W. Jllie, Appl. Phys. Lett. [**39**]{}, 280 (1981). C. J. van der Beek, M. Konczykowski, R. J. Drost, P. H. Kes, A. V. Samoilov, N. Chikumoto, S. Bouffard, M. V. Feigel’man, Physica [**C 332**]{}, 178 (2000). V. M. Krasnov, V. A. Oboznov and N. F. Pedersen, Phys. Rev. [**B55**]{}, 14486 (1997). M. B. Ketchen, IEEE trans. Magnetics [**27**]{}, 2916 (1991). K. K. Likharev and V. K. Semenov, IEEE Trans. Appl. Supercond. [**1**]{}, 3 (1991). A. W. Kleinsasser, W. H. Mallison, R. E. Miller and G. B. Arnold, IEEE Trans. Appl. Supercond. [**5**]{}, 2735 (1995). G. Reinisch, J. C. Fernandez, N. Flytzanis, M. Taki and S. Pnevmatikos, Phys. Rev. B [**38**]{}, 11284 (1988). V. A. Oboznov and A. V. Ustinov, Phys. Lett. [ **A139**]{}, 481 (1989). B. H. Larsen, J. Mygind and A. V. Ustinov, Phys. Letters [**A193**]{}, 359 (1994). M. A. Itzler and M. Tinkham, Phys. Rev. [**B51**]{}, 435 (1995). L. Balents and S. H. Simon, Phys. Rev. [**B51**]{}, 6515 (1995). J.-G. Caputo, N. Flytzanis, Y. Gaididei, N. 
Stefanakis, and E. Vavalis, Supercond. Sci. Technol. [**13**]{}, 423 (2000). C.S. Owen, D.J. Scalapino, Phys. Rev. [**164**]{}, 538 (1967). T.C. Chow, H. Chou, H.G. Lai, C.C. Liu, Y.S. Gou, Physica C [**245**]{}, 143 (1995). T. Yamashita, L. Rinderer, K. Nakajima, and Y. Onodera, J. Low Temp. Phys. [**17**]{}, 191 (1974). T. Yamashita, and L. Rinderer, J. Low Temp. Phys. [**21**]{}, 153 (1975). A.T. Filippov, Yu.S. Gal’pern, T.L. Boyadjiev, and I.V. Puzynin, Phys. Lett. A [**120**]{}, 47 (1987). R. P. Huebener, Adv. Electron. Elect. Phys. [**70**]{}, 1 (1988). R. Gerdemann, K. D. Husemann, R. Gross, L. Alff, A. Beck, B. Elia, W. Reuter and M. Siegel, J. Appl. Phys. [**76**]{}, 8005 (1994).
gravity is a variable that controls the effect of gravity on an object. The higher the gravity, the more "pull" it exerts. Technically, gravity is an accelerator variable because its value is added to vspeed every step (usually), which causes an object's downward movement to accelerate. Normally, you will want to set gravity direction to 270, making gravity pull down on an object. This will only affect the object's vspeed. However, this can be changed if you want an odd gravity effect. Examples Example of Use gravity = 0.5; Related Pages - gravity - a page about video game gravity in general. - vspeed - the vertical speed of an object. - gravity direction - the direction of gravity.
http://gamedesign.wikidot.com/gamemaker:gravity
Vegan stuffed shells are creamy, cheesy comfort food! Jumbo pasta shells are stuffed with a savory ricotta filling covered in a bright and simple tomato sauce. Course Dinner, Main Dish Cuisine Italian Keyword dairy-free Italian, vegan Italian, vegan ricotta cheese, vegan stuffed shells Prep Time 30 minutes Cook Time 30 minutes Total Time 1 hour Servings 5 Author Beth Hornback Equipment Food Processor 9x13 casserole dish Ingredients Sauce 2 28 oz cans crushed tomatoes (I love the Muir Glen brand with basil) 2-3 cloves garlic, minced 1 teaspoon salt 1 teaspoon olive oil, optional Ricotta 1 cup raw cashews 1 tablespoon nutritional yeast 1 tablespoon white or yellow miso (I use chickpea miso) 1 teaspoon salt 3 cloves garlic, peeled 1 14 oz package extra-firm tofu, drained 1 cup fresh basil leaves, optional but recommended* Shells 1 12 oz box of jumbo pasta shells (you’ll have enough filling for about 21 shells) Instructions Make the sauce (if using) In a medium saucepan, saute minced garlic and salt in olive oil or a splash of water for 1-2 minutes. Add crushed tomatoes, and stir to combine. Simmer over low heat, stirring occasionally. Cook the pasta Bring a large pot of water to a boil. Add a generous pinch of salt to the water and add the pasta shells. Cook according to the package instructions. Strain the cooked pasta into a bowl of cold water to stop the cooking process and keep the shells fresh until ready to use. Make the ricotta In a food processor fitted with the s-blade, add the cashews, garlic, miso paste, salt, and nutritional yeast. Pulse until the mixture resembles coarse crumbs but is not pureed. Add the drained tofu, breaking it up into pieces as you go. Pulse until the mixture is fully combined and still has some texture from the cashews. Remove the mixture to a bowl and stir in the chopped basil, if using. Stuff the shells Preheat the oven to 350F. Drain the shells from the water bath, gently shaking to remove any water hiding in the shell cavity. Add the ricotta mixture to a piping bag or plastic food storage bag and snip off the end to leave about a half inch opening. Alternatively, you can spoon the ricotta mixture into the shells, it just is a little messier. Spread 2-3 cups of tomato sauce on the bottom of a 9x13 casserole dish or similar size baking pan. Stuff the shells, nestling them into the sauce as you go. Depending on how much filling you use, you should have enough for at least 21 shells. Spoon more sauce over the stuffed shells, cover the pan with foil, and bake for 30 minutes at 350F. Serve immediately with a salad or side of your choice. Video https://youtu.be/1BfKngaVN5w Notes I kept this recipe as simple as I could, but if you want to add more plant power to the vegan ricotta, stir in some fresh chopped spinach before you stuff the shells. Arugula could be good too! Mmm...or artichoke hearts. Maybe olives? The point is, you've got options. This meal is a perfect candidate for bulk cooking. If you double the sauce and ricotta, you can freeze the extra for another meal. Like the best damn vegan lasagna ! You will have leftover jumbo shells. This is just how it works. Accept it and move on with your life. ;) If you’re feeling extra, thin out a ½ cup of savory cashew cream with water until it’s a pourable consistency and drizzle it over the top of the shells after the 30 minute cooking time. Broil until browned and then enjoy! I usually don’t feel the need to do this because the filling is so rich as it is, but just wanted to throw that option out there.
https://passtheplants.com/wprm_print/recipe/12878
What is a bitcoin change address? When you send funds from your bitcoin wallet, the specified amount is sent to the intended bitcoin address, and the remainder of the funds stored in the sending bitcoin address is sent to what is referred to as a "change" bitcoin address associated with the same bitcoin wallet. The change is not available to spend until the bitcoin transaction gets confirmed on the bitcoin network (funds get delivered to the intended address). Consider the example of taking a $20 bill out of your wallet to pay for a $5 cup of coffee. You would give the $20 to the cashier, and the cashier would give you back $15 in change. While the $15 belongs to you, it is not available for the time it takes the cashier to give you back your change. You can choose to spend your change before your original bitcoin transaction confirms, although it is not recommended. To change this setting in your wallet, go to Settings, then Advanced to turn on Use Unconfirmed Funds. Note that it will take longer for bitcoin transactions with unconfirmed inputs to get confirmed on the bitcoin network regardless of the fee included with the transaction. Note: Change addresses work the same way on the Bitcoin Cash network.
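To make the mechanics concrete, here is a simplified sketch of how a wallet might construct the payment and change outputs (illustrative Python only, not BitPay's or any real wallet's implementation; the coin-selection strategy, addresses and fee are made-up assumptions):

```python
# Simplified sketch of change-output construction (illustrative only).
# Amounts are in satoshis; real wallets use far more involved coin selection
# and fee estimation.

def build_transaction(utxos, pay_address, pay_amount, change_address, fee):
    """Select enough unspent outputs to cover pay_amount + fee and
    return (inputs, outputs); the leftover goes to the change address."""
    selected, total = [], 0
    for utxo in utxos:                      # naive first-fit coin selection
        selected.append(utxo)
        total += utxo["amount"]
        if total >= pay_amount + fee:
            break
    if total < pay_amount + fee:
        raise ValueError("insufficient funds")

    change = total - pay_amount - fee       # this is the "change" output
    outputs = [{"address": pay_address, "amount": pay_amount}]
    if change > 0:
        outputs.append({"address": change_address, "amount": change})
    return selected, outputs

# Example: one 20,000-sat coin pays a 5,000-sat invoice with a 500-sat fee;
# 14,500 sats come back as change to the wallet's own change address.
utxos = [{"txid": "abc", "vout": 0, "amount": 20_000}]
inputs, outputs = build_transaction(utxos, "payee_addr", 5_000, "change_addr", 500)
print(outputs)
```

In a real wallet the change output goes to a freshly derived address, and, as described above, that change remains unspendable (unless unconfirmed funds are enabled) until the transaction confirms.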
https://support.bitpay.com/hc/en-us/articles/115003063823-What-is-a-bitcoin-change-address-
November 26, 2010: The U.S. Army has reported that some 857,000 medals have been awarded to the 1.2 million soldiers who have served in Iraq and Afghanistan. That's 48 percent as many medals as were awarded during World War II, when six times as many soldiers served overseas. It's also 30 percent of those awarded during Vietnam, where 25 percent more soldiers served. This odd pattern is the result of the excessive number of medals given out during the Vietnam war. This has not been forgotten. Five years ago, American troops began grumbling about what was perceived as disrespectful use of Bronze Star medals as "attaboy" awards for officers and senior NCOs who served in Iraq and Afghanistan, or for lower-ranking personnel commanders want to pin a medal on for no good reason (like giving an IED victim, who was just in the wrong place at the wrong time, something in addition to a Purple Heart). This inflation tends to be less with the higher awards, especially the Medal of Honor, as events leading to receiving these are extensively investigated, and often publicized. This awards inflation was a very unpopular aspect of the Vietnam war, and became a major embarrassment after the 1983 Grenada invasion (where the army tried to award more medals than there were troops involved, but the public caught wind of it and forced the brass to back off). It was feared that another such scandal was brewing. Compared to World War II, that is what is happening. The only good news is that it is not as bad as it was during Vietnam. In the American military, awards for valor range from the Bronze Star (which can also be awarded for non-combat accomplishment) through the Silver Star and the Distinguished Service Cross to the Medal of Honor. There are also several lesser awards for non-combat service, plus the Purple Heart for those wounded or killed in combat.
https://strategypage.com/htmw/htmoral/articles/20101126.aspx
How to Select The Optimum Fixed Time Maintenance Intervals Think about your maintenance program. How often are your PMs scheduled? How were those frequencies established? If you are in the majority, the chances are that the frequencies were either established from the OEM manual, or by someone in the department without data. Establishing the correct frequency of maintenance activities is critical to the success of any maintenance program. Too infrequently, and the organization is subjected to failures, resulting in poor operational performance. Too frequently, and the organization is subjected to excess planned downtime and an increased probability of maintenance-induced failures. So how do you establish the correct maintenance frequencies for your organization? There are three different approaches to use, based on the type of maintenance being performed:
- Time-Based Maintenance
- On-Condition Maintenance
- Failure Finding Maintenance
This article will focus on Time Based Maintenance Tasks. Time-Based Maintenance Tasks “The frequency of a scheduled task is governed by the age at which the item or component shows a rapid increase in the conditional probability of failure” (RCM2). When establishing frequencies for Time Based Maintenance, the life of the component must be identified based on data. With time-based failures, a safe life and a useful life exist. The safe life is the age before which no failures occur. Unless the failure consequence is environmental or safety related, the safe life would not normally be used. The useful life (economic life limit) is reached when the cost of the consequences of a failure starts to exceed the cost of the time-based maintenance activity. There is a trade-off at this point between the potential lost production and the cost of planned downtime, labour, and materials. So how is the safe life or useful life established? It is established using failure data and history. This history can be reviewed using a Weibull Analysis, Mean Cumulative Failure Analysis or even a Crow-AMSAA Analysis to statistically determine the life of the component. Once that life is determined using a statistical analysis, the optimum cost-effective frequency must be established. Establishing the Optimum Economic Frequency A total-cost formula is used to establish the economic life of the component, balancing the cost of downtime against the cost of replacement, where:
- CT = the total cost per unit of time
- Cf = the cost of a failure
- CP = the cost of the PM
- T = the time between PM activities
The formula provides the total cost as a function of the maintenance frequency. Since the calculation can be time-consuming, Dodson developed a table which can be used if:
- the time to fail follows a Weibull distribution
- PM is performed on an item at time T, at the cost of CP
- if the item fails before time T, a failure cost of Cf is incurred
- each time a PM is performed, the item is returned to its initial state “as good as new”
Therefore, when using the table, use the formula T = mθ + δ, where:
- m is a function of the ratio of the failure cost to the PM cost and the value of the shape parameter
- θ is the scale parameter of the Weibull distribution
- δ is the location parameter of the Weibull distribution
In the example below, you can see how the table can be used with the formula. The cost of a PM activity is $60. The cost of a failure for the same item is $1800. Given the Weibull parameters β = 3.0, θ = 120 days, and δ = 3, how often should the PM be performed?
- Cf / CP = 1800 / 60 = 30
The table value of m, given a shape parameter β of 3.0, is 0.258. Therefore:
- T = mθ + δ
- T = (0.258)(120) + 3 = 33.96
- T ≈ 34 days between PM activities
(A short script reproducing this calculation is given at the end of this article.) As you can see, determining the frequency of Fixed Time Maintenance tasks is not as simple as picking a number out of a manual or basing it on intuition. Armed with this information, a cost-effective PM frequency based on data can be developed for your Fixed Time Maintenance tasks. This will ensure the right maintenance is done at the right time, driving your plant performance further. Do your Fixed Time Maintenance tasks have this level of rigor behind them? Why not? After all, your plant performance (operational and financial) depends on it. Stay tuned for next week’s post on establishing frequencies for On-Condition tasks. Remember, to find success you must first solve the problem, then implement the solution, and finally sustain winning results.
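As referenced above, here is a minimal sketch of the table-based interval calculation (Python used purely for illustration; the m value of 0.258 is the value quoted from Dodson's table for β = 3.0 and Cf/CP = 30, not something the script computes):

```python
# Optimum fixed-time PM interval from the table method: T = m*theta + delta.
# m must be looked up in Dodson's table using the ratio Cf/Cp and the Weibull
# shape parameter beta; 0.258 is the value used in the worked example above.

def pm_interval(m, theta, delta):
    """Return the PM interval T = m*theta + delta (same time units as theta and delta)."""
    return m * theta + delta

cost_failure = 1800   # Cf, cost of a failure
cost_pm = 60          # Cp, cost of the PM task
ratio = cost_failure / cost_pm          # 30 -> row of the lookup table
m = 0.258                               # table value for beta = 3.0 and Cf/Cp = 30
T = pm_interval(m, theta=120, delta=3)  # Weibull scale = 120 days, location = 3

print(f"Cf/Cp = {ratio:.0f}, optimum PM interval ≈ {T:.1f} days")  # ≈ 34 days
```

Swap in your own cost figures, Weibull parameters and table lookup to reproduce the calculation for your equipment.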
https://accendoreliability.com/establishing-fixed-time-maintenance-intervals/
This article was co-authored by Natasha Dikareva, MFA. Natasha Dikareva is a San Francisco, California based sculptor, and installation artist. With over 25 years of ceramics, sculpting, and installation experience, Natasha also teaches a ceramic sculpture workshop titled "Adventures in Clay" covering concept development, hand-building techniques, texture, and glazing techniques. Her work has been featured in solo and group exhibitions at the Beatrice Wood Center for the Arts, Abrams Claghorn Gallery, Bloomington Center for the Arts, Maria Kravetz Gallery, and the American Museum of Ceramic Art. She has taught at the University of Minnesota and the American Indian OIC School. She has been awarded the Excellence Award at the 1st World Teapot Competition, Best in Show at the 4th Clay & Glass Biennial Competition, and a Grand Prize at the American Museum of Ceramic Art. Natasha holds an MFA from the University of Minnesota and a BFA from Kiev Fine Arts College. There are 17 references cited in this article, which can be found at the bottom of the page. This article has been viewed 4,549 times. When you’re an artist, you typically want to spend more time creating art than focusing on the business side of things. However, selling your artwork is a great way to help you support your artistic lifestyle, and if you sell it online, you’ll be able to reach a wider audience. To do that, first you need to digitize your work and take steps to protect it. Then, choose the platforms where you want to offer your artwork, weighing the pros and cons of each. Finally, spend time marketing yourself as an artist to attract buyers. Steps Method 1 Method 1 of 3:Preparing Your Artwork - 1Scan your artwork or take photos of it if it’s a physical piece. If your artwork is a drawing or a smaller painting, use a high-resolution scanner to upload the piece. For larger paintings or 3-D artwork such as sculptures, try taking several high-quality digital photographs, instead. Photograph the piece in a well-lit room, and take pictures from different angles. Also, consider taking a few up-close shots to showcase any intricate designs or interesting details. X Research source - For the best quality, be sure to clean the lens on your camera or the glass on your scanner before you digitize your piece. - If you're scanning in a piece that's bigger than your scanner, scan it in sections. Then, use an image editing software to combine all of the images into one piece, and take the time to line the edges up carefully. - Do not edit the pictures of the artwork in any way that might change its appearance. For instance, don’t use any filters or color correctors—you want potential buyers to see the art exactly as it will arrive. However, it is okay to crop the images, if you’d like. Tip: If you created a digital artwork, you won’t need to scan it in, but do ensure you save your work at the highest resolution possible. - 2Add a watermark to your images to protect yourself from theft. Create an image in your photo editing software that contains your signature, artistic logo, or website information. Make the image somewhat transparent so it won’t completely obscure the art, then place it over the picture of your artwork. This makes it harder for someone to take your work and attempt to pass it off as their own. X Research source Advertisement - Try to place the watermark somewhere that would be difficult to crop out. 
For instance, if you have a painting of a mountain scenery, you might put your watermark at the base of the mountains or in the sky above them. - Also, include a copyright notice somewhere on your image, using the copyright symbol (©), your name, and the date the piece was created. For instance, you might put, “© Krista Sans, 2019” in the bottom corner of your picture. - 3Use an inventory list to keep track of all of the art you have for sale. If you’re just selling one or two pieces, it might seem like you don’t need an inventory list. However, it’s a good idea to set a spreadsheet with the name of each piece (or an identifying number) and the site or sites where it’s listed. X Research source - Each time you sell one of your artworks, reference the inventory list to see if it was listed on more than one website. If you had it for sale in multiple places, make sure to remove it so you don’t accidentally sell a piece that you don’t have anymore. - Also, whenever a piece sells, update your inventory list with the sales price. This will help you keep track of your income if your art sales are enough to claim on your taxes at the end of the year. - A spreadsheet is easy to use when you’re just starting out, but as your business grows, you may find it more convenient to use specialized software to manage your inventory, instead. - 4Research similar artists to determine how to price your work. It can be hard to know exactly how much to sell your artwork for, especially if you’re just starting out as an artist. Look through the artwork on some of the sites where you’re considering selling your pieces and see what other artists are selling their work for, especially artists who are similar to you in style and skill level. X Expert Source Natasha Dikareva, MFA Ceramics & Sculpting Instructor Expert Interview. 5 May 2020. Then, keep your prices within 10-25% of theirs to ensure buyers will feel like your prices are fair. X Research source - Be honest with yourself about the quality of your work. You might find a buyer even if your finished work isn’t perfect, but not if you overcharge. - If you offer your art on multiple websites, be sure the pricing is consistent. - Look at other pieces in a similar medium. For instance, fine art photography will be priced differently from a painting or a sculpture. - Keep your prices lower if you're a beginner. Then, increase them as you make more sales. X Expert Source Natasha Dikareva, MFA Ceramics & Sculpting Instructor Expert Interview. 5 May 2020. Tip: Make sure to check what other art has actually sold for, not just its listing price. Just because another artist is charging something for a piece doesn’t mean it will actually sell for that. - 5Stock up on shipping supplies if you're selling physical art. If you're selling drawing, paintings, sculptures, crafts, or any other physical goods, you'll need to be prepared to ship them. The exact supplies you'll need will depend on what you're shipping, but it's a good idea to purchase what you'll need in advance. That way, once a piece sells, you can package it quickly and get it out to the buyer as soon as possible. Advertisement - Art that was created on paper or poster-paper can be rolled and mailed in a shipping tube. For canvas paintings, sculptures, and crafts, you'll need a cardboard box and packing supplies like bubble wrap, foam peanuts, or corrugated cardboard. These will prevent the piece from sliding around in the box and will cushion it from damage during shipping. 
X Research source - For very large or heavy pieces, you may need to ship them in a wooden crate, and you might want to use a specialty moving or courier service. - Typically, the buyer is responsible for shipping costs, but it's up to you whether you want to include those costs in the original price of the piece or add them on at the end. However, some art marketplace websites may already have policies that dictate how the shipping is included. Method 2 Method 2 of 3:Finding a Platform - 1Sell your art through your website for total control over the process. One of the easiest ways to sell your artwork online is through your own website. Use a content management system to design your website, and make sure there’s an easy-to-use, secure way to purchase your art directly from the site. Then, upload any work you have for sale. Be sure to update your website any time a piece sells! X Research source - By selling on your website, you won’t have to compete with other artists’ work or pay commissions, and you’ll have full control over the site layout. However, you may not reach as wide of an audience, so you may still want to offer your work on various marketplaces in addition to your website. - To help prevent copyright infringement, consider uploading low-resolution thumbnails of your digitized artwork, along with a few high-quality close-up shots to show off the detail in your piece. - Some of the more popular content management systems for artists include Shopify, Wix, Squarespace, Weebly, and WordPress. - 2Utilize social media to sell within your personal network. If you’re active on social media, try leveraging your followers into sales. Post thumbnails of your artwork on your social media to direct potential buyers to your website, or list your pieces on a social media marketplace to allow your followers to buy directly from your social media profile. X Research source - Be especially mindful of protecting your images with low-res thumbnails, watermarks, and copyright notices if you put them on social media, because it’s easier for other people to take credit for your work. - Be careful not to bombard your followers with posts. Try not to post more than once a day or once every other day. - 3Post your work on various art marketplaces to reach a wider audience. An art marketplace is similar to an online art gallery, and it can be a great way to attract buyers who might not have otherwise found your work. Some marketplaces might require you to pay for a membership to list your artwork, while others will take a commission of your final sale. Carefully read the fine print on each site, or research articles comparing different platforms to find the one that’s right for you. Once you choose the site or sites you like best, register an account and upload your art. X Research source - Some of the most popular art marketplaces include ArtPal, Artfinder, and Saatchi Art. You can also offer your work on large marketplaces like Amazon, Etsy, and eBay. - Use an email address you check frequently when you register your account, as you may need to answer questions from any potential buyers. - Browse around each marketplace to find the ones that fit your style and medium the best. For instance, Etsy is a great place to sell handmade jewelry, wall art, furniture, and other physical pieces. Tip: If you’re an established artist, you may already have art displayed at a gallery. If that’s the case, contact a representative at the gallery and ask them to include your pieces in their online inventory. 
X Research source - 4Choose a print-on-demand website if your art has mass-market appeal. Print-on-demand websites typically allow buyers to select a piece of artwork they like, then order it to be printed on a variety of different objects. If your style is commercial, trendy, or appeals to a specific niche of buyers, print-on-demand sites can be a great way to get your work out there without having to do a lot of hands-on selling. X Research source - This is a great option for art that was created digitally, as well as paintings and drawings that you scan into your computer. - For instance, your art might be printed on phone or laptop cases, apparel, decals, drinkware, and more. - Some popular print-on-demand sites include Printify, Redbubble, Society6, and Zazzle. - 5Try selling your illustrations to a stock library. If you can create illustrations quickly, you may be able to have some commercial success by selling your art to a stock library. Look up different websites that offer stock illustrations for sale, and find a few whose style matches up with yours. Then, register an account and take any tests that might be required to begin work as a freelance illustrator. X Research source - For instance, you may have to submit sample illustrations that fit specific content and technical criteria before you’ll be allowed to sell your work on a certain site. - On these sites, you’ll often be working on commission, rather than submitting your own original ideas. - Check out sites like iStock, Adobe Stock, and Getty Images, for instance. - 6Take commissions if you're open to creating art on request. One way to make your art more commercially marketable is to take personal commissions. When you do this, you’ll work closely with a buyer to help combine their vision with your personal artistic flair. To do this, advertise on your website or social media that you take commissions. Ask the person to send you their commission request, then send them back a proposal, including the dimensions of the piece and what you’ll charge. X Research source Advertisement - If you take commissions, always get a contract in writing, as well as a deposit. This protects you in case the person decides they decide they don’t want to pay you for your work or they dispute the final price. - This can be a good option for everything from sculpting to crafts to fine art. Method 3 Method 3 of 3:Marketing Yourself - 1Identify your audience. In order to know how to successfully market yourself, you need to know who’s most likely to be interested in your work. Once you know your audience, find the blogs, websites, and print publications they're likely to follow. Focus your marketing efforts in these areas to easily attract new followers, leaving you more time to create new art. X Research source - For instance, if your style is an updated take on the style of the Old Masters, your target audience might be wealthy art collectors. - If your style is bold and graphic, you might appeal more to followers of street art. - 2Keep your website and social media up-to-date. Even if you’re selling your work on a marketplace or print-on-demand service, you should still have a professional-looking website where people can learn more about you. It’s also a good idea to have some pieces for sale on your site, no matter where you’re selling it. Frequently update your website with your latest work, and post regular status updates to your social media accounts to stay engaged with your followers. 
X Research source - On your website, include a compelling bio telling readers a little about yourself and your art. - Share work from other artists on your social media pages. If they return the favor, you'll be introduced to a whole new network of followers. Tip: Consider making a business page to promote your art, rather than relying on your personal social pages. On many social media platforms, you can then pay a fee to promote your posts, which can help you reach a wider audience. - 3Start a blog to provide your followers with a steady stream of new content. Try to pick a specific angle to write about, then post new content regularly. For instance, you might blog about the process of creating art, or you could feature artists or write about the latest art news. This can help you keep your audience engaged, and if your content is relevant and interesting, you'll likely attract new followers, as well. X Research source - You don't necessarily have to post every day, but however often you choose to post, be consistent. For instance, you might only post twice a month, but post on the same two days every month so your followers know what to expect. - 4Get involved with art competitions and societies in order to network. One way to increase exposure to your work is to get involved with the art community. Submit your work to various art competitions, attend gallery openings and art shows in your area, and consider joining an online or local art club. X Research source - Try joining groups or following pages that deal with the same medium you work in. For instance, if you create sculptures, you might join a Facebook group for sculptors or follow other sculptors' Insta feeds. This can help you network with other artists and the people who follow them. - Showing your work and being around other artists might help open doors that will lead to sales down the road. - 5Provide prompt, courteous customer service. Any time you work with a client, whether that’s a website representative or someone who’s buying directly from you, do your best to be polite and professional, and respond to any inquiries as quickly as possible. This will show your clients that you’re easy to work with, which may make them more likely to purchase from you again in the future. X Research source - For instance, if someone emails you through your website to ask about a particular painting, send them a response at your earliest convenience. Give them any details they ask for, along with details that might interest them, like the inspiration for the piece or a new technique that you tried. - 6Ask your customers to leave reviews. If you have a positive interaction with a customer, ask them to leave you a review on your website or social media page. If others see that you’re getting positive feedback, it might influence their decision whether to purchase from you in the future. X Research source Advertisement - If you get negative feedback, don’t delete it. Instead, do your best to address the issue with the unhappy client. When the issue is resolved, ask if they would update their review. Community Q&A References - ↑ https://artanddesigninspiration.com/step-by-step-directions-to-scan-your-artwork/ - ↑ https://www.artbusinessinfo.com/how-to-protect-artwork-online.html - ↑ https://www.artiststrong.com/why-do-artists-keep-an-inventory-of-their-art/ - ↑ Natasha Dikareva, MFA. Ceramics & Sculpting Instructor. Expert Interview. 5 May 2020. - ↑ https://veryprivategallery.com/how-to-price-my-art/ - ↑ Natasha Dikareva, MFA. 
Ceramics & Sculpting Instructor. Expert Interview. 5 May 2020.
https://www.wikihow.com/Sell-Your-Art-Online
Balboa Peninsula flags may soon be decommissioned, as city says pole is on public land A civic conundrum is unfolding in Newport Beach’s Balboa Peninsula just in time for Independence Day, as the city’s code enforcement team considers removing a flagpole with a long but somewhat mysterious provenance for encroaching onto publicly owned land. Officers responding to a complaint arrived Tuesday to a row of homes along the oceanfront Newport-Balboa Bike Trail, where the U.S. and California state flags flapped freely atop a flagpole mounted into a sandy beach berm. Neighbors say the pole has likely been there since around the time the nearby houses were built in the 1950s. “The flagpole’s been here for 70 years, before I even existed,” said Kelly Brown, who rents a property nearby and has become a caretaker of the structure, hoisting fresh standards when old ones become wind ravaged and tattered. “I work from home,” Brown continued. “Every day I sit here, stare at the ocean, look at these flags and do my business.” But city officials say the pole’s placement on public land poses a problem. That’s why one code enforcer during Tuesday’s visit, after placing inquiries about ownership, tagged the pole for removal. A notice announced it would be torn down Friday. Angry locals, who say the flags not only pay homage to patriots and veterans but provide a perfect photo op for tourists visiting the beach, sprang into action. “Imagine the visual of a town, on the weekend before July 4, removing the American flag,” said attorney Judd Shaw, who lives near Brown. “How un-American. It’s not a political flag — it’s the United States flag.” Shaw and other Balboa Peninsula residents emailed members of the Newport Beach City Council this week asking for intervention. Their pleas did not fall on deaf ears. City Councilman Noah Blom said the city needed to do some research into the history of the structure but maintained on Wednesday the flagpole would stay if he had anything to do with it. “We’re just not going to be ripping down American flags. Especially around the Fourth of July,” he said. “If code enforcement has a direct problem with it, any one of the City Council members could pull it up for consideration.” Newport Beach Community Development Director Seimone Jurjis said determining the fate of the pole may be beyond the city’s reach if it sits on state-owned beach land. In that case, the California Coastal Commission would have the final say on encroachment. He cited an earlier case, in which the landscaping of some Peninsula Point homeowners had begun to creep onto state-owned beach sand. Although Newport Beach officials attempted to work out a compromise, the commission ordered the plants to be torn out and fined offenders. “Since we don’t own the land, we don’t have the coastal jurisdiction for it,” Jurjis said. Further, unpermitted structures in public areas can present liability issues to public entities. “If a bicyclist accidentally crashed into that flagpole and someone falls and injures themselves, who’s at fault in this?” Jurjis posed. “Either the city didn’t act to remove the flagpole in a timely manner, or they allowed [it] to continue. Ultimately we’ll be at fault.” City spokesman John Pope confirmed Wednesday removal of the flagpole had been “put on pause” to give the city more time to investigate the pole’s origin and determine whether the land it’s on belongs to the city or the state. 
He said the matter was brought to the city’s attention by a resident but said such complaints are kept anonymous. Brown can’t imagine why anyone would have cause to complain about a flagpole that’s been standing undisturbed for so long. “I’ve had flags hanging there for four years and no one’s ever complained. People have even come from all over the world to take pictures in front of these flags,” she said. “Whoever complained, as far as I’m concerned, karma’s going to get them in the butt.” All the latest on Orange County from Orange County. Get our free TimesOC newsletter. You may occasionally receive promotional content from the Daily Pilot.
https://www.latimes.com/socal/daily-pilot/news/story/2021-06-24/balboa-peninsula-flags-may-soon-be-decommissioned-as-city-says-pole-is-in-public-land
Q: Why does $\ln(x)\ln(1-x)\mid^1_0=0$ $$\ln(x)\ln(1-x)\mid^1_0=0$$ I see this result once or twice a day in various forms without proof. Could someone just write the proof so I don't embarrass myself if someone on the street or in a grocery store asks me? One comment for the proof was to expand ln(x) about 1: I don't see how $\ln(1-x) ((x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\dots)$ helps. It's still $-\infty*0$ at 1, and a non-convergent taylor at 0. Another proof was to use l'hopital's, so I'm still doing something wrong: $\lim_{x\to{0/1}}\frac{\ln(x)}{1/\ln(1-x)}=\frac{1}{x(1-x)}$ is still indeterminant but no longer l'hoptilable. I vaguely remember a proof in the margin of a complex variable book using the limit after a substitution involving e^x but the text has 700 pages and I don't know what page it was. A: You've made a mistake in your differentiation: $$\lim_{x\rightarrow 0^+} \frac{\ln(x)}{\frac{1}{\ln(1-x)}} = \lim_{x\rightarrow 0^+} \frac{\frac{1}{x}}{\frac{1}{(\ln(1-x))^2 (1 - x)}} = \lim_{x\rightarrow 0^+} \frac{(\ln(1-x))^2 (1 - x)}{x}$$ which is finite, as it is the derivative of $(\ln(1-x))^2 (1 - x)$ at $0$. You don't need to know the value as the limit as $x \rightarrow 1^-$ is the same, as a simple substitution of $u = 1 - x$ will tell you. Thus, $$\ln(x)\ln(1 - x)|_0^1 = \lim_{x\rightarrow 1^-} \ln(x)\ln(1 - x) - \lim_{x\rightarrow 0^+} \ln(x)\ln(1 - x) = 0$$
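As a quick numerical sanity check of this limit (an addendum for the curious reader, not part of the original answer):

```python
import math

# ln(x)*ln(1-x) near x = 0+; by the substitution u = 1 - x the behaviour near
# x = 1- is identical, so checking one endpoint suffices.
for x in (1e-2, 1e-4, 1e-8):
    print(f"x = {x:.0e}:  ln(x)*ln(1-x) = {math.log(x) * math.log1p(-x):.3e}")
# The product shrinks toward 0, consistent with the boundary term vanishing.
```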
print(255) // decimal literal
print(0000'00FF) // hexadecimal literal
print(00'FF) // short hexadecimal literal
print(F'F) // ultrashort (single-byte) hexadecimal literal
print(377o) // octal literal
print(1111'1111b) // binary literal
print(255'000) // decimal literal

Output:
255
255
255
255
255
255
255000

Note that the digit separator ' is necessary for hexadecimal literals. [Someone might want to put a credit card number in code (like 1234'5678'9012'3456, which is a hexadecimal, but not a decimal, literal in 11l), but how frequently is this really needed in practice?] Why ' instead of _? Because _ could form a valid variable name (e.g. FFFF_FFFF or C0_DE).
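For illustration only, here is a rough Python approximation of the rules one can infer from the examples above. It is not the actual 11l grammar; in particular, the grouping heuristic used to decide between hexadecimal and decimal is an assumption made for this sketch.

```python
def parse_int_literal(s):
    """Parse an 11l-style integer literal (illustrative approximation only)."""
    s = s.strip()
    if s.endswith('o'):                       # octal suffix
        return int(s[:-1].replace("'", ""), 8)
    if s.endswith('b'):                       # binary suffix
        return int(s[:-1].replace("'", ""), 2)
    groups = s.split("'")
    digits = "".join(groups)
    has_hex_letter = any(c in "abcdefABCDEF" for c in digits)
    # Assumed heuristic: hex letters or 4-digit groups => hexadecimal,
    # 3-digit groups => decimal with a thousands separator.
    if len(groups) > 1 and (has_hex_letter or all(len(g) == 4 for g in groups[1:])):
        return int(digits, 16)
    return int(digits, 10)

for lit in ["255", "0000'00FF", "00'FF", "F'F", "377o", "1111'1111b", "255'000"]:
    print(lit, "->", parse_int_literal(lit))
```

Running this reproduces the outputs listed above (255 for the first six literals, 255000 for the last), and it also treats 1234'5678'9012'3456 as hexadecimal.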
http://11l-lang.org/doc/integer-literals/
Questions tagged [pencil] Questions about the tools used for drawing lines and shading to a canvas. Digitally, a pencil is a freehand painting tool to draw pixels. Unlike the Brush, it can't have soft edges and results in hard "pixelated" edges. 8 questions:
- Photoshop pencil tool missing edge pixels: In Photoshop, if I click in one place to draw a dot, then hold Shift and click anywhere else to draw a straight line, a line is drawn but with missing pixels along the edge. This is extremely annoying ...
- In Krita, how to create a graphite/pencil smudge effect: I'm trying to emulate the following in Krita: I take a graphite stick and rub it on a piece of paper. I then rub off the loose pieces of graphite onto a cloth. I then rub this cloth onto a piece of ...
- What is the difference between the pencil tool, the pen tool, and the brush tool in Adobe Illustrator? Is there any difference between the three besides that the brush and pencil are much more flexible than the pen?
- Does this drawing style have any particular name? Can somebody please tell me whether this drawing style has any particular name? Where can I learn this drawing style? This is certainly some Hindu traditional drawing style, but I don't know whether I ...
- Some brushes not working (Photoshop CC 2018): I'm starting to learn drawing in Photoshop with my Surface Pro and for some reason some brushes will just not draw. Basic brushes work, but in other categories like dry brushes (sorry, my brush ...
- An issue with the pencil tool in Illustrator 2019 (pencil's default fill and stroke set to none), how to fix it? I was using the pencil tool a lot in previous versions of Illustrator (2015 particularly). It was very handy when I was drawing highlights and shadows on an illustration, because I could draw a shape ...
- How does one recreate paint brushes and other drawing tools in Photoshop accurately? I'm wondering how to recreate a paint brush or any drawing tool in Photoshop. I tried making a brush similar to a paint brush but it didn't turn out right and the stroke doesn't look like a fluid ...
- How to draw random-colour dots? How can I draw dots with a specific size and random colors? Is it possible to create a brush or something similar to do it? With the pencil, if I hold Ctrl + click I can draw a black dot, but is ...
https://graphicdesign.stackexchange.com/questions/tagged/pencil
If you love chocolate and have a sweet tooth, you will want to whip up this easy keto fudge recipe. It's super simple: you just heat everything up on the stove and place it in the fridge to let it firm up. You can add other ingredients like pecans, walnuts, peanut butter and extracts for even more flavor. You may need to experiment to get it just the way you like it when adding extract; just start with a teaspoon and go from there.

KETO CHOCOLATE FUDGE

Ingredients
- ½ cup heavy cream
- ½ cup stevia
- 4½ ounces sugar-free chocolate chips
- 2 oz. butter
- 2 tsp. vanilla extract

Instructions
- Place heavy cream in a saucepan over medium heat.
- Add the stevia and vanilla and heat until dissolved.
- Add the chocolate to the cream while whisking, and continue stirring until the chocolate has melted.
- Remove from heat and stir in the butter.
- Pour the mixture into a lined rectangular dish and set in the fridge for 3 hours.

Nutrition Facts: KETO CHOCOLATE FUDGE (serves 12)

| Per serving        | Amount      | % Daily Value* |
|--------------------|-------------|----------------|
| Calories           | 123.47 kcal |                |
| Total Fat          | 9.99 g      | 15.4%          |
| Saturated Fat      | 6.25 g      | 31.3%          |
| Trans Fat          | 0.17 g      |                |
| Cholesterol        | 25.34 mg    | 8.4%           |
| Sodium             | 11.58 mg    | 0.5%           |
| Total Carbohydrate | 2 g         | 0.7%           |
| Dietary Fiber      | 0.29 g      | 1.2%           |
| Sugars             |             |                |
| Protein            | 0.75 g      |                |
| Vitamin A          | 8.12%       |                |
| Vitamin C          | 0.14%       |                |
| Calcium            | 2.0%        |                |
| Iron               | 0.75%       |                |

* Percent Daily Values are based on a 2,000 calorie diet. Your daily values may be higher or lower depending on your calorie needs.
https://leanketo.com/keto-chocolate-fudge/
As a pianist, you can do many tricks to completely change how a song sounds to the audience. One of these is polyrhythms, i.e. playing contrasting and layered lines of rhythm vertically. You will need to have a very good understanding of rhythm values (how long a given note lasts), syncopation (the accenting of certain beats), and tempo (the "pulse" of the song) to do this. Technique-wise, you will need a great degree of hand/finger independence, as each section of your hand is playing a different part (kind of like tapping on a desk with your fingers). Here's "Clocks" by Coldplay, played normally: At 0:15, I add the polyrhythm. She is playing the main melody on top (1/8 notes and 1/4 notes), my right hand is playing a series of rapid triplets (1-2-3-1-2-3), and my left hand is playing 1/8th notes on a bass line with every 3rd note dotted (meaning it is held slightly longer). The difference in sound between the triplets and the eighth notes is what creates the syncopation, which the ear hears as an "off beat" pulsating across the triplets. Hear below: Now, I bring things back to a familiar supporting pulse by playing 1/8th notes as chords with no accent. This supports the top because she starts the melody. It's very important to be able to listen to the other parts and know the structure of the song so you won't clash when other instruments or sections are playing important bars in the song (like the verse/chorus). I also sneak in a quote of the main theme played staccato 🙂 Try doing this to any song you like! There is no right or wrong way to play with rhythms as long as they are all on tempo, and the note values make sense with respect to the beat (and ultimately, to the listener). If you have trouble, I highly recommend using a metronome and starting slow. No one got good at anything musical on the first attempt! Full video below of the complete song:
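To make the triplet-versus-eighth-note relationship concrete, here is a small Python sketch (mine, not the author's) that prints where each subdivision falls within one beat. Only the downbeat lines up, which is exactly the off-beat pulse described above.

```python
from fractions import Fraction

def onsets(subdivision, beats=1):
    """Onset times (in beats) of evenly spaced notes within `beats` beats."""
    return [Fraction(i, subdivision) for i in range(subdivision * beats)]

eighths  = onsets(2)   # two eighth notes per beat
triplets = onsets(3)   # three triplet notes per beat

print("eighths :", [str(t) for t in eighths])     # 0, 1/2
print("triplets:", [str(t) for t in triplets])    # 0, 1/3, 2/3
print("shared  :", [str(t) for t in sorted(set(eighths) & set(triplets))])  # only 0
```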
https://roamingviews.com/category/music/page/3/
As you hear Len Kasper say at the end of the video above, "the old adage is, if you think you're gonna hit into a double play, strikeout". This is something I've heard joked about before, but I wanted to see if there was actually something to it. Is there a situation in which a manager should tell a player to not swing? Just take the strikeout. The situation I looked at is bases loaded, with 0 or 1 out and the pitcher batting. I just felt like pitchers tended to hit into so many rally-killing double plays in this situation. Therefore, I believed it may actually be better to just accept the strikeout and let the leadoff hitter get his shot. To begin, this question doesn't come about unless pitcher hitting is really awful, and it is really awful. Just look at the graph above if you need any proof. The graph shows the average batting average at each position over the last 20 years. Every position is clumped together, and then there are pitchers. They just can't hit at a major league level. Not only that, but they were worse than ever in 2017 and they are even worse in 2018 so far. For answering the main question here, I looked at data meeting these exact criteria from the 2013 to 2017 seasons. This situation (bases loaded, 0 or 1 out, pitcher batting) has happened 331 times over that span. 43 of those plate appearances resulted in hits and 54 resulted in ground-ball double plays. Pitchers are hitting .143 in these situations. I calculated the positive win probability by using the formula below. Win probability, also referred to as win expectancy, is the percent chance a particular team will win based on the score, inning, outs, runners on base, and the run environment. This is calculated from historical data. I did this to get the win probability added from each at-bat. Win Probability Added = Win Probability after outcome (1B, 2B, 3B…) – Win Probability of Situation (bases loaded, no outs) Let's call these the positive values. I did the same with negative values (double play, strikeout, ground out / fielder's choice, pop out or fly out that is not a sac fly). All of these negative outcomes except the double play advance no runners, make one out, and decrease the win probability at the same rate. I considered both "home to first" and "second to first" double plays. I then took the percentage of the time each outcome occurred in this situation. For example, singles happened 12.33% of the time in this situation. I did this in order to weight the algorithm by how often each outcome occurs in MLB pitcher at-bats with the bases loaded. I multiplied this number by the win probability difference I found previously for each different outcome. I then added all the values to create a total score indicating whether or not the at-bats on average were positively affecting the team. I repeated this same process with the "bases loaded, one out" situation for pitchers. I also evaluated the situations of "bases loaded, one out" and "bases loaded, two out" with all leadoff hitters in these situations from 2013 to 2017. This accounts for what would happen after the pitcher were to strike out if he didn't swing at any pitches. Okay, now let's look at the results and compare them to the question I asked. The point is to see if the batter following the pitcher actually has a better chance of increasing the team's chance to win than the pitcher does with one less out. Let's look at the chart below of the results from my algorithm.
The probabilities I used for the table below are based on first-inning probabilities, but the same conclusions can be drawn from each inning's data.

Outcome Expectancy

| Situation          | Pitcher | Leadoff Hitter |
|--------------------|---------|----------------|
| Pitcher up, 0 outs | -3.74   | 0.06           |
| Pitcher up, 1 out  | -5.43   | -12.08         |

There are no units, since these numbers come from my algorithm, which I made up, so there is no proper unit. What is important to see is the comparison and whether the values are positive (likely to increase the team's chance of winning) or negative (likely to decrease it). We actually see something I didn't expect. With the bases loaded and no outs, the team is better off if the pitcher just takes the strikeout. However, with 1 out and the pitcher up, you might as well let him hit. There is a much larger chance that you don't score with the leadoff hitter up and 2 outs than with the pitcher up and 1 out. I attribute this to how important it is not to have two outs. With two outs in an inning, run expectancy drops tremendously. In addition, if you don't swing with the pitcher with one out, there is no chance you score on a sac fly. You also cannot score on a sac fly with your leadoff hitter, since there are then 2 outs. This is a general rule of thumb to use, but of course game situations and the hitting ability of the batter will influence decision-making. Better hitters should be allowed to swing away more. Only two pitchers in the data set hit a grand slam: Madison Bumgarner and Travis Wood. Also, I know Reds pitcher Anthony DeSclafani did hit a grand slam off the Cubs last week, but it was in a two-out situation in which pitchers have nothing to lose and might as well swing away. Guys who show they can hit should still swing away, but the majority of pitchers should abide by these rules. The other idea for pitchers is to bunt in this situation. That can absolutely work if you have a pitcher who is a skilled bunter. Jon Lester has done this before and is a very good bunter. It's better to just bunt with him in this situation. In the video we see Lester execute a bases-loaded sacrifice bunt to perfection. However, if you do bunt you risk a double play or a fielder's choice at home, and your pitcher then has to run the bases. I was curious about this whole situation and was surprised that striking out with one out and the pitcher up wouldn't be the better option, but it is with no outs. I understand some people will counter that players want to compete and won't want to just strike out, but I'm just saying that with no outs and most pitchers up, it gives the team a better chance if they strike out. Obviously teams need to know their own pitchers' abilities when making decisions, and this theory should not be set in stone. This situation doesn't come up too often, but it's still interesting to see how each team handles it.
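As an illustration of the weighting scheme described in this post, here is a rough Python sketch. Only the 12.33% singles rate is taken from the article; every other frequency and win-probability swing below is a made-up placeholder, so only the structure of the calculation is meaningful.

```python
# Hypothetical illustration of the outcome-weighted win-probability score
# described in the article. The WPA swings and most frequencies are made-up
# placeholders; only the singles rate is quoted from the text.

outcome_prob = {          # P(outcome | bases loaded, 0 out, pitcher batting)
    "single": 0.1233,     # quoted in the article
    "double": 0.02,       # placeholder
    "walk/HBP": 0.05,     # placeholder
    "strikeout": 0.35,    # placeholder
    "double_play": 0.16,  # placeholder
    "other_out": 0.2967,  # placeholder (fly out, ground out, etc.)
}

wpa = {                   # win probability added for each outcome (placeholders)
    "single": 0.12,
    "double": 0.18,
    "walk/HBP": 0.09,
    "strikeout": -0.05,
    "double_play": -0.14,
    "other_out": -0.06,
}

# Expected win-probability change if the pitcher swings away:
score = sum(outcome_prob[o] * wpa[o] for o in outcome_prob)
print(f"expected WPA for the at-bat: {score:+.4f}")
```

Comparing this expected value against the corresponding figure for the next hitter with one more out is the comparison the table above is making.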
https://renegadesportsanalytics.blog/should-a-player-ever-just-take-a-strikeout/
Electrochemical batteries, such as lithium-ion and lead-acid cells, experience degradation over time and during usage, leading to decreased energy storage capacity and increased internal resistance. Being able to predict the rate of degradation and the remaining useful life (RUL) of a battery is important for performance and economic reasons. For example, in an electric vehicle, the driveable range is directly related to the battery capacity. For energy storage asset valuation, depreciation, warranty, insurance and preventative maintenance purposes, predicting RUL at design stage and during operation is crucial, and the investment case is strongly dependent on the degradation behaviour . To estimate accurately the second hand value of assets such as EVs and grid batteries, credible predictions of RUL are required. Unfortunately, battery degradation is caused by many complex interacting chemical and mechanical processes [2, 3], and physical modelling from first principles is very challenging. To mitigate uncertainty in lifetime, batteries are often over-sized and under-used, which results in increased system costs and sub-optimal performance. Hence, new approaches for accurate health prognostics are required, and form an important component of a modern battery management system or energy management system. Since the performance of a battery in an application is largely dependent on its nominal capacity and internal resistance, the state of health (SoH) is typically defined by one or both of these parameters. In the present case we consider just cell capacity as the SoH metric, but the methods outlined in this paper could be applied to any other SoH metric, such as internal resistance, or capacity at some nominal C-rate. A variety of techniques may be applied for SoH measurement and estimation , but in this paper we simply assume that SoH metrics are available, for example from a battery management system. The conventional approach to battery SoH forecasting is to fit a parametric function to a broad set of ageing data measured under controlled laboratory conditions. Careful judgement is required to decide on the exact form of parametric model to use. For example, Schimpe investigated both calendar and cycle ageing of lithium iron phosphate (LFP) batteries with respect to temperature and state of charge (SoC) and found that capacity evolved with time according to |(1)| where is the capacity fade at some point in time, are empirically fitted stress factors that are a function of temperature, charging current, time and SoC, is the total charge throughput to time , and is the charge throughput only during charging, to time . The stress factors typically fit an Arrhenius equation of the form |(2)| where is a fitted constant, is some input such as current or the reciprocal of temperature, and is reference value for that input. Very similar approaches have been developed by others for LFP batteries , and for a variety of other chemistries including NMC lithium-ion [7, 8] and lead-acid . These empirical degradation models are essentially parametric curve fitting using specified underlying functions such as exponentials, square roots etc. For some kinds of battery degradation data, such as , these approaches may give a reasonable fit to the measured behaviour, although there is very little information in the literature about their long term predictive accuracy. 
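As an illustration of this style of empirical curve fitting (a sketch only; the simple square-root-of-throughput fade law and the data below are made up, and are not Schimpe's model or parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy example: fit a pre-specified parametric fade model, here
# Q_loss = k * sqrt(Ah throughput). Form and data are illustrative placeholders.
def fade_model(ah_throughput, k):
    return k * np.sqrt(ah_throughput)

ah = np.array([0, 100, 400, 900, 1600, 2500], dtype=float)    # Ah throughput
q_loss = np.array([0.0, 0.011, 0.019, 0.031, 0.042, 0.049])   # fractional capacity loss

(k_fit,), _ = curve_fit(fade_model, ah, q_loss)
print(f"fitted stress factor k = {k_fit:.4e}")
print("predicted loss at 5000 Ah:", fade_model(5000.0, k_fit))
```

The limitation discussed next is that such a model can only ever follow the shape that was written into it.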
These approaches also require the form of the model to be specified a priori, for example (1) assumes decoupling of inputs, and this may not be the case. Additionally, many degradation datasets exhibit an accelerated capacity fade regime in later life (see ), and this approach is not able to model such a regime change. Also, accuracy may be limited when environmental and load conditions differ from the training dataset. As an alternative approach to empirical parametric functions fitted to laboratory test data, others have developed ‘first principles’ electrochemical models of battery ageing. These propose and model a set of underlying physical ageing mechanisms. For example a popular ageing mechanism is growth of the anode solid electrolyte interphase (SEI) through reduction of the ethylene carbonate in the electrolyte, modelled as a diffusion-limited single step charge transfer reaction [12, 13]. This can be augmented to include additional physics related to lithium plating , particle cracking and other mechanisms. Although reasonable results are demonstrated for calendar ageing, huge challenges remain with respect to parametrisation and validation of such models, and what physics to include to capture all the relevant ageing mechanisms and their interactions. In contrast to these approaches, so-called data-driven battery ageing models are beginning to be investigated. These have some similarities with the empirically fitted functions previously discussed, but new techniques from machine learning allow much greater flexibility in these models than can be obtained using pre-specified parametric functions. The simplest formulation of this is direct fitting of capacity data with respect to time, or cycle count, which allows RUL estimation by extrapolation to future values. A variety of data-driven techniques have been explored in this context, including non-parametric approaches such as support vector machines[16, 17, 18, 19], and Bayesian non-parametric approaches such as Gaussian process (GP) regression [20, 21, 22] . A non-parametric model is one whose expressivity (as would increase with the degree of a polynomial, for instance) naturally adapts to the complexity of data. Rather than having no parameters, a non-parametric model is perhaps better thought of as one with a number of parameters that can scale with the data and could become arbitrarily large. Bayesian approaches naturally incorporate estimates of uncertainty into predictions, allowing a model to acknowledge the varying probabilities of a range of possible future health values, rather than just giving a single predicted value. These approaches have been demonstrated to work well when a battery health dataset is available for batteries that have all been cycled in a similar way. For example, our previous work on RUL prediction applied a multiple-output Gaussian process model to incorporate data from multiple batteries, all cycled in the same way, demonstrating a large improvement in accuracy of RUL estimation over existing methods. However, for real world RUL prediction at design stage, or for preventative maintenance, a much more flexible approach is needed that allows health predictions to be made as a function of the changing stress factors such as time, charge throughput and temperature etc. The previously discussed parametric models can incorporate dependence on external inputs, but are limited to pre-specified functions. 
In other words, they assume that the shape of the degradation trajectory is known a priori, which limits their applicability. To address this, we introduce the idea of a Bayesian non-parametric transition model for battery health. Rather than fitting the SoH data directly as a function of time or cycle count, the model predicts the changes in SoH from one point to the next as the battery is used, as a function of the usage. This is explained in detail in the next section. 2 Method 2.1 Transition model The approach in this paper formulates a transition model to predict the capacity changes between periods of usage that we term ‘load patterns’. We define this differently to a standard battery charge-discharge cycle, instead it is the time-series of current, voltage and temperature data between any two capacity measurements or estimates, and . Load patterns do not need to be uniformly spaced, i.e. they could be short or long periods of usage, and might include multiple charge-discharge events. The goal of a regression problem is to learn the mapping from input vectors x to outputs , given a labelled training set of input-output pairs , where is the number of training examples. In the present case, the inputs are vectors of selected features (see section 2.2) for load pattern , and the outputs are the corresponding differences in measured capacity between load pattern and . The underlying model takes the form , where represents a latent function and is an independent and identically distributed noise contribution. The learned model can then be used to make predictions on a set of test inputs (i.e. load patterns where we wish to estimate the capacity), producing outputs , where is the number of test indices. In our case we are interested in predicting the capacity changes in a new – previously unseen – battery cell, which has been exposed to a known test regime. This is called the validation or test dataset. 2.2 Input feature extraction Each load pattern, , may contain within it an arbitrary number of time steps, . However, in order to use the inputs in our model, since the capacity measurements are only known per load pattern, we must first map time-series data to a fixed size input vector. In other words, assuming there are time-steps within a load pattern , then the measurements , , are mapped to a single -dimensional input vector, x, where is the number of features of interest. Irrespective of the number of time steps in a load pattern, the size of the input vector x is the same. For each load pattern, , the features to be extracted are defined by prior assumptions about what causes a battery to age. As discussed in the preceding section, there are many possible different stress factors that affect battery ageing, depending on the dataset and model. However, in the dataset used it was found that accurate results could be obtained with only a small number of factors (see table 4), as follows: The first component of the input vector, for the th load pattern, is the total time elapsed during the load pattern, given by where and are the times at the start and end of the load pattern respectively. The second component is the charge throughput, , during the load pattern, i.e. 
the total absolute current through the cell during the load pattern, given by The third component is the absolute time value, in seconds, since the beginning of the whole dataset, As discussed later, for the dataset considered here, it was found that the choice of model and number of overlapping load patterns were generally more important for determining predictive accuracy than the inclusion of additional input features. However, with a larger dataset, additional features could improve predictive accuracy. These might include the following: Firstly, the present cell capacity, Secondly, the time elapsed during which certain conditions are met. This is achieved by defining a selection of current, voltage and temperature ranges, and evaluating the time spent by the battery within these ranges: for , where , and are the parameters of interest, and their upper and lower bounds respectively. For example, a battery’s aging behaviour is expected to be affected by high or low temperatures . Hence, one might define the duration of time the battery spends (1) below 0 C, (2) between 0 and 40 C, and (3) above 40 C as three distinct inputs: An example of an input vector for a single load pattern is given in Table 1. In this case, inputs were defined for ranges of temperature and current. Of course, additional inputs could also be defined by voltage ranges, but these have been omitted here for clarity of presentation. Note that the sum of all the times spent in each parameter range (e.g. in each temperature or current range) must equal the total time elapsed within that load pattern. 2.3 Example data Fig. 1 shows an exemplary schematic of the first 4 load patterns for a single cell. There is one capacity measurement () at the very start of the cell’s life and then 4 subsequent measurements () at later times. The load patterns consist of everything that occurs between each capacity measurement; each load pattern is translated into equal sized input vectors, . 2.4 Evaluation The model predictions are evaluated using three different metrics, which reflect the quantities of interest in a practical application. The first is the root-mean-squared error (RMSE) in the mean output of the model (i.e. the capacity differences), defined as |(3)| where is the number of points to be evaluated (i.e. all points in the test dataset), is the measured capacity difference using the test dataset and is the estimated mean capacity difference predicted by the model, each between load pattern and . The second is the RMSE in actual capacity, defined as |(4)| where is the measured capacity (using the test dataset) and is the estimated mean capacity, each at load pattern . This may also be expressed as a normalised value, to facilitate comparison with other studies, whereby the absolute capacities may be of different magnitudes: |(5)| Note that it is possible for a model to perform well in one of these metrics but poorly in the other. For instance, if a model over-predicts every second load pattern but under-predicts on alternate load patterns, the overall capacity evolution may be accurate (implying good ), but the individual predictions might not be (implying poor ). Hence, a good model should have low values of both these metrics. Thirdly, since the approach used here is probabilistic, the accuracy of the uncertainty estimates can also be quantified using the calibration score (CS). This is defined as the frequency of measured results in the test dataset that are within a predicted credible interval. 
Within a 2σ interval, corresponding to a 95.4% probability for a Gaussian distribution, the CS is given by (6). Therefore, the CS should be approximately 0.954 if the uncertainty predictions are accurate, using the techniques outlined in this paper. Higher or lower scores indicate under- or over-confidence, respectively. 2.5 Gaussian process regression This section gives a brief overview of Gaussian process regression, the main approach chosen in this paper for modelling the transition in health from one load pattern to the next. A Gaussian process (GP) defines a probability distribution over functions, and is denoted as in (7), where m(x) and k(x, x′) are the mean and covariance functions respectively, denoted by (8) and (9). For any finite collection of input points, say x_1, …, x_n, this process defines a probability distribution that is jointly Gaussian, with mean and covariance given by m(x_i) and k(x_i, x_j). Gaussian process regression is a way to undertake non-parametric regression with Gaussian processes. Rather than suggesting a parametric form for the function and estimating the parameters (as in parametric regression), we instead assume that the function is a sample from a Gaussian process as defined above. In this work, we use the Matérn covariance function (10), with output scale σ², smoothness hyperparameter ν (larger ν implies smoother functions), and K_ν the modified Bessel function. This kernel was chosen because it is suitable for functions with varying degrees of smoothness, although similar performance was observed using other common kernels, including the squared exponential. A fuller discussion of various different kernels that may be used for GP regression in the context of battery health prediction is given in the literature. Finally, we also compare performance against a linear kernel (11), since this is equivalent to Bayesian linear regression, where c is a constant defining the offset of the linear function. The mean function of the GP is commonly defined as zero, and we follow this convention here. Now, if one observes a labelled training set of input-output pairs {(x_i, y_i)}, predictions can be made at test indices by computing the conditional distribution of the test outputs given the training data. This can be obtained analytically by the standard rules for conditioning Gaussians, and (assuming a zero mean for notational simplicity) results in a Gaussian distribution given by (12), with mean and covariance given by (13) and (14). The values of the covariance hyperparameters θ may be optimised by minimising the negative log marginal likelihood (NLML), defined as −log p(y | X, θ). Minimising the NLML automatically performs a trade-off between bias and variance, and hence ameliorates over-fitting to the data. Given an expression for the NLML and its derivative with respect to θ (both of which can be obtained in closed form), θ can be estimated using gradient-based optimization. The Python GPy library was used to implement these algorithms. 2.6 Gradient boosting As a state-of-the-art comparison to Gaussian process regression, we also investigated predictive performance with an alternative technique, gradient boosting. This is a popular data-driven time series modelling approach based on combining an ensemble of weak prediction models into a stronger model. While this approach is not inherently probabilistic, and does not output a full covariance matrix for the predictions, it can be trained using quantile regression (QR) to approximately predict a probability distribution. Quantile regression deliberately introduces a bias in the prediction in order to estimate quantiles of the predictive distribution.
The loss function is modified such that instead of identifying the mean of the variable to be predicted, QR seeks the median and any other desired quantiles. To identify the upper and lower bounds of a prediction interval, QR is repeated at several different quantiles. One advantage of this method is that asymmetric intervals can be predicted. On the other hand, it is not clear how the confidence intervals forshould be calculated from the values for , since the full covariance matrix is unavailable. In this case, we simply centred the intervals around the mean, and fitted a Gaussian distribution in order to achieve this. 3 Dataset The battery dataset used here was obtained from the NASA Ames Prognostics Center of Excellence Randomized Battery Usage Repository . The data in this repository were first used in Ref. for an investigation into capacity fade under randomized load profiles. The data are randomised in order to better represent practical battery usage. This is ideal for training a data-driven model. Fig. 3 gives smoothed histograms computed from the cell data showing the ranges of times, charge throughput, currents, voltages and temperatures that are explored by this dataset. An overview of the battery dataset is given in Table 2. The cells used have a relatively high energy density, but short lifetime. The remainder of this subsection describes the cycling and characterisation procedure, based on . For this study we used data from 26 of the 28 total battery cells available in the repository (cells 16 and 17 were omitted, since these were found to contain spurious data resulting in certain cycles having negative duration). The cells were grouped into 7 groups of 4, with each group undergoing a different randomized cycling procedure as described in Table 3. The first 5 groups were cycled at room temperature throughout the duration of the experiments, whilst groups 6-7 were cycled at 40 C. In all cases a characterisation test was periodically carried out, whereby a 2 A charge-discharge cycle was applied (i.e. approximately 1C) between the cell voltage limits – these discharge curves were used to evaluate the capacity as an indicator of state of health. There were a total of 950 discharge curves available across all cells (i.e. curves per cell). The cell capacity was calculated by integrating the current from each of the 2 A charge curves. Calculated capacities for the cells in each group are plotted against time in Fig. 4. The evolution of the capacity is quite different for each group of cells. 4 Results We considered 6 different configurations of data-driven transition model, as defined in Table 4 , in order to show a range of comparisons in predictive accuracy. In each case, the model was trained on the data from even numbered cells (i.e. all the mappings between inputs and capacity drops across all of those cells), and subsequently tested on the odd numbered cells. Models 1 and 2 use a GP with a Matérn kernel. The difference between these two models is the way in which long term trends are captured. For model 1, data from the preceding 6 load patterns were all used as inputs for the mapping, and the total time elapsed was not included as an input. For model 2, only data from the current load pattern was used, but to capture long term trends it was necessary to also include the total time elapsed as an additional input. Models 3 and 4 are analogous to models 1 and 2, except a linear kernel was used in the GP rather than a Matérn kernel. This gives a simple base case for comparison. 
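A minimal sketch of how GP transition models of this kind might be set up with the GPy library mentioned above. The feature matrix, targets and data below are random placeholders, not the authors' code, and GPy exposes Matern32/Matern52 kernels rather than a free smoothness parameter.

```python
import numpy as np
import GPy  # the paper states GPy was used; this sketch is not the authors' code

# X: one row of features per load pattern (e.g. time elapsed, charge throughput, ...),
# y: measured capacity change over that load pattern. Random placeholders here.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = -0.02 * X[:, [0]] - 0.01 * X[:, [1]] + 0.002 * rng.standard_normal((200, 1))

# "Model 1/2"-style GP with a Matern kernel.
k_matern = GPy.kern.Matern52(input_dim=X.shape[1], ARD=True)
gp = GPy.models.GPRegression(X, y, k_matern)
gp.optimize()                      # hyperparameters by minimising the NLML

# "Model 3/4"-style baseline: linear kernel, equivalent to Bayesian linear regression.
k_linear = GPy.kern.Linear(input_dim=X.shape[1]) + GPy.kern.Bias(input_dim=X.shape[1])
gp_lin = GPy.models.GPRegression(X, y, k_linear)
gp_lin.optimize()

X_test = rng.random((5, 3))
mu, var = gp.predict(X_test)       # predicted capacity change and its variance
print(np.c_[mu, np.sqrt(var)])
```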
Using a linear kernel is equivalent to implementing Bayesian linear regression, and the key point to note in this context is that it provides far less flexibility for the model predictions compared with a Matérn kernel. Models 5 and 6 are also analogous to models 1 and 2, except that, rather than using a GP, they use a different regression technique called gradient boosting, as was introduced in section 2.6. The predicted versus actual for each approach is shown in Fig. 5. Model 1 was the best performing of the 6 cases tested, with and of Ah and Ah respectively. Normalised capacity prediction error for model 1 was 4.3%. Finally we present in more detail in Fig. 6 the evolution of the capacity for each of the cells in the test dataset, using the best performing approach (model 1). 5 Discussion The results given in section 4 show that model 1 accurately predicts the capacity trajectory, and provides reasonable, if slightly over-cautious, estimates of the uncertainty, indicated by the calibration score being close to 0.954. The true capacity generally lies within the interval denoted by the blue shaded region in Fig. 6. The model is also seen to be capable of predicting both positive and negative capacity differences. For instance, it is apparent in Fig. 6 that, although the capacities experience a long-term downward trend, they also experience occasional step increases. The model correctly predicts the timing of a number of these instances, e.g. for cell 7 at day . As an aside, the physical explanation for these increases is not clear; they may in fact be an artefact of the measurement process, possibly arising when reference tests are performed, after the cell is unused for some time. However, regardless of their cause, accounting for these effects is essential since the capacity measurement provided in a real application could also manifest similar behaviour. Regarding feature selection, the fact that model 1 performs better than model 2 in the case of the dataset used here suggests that valuable information is being extracted from the inputs over the previous load patterns, which is not available from using just the total time elapsed as an additional input. Models 3 and 4, based on a linear kernel as noted earlier, perform considerably more poorly than the other approaches in terms of capacity prediction error, indicating that the simple linear combination of the inputs is insufficient to predict battery health for the dataset considered here, and the nonlinearities captured by the Matérn kernel are significant in this case. Their calibration scores also indicate over-confidence. The models based on gradient boosting are slightly less accurate in terms of mean predictions than models 1 and 2 and it is also noteworthy that they are erroneously over-confident, as indicated by their low calibration scores. Finally, we note that the train/test split used in this paper (whereby the even numbered cells are used for training and the odd numbered cells for testing) ensures that there is at least one training cell in each of the 7 groups of differently cycled cells, Table 3. Inferior results may be obtained if this were not the case, e.g. if the first N cells were used for training and the remaining 26-N used for testing, since in the latter case the model would be extrapolating beyond the region of the input space used for training. 
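To make the three evaluation metrics of section 2.4 concrete, a short sketch using hypothetical arrays (the 2σ convention follows the text; all the numbers are placeholders):

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def calibration_score(y_true, y_mean, y_std, n_sigma=2.0):
    """Fraction of measurements inside the predicted n-sigma interval
    (should be near 0.954 for n_sigma = 2 if the uncertainties are well calibrated)."""
    y_true, y_mean, y_std = map(np.asarray, (y_true, y_mean, y_std))
    return float(np.mean(np.abs(y_true - y_mean) <= n_sigma * y_std))

# dq_*: per-load-pattern capacity changes; q_*: absolute capacities (placeholders)
dq_meas = np.array([-0.010, -0.020, 0.000])
dq_pred = np.array([-0.012, -0.018, -0.001])
dq_std  = np.array([0.004, 0.004, 0.004])
q_meas = 2.0 + np.cumsum(dq_meas)
q_pred = 2.0 + np.cumsum(dq_pred)

print("RMSE on capacity differences:", rmse(dq_meas, dq_pred))
print("RMSE on capacity           :", rmse(q_meas, q_pred))
print("calibration score (2 sigma):", calibration_score(dq_meas, dq_pred, dq_std))
```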
In practice, the performance of these methods will rely on using a sufficiently large training set being available, such that a large range of input conditions are covered. 6 Conclusions This paper has developed a new technique for battery health prediction based on a Bayesian non-parametric model that estimates the change in capacity over a particular period of time as a function of how the battery was used during that period. A simple histogram-based feature selection approach was presented and models were trained using data from NASA . It was found that the best performing approach used Gaussian process regression with a Matérn kernel function, and that time elapsed and charge throughput were the most important features to incorporate within the model, given the dataset used in this paper. It was also found that more accurate results could be achieved by considering the preceding 6 load patterns to capture longer range trends, rather than using absolute time as an input feature. Automated feature selection would be worth future investigation. The best case results presented have a relative accuracy on mean capacity predictions that is within 5% of the actual values. To our knowledge this is one of the first papers to actually quantify battery health predictive accuracy comprehensively, and this is one of the most accurate long range predictions of future capacity seen to date. The approaches explored in this paper offer an interesting insight into how the stress factors that drive degradation actually influence the capacity trajectory. It is noteworthy that, despite having a dataset that includes a wide range of temperatures and currents, in this case it was found that time elapsed and charge throughput were the dominant inputs. However, a naive modelling approach that uses a simple linear combination of inputs results in very inaccurate predictions, as shown by the GP regression results using linear kernels. There are a number of interesting next steps to explore. First, it would be useful to test these ideas against a much larger dataset to show their general validity and explore in more detail the sensitivity of the approach to additional inputs. Second, prior knowledge about expected degradation behaviour could be included as an extension to this work by including a parametric mean function within the GP framework. Third, in the present work, when the model is used predictively, it assumes perfect knowledge about the inputs, i.e. that the future current, voltage and temperature time series are known in advance. In practice this will not be the case, since depending on the application these variables depend on driving style or market conditions, ambient weather conditions etc. Predicting these inputs is a separate but important issue. Acknowledgments This work was funded by Continental AG and an RCUK Engineering and Physical Sciences Research Council grant, ref. EP/K002252/1. References - F. Wankmueller, P. R. Thimmapuram, K. G. Gallagher, A. Botterud, Impact of battery degradation on energy arbitrage revenue of grid-level energy storage, Journal of Energy Storage 10 (2017) 56–66. - C. R. Birkl, M. R. Roberts, E. McTurk, P. G. Bruce, D. A. Howey, Degradation diagnostics for lithium ion cells, Journal of Power Sources 341 (2017) 373–386. - P. Ruetschi, Aging mechanisms and service life of lead–acid batteries, Journal of Power Sources 127 (1-2) (2004) 33–44. - A. Farmann, W. Waag, A. Marongiu, D. U. 
Sauer, Critical review of on-board capacity estimation techniques for lithium-ion batteries in electric and hybrid electric vehicles, Journal of Power Sources 281 (2015) 114–130. - M. Schimpe, M. von Kuepach, M. Naumann, H. Hesse, K. Smith, A. Jossen, Comprehensive modeling of temperature-dependent degradation mechanisms in lithium iron phosphate batteries, Journal of The Electrochemical Society 165 (2) (2018) A181–A193. - J. Wang, P. Liu, J. Hicks-Garner, E. Sherman, S. Soukiazian, M. Verbrugge, H. Tataria, J. Musser, P. Finamore, Cycle-life model for graphite-LiFePO 4 cells, Journal of Power Sources 196 (8) (2011) 3942–3948. - J. Schmalstieg, S. Käbitz, M. Ecker, D. U. Sauer, A holistic aging model for Li (NiMnCo) O2 based 18650 lithium-ion batteries, Journal of Power Sources 257 (2014) 325–334. - M. Ecker, J. B. Gerschler, J. Vogel, S. Käbitz, F. Hust, P. Dechent, D. U. Sauer, Development of a lifetime prediction model for lithium-ion batteries based on extended accelerated aging test data, Journal of Power Sources 215 (2012) 248–257. - R. Dufo-López, J. M. Lujano-Rojas, J. L. Bernal-Agustín, Comparison of different lead–acid battery lifetime prediction models for use in simulation of stand-alone photovoltaic systems, Applied Energy 115 (2014) 242–253. - C. Birkl, D. A. Howey, Oxford Battery Degradation Dataset 1, http://dx.doi.org/10.5287/bodleian:KO2kdmYGg (2017). - S. J. Harris, D. J. Harris, C. Li, Failure statistics for commercial lithium ion batteries: A study of 24 pouch cells, Journal of Power Sources 342 (2017) 589–597. - C. Kupper, W. G. Bessler, Multi-scale thermo-electrochemical modeling of performance and aging of a lifepo4/graphite lithium-ion cell, Journal of The Electrochemical Society 164 (2) (2017) A304–A320. - M. B. Pinson, M. Z. Bazant, Theory of sei formation in rechargeable batteries: capacity fade, accelerated aging and lifetime prediction, Journal of the Electrochemical Society 160 (2) (2013) A243–A250. - X.-G. Yang, Y. Leng, G. Zhang, S. Ge, C.-Y. Wang, Modeling of lithium plating induced aging of lithium-ion batteries: Transition from linear to nonlinear aging, Journal of Power Sources 360 (2017) 28–40. URL http://dx.doi.org/10.1016/j.jpowsour.2017.05.110http://linkinghub.elsevier.com/retrieve/pii/S0378775317307619 - R. D. Deshpande, D. M. Bernardi, Modeling Solid-Electrolyte Interphase (SEI) Fracture: Coupled Mechanical/Chemical Degradation of the Lithium Ion Battery, Journal of The Electrochemical Society 164 (2) (2017) A461–A474. doi:10.1149/2.0841702jes. URL http://jes.ecsdl.org/lookup/doi/10.1149/2.0841702jes - X. Hu, J. Jiang, D. Cao, B. Egardt, Battery health prognosis for electric vehicles using sample entropy and sparse Bayesian predictive modeling, IEEE Transactions on Industrial Electronics 63 (4) (2016) 2645–2656. - M. A. Patil, P. Tagade, K. S. Hariharan, S. M. Kolake, T. Song, T. Yeo, S. Doo, A novel multistage Support Vector Machine based approach for li ion battery remaining useful life estimation, Applied Energy 159 (2015) 285–297. - D. Wang, Q. Miao, M. Pecht, Prognostics of lithium-ion batteries based on relevance vectors and a conditional three-parameter capacity degradation model, Journal of Power Sources 239 (2013) 253–264. - A. Nuhic, T. Terzimehic, T. Soczka-Guth, M. Buchholz, K. Dietmayer, Health diagnosis and remaining useful life prognostics of lithium-ion batteries using data-driven methods, Journal of Power Sources 239 (2013) 680–688. - K. Goebel, B. Saha, A. Saxena, J. R. Celaya, J. P. 
Christophersen, Prognostics in battery health management, IEEE Instrumentation & Measurement Magazine 11 (4) (2008) 33. - B. Saha, K. Goebel, Uncertainty management for diagnostics and prognostics of batteries using Bayesian techniques, in: Aerospace Conference, 2008 IEEE, IEEE, 2008, pp. 1–8. - W. He, N. Williard, M. Osterman, M. Pecht, Prognostics of lithium-ion batteries based on Dempster–Shafer theory and the Bayesian Monte Carlo method, Journal of Power Sources 196 (23) (2011) 10314–10321. - R. R. Richardson, M. A. Osborne, D. A. Howey, Gaussian process regression for forecasting battery state of health, Journal of Power Sources 357 (2017) 209–219. - C. E. Rasmussen, Gaussian processes for machine learning, Citeseer, 2006. - K. P. Murphy, Machine learning: a probabilistic perspective, MIT press, 2012. - C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006. - B. Bole, C. Kulkarni, M. Daigle, Randomized battery usage data set, NASA AMES prognostics data repository. - B. Bole, C. S. Kulkarni, M. Daigle, Adaptation of an electrochemistry-based li-ion battery model to account for deterioration observed under randomized use, in: Proceedings of Annual Conference of the Prognostics and Health Management Society, Fort Worth, TX, USA, Vol. 29, 2014.
https://deepai.org/publication/battery-health-prediction-under-generalized-conditions-using-a-gaussian-process-transition-model
Hi ya’ll! Today I want to share 6 simple steps that will help you achieve your goals! After having 2 weeks off with my family for Christmas break I am finally getting back into routine. And if you have read very many of my posts you know that I am a HUGE fan of routines. (hehe) However, getting back into routine and reviewing the goals I have set for this year has been a bit overwhelming. I was texting my sister the other day not only about my goals for the year, but also what I needed to catch up on. And as I was typing it all out I could feel my stress levels rising…how was I going to get it all accomplished? So, after a small freak out, I realized that sitting here and convincing myself of how impossible these tasks seemed was not helping my cause. A good plan and affirmative action was the only way I’d see the results I was aiming for! So many of us struggle from year to year to stay focused and motivated to accomplish our goals and resolutions. (Including myself!) So, here are some steps to help you, and me, stay motivated and encouraged! Why is achieving your goals important? When you achieve your goals it leaves you with a sense of satisfaction and pride. Which in turn boosts your confidence. And when your confidence is boosted you have more drive to better yourself and the world around you. As you can see, achieving your goals leads to a chain reaction of not only material accomplishments but also to a better outlook on life and your well-being. 6 steps to help you achieve your goals #1- Start a “Goal” Planner. I must admit that I am guilty of setting goals for myself (in my head) for the day or week and never following through with them. However, I have noticed that if I take the time to write those goals down on paper I am more likely to follow through with them. There are studies that show there is more power (oomph) behind a goal when it is written down because of a brain-to-hand connection (almost like you are speaking it into existence). I know that sounds odd but I am living proof that there must be some truth to that! My opinion on the matter is that when we actually take the time to write down a goal (no matter how big or small) our brain automatically categorizes it as important. And when our brain categorizes it as important we are more likely to achieve that goal. Plus, a visual reminder makes a great motivator! These Daily Planner pages are a great way to achieve your goals! #2- Don’t schedule goals too closely together. One thing that I have found to be the biggest discouragement and a “goal accomplishing killer” is being overwhelmed. Sometimes we can get overly ambitious and set 50 goals to accomplish in one week. We write it out and have a plan and get super pumped, but as we all know we cannot write out every second of our day and stick to it. Life can throw a curve into your day and your tightly scheduled day falls apart. So my advice is to give yourself some wiggle room in your day or week. Allow yourself time to accomplish one goal before moving to the next. Or if you have one BIG goal, break it down into sections. Start by writing down ALL of your goals and to-do’s on a sheet of paper. Then set 3 goals per day or week (depending on your regular schedule) in your planner. After achieving those 3 goals then add more to your day or your week if you have spare time. Finding a balance between setting and accomplishing will lessen the overwhelm and ultimately help you achieve your goals. #3- Find an accountability partner to help you achieve your goals. 
A lot of times we think that asking for help is a sign of weakness. However, I feel that having someone to keep me in check is more of a motivation. Because how many of us want to let someone down? Not me! I want to make them proud and hear them cheering on the sidelines. Having an accountability partner also comes in handy when we need an outside perspective on a goal that we are struggling with. We often get too wrapped up in the problem and can’t see a solution. But your accountability partners vision isn’t clouded and can offer valuable insight! However, I do not recommend just asking anyone to be your support system. Be sure to ask someone you fully trust and respect their opinion, like a close friend or family member. #4- Surround yourself with encouragement. Encouragement can come in many different forms. (e.g. friends/family, uplifting music, or inspirational sayings on wall/desk art, scripture.) By surrounding yourself with encouragement you are boosting your confidence and pushing yourself to achieving your goals faster. Which leads us into tip #5! #5- Ignore the lies in your head that say you can’t achieve your goals. You are your own worst enemy. That is why tips #4 and #5 are so important! When we get frustrated we automatically start mentally beating ourselves up. We convince ourselves of every lie in the book- “I’m not smart enough or talented enough.” “It’s hopeless.” “I’m a failure.” We end up believing that we have reached our limit. But the truth is, if we simply take a step back and refocus we will see that we are capable of so much more! So by having that accountability partner and surrounding yourself with encouragement you will be more likely to stay on the path to achieving your goals! #6- Don’t lose sight of your goal and the reason behind it! Remember the reason “why” you set this goal and let that be your motivation. Sometimes we can get wrapped up in our “goal journey” that we forget what brought us on this journey to begin with. But if we keep the reason why we set that goal at the forefront of our minds, we are more likely to stay focused and not give up. I personally feel that every goal should have an origin story or “reason” behind it. Because without it there would be no motivation to accomplish it. For example, I want to clean out my window sills. This goal has no life changing or exciting reasoning behind it, which can make it feel pointless. However, if I give it a reason, such as, so my family won’t be breathing in the mold that is growing on the sills, I am more likely to make it a priority. So, as you are setting your goals, I encourage you to give it a reason why. This will increase the motivation, help you prioritize, and achieve your goals faster! Conclusion Remember, you deserve this! Yes, it will take time. And yes, it will take hard work and dedication. But the end result will definitely be worth it! So I encourage you today to not give up! Create a plan, find your support, ignore the lies, and achieve your goals! You May Also Like: Create your own customized planner pages in Canva for FREE with this tutorial. Or check out my “already done for you” 6 Page Planner Bundle in my Etsy shop!
https://mommybytrade.com/achieve-your-goals-with-these-6-simple-steps/
While college football continues to figure out how to bring the sport back in the midst of the coronavirus pandemic, the Big 12's commissioner thinks it will come back with fans in the stands. Big 12 Commissioner Bob Bowlsby said Friday during a SiriusXM interview that he believes there will be people in the stadiums to watch their colleges play, just not as many as others are used to seeing. "It really depends on how things go between now and then," Bowlsby told hosts Dave Archer and Ari Temkin. "I think it's fair to ask the question: If it's safe for the kids to be on the field in close contact to one another, why wouldn't it be safe for fans to be in the stands at social distance? "Now you take an 80,000-seat stadium, though, and you might end up with 20,000 people there." The look of live college football has been one of the chief topics throughout the pandemic and the process of starting the 2020 season. Ticket sales are a healthy chunk of college athletic programs' revenue. Ohio State athletic director Gene Smith told reporters earlier this week that Ohio Stadium, which can seat a shade over 102,000, could seat about 20-22,000 using social distancing guidelines, according to models the university used. That number could rise to as many as 50,000 if those guidelines were relaxed. Sports fans would be fine with near-empty or empty stadiums, according to an ESPN poll released in early May, as long as they could see some live events again. Among the 1,004 polled, 65 percent favored sports returning even if fans weren't in the stands. That number jumped to 76 percent if players were kept in hotels and their contact with others was closely monitored. While Bowlsby is optimistic those seats won't be completely empty, he admits the process of making those seats safe for fans to use will be a very involved task. "I think we'll have fans in the stands," Bowlsby said. "When you think about how difficult it is to do hospital-level disinfecting in a weight room or locker room or a training room, think about doing it for an entire stadium — the entry ways, the lines at the restrooms, the lines at the concessions stands, sitting that far apart in the stands. It's a very large undertaking." ••• It won't be just football and men's and women's basketball that are allowed to hold voluntary workouts on June 1. The NCAA announced Friday that Division I athletes in all sports will be allowed to participate in voluntary activities starting on that date. The Division I Council also voted to extend the waiver to allow eight hours of required non-physical activities in all sports.
--- address: - 'Dipartimento di Matematica, Università della Campania “L. Vanvitelli”, viale Lincoln, 5, 81100 Caserta, Italy' - 'St Hilda’s College, University of Oxford, Cowley Place, Oxford OX4 1DY, UK' - 'School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS, and School of Mathematics, University of Edinburgh, King’s Buidings, Peter Guthrie Tait Road Edinburgh EH9 3FD, UK' author: - 'Paola D’Aquino' - Jamshid Derakhshan - 'Angus Macintyre${}^{\dag}$' bibliography: - 'bibadeles.bib' title: Truncations of Ordered Abelian Groups --- **Introduction** {#sec-introduction} ================ This paper is a component of two pieces of research, one by D’Aquino and Macintyre [@PDAJM],[@elem-prod], and one by Derakhshan and Macintyre [@elem-rest]. The common theme is the model theory of local rings $V/(\alpha)$, where $V$ is a Henselian valuation domain, and $\alpha$ is a nonzero nonunit of $V$. If $v: V\rightarrow P$ is the valuation of $V$, with $P$ the semigroup of non-negative elements of the value group $\Gamma$ of the fraction field $K$ of $V$, $v$ induces a ”truncated valuation” from $V/(\alpha)$ onto the segment $[0,v(\alpha)]$ of $P$ defined by $v(x+(\alpha))=v(x)$ if $v(x)<v(\alpha)$, and $v(x+(\alpha))=v(\alpha)$ if $v(x)\geq v(\alpha)$. The segment $[0,v(\alpha)]$ inherits from $\Gamma$ an ordering $\leq$, with $0$ as least element and $v(\alpha)$ as greatest element. From our assumption on $\alpha$, $v(\alpha)\neq 0$. Next, $[0,v(\alpha)]$ gets a truncated semi-group structure as follows. Let $\oplus$ be the addition on $\Gamma$. Define, for $\gamma_1,\gamma_2\in [0,v(\alpha)]$ $$\gamma_1+\gamma_1=\mathrm{min}(\gamma_1\oplus\gamma_2,v(\alpha)).$$ The basic laws are $$v(x+y)\geq \mathrm{min}(v(x),v(y))$$ $$v(xy)=\mathrm{min}(v(\alpha),v(x)+v(y)).$$ Ideas connected to this, but much more sophisticated, appear in the work of Hiranouchi [@hiranouchi-survey],[@hiranouchi-ext]. Forgetting the valuation in the preceding, we have an ordered abelian group $\Gamma$, with order $\leq$, addition $\oplus$, subtraction $\ominus$, zero $0$, and a distinguished element $\tau>0$ (where $\tau=v(\alpha)$ in the preceding). We then define as above a “truncated addition” $+$ on $[0,\tau]$, giving us an example of a truncated ordered abelian group (TOAG). Our main result produces a natural first-order set of axioms in the language $\{\leq,0,\tau,+\}$ for truncated ordered abelian groups (these axioms are true in initial segments of ordered abelian groups), and proves that each truncated ordered abelian group (i.e. each model of these axioms) is an initial segment of an ordered abelian group. **The axioms** ============== 1.2. [*Obvious axioms.*]{} The following are obvious, via immediate calculations in $P$. [*Axiom*]{} 1. Addition $+$ is commutative. 2\. $x+0=x$. 3\. $x+\tau=\tau$. 4\. If $x_1\leq y_1$ and $x_2\leq y_2$ then $x_1+x_2\leq y_1+y_2$. 1.3. [*Less obvious axioms.*]{} 5\. Addition $+$ is associative. Suppose $x,y,z$ in $[0,\tau]$. 1\. $x \oplus y \oplus z \leq \tau$. Then $$x+(y+z)=x+(y\oplus z)=x\oplus (y\oplus z)=(x\oplus y)\oplus z=(x+y)+z$$ 2\. $x\oplus y \oplus z >\tau$. 1\. $y\oplus z\geq \tau$ and $x\oplus y \geq \tau$. Then $$x+(y+z)=x+\tau=\tau$$ $$(x+y)+z=\tau+z=\tau.$$ 2\. $y\oplus z <\tau, x\oplus y \geq \tau$. Then $$x+(y+z)=x+(y\oplus z)=min(x\oplus (y\oplus z),\tau)=min(x\oplus y\oplus z,\tau)=\tau,$$ $$(x+y)+z=\tau+z=\tau.$$ 3\. $y\oplus z\geq \tau$ and $(x\oplus y)<\tau$. 
Then $$x+(y+z)=x+\tau=\tau$$ $$(x+y)+z=(x\oplus y)+z=min((x\oplus y)\oplus z,\tau)=min(x\oplus y\oplus z,\tau)=\tau.$$ 4\. $y\oplus z<\tau,~ x\oplus y<\tau$. Now $x\oplus (y\oplus z)=(x\oplus y)\oplus z$ and so $x\oplus(y+ z)=(x+ y)\oplus z$ and so $$min(x\oplus (y+z),\tau)=min((x+y)\oplus z,\tau),$$ so $x+(y+z)=(x+y)+z$. 1.4. [*Axioms concerning cancellation.*]{} 6\. If $x+y=x+z <\tau$, then $y=z$. $x\oplus y=x+y$ and $x\oplus z=x+z$ in this case, so use cancellation in $P$. 7\. If $x\leq y<\tau$, then there is a unique $z$ with $x+z=y$. Immediate from definition and the fact that $P$ is the non-negative part of $\Gamma$. We write $y {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x$ for the $z$ in the above. 8\. There are (in general, many) $z$ in $[0,\tau]$, for $x$ in $[0,\tau]$, so that $x+z=\tau$, and there is a minimal one to be denoted $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x$. Obvious by working in $P$ and taking $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x=\tau -x$. We define $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}\tau$ as $0$. 9\. $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)=x$. For $x<\tau$, $$\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)=\tau - (\tau-x).$$ For $x=\tau$, $$\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)=\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}0=\tau=x.$$ 1.5. [*Crucial Axioms.*]{} There now follows a series of axioms which are basic in what follows. It is not clear to us what are the dependencies between these axioms over the preceding nine. 10\. Suppose $0\leq x,~ y<\tau$ and $x+y=\tau$. Then $y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)=x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)$. Both are equal to $(x\oplus y) \ominus \tau$. Before getting to the remaining axioms, we prove some useful lemmas without using Axiom 10. 1.6. \[lem1\] Suppose $0\leq y\leq z<\tau$. Then $\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z \leq \tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y$. We have that $y+(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)=\tau$ and $z+(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)=\tau$. 
If $(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z) >(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)$, then (Axiom 8) $$z+(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)<\tau,$$ so $$y+(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)<\tau,$$ contradiction. \[lem2\] For $x,y<\tau$, $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x=\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y$ implies $x=y$. Assume $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x=\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y$. Then $$\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)=\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y),$$ so (Axiom 9) $x=y$. \[cor1\] If $x<y<\tau$, then $\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y<\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x$. By Lemma \[lem2\], $$\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y \leq \tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x.$$ If $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y=\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x$, then $x=y$. 1.7. [*A miscellany of other axioms.*]{} In the course of proving Associativity in Theorem 1 we need various axioms about $+,{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}$, and $\tau$. Each of these axioms is true (and with a trivial proof) in the $[0,\tau]$ coming from the ordered abelian group $\Gamma$ with $\oplus$, so it is natural to use them. One may hope to deduce them from the axioms listed already, but we have not succeeded in doing so. Thus we settle for quite a long list of “ad hoc” axioms, which we now consider in the order in which they occur in the proof of Associativity in 2.8. 11\. Assume $y+z<\tau$,   $x+(y+z)=\tau$,   and $y+x<\tau$. Then $$x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z))=z {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(x+y)).$$ . By Axiom 5 we may safely write $x+y+z$ for $x+(y+z)$ and $(x+y)+z$ and will do so henceforward. In all that follows, we construe $x,y,z,\tau$ as in the $[0,\tau]$ in $(P,<_P,\oplus)$ the non-negative part of an ordered abelian group $\Gamma$, where $<$ is identified with $<_P$ (namely, the restriction of $\leq$ to $P$), and $+$ with the truncation of $\oplus$. Then $$x\oplus y\oplus z=\tau \oplus \epsilon,$$ for some $\epsilon$ in $P$. Now $y\oplus z<\tau$ and $y\oplus x<\tau$, so $x\oplus y\oplus z<2\tau$, so $\epsilon < \tau$. 
Let $$\mu=\tau-(y+z),$$ $$\delta=\tau-(x+y).$$ Then $x=\mu+\epsilon$, so $x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z))=\epsilon$ and $z=\delta+\epsilon$, so $$z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(x+y))=\epsilon,$$ giving the verification. The subsequent verifications are at the same level of difficulty. 12\. Assume $y+z<\tau$,   $x+y+z=\tau$,   $y+x=\tau$, and $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))<\tau$. Then $$z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z)).$$ (Convention: Any time we write $A{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}B$ we assume $A\geq B$). We have $$x\oplus y \oplus z=\tau\oplus \epsilon$$ for some $\epsilon \in P$, and $$y\oplus x=\tau\oplus \gamma,$$ for some $\gamma \in P$. Now $x\oplus y \oplus z<2\tau$, so $\epsilon < \tau$; and $y\oplus z<2\tau$, so $\gamma<\tau$. Let $\mu=\tau-(y+x)$. Then $x=\mu\oplus \epsilon$, so $$x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z))=\epsilon$$ whereas, $$z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=z+(y-(\tau-x))=x+y+z-\tau=\epsilon.$$ 13\. Assume $y+x=\tau$ and $y+z<\tau$. Then $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))<\tau$. Let $y\oplus x=\tau\oplus \epsilon$, where $0\leq \epsilon <\tau$. Then $$y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)=\epsilon.$$ and $$z\oplus \epsilon=z\oplus ((y\oplus x)\ominus \tau)=(x\oplus y\oplus z)\ominus \tau<\tau$$ since $x<\tau$ and $y+z<\tau$. 14\. Assume $y+z=y+x=\tau$ and $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))<\tau$. Then $$x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=(x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y))+x.$$ Let $$y\oplus z=\tau \oplus \epsilon$$ $$y\oplus x=\tau\oplus \delta,$$ with $0\leq \epsilon,\delta<\tau$. 
So $$y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\epsilon,$$ $$x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)=\delta,$$ $$y\oplus z\oplus \delta=\tau\oplus \epsilon \oplus \delta,$$ and $$y\oplus x\oplus \epsilon=\tau\oplus \epsilon \oplus \delta,$$ so $x+\epsilon=z+\delta$ as required. 15\. Assume $y+z=\tau$,   $y+x=\tau$, and $x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau$. Then $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=\tau$. Let $$y\oplus z=\tau \oplus \delta,$$ $$y\oplus x=\tau\oplus \epsilon.$$ as before. Then $$y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\delta$$ $$y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=\epsilon.$$ $$x\oplus (y\ominus (\tau \ominus z))=(x\oplus y\oplus z)\ominus \tau$$ since $$x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau$$ $$x\oplus y\oplus z\geq 2\tau$$ Now $z\oplus (y\ominus(\tau\ominus x))=(x\oplus y \oplus z)\ominus \tau$, so $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=\tau$ since $x\oplus y\oplus z\geq 2\tau$. 16\. 
Assume $$y+z=y+x=x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau.$$ Then $$(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)=(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau\dot z)){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x).$$ $$(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)=$$ $$((y\oplus x)\ominus \tau) \ominus (\tau \ominus z)=$$ $$(x\oplus y \oplus z)\ominus (2\tau)=$$ $$((y\oplus z)\ominus \tau)\ominus (\tau \ominus x)=$$ $$(y {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x).$$ **Truncated ordered abelian groups and ordered abelian groups** =============================================================== 2.1. A truncated ordered abelian group (TOAG) is a linear order $[0,\tau]$ with a $+$ satisfying Axioms 1-16. 2.2. \[toag\] Let $[0,\tau]$ be a truncated ordered abelian group with $+$ and $\leq$. Then there is an ordered abelian group $\Gamma$, under $\oplus$ and $\leq_{\Gamma}$, with $P$ the semigroup of non-negative elements, and an element $\tau_P$ of $P$ so that $[0,\tau]$ with $+$ and $\leq$ is isomorphic to $[0,\tau_P]$ with the addition and order induced by $\oplus$ and $\leq_P$ on $[0,\tau_P]$. We begin by constructing $P$, and then we laboriously verify that it has the required properties. 2.3. [*Construction.*]{} $P$ is $\omega \times [0,\tau)$, where $\omega$ is the set of finite ordinals $k$ under ordinal addition ($+$) and order ($\leq$). $[0,\tau)$ has the order induced form $[0,\tau]$. $P$ is lexicographically ordered with respect to the two orderings just specified. Let $\leq_P$ be the lexicographic order. Let $0=<0,0>\in P$, the least element of $P$. Let $\tau_P=<1,0>\in P$. We have $[0,\tau_P)=\{0\} \times [0,\tau)$, giving natural order isomorphisms $[0,\tau_P)\cong [0,\tau)$ and $[0,\tau_P]\cong [0,\tau]$. Now we define $\oplus$ on $P$. 2.4. [*The Case1/Case 2 distinction for $<y,z>\in [0,\tau)^2$.*]{} This comes up all the time in what follows, and the axioms from Axiom 10 on relate to it. $y+z<\tau$ $y+z=\tau$ Note that both are symmetric in $y$ and $z$ and are complements of each other. The main point is that Case 2 is equivalent to both $y\geq \tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z$ and $z\geq \tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y$. 
Moreover Axiom 10 implies that in Case 2 $$y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)=z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y).$$ 2.5. [*Defining $\oplus$.*]{} We define $<k,y> \oplus <l,z>$ to be $<k+l,y+z>$ if $<y,z>$ in Case 1, and $<k+l+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)>$ if $<y,z>$ in Case 2. 2.6. $\oplus$ is commutative, with $0$ as neutral element. Let $<k,y>,<l,z>\in P$. 1\. If $<y,z>$ in Case 1 (and so then is $<z,y>$). Then $$(<k,y>\oplus <l,z>=<k+l,y+z>=<l+k,z+y>=<l,z>\oplus <k,y>.$$   [*Case*]{} 2. If $<y,z>$ in Case 2 (again symmetric). Then $$<k,y>\oplus <l,z>=<k+l+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)>=<l+k+1,z {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)>,$$ by Axiom 10. If $y=0$ we are in Case 1 so $<k,y>\oplus <l,z>=<k+l,z>$, so if $k=0$, this is equal to $<0,z>$. This completes the proof in all the cases. 2.7. [*$\leq_P$ and $\oplus$*]{} If $<k,x>\leq_P <l,y>$ and $<m,z> \leq_P <n,w>$, then $$<k,x>\oplus <m,z> ~ \leq ~ <l,y>\oplus <n,w>.$$ There are four cases: a\) $(x,z)$ and $(y,w)$ both in Case 1. b\) $(x,z)$ in Case 1, $(y,w)$ in Case 2. c\) $(x,z)$ in Case 2, $(y,w)$ in Case 1. d\) $(x,z)$ in Case 2, $(y,w)$ in Case 2. Now suppose the hypothesis of the lemma, i.e. $k\leq l, x\leq y, m\leq n, z\leq w$. a\) $<k,x> \oplus <m,z>=<k+m,x+z)$ and $$(l,y)\oplus (n,w)=(l+n,y+w),$$ and the required inequality follows from the four inequalities of the hypothesis and Axioms 1-10. b\) $<k,x>\oplus <m,z>=<k+m,x+z>$, and $$<l,y>\oplus <n,w>=<l+n+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}w)>,$$ and the required inequality follows since $k+m\leq l+n+1$. c\) $<k,x>\oplus <m,z>=(k+m+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))$, and $$<l,y>\oplus <n,w>=<l+n,y+w>.$$ But since $y\geq x$, and $w\geq z$, we have $y+w=\tau$, contradiction. This case does not occur. 
d\) $<k,x>\oplus <m,z>=<k+m+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z>$, and $$<l,y>\oplus <n,w>=<l+n+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}w)>.$$ But now $x\leq y$, and (by Lemma \[cor1\]) $\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z\geq \tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}w$, so $$x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)\leq y-(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)\leq y-(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}w)=y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}w)$$ (we are in Case 2), giving the result. 2.8. [*Associativity.*]{} Verifying this is the most tedious task of all. We can profit a bit from having already proved commutativity. We need to prove: $$(1) \ \ \ \ \ \ \ \ <k,x>\oplus (<l,y>\oplus <m,z>) =$$ $$(2) \ \ \ \ \ \ \ \ (<k,x>\oplus <l,y>)\oplus <m,z>=$$ $$(3) \ \ \ \ \ \ \ \ <m,z>\oplus (<l,y>\oplus <k,x>).$$ So we should calculate $(1)$ via, firstly, Case of $(y,z)$ and then Case of $x$ and right-hand coordinate of $<l,y>\oplus <m,z>$. Then do same for $(3)$, switching $x$ and $z$. The simplest situation for $(1)$ is: 1\. $(y,z)$ in Case 1 and then right-hand coordinate of $<l,y>\oplus <m,z>$ is in Case 1 with $x$. So calculation gives $y+z+x <\tau$ and value of $(1)$ is $$<k+l+m,x+y+z>.$$ This is exactly the same as we get from $(3)$ assuming $(y,x)$ in Case 1, and so is $z$ with right-hand coordinate of $<l,y>\oplus <k,x>$, and so we verify one instance of associativity, when $x+y+z<\tau$. 2\. $(y,z)$ in Case 1, and $x$ in Case 2 with right-hand coordinate of $<l,y>\oplus <m,z>$. So $y+z<\tau$ but $x+(y+z)=\tau$. Then value of $(1)$ is $$<k+l+m+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z))>$$ (and bear in mind that $x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z))=(y+z){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)$). With the same assumptions on $x,y,z$ we try to calculate the value of $(3)$,i.e. of $<m,z>\oplus (<l,y>\oplus <k,x>)$. From the preceding we have $y+z<\tau$ and $x+(y+z)=\tau$. Now we try to calculate $<l,y>\oplus <k,x>$. 1\. $y+x<\tau$. 
Then $$<l,y>\oplus <k,x>=<L+k,y+x>$$ and since $x+(y+z)=\tau$ we have $z+(x+y)=\tau$, whence Case 2 for $z$ and $y+x$, whence $$<m,z>\oplus (<l,y>\oplus <k,x>)=<m+l+k+1,z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(x+y))>.$$ Now, Axiom 11 gives that if $$x+y+z=\tau,$$ and $y+z<\tau$ and $y+x<\tau$, then $$x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+z))=z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(x+y)),$$ so we have $(1)=(3)$ in this subcase. 2\. $y+x=\tau$. Then $$<l,y>\oplus <k,x>=<l+k+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)>= <l+k+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)>$$ 2.1. $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))<\tau$. So $(3)$ is $$<m+l+k+1,z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))>$$ and by Axiom 12 we have $(1)=(3)$ in this subcase. 2.2. $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=\tau$. By Axiom 13 this is impossible. 3\. $(y,z)$ in Case 2, and $x$ in Case 1 with right-hand coordinate of $<l,y>\oplus <m,z>$. So $y+z=\tau$, whence $$<l,y>\oplus <m,z>=<l+m+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)>=<l+m+1,z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)>$$ Then $x+((y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))<\tau$, and so $$<k,x>\oplus (<l,y>\oplus <m,z>)=<k+l+m+1,x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))>$$ which equals the value of $(1)$. With the same assumptions on $(x,y,z)$ we try to compute $(3)$. The assumptions $y+z=\tau$ and $x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))<\tau$. 1\. $y+x<\tau$. Then $<l,y>\oplus <k,x>=<l+k,y+x>$. 1.1. $z+(y+x)<\tau$. This is impossible, since $y+z=\tau$ is assumed. 1.2. $z+(y+x)=\tau$. 
So $(3)$ equals $$<k+l+m+1,z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y+x))>=<k+l+m+1,(y+x){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)>$$ which equals (1) by Axiom 11. 2\. $y+x=\tau$. Then $$<l,y>\oplus <k,x>=<l+k+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)>=<l+k+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)>.$$ [*Subcase*]{} 2.1. $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau\dot x))<\tau$. So $(3)$ equals $$<k+l+m+1,(x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y))+z>=<k+l+m+1,(y {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))+z>$$ which equals $(1)$ by Axiom 14. 2.2. $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=\tau$. But this together with the assumptions of the situation contradicts Axiom ?. So it does not occur. 4\. $(y,z)$ in Case 2 and $x$ in Case 2 with right-hand coordinate of $<l,y>\oplus <m,z>$. So $y+z=\tau$ whence $$<l,y>\oplus <m,z>=<l+m+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)>=<l+m+1,z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)>.$$ Then $x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau$, so $$<k,x>\oplus (<l,y>\oplus <m,z>)=<k+l+m+2,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)))>=<k+l+m+2,(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)) {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)>,$$ giving the value of $(1)$. With the same assumptions on $(x,y,z)$, we try to compute $(3)$. The assumptions are $y+z=\tau$ and $x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau$. 1\. $y+x<\tau$. 
But this is inconsistent with Axiom 6 since $x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau$. 2\. $y+x=\tau$. So $$<l,y>\oplus <k,x>=<l+k+1,y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=<l+k+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)>.$$ 2.1. $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))<\tau$. This is inconsistent by Axiom 15, since $$x+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z))=\tau$$ 2.2. $z+(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x))=\tau$. Then $(3)$ equals $$<k+l+m+2,(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)){\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)$$ and by Axiom 16 this equals $$<k+l+m+2,(y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)) {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x)>$$ so $(1)=(3)$. This concludes the proof that $\oplus$ is associative. 2.9. [*Cancellation.*]{} In the preceding we have established that $(P,\leq_P,\oplus)$ is a commutative ordered monoid with $0$ as least element. In 2.7 we show (en passant) that $<k,x>\leq <k,x>\oplus <l,y>$ for any $<k,x>$, $<l,y>$. It remains to prove cancellation or relative complementation, namely: If $<k,x> \leq <l,y>$ then there exists $<m,z>$ with $$<k,x>\oplus <m,z>=<l,y>.$$ Assume $<k,x>\leq <l,y>$. Then $k\leq l$. If $k=l$, then $x\leq y$. So take $m=0$ and $z=y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x$. If, however, $k<l$, there are two cases. 1\. $x\leq y$. So take $m=l-k$ and $z=y{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x$. 2\. $y<x$. We need $<m,z>$ so that $$<l,y>=<k,x>\oplus <m,z>,$$ and as usual the Case 1/Case 2 distinction on $x,z$ intervenes. But now $z$ is the unknown, with $x,y$ given. Suppose $z$ can be found in Case 1. So $x+z<\tau$, and then $y=x+z$, so $y\geq x$, contradicting our assumption. 
Thus $z$ can be found, if at all, only in Case 2, and then $$y=x {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)=z{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}x),$$ so $\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z=x {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y$. So $z=\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}y)$. What about $m$? We want $$<l,y>=<k+m+1,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}(\tau {\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}z)>,$$ so we need only $$l=k+m-1,$$ so $m=(l-k)-1\geq 0$. This completes the proof of Theorem \[toag\]. 2.10. We are going to use Theorem \[toag\] only for discretely ordered TOAG, in fact only the $[0,\tau]$ coming from models of Presburger arithmetic. 2.11. [*Presburger truncated ordered abelian groups.*]{} In our applications we take Presburger arithmetic (cf. [@presburger]) to be formulated in the language of ordered groups with a distinguished constant $1$ (to denote the least positive element). We generally drop the distinction between the group $\Gamma$ and its non-negative part. Note that $0\neq 1$ in models of Presburger. We now consider Presburger truncated ordered Abelian groups, i.e. truncated ordered Abelian groups of the form $[0,\tau]$ with distinguished element $1$, the least positive element (we do not insist that $1<\tau$), which are truncations of models of Presburger. \[thm2\] A truncated ordered Abelian group $[0,\tau]$ with least positive element $1$ is a Presburger truncated ordered Abelian group if and only if it satisfies the following conditions: - $[0,\tau]$ is discretely ordered and every positive element is a successor, - For each positive integer $n$ and each $x$ in $[0,\tau]$ there is a $y$ in $[0,\tau]$ and an integer $m<n$ such that $$x=ny+m=(y+\dots+y)+(1+\dots+1).$$ ([*Note.*]{} When $m=0$ in $\Z$, $m=0$ in $[0,\tau]$ by definition.) Necessity is clear from the axioms of Presburger. For sufficiency, we argue as follows. Suppose $[0,\tau]$ satisfies Conditions (1) and (2) (and has $1$). Build $P$ as in the proof of Theorem \[toag\]. Clearly $1$ is the least positive element of $P$ and $P$ is discretely ordered. Let $<k,x>$ be a nonzero element of $P$, so $k\in \{0,1,2,\dots\}$ and $x \in [0,\tau)$. If $x\neq 0$, $$<k,x>=<k,x{\mathbin{\text{{ \ooalign{\hidewidth\raise1ex\hbox{.}\hidewidth\cr$\m@th-$\cr}}}}}1>+<0,1>,$$ so $<k,x>$ is a successor. If $x=0$, and $k\neq 0$, $<k,x>$ is successor of $<k-1,\tau-1>$. So $P$ is discretely ordered, and every positive element is a successor. To get the Euclidean division results, fix a positive integer $n$ and some $<k,x>$ in $P$. Let $k=na+b$, for non-negative integers $a,b$ with $b<n$. 
Now $$<k,x>=<k,0>\oplus <0,x>=<na,0>\oplus <b,0>\oplus <0,x>.$$ Now $$<na,0>=\underbrace{<a,0>\oplus \dots \oplus <a,0>}_{n \ \text{times}},$$ Also, if $b>0$, $$<b,0>=\underbrace{<1,0>\oplus \dots \oplus <1,0>}_{b \ \text{times}}$$ and $$<1,0>=<0,\tau-1>+1,$$ and if $1<\tau$ (other case trivial) $$\tau-1=nc+d,$$ for some $c\in [0,\tau-1]$, and $0\leq d<n$ (with usual conventions about multiplication by $m$). Finally, $<0,x>$ is also of the form $n\gamma+\delta$, with $0\leq \delta<n$ via Condition 2 for $[0,\tau]$. Thus $<k,x>$ is congruent “modulo” $n$ to an integer less than $n$, and we are done. **Elementary theories of Presburger truncated ordered abelian groups** ====================================================================== 3.1. When is $[0,\tau] \cong [0,\mu]$ if both are Presburger truncated ordered Abelian groups? Clearly the answer comes, by Theorem \[thm2\], from an answer to the question: what are the pure 1-types for Presburger arithmetic? The answer to this is well-known [@presburger]. Namely, the pure $1$-type of an element $x$ in $P$, the non-negative part of a model of Presburger, is determined by \(A) Whether or not $x\in \{0,1,2,\dots\}$, \(B) The remainder of $x$ modulo $n$ for each positive integer $n$. 3.2. \[thm3\] The elementary theory of a Presburger truncated ordered Abelian groups $[0,\tau]$ is determined by the Presburger $1$-type of $\tau-1$ (the penultimate element of $[0,\tau]$), i.e. by \(A) Whether $\tau -1 \in \{0,1,2,\dots\}$ \(B) The congruence class of $\tau-1$ modulo $n$ for each positive integer $n$. Moreover, any Presburger $1$-type can occur for some truncated ordered Abelian groups $[0,\tau]$. Immediate from the preceding.
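The construction in 2.3–2.5 is concrete enough to experiment with. Below is a minimal computational sketch (ours, not part of the paper) for the special case $\Gamma=\mathbb{Z}$ with $\tau$ a positive integer, so that $[0,\tau]$ is a finite initial segment of the non-negative integers, $+$ is the truncated addition $\mathrm{min}(x\oplus y,\tau)$, and $P=\omega\times [0,\tau)$ carries the addition $\oplus$ of 2.5. The names `trunc_add`, `dot_minus`, `oplus`, `embed` and the value of `TAU` are illustrative choices and not notation from the paper.

```python
# Sketch of the truncated operations and of the monoid P = omega x [0, tau)
# from 2.3-2.5, in the concrete model Gamma = Z with a fixed integer tau > 0.

TAU = 5  # hypothetical choice of the distinguished element tau


def trunc_add(x, y, tau=TAU):
    """Truncated addition on [0, tau]:  x + y := min(x (+) y, tau)."""
    return min(x + y, tau)


def dot_minus(y, x, tau=TAU):
    """Truncated subtraction y -. x (Axioms 7 and 8; by convention tau -. tau = 0)."""
    if y == tau and x == tau:
        return 0
    return y - x  # only used when y >= x


def oplus(a, b, tau=TAU):
    """The addition of 2.5 on P = omega x [0, tau), ordered lexicographically."""
    (k, y), (l, z) = a, b
    s = trunc_add(y, z, tau)
    if s < tau:                                   # Case 1 of 2.4: y + z < tau
        return (k + l, s)
    # Case 2 of 2.4: y + z = tau; the second coordinate is y -. (tau -. z),
    # which equals z -. (tau -. y) by Axiom 10 (in this model it is just a carry).
    return (k + l + 1, dot_minus(y, dot_minus(tau, z, tau), tau))


def embed(a, tau=TAU):
    """(k, y) |-> k*tau + y identifies P with the non-negative integers."""
    k, y = a
    return k * tau + y


if __name__ == "__main__":
    pts = [(k, y) for k in range(3) for y in range(TAU)]
    for a in pts:
        for b in pts:
            assert embed(oplus(a, b)) == embed(a) + embed(b)
            assert oplus(a, b) == oplus(b, a)          # commutativity (2.6)
            for c in pts:
                assert oplus(a, oplus(b, c)) == oplus(oplus(a, b), c)  # associativity (2.8)
    print("oplus agrees with ordinary addition under (k, y) |-> k*tau + y")
```

In this integer model the Case 2 clause of 2.5 is simply a carry, and the map $(k,y)\mapsto k\tau+y$ realizes the conclusion of Theorem \[toag\]: $(P,\leq_P,\oplus)$ is the non-negative part of an ordered abelian group having $[0,\tau]$ as an initial segment. The brute-force checks above confirm commutativity and associativity for a small $\tau$, in place of the case analysis of 2.6–2.8.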
--- abstract: 'We study the E-theory group $E_{[0,1]}(A,B)$ for a class of C\*-algebras over the unit interval with finitely many singular points, called elementary $C[0,1]$-algebras. We use results on E-theory over non-Hausdorff spaces to describe $E_{[0,1]}(A,B)$ where $A$ is a sky-scraper algebra. Then we compute $E_{[0,1]}(A,B)$ for two elementary $C[0,1]$-algebras in the case where the fibers $A(x)$ and $B(y)$ of $A$ and $B$ are such that $E^1(A(x),B(y)) = 0$ for all $x,y\in [0,1]$. This result applies whenever the fibers satisfy the UCT, their $K_0$-groups are free of finite rank and their $K_1$-groups are zero. In that case we show that $E_{[0,1]}(A,B)$ is isomorphic to ${\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$, the group of morphisms of the K-theory sheaves of $A$ and $B$. As an application, we give a streamlined, partially new proof of a classification result due to the first author and Elliott.' author: - Marius Dadarlat and Prahlad Vaidyanathan bibliography: - 'mybib.bib' title: 'E-theory for $C[0,1]$-algebras with finitely many singular points' --- [^1] Introduction ============ A deep isomorphism theorem of Kirchberg [@kirchberg] states that if $A$ and $B$ are strongly purely infinite, stable, nuclear, separable C\*-algebras with primitive ideal spectrum homeomorphic to the same space $X$, then $A \cong B$ if and only if the group $KK_X(A, B)$ contains an invertible element, where $KK_X(A,B)$ denotes the bivariant K-theory for C\*-algebras over a space $X$. This leads to the question of computing $KK_X(A, B)$ and finding simpler invariants to understand this object. In the present paper we focus mainly on the case when $X$ is the unit interval. Recall that a C\*-algebra over a locally compact, Hausdorff space $X$ is one that carries an essential central action of $C_0(X)$, and by the Dauns-Hofmann theorem, every C\*-algebra with a Hausdorff spectrum $X$ can be thought of as a continuous $C_0(X)$-algebra (see Definition \[defn: cx\_algebra\]).\ In this paper, we obtain information on $KK_X(A,B)$ using the $E$-theory groups $E_X(A,B)$, [@park_trout],[@mdd_meyer]. It is known [@park_trout Theorem 4.7] that $E_X(A,B)$ coincides with $KK_X(A,B)$ when $X$ is a locally compact Hausdorff space and $A$ is a separable continuous nuclear $C_0(X)$-algebra. Furthermore, the fact that E-theory satisfies excision for all extensions of $C_0(X)$-algebras enables us to compute the $E_{[0,1]}$-group for a class of elementary $C[0,1]$-algebras using the E-theory of their fibers and the E-theory classes of the connecting maps. We make crucial use of the generalization of $E$-theory to C\*-algebras over non-Hausdorff spaces, as developed in [@mdd_meyer]. Elementary $C[0,1]$-algebras, studied in [@mdd_fiberwise], [@mdd_elliott] and [@mdd_pasnicu], act as basic building blocks for more complex $C[0,1]$-algebras (see [@mdd_elliott Theorem 6.2]), and this paper should be viewed as a stepping stone towards that more general situation. As a first application of our calculations, we give a streamlined proof of the main result of [@mdd_elliott]; see Theorem \[thm:DE\].\ One reason which makes the computation of the $KK_X$-groups difficult is the prevalence of non-semisplit extensions over $X$. To illustrate this point, let us mention that the exact sequence of $C[0,1]$-algebras $0\to C_0[0,1)\to C[0,1]\to \mathbb{C}\to 0$ is not semisplit over $X=[0,1]$. 
This is more than a technical nuisance, since as pointed out in [@bauval Remarques 1], a six-term sequence of the form $$\xymatrix{ KK_X^0(\mathbb{C},D)\ar[r]& KK_X^0(C[0,1],D)\ar[r]& KK_X^0(C_0[0,1),D)\ar[d]\\ KK_X^1(C_0[0,1),D)\ar[u]& KK_X^1(C[0,1],D)\ar[l]& KK_X^1(\mathbb{C},D)\ar[l] }$$ cannot be exact for $D=C_0[0,1)$. Indeed, after computing each term one gets: $$\xymatrix{ 0\ar[r]& 0\ar[r]& \mathbb{Z}\ar[d]\\ 0\ar[u]& 0\ar[l]& 0\ar[l] {}. }$$ In contrast, the corresponding six-term exact sequence in $E^*_X$ is exact. This property plays an important role in our investigation of $C[0,1]$-algebras with finitely many singular points. The paper is organized as follows: In Section \[sec: e\_theory\], we revisit the construction of E-theory for C\*-algebras over a space $X$ and establish some preliminary lemmas. In Section \[sec: skyscraper\], we use the results in [@mdd_meyer] to describe $E_X(A,B)$ where $A$ is a sky-scraper algebra (Theorem \[thm: skyscraper\]). Section \[sec: elementary\] contains the main result of this paper (Theorem \[thm: ex\]), where we compute $E_X(A,B)$ for two elementary $C[0,1]$-algebras $A$ and $B$ whose fibers satisfy the condition $E^1(A(x),B(y)) = 0$ for all $x,y\in [0,1]$. In Section \[sec: k\_theory\], we apply these results to the case where the fibers satisfy the UCT of [@rosenberg-schochet], whose $K_0$-group is free of finite rank, and whose $K_1$-group is zero. In this case, we show (Theorem \[thm: uct\]) that the natural map $\Gamma_{A,B}: E_X(A,B) \to {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$ is an isomorphism. The fact that this map is surjective was proved, through different means, in [@mdd_elliott]. The merit of our arguments is that, not only were we able to show that the map is also injective, but we showed it using more natural methods. Also, the fact that we do not require the UCT in Theorem \[thm: ex\] is a marked difference from [@mdd_elliott].\ The authors are thankful to the referee for the suggestion to revisit the classification result of [@mdd_elliott]. E-theory over a topological space {#sec: e_theory} ================================= We recall some basic definitions and facts from [@mdd_meyer].\ Let $A$ be a C\*-algebra, and let Prim($A$) denote its primitive ideal space equipped with the hull-kernel topology. Let $\mathbb{I}(A)$ be the set of ideals of $A$ partially ordered by inclusion. For a space $X$, let $\mathbb{O}(X)$ be the set of open subsets of $X$ partially ordered by inclusion. Then there is a canonical lattice isomorphism [@dixmier $\mathsection$ 3.2] $$\label{eqn:lattice_iso} \mathbb{O}(\text{Prim}(A)) \cong \mathbb{I}(A), \quad U \mapsto \bigcap \{\mathfrak{p}: \mathfrak{p} \in \text{Prim}(A)\setminus U \}$$ \[defn: cx\_algebra\] Let $X$ be a second countable topological space. A *C\*-algebra over $X$* is a C\*-algebra $A$ together with a continuous map $\psi: \text{Prim}(A) \to X$. If in addition $X$ is locally compact and Hausdorff, then this is equivalent to a \*-homomorphism from $C_0(X)$ to the center of the multiplier algebra of $A$ such that $C_0(X)A=A$. In this case, $A$ is called a $C_0(X)$-algebra.\ For $U \subset X$ open, let $A(U) \in \mathbb{I}(A)$ be the ideal that corresponds to $\psi^{-1}(U) \in \mathbb{O}(\text{Prim}(A))$ under the isomorphism . For $F \subset X$ closed, let $A(F) = A/A(X\setminus F)$. Both $A(U)$ and $A(F)$ are C\*-algebras over $X$.\ If $X$ is Hausdorff, $A(U) = C_0(U)A$. 
We write $\pi_x$ for the quotient map $A \to A(\{x\})$ and we say that $A$ is a continuous $C_0(X)$-algebra if the function $x\mapsto \|\pi_x(a)\|$ is continuous for all $a\in A$. An asymptotic morphism between two C\*-algebras $A$ and $B$ is a family of maps ${\varphi}_t : A\to B$, for $t \in T := [0,\infty)$ such that $t \mapsto {\varphi}_t(a)$ is a bounded continuous function from $T$ to $B$ for each $a \in A$, and $${\varphi}_t(a^{\ast} + \lambda b) - {\varphi}_t(a)^{\ast} - \lambda{\varphi}_t(b) \quad \text{ and }\quad {\varphi}_t(ab) - {\varphi}_t(a){\varphi}_t(b)$$ converge to $0$ in the norm topology as $t \to \infty$ for each $a, b \in A$ and $\lambda \in \mathbb{C}$.\ Equivalently, an asymptotic morphism can be viewed as a map ${\varphi}: A \to C_b(T,B)$ that induces a \*-homomorphism $$\hat{{\varphi}} : A \to B_{\infty} = C_b(T,B)/C_0(T,B).$$ Two asymptotic morphisms ${\varphi}_t$ and ${\varphi}^{\prime}_t$ are called equivalent (in symbols, ${\varphi}_t \sim {\varphi}^{\prime}_t$) iff $\hat{{\varphi}} = \hat{{\varphi}^{\prime}}$, ie. ${\varphi}_t(a) - {\varphi}^{\prime}_t(a) \to 0$ as $t \to \infty$ for each $a \in A$.\ Let $A$ and $B$ be C\*-algebras over a second countable topological space $X$. A \*-homomorphism $\theta : A\to B$ is called $X$-equivariant if $\theta$ maps $A(U)$ into $B(U)$ for all open sets $U \subset X$.\ An asymptotic morphism ${\varphi}: A\to C_b(T,B)$ is called approximately $X$-equivariant if, for any open set $U \subset X$, $$\label{eq:equiv} {\varphi}(A(U)) \subset C_b(T,B(U)) + C_0(T,B).$$ If $X$ is second countable, then an asymptotic morphism ${\varphi}: A\to C_b(T,B)$ is approximately $X$-equivariant if and only if it satisfies for only those open sets $U_n$ in a countable subbasis $(U_n)_n$ of the topology of $X$. If in addition $X$ is a locally compact Hausdorff space, then by [@mdd_meyer Lemma 2.11] ${\varphi}$ is an approximately $X$-equivariant morphism if and only if ${\varphi}$ is an asymptotic $C_0(X)$-morphism in the sense of [@park_trout Definition 3.1]. i.e. ${\varphi}(fa)-f{\varphi}(a)\in C_0(T,B)$ for all $f\in C_0(X)$ and $a\in A$. \[lem: restriction\] Let $A$ and $B$ be C\*-algebras over a second countable topological space $X$ and let $\psi_t:A\to B$ be an approximately $X$-equivariant asymptotic morphism. For any open set $U \subset X$, there is an approximately $X$-equivariant asymptotic morphism $\psi_t^U : A \to B$ such that $\psi_t^U \sim \psi_t$ and $$\psi_t^{U}(A(U)) \subset B(U) \fa t \in T.$$ For an open subset $U$ of $X$ let $$B_{\infty}(U):=\frac{C_b(T,B(U))+C_0(T,B)}{C_0(T,B)}.$$ Note that $B_{\infty}(X)=B_{\infty}$. Let $\pi:C_b(T,B)\to B_{\infty}$ be the quotient map. Let $s_U: B_\infty(U)\to C_b(T,B(U))$ be a set theoretic section of the surjective map $\pi_U :C_b(T,B(U))\to B_\infty(U)$ obtained by restricting $\pi$ to $C_b(T,B(U))$. Extend $s_U$ to a section $s:B_\infty \to C_b(T,B)$ of $\pi$. Then $\psi^{U}:=s\circ \hat \psi$ is an asymptotically $X$-equivariant asymptotic morphism equivalent to $\psi$ since $\hat \psi^{U}=\hat \psi$. Moreover, we have that $$\psi^{U}(A(U))=s(\hat \psi(A(U)))\subset C_b(T,B(U))$$ since $\hat\psi(A(U))\subset B_{\infty}(U)$ as a consequence of the assumption that $\psi(A(U)) \subset C_b(T,B(U)) + C_0(T,B)$. A homotopy of asymptotic morphisms from $A$ to $B$ is an asymptotic morphism from $A$ to $C([0,1],B)$. 
Let $\llbracket A,B\rrbracket_X$ denote the set of homotopy classes of approximately $X$-equivariant asymptotic morphisms from A to B, and let ${\llbracket \psi_t \rrbracket}$ denote the homotopy class of an approximately $X$-equivariant asymptotic morphism $\psi_t:A \to B$.\ It is immediate that equivalent asymptotic morphisms are homotopic. Let $X$ be a second-countable topological space and let $A$ and $B$ be separable C\*-algebras over $X$. Define $$\begin{split} E_X(A,B) = E_X^0(A,B) &= \llbracket SA\otimes \mathcal{K}, SB\otimes \mathcal{K} \rrbracket_X \\ E_X^1(A,B) &= E_X(A,SB) \end{split}$$ where $S$ denotes the suspension functor $SA = C_0(\mathbb{R},A)$.\ By [@mdd_meyer Theorem 2.25], there is a composition product $$E_X(A,B)\times E_X(B,C) \to E_X(A,C)$$ and $E_X(\cdot, \cdot)$ is the universal half-exact, C\*-stable homotopy functor on the category of separable C\*-algebras over $X$. There are six-term exact sequences in each variable of $E^{\ast}_X(A,B)$.\ Furthermore, if $X$ is a locally compact, Hausdorff space, then this definition of $E^{\ast}_X(A,B)$ coincides with that of Park and Trout (see [@mdd_meyer Proposition 2.29]) With these definitions in place, Proposition \[prop: open\_set\] is now a simple consequence of Lemma \[lem: restriction\]. \[prop: open\_set\] Let $X$ be a second-countable topological space and let $A$ and $B$ be separable C\*-algebras over $X$. For any open set $U \subset X$, the inclusion map $i: B(U) \to B$ induces a natural isomorphism $$i_{\ast} : E_X^{\ast}(A(U), B(U)) \xrightarrow{\cong} E_X^{\ast}(A(U), B).$$ \[prop: restriction\] Let $X$ be a second-countable topological space and let $A$ and $B$ be separable C\*-algebras over $X$. If $U \subset Y \subset X$ with $U$ open and $Y$ has the induced topology, then $$E_X^{\ast}(A(U), B(U)) \cong E_Y^{\ast}(A(U), B(U)).$$ Let us first observe that $A(U)$ and $B(U)$ are C\*-algebras over $Y$. Indeed if $W$ is open in $Y$, then $W=Y\cap V$ for some open subset $V$ of $X$ and $A(U)(W):=A(U\cap V)$. It is then clear that an asymptotic morphism ${\varphi}:A(U)\to C_b(T,B(U))$ is approximately $X$-equivariant if and only if it is approximately $Y$-equivariant. This concludes the proof. Sky-Scraper Algebras {#sec: skyscraper} ==================== In this section, we consider C\*-algebras over a space $X$ with exactly one non-trivial fibre, called sky-scraper algebras. We use [@mdd_meyer Theorem 3.2] to exhibit a short-exact sequence that computes $E_X(A,B)$, where $A$ is a sky-scraper algebra. In the following section, we use this exact sequence to isolate those points where a $C[0,1]$-algebra is not locally trivial. \[ex: skyscraper\] Let $D$ a separable C\*-algebra and let $X$ be a topological space. Fix a point $x\in X$ and define $i_x(D)$ to be the C\*-algebra $D$ regarded as a C\*-algebra over $X$ by setting $$i_x(D)(U) = \begin{cases} D & : x \in U \\ 0 & : x \notin U. \end{cases}$$ $i_x(D)$ is called the sky-scraper algebra with fiber $D$ at the point $x$. If $X$ is locally compact, the corresponding action of $C_0(X)$ is given by $\iota(f)(d) := f(x)d$. \[thm: skyscraper\] Let $i_x(D)$ be a sky-scraper algebra as in Definition \[ex: skyscraper\] and assume that $X$ is second countable. 
Let $\{U_n\}_n$ be a neighbourhood basis of open neighbourhoods of $x$ such that $U_{n+1} \subset U_n \fa n \in \mathbb{N}.$ Then, for any separable C\*-algebra $B$ over $X$, there is a short exact sequence $${0 \to {\mathop{\varprojlim}\,\hspace{-0.06in}^{1}}{E^{\ast + 1}(D, B(U_n))} \to E_X^{\ast}(i_x(D), B) \to {\mathop{\varprojlim}}{E^{\ast}(D, B(U_n))} \to 0}$$ Let $Y := (X, \tau)$ be a topological space whose underlying space is $X$, but whose topology $\tau$ is the topology generated by the sets $\{U_n\}_n$. We claim that $$\label{eq:finite} E_X^{\ast}(i_x(D), B) = E_Y^{\ast}(i_x(D), B)$$ To see this, consider an asymptotic morphism $\psi_t : i_x(D) \to B$ and the induced map $\psi : i_x(D) \to C_b(T,B)$ (we do not use suspensions for ease of notation, but it is clear that the same argument holds with suspensions). From the definition of $i_x(D)$, we see that $\psi_t$ is approximately $X$-equivariant if and only if $$\psi(D) \subseteq C_b(T,B(U)) + C_0(T, B) \fa \,\, U \text{open}\, \subset X, \, x \in U.$$ On the other hand $\psi_t$ is approximately $Y$-equivariant, if and only if $$\psi(D) \subseteq C_b(T,B(U_n)) + C_0(T, B) \fa \, n \in \mathbb{N}$$ But if $U \subset X$ is open and $x \in U$, then there is $n_0 \in \mathbb{N}$ such that $U_{n_0} \subset U$. Thus, if $\psi_t$ is asymptotically $Y$-equivariant, then $$\psi(D) \subseteq C_b(T,B(U_{n_0})) + C_0(T, B) \subseteq C_b(T,B(U)) + C_0(T, B)$$ and hence $\psi_t$ will be approximately $X$-equivariant as well. The converse is obvious since every open set in $Y$ is already open in $X$. Since the same argument applies to homotopies $\Phi_t : i_x(D) \to C([0,1],B)$, we obtain . With a view to apply the ${\mathop{\varprojlim}\,\hspace{-0.06in}^{1}}$-sequence from [@mdd_meyer Theorem 3.2], we define $ X_n := (X, \tau_n),$ where the topology $\tau_n$ is generated by the sets $\{U_1, U_2, \ldots, U_n\}$. We now claim that $$E_{X_n}^{\ast}(i_x(D), B) \cong E^{\ast}(D, B(U_n)).$$ As before (omitting suspensions), we consider an asymptotic morphism $\psi_t : i_x(D) \to B$, and note that, since $U_n \subset U_i$ for each $i \leq n$, $\psi_t$ is approximately $X_n$-equivariant if and only if $$\psi(D) \subseteq C_b(T,B(U_n)) + C_0(T, B).$$ Since $U_n \subset X_n$ is open, we may apply Lemma \[lem: restriction\] to obtain a map $$\eta : E_{X_n}(i_x(D), B) \to E(D, B(U_n)), \qquad {\llbracket \psi_t \rrbracket} \mapsto {\llbracket \psi_t^{U_n} \rrbracket}.$$ We claim that $\eta$ is bijective: Suppose that ${\llbracket \psi_t^{U_n} \rrbracket} = {\llbracket {\varphi}_t^{U_n} \rrbracket}$ in $E(D, B(U_n))$. Then there is a homotopy $\Phi_t: D \to C[0,1]\otimes B(U_n)$ connecting $\psi_t^{U_n}$ to ${\varphi}_t^{U_n}.$ Since $$\Phi_t(D) \subset C[0,1]\otimes B(U_n) \subset C[0,1]\otimes B,$$ $\Phi_t$ is an asymptotically $X_n$-equivariant map from $i_x(D)$ to $C[0,1]\otimes B$ connecting $\psi_t^{U_n}$ and ${\varphi}_t^{U_n}$. But by Lemma \[lem: restriction\], $\psi_t \sim \psi_t^{U_n}$ and ${\varphi}_t \sim {\varphi}_t^{U_n}$, and hence ${\llbracket \psi_t \rrbracket} = {\llbracket {\varphi}_t \rrbracket}$ in $E_{X_n}(i_x(D), B)$ and hence $\eta$ is injective.\ For surjectivity, we observe that any given asymptotic morphism ${\varphi}_t : D \to B(U_n)$, can be viewed as an $X_n$-equivariant asymptotic morphism $\psi_t:i_x(D)\to B$. We are now in a position to complete the proof. 
Since the collection $\{U_n\}$ forms a countable basis for the topological space $Y,$ we may apply [@mdd_meyer Theorem 3.2] to obtain a short exact sequence $${0 \to {\mathop{\varprojlim}\,\hspace{-0.06in}^{1}}{E^{\ast + 1}_{X_n}(i_x(D),B)} \to E_Y^{\ast}(i_x(D),B) \to {\mathop{\varprojlim}}{E^{\ast}_{X_n}(i_x(D),B)} \to 0}.$$ By our earlier identifications, this reduces to $${0 \to {\mathop{\varprojlim}\,\hspace{-0.06in}^{1}}{E^{\ast + 1}(D, B(U_n))} \to E_X^{\ast}(i_x(D), B) \to {\mathop{\varprojlim}}{E^{\ast}(D, B(U_n))} \to 0}. \qedhere$$ We now list some corollaries of Theorem \[thm: skyscraper\] which will be useful to us in the next section. \[cor: cor\_sky\_1\] If $U \subset X$ is an open set such that $x \notin \overline{U}$, then $$E_X^{\ast}(i_x(D), B(U)) = 0.$$ There exists an open neighbourhood $O$ of $x$ such that $O\cap U = \emptyset$. Consider a sequence of neighbourhoods $(U_n)_n$ of $x$ as in Theorem \[thm: skyscraper\] such that $U_1 = O$. Then $$E^{\ast}(D, B(U)(U_n)) = E^{\ast}(D, B(U\cap U_n)) \cong 0,$$ and the result follows from Theorem \[thm: skyscraper\]. \[cor: cor\_sky\_2\] Let $A$, $B$, $D$, $\{U_n\}_n$ and $x$ be as in Theorem \[thm: skyscraper\]. Suppose that the all the inclusions $B(U_{n+1})\hookrightarrow B(U_{n})$ are equivalences in E-theory. Then, for any fixed $k \in \mathbb{N}$: $$E_X^{\ast}(i_x(D), B) \cong E^{\ast}(D, B(U_k)).$$ The assumption that the inclusion maps $B(U_{n+1}) \hookrightarrow B(U_n)$ is an equivalence in E-theory implies that ${\mathop{\varprojlim}}{E^{\ast}(D, B(U_n))} \cong E^{\ast}(D, B(U_k))$ and $ {\mathop{\varprojlim}\,\hspace{-0.06in}^{1}}{E^{\ast+1}(D, B(U_n))} \cong 0. $ The conclusion follows now from Theorem \[thm: skyscraper\]. \[cor: e\_trivial\] Let $U \subset [0,1]$ be an open interval. For any two separable C\*-algebras $D, H$ $$E_{[0,1]}^{\ast}(C_0(U)\otimes D, C_0(U)\otimes H) \cong E^{\ast}(D,H)$$ Suppose that $U = (a,b)$, then by Proposition \[prop: restriction\], we may assume without loss of generality that $a = 0$ and $b = 1$. Now by Theorem \[prop: open\_set\] $$E_{[0,1]}^{\ast}(C_0(0,1)\otimes D, C_0(0,1)\otimes H) \cong E_{[0,1]}^{\ast}(C_0(0,1)\otimes D, C[0,1]\otimes H)$$ Now consider the short exact sequence $${0 \to C_0(0,1)\otimes D \to C[0,1]\otimes D \to i_0(D)\oplus i_1(D) \to 0}.$$ By Corollary \[cor: cor\_sky\_1\], $$E_{[0,1]}^{\ast}(i_0(D), C[0,1]\otimes H) \cong E^{\ast}(D, C_0[0,1)\otimes H) \cong 0$$ since $C_0[0,1)\otimes H$ is contractible. Similarly $$E_{[0,1]}^{\ast}(i_1(D), C[0,1]\otimes H) \cong 0.$$ Hence by using the six-term exact sequence in the first variable of $E^*_{[0,1]}$ and [@mdd_meyer Lemma 2.30] we obtain $$\begin{split} E_{[0,1]}^{\ast}(C_0(0,1)\otimes D, C_0(0,1)\otimes H) & \cong E_{[0,1]}^{\ast}(C[0,1]\otimes D, C[0,1]\otimes H) \\ & \cong E^{\ast}(D, C[0,1]\otimes H) \\ & \cong E^{\ast}(D, H) \end{split}$$ Now suppose $U = [0,a)$ or $U = (b,1]$, and since the proofs are identical, we assume that $U = [0,a)$. Furthermore, by using Proposition \[prop: restriction\], we may assume without loss of generality that $a = 1$. Now the proof is identical to the first part, except that we use the short exact sequence $${0 \to C_0[0,1)\otimes D \to C[0,1]\otimes D \to i_1(D) \to 0}$$ instead. Elementary $C[0,1]$-algebras {#sec: elementary} ============================ A $C[0,1]$-algebra is said to be locally trivial at a point $x \in [0,1]$ if there is an open neighborhood $U$ of $x$, and a C\* algebra $D$ such that $A(U) \cong C_0(U)\otimes D$. 
If $A$ is not locally trivial at $x$, we say that $x$ is a singular point of $A$.\ By an *elementary $C[0,1]$-algebra*, we mean an algebra which is locally trivial at all but finitely many points and moreover the algebra has a specific structure at the singular points as described below in Definition \[def:elementary-1\]. The importance of such algebras comes from the following theorem due to the first author and Elliott. \[thm:structure\][@mdd_elliott Theorem 6.2] Let $\mathcal{C}$ be a class of unital semi-projective Kirchberg algebras. Let $A$ be a separable unital continuous $C[0,1]$-algebra such that all of its fibers are inductive limits of sequences in $\mathcal{C}$. Then, there exists an inductive system $(A_k, \varphi_k)$ consisting of unital elementary $C[0,1]$-algebras with fibers in $\mathcal{C}$ and unital morphisms of $C[0,1]$-algebras $\varphi_k \in \text{\textnormal{Hom}}(A_k,A_{k+1})$ such that $$A \cong {\mathop{\varinjlim}}(A_k, \varphi_k)$$ A similar result is valid if one assumes that all the C\*-algebras in $\mathcal{C}$ are stable rather than unital. Theorem \[thm:structure\] applies to all continuous $C[0,1]$-algebras whose fibers are Kirchberg algebras satisfying the UCT and having torsion free $K_1$-groups. Furthermore, by [@mdd_fiberwise Theorem 2.5], any separable nuclear continuous $C[0,1]$-algebra over $[0,1]$ is $KK_{[0,1]}$-equivalent to a separable continuous unital $C[0,1]$-algebra whose fibers are Kirchberg algebras. Thus, the elementary $C[0,1]$-algebras are basic building blocks (in a K-theoretical sense) of all continuous $C[0,1]$-algebras.\ Elementary $C[0,1]$-algebras are given by the following data: Let $\mathcal{F}$ be a fixed class of separable C\*-algebras. Let $X = [0,1]$, and consider a partition $\mathcal{P}$ of $[0,1]$ given by $$0 = x_0 <x_1 < \ldots < x_n < x_{n+1} = 1$$ Write $F := \{x_1, x_2, \ldots, x_n\}$. We define a $C[0,1]$-algebra which is locally trivial at all points except possibly this finite set $F$ and has fibers in the class $\mathcal{F}$.\ Suppose that we are given C\*-algebras $$\{D_1, D_2, \ldots, D_n, H_1, H_2, \ldots, H_{n+1}\} \subset \mathcal{F}$$ and \*-homomorphisms $$\gamma_{i,0} : D_i \to H_i \qquad \gamma_{i,1} : D_i \to H_{i+1}$$ Define $$\begin{split} A = \begin{Bmatrix} ((d_1, d_2, \ldots, d_n), (h_1, h_2,\ldots, h_{n+1})) :& d_i \in D_i, h_i \in C[x_{i-1},x_i]\otimes H_i \\ & \text{ s.t. } \gamma_{i,j}(d_i) = h_{i+j}(x_i) \\ &\forall i \in \{1,2,\ldots, n\}, j\in \{0,1\} \end{Bmatrix} \end{split}$$ In other words, $A$ is the pull-back of the diagram $$\begin{CD} A @>>> \bigoplus_{i=1}^{n+1} C[x_{i-1},x_i]\otimes H_i \\ @VVV @VV{eval}V \\ \bigoplus_{i=1}^n D_i @> (\gamma_{i,0}, \gamma_{i,1}) >> \bigoplus_{i=1}^n H_i\oplus H_{i+1} \end{CD}$$ \[def:elementary-1\] A $C[0,1]$-algebra $A$ as above that is associated to the partition $\mathcal{P}$ with fibers in $\mathcal{F}$ and which satisfies the condition that *for each $i \in \{1,2,\ldots, n\}$, either $\gamma_{i,0}$ or $\gamma_{i,1}$ is the identity map,* is called an elementary $C[0,1]$-algebra. We denote the class of all such algebras by $\mathcal{E}(\mathcal{P}, \mathcal{F})$. Note that if $\mathcal{P}_1$ is a refinement of $\mathcal{P}_2$, then $\mathcal{E}(\mathcal{P}_2, \mathcal{F}) \subset \mathcal{E}(\mathcal{P}_1, \mathcal{F})$ since we may add singularities by choosing the maps $\gamma_{i,j}$ to be the identity maps. 
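For concreteness, here is the simplest instance of the definition, written out in our own notation rather than quoted from [@mdd_elliott]: take $n=1$, a single singular point $x_1$, and $\gamma_{1,0}=\mathrm{id}$ (so that $H_1=D_1$). Then the pull-back description above reduces to $$A \cong \left\{ (h_1,h_2)\in \big(C[0,x_1]\otimes D_1\big)\oplus \big(C[x_1,1]\otimes H_2\big) \;:\; \gamma_{1,1}\big(h_1(x_1)\big)=h_2(x_1) \right\},$$ a $C[0,1]$-algebra with fibers $D_1$ over $[0,x_1]$ and $H_2$ over $(x_1,1]$, locally trivial away from $x_1$.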
We define $$\mathcal{E}(\mathcal{F}) := \bigcup_{\mathcal{P}}\mathcal{E}(\mathcal{P}, \mathcal{F})=\text{class of all elementary $C[0,1]$-algebras with fibers in } \mathcal{F}.$$ When we write $A, B \in \mathcal{E}(\mathcal{F})$, we implicitly mean that we are choosing a common partition $\mathcal{P}$ as above. We are now concerned with computing $E_X(A,B)$ for $A, B \in \mathcal{E}(\mathcal{F})$. In order to simplify our future work, we fix $A$ as above, and define $B$ as $$\begin{split} B = \begin{Bmatrix} ((d_1', d_2', \ldots, d_n'), (h_1', h_2',\ldots, h_{n+1}')) :& d_i' \in D_i', h_i' \in C[x_{i-1},x_i]\otimes H_i' \\ & \text{ s.t. } \gamma_{i,j}'(d_i') = h_{i+j}'(x_i) \\ & \forall i\in \{1,2,\ldots, n\}, j\in \{0,1\} \end{Bmatrix} \end{split}$$ In other words, the fibers of $A$ will be $D_i$ or $H_j$ and the connecting maps will be $\gamma_{i,j}$, while the corresponding fibers of $B$ will be $D_i'$ or $H_j'$, and the connecting maps of $B$ will be $\gamma_{i,j}'$.\ Now consider the partition $\mathcal{P}$ as above, and write $$U = [0,1]\setminus F = \bigsqcup_{i=1}^{n+1} U_i$$ where $U_1 = [0,x_1), U_{n+1} = (x_n,1]$ and $U_i = (x_{i-1}, x_i)$ for $2 \leq i \leq n$. The short exact sequence $$\label{eqn: ses_a} {0 \to A(U) \to A \to A(F) \to 0}$$ yields a long exact sequence in E-theory $$\label{eqn: les_eab} \begin{CD} E_X(A(F),B) @>>> E_X(A, B) @>>> E_X(A(U), B) \\ @AAA @. @VV\delta V \\ E_X^1(A(U), B) @<<< E_X^1(A,B) @<<< E_X^1(A(F), B) \end{CD}$$ whose boundary map we denote by $$\delta : E_X(A(U),B) \to E_X^1(A(F),B).$$ As we will show later, this map $\delta$ holds the key to understanding $E_X(A,B)$. We begin by identifying the domain of this map. \[lem: au\_b\] The inclusion map $B(U) \hookrightarrow B$ induces an isomorphism $$E_X(A(U),B)\cong E_X(A(U),B(U)) \cong \bigoplus_{i=1}^{n+1} E(H_i,H_i').$$ The first isomorphism follows from Proposition \[prop: open\_set\]. Furthermore, since $E_{X}(A(U_i),B(U_j))=0$ if $i\neq j$, by additivity of $E_X$ in each variable: $$\begin{split} E_X(A(U),B(U)) &\cong \bigoplus_{i=1}^{n+1} E_X(A(U_i), B(U_i)) \\ &\cong \bigoplus_{i=1}^{n+1} E_{\overline{U_i}}(A(U_i),B(U_i)) \qquad\qquad\text{(by Proposition \ref{prop: restriction})}\\ &\cong \bigoplus_{i=1}^{n+1} E(H_i,H_i') \qquad\qquad\quad\text{(by Corollary \ref{cor: e_trivial})} \end{split}$$ \[rem: delta\_u\] Consider the inclusion $B(U)\hookrightarrow B$ and the induced commutative diagram $$\begin{CD} E_X(A(U),B) @> \delta >> E_X^1(A(F),B) \\ @AAA @AA \iota A \\ E_X(A(U),B(U)) @> \Delta_A >> E_X^1(A(F),B(U)) \end{CD}$$ By Lemma \[lem: au\_b\], the vertical map on the left is an isomorphism, so $$\ker(\delta) \cong \ker(\iota\circ \Delta_A).$$ We now compute the map $\Delta_A$. Consider the short exact sequence $${0 \to A(U) \to A \to A(F) \to 0}$$ and the boundary element in E-theory obtained from this sequence, $$\delta_A \in E_X^1(A(F),A(U))$$ and note that $\Delta_A$ is given by multiplication by this element $$E_X^1(A(F),A(U)) \ni \delta_A \times E_X(A(U),B(U)) \xrightarrow{\Delta_A} E_X^1(A(F),B(U))$$ The next two lemmas help us compute this map. \[lem: af\_bu\] $ E_X^1(A(F),B(U)) \cong \bigoplus_{i=1}^n (E(D_i,H_i')\oplus E(D_i,H_{i+1}')). 
$ By additivity of $E_X$ $$\begin{split} E_X^1(A(F),B(U)) &\cong \bigoplus_{i=1}^n E_X^1(A(x_i), B(U)) \\ &\cong \bigoplus_{i=1}^n E_X^1(i_{x_i}(D_i), B(U_i\cup U_{i+1})) \qquad\text{(by Corollary \ref{cor: cor_sky_1})} \\ &\cong \bigoplus_{i=1}^n E^1(D_i,B(U_i\cup U_{i+1})) \qquad\qquad \text{(by Corollary \ref{cor: cor_sky_2})} \\ &\cong \bigoplus_{i=1}^n (E^1(D_i,C_0(U_i)\otimes H_i')\oplus E^1(D_i,C_0(U_{i+1})\otimes H_{i+1}')) \\ &\cong \bigoplus_{i=1}^n (E(D_i,H_i')\oplus E(D_i,H_{i+1}')). \end{split}$$ \[lem: delta\_a\] $ E_X^1(A(F),A(U)) \cong \bigoplus_{i=1}^n \left(E(D_i,H_i) \oplus E(D_i,H_{i+1})\right) $ and under this isomorphism $$\label{eqn: delta_a} \delta_A \mapsto (-{\llbracket \gamma_{i,0} \rrbracket}, {\llbracket \gamma_{i,1} \rrbracket})_{i=1}^n$$ The isomorphism from the statement follows from Lemma \[lem: af\_bu\] applied for $B=A$. In order to compute the image of $\delta_A$ under this isomorphism, we need the following notation: $U_{1,0} = [0,x_1]$, $U_{n,1} = [x_n,1]$, $U_{i,0} = (x_{i-1}, x_i]$ for $2 \leq i \leq n$, and $U_{i,1} = [x_i,x_{i+1})$ for $1 \leq i \leq n-1$. For each $i\in \{1,...,n\}$ and $j\in\{0,1\}$ consider the extension of $C[0,1]$-algebras $$\label{ds} {0 \to A(U_{i+j}) \to A(U_{i,j}) \to A(x_i) \to 0}$$ and the corresponding element $\delta_{i,j}$ that belongs to $E_X^1(A(x_i), A(U_{i+j}))$, which we may regard as a direct summand of $E^1_X(A(F),A(U))$. We are going to show that $$\delta_A=\bigoplus_{i=1}^n (\delta_{i,0}\oplus \delta_{i,1})\in \bigoplus_{i=1}^n \left(E_X^1(A(x_i), A(U_{i} ))\oplus E_X^1(A(x_i), A(U_{i+1} ))\right).$$ To this purpose we will write explicitly an expression for the Connes-Higson asymptotic morphism $(\gamma_t)_{t\in [0,1)}$ that defines $\delta_A$, see [@mdd_meyer Prop. 2.23]. Let $(u_i^t)_{t\in [0,1)}$ be a contractive positive approximate unit of $C_0(U_i)$. For each $i$, choose two continuous maps $\eta_{i,0}\in C_0(x_{i-1},x_i]$ and $\eta_{i,1}\in C_0[x_i,x_{i+1})$ taking values in $[0,1]$, equal to $1$ on a neighborhood of $x_i$, and such that $\eta_{i,1}\eta_{i+1,0}=0$ for $1\leq i <n$. It follows that we have the following asymptotic expression for $(\gamma_t)_{t\in [0,1)}:C_0(0,1)\otimes A(F)\to A(U)$, $$\gamma_t (f \otimes (d_i)_{i=1}^n)=\sum_{i=1}^n \, \left( f(u_i^t)\eta_{i,0} \otimes \gamma_{i,0}(d_{i}) + f(u_{i+1}^{t})\eta_{i,1} \otimes \gamma_{i,1}(d_{i})\right).$$ It is now clear that $\gamma_t$ decomposes as an orthogonal sum of components $\gamma^{i,j}_t(f\otimes d_i)=f(u_{i+j}^t)\eta_{i,j} \otimes \gamma_{i,j}(d_i)$, $1\leq i \leq n$, $0\leq j \leq 1.$ But we now recognize $\gamma^{i,j}_t$ as representing the Connes-Higson asymptotic morphism defined by the extension \eqref{ds}. Next we are going to identify its class. We focus on the point $x_i$, and consider the map of extensions $$\begin{CD} 0 @>>> C_0(U_{i+j})\otimes H_{i+j} @>>> A({U_{i,j}}) @>>> i_{x_i}(D_i) @>>> 0 \\ @. @VV = V @VVV @VV \gamma_{i,j} V @. 
\\ 0 @>>> C_0(U_{i+j})\otimes H_{i+j} @>>> C_0({U_{i,j}})\otimes H_{i+j} @>>> i_{x_i}(H_{i+j}) @>>> 0 \end{CD}$$ We apply the functor $E_X(\cdot, A(U_{i+j}))$ to this diagram, and consider the relevant part of the resulting commutative diagram $$\begin{CD} E_X(C_0(U_{i+j})\otimes H_{i+j}, A(U_{i+j})) @> \delta^{i,j}_A >> E_X^1(i_{x_i}(D_i), A(U_{i+j})) \\ @A = AA @AA \gamma_{i,j}^{\ast} A \\ E_X(C_0(U_{i+j})\otimes H_{i+j}, A(U_{i+j})) @> \delta^{i,j} >> E_X^1(i_{x_i}(H_{i+j}), A(U_{i+j})) \end{CD}$$ where the map $\delta^{i,j}$ is given by multiplication by the boundary element $${\llbracket \delta_t \rrbracket} \in E_X^1(i_{x_i}(H_{i+j}), C_0(U_{i+j})\otimes H_{i+j}) \cong E^1(H_{i+j}, C_0(U_{i+j})\otimes H_{i+j})$$ (by Corollary \[cor: cor\_sky\_2\]). If $j=0$, ${\llbracket \delta_t \rrbracket}$ corresponds under this isomorphism to the boundary map of the extension $${0 \to C_0(0,1)\otimes H_i \to C_0(0, 1]\otimes H_i \to H_i \to 0}$$ which can be identified with the element $-{\llbracket \text{id} \rrbracket} \in E(H_i, H_i)$. This accounts for the sign of the term ${\llbracket \gamma_{i,0} \rrbracket}$ in the expression \eqref{eqn: delta_a}.\ Similarly, if $j=1$, ${\llbracket \delta_t \rrbracket}$ corresponds to the boundary map of the extension of C\*-algebras $${0 \to C_0(0, 1)\otimes H_{i+1} \to C_0[0, 1)\otimes H_{i+1} \to H_{i+1} \to 0}$$ which can be identified with the element ${\llbracket \text{id} \rrbracket} \in E(H_{i+1},H_{i+1})$. This accounts for the difference in sign. The next lemma now follows from Remark \[rem: delta\_u\] and Lemmas \[lem: af\_bu\] and \[lem: delta\_a\]. \[lem: delta\_u\] There is a commutative diagram $$\begin{xy} \xymatrix{ E_X(A(U),B(U)) \ar[r]^{\Delta_A }\ar[d]_{\cong}& E_X^1(A(F),B(U)) \ar[d]_{\cong}\\ \bigoplus_{i=1}^{n+1} E(H_i,H_i') \ar[r] &\bigoplus_{i=1}^n (E(D_i,H_i')\oplus E(D_i,H_{i+1}')) } \end{xy}$$ Under these isomorphisms, $\Delta_A$ corresponds to the map $$(\beta_i)_{i=1}^{n+1} \mapsto (- \beta_i \circ {\llbracket \gamma_{i,0} \rrbracket}, \beta_{i+1}\circ {\llbracket \gamma_{i,1} \rrbracket})_{i=1}^n$$ \[rem: iota\] We saw in Remark \[rem: delta\_u\] that $$\ker(\delta) \cong \ker(\iota\circ \Delta_A)$$ where $\iota : E_X^1(A(F),B(U)) \to E_X^1(A(F),B)$ is induced by the inclusion $B(U) \hookrightarrow B$. In order to compute this kernel, consider the following long exact sequence coming from the extension ${0 \to B(U) \to B \to B(F) \to 0}:$ $$\begin{CD} E_X^1(A(F),B(U)) @> \iota >> E_X^1(A(F),B) @>>> E_X^1(A(F),B(F)) \\ @A \Delta_B AA @. 
@VVV \\ E_X(A(F),B(F)) @<<< E_X(A(F),B) @<<< E_X(A(F),B(U)) \end{CD}$$ Thus, $$\ker(\iota) = \text{Im}(\Delta_B)$$ where $\Delta_B$ is given by multiplication by the boundary element $$\delta_B \in E_X^1(B(F),B(U)).$$ As in Lemma \[lem: delta\_a\], we have $$E_X^1(B(F),B(U)) \cong \bigoplus_{i=1}^n \left(E(D_i',H_i') \oplus E(D_i',H_{i+1}')\right)$$ and under this isomorphism $$\label{eqn: delta_b} \delta_B \mapsto (-{\llbracket \gamma_{i,0}' \rrbracket}, {\llbracket \gamma_{i,1}' \rrbracket})_{i=1}^n$$ \[lem: Delta\] $$\begin{xy} \xymatrix{ E_X^1(A(F),B(U)) \ar[r]^-{\cong }& \bigoplus_{i=1}^n (E(D_i,H_i')\oplus E(D_i,H_{i+1}')) \\ E_X(A(F),B(F)) \ar[r]^{\cong}\ar[u]^{\Delta_B} &\bigoplus_{i=1}^n E(D_i,D_i')\ar[u] } \end{xy}$$ and under these isomorphisms $$\Delta_B((\alpha_i)_{i=1}^n) = (-{\llbracket \gamma_{i,0}' \rrbracket}\circ \alpha_i, {\llbracket \gamma_{i,1}' \rrbracket}\circ \alpha_i)$$ The map $\Delta_B$ is induced by the product $$E_X(A(F),B(F))\times \delta_B \in E_X^1(B(F),B(U))\to E_X^1(A(F),B(U)).$$ We have already described all the terms that appear in this composition. \[thm: delta\] For $A,B \in \mathcal{E}(\mathcal{F})$ as above $$\label{eqn: ex} \begin{split} \ker(\delta) = \begin{Bmatrix} (\beta_i) \in \bigoplus_{i=1}^{n+1} E(H_i,H_i') : & \exists (\alpha_i) \in \bigoplus_{i=1}^n E(D_i,D_i') \text{ s.t.} \\ & \beta_i\circ {\llbracket \gamma_{i,0} \rrbracket} = {\llbracket \gamma_{i,0}' \rrbracket}\circ \alpha_i \text{ and } \\ & \beta_{i+1} \circ {\llbracket \gamma_{i,1} \rrbracket} = {\llbracket \gamma_{i,1}' \rrbracket} \circ \alpha_i \end{Bmatrix} \end{split}$$ By Remarks \[rem: delta\_u\] and \[rem: iota\], $$\begin{split} \ker(\delta) & \cong \ker(\iota\circ\Delta_A) \\ & \cong \{\beta \in E_X(A(U),B(U)) : \Delta_A(\beta) \in \ker(\iota)=\text{Im}(\Delta_B) \} \\ &\cong \{\beta \in E_X(A(U),B(U)) : \exists \alpha \in E_X(A(F),B(F)) \text{ s.t. } \Delta_A(\beta) = \Delta_B(\alpha)\} \end{split}$$ The expression in \eqref{eqn: ex} now follows from the description of $\Delta_A$ and $\Delta_B$ from Lemmas \[lem: delta\_u\] and \[lem: Delta\]. \[defn:classF\] We now specify a type of class $\mathcal{F}$ for which we can explicitly compute $E_X(A,B)$ for any $A,B \in \mathcal{E}(\mathcal{F})$ using the machinery developed above. Let $\mathcal{F}$ be a class of separable C\*-algebras such that $E^1(D,D') = 0$ for all $D,D'\in \mathcal{F}$. \[lem: af\_b\] If $A,B \in \mathcal{E}(\mathcal{F})$ with $\mathcal{F}$ as in Definition \[defn:classF\], then $ E_X(A(F),B) = 0. $ By the additivity of $E_X(\cdot, B)$, $$E_X(A(F),B) \cong \bigoplus_{i=1}^n E_X(A(x_i), B) \cong \bigoplus_{i=1}^n E_X(i_{x_i}(D_i), B).$$ Choose $\epsilon > 0$ small enough so that $(x_i-\epsilon, x_i+\epsilon)\cap F = \{x_i\}$; then by Corollary \[cor: cor\_sky\_2\], $$E_X(i_{x_i}(D_i),B) \cong E(D_i,B(x_i-\epsilon, x_i+\epsilon)).$$ Assume first that $\gamma_{i,0}'$ is the identity map (the case where $\gamma_{i,1}'$ is the identity is entirely similar), and consider the short exact sequence $${0 \to B(x_i,x_i+\epsilon) \to B(x_i-\epsilon,x_i+\epsilon) \to B(x_i-\epsilon,x_i] \to 0}.$$ Since $B(x_i-\epsilon,x_i] \cong C_0(x_i-\epsilon,x_i]\otimes H_i'$, which is a cone, it follows that $$\begin{split} E(D_i,B(x_i-\epsilon,x_i+\epsilon)) & \cong E(D_i,B(x_i,x_i+\epsilon)) \\ & \cong E(D_i,SH_{i+1}') = 0 \end{split}$$ since $D_i, H_{i+1}' \in \mathcal{F}$. Recall that if $B \in \mathcal{E}(\mathcal{F})$, then by Definition \[def:elementary-1\] for each $i \in \{1,2,\ldots, n\}$, either $\gamma'_{i,0}$ or $\gamma'_{i,1}$ is the identity map. 
The corresponding index is denoted by $j(i)$ and $j'(i)=1-j(i)$. In particular this means that $H'_{i+j(i)}=D'_i$ and $\gamma'_{i,j(i)}=\mathrm{id}$. \[thm: ex\] If $A,B \in \mathcal{E}(\mathcal{F})$ with $\mathcal{F}$ as in Definition \[defn:classF\], then $$E_X(A,B) = \left\lbrace (\beta_i) \in \bigoplus_{i=1}^{n+1} E(H_i,H_i') : \beta_{i+j'(i)}\circ{\llbracket \gamma_{i,j'(i)} \rrbracket} = {\llbracket \gamma_{i,j'(i)}' \rrbracket}\circ \beta_{i+j(i)}\circ {\llbracket \gamma_{i,j(i)} \rrbracket} \right\rbrace$$ By Lemma \[lem: af\_b\] and the exact sequence \eqref{eqn: les_eab}, we see that $$E_X(A,B) \cong \ker(\delta : E_X(A(U),B) \to E_X^1(A(F),B)).$$ But $\ker(\delta)$ was computed in Theorem \[thm: delta\]. We deduce that $$\label{eq:calculation} \begin{split} E_X(A,B) = \begin{Bmatrix} (\beta_i) \in \bigoplus_{i=1}^{n+1} E(H_i,H_i') : & \exists (\alpha_i) \in \bigoplus_{i=1}^n E(D_i,D_i') \text{ s.t.} \\ & \beta_i\circ {\llbracket \gamma_{i,0} \rrbracket} = {\llbracket \gamma_{i,0}' \rrbracket}\circ \alpha_i \text{ and } \\ & \beta_{i+1} \circ {\llbracket \gamma_{i,1} \rrbracket} = {\llbracket \gamma_{i,1}' \rrbracket} \circ \alpha_i \end{Bmatrix}. \end{split}$$ The various maps in this description of $E_X(A,B)$ are illustrated in the diagram below. $$\begin{xy} \xymatrix{ H_i\ar[ddd]_{\beta_i} &{} & H_{i+1}\ar[ddd]_{\beta_{i+1}} &{} & H_{i+2}\ar[ddd]^{\beta_{i+2}} \\ {} & D_i\ar[lu]_{\gamma_{i,0}}\ar[ru]^{\gamma_{i,1}} \ar@{-->}[d]^{\alpha_i}&{}&{D_{i+1}}\ar[lu]_{\gamma_{i+1,0}}\ar[ru]^{\gamma_{i+1,1}}\ar@{-->}[d]^{\alpha_{i+1}} &{}\\ {} & D'_i\ar[ld]^{\gamma'_{i,0}}\ar[rd]_{\gamma'_{i,1}} &{}&{D'_{i+1}}\ar[ld]^{\gamma'_{i+1,0}}\ar[rd]_{\gamma'_{i+1,1}} &{}\\ H'_i &{} & H'_{i+1} &{} & H'_{i+2} \\ } \end{xy}$$ Let us note that the equations $$\label{eq:calculation++} \beta_i\circ {\llbracket \gamma_{i,0} \rrbracket} = {\llbracket \gamma_{i,0}' \rrbracket}\circ \alpha_i, \qquad \beta_{i+1} \circ {\llbracket \gamma_{i,1} \rrbracket} = {\llbracket \gamma_{i,1}' \rrbracket} \circ \alpha_i$$ determine $\alpha_i$ uniquely since either ${\llbracket \gamma'_{i,0} \rrbracket}=\mathrm{id}$ or ${\llbracket \gamma'_{i,1} \rrbracket}=\mathrm{id}$. Indeed, using the notation introduced above, we deduce from \eqref{eq:calculation++} that $\alpha_i=\beta_{i+j(i)}\circ {\llbracket \gamma_{i,j(i)} \rrbracket}$. Then we substitute this expression in the equation $\beta_{i+j'(i)}\circ {\llbracket \gamma_{i,j'(i)} \rrbracket}={\llbracket \gamma_{i,j'(i)}' \rrbracket}\circ \alpha_i$ to obtain that $$\label{eq:calculation+} \beta_{i+j'(i)}\circ{\llbracket \gamma_{i,j'(i)} \rrbracket} = {\llbracket \gamma_{i,j'(i)}' \rrbracket}\circ \beta_{i+j(i)}\circ {\llbracket \gamma_{i,j(i)} \rrbracket}.$$ Conversely, if \eqref{eq:calculation+} is satisfied for all $i$, then $\alpha_i:=\beta_{i+j(i)}\circ {\llbracket \gamma_{i,j(i)} \rrbracket}$ will satisfy both equations from \eqref{eq:calculation++}. \[cor:pullback\] Let $Y, Z$ be two closed sub-intervals of $[0,1]$ such that their endpoints are not in $F$. Then $E_{Y\cup Z}(A(Y\cup Z), B(Y\cup Z))$ is the pullback of the following diagram $$\begin{xy} \xymatrix{ E_{Y\cup Z}(A(Y\cup Z), B(Y\cup Z))\ar@{->}[r]\ar@{->}[d] & E_Y(A(Y),B(Y))\ar@{->}[d] \\ E_Z(A(Z),B(Z))\ar@{->}[r] & E_{Y\cap Z}(A(Y\cap Z), B(Y\cap Z)) } \end{xy}$$ If $I=[a,b]$ is a closed sub-interval of $[0,1]$ such that its endpoints are not in $F$, let $i^0_I,i^1_I$ be uniquely defined by the requirement that $a\in U_{i^0_I}$ and $b\in U_{i^1_I}$. Let $Y$ and $Z$ be as in the statement. If $Y\cap Z=\emptyset$ the result follows from Theorem \[thm: ex\]. 
Thus we may assume that $Y\cap Z\neq \emptyset$ and moreover that $i^0_Y\leq i^0_Z \leq i^1_Y \leq i^1_Z$. In this case $i^0_{Y\cap Z}=i^0_Z$, $i^1_{Y\cap Z}=i^1_Y$ and $i^0_{Y\cup Z}=i^0_Y$, $i^1_{Y\cup Z}=i^1_Z$. The statement follows now immediately, since by Theorem \[thm: ex\] we have that for each sub-interval $I$ as above $$E_I(A,B) = \left\lbrace (\beta_i)_{i^0_I\leq i\leq i^1_I} : (\beta_i) \ \text{satisfy}\ \eqref{eq:calculation+}\right\rbrace.\qedhere$$ Morphisms of the K-theory sheaf {#sec: k_theory} =============================== In this section, we apply Theorem \[thm: ex\] to compute the group $E_{[0,1]}(A,B)$ using K-theory. More precisely, we show that if $A$ and $B$ are elementary $C[0,1]$-algebras whose fibers satisfy the UCT and have $K_0$-groups that are free of finite rank and zero $K_1$-groups, then there is a natural isomorphism $$E_{[0,1]}(A,B) \cong {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$$ where $\mathbb{K}_0(\cdot)$ denotes the K-theory pre-sheaf, an invariant for $C[0,1]$-algebras introduced in [@mdd_elliott]. As an application, we give a partially new proof of the main classification result of [@mdd_elliott] which does not require two technical results, Theorem 2.6 and Theorem 8.1, from [@mdd_elliott], and instead uses results of Kirchberg [@kirchberg].\ We recall the following definition from [@mdd_elliott $\mathsection 4$]. Let $X$ denote the unit interval and let $A$ be a $C[0,1]$-algebra. Let $\mathcal{I}$ denote the set of all closed subintervals of $X$ with positive length. To each $I \in \mathcal{I}$, associate the group $K_0(A(I))$, and to each pair $I, J \in \mathcal{I}$ such that $J \subset I$, associate the map $$r_J^I = K_0(\pi_J^I): K_0(A(I)) \to K_0(A(J))$$ where $\pi_J^I : A(I) \to A(J)$ is the natural projection.\ This data gives a pre-sheaf on $\mathcal{I}$ which is denoted by $\mathbb{K}_0(A)$.\ A morphism of pre-sheaves ${\varphi}: \mathbb{K}_0(A) \to \mathbb{K}_0(B)$ consists of a family of maps ${\varphi}_I : K_0(A(I)) \to K_0(B(I))$ such that the following diagram commutes $$\label{eqn: k_map} \begin{CD} K_0(A(I)) @> {\varphi}_I >> K_0(B(I)) \\ @V r^I_J VV @VV r^I_J V \\ K_0(A(J)) @> {\varphi}_J >> K_0(B(J)) \end{CD}$$ The set of all such morphisms, denoted ${\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$, has an abelian group structure. Note that, by [@mdd_meyer Proposition 2.31], for each $I \in \mathcal{I}$, there is a natural restriction map $$E_X(A,B) \to E_I(A(I),B(I)) \to E(A(I),B(I))$$ Multiplying $K_0(A(I)) = E(\mathbb{C},A(I))$ with $E(A(I),B(I))$ gives a map $$E_X(A,B) \to {\text{Hom}(K_0(A(I)), K_0(B(I)))}$$ Furthermore, if $J \subset I$, then the naturality of the restriction map ensures that the diagram \eqref{eqn: k_map} commutes. Hence, we have a natural map $$\Gamma_{A,B} : E_X(A,B) \to {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}.$$ \[fdef:F0\] We now introduce a class of algebras for which this map is an isomorphism. Let $\mathcal{F}_0$ be the class of separable C\*-algebras $D$ satisfying the UCT and such that $K_0(D)$ is free of finite rank and $K_1(D) = 0$. We define $\mathcal{E}(\mathcal{F}_0)$ to be the class of all elementary $C[0,1]$-algebras whose fibers lie in $\mathcal{F}_0$. Let us note that the UCT implies that if $D,H \in \mathcal{F}_0$, then $E^1(D,H) = 0$, so $\mathcal{F}_0$ is a class of the type considered in Definition \[defn:classF\]. Thus the results from the previous section apply to members of $\mathcal{E}(\mathcal{F}_0)$. 
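As a quick orientation (an aside of ours, not part of the original argument), two standard Kirchberg algebras that belong to $\mathcal{F}_0$ are the Cuntz algebras $\mathcal{O}_\infty$ and $\mathcal{O}_2$, since both satisfy the UCT and $$K_0(\mathcal{O}_\infty)\cong \mathbb{Z},\qquad K_1(\mathcal{O}_\infty)=0,\qquad K_0(\mathcal{O}_2)=K_1(\mathcal{O}_2)=0;$$ by contrast, $\mathcal{O}_{n+1}$ for $n\geq 2$ is excluded because $K_0(\mathcal{O}_{n+1})\cong \mathbb{Z}/n$ is torsion and hence not free. After stabilization, $\mathcal{O}_\infty\otimes\mathcal{K}$ and $\mathcal{O}_2\otimes\mathcal{K}$ give examples compatible with the blanket stability assumption made later in this section. 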
Furthermore, for any $H,H' \in \mathcal{F}_0$, the UCT gives us an isomorphism $$\label{eqn: e_k_iso} E(H,H') \cong KK(H,H') \xrightarrow{\;\cong\;} {\text{Hom}(K_0(H), K_0(H'))}.$$ Our goal is to show that the map $\Gamma_{A,B}$ is an isomorphism if $A, B \in \mathcal{E}(\mathcal{F}_0)$. In order to do this, we begin by choosing a subset of closed intervals which, roughly speaking, will allow us to capture the K-theory pre-sheaf from a finite amount of data: For each $i \in \{1,2,\ldots, n\}$, choose closed subintervals $V_{i,0}\subset (x_{i-1}, x_i]$ and $V_{i,1}\subset [x_i, x_{i+1})$ both containing $x_i$ and such that $V_i = V_{i-1,1}\cap V_{i,0} $ is a nondegenerate interval. Using the notation from Theorem \[thm: ex\], we consider the group $$G(A,B) = \{({\varphi}_i) \in \bigoplus_{i=1}^{n+1}{\text{Hom}(K_0(A(V_i)), K_0(B(V_i)))} : {\varphi}_{i+j'(i)}\circ [\gamma_{i,j'(i)}] = [\gamma_{i,j'(i)}']\circ {\varphi}_{i+j(i)}\circ [\gamma_{i,j(i)}] \}$$ Here $[\gamma_{i,j}]$ stands for $K_0(\gamma_{i,j}):K_0(D_i)\to K_0(H_{i+j})$. \[rem: theta\] There is an isomorphism of groups $\theta : E_X(A,B) \to G(A,B)$. Since each $V_i$ is a closed interval and $A(V_i)=C(V_i, H_i)$, $B(V_i)=C(V_i,H_i')$, we can identify ${\text{Hom}(K_0(A(V_i)), K_0(B(V_i)))}$ with ${\text{Hom}(K_0(H_i), K_0(H'_i))}$. The result follows now from Theorem \[thm: ex\] and the isomorphism \eqref{eqn: e_k_iso}. The map $\theta$ is induced by the functor that takes an E-theory element to the morphism that it induces on K-theory groups. We now construct a map $\nu : {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))} \to G(A,B)$ such that $\nu\circ \Gamma_{A,B} = \theta$. \[lem: nu\_injective\] The map $\nu : {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))} \to G(A,B) $ given by ${\varphi}\mapsto ({\varphi}_{V_i})_{i=1}^{n+1}$ is well-defined and injective. For any closed interval $I=[a,b]\subset (x_{i-1}, x_{i+1})$ with $x_i\in I,$ we use the extension $${0 \to C_0[a, x_i)\otimes H_{i}\oplus C_0(x_i, b]\otimes H_{i+1} \to A(I) \to D_i \to 0}$$ to see that $K_0(A(I))\cong K_0(D_i)$. A similar argument for $B$ shows that $K_0(B(I))\cong K_0(D'_i)$. In particular $K_0(A(V_{i,0}))\cong K_0(D_i) \cong K_0(A(V_{i,1}))$ and $K_0(B(V_{i,0}))\cong K_0(D'_i) \cong K_0(B(V_{i,1})).$ It follows that if ${\varphi}\in {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$, then we can identify the two maps ${\varphi}_{V_{i,0}} = {\varphi}_{V_{i,1}} : K_0(D_i) \to K_0(D_i')$. On the other hand, consider the inclusion $V_{i+1} \subset V_{i,1}$, and note that $K_0(A(V_{i+1})) \cong K_0(H_{i+1})$. We now see that the restriction map $$r^{V_{i,1}}_{V_{i+1}}:K_0(A(V_{i,1})) \to K_0(A(V_{i+1}))$$ is given by $[\gamma_{i,1}]$. A similar property holds for $B$. Since any ${\varphi}\in {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$ is compatible with the restriction maps, the following diagram is commutative. $$\begin{CD} K_0(A(V_{i,1})) @> [\gamma_{i,1}] >> K_0(A(V_{i+1})) \\ @V {\varphi}_{V_{i,1}} VV @V V{\varphi}_{V_{i+1}} V \\ K_0(B(V_{i,1})) @> [\gamma_{i,1}'] >> K_0(B(V_{i+1})) \end{CD}$$ Thus, $$\label{eqn: alpha_i_1} {\varphi}_{V_{i+1}}\circ [\gamma_{i,1}] = [\gamma_{i,1}']\circ {\varphi}_{V_{i,1}}.$$ Applying the same argument to the inclusion $V_i \subset V_{i,0}$, we get $$\label{eqn: alpha_i_0} {\varphi}_{V_i}\circ [\gamma_{i,0}] = [\gamma_{i,0}']\circ {\varphi}_{V_{i,0}}.$$ We saw that ${\varphi}_{V_{i,0}} = {\varphi}_{V_{i,1}} : K_0(D_i) \to K_0(D_i')$. 
Since $\gamma'_{i,j(i)}=\mathrm{id}$, it follows from \eqref{eqn: alpha_i_0} and \eqref{eqn: alpha_i_1} that $${\varphi}_{V_{i+j'(i)}}\circ [\gamma_{i,j'(i)}] = [\gamma_{i,j'(i)}']\circ {\varphi}_{V_{i+j(i)}}\circ [\gamma_{i,j(i)}].$$ This shows that $\nu$ is well-defined. Now to prove injectivity, suppose that ${\varphi}_{V_i} = 0$ for all $i$. We need to show that ${\varphi}_I = 0$ for any non-degenerate interval $I \subset [0,1]$.\ Suppose first that $I$ contains at most one point of $F$. If $I\cap F = \emptyset$, then ${\varphi}_I = {\varphi}_{V_i}$ for some $i$, and there is nothing to prove. So suppose that $x_i \in I$ and that no other point of $F$ is in $I$. In that case, $K_0(A(I)) \cong K_0(A(x_i)) \cong K_0(D_i)$ and ${\varphi}_I$ can be identified with both ${\varphi}_{V_{i,0}}$ and ${\varphi}_{V_{i,1}}$, as seen earlier in the proof. Hence, one of the equations \eqref{eqn: alpha_i_0} or \eqref{eqn: alpha_i_1} (depending on which $\gamma_{i,j}'$ is the identity map) would ensure that ${\varphi}_I = 0$.\ Now consider any nondegenerate interval $I\subset [0,1]$ with $|I\cap F| \geq 2$, and write $I = I_1\cup I_2$, where $I_1$ and $I_2$ are two closed intervals such that $I_1\cap I_2 = \{x\}$ and $x \notin F$, and $I_1$ contains exactly one point of $F$. Then, $A(I)$ can be described as a pull-back $$\begin{CD} A(I) @>>> A(I_1) \\ @VVV @VVV \\ A(I_2) @>>> A(x) \end{CD}$$ Applying the Mayer-Vietoris sequence in K-theory, and using the fact that $K_1(A(x)) = 0$, we see that $$\begin{CD} 0 @>>> K_0(A(I)) @>>> K_0(A(I_1))\oplus K_0(A(I_2)) @>>> K_0(A(I_1\cap I_2))\\ @. @V {\varphi}_{I} VV @VV {\varphi}_{I_1}\oplus {\varphi}_{I_2} V @VVV\\ 0 @>>> K_0(B(I)) @>>> K_0(B(I_1))\oplus K_0(B(I_2)) @>>>K_0(B(I_1\cap I_2)) \end{CD}$$ Hence, it follows that if ${\varphi}_{I_1} = {\varphi}_{I_2} = 0$, then ${\varphi}_I = 0$. We know from the first part that ${\varphi}_{I_1} = 0$, so it suffices to prove that ${\varphi}_{I_2} = 0$. We then break up $I_2$ as we did with $I$ before and repeat the same process until we reach an $I_k$ such that $I_k$ contains at most one point of $F$, in which case ${\varphi}_{I_k} = 0$ and we can stop the inductive process. This proves the injectivity of $\nu$. \[thm: uct\] If $A,B \in \mathcal{E}(\mathcal{F}_0)$ (see Def. \[fdef:F0\]), then $\Gamma_{A,B} : E_X(A,B) \to {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$ is an isomorphism. The maps $ \theta : E_X(A,B) \to G(A,B)$ and $\nu : {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))} \to G(A,B) $ satisfy $\nu\circ \Gamma_{A,B} = \theta$. By Lemma \[rem: theta\], $\theta$ is bijective, and hence $\Gamma_{A,B}$ is injective. By Lemma \[lem: nu\_injective\], $\nu$ is injective, and hence $\Gamma_{A,B}$ is surjective as well. Let $A$ be a separable continuous field over $[0,1]$ whose fibers have vanishing $K_1$-groups. By [@mdd_elliott Proposition 4.1], $\mathbb{K}_0(A)$ is a sheaf. We shall use Theorem \[thm: uct\] to give a streamlined proof of the main result of [@mdd_elliott], see Theorem \[thm:DE\]. *For the remainder of this section we make the blanket assumption that all the continuous fields that we consider are separable and their fibers are stable Kirchberg C\*-algebras with vanishing $K_1$-groups.*\ Our definition of elementary $C[0,1]$-algebras given in Def. \[def:elementary-1\] is a bit more general than the definition of elementary algebras in the sense of [@mdd_elliott]. To make the distinction we will call the latter special elementary. Suppose that $A$ is a special elementary continuous field of Kirchberg algebras. 
This means that $A$ is defined as the pullback of a certain diagram $\mathcal{D}$. Here is the description of $\mathcal{D}$, where we adapt the notation from [@mdd_elliott] to the present setting.\ Consider a partition $0 = x_0 < x_1 < \ldots < x_{n+1} = 1$, where $n=2m$. Let $A$ be as in Def. \[def:elementary-1\], but we require that $ D_{2i-1} = H_{2i} = D_{2i}$ and $\gamma_{2i-1,1} = \text{id} = \gamma_{2i,0}. $ Set $Y = [x_0,x_1]\cup [x_2,x_3]\cup \ldots \cup [x_{2m},x_{2m+1}]$, $Z = [x_1,x_2]\cup [x_3,x_4]\cup \ldots \cup [x_{2m-1}, x_{2m}]$ and $F = \{x_1, x_2, \ldots, x_{2m}\} = Y\cap Z.$ Define $$H = \bigoplus_{i=0}^m C[x_{2i},x_{2i+1}]\otimes H_{2i+1}\quad \text{ and }\quad D = \bigoplus_{i=1}^m C[x_{2i-1},x_{2i}]\otimes D_{2i}$$ whence $$H(F) = H_1\oplus \left( \bigoplus_{i=2}^m (H_{2i-1}\oplus H_{2i-1}) \right ) \oplus H_{2m+1}\quad \text{and}\quad D(F)=\bigoplus_{i=1}^{2m }D_i.$$ Consider the diagram $\mathcal{D}$ given by $$\begin{CD} H @>{\pi}>> H(F) @< \gamma << D \end{CD}$$ where $\pi$ is the restriction map, and $\gamma$ is the composition of the restriction map $D\to D(F)$ with the map $\gamma':D(F)\to H(F)$ whose components are $\gamma_{2i-1,0}:D_{2i-1}\to H_{2i-1}$ and $\gamma_{2i,1}:D_{2i}\to H_{2i+1}$. $A$ is isomorphic to the pullback of the diagram $\mathcal{D}$ and we have an induced commutative diagram $$\begin{xy} \xymatrix{ A(Y)\ar[r]^{\pi}\ar[d] & A(F)\ar[d]_{\gamma'}& A(Z)\ar[l]_{\pi}\ar@{=}[d] \\ H\ar[r] & H(F) &D\ar[l]_{\gamma} } \end{xy}$$ Given $\mathcal{D}$ as above, and $B$ a continuous field over $[0,1]$, we denote by $\mathcal{D}B$ the diagram $$\begin{CD} B(Y) @>{\pi}>> B(F) @< {\pi} << B(Z). \end{CD}$$ Recall that a fibered morphism $\varphi \in \text{Hom}_{\mathcal{D}}(A, B)$ is given by a commutative diagram $$\begin{CD} H @>>> H(F) @<<< D \\ @V \varphi_Y VV @V \varphi_F VV @VV \varphi_Z V \\ B(Y)@>>> B(F) @<<< B(Z). \end{CD}$$ where the vertical arrows are monomorphisms of continuous fields. The combination of the two larger diagrams above gives a morphism of diagrams $\mathcal{D}A \to \mathcal{D}B$ which induces a fiberwise injective morphism of continuous fields $\widehat{\varphi}:A \to B$. A morphism of fields induced by a fibered morphism is called *elementary* [@mdd_elliott p.806]. As in [@mdd_elliott], denote by $K_0(\mathcal{D})$ the diagram $$\begin{CD} K_0(H) @>{\pi_{\ast}}>>K_0(H(F)) @<{\gamma_{\ast}} <<K_0(D). \end{CD}$$ $\text{\textnormal{Hom}}(K_0(\mathcal{D}), K_0(\mathcal{D}B))$ consists of all morphisms of diagrams of groups $$\begin{CD} K_0(H) @>>> K_0(H(F)) @<<< K_0(D) \\ @V \alpha_Y VV @V \alpha_F VV @VV \alpha_Z V \\ K_0(B(Y)) @>>> K_0(B(F)) @<<< K_0(B(Z)) \end{CD}$$ that preserve the direct sum decomposition of the K-theory groups induced by the underlying partition of $[0,1]$. It is a K-theory counterpart of $\text{Hom}_{\mathcal{D}}(A, B)$. In [@mdd_elliott Prop.5.1], it is shown that each $\alpha \in \text{Hom}(K_0(\mathcal{D}), K_0(\mathcal{D}B))$ induces a morphism of sheaves $\widehat{\alpha} \in {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}. $ Let us also note that a morphism $\beta\in {\text{\textnormal{Hom}}(\mathbb{K}_0(B), \mathbb{K}_0(B'))}$ induces by restriction a morphism $\mathcal{D}\beta\in \text{\textnormal{Hom}}(K_0(\mathcal{D}B), K_0(\mathcal{D}B'))$. To simplify notation, we will often write $\beta$ in place of $\mathcal{D}\beta$. [[@mdd_elliott Prop.5.1]]{} \[lemma: elementary\_diagram\] (i) Suppose that $K_0(H_i)$ and $K_0(D_i)$ are finitely generated. 
If $B = \varinjlim B_n$ is an inductive limit of continuous fields $B_n$ over $[0,1]$, then $$\text{Hom}(K_0(\mathcal{D}), K_0(\mathcal{D}B)) \cong \varinjlim\text{Hom}(K_0(\mathcal{D}), K_0(\mathcal{D}B_n))$$ (ii) If $\alpha \in \text{\textnormal{Hom}}(K_0(\mathcal{D}), K_0(\mathcal{D}B))$ and $\beta \in {\text{\textnormal{Hom}}(\mathbb{K}_0(B), \mathbb{K}_0(B'))}$, then $((\mathcal{D}\beta)\circ \alpha)\,\widehat{}=\beta\circ\widehat{\alpha}$. \(i) Since $\mathcal{D}$ is a finite diagram of finitely generated groups, the result follows using the continuity of the $K_0$-functor. (ii) is proved in [@mdd_elliott Prop.5.1]. We are now ready to reprove the classification theorem of [@mdd_elliott]: \[thm:DE\] Let $A,B$ be separable continuous fields over $[0,1]$ whose fibers are stable Kirchberg C\*-algebras satisfying the UCT, with torsion free $K_0$-groups and vanishing $K_1$-groups. If $\alpha \in {\text{\textnormal{Hom}}(\mathbb{K}_0(A), \mathbb{K}_0(B))}$ is an isomorphism of sheaves, then there is an isomorphism of continuous fields $\phi : A \to B$ such that $\phi_{\ast} = \alpha$. Recall that the inductive limit decomposition from Theorem \[thm:structure\] comes with more structure and properties that we now review. Specifically, in the inductive system $$A_1 \xrightarrow{\widehat{\varphi}_{1,2}} A_2 \xrightarrow{\widehat{\varphi}_{2,3}} \ldots \to A_n \xrightarrow{\widehat{\varphi}_{n,n+1}} A_{n+1} \to \ldots$$ with $A\cong \varinjlim A_k$, all connecting morphisms are elementary in the sense of [@mdd_elliott p.806]. In other words, each $A_k$ is the pull-back of a diagram $\mathcal{D}_k$, and each $\widehat{\varphi}_{k,k+1} \in \text{Hom}(A_k,A_{k+1})$ is induced by a fibered morphism $\varphi_{k,k+1}\in \text{Hom}_{\mathcal{D}_k}(A_k,A_{k+1})$. Moreover ${\varphi}_{k,\infty} \in \text{Hom}_{\mathcal{D}_k}(A_k,A)$ and $ \widehat{\varphi}_{k+1,\infty}\circ {\varphi}_{k,k+1}={\varphi}_{k,\infty}$. The fibers of $A_k$ are stable Kirchberg algebras whose $K_0$-groups are free of finite rank and their $K_1$-groups vanish. Similarly, let $(B_n)$ be a sequence approximating $B$, where $B_n$ is the pull-back of the diagram $\mathcal{D}_n'$, and ${\psi}_{n,n+1}\in \text{Hom}_{\mathcal{D}_n'}(B_n, B_{n+1}), {\psi}_{n,\infty} \in \text{Hom}_{\mathcal{D}_n'}(B_n,B)$ are the corresponding maps.\ By Lemma \[lemma: elementary\_diagram\](i), for $\alpha_1 := \alpha \circ (\varphi_{1,\infty})_{\ast}\in \text{Hom}(K_0(\mathcal{D}_1), K_0(\mathcal{D}_1 B))$, there is $m_1 \in \mathbb{N}$ and $\mu_1 \in \text{Hom}(K_0(\mathcal{D}_1), K_0(\mathcal{D}_1B_{m_1}))$ such that $\alpha_1 = (\widehat{\psi}_{m_1,\infty})_{\ast}\circ \mu_1$. Letting $n_1=1$ we have $\widehat{\mu}_1 \in {\text{\textnormal{Hom}}(\mathbb{K}_0(A_{n_1}), \mathbb{K}_0(B_{m_1}))}$. Similarly, for $\beta_1 := \alpha^{-1}\circ (\psi_{m_1,\infty})_{\ast} \in \text{Hom}(K_0(\mathcal{D}_{m_1}'),K_0(\mathcal{D}_{m_1}'A))$, there is $n_2 >n_1$ and $\eta_1 \in \text{Hom}(K_0(\mathcal{D}_{m_1}'), K_0(\mathcal{D}_{m_1}'A_{n_2}))$ such that $\beta_1 = (\widehat{\varphi}_{n_2,\infty})_{\ast}\circ \eta_1$. This gives $\widehat{\eta}_1 \in {\text{\textnormal{Hom}}(\mathbb{K}_0(B_{m_1}), \mathbb{K}_0(A_{n_2}))}$. 
Combining the equations $\alpha \circ (\varphi_{n_1,\infty})_{\ast}=(\widehat{\psi}_{m_1,\infty})_{\ast}\circ \mu_1$ and $ \alpha^{-1}\circ (\psi_{m_1,\infty})_{\ast}=(\widehat{\varphi}_{n_2,\infty})_{\ast}\circ \eta_1$, we use Lemma \[lemma: elementary\_diagram\](ii) to deduce that $(\widehat{\varphi}_{n_2,\infty})_\ast \circ \widehat{\eta}_1\circ \mu_1=(\widehat{\varphi}_{n_1,\infty})_\ast=(\widehat{\varphi}_{n_2,\infty})_\ast \circ ({\varphi}_{n_1,n_2})_\ast$. By Lemma \[lemma: elementary\_diagram\](i) we conclude that after increasing $n_2$, if necessary, we can arrange that $\widehat{\eta}_1\circ \mu_1= ({\varphi}_{n_1,n_2})_\ast$ and hence $\widehat{\eta}_1\circ \widehat{\mu}_1= (\widehat{\varphi}_{n_1,n_2})_\ast.$ By induction, we construct a commutative diagram $$\begin{xy} \xymatrix{ \mathbb{K}_0(A_{n_1}) \ar[dr]^{\widehat{\mu}_1} \ar[rr]^{(\widehat{\varphi}_{n_1,n_2})_{\ast}} &{} & \mathbb{K}_0(A_{n_2}) \ar[dr]\ar[rr] &{} & \mathbb{K}_0(A_{n_3}) \ar[r] & \ldots \ar[r] & \mathbb{K}_0(A) \ar@<1.ex>[d]^{\alpha} \\ & \mathbb{K}_0(B_{m_1}) \ar[ru]^{\widehat{\eta}_1}\ar[rr]^{(\widehat{\psi}_{m_1,m_2})_{\ast}}&{} & \mathbb{K}_0(B_{m_2}) \ar[ru]\ar[rr] &{} & \ldots \ar[r] & \mathbb{K}_0(B)\ar@<1.ex>[u]^{\alpha^{-1}} } \end{xy}$$ By Theorem \[thm: uct\], we replace the diagonal arrows by $E_X$-theory elements and hence by $KK_X$-elements, since all involved continuous fields are nuclear [@park_trout]. By Kirchberg’s results [@kirchberg], we can further replace these $KK_X$-elements by morphisms of fields which are fiberwise injective and moreover, each triangle is commutative up to asymptotic unitary equivalence. This yields an isomorphism $\phi: A\to B$ with $\phi_\ast=\alpha$, by applying Elliott’s intertwining argument [@rordam_stormer Sec. 2.3]. [^1]: M.D. was partially supported by NSF grant \#DMS–1101305.
Which is the best Kilimanjaro route? We get that question asked a lot. And rightly so! There are many routes that lead to the summit of Mount Kilimanjaro in Tanzania. Each of them has it’s own pros and cons and should therefore be thoroughly researched in order to make an informed and educated choice. And remember that this choice is very much a personal one. Answering which route is ‘best’ for Kilimanjaro is tricky because people want different things out of their Kilimanjaro climb. From our experience, we have established the following 4 elements to be what climbers value most: - Acclimatisation profile and therefore success rates - The beauty and variety of terrain and scenery - How long is spent on the mountain - The total cost of the climb Keep these things in mind when reading the rest of this article and making your decision. But firstly, to equip you with the right knowledge for choosing your route up the roof of Africa, let’s make a start on your Kilimanjaro preparation. Here we have outlined the 7 different established routes on Mount Kilimanjaro: The 7 Kilimanjaro routes - Lemosho Route The Lemosho is the most beautiful Kilimanjaro route - Machame Route The Machame is the most popular Kilimanjaro route - Marangu Route The Marangu route only offers hut accommodation - Rongai Route The Rongai is the only Kilimanjaro route that approaches from the North - Shira Route The Shira route approaches from the Western side of Kilimanjaro - Northern Circuit The Northern Circuit is the newest and longest Kilimanjaro route - Umbwe Route The Umbwe is the shortest, steepest and hardest Kilimanjaro route “Deciding on a route up Kilimanjaro is definitely a personal choice and should be done so with care.”Chris Sichalwe Climbing Kilimanjaro route considerations So each of these 7 Kilimanjaro routes has its own advantages and disadvantages. They vary in scenery, difficulty, duration, acclimatisation profile, popularity, accommodation options and more. This can all seem pretty daunting – so much to consider! However, if you can clarify your priorities from the start, the decision becomes more of a process of elimination, and therefore relatively easy. To get started, you should be asking yourself the following questions when choosing a Kilimanjaro route: ◯ What do you want to get out of your climb? Are you interested in photography, or the varied terrains and climatic zones of the mountain, or is achieving the summit your sole goal? This will inform your choice of route on Kilimanjaro. ◯ Are you looking for a good acclimatisation profile? This is one of the biggest considerations when choosing a Kilimanjaro route. Routes have good acclimatisation profiles when they have ample climb high sleep low opportunities. Read more about what this means and safety on Mount Kilimanjaro. ◯ Are you looking for scenery? Some routes are better than others when it comes to scenery and the variety of terrain. For example the Lemosho, Northern Circuit and Shira routes all have stunning scenery, whilst the Umbwe lacks variety. ◯ What is your (and your team’s) level of fitness? Are you travelling solo? Do you have a group to think about? If you are going in a group make sure you pick a route best suited to everyone in your team. ◯ Do you have trekking experience? Are you a seasoned hiker with experience at altitude? Or are you a first time trekker? If so there are some routes that would not be suitable for you, such as the Umbwe. ◯ How long have you got for your climb? 
All the routes have different durations, depending on the amount of time you want to spend on the mountain. Remember that climbing Kilimanjaro is not a race, and ideally you will have at least 7 days on the mountain to soak in everything. ◯ How much money can you spend on the climb? Longer treks are inevitably more expensive. Shorter treks are cheaper, but you may find that you compromise on your chances of summiting, prime example being the once popular Marangu route. ◯ What kind of accommodation do you want? Different Kilimanjaro routes have different accommodation options, so it is important to know whether your chosen route has camping or huts available. The only route with hut accommodation available is the Marangu. ◯ What time of year are you available to go? You can realistically climb Kilimanjaro at any time, but some months are just better than others. If you are climbing in rainy season, the Rongai route may be a good option for you because it receives less rainfall. ◯ Do you want to avoid the crowds? Consider opting for a route that is less frequented by climbers. Though no Kilimanjaro route is completely quiet, some routes are more popular than others. For instance opting for the Northern Circuit will give you a more relaxed climb with less people. What Kilimanjaro route do Follow Alice recommend? Put simply, in an ideal world speed and cost should not be your primary consideration when choosing your route for Kilimanjaro. For example, the Marangu route offers the cheapest price, but that cost saving is often negated by the poor success rates, with less than 50% of those who attempt it making it to the top. Our goal is not to send as many climbers to Kilimanjaro as possible. Instead, we want to have happy clients returning with memories to last a lifetime and a successful ‘seven summit’ under their belt. We therefore regularly promote and recommend 2 Kilimanjaro routes to our clients: the Machame route and Lemosho route. These are, overall, the most scenic and richest in variety. In addition, both offer great opportunities to acclimatise. We’ve run through all of the routes individually below, including the logistics, the pros and the cons, and visual aids in the form of maps and graphs. If you have any questions, please do feel free to contact us any time. All of the Follow Alice team have climbed Kilimanjaro at least once, and our local leader Chris has summited over 300 times. Between us we have the knowledge to advise you on the most suitable route for you. “At first I was a little overwhelmed by all the options! There were a lot of things to consider when it came to choosing the right route up Kilimanjaro. I’m glad I spent the time researching and getting advice from the guys at Follow Alice.” Robert Jensen Get your Kilimanjaro guide! All you need to know for a successful Kilimanjaro climb. “This was my first proper trek so I wanted to choose a route with a really good acclimatisation profile, but also one with a variety of terrain and scenery. I’m glad I opted for the Lemosho!” Stefano Lemosho Route We have to admit, we have a soft spot for the Lemosho route! Because of it’s versatility, scenery and a rather untouched, wild start to the climb, Lemosho is often considered the route with the most variety. Spotting large wildlife, like antelopes is not very common, but possible! One of the really unique things about this route is that it offers trekkers the experience of hiking across the Shira Plateau, one of the largest high altitude plateaus in the world. 
The route approaches from the Western side of the mountain and is less frequented than other popular Kilimanjaro routes. It is effectively a variant of the Machame route with only the first 2 days of the trek differing. The acclimatisation profile of the Lemosho route is great, with repeated climb high sleep low opportunities throughout leading to high success rates. Most people complete the Lemosho route in 7 days, but it can be extended by one day to give yourself a little longer to acclimatise if needed. Camping is the only available option for the Lemosho route. Machame Route The Machame route is our second ‘favourite’ route on Kilimanjaro. Together with Lemosho, it is widely considered the most scenic with beautiful views and a rich variety of terrain. Therefore, it’s no coincidence that it is a very popular Kilimanjaro route, with the latest figures suggesting that just over 20,000 people climb the Machame route each year. Its popularity is unfortunately also its only downfall, in that it can get quite overcrowded in peak season. The Marangu Route The Marangu route, nicknamed the ‘Coca Cola’ route, has, historically, been the most popular Kilimanjaro route. Around 12,000 people use it each year, most commonly trekkers on a budget. This is because it is the only Kilimanjaro route offering dormitory-like accommodation with mattresses and basic amenities. This makes it a popular choice for budget Kilimanjaro operators that don’t have the right equipment to offer other Kilimanjaro routes. The Marangu route also only takes 5 days to complete, which also makes it a cheaper option compared to the other routes. However, the acclimatisation profile is mediocre, which leads to a very low success rate, therefore not offering the best value for money. And whilst it offers rewarding views from the Saddle, it is less scenic than other Kilimanjaro routes due to using the same trail for the ascent and descent. Otherwise, the route itself is easy with little to no steep parts. The scenery can be beautiful with a rain forest section and moorlands. Rongai Route The Rongai route is basically one long hike with a very gentle gradient and a low difficulty level. It is the only route that approaches from the Northern side of the mountain and is less frequently climbed. Only around 4000 people climb it each year. It is similar to the Marangu route due to its lack of climb high sleep low opportunities. It is therefore recommended to do the 7 day itinerary rather than the 6 day, giving you an important extra day to adjust to the altitude. It’s generally considered a good alternative to the more crowded routes such as the Marangu, especially for those climbers preferring camping over huts. Shira Route The Shira route is similar to the Lemosho except that its starting point is much higher. Approaching from the far Western side of the mountain, to reach the starting point you are looking at a lengthy transfer of around 4 hours to get to the Londorossi Park Gate at 2200m, and then a 4×4 ride which takes you up to the Simba River where the trail starts. You make your first night’s stop at 3500m! Starting at a higher altitude means that you can risk gaining altitude too quickly at the very start. If you do plan on doing the Shira Route, a pre-acclimatisation trek is recommended. This route features camping accommodation throughout the route, and is a quieter route than the others with a fantastic balance of scenery and fewer people. 
Northern Circuit Route The Northern Circuit route is both the newest as well as the longest route up to the peak. Some companies have already started to nickname it the 360 route or the Grand Traverse. The long journey allows for great acclimatisation leading to high success rates. Only a small number of climbers choose the Northern circuit route due to the additional time needed to complete it. It starts from the same point as the Lemosho route and offers similarly scenic views. Camping is the available option. Umbwe Route The Umbwe route is the shortest and steepest route on Kilimanjaro. It’s probably the least used trail due to the poor acclimatisation profile, with only 589 people climbing each year. Success rates can be low accordingly. The route approaches from the south and camping is the available option. Pre-acclimatisation is recommended for the ones who choose this route. Any questions?
https://www.followalice.com/kilimanjaro-routes/
Article 425 provides that “whoever, in anonymous or signed writing, image, symbol, or emblem, threatens to commit a crime against persons or property is punishable by imprisonment for one to three years and a fine of MAD 200 ($20) to MAD 500 ($50).” Meanwhile, Article 427 reads “if the threat provided for in article 425 … has been verbal, the penalty is imprisonment from six months to two years and a fine of MAD 200 ($20) to MAD 250 ($25).” As for animal massacre, Article 603 provides that anyone who kills or mutilates animals “without necessity” is punishable with imprisonment from six days to three months and will receive a fine of MAD 200 ($20) to MAD 300 ($30), depending on where the offense occurred. According to the shepherd’s lawyer, Mountassir Bouabib, the sentence is less severe than the punishments the penal code suggests.
Frequently Asked Questions Until when can I modify the content of my cart? What locations can I get my items delivered to? Do I need to register if I only want to check the product range? How can I register on the site? How can I log in to the site? How can I shop in the Online store? How can I pick a time-slot for delivery and how much does this service cost? How can I set the delivery address? What products can I purchase in the Auchan Online store? Are the online prices any different from Auchan store prices? How can I find a product? How can I place a product in my cart, and how can I modify the content of my cart later? Is there a limit to the number of products I can buy? What do I need to do if I placed all the products I want to buy into the cart and I would like to proceed? What happens during the delivery process? What can I do if I am not happy with the substitute product? How do you treat the products on which there is a deposit and their packing? When do I receive the products I purchased? How do I receive the invoice of the online shopping? Is there a minimum age under which one cannot take over the delivered items? Will the delivery assistant carry the items to my door even though I live upstairs? What happens if I am late for the delivery? What can I do? What happens if the product I chose is not available? Where can I modify the content of my cart? Until when can I modify the content of my cart? You may modify your order until 10 pm on the day prior to delivery day. The cart can be reopened for modification, and all modifications must be finished by 10 pm, otherwise your items cannot be delivered in the time-slot booked for the next day. Basic Trust points for your online shopping will be credited to your account. Trust points for promotional items will not be credited, but we are working on our system so that these points will also be credited in the future. You can engage the online order and delivery service in all districts of Budapest and in a number of settlements in the proximity of the capital city. Click here to view our map and check if we deliver to your address. You do not need to register to check the product range or to place items in the cart. Registration is necessary if you wish to purchase the products. You can register in a few simple steps! The easiest way to give your data is to click on the [Enter the store] menu item. With one e-mail address you may register only one account. If you wish to log in, please click on [Log in] and enter your registered username and password. You do not need a registration to start shopping in the Online store, therefore you may freely browse our product categories and check the product details on the site. You can place items into your cart from the list view or from the product view. You can easily find the products through the search bar. 
It is important to know that after completing your shopping, you need to register in order to finalise your order. The earliest time-slot for delivery can be booked on the day following the submittal of your order. You can book a time-slot even two weeks in advance. Firstly, choose the day that suits you best in the table, then pick a two-hour time-slot. In the table you can check the service fees that belong to the various time-slots. Once you pick the time-slot, the service fee will be automatically added to the total price of your order. You can set your delivery address while finalising your order. First, provide the zip code of your settlement. You can set more than one address within the delivery area. Of course, the same rule applies to the promotional items, too. The prices on the website are identical with the ones you may see on the price tags in our stores at the moment of your shopping. There are several ways to find a product: Products are listed in categories. You can simply find products using the search bar, and you can place the items in your shopping cart. You can add products to your cart on any product page of the online store. By clicking on the shopping cart icon you can easily delete the selected product or you can modify the quantity. For every item there is an individual quantitative limit which shows the maximum quantity the company undertakes to deliver. Auchan aims to make the products available to as many customers as possible in the delivery areas, which is why the maximum orderable quantity of items has been set. The quantitative limit is clearly indicated in each case, as the system will not allow selecting a higher amount or more pieces of a given item. The shopping process is made up of three main steps that can be performed in any order you like: Placing the products in the cart — Booking a time-slot for delivery — Selecting delivery address and payment method. As the last step you will be requested to confirm the selected products. Before the last step, you can modify the content of your cart, the time-slot for delivery and the delivery address at any time. Finally you need to submit your order. If you successfully submitted the order, you will receive a confirmation e-mail from Auchan. Your products will be delivered in a temperature-controlled van by one of our delivery assistants. When taking receipt of the products, you will of course have the opportunity to check if all your products have been delivered fresh, in prime quality and in the ordered quantity. If the product you wish to buy is not available on the day of the delivery, the delivery assistant will deliver a similar product (substitute product). Of course, you do not have to accept the substitute product. If you decline to accept the substitute product, you will only need to pay for the items you actually take receipt of. You can refuse to accept any item if you find that the item fails to meet your requirements. Substitute products are highlighted on the delivery note and will be placed in a bag of different colour. 
If you decide that certain products do not meet your requirements, please feel free to give them back to the delivery assistant. The assistant will then deduct the price of the returned product(s) from the total amount you need to pay. The prices of products with a deposit on their packing include the price of the packing (deposit), too. The price of the packing is always displayed on the invoice as a separate item. You can return such packing to any Auchan store, where the deposit will be refunded. Returnable packing cannot be handed over directly to the delivery assistant. During your shopping you need to book a time-slot when your products will be delivered to you. You can select one of the available 2-hour time-slots. During the test period we will be delivering from Monday to Friday. It is important that for all time-slots there is a maximum number of deliveries we can fulfil. To be able to book your ideal time-slot, we suggest you prepare your shopping list in advance and submit your order in time. You can submit your order even 2 weeks prior to the delivery. Please let our delivery assistant know if you need a VAT invoice. We will send you the electronic invoice of the items you actually took receipt of and paid for via e-mail.
http://salsamor.hu/884-hol-tudok-venni.php
When Midway mother Amy Roberts learned that her daughters wanted to set up a lemonade stand in their front yard last week, she couldn’t help but take a quick trip down memory lane. “I did it when I was a kid,” she said. “They made the lemonade themselves. They bought their own stuff with their allowance money.” But the nostalgia ended last Wednesday afternoon when an unidentified Midway Police Department officer, accompanied by Police Chief Kelli Morningstar, told the girls they no longer were allowed to sell their lemonade, she said. It all began when 10-year-old Skylar Roberts wanted to raise money for a trip to Splash in the Boro Waterpark in Statesboro. She enlisted the help of her sister, Kasity Dixon, 14, to run the stand. “Everyone in the neighborhood thought it was absolutely adorable. The girls at the bank were getting change and asked what time they should come by,” Amy Roberts said. “They were even announcing it on the IGA intercom that my kids were selling lemonade.” The girls made $30 when they were open for business the afternoon of June 28 — and they even sold lemonade to two policemen in a marked Midway Police car, Amy Roberts said. The girls, excited about their previous day’s earnings, decided to set up at about 2 p.m. June 29. According to Amy Roberts, who said she heard the scene unfold from inside her home, this reportedly is what happened: Sometime around 4 p.m., Morningstar, who was in the passenger side of a squad car that a male officer was driving, stopped as the cruiser entered the subdivision and told the girls they were not allowed to have their sign posted. Some time passed, Roberts said, and as the officers exited the subdivision, they stopped again. “I heard (the male officer) yell at them and say, ‘Girls! This is your last warning — you can’t sell it at all,’” Amy Roberts said. When she heard the exchange, she told her daughters, who were shaken by the scolding, to clean up the stand. In a phone interview with the Courier, Morningstar said that while entering the subdivision, the male officer, a rookie in training, told the girls they were not allowed to sell the lemonade. When the officers passed by on the way out of the development, they saw that the girls had not removed the stand. “They were told they could not sell their lemonade,” she said, adding that she did not see the girl’s mother accompanying them. “We do have several guidelines in place, regardless of who you are or your age.” Morningstar said she simply was enforcing the law and would treat any vendor — including a roadside fruit vendor — the same way. “You know, we don’t write the laws — we just enforce them,” she said. “Anytime there’s food or beverage or anything sold, it has to meet specific guidelines.” Morningstar said she was not aware that some of her officers reportedly had purchased lemonade from the stand the day before, and she said she does not know why they did not tell the girls to take down the stand then, she said. Now, the girls’ parents are trying to find out what their children were doing wrong and how they can continue their enterprise within the bounds of the law. “I don’t want them disobeying the law,” Amy Roberts said. While on his way home that evening, Skylar’s father, Jim Roberts, stopped to ask city officials what the girls had done wrong. Officials told him they were breaking a city ordinance that prohibits unlicensed peddling. “Anytime you make money, you’re supposed to get a business license,” Liberty County Zoning Administrator Gabriele Hartage said. 
“They have an option to get a transient merchant license or a peddler’s license.” Commercial sales at the stand’s location also are prohibited because it is zoned as agricultural-residential, she added. Hartage said that her office was aware of the situation but has not taken any disciplinary action against the children. The licenses are created to protect local business owners from traveling vendors, she said. Midway city clerk Lynette Cook-Osborne said transient merchant licenses for this type of business cost $50 per day and that occupational licenses for businesses with one to five employees cost $100 per year. Because minors cannot uphold a legal contract, the girls’ parents would have to seek a license on their behalf, Cook-Osborne said. Another possible reason for the police action is that anyone who sells consumable products is subject to licensing through the county health department, Hartage said. Health department spokeswoman Sally Silbermann, whose office has not been directly involved with this incident, said the guideline comes from the 152-page state-issued Food Service Rules. “It is our job to make sure that anyone selling food items — and lemonade is considered food — is taking the proper steps to avoid contamination,” she said. The law, which requires applicants to file their permit requests 10 days before they plan to begin vending, encompasses all types of food service, including temporary vendors. A temporary permit is $80. There are select exemptions for short-term tax-exempt operations, such as fairs and festivals. Mary Herring, administrative assistant with Liberty County Building and Licensing, said the issue is not likely to be on her department’s radar. “To me, that’s just kids more or less playing, and it’s only temporary,” she said. “I would think the law-enforcement officers would have bigger fish to fry.”
https://coastalcourier.com/news/local-news/lemonade-stand-operation-goes-sour/
October 17, 2020 6:00:45 pm The Assam government on Saturday announced that regular classes in schools and colleges, from standard six onward, will restart on a voluntary basis from November 2. Addressing the press in Guwahati, Assam Health Minister Himanta Biswa Sarma said classes will be held according to a staggered timetable and on separate days. Students of classes 8, 10 and 11 will have classes on Tuesdays, Thursdays and Saturdays, whereas students of 6, 7, 9 and 12 will have classes on Mondays, Wednesdays and Fridays. The government notification said, “There will be two batches for each class. However, if in any class, the total number of students is less than 20, then division in batch will not be required.” The first batch of classes will be held between 8:30 AM and 12:30 PM, whereas the second will be held from 1:30 PM to 4:30 PM. “Online mode of education will continue for students who prefer to attend online classes rather than physically attend school,” the notification said. In colleges, the first semester will have its classes on Mondays and Thursdays; the third semester will have classes on Tuesdays, Wednesdays and Fridays; while the fifth semester will have classes on Tuesdays, Wednesdays, Fridays and Saturdays. The notification added that on Tuesdays, Wednesdays and Fridays the colleges shall arrange the classes in such a way that crowding is avoided; it suggested holding classes in two shifts — morning and afternoon. “Students who are unable to attend classes in their respective colleges because of some extreme circumstances may opt to attend classes in a college of their locality with prior permission from the concerned authorities of both the colleges. However, this should be taken as a temporary measure,” the notification said.
https://indianexpress.com/article/north-east-india/assam/assam-announces-re-opening-of-schools-from-nov-2-attendance-voluntary-6759062/
Regional anesthesia of the infraorbital and inferior alveolar nerves during noninvasive tooth pulp stimulation in halothane-anesthetized cats. To determine whether anesthesia of the infraorbital and inferior alveolar nerves abolishes reflex-evoked muscle action potentials (REMP) during tooth-pulp stimulation in halothane-anesthetized cats. 8 healthy adult cats. In halothane-anesthetized cats, an anodal electrode was attached to the tooth to be stimulated and a platinum needle cathodal electrode was inserted in adjacent gingival mucosa. Cathodal and anodal electrodes were moved to the upper and lower canine, upper fourth premolar, and lower first molar teeth for stimulation; baseline REMP was recorded. A 25-gauge 1-cm needle was inserted 0.5 cm into the infraorbital canal. A 25-gauge 1-cm needle was inserted 1 cm rostral to the angular process of the ramus, and advanced 0.5 cm along the medial aspect. Chloroprocaine was injected at each site. Each tooth was stimulated every 10 minutes for 90 minutes. REMP was abolished within 10 minutes for all upper teeth, except for the upper canine tooth in 1 cat, and abolished within 10 minutes for lower teeth in 4 cats. In 1 cat, REMP was not abolished in the lower first molar tooth. In 3 cats, REMP was not abolished in the lower canine and first molar teeth. At 90 minutes, REMP was restored for all teeth except the lower canine tooth in 1 cat, for which REMP was restored at 120 minutes. Regional anesthesia of the infraorbital and inferior alveolar nerves may provide dental analgesia in cats.
Ignyta, Inc. (NASDAQ:RXDX) has a current consensus EPS estimate for the quarter of $-0.62 based on Street analyst predictions. The company is scheduled to next report earnings information on or around 2017-08-08 for the period ending 2017-06-30. In the same quarter last year, the company reported EPS of $-0.68. Most recently, the company reported EPS of $-0.96. Prior to the last earnings report, Zacks Research had a consensus estimate looking for EPS of $-0.70. The gap between the estimate and actual number was $-0.26, which created a surprise factor of -37.14%. The surprise factor can cause substantial stock fluctuations after the earnings release. Before and after the earnings report, covering Wall Street analysts often make updates to their projections. Analysts taken into consideration by Zacks Research have a current mean target price of $18.8 on Ignyta, Inc. (NASDAQ:RXDX) shares. This target is based on 5 research analysts polled by Zacks Research. Analysts may have different stances on where they see the stock headed in the future. Among the polled analysts, the highest estimate sees the stock going to $20 in the next year. The analyst with the lowest target price views shares reaching $15 within the same period. In looking at the standard deviation of all estimates, we arrive at 2.167. Zacks Research also compiles analyst ratings using a scale that ranges from 1 to 5. If the company has a ratings score of 1, this would represent a Strong Buy. If the company has a 5 rating, this would signify a Strong Sell. Combining all the ratings on Ignyta, Inc. (NASDAQ:RXDX), the current mean stands at 1. Breaking those down we see that the ratings are as follows: 5 Strong Buy, 0 Rated Buy, 0 Rated Hold and 0 Rated Sell. Analyst recommendations and estimates are for informational purposes only and should be used along with a number of other factors when considering an investment position. Part of the data in this report is derived from Zacks Research and FactSet. Ratings and estimates change daily and thus the numbers may differ slightly if a new report has been issued within the last 24-hours. The consensus numbers take into account the reports from over 160 brokerage firms. The job of analysts is to issue recommendations for their clients, and not typically for the general public. Analyst forecasts, earnings estimates and price target projections are issued to help their clients make money through stock investments. We in no way are suggesting that readers make any decision based on the information in this report.
http://aikenadvocate.com/looking-ahead-for-ignyta-inc-nasdaqrxdx-are-these-shares-ready-to-go-higher/186739/
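An aside on the arithmetic in the Ignyta item above: the quoted surprise factor is simply the gap between reported and estimated EPS expressed as a percentage of the estimate. The short sketch below reproduces the article's numbers; the function itself is purely illustrative and not part of any quoted source.

```python
def eps_surprise(actual: float, estimate: float) -> tuple[float, float]:
    """Return the EPS gap and the surprise factor in percent (illustrative only)."""
    gap = actual - estimate                      # e.g. -0.96 - (-0.70) = -0.26
    surprise_pct = gap / abs(estimate) * 100.0   # e.g. -0.26 / 0.70 * 100 = -37.14...
    return gap, surprise_pct

# Figures quoted in the article: consensus estimate $-0.70, reported EPS $-0.96
gap, pct = eps_surprise(actual=-0.96, estimate=-0.70)
print(f"gap = {gap:.2f}, surprise factor = {pct:.2f}%")  # gap = -0.26, surprise factor = -37.14%
```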
SEED LIBRARY
Purpose
The seed library is free to Hamilton County residents. It will be available during regular branch hours from late March through October. Seeds are provided for check out (as well as classes and resources about growing and saving seeds). Patrons may check out up to 5 packets per visit, up to a total of 15 packets per season, and plant them in their home garden. At the end of the growing season, you are asked to save your seeds to return to the library, or use them for your own garden next season.
History of Committee
We are excited to announce the opening of the first Hamilton County Seed Library (date tba).
https://hcmga.org/public-education/heirloom-seeds-program/seed-library
Nicaragua Surf Report for Tuesday, June 23, 2020 Hello, and welcome to your Tuesday surf report. Today welcomed a bit of new swell, but it also delivered somewhat complicated conditions and, for me, a bit of a conundrum of timing. Carlos, pictured here, was the only one to paddle out at 5PM. Click on that “read more” tab if you’d like to see more to the story. Nicaragua Surf Report - Report Photo 06/23/2020 10:01 PM It’s always good to see Jason out there ripping it up. Representing the Santa Marta CREW! Which is pretty much the most local crew we have here. (Wanna donate a surfboard to this local talent? Look at his half-broken surfboard in the photo.) PLEASE make a comment to Mr. Chris or Baldo and they will arrange the appropriate doings for this local up-and-comer! Nicaragua Surf Report - Report Photo 06/23/2020 10:07 PM Tony is one of my best friends here - he’s actually one of the first people I met who shared living behind the lens. He’s also one of the most accomplished watermen here. Don’t believe it? Take a look at my earlier years living here when he charged most of the waves here solo.
https://www.surfnsr.com/nicaragua-surf-report/06-23-2020
13th International LS-DYNA Conference - A Finite Element Model of THOR Mod Kit Dummy for Aerospace Impact Applications Costin D. Untaroiu, Jacob B. Putnam (Virginia Tech), Jeff T. Somers (Wyle Science) New spaceflight vehicles are currently being developed to transport crews to space by NASA and several commercial companies. During the launch and landing phases, vehicle occupants are typically exposed to spinal and frontal loading. To reduce the risk of injuries during these common impact scenarios, NASA has begun research to develop new safety standards for spaceflight. The THOR, an advanced multi-directional crash test dummy, was chosen to evaluate occupant spacecraft safety due to its improved biofidelity. Recently, a series of modifications were completed by NHTSA to improve the bio-fidelity of the THOR dummy. The updated THOR Modification Kit (THOR-k) dummy was tested at Wright-Patterson (WP) Air Base in various impact configurations, including frontal and spinal loading. A computational finite element (FE) model of the THOR was developed in LS-DYNA® software and was recently updated to match the latest dummy modifications. - A New Heat Transfer Capability Between CPM Gas and Its Surroundings Sheng Peng (LSTC) The corpuscular particle method (CPM) was developed for airbag deployment simulations. It took into account specifics like airbag folding technique, vent hole design, and interaction between discretized gas flow and airbag fabric to capture the effects of dummy impact on airbags, both fully inflated and out-of-position. It’s numerically very robust and the particle-based nature leads to elegant treatment of venting, porous leakage and gas mixing. Users find novel situations to apply the method and oftentimes new features are needed to better support these scenarios. Among these is the need for a more comprehensive treatment of heat transfer. Based on kinetic molecular theory, CPM model behavior is heavily influenced by heat transfer. Yet, a full-blown coupled thermal analysis might not always be viable for a refined model. Enabling modeling of heat transfer just in the neighboring structures of the CPM gas provides a solution to this quandary. The design and implementation status will be discussed. Some other recent advances in CPM in LS-DYNA® will also be discussed. For example, airbag integrity checking reports to the user hard-to-discover abnormalities in the airbag structure definitions in the input phase. - A New Way for the Adaption of Inverse Identified GTN-Parameters to Bending Processes Ioannis Tsoupis, Marion Merklein (Friedrich-Alexander-Universität Erlangen-Nürnberg) One major challenge in metal forming exists in sheet metal bending of modern lightweight materials like high-strength low-alloyed steels (HSLA), since conventional methods of predicting failure in numerical simulation, like the forming limit diagram (FLD), can generally not be applied to bending processes. Moreover, fracture mechanisms mainly depend on the microstructure, which is very fine-grained in HSLA steels and composed of different alloying elements compared to established mild steels. Consequently, the damage and failure behaviour of HSLA steels changes.
Especially for small curvature bending processes characterised by high gradients of strain and stress over the sheet thickness other failure criteria than the FLD have to be utilised. Within this paper a numerical study of the micromechanical based damage model Gurson-Tvergaard-Needleman (GTN, *MAT_120) is performed in LS-DYNA®, in order to realise an effective adaptability of the model for bending operations on HSLA steels. The material dependent damage parameters are determined by commonly used methodology of inverse numerical identification re-calculating the uniaxial tensile test. The minimisation of the mean squared error (MSE) of experimental and numerical global load displacement curves is realised by an optimisation algorithm using commercial software LS‑OPT®. For the adaption of the GTN-Model to the bending operation a strain-based calibration method is developed. This method is based on the comparison and adaption of the numerically calculated and the experimentally measured deformation field on the outer surface of the bent specimen. In this context the parameters are systematically varied again in the optimisation software LS-OPT. Their influence on the strain and damage evolution is analysed and discussed. On the one hand it is shown that it is possible to represent the strain evolution by adapting only one parameter instead of all parameters of the model and thus reducing the modelling effort for the user. On the other hand a big effect on the damage evolution and distribution can be identified. - A Simple Weak-Field Coupling Benchmark Test of the Electromagnetic-Thermal-Structural Solution Capabilities of LS-DYNA Using Parallel Current Wires William Lawson, Anthony Johnson (General Atomics Electromagnetics) To begin learning the coupled field capability of LS-DYNA and validate results, a simple simulation of parallel wires carrying current was run. The magnitude of the current in the wires is such that the coupling between the electromagnetic (EM), thermal and structural fields is weak, in the sense that the coupling is taken to be one way. That is, there is no feedback amongst the three field solutions. This allows us to compare LS-DYNA code and known analytical results for code validation to build confidence that the code is being correctly used. LS-DYNA results are also compared to ANSYS results when no analytical results are valid. In addition, this simulation allowed us to test the transfer of EM generated Ohmic heating to the thermal field, and the transfer of EM generated forces to the structural field, a necessary process for coupling fields. Furthermore, to be able to compare the code and analytical results, temperature-dependent material properties have not been included a decent approximation with the low currents used. The set-up of the coupled field model is discussed. Comparison of the LS-DYNA code and analytical results show good agreement where applicable. Comparison with ANSYS results is also good. - A Study on Preparation of Failure Parameters for Ductile Polymers Kunio Takekoshi, Kazukuni Niwa (TERRABYTE Co., Ltd) A study on preparation method of failure parameters for ductile polymers is presented using experimental results of high-speed tensile test for Polycarbonate and simulation results based on Semi-Analytical Model for Polymers (SAMP) constitutive model in LS-DYNA®. 
In addition, a comparative review of two widely used failure models, namely, total formulation and incremental formulation [2, 3], is carried out using Charpy impact test simulations where the failure parameters are prepared using the proposed method. It is found that the incremental formulation is excellent in predicting the experimentally observed behavior of notched and non-notched Charpy impact tests. - Accelerating Implicit LS-DYNA® with GPU Yih-Yih Lin (Hewlett-Packard Company) A major hindrance to the widespread use of Implicit LS-DYNA is its high compute cost. This paper will show that a modern GPU, cooperating with the CPU, can help remove this hindrance. Performance improvement for Implicit LS-DYNA with GPU relative to that without, as well as from recent GPUs and x86 processors, will be studied. The study will cover GPU related hardware issues, including GPU Boost, memory and PCI Express Interface. - Accuracy Issues in the Simulation of Quasi-Static Experiments for the Purpose of Mesh Regularization Anthony Smith (Honda R & D Americas Inc.), Paul Du Bois (LS-DYNA® Consultant) Generating an LS-DYNA® material model from coupon-level quasi-static experimental data, developing appropriate failure characteristics, and scaling these characteristics to mesh sizes appropriate for a variety of simulation models requires a regularization procedure. During an investigation of an anisotropic material model for extruded aluminum, numerical accuracy issues led to unrealistic mesh regularization curves and non-physical simulation behavior. Sensitivity problems due to constitutive material behavior, small mesh sizes, single precision simulations, and simulated test velocity all contributed to these accuracy issues. Detailed analysis into the sources of inaccuracy led to the conclusion that in certain cases, double precision simulations are necessary for accurate material characterization and mesh regularization. - Adaptive Smoothed Particle Hydrodynamics Neighbor Search Algorithm for Large Plastic Deformation Computational Solid Mechanics Kirk Fraser (University of Quebec at Chicoutimi) Smoothed Particle Hydrodynamics (SPH) has quickly become one of the most popular mesh-free methods since its introduction in 1977. In recent years, a great amount of research has been focused on reducing the computational cost commonly associated with the SPH method. One of the remaining hurdles is the long computational time associated with building the neighbor list. Because of the nature of the original SPH codes (astrophysics), the neighbor search is commonly performed for every element in the domain at each time step. In this work, we develop an optimized neighbor search algorithm that is suitable for deployment on NVidia graphics cards (GPU). The SPH code is written using CUDA Fortran. The algorithm can be used for large plastic deformation computational solid mechanics (CSM) problems. The search uses an adaptive algorithm that updates the neighbor list for individual SPH elements depending on whether a plastic strain increment threshold is surpassed. The neighbor list as well as the inter-particle spacing (rij) is re-used for elements that do not surpass the search update criterion. Although in this work we use a Cell based search, the algorithm can be easily adapted for the Direct Search, the Verlet List or a Tree Sort approach. Monaghan’s artificial stress term is added to the momentum equation to suppress the common tensile instability.
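As a brief illustrative aside before this abstract continues: the re-use criterion just described — re-searching neighbors only for particles whose accumulated plastic strain has grown past a threshold — might look roughly like the sketch below. All names, the data layout and the brute-force search are our own assumptions for illustration, not the paper's cell-based CUDA Fortran implementation.

```python
import numpy as np

def update_neighbor_lists(pos, eps_plastic, eps_at_last_search, neighbors,
                          d_eps_threshold=0.05, search_radius=1.2):
    """Re-run the (expensive) neighbor search only for particles whose accumulated
    plastic strain has grown by more than d_eps_threshold since their last search;
    every other particle keeps its cached neighbor list. Brute-force search shown
    only for clarity."""
    for i in range(len(pos)):
        needs_update = (neighbors[i] is None or
                        eps_plastic[i] - eps_at_last_search[i] > d_eps_threshold)
        if needs_update:
            dist = np.linalg.norm(pos - pos[i], axis=1)
            neighbors[i] = np.where((dist > 0.0) & (dist < search_radius))[0]
            eps_at_last_search[i] = eps_plastic[i]
    return neighbors, eps_at_last_search
```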
The XSPH approach is used to update the positions of the SPH elements. The algorithm is shown to reduce the overall computation time by up to 70% without loss of accuracy for CSM simulations when compared with the non-adaptive search method. - Advanced MPP Decomposition of a SPH Model Christian MOUTELIERE, Vincent LAPOUJADE (DynaS+), Antoine MILLECAMPS (SNECMA) SPH, Smoothed Particle Hydrodynamics, is a very efficient tool to model industrial problems where large deformations occur. However, one disadvantage of the SPH technique is the relative expensive cpu cost compared to standard Finite Elements. Using the MPP version of LS-DYNA® allows users to handle larger problems (up to more than millions of particles) in a reasonable time. Due to the meshfree nature of the SPH method, standard decompositions used for finite elements can sometimes lead to very bad speed-up of the code. Users have to be aware of some options and rules to define customized decompositions in order to minimize communications between processors and get very good load balancing. Two classes of models are presented for addressing all possible situations with respect to optimizing MPP decomposition of a calculation based in whole or in part on the SPH technology. The first one is a pure SPH model of a high velocity impact of a sphere on a plate. The second one is a coupled FE-SPH model of a bird impacting a set of fan blades of an engine. Two versions of the same problem will be studied: for the first, shell elements are used for the modeling of fan blades whereas for the second, solid elements are used. - Advanced Simulation of Polymer Composite SMC Compression Molding using Fluid-Structure Interaction in LS-DYNA® Dominic Schommer, Miro Duhovic, Florian Gortner, Martin Maier (Institut für Verbundwerkstoffe GmbH) Thermoset Sheet Molding Compounds (SMC) are becoming more and more popular as lightweight construction materials in the automotive industry. SMC compression molding is a forming process in which a pre-cut SMC-Prepreg is placed within a heated mold and is first pressed into shape before being cured. By closing the mold, the thermoset resin is forced to flow and takes the randomly orientated fiber reinforcement present, along with it. The flow behavior of the SMCs can be characterized by press rheometry. In a typical press rheometry test, certain data recorded during the test, specifically press force, tool closing speed, position and time together with the known tooling geometry (plate surface area), are used to develop and verify a finite element characterization model in LS-DYNA using the relevant Arbitrary Lagrange Eulerian (ALE) capable material model. In this work, the Fluid Structure Interaction (FSI) capabilities in LS-DYNA are used to model the flow ability of the SMC material. No independent effects of the resin cure on the materials rheology are taken into consideration. The characteristic data obtained from the real press rheometry tests are used to calibrate the material model so that it can be used to predict the mold filling behavior of more complex tooling scenarios. As an example for forming a complex real part, the compression molding of a ribbed automotive spoiler test part is analyzed upon complete closing of the mold. The goal of the simulation is to provide information about the suitability of the tooling design geometry and the processing parameters. 
A simplified two dimensional model of the ribbed automotive spoiler part shows, that the unrecognized effects inside the material cause a failure in the simulation in certain situations. A future version of the model should make it possible to analyze the nature of the SMC part, more specifically, the flow profile, fiber orientation and resulting volume fraction in the individual sections of the part, in particular the rib section, along with chemical curing of the resin. - Advances in LS-DYNA® Metal Forming (II) Li Zhang & Xinhai Zhu (LSTC) Some of the new features developed since the last conference will be discussed. 1) Lancing – instant and progressive Cutting of sheet metal during forming to alleviate thinning and splits. 2) Auto close of open trim curve loop Improvement in trimming simulation by automatically closing an open trim curve. 3) Tailor-rolled blank thickness specification Specification of thickness field of a tailor-rolled blank for any ensuing simulation. 4) Springback compensation referencing original tool mesh Compensation using the original tool mesh for iterations to improve tool surface geometry. 5) Springback compensation – for small part shape change Compensation made easier for those parts with small shape changes that do not affect springback results. 6) Simulation-based blank size optimization A significant development in blank size and trim line optimization of stamping dies. - Advances in LS-DYNA® Metal Forming (I) Xinhai Zhu & Li Zhang (LSTC) Some of the new features developed since the last conference will be discussed. 1) Gaging pin contact improvement New contact treatment for edge contact between gaging pin and sheet blank edge during gravity loading. 2) Output control for parameterized input Specifying D3PLOT and INTFORC outputs made easy for parameterized input. 3) Gravity loading – switching between implicit dynamic and implicit static Taking advantage of the best of both dynamic and static methods. 4) Polygon adaptive box A more flexible adaptive remeshing control. 5) Maximum ID specification for blank adaptive remeshing Setting starting element and node ID for an adaptive blank in a line-die simulation. 6) Flange unfolding Unfolding of deformable flanges onto addendum for trim line development. - Advances in Simulating Corrugated Beam Barriers under Vehicular Impact Akram Abu-Odeh (Texas A&M Transportation Institute) W-beam guardrail systems are the most common roadside railing systems used by many road authorities worldwide. They have been used for decades as roadside barrier to protect errant vehicles from intruding into hazardous areas. This paper gives a description of this rail system and recent methods to simulate its performance under roadside impacts. The availability of simulation technologies such as LS-DYNA® makes it possible to evaluate the performance of guardrail systems under given impact condition. A predictive simulation example and a subsequent crash test are presented as how simulation can be integrated into roadside safety hardware design process. - An Adaptive Meshfree Galerkin Method for the Three-dimensional Thermo-mechanical Flow Simulation of Friction Stir Welding Process C.T. Wu, W. Hu (LSTC), H.P. Wang (GM Research and Development Center) In this paper, new numerical modeling of material flow in the thermo-mechanical friction stir welding process is presented. In this numerical model, the discretization in space is derived by the meshfree Galerkin method using a Lagrangian meshfree convex approximation. 
The discrete thermal and mechanical equations are weakly coupled as the time advances using a forward difference scheme. A mortar contact algorithm is employed to model the stirring effect and heat generation due to frictional contact. Heat conductance between contacting bodies is considered as a function of contact pressure. A two-way adaptive procedure is introduced to the coupled thermo-mechanical system to surpass potential numerical problems associated with the extensive material deformation and spatial discretization. In each adaptive phase, a consistent projection operation utilizing the first-order meshfree convex approximation is performed to remap the solution variables. Finally, a three-dimensional thermo-mechanical coupled friction stir welding problem is analyzed to demonstrate the effectiveness of the present formulation. - An Enhanced Bond Model for Discrete Element Method for Heterogeneous Materials Zhidong Han, Hailong Teng and Jason Wang (LSTC) The enhanced bond model allows the Discrete Element Method (DEM) to simulate the heterogeneity and discontinuity at the individual particle level at the micro level. The traditional material models at the macro level are applied to each particle independently. This bond model bridges the behaviors of particles at macro and micro levels, and may be used for failure analysis of the homogeneous & heterogeneous materials, including composites, concretes. - An Introduction to the LS-DYNA® Smoothed Particle Galerkin Method for Severe Deformation and Failure Analyses in Solids C. T. Wu, Y. Guo, W. Hu (LSTC) This paper presents a new particle method in LS-DYNA for the severe deformation and failure analyses of solid mechanics problems. The new formulation is first established following a standard meshfree Galerkin approach for a solving of the partial differential equation of a linear elastic problem. A smoothed displacement field is introduced to the Galerkin formulation and leads to a regularized smoothed strain approximation. The resultant smoothed/regularized strain formulation can be related to the residual-based stabilization method for the elimination of zero-energy modes in the conventional particle methods. The discretized system of equations are consistently derived within the meshfree Galerkin variational framework and integrated using a direct nodal integration scheme. The linear formulation is next extended to the large deformation and failure analyses of inelastic materials. In the severe deformation range, adaptive Lagrangian or Eulerian kernel approach can be preformed to reset the reference configuration and maintain the injective deformation mapping at the particles. Several numerical benchmarks are provided to demonstrate the effectiveness and accuracy of the proposed method. - Analysis and Design of a Unique Security Bollard Installment Using LS-DYNA® for a K12 Vehicle Impact Joseph M. Dulka, Eric R. Dietrich, Kelley S. Elmore, Kendra D. Jones, Clyde S. Harmes, Robert H. Moyer (elmore engineering) This paper presents the design process for a unique security bollard providing protection against a K12 (M50 ) vehicle crash load. LS-DYNA was used to aid in the design of the security bollard to account for the highly dynamic and inelastic behavior during a vehicle impact. The bollard was installed along the top of a wall for a below-grade courtyard in order to maintain a building security perimeter, providing protection against a potential malevolent vehicle attack. 
Contrary to typical bollard installations where the foundation is supported on all sides with well compacted soil or other substrata, no significant support was provided on the protected side of the bollard foundation. As a result, this posed significant difficulty in the design of an effective security bollard required to resist a potential K12 (M50) vehicle impact load with zero vehicle penetration. The initial conceptual bollard design originated from the standard Department of State (DoS) DS-22 K-12 rated bollard system. Hand calculations were used to develop a preliminary bollard design with equivalent static design stopping forces based upon existing physical K12 test results. LS-DYNA aided the engineering team in observing structural and material responses characteristic of impact loading which may have otherwise not been perceived by traditional hand calculations. Utilizing LS-DYNA not only as an analysis tool but also as a powerful design tool enabled the engineering team to optimize the design of the security bollard. - Analysis of Unsteady Aerodynamics of a Car Model in Dynamic Pitching Motion Using LS-DYNA® R7 Yusuke Nakae, Tsuyoshi Yasuki, Hiroshi Tanaka, Jiro Takamitsu (Toyota Motor Corporation) This paper describes the numerical analysis of unsteady aerodynamics of a car model in dynamic pitching motion using LS-DYNA R7. Large-Eddy simulations with the ALE method were performed to clarify the effects of unsteady aerodynamic forces on aerodynamic characteristics of cars in dynamic motion. The forced sinusoidal pitching oscillation was imposed on the 1/4 scaled car model and the flow velocity was set to 27.78 m/s. The model was based on a real production car and it was simplified by removing its engine compartment cavity and smoothing its surface. Tires were fixed on the ground and separated from the car body. Unsteady aerodynamic forces acting on the model were investigated. The mechanism by which differences arise between the aerodynamic forces acting on the car model in dynamic motion and those in stationary states was mainly discussed. The computational results showed good agreement with the results of the high accuracy LES code computations. Also, results showed the differences between the aerodynamic forces in the dynamic pitching motion and those in the stationary states. Especially, the lift force showed remarkable differences. Even in the same posture of the pitch angle 0 degree (i.e. the posture in which the under floor of the car body is parallel to the ground), the lifts showed different values at stationary state and during nose-up or nose-down respectively. As a result of this analysis, it was revealed that these differences in the aerodynamic forces were mostly due to the changes of the surface pressure distributions around the rear end of the front wheelhouse. The flow structures behind the front tires changed with the volume shrinking or expanding of the front wheelhouse owing to the car motion. These changes affected the surface pressure distributions.
In a highly competitive market, industries are demanding higher performance, improvements in fuel efficiency, increased recycling and greater safety, whether this is an airplane wing or mineral water bottle. In response to the above factors, there has been a significant increase in the application of composites. Today, finite element simulations are used extensively in the design and assessment by virtually all mayor industries. Finite element analysis (FEA) has become an integrated tool in this design and optimization. In this paper, beams constructed from over molded Short fiber Reinforced Thermoplastic on Continuous Fiber Reinforced Thermoplastics are described. One of the challenges is accurate CAE simulation of the static and dynamic behavior of the part. Model data are validated through correlation between coupon and sub-system physical tests, and further verified with results from quasi-static and impact tests. Physical test on beams confirmed good correlation between test and Finite Element Analysis. - Application of LS-DYNA for Auto NVH Problems Yun Huang, Zhe Cui (LSTC) NVH (Noise, Vibration and Harshness) is an important topic for the design and research of automotives. Increasing demands for improved NVH performance in automotives have motivated the development of frequency domain vibration and acoustic solvers in LS-DYNA. This paper presents a brief introduction of the recently developed frequency domain vibration and acoustic solvers in LS-DYNA, and the application of these solvers in auto NVH problems. Some examples are given to illustrate the applications. - Assessing Options for Improving Roadside Barrier Crashworthiness D. Marzougui, C.D. Kan, K.S. Opiela (George Mason University) The introduction of new crash test requirements raises questions about the efficacy of commonly used barriers that had been accepted under earlier test requirements. Seven commonly-used barriers were crash tested under the new MASH requirements in a recent NCHRP project. Three of the barriers tested did not meet the new requirements for the test with the 2270 kg vehicle. While the implementation of the MASH standards does not require hardware that passed the previous NCHRP 350 requirements to be re-evaluated, there is an interest in knowing whether these devices can be modified to meet the more stringent MASH requirements by DOTs. In another effort, these seven NCHRP crash tests were successfully simulated to provide an extended validation of the new finite element model of a Chevrolet Silverado pick-up truck as a surrogate for the 2270 kg test vehicle. This provided the opportunity to, among other things, evaluate the potential of various retrofit options for improving two of the three barriers that failed. An analyses of six modifications for the G9 Thrie-beam barrier and three variations of the G4(1S) guardrail median barrier was undertaken. A summary of the testing and simulation modeling of the two tests is presented as the basis for the simulation of modified versions of the barriers. The evaluation results are presented for each of the retrofit options and recommendations offered. - ATV and MATV techniques for BEM acoustics in LS-DYNA Yun Huang, Zhe Cui (LSTC) This paper presents the new ATV (Acoustic Transfer Vector) and MATV (Modal Acoustic Transfer Vector) techniques for BEM acoustics in LS-DYNA, which were implemented recently. 
Acoustic Transfer Vector provides the transfer function between the normal nodal velocity on structural surface and the acoustic response at selected field points; Modal Acoustic Transfer Vector provides similar transfer function, but is based on the excitation from modal shape vibrations. ATV and MATV reveal the inherent properties of structures and acoustic volume, and can be used to predict radiated noise from vibrating structures when combined with vibration boundary conditions. Particularly they are useful for the acoustic analysis of structures subjected to multiple load cases. Some examples are given to illustrate the application of the ATV and MATV techniques. For ATV, post-processing of the results in the form of binary plot database is also presented. - AutoMesher for LS-DYNA Vehicle Modeling Ryan Alberson, David Stevens (Protection Engineering Consultants), James D. Walker, Tom Moore (Southwest Research Institute) Software has been developed to automatically mesh CAD files in support of expedient modeling of armored vehicles and similar structures. The AutoMesher software is written in Python as well as LSTC’s Script Command Language (SCL). The SCL syntax is similar to C programming, but runs as a script within LSTC’s LS-PrePost® (LSPP) software application. A Python module is used as the interface and a wrapper for LSPP. By leveraging the functions in LSPP through the SCL, nine different algorithms were written to mesh I beams, T beams, angles, rods, plates, tubes, and surface-meshed formed shapes. Logic is used in these algorithms to identify the shape characteristics needed to define an equivalent FEA mesh of the CAD geometry, such as geometric planes that represent flanges or web components of an I-beam. These algorithms are the heart of the AutoMesher and can be used to generate more intelligent meshing solutions. The algorithms and software are described in this presentation. The AutoMesher software was developed by Protection Engineering Consultants (PEC) in support of the Defense Advanced Research Projects Agency (DARPA) Adaptive Vehicle Make (AVM) program, under subcontract to Southwest Research Institute (SwRI). AVM is an ambitious program to reduce the time required for the design, development, and production of complex defense cyber-mechanical systems, such as military ground vehicles, by a factor of five. - Batted-Ball Performance of a Composite Softball Bat as a Function of Ball Type Jennifer Yee (Combat, University of Massachusetts), James A. Sherwood (University of Massachusetts), Stephen Fitzgerald (Combat) The ideal models of softball bats and balls should have the flexibility to allow for the ability to capture how BBS varies as a result of changes in bat and ball constructions. If such models were available, then the design engineer could customize the bat design with the goal to maximize the BBS for a given ball construction. A credible finite element model of the ball-bat collision for softball is challenging. Achieving such a model is difficult primarily because of variations in the processing of the polyurethane cores of softballs which can yield different properties of the overall ball, e.g. hardness and liveliness, and the response of the softball during a bat-ball collision is rate dependent. The mechanical behavior of the composite bat is slightly less challenging to model because the bat material can be assumed to be essentially linear elastic unless significant material damage is induced during the collision. 
Experimental and finite element methods were used to model the collision between a composite softball bat and softballs of different COR (Coefficient of Restitution) and compression specifications. An example model is shown in Figure 1. Experimental bat characterization methods included barrel compression and modal analysis. Experimental softball characterization methods included COR, CCOR (Cylindrical Coefficient of Restitution), compression and dynamic stiffness. Finite element models were built in HyperMesh and analyzed in LS-DYNA®. Softballs were modeled using LS-DYNA material models #6, #57 and #83, and the composite softball bat was constructed according to the manufacturer’s specifications using *PART_COMPOSITE. Three methods to calibrate the finite element softball models were investigated and included “flat-surface” and “cylindrical-surface” coefficients of restitution and DMA (dynamic mechanical analysis). The “cylindrical-surface” test was found to be the most effective method of calibration to predict the batted-ball speed (BBS) as measured in bat/ball impact testing. This paper presents a summary of the complementary experimental and finite element studies that were completed to develop a bat-ball collision model for the research of composite softball bats. Softballs were characterized using simple tests, and finite element models of the softballs were calibrated to yield good correlation to the experimental characterization tests. The calibrated softball models were then used to explore their ability to correlate with bat-ball collisions using a composite softball bat. - Benchmark of Frequency Domain Methods for Composite Materials with Damage using LS-DYNA® Myeong-Gyu Bae, Seonwoo Lee (THEME Engineering, Inc.) Composite material is widely used in structures such as aircraft, satellites, ships and automobiles, which demand light weight and high performance. Various types of damage can occur through low-speed impacts and fatigue loads. It is generally known that the assessment of the natural frequency by vibration testing is a very attractive method as a Non-Destructive Test (NDT) and the vibration response of a composite structure can be utilized as an indicator of damage. In this paper, a desirable FE modeling technique regarding composite types (Laminated/Sandwich) and laminate methods of composite material was investigated using LS-DYNA. Firstly, according to the laminated properties of composite material (number of layers, anisotropy, shape, etc.), frequency responses were compared between the latest theories and the latest version of LS-DYNA. Secondly, various types of damage in a composite cantilever beam were represented and estimated in the FE model and those frequency responses were compared among experiments, LS-DYNA, and another FE code. Finally, the delamination phenomenon in a rectangular composite plate was represented and estimated in the FE model and those frequency responses were compared between experiment and LS-DYNA. It was evaluated and verified that the prediction for the tendency of natural frequency using the frequency domain method in LS-DYNA could be appropriate for composite materials with or without damage. - Benchmark of LS-DYNA® for Off-shore Applications according to DNV Recommended Practice C208 Marcus Lilja (DYNAmore Nordic AB) The use of non-linear FEA is growing in the offshore industry.
Det Norske Veritas AS (DNV), one of the world’s largest ship and offshore classification societies to the maritime industry, has developed a Recommended Practice (DNV-RP-C208) on the usage of non-linear implicit finite element simulations in offshore applications. DNV-RP-C208 creates a de facto standard for structural load capacity analysis of off-shore structures. The Recommended Practice in combination with handbook formulas and empirical data create a de facto standard that will be used for investigations, studies and dimensioning off-shore structures for years to come. The Recommended Practice contains several benchmark problems with references solutions that can be used to verify a finite element software and the modeling methodology. This paper presents results from LS‑DYNA for a selection of these benchmark problems , ranging from beam bending problems with elasto-plastic behavior to instability and collapse analysis. All benchmark problems are solved using the implicit non-linear solver. Development of new features in LS‑DYNA and LS-PrePost® were necessary in order to complete the task. This paper presents results from the benchmarks, solution techniques, and the newly developed features. - Breaking Bad(ly) – Investigation of the Durability of Wood Bats in Major League Baseball using LS-DYNA® Eric L. Ruggerio, James A. Sherwood and Patrick J. Drane (University of Massachusetts) The bats used in Major League Baseball (MLB) are turned from a single piece of wood. Northern white ash had been the wood of choice until the introduction of hard rock maple in the late 1990s. Since the introduction of maple, there has been a measurable increase in the number of bats breaking into multiple pieces. These failures can be a significant factor during play, i.e. pieces of bats landing flying into the field of play, thereby distracting fielders from making the appropriate play for the given game situation. Observations of bat breakage in the field and in controlled conditions of lab testing of bats have shown the bat durability is a function of wood quality and bat profile. Wood quality is described by the density and the slope of grain of the wood. The bat profile is described by the variation in the diameter of the bat along its length. The bat properties that are preferred by players, i.e. low-density wood and a bat profile of a big barrel and a slender handle, are in direct contradiction with what makes for a durable bat. In this paper, LS-DYNA is used to develop calibrated models of the breaking of yellow birch wood bats in controlled lab conditions. The WOOD material model in combination with the ADD EROSION option using a maximum principal strain failure criterion was found to produce a credible simulation of the failure modes seen in wood baseball bats. - CAE Applications for Balanced Curtain Airbag Design Meeting FMVSS226 and System/Component Performance Bill Feng (Jaguar Land Rover) Curtain airbag is a key restraint component to protect occupants in the events of side impact (referred as First Impact) and rollover (referred as Second Impact). In the curtain airbag design during the vehicle programme, following requirements dominate the design. FMVSS226 Ejection Mitigation (EjM) requires curtain airbag provide adequate protection for rollover event. Restraint system performance for legal and consumer tests, such as FMVSS214 and NCAPs, requires good occupant head protection in the first impact. 
TWG Out-Of-Position (OOP) requires low-risk deployment of the curtain airbag for occupants seated out of position. In addition, curtain airbag design should ensure the integrity of surrounding trims, such as pillar trims, during deployment under different environmental conditions. In 2011, NHTSA introduced the new regulation for rollover protection, FMVSS226 Ejection Mitigation. The requirement demands increased occupant containment in rollover and side crashes for belted/unbelted occupants and third rows of seating. The rule requires the linear impact tests at two energy levels and two inflation times (e.g. [email protected] and 178J@6s). The result of this requirement is the introduction of larger curtain airbags with higher-power inflators for longer inflation. Since then, FMVSS226 EjM has become a key loadcase to define the curtain airbag inflator selection and curtain airbag design. However, the introduction of a larger curtain and a larger inflator poses a great challenge to the integrity of the curtain airbag and surrounding trims, and to OOP performance as well. Therefore, it is important to balance performance between restraint system requirements and component requirements during the process of curtain airbag design and inflator selection. In this paper, CAE applications and studies have been conducted to gain an understanding of energy requirements and energy management for a balanced curtain airbag design that meets the multiple requirements on restraint system performance, EjM, OOP and component integrity. - CAE Workflow Coupling Stamping and Impact Simulations Henry Shibayama, Rohit Ramanna, Sri Rama Murty Arepalli, Arthur Camanho (ESI) Visual-Environment is an open and integrated user environment enabling simulation and analysis across multiple domains for efficient product engineering. A CAE workflow has been developed chaining stamping and impact simulations. The workflow extends from stamping matrix design (performing stamping simulations) to impact simulations considering the residual stress and thickness variation due to the stamping process. The objective to be achieved is the creation of a fast end-to-end workflow, aiming at accurate impact simulations while taking into account the results from manufacturing processes. So far, impact simulations are performed considering the stamping simulation results. The next step of the project is to perform welding simulations and consider their residual stresses and distortion, aiming at more accurate impact simulations, through chaining and considering the process effects coming from stamping and welding analysis as well. Visual-Diemaker is a software tool focusing on the design of the stamping matrix, with some feasibility tools, such as tipping evaluations. Visual-Diemaker also integrates some tools for the set-up of stamping simulation. For development and evaluation of the methodology all simulations were performed using PAM-STAMP. Generated output results were M01 files (one per component), containing the residual stress and thickness variation. In a first step, Visual-Process, a mass-customization and automation tool to support the automation of CAE tasks, automatically converts the M01 (PAM-STAMP) file structure into LS-DYNA® keyword syntax (using ELEMENT_SHELL_THICKNESS and INITIAL_STRESS_SHELL).
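As an illustrative aside: the end point of this chaining step is simply that the crash master deck references the converted stamping result, for example through the *INCLUDE_STAMPED_PART keyword mentioned in the continuation of this abstract below. The sketch that follows is our own minimal illustration of that idea, not the Visual-Process implementation; the file names are hypothetical and the exact card layout of *INCLUDE_STAMPED_PART and its option cards should be taken from the LS-DYNA keyword manual.

```python
def chain_stamping_result(master_deck: str, stamped_file: str, out_deck: str) -> None:
    """Write a copy of the crash master deck that references the converted stamping
    result (residual stress and thickness) via *INCLUDE_STAMPED_PART. Hypothetical
    file names; the real keyword takes further option cards - see the keyword manual."""
    with open(master_deck) as f:
        lines = f.read().splitlines()
    # Place the include ahead of *END if the deck has one, otherwise at the end.
    at = next((i for i, line in enumerate(lines) if line.strip().upper() == "*END"), len(lines))
    lines[at:at] = ["*INCLUDE_STAMPED_PART", stamped_file]
    with open(out_deck, "w") as f:
        f.write("\n".join(lines) + "\n")

# chain_stamping_result("crash_master.k", "door_inner_m01.key", "crash_master_chained.k")
```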
As a second step, the same tool allows the CAE engineer to open the LS-DYNA impact model in Visual-Environment, set the components to be chained with the stamping results, define the reference nodes for the construction of a reorientation matrix, and finally define a different number of integration points through the thickness. This process basically reads and converts the M01 syntax and also adds the INCLUDE_STAMPED_PART syntax into the global impact model which references the converted M01 file. The purpose of the project is the ability of automated chaining between manufacturing and performance structural simulation in one and the same environment. After achieving this development goal, further implementation and industrialization of this kind of analysis methodology into a CAE industrial department is expected as a reliable and fast way to proceed with chained analysis using different CAE solvers. - Calibration of Material Models for the Numerical Simulation of Aluminium Foams – MAT 154 for M-PORE Foams @ 3 Loads Vito Primavera, Marco Perillo (EnginSoft SpA), A. Carofalo, M. De Giorgi, R. Nobile (University of Salento) Metallic foams are very promising for engineering applications due to their peculiar characteristics, like the high energy-absorbing property coupled with a reduced weight. Even if applications can be widespread in several fields, such as automotive, civil, aerospace, etc., industrial requirements are still far from being fully accomplished, especially in terms of technological processes and a whole mechanical characterization. Material modeling of metallic foams, like the aluminium ones, is a crucial point for performing accurate numerical simulations during the design phase. Material models available in the explicit, non-linear finite element code LS-DYNA® represent a very efficient way to handle and to investigate foam behavior. An extended experimental/numerical activity has been set out with the aim of calibrating and validating suitable material models with respect to different aluminium foams and several loading conditions. While a previous phase of the activity was focused on the assessment of a procedure addressed to point out, starting from the available experimental data, the key points of material model calibration, the current activity has been focused on the procedure application, i.e. the exploitation of the built-up methodology for the calibration of M-PORE open-cell aluminium foam at three different loading conditions. A good number of foam material models are available in the LS-DYNA database, and further in the last years different enhancements have been performed with the goal of including the physical phenomena able to increase the accuracy of the models. Amongst the available ones, MAT 154 (MAT_DESHPANDE_FLECK_FOAM) has been chosen here because it provides satisfactory results compared with the experimental ones, but at the same time it still needs to be studied for more loading conditions. Since the calibration process requires optimizing the free parameters of the material model according to different objectives, LS-DYNA has been coupled with modeFRONTIER®, a Process Integration and Design Optimization software platform.
Once all the FE (Finite Element) models related to the corresponding experimental tests have been integrated into modeFRONTIER, a first sensitivity analysis has been performed at the purpose to get confidence with MAT 154 behavior and then an efficient optimization phase in order to pursue the numerical configurations satisfying the different targets provided by experimental tests. Efficient and intuitive post-processing tools have been applied firstly to get a deep knowledge of the investigated phenomenons and eventually to look for the best solutions. - Car Body Optimization Considering Crashworthiness, NVH and Static Responses Phani Adduri, Gary Quinn, Juan P. Leiva, Brian C. Watson (Vanderplaats Research and Development, Inc) This paper demonstrates a design system to efficiently perform optimization based on responses computed from multiple LS-DYNA® analyses while also taking into consideration the linear loading conditions such as the ones for NVH and Static responses. The proposed design system, ESLDYNA, is based on the Equivalent Static Load (ESL) method, which requires the iterative process of non-linear structural analysis (LS-DYNA) and linear structural analysis and optimization (GENESIS). Unlike general purpose optimization software packages, it does not require a large number of analysis calls even for problems with large numbers of design parameters. Therefore, large-scale optimization techniques, such as topology, topometry and topography, can be easily employed. Several examples using different optimization techniques will be presented. One of the examples will include optimizing the design for frontal crash, normal modes and static loading conditions simultaneously. - Comparative Study of Material Laws Available in LS-DYNA® to Improve the Modeling of Balsa Wood Teddy MAILLOT, Vincent LAPOUJADE, Edith GRIPPON (DynaS+), Bernard TOSON, Nathalie BARDON, Jean-Jacques PESQUE (CEA CESTA) In order to compute the requirements for transporting packages in various situations using balsa wood as an energy absorber, a constitutive model is needed that takes into account all of the specific characteristics of the wood, such as its anisotropy, compressibility, softening, densification, and strain rate dependence. Completeness alone is not sufficient for the model, because it must perform appropriately in simulations that include many other non-linear situations, such as being subjected to friction, undergoing large deformations, and even failure. To improve their existing modeling within LS-DYNA, CEA CESTA, in partnership with I2M of Talence, carried out a major experimental campaign both on standard characterization tests and on more complex tests representative of the behavior of real structures. All these tests have been modeled using different LS-DYNA material laws to assess their respective limitations and achieve optimal modeling within the framework of material laws currently available in LS-DYNA. In a final validation phase, this optimized material law has been introduced in a finite element model representative of a real package to evaluate its effect relative to the initial law. - Comparison of Particle Methods : SPH and MPS Sunao Tokura (Tokura Simulation Research Corporation) SPH (Smoothed Particle Hydrodynamics) implemented in LS-DYNA® has been used widely in various industrial fields as a reliable and robust particle method. At present SPH is considered as one of major numerical simulation method for compressible fluid and solid materials. 
Recently, a distinct particle method called MPS (Moving Particle Simulation) has been developed and has started to be used for some industrial applications as a CFD (Computational Fluid Dynamics) solver for incompressible flow. As most fluid flow applications in industry are incompressible, MPS may be able to treat such problems more efficiently than SPH. Both methods share the characteristic that particles are used to discretize the continuum domain to be solved. However, as the numerical procedures used to solve the governing equations are very different, each method has its own inherent advantages and disadvantages. This paper compares SPH and MPS for some engineering problems and aims to reveal the differences between the two methods. Such a comparison of numerical simulation techniques should be very useful for a deeper understanding of the multiphysics capabilities of LS-DYNA, even for expert LS-DYNA users. Surface tension models, turbulence models, the treatment of Newtonian and non-Newtonian fluids, coupling with structures and several other topics are discussed. In addition, an FSI (Fluid-Structure Interaction) problem using MPS software and LS-DYNA is demonstrated in the presentation. In this FSI problem a vehicle is washed away by a tsunami and crashes against a rigid wall: the pressure of the tsunami on the surface of the vehicle is computed by the MPS software and the deformation of the auto body is calculated by LS-DYNA.

- Comparison of the Brain Response to Blast Exposure Between a Human Head Model and a Blast Headform Model Using Finite Element Methods
Rahul Makwana (DEP-Autoline Inc), Sumit Sharma (Eicher Trucks and Buses VE Commercial Vehicles Ltd.), Liying Zhang (Wayne State University)
Impact-induced traumatic brain injury (TBI) has been studied by physical testing using various surrogates, including cadavers, animals, and crash test dummies, and by computer modeling, including Finite Element (FE) models of human, animal and crash test dummy heads. Blast-induced TBI research and the evaluation of protective devices call for a head model that can mimic wave propagation phenomena through the different parts of the head. For proper investigation of head responses and the resulting brain injuries due to primary blast exposure, the characteristics of a physical test headform, including the details of brain/skull anatomy and the material properties of the head tissues, must be carefully designed. The current study was undertaken to numerically evaluate the blast performance of an anatomically realistic headform constructed with existing skull/brain simulant materials in comparison with human head model responses, in order to propose a future headform which could be used for testing equipment under blast loading conditions. Quantitative biomechanical response parameters such as pressure, strain and strain rate within the brain were systematically monitored and compared between the blast anatomical headform and the FE human head model. The results revealed that the blast anatomical headform over-predicted the biomechanical response parameters in the brain by an average of about 20%. The results imply that the polymer-based thermoplastics polycarbonate, polymethylmethacrylate, and polyoxymethylene can be suitable surrogate skull materials for simulating head responses under blast exposure.
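As an aside on how such a comparison can be reduced to a single figure, the following is a minimal Python sketch (not taken from the paper) of averaging the relative over-prediction of a headform's peak responses against a reference human head model; the parameter names and values are placeholders, not the study's data.

```python
# Illustrative sketch: mean relative over-prediction of headform responses
# versus a reference FE human head model. All numbers are hypothetical.

def average_overprediction(headform: dict, human_model: dict) -> float:
    """Mean relative difference (headform vs. human model) across response parameters."""
    ratios = [(headform[name] - ref) / ref for name, ref in human_model.items()]
    return sum(ratios) / len(ratios)

if __name__ == "__main__":
    human = {"peak_pressure_kPa": 250.0, "max_principal_strain": 0.035, "strain_rate_1_per_s": 120.0}
    headform = {"peak_pressure_kPa": 300.0, "max_principal_strain": 0.042, "strain_rate_1_per_s": 145.0}
    print(f"Average over-prediction: {100.0 * average_overprediction(headform, human):.1f} %")
```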
- Coupled Simulation of the Fluid Flow and Conjugate Heat Transfer in Press Hardening Processes
Uli Göhner, Bruno Boll (DYNAmore GmbH), Iñaki Çaldichoury (LSTC), Tim Wicke (Volkswagen AG)
Due to the increasing demands on lightweight design, stiffness and crash performance of automotive body components, the press hardening method has become widely used. The high strength of press-hardened parts of up to 1.5 GPa results from the nearly complete conversion of austenite into martensite. This microstructural transformation, also known as 'hardening', occurs during or subsequent to the forming process. In order to achieve a cooling rate high enough to obtain a martensitic microstructure in all regions of the blank, it has to be ensured that the heat transfer rate from the blank to the tool and inside the tool is sufficiently high. This is achieved on the press hardening lines at Volkswagen through the cooling of the tools with a fluid.

- Coupling of the EM Solver with Mechanical and Thermal Shell Elements
Pierre L’Eplattenier, Julie Anton, Iñaki Çaldichoury (LSTC)
The Electromagnetics (EM) solver of LS-DYNA® has recently been extended to shell elements in order to solve coupled EM/mechanical/thermal problems on thin plates, which appear in magnetic metal forming and welding experiments. Due to the magnetic diffusion of the EM fields through the thickness of the plate, which is a very important phenomenon that needs to be solved precisely, the EM part of the simulation still needs a solid mesh with several through-thickness elements. This solid mesh, underlying the shell mesh, is thus built automatically during the simulation and is used to solve the EM equations. The EM fields are then averaged or summed through the thickness in order to compute equivalent EM fields on the shells, in particular an equivalent Lorentz force and Joule heating, which are used by the mechanical and thermal solvers. The model is presented and illustrated on some academic and industrial examples. Comparisons between solids and shells are presented.

- Crash and Impact Simulation of Composite Structures by Using CAE Process Chain
Ulrich Stelzmann, Madhukar Chatiri (CADFEM GmbH), Thorsten Schütz (Adam Opel AG), Anton Matzenmiller (Univ. of Kassel)
The objective of this paper is to present a workflow for the numerical modeling and simulation of carbon fiber reinforced plastic (CFRP) composite structures, including CAE process integration. A computational constitutive model for anisotropic damage is developed to characterize the elastic-brittle behavior of fiber-reinforced laminated composites. The composite damage model is implemented within LS-DYNA® as a user-defined material subroutine. A CAE process chain which includes the manufacturing side of composites is also presented.

- Crash Simulation of KTM “X-BOW” Car Front Impact Structure
Katharina Fischer (KTM Technologies GmbH), Phelippe Pereira (ESSS), Madhukar Chatiri, Matthias Hörmann, Andre Stühmeyer (CADFEM GmbH)
The goal of this presentation is to study the structural behavior of the KTM “X-BOW” crash box front impact structure in a 0° impact test against a rigid wall. The energy-absorbing crash box is made of a laminated composite sandwich material. A “shell-solid-shell” numerical approach is used to model the sandwich composite structure: shell elements are used for the face layers, whereas solid elements are used for the aluminum honeycomb core.
The shell elements account for the composite layering using *ELEMENT_SHELL_OFFSET_COMPOSITE within LS-DYNA® and are bonded to the solid elements without node sharing. The composite structure is modeled using *MAT_054 and the honeycomb structure using *MAT_126 within LS-DYNA. For comparison purposes, numerical and experimental results for intrusion, deceleration, velocity and displacement over time are presented.

- Crash Test & Simulation Comparisons of a Pickup Truck & a Small Car Oblique Impacts Into a Concrete Barrier
D. Marzougui, C.D. Kan, and K.S. Opiela (George Mason University)
Detailed finite element (FE) models of a 2270 kg Chevrolet Silverado and an 1100 kg Toyota Yaris are used as surrogates for assessing barrier crashworthiness under the new Manual for Assessment of Safety Hardware (MASH). MASH requires the assessment of barriers for both large and small vehicles, hence the use of 2270P and 1100P test vehicles. Impacts of these two vehicles into a New Jersey-shaped concrete median barrier were simulated and compared to full-scale crash tests. The objectives of this effort included (1) demonstrating the viability of the FE models for the new MASH crashworthiness evaluation, and (2) describing the application of the newly developed roadside verification and validation (V&V) procedures to compare simulation results and crash test data. Comparisons of the simulation results and data derived from crash tests using “traditional” methods suggested that the models provided viable results. Further comparisons using the new V&V procedures provided (1) a structured assessment across the multiple factors reflected in PIRT tables and (2) statistical comparisons of the test and simulation results, allowing a more robust validation than previous approaches. These comparisons further confirmed that the new vehicle models were able to effectively replicate impacts for MASH tests and that the V&V procedures provided useful insights and increased confidence in the models.

- Current Status of Subcycling and Multiscale Simulations in LS-DYNA®
Thomas Borrvall (DYNAmore Nordic AB), Dilip Bhalsod, John O. Hallquist, Brian Wainscott (LSTC)
Subcycling in explicit finite element simulations refers to the technique in which a model is partitioned into levels according to the characteristic time step of its constituent finite elements. Each sub-model is then integrated independently of the others using a time step appropriate to that specific sub-model, with the exception of special treatment at the interfaces between sub-models. With the subcycling option in LS-DYNA, up to seven sub-models are automatically generated, each integrated in steps of 1, 2, 4, 8, 16, 32 and 64 times the smallest characteristic time step of the entire model. To allow more control of the partitioning, the user may manually designate parts to be integrated at specific time steps. This is sometimes referred to as multiscale simulation, since it is mainly intended for detailed modeling of critical components in a large simulation model, i.e., different time scales are used in order to save CPU time. This paper presents the current status of this feature in LS-DYNA, including a detailed description of the algorithms involved and the presentation of small to large scale numerical examples.
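The level-partitioning idea described in the subcycling abstract can be illustrated with a short sketch. This is a deliberately simplified assumption of mine, not LS-DYNA's internal algorithm: each element is binned by how many multiples of the smallest characteristic time step it can tolerate, using the seven levels 1, 2, 4, 8, 16, 32 and 64.

```python
# Minimal sketch of subcycling level assignment (illustration only).

LEVELS = [1, 2, 4, 8, 16, 32, 64]

def assign_subcycle_levels(element_dt: dict) -> dict:
    """Map element id -> subcycling factor (largest level not exceeding dt_e / dt_min)."""
    dt_min = min(element_dt.values())
    levels = {}
    for eid, dt_e in element_dt.items():
        factor = 1
        for level in LEVELS:
            if dt_e >= level * dt_min:
                factor = level
        levels[eid] = factor
    return levels

if __name__ == "__main__":
    # Characteristic time steps for a handful of hypothetical elements (seconds)
    dts = {1: 1.0e-7, 2: 2.3e-7, 3: 9.5e-7, 4: 7.1e-6, 5: 6.4e-6}
    print(assign_subcycle_levels(dts))  # e.g. {1: 1, 2: 2, 3: 8, 4: 64, 5: 64}
```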
- Designing a Radioactive Material Storage Cask Against Airplane Crashes With LS-DYNA®
Gilles Marchaud, Louis Vilela, Stéphane Nallet (AREVA TN)
For 50 years, AREVA TN has been supplying customer-focused, innovative transportation and storage solutions for radioactive material with the highest levels of safety and security. Transportation and storage casks are designed to comply with stringent regulations. For instance, the TN NOVA™ system, designed to store used fuel assemblies, is required to withstand the impact of a 20-ton aircraft at a velocity of 215 m/s, despite the extremely small probability of such an event actually occurring. The TN NOVA™ system is composed of a sealed NUHOMS®-69BTH Dry Shielded Canister and a TN NOVA™ Overpack. The overpack has been designed to house the canister during the storage period and to provide it with efficient protection against airplane crash events. To achieve this, LS-DYNA® was invaluable in helping us to improve the preliminary design and to select the most damaging airplane impact configuration. LS-DYNA® analyses also made it possible to design an equivalent missile that causes deformations at least equal to those caused by an airplane crash. The equivalent missile model was updated on the basis of a real test against a concrete wall. Finally, the overpack design was successfully validated by a real test: the equivalent missile impacted a 1/3-scale mock-up of the canister-loaded overpack, fitted with strain gages and accelerometers, and leak tightness was preserved. The present paper will focus on the crashworthiness LS-DYNA® calculations and benchmarks that made this success possible.

- Determining the Material Constants for Mullins Effect in Rubber Part One: Uniaxial
William W. Feng, John O. Hallquist (LSTC)
In this paper, the strain-energy density with a Mullins damage function on unloading and subsequent reloading is considered. We introduce a damage function that has four material constants: two for unloading and two for subsequent reloading. The effect of these constants on unloading and subsequent reloading is studied for uniaxial extension. We determine these four material constants from a set of numerically generated uniaxial extension test data. The mathematical formulation has been implemented in LS-DYNA® for user application and evaluation. This work will be extended to two-dimensional problems, for which a set of biaxial test data will be obtained and analyzed. The second part of this paper will be presented at another LS-DYNA conference.

- Development & Validation of a Finite Element Model for a Mid-Sized Passenger Sedan
D. Marzougui, D. Brown, H.K. Park, C.D. Kan, and K.S. Opiela (George Mason University)
A Finite Element (FE) model of a mid-size passenger sedan was created by reverse engineering to represent that segment of the fleet in crash simulation analyses. A detailed FE model of this vehicle was created to allow application to different types of crash scenarios. The initial version of the model includes detailed and functional representations of the suspension and steering components. Material characteristics and thicknesses of the different components were determined from the manufacturer's information and coupon tests so that the simulated crash behavior would reflect actual impact test results. The model mass and inertial properties were compared to measurements made on the actual vehicle.
Initially, the model was subjected to a series of debugging and verification simulations to ensure that all components of the vehicle were included and appropriately connected. A series of validation simulations followed to compare simulated and actual crash tests. Comparisons to full-scale crash tests indicated that the acceleration pulses at different locations of the vehicle, the deformations in the occupant compartment, and the overall vehicle kinematics are similar. A detailed representation of the vehicle interior components and restraint systems is currently being incorporated into the model to provide opportunities to use FE occupant models in the vehicle and assess injury risks.

- Development of Dynamic Punch Test with DIC for Verification of Simulations with MAT224
Amos Gilat, Jeremy D. Seidt, Jeremiah T. Hammer, and Timothy J. Liutkus (Ohio State University, USA)
Calibration and verification of simulations with LS-DYNA® in which plasticity and failure models like MAT224 are used require data from well-controlled experiments. One example is the simulation of containment during blade-off and disk failure in jet engines. This application requires accurate simulation of the penetration of a projectile at many combinations of impact speeds, projectile-target geometries, and temperatures. To validate simulations of this application, a new dynamic punch test has been developed. In this test, a round disk specimen is attached to the transmitter bar of a compression Split Hopkinson Bar (SHB) apparatus, and a punch is attached to the incident bar of the SHB apparatus. During a test, a compression wave is introduced into the incident bar, which causes the punch to penetrate the specimen. The full-field deformation of the back surface of the specimen is measured during the test using the Digital Image Correlation (DIC) technique. This is possible because the specimen is supported by a slotted tubular adaptor that provides a stereographic view of the deforming back surface. The force measured in the transmitter bar of the SHB apparatus corresponds to the contact force between the punch and the specimen. Various states of stress and different penetration modes (petaling, bending, plugging) can be obtained by changing the specimen thickness and punch geometry. Results from tests on Ti-6Al-4V specimens with punches of various geometries show that the punch geometry greatly influences the punching force and the failure mode. The 3D DIC and force measurements provide data that can be used to construct and validate deformation and failure models.

- Development of Pedestrian Protection for the Qoros 3 Sedan
Niclas Brannberg, Pere Fonts, Chenhui Huang, Andy Piper, Roger Malkusson (Qoros Automotive Co., Ltd)
Pedestrian protection has become an important part of the Euro NCAP consumer test, and to achieve a 5-star rating for crash safety, a good rating in the different pedestrian load cases is imperative. It was decided at the very start of the Qoros 3 Sedan's development program that this should be made a priority. A skilled team of safety engineers defined the layout of the vehicle to support this target, and an extensive simulation program using LS-DYNA was planned to define and validate the design intent without compromising the design of the vehicle, while maintaining all other important vehicle functions.
This paper will provide an insight into part of the journey taken to establish a new vehicle brand in China, fulfilling high European safety standards at an affordable cost, and how Qoros succeeded in this mission through a combination of skills, extensive CAE analysis and, finally, the validation of the recipe during physical testing. The paper will highlight how the high rating for pedestrian protection was obtained and give a short overview of the complete safety development of the Qoros 3 Sedan.

- Development of Researched Moving Deformable Barrier (RMDB) FE Model for Oblique Crash Test
Nobuhisa Ishiyama, Shinobu Tanaka, Satoshi Fukushima, Tsuyoshi Yasuki (Toyota Motor Corporation), Masahiro Saito (Toyota Technical Development Corporation), Jesse Buehler, Brian Tew (Toyota Motor Engineering & Manufacturing North America)
This paper describes a finite element model of the Researched Moving Deformable Barrier (RMDB) used to simulate an oblique crash test. The National Highway Traffic Safety Administration (NHTSA) is currently conducting research on oblique RMDB-to-vehicle (Oblique) testing. The RMDB, which consists of an aluminum honeycomb and an outer cladding sheet, exhibited two deformation features after the oblique crash test. The first was cracks observed on the outer cladding sheet. The second was compressive deformation, mainly observed in the 0.724 MPa aluminum honeycomb. The RMDB FE model was developed based on the SAE paper. The aluminum honeycomb had two layers with different stiffnesses and was modeled with shell elements to capture the compressive deformation. The outer cladding sheet was modeled with tied overlapping shell elements in order to simulate crack propagation. The RMDB FE model was validated through an impactor test and a full car test, and the results of the analyses using the model closely matched the test results. The impactor model was developed to conduct impactor component testing: the aluminum honeycomb was glued to the jig and the impactor crashed into the aluminum honeycomb. The resulting fracture line on the outer cladding sheet and the impactor acceleration data were correlated with the test. Next, full car testing was performed with reference to the SAE paper. The RMDB and car kinematics, velocity, structural deformation, and body intrusions largely matched those from the test. Cracks generally corresponding to those in the test were observed in the outer cladding sheet in the analysis results. The compressive deformation of the aluminum honeycomb was also close to the test result. Investigation of the effects of crack propagation in the outer cladding sheet revealed that the deformation in the upper aluminum honeycomb differed depending on whether or not the outer cladding sheet had cracks. Thus, reproducing the outer cladding sheet cracks is effective in simulating RMDB deformation.

- Evaluation of ATD Models for Simulating Occupant Responses under Vertical Impact
Ming Cheng, Doug Bueley, Lock-Sui Chin, Jean-Philippe Dionne, Neil Wright, Aris Makris (Med-Eng Holdings, LLC)
In addition to the traditional threats to vehicle occupants from frontal crash and side impact, passengers of military vehicles can also be subjected to vertical shock loading on their thoraco-lumbar spine and legs arising from the detonation of roadside bombs or landmines. In such explosion events, the vehicle hull is subjected to high-level transient momentum loading, resulting in an acceleration impulse that transfers to the occupant through the vehicle floor and the seat.
When conducting experimental blast testing, full-scale anthropomorphic test devices (ATDs) are used to evaluate the survivability potential of passengers. Equivalent investigations involving ATD models are also conducted numerically. However, existing ATD numerical models have been developed mainly for frontal crash and side impact simulations, and have not been validated against the vertical impact loading experienced by military vehicle occupants. The purpose of this paper is thus to compare blast loading results obtained with several ATD numerical models in a representative scenario. As a baseline, a simple drop test was conducted experimentally with a 50th percentile male ATD sitting on a platform, to simulate the vertical impact from a blast. Through the use of this simple structure, uncertainties arising from complicated seat and test fixture structures were avoided. During the test, the assembly consisting of the ATD and platform was placed in a controlled drop tower facility to generate an impact pulse on the ATD. The pelvis acceleration, lower lumbar force, and neck force were recorded. Simulations of this test were then performed with LS-DYNA® using different numerical ATD models. It was found that none of the numerical ATD models investigated could generate sufficiently accurate responses when compared to the experimental test with the physical ATD. To extend the observations from these comparisons to more practical loading scenarios, blast-off test simulations were also conducted and the results were compared with the signals recorded in an experimental blast-off test. It is thus concluded that further enhancements to numerical ATD models are required for simulating occupant responses under vertical loading.

- Evaluation of LS-DYNA® Corpuscular Particle Method for Side Impact Airbag Deployment Applications
Chin-Hsu Lin, Yi-Pen Cheng (General Motors), Jason Wang (LSTC)
A uniform pressure method in LS-DYNA, i.e. one with no pressure variation over the bag surface, has commonly been used to simulate airbag deployment and the interaction of the airbag with the occupants. The newly developed LS-DYNA CPM (Corpuscular Particle Methodology) has recently gained recognition and acceptance because it considers the effects of transient gas dynamics and thermodynamics by using a particle to represent a set of air or gas molecules, and a set of particles to represent the entire volume of air or gas in the space of interest. This innovative method, however, has yet to be fully utilized and applied with confidence in airbag deployment simulation without systematic tests and validations to avoid the non-physical tuning factors traditionally applied to uniform pressure airbag finite element models. In this paper, inflator closed and vented tank tests, a static airbag deployment test, and linear impactor tests with various configurations and impact speeds are systematically conducted and then correlated with a CPM airbag model to determine whether the methodology can be applied to all the tests and whether any tuning factors should be applied in the process. This LS-DYNA particle method has been fully investigated in this systematic study by correlating it with a comprehensive set of inflator tank tests, static airbag deployments, and rigid linear impactor tests. The correlations start from inflator closed and vented tank tests to verify the provided inflator characteristics, i.e. the mass flow rate and temperature curves.
The inflator characteristics are then employed in a static airbag deployment simulation to determine the airbag fabric heat convection coefficient, which is adjusted in this simulation to match the test pressure profile; this is the only parameter tuned to match the test pressure. This airbag model is then used to simulate the linear impactor tests. With these systematic validations and correlations carried out to avoid the use of tuning factors, the airbag model yields a good match between simulation and test for the overall airbag internal pressure and impactor deceleration histories in all the linear impactor tests conducted. The effects of inflator variations are also studied to illustrate the potential bounds of the deceleration and airbag chamber pressure in impacts.

- Fracture Prediction and Correlation of AlSi Hot Stamped Steels with Different Models in LS-DYNA®
G. Huang, H. Zhu, S. Sriram (ArcelorMittal Global R&D E. Chicago Center), Y. Chen, Z. C. Xia, O. Faruque (Ford Motor Company)
Reliable predictions of the fracture behavior in a crash event have become ever more important in recent years, as they enable a reduction in physical prototype testing and an acceleration of vehicle development time while maintaining high safety standards. The increasing use of even stronger grades of Advanced High-Strength Steels (AHSS), such as hot-stamped boron steels, poses particular challenges to fracture modeling due to their microstructures and processing conditions. This paper provides a brief description of the different fracture criteria, and their implementations currently available in LS-DYNA, for modeling ductile failure. The focus is the determination of parameters for selected fracture criteria for AlSi-coated press-hardenable steels using calibration tests at the coupon level, supported by FEA simulations.

- Further Advances in Simulating the Processing of Composite Materials by Electromagnetic Induction
M. Duhovic, M. Hümbert, P. Mitschang, M. Maier (Institut für Verbundwerkstoffe GmbH), P. L’Eplattenier, I. Çaldichoury (LSTC)
Continuous induction welding is an advanced material processing method with very high potential to provide a flexible, fast and energy-efficient means of joining thermoplastic composites to themselves and to metal alloys. However, optimization of the process is very difficult, as it involves the interaction of up to four different types of physics. In the previous installments of this work, static plate heating and continuous induction welding simulations of carbon fiber reinforced thermoplastic (CFRTP) plates were presented, looking in particular at point temperature measurements and 3D surface plots of the in-plane temperature distribution across the entire width of the joint, both on the top surface and at the joining interface of the laminate stack. In this paper, the static plate heating tests are revisited and the importance of the through-thickness temperature behavior is considered. For a single plate, the through-thickness temperature profile follows a predictable pattern when using an induction frequency producing a skin depth of the same thickness as the plate. For two stacked but unconnected plates, the temperature profile becomes less obvious, in particular for plate stacks of different thicknesses.
By correctly simulating the through-thickness temperature profile, the heating behavior can ultimately be controlled via top-surface air-jet cooling together with the other induction equipment parameters, giving an optimum heating effect at the joining interface. In addition, further developments in the induction heating electromagnetism module available in LS-DYNA® R7 are examined, including an orthotropic electromagnetic material model as well as electrical contact, its resulting contact resistance and its effect on the overall heating behavior.

- H-Point Machine and Head Restraint Measurement Device Positioning Tools and Validation
Brian Walker, Liliana Cowlam, Jamie Dennis (Arup), Simon Albery, Nicolas Leblanc (Futuris)
It is essential for seat manufacturers to be able to accurately predict the H-Point position of a seat during the design stage, i.e. before the seat is actually built. This can be estimated empirically, but that method is usually not sufficient to accurately determine how the manikin’s position is affected by subtle yet complex interactions within the seat and its trim. To aid this process, Arup have developed a positioning tool kit for use in conjunction with the Oasys PRIMER software. The positioning tool kit calculates the H-Points of automotive seats as well as the backset measurement, thus providing the head restraint scores. The benefit to the seat engineer of using the Oasys HPM positioning tool is increased confidence in the H-Point of a new seat design, and an opportunity to adjust the design to minimise the H-Point variation that may be measured in test. This improved understanding of the seat will allow more accurate predictions of whiplash performance and of other crash test simulations where dummy positioning is critical.

- High Strain Rate Testing and Modeling of Polymers for Impact Simulations
Jorgen Bergstrom, David Anderson, David Quinn, Eric Schmitt, Stuart Brown, Samuel Chow (Veryst Engineering, LLC)
The increased use of polymeric materials in impact and high strain rate applications is motivating the use of impact simulations during design. However, the simulation of polymer impacts requires stress-strain behavior at high strain rates that is difficult to measure. Even when appropriate data are collected, accurate high strain rate constitutive models need to be fitted to the data before being incorporated into a simulation code. This article presents a testing and constitutive modeling process using Veryst’s PolyUMod® and MCalibration® to achieve accurate impact simulations, using polyether ether ketone (PEEK) as the example material. Low and high strain rate data are presented over a large strain rate range. Validation of the developed material model is performed by simulating a drop test in LS-DYNA® and comparing with measured drop test data.

- Implementation of a New Continuum Damage Mechanics Model for Composites in LS-DYNA®
Danghe Shi, Xinran Xiao (Michigan State University)
A large amount of work has been done to simulate the crashworthiness of composite structures, particularly to evaluate the deformation behavior and to determine the energy-absorbing efficiency. However, the existing simulation models generally need to introduce many non-measurable parameters, which limits their practical application. This work focuses on the implementation and development of a thermodynamically consistent continuum damage mechanics (CDM) model, the Ladevèze model.
This model takes into account stiffness recovery and inelastic strains, including both damage and plastic strains. All the parameters needed in this model can be determined by experiment. Modified Ladevèze models were developed in order to adapt different damage and plasticity evolution laws to different fabric forms of composites. Three different versions of the Ladevèze model were implemented in LS-DYNA and their predictive abilities were studied.

- Improving Performance of LS-DYNA® Crash Simulation with Large Deformation by Modifying Domain Decomposition
Shota Yamada (Fujitsu Limited)
In the modern high performance computing era, parallel computing has been the main route to improving computational speed. In the past we found that simply increasing the degree of parallelism does not guarantee better performance, especially when simulating large deformations using hundreds or more parallel processors. From that experience, we found that improving computational performance requires tackling the load imbalance in calculation cost among processors and seeking a better domain decomposition strategy. In general, the calculation cost increases with the extent of deformation. To reduce the imbalance in calculation cost among processors, we would ideally like to decompose the domain into subdomains with the same extent of deformation on all processors. Even if that were possible, such an ideal decomposition would be difficult to achieve for cases where only local deformation occurs, as in crash simulation. We have therefore developed a new, enhanced method that decomposes the model by distributing the calculation cost more uniformly in crash simulation. In this paper, I describe this enhanced method, present the performance improvements obtained with it for several crash simulation models, and discuss its efficiency.

- Improving the Precision of Discrete Element Simulations through Calibration Models
Adrian Jensen, George Laird (Predictive Engineering), Kirk Fraser (Predictive Engineering, University of Quebec at Chicoutimi)
The Discrete Element Method (DEM) is fast becoming the numerical method of choice for modelling the flow of granular material. The mining, agriculture and food handling industries, among many others, have been turning their attention towards this powerful analysis technique. In this paper, we present three simple calibration modeling tactics that should be the starting point for every DEM simulation of dry and semi-dry granular material. The three tests are designed to be as simple as possible in order to minimize the run time of the test simulations. The tests are developed to be run in a specific order, providing a sequential calibration procedure that does not involve multiple unknown variables in each test. Other standard testing methods, such as the rotating drum and the shear cell (Jenike) tests, are briefly discussed. The complexity of these tests does not lend itself well to initial numerical model calibration, as each test involves many unknown variables; they are mentioned, however, as an extension of the three basic test models. The paper will help analysts to increase the precision and validity of their discrete element modelling work.
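To illustrate the sequential calibration idea in the abstract above, here is a conceptual Python sketch; the test names, surrogate responses and parameter values are hypothetical and stand in for actual DEM runs. The assumption is that each simple test is sensitive to one unknown at a time, so parameters are fitted in order rather than simultaneously.

```python
# Conceptual sketch of sequential DEM calibration (illustration, not the authors' tests).

def calibrate_sequentially(tests, initial_params):
    """tests: list of (param_name, run_test(params) -> measured, target, candidates)."""
    params = dict(initial_params)
    for name, run_test, target, candidates in tests:
        # Pick the candidate value that brings this test's response closest to the target.
        best = min(candidates, key=lambda value: abs(run_test({**params, name: value}) - target))
        params[name] = best
    return params

if __name__ == "__main__":
    # Hypothetical surrogate responses standing in for DEM runs of three simple tests.
    angle_of_repose = lambda p: 20.0 + 40.0 * p["particle_friction"]   # degrees
    wall_drag       = lambda p: 5.0 + 10.0 * p["wall_friction"]        # arbitrary units
    rebound_height  = lambda p: 0.9 * p["restitution"]                 # metres

    tests = [
        ("particle_friction", angle_of_repose, 32.0, [0.1, 0.2, 0.3, 0.4, 0.5]),
        ("wall_friction",     wall_drag,        8.0, [0.1, 0.2, 0.3, 0.4]),
        ("restitution",       rebound_height,   0.45, [0.3, 0.4, 0.5, 0.6]),
    ]
    print(calibrate_sequentially(tests, {"particle_friction": 0.3, "wall_friction": 0.3, "restitution": 0.5}))
```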
- Increasing LS-DYNA® Productivity on SGI Systems: A Step-by-Step Approach
Olivier Schreiber, Tony DeVarco, Scott Shaw, Aaron Altman (SGI)
SGI delivers a unified compute, storage and remote visualization solution to our manufacturing customers that reduces overall system management requirements and costs. LSTC has now integrated explicit and implicit solver technologies into a single hybrid code base, allowing seamless switching from large time step transient dynamics to linear statics and normal modes analysis. There are multiple computer architectures available from SGI to run LS-DYNA. They can all run LSTC solvers using Shared Memory Parallelism (SMP), Distributed Memory Parallelism (DMP) and their combination (hybrid mode) as supported by LS-DYNA. Because the computer resource requirements are different for explicit and implicit solvers, this paper studies how advanced SGI computer systems, ranging from multi-node distributed memory clusters to shared memory servers, address the computer resources used and what tradeoffs are involved. This paper also outlines the SGI hardware and software components for running LS-PrePost® via SGI VizServer with NICE Software. CAE engineers at the departmental level can now allow multiple remote users to create, collaborate on, test, optimize, and verify new complex LS-DYNA simulations on a single system without moving their data.

- Inelastic Transversely Isotropic Constitutive Model for High Performance Polymer Fibers
Subramani Sockalingam, Michael Keefe, John W. Gillespie Jr. (University of Delaware)
High performance polymer fibers such as Kevlar, Spectra and Dyneema are widely used in ballistic impact applications. Under transverse compression at finite strains, these fibers exhibit nonlinear inelastic behavior. The role of transverse compression during ballistic impact is not very well understood. In this work we implement a transversely isotropic inelastic constitutive model as a user-defined material model (UMAT) in LS-DYNA®. A plasticity approach is used to model the material nonlinearity, and a pseudo-elastic approach is used for the large residual strains in the transverse fiber plane. Based on the experimental results, the material nonlinearity and inelasticity are decoupled from the fiber direction. The UMAT predictions for a single Kevlar KM2 fiber under transverse compression are compared to the experimental load-deflection response under monotonic and cyclic loading.

- Interaction Methods for the SPH Parts (Multiphase Flows, Solid Bodies) in LS-DYNA®
Jingxiao Xu, Jason Wang (LSTC)
Smoothed particle hydrodynamics is a meshfree, Lagrangian particle method for modeling fluid flows and solid bodies. It has been applied extensively to multiphase flows, heat conduction, high explosive problems and so on. In this paper, the different interaction methods available in LS-DYNA for SPH parts covering a wide range of densities and material properties are studied and compared. Node-to-node contact works well for the interaction between two SPH parts with a high density ratio, while the standard SPH interpolation method has better accuracy around the interfaces when the two SPH parts have similar densities and material properties. Different interaction approaches can be combined in one model to achieve the best results. The interactions between Lagrangian elements and SPH particles are also discussed. Some examples are presented to show how to use the different approaches with different combinations of LS-DYNA keywords.
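For readers unfamiliar with the standard SPH interpolation mentioned in the abstract above, the following 1D sketch shows the textbook form, f(x) ≈ Σ_j (m_j/ρ_j) f_j W(x − x_j, h), using a generic cubic-spline kernel. This is an illustration under those assumptions, not LS-DYNA's implementation.

```python
import math

# Minimal 1D sketch of standard SPH kernel interpolation (textbook form, illustration only).

def cubic_spline_W(r: float, h: float) -> float:
    """Monaghan cubic spline kernel with 1D normalization 2/(3h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def interpolate(x, positions, masses, densities, values, h):
    """Kernel-weighted sum over neighbouring particles."""
    return sum(m / rho * f * cubic_spline_W(x - xj, h)
               for xj, m, rho, f in zip(positions, masses, densities, values))

if __name__ == "__main__":
    xs = [0.0, 0.1, 0.2, 0.3, 0.4]       # particle positions
    ms = [0.1] * 5                        # particle masses
    rhos = [1.0] * 5                      # particle densities (mass per length in 1D)
    fs = [xj**2 for xj in xs]             # sampled field f(x) = x^2
    print(interpolate(0.2, xs, ms, rhos, fs, h=0.1))
```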
- Introduction of Die System Module in LS-PrePost®
Chunjie Zhang, Philip Ho, Xinhai Zhu (LSTC)
The die system module (DSM) has been developed to generate tool geometry at an early stage and to evaluate the results by forming simulation. The DSM graphical user interface is designed to provide metal forming users with a tool to generate die faces more effectively. The main focus of the module is on easy modification and reuse of existing designs. This paper illustrates the algorithms and some special features of DSM.

- Introduction of Rotor Dynamics Using Implicit Method in LS-DYNA®
Liping Li, Roger Grimes (LSTC)
Rotor dynamics is commonly used to analyze the behavior of structures ranging from jet engines and steam turbines to auto engines and computer disk drives. In such applications, the amplitude of structural vibration can become excessive when the speed of rotation approaches the system’s critical speed. This paper introduces a first implementation of rotor dynamics in LS-DYNA and presents a validation study of this newly implemented feature against existing theoretical studies as well as against another finite element software package, ANSYS. The structural vibration responses of four different models with beam, shell and solid elements, the shaft whirling orbits and the Campbell diagrams are compared. The results from LS-DYNA show very good agreement with the theoretical results and the ANSYS simulation results, suggesting that the LS-DYNA simulation is accurate for the cases investigated in this paper.

- Investigation of Delamination Modeling Capabilities for Thin Composite Structures in LS-DYNA®
S.A. Muflahi, G. Mohamed, S.R. Hallett (University of Bristol)
Predictive capabilities for simulating the initiation and propagation of delamination in thin composite laminates have been investigated. Different element formulations (3D solids, 2D shells, and 3D thick shells), cohesive fracture models (those commercially available in LS-DYNA 971 v6.1 and a *USER_DEFINED constitutive behavior) and stacking procedures have been applied to representative composite models of increasing complexity to demonstrate their response, delamination failure modes and computational efficiency. It has been shown that stacks of 2D shell elements with nodal offsets, combined with a user-defined constitutive model for the cohesive elements, can retain many of the necessary predictive attributes of delamination-dominated failure while providing the superior computational efficiency and flexibility required for industrial component-scale design.

- Isogeometric Analysis in LS-DYNA®
Attila P. Nagy, David J. Benson (Dept. of Structural Engineering, UCSD), Stefan Hartmann (DYNAmore GmbH)
Two new areas of development of isogeometric analysis in LS-DYNA are presented. The first, which is currently available, is mass scaling. The second, which will become available sometime during the next year, is the development of efficient integration methods for trimmed NURBS, which will allow a much more direct connection between CAD and analysis in LS-DYNA. Industrial applications of both are presented. Metal stamping is one of the most cost-effective manufacturing methods for producing precision parts. Isogeometric analysis, which uses the same basis functions as the CAD programs used to design the shape of the part, is an attractive alternative to traditional finite element analysis for metal stamping. Mass scaling, and the underlying stable time step estimates, which are commonly used in metal stamping simulations, are presented for isogeometric analysis.
Additionally, a numerical algorithm is proposed to construct efficient quadrature rules for trimmed isogeometric elements as part of the standard pre-processing step. The motivation is to overcome the proliferation of quadrature points observed in competing adaptive and tessellation-based integration approaches. The constructed integration rule is considered optimal in the sense that the final quadrature points and weights satisfy the moment-fitting equations for the trimmed domain up to a predefined tolerance. The resulting quadrature points lie in the interior of the trimmed domain and positivity of the weights is preserved. The efficiency and accuracy of the scheme are assessed and compared to competing integration techniques. Selected problems of elastostatics and elasto-plastic dynamics are used to further demonstrate the validity of the approach.

- JSD - Introduction of Integrated Seat Design System for LS-DYNA®
Noriyo Ichinose (JSOL Corporation)
Recently, vehicle modelling has become more detailed and complex, and automotive companies are increasingly evaluating dummy injury criteria directly in crash analysis. To evaluate injury criteria, a more detailed seat model is needed, because the injury criteria depend strongly on the seat structure and restraint system. In addition, many types of LS-DYNA analysis are carried out during one seat design process (e.g. frontal impact, side impact, whiplash, and so on). Because these analyses use different dummy models, different loading conditions and sometimes different dummy/seat positions, the engineer needs to understand all the regulations and spend considerable effort preparing the input data. To reduce this effort while meeting the demand for more detailed seat models, an integrated seat design system named JSD has been developed.

- Key Parameters in Blast Modeling Using 2D to 3D ALE Mapping Technique
Anil Kalra, Feng Zhu, King H Yang, Albert I King (Wayne State University)
A numerical simulation is conducted to model explosive detonation and blast wave propagation in an open air field. The mesh size and boundary conditions, as well as the size of the air domain, are sensitive variables which may significantly affect the predicted pressure wave magnitude and rise time in blast simulations. The current approach focuses on determining the optimal key parameters to predict the blast wave accurately. A 2D to 3D mapping is performed to save computational time: the blast-induced high-pressure waves are generated using the Arbitrary Lagrangian-Eulerian (ALE) formulation in the 2D domain and then mapped into a 3D space. The simulation results show that the aforementioned parameters govern the pressure wave form in both the 2D and 3D cases. A two-step mesh sensitivity study is performed: a parametric study is first conducted in the 2D air domain, followed by a second one in the 3D domain using the 2D to 3D mapping. After that, as a case study in biomedical applications, an anatomically detailed pig head finite element model is integrated with the 3D air domain to calculate the pressure gradient change inside the brain due to the blast wave. The model predictions are compared with experimental data, and it is shown that the modeling strategy used can capture the biomechanical response of the surrogate with reasonable accuracy and reduced computational cost.
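The 2D-to-3D mapping concept can be sketched in a few lines: an axisymmetric field stored on an (r, z) grid is sampled at each 3D point by converting (x, y, z) to (r, z). The sketch below is a conceptual illustration of mine, using nearest-neighbour lookup and made-up field values; it is not LS-DYNA's mapping implementation.

```python
import math

# Conceptual sketch of mapping an axisymmetric 2D field onto 3D points (illustration only).

def nearest(value, axis):
    """Index of the axis entry closest to value."""
    return min(range(len(axis)), key=lambda i: abs(axis[i] - value))

def map_2d_to_3d(r_axis, z_axis, field_2d, points_3d):
    """field_2d[i][j] holds the value at (r_axis[i], z_axis[j]); returns values at 3D points."""
    mapped = []
    for x, y, z in points_3d:
        r = math.hypot(x, y)
        mapped.append(field_2d[nearest(r, r_axis)][nearest(z, z_axis)])
    return mapped

if __name__ == "__main__":
    r_axis = [0.0, 0.5, 1.0, 1.5]
    z_axis = [0.0, 0.5, 1.0]
    # Hypothetical overpressure field decaying with radius (arbitrary units)
    field = [[10.0, 9.0, 8.0], [6.0, 5.5, 5.0], [3.0, 2.8, 2.5], [1.0, 0.9, 0.8]]
    print(map_2d_to_3d(r_axis, z_axis, field, [(0.3, 0.4, 0.5), (1.0, 1.0, 1.0)]))
```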
- LS-DYNA® ALE/FSI Recent Developments
Hao Chen, Jason Wang, Ian Do (LSTC)
LS-DYNA ALE (Arbitrary Lagrangian-Eulerian method), equipped with its own fluid-structure interaction capability, aims to solve a class of transient engineering problems characterized by large momentum and energy transfer between Lagrangian structures and ALE fluids. The LS-DYNA ALE multi-material formulation solves multiple fluid species in one ALE mesh. The fluid interfaces are tracked internally by our interface reconstruction algorithms at each advection cycle. Our fluid-structure interaction algorithm is then used to study the interactions between the structures and those individual fluids. The FSI solver, invoked by the *CONSTRAINED_LAGRANGE_IN_SOLID card, couples ALE fluid elements with Lagrangian structure segments. The multi-material capability, together with its embedded coupling to structures, has been utilized by users in various engineering application areas such as tank sloshing, tire hydroplaning, bottle dropping, high explosive blasting, etc. Several recent developments in the LS-DYNA ALE/FSI package and their engineering applications are presented here.

- LS-DYNA® Big Data Processing, Mining and Visualization using d3VIEW
Suri Bala (LSTC)
LS-DYNA data processing, storage and visualization consume a great deal of time and effort for every engineer and scientist who uses simulations to aid product development. This paper reviews commonly used workflows in simulation-based product design to identify areas where d3VIEW can significantly reduce the time and effort spent on data-intensive tasks. In conclusion, this paper demonstrates by example how d3VIEW provides advanced capabilities in data extraction, organization and visualization of LS-DYNA simulations to expedite the process of going from data to decision, while providing extensive capabilities for mining historical LS-DYNA simulations.

- LS-DYNA® HYBRID Studies using the LS-DYNA® Aerospace Working Group Generic Fan Rig Model
Gilbert Queitzsch (Federal Aviation Administration), Cing-Dao Kan, Kivanc Sengoz (George Mason University), Thomas J. Vasko (Central Connecticut State University)
In addition to the well-known parallel versions of LS-DYNA, the symmetric multiprocessing (SMP) version and the massively parallel processing (MPP) version, LSTC offers an LS-DYNA HYBRID version that combines these two parallel programming models in a single code. The development of LS-DYNA HYBRID, which started in 2011, is focused on obtaining high code performance in large cluster environments. The intent of the current study is to investigate LS-DYNA HYBRID performance, scalability, and output consistency using a modified LS-DYNA Aerospace Working Group Generic Fan Rig Model. The original model is an outcome of a Federal Aviation Administration (FAA) funded university project, and it is used as a test case for the LS-DYNA Aerospace Working Group Test Case Suite.

- LS-DYNA® Performance in Side Impact Simulations with 100M Element Models
Alexander Akkerman, Yijung Chen, Bahij El-Fadl, Omar Faruque, Dennis Lam (Ford Motor Company)
LS-DYNA has been used for vehicle crash simulations for many years. The models have increased in size over the years but in most cases do not exceed a few million elements. However, recently developed material models require much greater levels of refinement, resulting in much larger models, perhaps as large as 100M elements.
Simulating models of the order of 100M elements in turn requires much higher levels of scalability in order to be feasible in the vehicle development process. This paper analyzes LS-DYNA performance with a 100M-element sled side impact model running on 1,000 and more CPUs with various Intel processors and InfiniBand interconnect technologies.

- LS-DYNA® R7: Free Surface and Multi-phase Analysis for Incompressible Flows
Facundo Del Pin, Iñaki Çaldichoury, Rodrigo R. Paz (LSTC)
LS-DYNA R7 introduced an incompressible flow solver which can track flow interfaces such as free surfaces or the interface between two fluids. Several industrial applications may be simulated with these features. In the area of free surface flows, the effects of the lighter phase are neglected; i.e. in the case of water-air interfaces, the air can be ignored if its effect does not significantly change the dynamics of the water phase. Some typical problems are wave propagation, dam break, sloshing and green water on decks. On the other hand, problems where both phases should be taken into account include mixing problems, bubble dynamics and lubrication problems, among others. In this work, examples of both types of problem will be presented and explained. The set-up process as well as the post-processing will be detailed. Validation examples will be shown and compared to analytical or experimental solutions. Finally, the current development status of some of the multiphase features will be discussed.

- LS-DYNA® R7: The ICFD Solver for Conjugate Heating Applications
Iñaki Çaldichoury, Facundo Del Pin, Rodrigo R. Paz (LSTC)
LS-DYNA version R7 introduced an incompressible flow solver (the ICFD solver) which may run as a standalone CFD solver for purely thermal fluid problems, or which can be strongly coupled, using a monolithic approach, with the LS-DYNA solid thermal solver in order to solve complex conjugate heat transfer problems. Some validation results for conjugate heat transfer analyses were presented at the 9th European LS-DYNA Conference (2013). This paper focuses on a new output quantity, the heat transfer coefficient ‘h’, which has recently been implemented in the ICFD solver. Its description, calculation and uses are presented, together with some validation results.

- LS-DYNA® Scalability Analysis on Cray Supercomputers
Ting-Ting Zhu (Cray Inc.), Jason Wang (LSTC)
For the automotive industry, car crash analysis by finite elements is crucial to shortening the design cycle and reducing costs. To increase the accuracy of the analysis, in addition to improvements in finite element techniques, finer finite element meshes are used to better represent the car geometry. The use of finer meshes, coupled with the need for fast turnaround, has put increased demands on the scalability of the finite element analysis. In this paper, we use the car2car model to measure LS-DYNA scalability on Cray® XC30™ supercomputers, an Intel® Xeon® processor-based system using the Cray Aries network. The scalability of different functions in LS-DYNA at high core counts is analyzed, and the MPI communication pattern of LS-DYNA is studied. We also explore the performance difference between using one thread per core and two threads per core. Finally, we explore the performance impact of using large Linux Huge Pages.
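Scalability results of the kind discussed above are usually reduced to speedup and parallel efficiency from wall-clock timings. The short sketch below shows that reduction; the core counts and times are invented for illustration and are not measurements from the paper.

```python
# Illustrative reduction of wall-clock timings to speedup and parallel efficiency.

def speedup_and_efficiency(timings: dict) -> dict:
    """timings: {core_count: elapsed_seconds}; speedup is relative to the smallest core count."""
    base_cores = min(timings)
    base_time = timings[base_cores]
    results = {}
    for cores, elapsed in sorted(timings.items()):
        speedup = base_time / elapsed
        results[cores] = (speedup, speedup / (cores / base_cores))
    return results

if __name__ == "__main__":
    timings = {64: 3600.0, 128: 1900.0, 256: 1050.0, 512: 640.0}  # hypothetical runs
    for cores, (s, e) in speedup_and_efficiency(timings).items():
        print(f"{cores:4d} cores: speedup {s:5.2f}, efficiency {100 * e:5.1f} %")
```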
- LS-DYNA: Status and Development Plan
John Hallquist, Yun Huang, Iñaki Çaldichoury, Jason Wang (LSTC)
Recent enhancements – John Hallquist; Linear solver – Yun Huang; LS-PrePost: ICFD & EM – Iñaki Çaldichoury; Particle methods – Jason Wang.

- LS-OPT®: New Developments and Outlook
Nielen Stander and Anirban Basudhar (LSTC)
New features available in LS-OPT® 5.1 are discussed and illustrated. The main features include three new solver types, parallel feedforward neural networks, seamless variable de-activation for iterative methods, exporting of selected metamodel formulae, subregion-based Global Sensitivity Analysis, enhanced histogram visualization features and Viewer-based categorization of simulation results.

- Manufacturing the London 2012 Olympic Torch
Trevor Dutton, Paul Richardson (Dutton Simulation Ltd)
A key part of the build-up to the London 2012 Olympic Games was the Torch relay, for which each of the 8,000 runners required a Torch. The design of the Torch comprised inner and outer skins of perforated aluminium formed into a triangular cross-section, which flared out towards the top to house the gas burner. Dutton Simulation was asked to assist with the development of a process to manufacture the skins to the required accuracy and quality of finish; some of the key technical challenges are described in the paper. The first task was to develop a blank shape for the two forms and then confirm these with incremental forming simulation (using eta/DYNAFORM with the LS-DYNA® solver). The validated shapes – both the profile and the thousands of holes – were then cut by laser. In conjunction with developing the blank, the optimum forming process also had to be determined, to form the perforated sheet to the accuracy required for laser welding of the joining seam. Several process concepts were explored before arriving at a four-stage method. With aluminium as the raw material, springback was already expected to be a factor; this was compounded by the holes, which further reduce the material stiffness, and by the relatively low strain in the form due to the large radii. Nonetheless, the geometry had to be formed to a very tight tolerance, both for the weld process and to create a result free of cosmetic defects. LS-DYNA was used to determine the springback at each step of the forming process, and the springback compensation solution was used to provide the correction. DYNAFORM’s tools for cosmetic defect detection (stoning, reflect lines) were employed to check the result to the highest level of detail.

- Maximizing Cluster Utilization for LS-DYNA® Using 100Gb/s InfiniBand
Pak Lui, Gilad Shainer, Scot Schultz (Mellanox Technologies, Inc.)
From concept to engineering and from design to test and manufacturing, the automotive industry relies on powerful virtual development solutions. Crash simulations are performed in an effort to secure quality and safety and to accelerate the development process. As the models become more complex in order to better simulate the physical behavior in crashes, the compute clusters that run them also need to become more powerful to meet the demands of simulating these more elaborate models. Among the various components in a compute cluster, the high-performance network interconnect is an integral factor and is key to making the simulation run efficiently. The Mellanox Connect-IB™ InfiniBand adapter has introduced a novel high-performance and scalable architecture for high-performance clusters.
The architecture was designed from the ground up to provide high performance and to maximize scalability for the largest supercomputers in the world, today and in the future. This paper demonstrates the new features and technologies enabled by the Connect-IB InfiniBand adapters. Besides its raw ability to deliver sub-microsecond latency and a full bandwidth of over 100 Gb/s using two FDR links, its hardware capabilities also include CPU offloads, MPI collective operation acceleration and message transport services that enable LS-DYNA to perform at scale. This paper also demonstrates running multiple parallel simulations to achieve higher cluster productivity, in an effort to exploit this new level of performance available from the network.

- Meso-Scale FEA Modeling to Simulate Crack Initiation and Propagation in Boron Steel
Yijung Chen, Omar Faruque, Cedric Xia, Alex Akkerman, Dennis Lam (Ford Motor Company)
This paper focuses on the characterization and prediction of potential crack initiation and propagation in a boron-steel component under extreme impact load, utilizing meso-scale FE modeling (0.2 mm solid elements) with the MIT MMC (modified Mohr-Coulomb) fracture criterion. The MMC fracture criterion is implemented through LS-DYNA® *MAT224 and *MAT_ADD_EROSION with the GISSMO option. A finite element mesh with a total number of elements close to 100 million is created to investigate the accuracy of the MMC criterion in predicting fracture of a boron-steel component in a dynamic impact test. The CAE results are compared to sled test results for system force-deflection, part deformation mode, and crack initiation and propagation.

- Methodologies and Examples for Efficient Short and Long Duration Integrated Occupant-Vehicle Crash Simulation
R. Reichert, C.-D. Kan, D. Marzougui, U. Mahadevaiah, R. Morgan, C.-K. Park, F. Tahan (George Mason University)
Integrated occupant-vehicle analysis plays an important role in vehicle and occupant safety development. Car manufacturers use detailed full-system models consisting of the vehicle structure, interior, restraint systems, barrier, and occupant to develop safety measures and assure compliance with legal requirements, good ratings in consumer information tests, and vehicle safety in real-life crash configurations. Suppliers use sub-system models to design and optimize interior and restraint system components with respect to various component and system requirements. This paper describes efficient methodologies for fully integrated occupant-vehicle simulations as well as sub-system evaluations using prescribed motion in LS-DYNA®. Examples include short-duration impacts, such as frontal and side impact configurations with termination times of less than 200 milliseconds, and long-duration impacts, such as rollover events with termination times of 400 to 2500 milliseconds. A frontal offset and a frontal oblique impact were simulated using a Toyota Yaris model, side impact simulations were conducted with a Ford Taurus model, and a Ford Explorer model was used for the rollover evaluations. The occupant models used include a Hybrid III, a THOR (Test device for Human Occupant Restraint), a US side impact dummy, and a WorldSID dummy, as well as a THUMS (Total HUman Model for Safety) human model. Simulation results are compared to available full-scale crash test data. Parametric studies have been conducted to examine the influence of different input and output parameters when using sub-models with prescribed motion.
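Sub-system evaluations with prescribed motion rely on time histories extracted from a full-vehicle run. As an illustration of the data preparation only (my own example, not the authors' workflow), the sketch below integrates a sampled acceleration pulse into the velocity history that could be prescribed on a sub-model boundary; the pulse values are hypothetical.

```python
# Illustrative sketch: integrating an acceleration pulse into a prescribed velocity history.

def integrate_pulse(times, accelerations, v0=0.0):
    """Trapezoidal integration of acceleration (m/s^2) into velocity (m/s)."""
    velocities = [v0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        velocities.append(velocities[-1] + 0.5 * (accelerations[i] + accelerations[i - 1]) * dt)
    return velocities

if __name__ == "__main__":
    # Hypothetical 10 ms deceleration pulse sampled every 1 ms
    t = [i * 1.0e-3 for i in range(11)]
    a = [0.0, -50.0, -150.0, -300.0, -400.0, -350.0, -250.0, -150.0, -80.0, -30.0, 0.0]
    v = integrate_pulse(t, a, v0=15.6)  # initial speed of roughly 56 km/h
    print([round(x, 2) for x in v])
```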
- Methods for Modeling Solid Sports Ball Impacts Derek Nevins, Lloyd Smith (Washington State University) Finite element modeling of dynamic sports ball impacts presents a substantial challenge. This is because, rather than displaying linear-elastic behavior, many sports balls are predominantly non-linear, inelastic and rate dependent. This is true of both softballs and baseballs, which exhibit strong rate-dependence and large energy dissipation characteristics in collisions occurring under play-like conditions. The development of finite element models of these balls is further complicated by the difficulty in measuring material properties at strain rates and magnitudes representative of play. This work describes the development of novel ball models from data obtained under play-like conditions. Ball models were implemented in LS-DYNA® using the Low-Density Foam material model. Simulations were compared to empirical data collected over a range of ball speeds. Models displayed good agreement with experimental measures of energy dissipation and impact force and represent an improvement over commonly used viscoelastic models. - Mild Traumatic Brain Injury-Mitigating Football Helmet Design Evaluation M.S. Hamid (Advanced Computational Systems, LLC), Minoo Shah (IDIADA Automotive Technology) Concussion, also known as mild Traumatic Brain Injury (mTBI), is the most common sport-related head injury. Football accounts for the highest number of sport-related concussions in the USA. The helmet is the primary equipment used to mitigate mTBI. There are numerous helmet designs which meet the requirements of sports regulatory committees. In this paper, a football helmet is evaluated using numerical methods. The brain and the tissues in the human head are modelled using continuum Smoothed Particle Hydrodynamics (SPH). The brain tissues are generated by segmentation from human brain MRI data. The LSTC dummy is used to represent the football players. The brain tissue is fitted in the cavity of the dummy headform. Two different impact scenarios are simulated in this study. The results for these impact conditions are presented. - MME-Converter and MME-Report for LS-DYNA® Users Seung Hun Jeong (10DR KOREA Co., Ltd.) This paper will focus on the main features, benefits and use of MME-Converter and MME-Report, which could be highly useful to LS-DYNA users for vehicle crash test analysis. With MME-Converter users can simply convert LS-DYNA result files, such as nodout, elout, deforc and rcforc, to MME-filtered files through the auto-syntax analysis and then compare these converted MME-filtered files to real vehicle crash test data. The conversion of LS-DYNA files is carried out in accordance with the international occupant protection criteria including KNCAP, USNCAP, Euro NCAP and IIHS. Furthermore, MME-Report, which is a one-page reporting system using MME-filtered data, helps users to create concise, professional engineering reports, so that engineers in CAE teams can share the test results with each other and even use them for formal meetings or presentations. - Modal Dynamics in LS-DYNA® Roger Grimes (LSTC) In LS971 R7.1, LSTC has enhanced its capabilities in Modal Dynamics from previous versions. This talk will give an overview of the enhanced capabilities, which include mode selection and modal damping. We will present an industrial example of using this capability including a comparison of using Modal Dynamics and a full simulation.
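To make the modal-dynamics idea in the entry above concrete, the following minimal sketch integrates decoupled single-degree-of-freedom modal equations with modal damping and superposes the modes. The frequencies, damping ratios and modal loads are invented purely for illustration and are not taken from the talk.

# Minimal modal-dynamics sketch: integrate decoupled modal equations
# q''_i + 2*zeta_i*w_i*q'_i + w_i^2*q_i = f_i(t) and superpose the modes.
# Central difference in displacement, backward difference for the damping
# term. All numbers below are illustrative.
import math

def integrate_mode(w, zeta, f, dt, nstep):
    """Time-march one modal equation and return the modal coordinate history."""
    q_prev, q, history = 0.0, 0.0, []
    for n in range(nstep):
        acc = f(n * dt) - 2.0 * zeta * w * (q - q_prev) / dt - w * w * q
        q_next = 2.0 * q - q_prev + acc * dt * dt
        history.append(q)
        q_prev, q = q, q_next
    return history

if __name__ == "__main__":
    modes = [  # (circular frequency [rad/s], modal damping ratio, modal load)
        (2 * math.pi * 5.0, 0.02, lambda t: 1.0),
        (2 * math.pi * 22.0, 0.05, lambda t: 0.3 * math.sin(60.0 * t)),
    ]
    dt, nstep = 1.0e-3, 2000
    responses = [integrate_mode(w, z, f, dt, nstep) for w, z, f in modes]
    # Physical response at a point = sum over modes of (mode-shape value * q_i);
    # unit mode-shape values are assumed here for brevity.
    u = [sum(r[n] for r in responses) for n in range(nstep)]
    print("peak displacement (arbitrary units):", max(abs(x) for x in u))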
- Modeling Nuclear Fuel Rod Drop with LS-DYNA® W. Zhao, J. Liu, W. Stilwell, B. Hempy, Z. Karoutas (Westinghouse Electric Company LLC) As a primary barrier to fission product release, maintaining the structural integrity of fuel rod cladding has been a topic of great importance. To help better understand the structural behavior of the fuel rod in shipping and handling incidents, a detailed model for a typical pressurized water reactor (PWR) fuel rod is being developed using LS-DYNA. The paper describes an on-going model development effort. For efficiency of the development process, a shortened version of the fuel rod is considered, with the full fuel pellet stack represented by five pellets. Nevertheless, the model contains all the structural features of the fuel rod and thus can be easily extended to obtain a full length fuel rod model. - Modeling of Automotive Airbag Inflators using Chemistry Solver in LS-DYNA® Kyoung-Su Im, Zeng-Chan Zhang, and Grant Cook, Jr. (LSTC) Airbags are part of an important vehicle safety system, and the inflator is an essential part that generates a specific volume of gas for the airbag over a short duration of time. Recently, we have developed numerical models of automotive airbag inflators in conjunction with the LS-DYNA® chemistry solver. In this presentation, we will demonstrate two different models: a conventional pyrotechnic inflator and a compressed, heated gas inflator. Detailed and comprehensive descriptions for constructing the keyword files will be given and the results for the two models will be discussed. Limitations of the currently available models and future directions for coupling with the existing LS-DYNA® solvers, i.e., the ALE and CESE solvers, will also be presented. In addition, more advanced models will be proposed and discussed in detail. (An illustrative closed-tank pressure sketch follows the rebar modeling entry below.) - Modeling Rebar: The Forgotten Sister in Reinforced Concrete Modeling (v2) Leonard E Schwer (Schwer Engineering & Consulting Services) As part of the "Blind Blast Simulation Contest," organized by the University of Missouri Kansas City, participants were invited to submit predictions of reinforced concrete slabs subjected to air blast loading. There were two classes of concrete: normal strength f′c = 5 ksi (34.5 MPa) and high strength f′c = 15 ksi (103.5 MPa). The normal strength concrete was reinforced with Number 3 Grade 60 steel bars with yield strength of 68 ksi (469 MPa). The high strength concrete was reinforced with Vanadium Number 3 bars with nominal yield strength of about 83 ksi (572 MPa). Each concrete slab design was subjected to two different air blast waveforms with impulses of about 5.38 and 7.04 MPa-ms. For the purposes of this reinforcement modeling study, the normal strength f′c = 5 ksi (34.5 MPa) concrete reinforced with Number 3 Grade 60 steel bars with yield strength of 68 ksi (469 MPa) will be considered. A description of the reinforced concrete slab and associated modeling is presented next. Interested readers should review the associated web site for additional details. The overall concrete slab dimensions are 64x33.75x4 inches (1625.6x958.85x101.6 mm) with a single layer of reinforcement, as shown in Figure 1, on the side of the slab away from the blast. The concrete slab fixture consists of a steel frame with front and back steel cross supports at the long ends of the slab. Figure 2 shows the final assembly of the blast side fixture over the concrete slab. The slab is mounted in the depicted removable end of a large air blast simulator.
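Relating to the inflator modelling entry above: a minimal, hedged sketch of how a closed-tank ("tank test") pressure trace can be estimated from a prescribed mass-flow pulse using the ideal gas law. The mass-flow shape, mean gas temperature, gas constant and tank volume are illustrative assumptions and not data or methods from the paper.

# Closed-tank pressure estimate: accumulate injected gas mass and apply
# p = m * R_specific * T / V, assuming a fixed mean gas temperature.
# All numbers are illustrative.

def mass_flow(t, t_burn=0.03, m_total=0.08):
    """Triangular mass-flow pulse [kg/s] releasing m_total kg over t_burn s."""
    if t < 0.0 or t > t_burn:
        return 0.0
    peak = 2.0 * m_total / t_burn
    half = t_burn / 2.0
    return peak * (t / half) if t < half else peak * (t_burn - t) / half

def tank_pressure(volume=0.060, r_gas=297.0, temp=600.0, dt=1.0e-4, t_end=0.05):
    """March in time and return (time, gauge pressure) pairs."""
    m, p0, out = 0.0, 101325.0, []
    for n in range(int(t_end / dt)):
        t = n * dt
        m += mass_flow(t) * dt                # accumulated gas mass [kg]
        p = p0 + m * r_gas * temp / volume    # absolute pressure [Pa]
        out.append((t, p - p0))               # store gauge pressure
    return out

if __name__ == "__main__":
    t_peak, p_peak = max(tank_pressure(), key=lambda tp: tp[1])
    print(f"peak gauge pressure {p_peak/1e3:.1f} kPa at {t_peak*1e3:.1f} ms")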
- Modelling of Armour-piercing Projectile Perforation of Thick Aluminium Plates Kasper Cramon Jørgensen, Vivian Swan (NIRAS A/S) This study investigates the perforation process of armour-piercing projectiles on commercially available high-strength aluminium. An LS-DYNA® model is developed with thick target plates of aluminium alloy 7075-T651 and an incoming 7.62 mm armour-piercing projectile with an impact velocity of 850 m/s. A numerical formulation combining classic Lagrangian finite elements with an adaptive mesh algorithm is utilized to overcome large deformation challenges and more accurately predict failure mechanisms. Both the aluminium target and the projectile have been modelled as deformable with a modified version of the Johnson-Cook strain-rate and temperature dependent plasticity model, based on input parameters from the literature. Main model results include projectile residual velocity after target perforation and prediction of the target failure mechanism. The model results are validated against experimental results from live ballistic tests, and a sensitivity study is carried out to identify influential material model parameters. - New Features of CE/SE Compressible Fluid Solver in LS-DYNA® Zeng-Chan Zhang, Grant O. Cook, Jr. & Kyoung-Su Im (LSTC) The CESE compressible fluid solver is one of the new solvers in LS-DYNA R7.0. This solver is based on the space-time conservation element and solution element (CE/SE) method, originally proposed by Chang. The CE/SE method has many non-traditional features, such as (i) both local and global flux conservation are well maintained in space and time; (ii) shock waves can be captured automatically without using Riemann solvers or special limiters, etc. For more details about the CE/SE method, see the references [1, 2, 3]. This method is suitable for high-speed flows, especially with complex shock waves. In the past, the CESE method has been widely used in many different CFD-related areas, e.g., shock/acoustic wave interaction, detonation waves, cavitation, chemically reacting flows, etc. - New Ordering Method for Implicit Mechanics and What It Means for Large Implicit Simulations Roger Grimes, Cleve Ashcraft (LSTC) The most egregious serial bottleneck for Large Implicit Mechanics modeling for distributed memory parallel execution, independent of the application package, is the sparse matrix ordering for the direct matrix solution. LSTC is developing a new distributed memory ordering algorithm that is at least as effective as the serial algorithm METIS but is a fully scalable implementation. We will give an overview of the algorithm and the impact on some benchmark problems. - New Representation of Bearings in LS-DYNA® Kelly S. Carney, Samuel A. Howard (NASA Glenn Research Center), Brad A. Miller (Harding University, Searcy), David J. Benson (University of California San Diego, La Jolla) Non-linear, dynamic, finite element analysis is used in various engineering disciplines to evaluate high-speed, dynamic impact and vibration events. Some of these applications require rotation of some elements relative to other elements with various levels of constraints. For example, bird impact on rotating aircraft engine fan blades is a common analysis performed with this type of tool. Traditionally, rotating machines utilize some type of bearing to allow rotation in one degree of freedom while offering constraints in the other degrees of freedom. Many times, bearings are modeled simply as linear springs with rotation.
This is a simplification that is not necessarily accurate under the conditions of high-velocity, high-energy, dynamic events such as impact problems. For this reason, it is desirable to utilize a more realistic non-linear force-deflection relationship characteristic of real bearings to model the interaction between rotating and non-rotating components during dynamic events. The present work describes a rolling element bearing model developed for use in non-linear, dynamic finite element analysis. This rolling element bearing model has been implemented in LS-DYNA as a constraint, *CONSTRAINED_BEARING. - Newly Developed LS-DYNA® Models for the THOR-M and Harmonized HIII 50th Crash Test Dummies Chirag S. Shah, Suraush Khambati, Brock Watson, Nishant Balwan, Zaifei Zhou, Fuchun Zhu, Shiva Shetty (Humanetics Innovative Solutions, Inc.) Finite Element (FE) models of Anthropomorphic Test Devices (ATDs), commonly known as crash test dummies, have become increasingly applicable in automotive safety. A variety of ATD models are widely used in many areas such as restraint development, automotive crashworthiness, occupant safety and other automotive environment related applications. With the increase in cost effectiveness of computational power, progressively complex and detailed computer models of ATDs have become more realistic in recent years. There has been growing demand for these models due to the inherent benefits of reduced cost and time in the product development cycle. The presented paper highlights the development process of two such highly detailed frontal impact ATD models, namely the THOR-M 50th and the Harmonized Hybrid III (HIII) 50th, in the LS-DYNA FE code. Both dummy models represent the anthropometry of a 50th percentile adult male. The current work describes the model development process and a controlled loading case for each of the dummies to illustrate the predictive capabilities of both models. The geometries and inertial properties for both dummy models are obtained from available drawings and hardware. The model connectivity and structural integrity are inspected by experiments and verified against hardware. Material tests have been conducted for all critical materials, enabling characterization using the latest material modeling techniques. The models' material properties are implemented from physical test data after numerical parameter extraction and verification through coupon simulations, using available material cards. All the injury output sensors and instrumentation in these models are developed and implemented based on all available instrumentation information in hardware. These models are then validated against a variety of component, sub-assembly, and full dummy level load cases, as a key step in developing reliable models that meet industry expectations. A detailed validation case of the thorax is presented for the Harmonized HIII 50th and a neck validation case is presented for the THOR-M 50th dummy. The current development status has shown very reasonable predictive capabilities of these two models, as evident in the illustrated loading conditions which range from component to full dummy level. - Nonlinear viscoelastic modeling for foams Veronika Effinger, André Haufe (Dynamore GmbH), Paul DuBois (Consultant), Markus Feucht (Daimler AG), Manfred Bischoff (University of Stuttgart) Lightweight design is one of the major principles in automotive engineering and has made polymer materials inherent parts of modern cars.
In addition to their lightweight potential, thermoplastics, elastomers, fabrics and composites also fulfil important functions in passive safety. In the age of virtual prototyping, assuring these functions requires the accurate modeling of the mechanical behavior of each component. Due to their molecular structure, polymer materials often show viscoelastic characteristics such as creep, relaxation and recovery. However, considering the general state of the art in crash simulation, the viscoelastic characteristics are mainly neglected or replaced by viscoplastic or hyperelastic and strain rate dependent material models. This is either due to the available material models, which are often restricted to linear viscoelasticity and thus cannot model the experimental data, or due to the time-consuming parameter identification. In this study, a nonlinear viscoelastic material model for foams is developed and implemented as a user material subroutine in LS-DYNA®. The material response consists of an equilibrium and a non-equilibrium part. The first one is modeled with a hyperelastic formulation based on the work of Chang and formerly implemented as *MAT_FU_CHANG_FOAM in LS-DYNA (*MAT_083). The second one includes the nonlinear viscoelastic behavior following the multiple integral theory by Green and Rivlin. The polyurethane foam Confor® CF-45, used as part of the legform impactor in pedestrian safety, was chosen for its highly nonlinear viscoelastic properties to test the presented approach. The investigation shows the ability of the method to reliably simulate some non-linear viscoelastic phenomena such as saturation. - Numerical Investigation of Phase Change and Cavitation Effects in Nuclear Power Plant Pipes M. Souli, R. Messahel (University of Lille), B. Cohen (EDF UTO), N. Aquelet (LSTC) In the nuclear and petroleum industries, supply pipes are often exposed to high pressure loading, which can cause high strains, plasticity and, in the worst scenario, failure of the structure. Fast hydraulic transient phenomena such as Water Hammers (WHs) are of this type. Such an event generates a pressure wave that propagates in the pipe, causing high stress. Such phenomena last on the order of a few milliseconds, and numerical simulation can offer a better understanding and an accurate evaluation of the complex dynamic phenomenon, including fluid-structure interaction, multi-phase flow and cavitation effects. Over the last decades, the modeling of phase change taking cavitation effects into account has been central to many industrial applications (chemical engineering, mechanical engineering, etc.), as cavitation has a direct impact on industry and can damage installations (pumps, propellers, control valves, etc.). In this paper, numerical simulation of WHs including cavitation effects is presented, using an FSI algorithm and the two one-fluid cavitation models "Cut-Off" and "HEM". - Numerical Investigation of Landslide Mobility and Debris-Resistant Flexible Barrier with LS-DYNA® Yuli Huang, Jack Yiu, Jack Pappin, and Richard Sturt (Arup) Julian S. H. Kwan and Ken K. S. Ho (Hong Kong SAR Government) - Numerical Simulations to Investigate the Efficiency of Joint Designs for the Electro-Magnetic Welding (EMW) of the Ring-shaft Assembly H. Kim, J. Gould (Edison Welding Institute), J. Shang (American Trim), A. Yadav, R. Meyer (Caterpillar Inc), Pierre L'Eplattenier (LSTC) In this study, numerical simulations of electro-magnetic welding (EMW) were conducted for a dissimilar-materials joint of the ring-shaft assembly.
The LS-DYNA® electromagnetism module was adopted to simulate the EMW process. Simulation results were correlated with EMW experiments for two different joint designs, a single and a double flared lap joint. Two different materials, aluminum 6061-T4 and copper C40, were used for the driver ring on the stationary steel shaft. The LS-DYNA simulation model was used to investigate the effects of impact angle and velocity on surface-layer bonding and joining efficiency of the driver ring on a steel shaft. Analytical modeling was also conducted to estimate the magnetic pressure between the coil and the ring. Experimentally, a 90-kJ machine was used at different energy levels. From these experiments, the double flared lap joint showed better joint efficiency, and the copper showed better adhesion than aluminum at the same energy levels. The joint performance was evaluated by push-off testing. A double flared copper ring at 81 kJ gave the best joint performance and exceeded the axial thrust load requirement. Metallographic analysis showed that the joint interface did not exhibit metallurgical bonding; however, strong mechanical interlocking was achieved. This study demonstrates the viability of the EMW process for dissimilar material joining. - On Rollover Simulations of a Full-sized Sedan Ronald F. Kulak (RFK Engineering Mechanics Consultants LLC) Rollover crashes are responsible for many occupant injuries and fatalities. Rollover crash fatalities account for 36 percent of total fatalities for passenger cars and light trucks. Front seat occupants are vulnerable to head, neck and thoracic injuries resulting from impact with the collapsing roof structure. Modeling and simulation on parallel computing platforms using state-of-the-art software – such as LS-DYNA® – is an attractive and economical approach for studying the structural responses of the vehicle and occupant to rollover events. This paper presents simulations of rollover events of a full-sized sedan subjected to several initial vehicle orientations and front occupant positions. The National Crash Analysis Center database provided the finite element model for the full-size sedan. The front-seat occupant model is the Hybrid III finite element model developed by Livermore Software Technology Corporation, which represents the 50th percentile male anthropomorphic test device (ATD). Thus, this study makes use of a single software platform for analyses of both the vehicle and occupant – leading to efficient computations. The current work focused on Single Event Single Rollovers (SESR). Several case studies are presented, and one case simulated a previously performed test using the Controlled Rollover Impact System (CRIS). The first case (far side impact) matched the CRIS Test 51502 initial release conditions, and the numerical simulations matched the kinetic conditions when the vehicle contacted the ground – as calculated by rigid body dynamics. The second case looked at near side impact, and the third case looked at far side impact but with a 10 degree pitch angle. Results show that the largest neck forces occur for near side impact. The comparison of the first case simulation results with CRIS Test 51502 is examined to assess the suitability of validating the finite element models for rollovers.
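As a small illustration of the rigid-body check mentioned in the rollover entry above, the sketch below estimates the time to ground contact, the vertical velocity and the roll angle at contact for a released vehicle under free fall with a constant roll rate. The drop height, roll rate and initial roll angle are illustrative values, not the CRIS test conditions.

# Rigid-body touchdown estimate for a released, rolling vehicle:
# free fall of the centre of gravity plus a constant roll rate.
# All numbers are illustrative.
import math

def touchdown_state(drop_height, roll_rate_deg_s, initial_roll_deg=0.0, g=9.81):
    """Return (time to contact [s], vertical velocity [m/s], roll angle [deg])."""
    t = math.sqrt(2.0 * drop_height / g)            # free-fall time
    v = g * t                                       # vertical velocity at contact
    roll = initial_roll_deg + roll_rate_deg_s * t   # roll angle at contact
    return t, v, roll % 360.0

if __name__ == "__main__":
    t, v, roll = touchdown_state(drop_height=0.30, roll_rate_deg_s=250.0,
                                 initial_roll_deg=145.0)
    print(f"contact after {t*1000:.0f} ms, vertical speed {v:.2f} m/s, "
          f"roll angle {roll:.0f} deg")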
- On the Parameter Estimation for the Discrete-Element Method in LS-DYNA® Nils Karajan (DYNAmore GmbH), Zhidong Han, Hailong Teng, Jason Wang (LSTC) The goal of this contribution is to discuss the assumptions made when modeling granular media with the discrete-element method (DEM). Herein, particular focus is placed on the physical interpretation of the involved material parameters of the DEM in LS-DYNA. Following this, the influence of each parameter on the bulk behavior of granular media is investigated and different possibilities to estimate these parameters are presented. - On the prediction of material failure in LS-DYNA®: A comparison between GISSMO and DIEM Filipe Andrade, Andre Haufe (DYNAmore GmbH), Markus Feucht (Daimler AG) As a consequence of the worldwide trend towards reducing CO2 emissions by producing lighter and more energy-efficient products, the demand for accurate predictions regarding material behavior and material failure has greatly increased in recent years. In particular in the automotive industry, there is also an increasing interest in effectively closing the gap between forming and crash, since the forming operations may strongly affect the crashworthiness of the produced parts. In this scenario, a correct depiction of material mechanical degradation and material fracture seems indispensable. Currently, there are several models implemented in LS-DYNA which have been developed to deal with material damage and failure. Many of them are complete constitutive models which consider elasto-plasticity coupled with damage formulations as well as with embedded failure criteria (e.g., *MAT_015, *MAT_052, *MAT_081, *MAT_104, *MAT_120, *MAT_153, among others). Alternatively, LS-DYNA also makes possible the definition of failure and damage through the keyword *MAT_ADD_EROSION, where the user can choose different failure models and fracture criteria which are, in turn, coupled with the selected plasticity model in an ad-hoc fashion. In this context, GISSMO (Generalized Incremental Stress-State dependent Damage Model) and DIEM (Damage Initiation and Evolution Model) are good candidates for the task of predicting ductile failure using LS-DYNA. However, many users still seem to have difficulties in using these models, while other users, who already master either GISSMO or DIEM, feel somewhat insecure in employing the other model. These difficulties arise mainly because GISSMO and DIEM have been conceived following quite different interpretations of the phenomena that influence failure. For instance, in GISSMO the user has to input a failure curve as a function of the triaxiality (and also of the Lode parameter, in the case of solid elements), where this curve is used for the nonlinear accumulation of damage. This strategy intrinsically takes the strain path change into account, for which a numerical calibration based on experimental data is required. Furthermore, an instability curve may also be defined in GISSMO, where, if the instability measure reaches a critical value, the stresses are assumed to be coupled with damage, leading to a ductile dissipation of energy upon fracture. DIEM, on the other hand, allows the user to define multiple damage initiation indicators which evolve simultaneously. For example, the user can define a normal and a shear failure initiation criterion, the former as a function of triaxiality, the latter depending on the so-called shear stress function.
Additionally, a forming limit curve (FLC) can also be input in DIEM, where this criterion also evolves alongside the other two failure initiation criteria. The different damage initiation criteria can then be combined in a global damage evolution rule. Similarly to GISSMO, a certain number of experiments is required in order to properly fit the parameters necessary for DIEM. This contribution is an attempt to compare and better understand the differences between GISSMO and DIEM. In this respect, the main differences between both models and how they are intended to predict failure will be comprehensively discussed. Additionally, the calibration of a dual-phase steel using GISSMO and DIEM will be used to better highlight the differences between the models and how these are reflected in the final parameter fitting. (A minimal damage-accumulation sketch follows the particle blast method entry below.) - Optimization Design of Bonnet Inner Based on Pedestrian Head Protection and Stiffness Requirements Xiaomin Zeng, Xiongqi Peng (Shanghai Jiao Tong University), Hongsheng Lu (Shanghai Hengstar Technology Co. Ltd), Edmondo Di Pasquale (SimTech Simulation et Technologie) Pedestrian head impact with the bonnet is one of the major causes of severe pedestrian injury or fatality. This paper proposes a multidisciplinary design optimization method for the bonnet inner based on pedestrian head protection along with stiffness requirements. The static stiffness and headform collision procedure with regard to a particular industrial bonnet are analyzed. Parametric design and optimization analysis of this bonnet are carried out. The optimized solution achieves significantly better head protection while meeting the stiffness requirements, which validates the feasibility of this multidisciplinary optimization method and provides an approach for the optimal design of the engine bonnet inner. This work shows the importance of a simultaneous approach across different disciplines in bonnet design. - Particle Blast Method (PBM) for the Simulation of Blast Loading Hailong Teng, Jason Wang (LSTC) This paper presents a particle blast method (PBM) to describe blast loading. The PBM is an extension of the corpuscular method (CPM), which is a coarse-grained multi-scale method developed for ideal gas dynamics simulation. It is based on the kinetic molecular theory, where molecules are viewed as rigid particles obeying Newton's laws of mechanics, while each particle in the particle method represents a group of gas molecules. Pressure loading on structures is represented by particle-structure elastic collisions. The corpuscular method has been applied to airbag deployment simulation, where the gas flow is slow. For blast simulation, where the gas flow velocity is extremely high, the particle method has been improved to account for thermal non-equilibrium behavior. Furthermore, to better represent gas behavior at high temperature, co-volume effects have been considered. The particle blast method can be coupled with the discrete element method, making it possible to model the interaction among high explosive detonation products, the surrounding air, sand and structure.
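Relating to the GISSMO description a few entries above: a minimal sketch of nonlinear damage accumulation against a triaxiality-dependent failure curve, written in the form usually quoted for GISSMO, dD = (n / eps_f(eta)) * D^(1 - 1/n) * d(eps_p). The failure curve, exponent and loading path are invented for illustration and are not calibration data from the paper.

# Nonlinear damage accumulation against a triaxiality-dependent failure
# curve (GISSMO-style increment). Curve, exponent and load path are
# illustrative only.

def eps_fail(triaxiality, curve):
    """Piecewise-linear interpolation of the failure strain curve."""
    pts = sorted(curve)
    if triaxiality <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if triaxiality <= x1:
            w = (triaxiality - x0) / (x1 - x0)
            return y0 + w * (y1 - y0)
    return pts[-1][1]

def accumulate_damage(path, curve, n_exp=2.0, d0=1.0e-6):
    """path: list of (triaxiality, plastic strain increment) pairs."""
    d = d0
    for eta, deps in path:
        ef = eps_fail(eta, curve)
        d = min(1.0, d + (n_exp / ef) * d ** (1.0 - 1.0 / n_exp) * deps)
        if d >= 1.0:
            break
    return d

if __name__ == "__main__":
    # Illustrative failure curve: (triaxiality, failure strain)
    curve = [(-0.33, 0.90), (0.0, 0.60), (0.33, 0.35), (0.66, 0.20)]
    # Illustrative loading path: uniaxial tension drifting towards biaxial
    path = [(0.33, 0.002)] * 100 + [(0.55, 0.002)] * 100
    print("accumulated damage D =", round(accumulate_damage(path, curve), 3))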
- PC3: Crash and Blast Analysis Post-Processor for Simulations and Live Tests Hadar Raz (Plasan Ltd.) For crash and blast tests of vehicles and sub-assemblies, simulations play an important role in the prediction of the test results. Some of the most important results are the occupants' injury criteria, which are calculated by simulating ATDs and their various joints, accelerometers, etc. Often in a simulation/test there are a few ATDs, and there is an increasing demand for post-processing of the injury criteria in an automated way, as well as for correlating the results between simulation and test, thus enabling easier calibration of the simulation. We present PC3 (Plasan Criteria Computation and Comparison), a software tool developed by Plasan, which enables easy calculation of simulation and test ATD results, and correlation of said results. Currently the program is able to read simulation data from the LS-DYNA® binout database, and various test databases, such as ISO text files, CSV files, HDF5 database files and others. An example of criteria calculation for blast simulation and test data will be shown, along with correlation between the two. - Prediction of the Drop Impact Performance of a Glass Reinforced Nylon Oil Pan Peter H. Foss (General Motors Global Research & Development) As part of a cooperative development project between General Motors, BASF and Montaplast, a glass reinforced nylon oil pan was designed, analyzed, molded and tested. The oil pan was molded from BASF's Ultramid® B3ZG7 OSI, an "Optimized for Stone Impact" grade of impact modified 35% short glass filled PA6. One of the development tests run on the pans was a drop impact test. In this report we will compare the predicted and experimental impact response using Digimat and LS-DYNA® with an anisotropic elastic-viscoplastic material model with failure. The Digimat material model was reverse engineered from high-rate tensile stress-strain data provided by BASF. - Preload Release in a Steel Band under Dynamic Loading Eyal Rubin, Yoav Lev (RAFAEL Advanced Defense Systems LTD.) A steel band is tightened around a thin-walled steel cylinder. The assembly is exposed to different dynamic loadings including shock and vibration. During tightening, the circumferential stresses developed in the band decrease as a function of the distance from the bolts and the value of the coefficient of friction between the band and the cylinder. The cylinder elasticity also affects the amount of force distribution in the band. A rigid cylinder will result in a maximal distribution of internal tension forces in the band. Experiments show that dynamic loadings, such as shock and vibrations, release the initial preload of the tightening bolts and equalize the distribution of internal tension forces in the band. The extent of the change in the internal forces distribution depends on the level of the dynamic loading. While the motivation of the work was to find a lower boundary to the tightening force, a severe shock was chosen to demonstrate this phenomenon. As a result of the severe shock, the internal tension forces at different cross sections converged to the same final uniform force. The level of this final force varies, depending on the coefficient of friction. The maximum possible release of the internal tension forces in the band, as a function of the coefficient of friction between the cylinder and band, and the rigidity of the cylinder, was determined using LS-DYNA® explicit simulation. This method can be used to determine the initial tightening force of any assembly, in order to ensure that it withstands dynamic environmental conditions.
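The decay of band tension with distance from the bolts described in the entry above has a classical analytic counterpart, the capstan relation T(theta) = T0 * exp(-mu * theta) for a band on a rigid drum. The short sketch below evaluates it for a few friction coefficients; this is the textbook relation, not necessarily the model used by the authors, and the preload value is illustrative.

# Classical capstan estimate of how tightening tension decays around a
# band on a rigid cylinder. Shown only to illustrate the trend discussed
# above; all numbers are illustrative.
import math

def band_tension(t0, mu, theta_deg):
    """Tension at an angle theta_deg away from the bolted end."""
    return t0 * math.exp(-mu * math.radians(theta_deg))

if __name__ == "__main__":
    preload = 10_000.0  # N, illustrative bolt-side tension
    for mu in (0.1, 0.2, 0.3):
        opposite = band_tension(preload, mu, 180.0)  # far side of the cylinder
        print(f"mu={mu:.1f}: tension opposite the bolts "
              f"= {opposite:.0f} N ({opposite/preload:.0%} of preload)")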
- Rate Dependent Progressive Composite Damage Modeling using MAT162 in LS-DYNA® Bazle Z. (Gama) Haque, John W. Gillespie Jr. (University of Delaware) Performing experiments in numerical space and predicting accurate results are the main research focus of many computational mechanicians. These goals may in general sound challenging; however, they make perfect sense in cases where experiments are not possible, e.g., landing on Mars, sea waves impacting marine structures, the crash landing of a space shuttle, etc. Composite damage modeling plays a vital role in designing composite structures for damage tolerance, energy dissipating crash, impact, ballistic, and blast applications. The progressive composite damage model MAT162 was developed by Materials Sciences Corporation, further modified by the authors, and implemented in the explicit finite element analysis code LS-DYNA. A total of thirty-four material properties and parameters are required to define such a material model. Besides the ASTM standard test methods for determining the elastic and strength properties, the authors have developed a low velocity impact methodology for determining the rate-insensitive model parameters. Recently, model validations with depth of penetration and ballistic experiments have been performed to determine the rate-sensitive model parameters. These validated model parameters are used to predict composite damage and resistance behavior of composite structures made from plain-weave S-2 glass/SC15 composites under quasi-static, low velocity impact and crush, ballistic, and blast loading conditions. The analysis procedure and results of these numerical experiments will be presented. - Response of a Large Span Stay Cable Bridge to Blast Loading Cezary Bojanowski (Argonne National Laboratory), Marcin Balcerzak (Warsaw University of Technology) The computational analysis of engineering structures under blast loads faces three fundamental problems: (i) reliable prediction of blast loads imposed on structures, (ii) correct representation of material behavior, and (iii) global analysis of large scale structures. Despite the recent developments in Finite Element (FE) codes like LS-DYNA® and advancements in computational power, addressing all of these issues in a single simulation is not a straightforward task. In this paper, LS-DYNA capabilities were utilized to simulate the transient global response of a long span cable stayed bridge subjected to blast loading over the deck and to evaluate localized damage to the deck structure. Described in detail is the development of a global FE model of the Bill Emerson Memorial Bridge – a cable-stayed bridge crossing over the Mississippi River near Cape Girardeau, Missouri. The global model takes into account the structural details of the deck, support columns and the pretension in the stay cables. It was partially validated by comparing the calculated natural frequencies with those previously extracted by the Missouri Department of Transportation from data recorded during the 2005 earthquake of magnitude 4.1 on the Richter scale (Assessment of the Bill Emerson Memorial Bridge, Report No. OR08-003, September 2007). A detailed model of the central section of the deck was developed to simulate localized damage. Boundary conditions on the detailed model were applied through a sub-modeling technique based on the analysis of the global simplified model. The results show that a detonation of explosives of typical passenger car and van bomb size on a traffic lane in the mid-span of the deck is not likely to cause a collapse of the bridge. The vibrations in the stay cables do not lead to yielding of the steel in the strands. The simulation of the local damage shows that – for the chosen van location – the blast may perforate the deck and deform the cross beam.
The extent of the damage, however, depends greatly on the assumed erosion criteria. - Scalability of Implicit LS-DYNA® Simulations Using the Panasas® PanFS® Parallel File System Bill Loewe (Panasas, Inc.) As HPC continues its growth with Linux clusters using multi-core processor architectures, the I/O requirements further increase with higher-fidelity CAE modeling and workflow demands. This paper examines the parallel scalability characteristics of LSTC's Finite Element Analysis software LS-DYNA for up to 288 processing cores for implicit mechanics simulations that have high I/O demands. The motivation was to quantify the performance and scalability benefits of parallel I/O in FEA software using a parallel file system, compared with both local storage and conventional NFS for implicit mechanics cases. This study was conducted on a Linux Intel Xeon cluster with a Panasas PanFS parallel file system and using a benchmark input provided by Roger Grimes from LSTC. For this study, the models used were based on current customer practice to demonstrate that LS-DYNA with parallel I/O can show a significant performance advantage and a corresponding reduction in overall job time for advanced implicit simulations. - Simulating the Impact Response of Composite Airframe Components Karen E. Jackson, Justin D. Littell (NASA Langley Research Center), Edwin L. Fasanella (National Institute of Aerospace) In 2010, NASA Langley Research Center obtained residual hardware from the US Army's Survivable Affordable Repairable Airframe Program (SARAP), which consisted of a composite fuselage section that was representative of the center section of a Black Hawk helicopter. The section was fabricated by Sikorsky Aircraft Corporation and was subjected to a vertical drop test in 2008 to evaluate a tilting roof concept to limit the intrusion of overhead mass items, such as the rotor transmission, into the fuselage cabin. As a result of the 2008 test, damage to the hardware was limited primarily to the roof. Consequently, when the post-test article was obtained in 2010, the roof area was removed and the remaining structure was cut into six different types of test specimens including: (1) tension and compression coupons for material property characterization, (2) I-beam sections, (3) T-sections, (4) cruciform sections, (5) a large subfloor section, and (6) a forward framed fuselage section. In 2011, NASA and Sikorsky entered into a cooperative research agreement to study the impact responses of composite airframe structures and to evaluate the capabilities of the explicit transient dynamic finite element code, LS-DYNA®, to simulate these responses including damage initiation and progressive failure. Finite element models of the composite specimens were developed and impact simulations were performed. The properties of the composite material were represented using both a progressive in-plane damage model (Mat 54) and a continuum damage mechanics model (Mat 58) in LS-DYNA. This paper provides test-analysis comparisons of time history responses and the location and type of damage for representative I-beam, T-section, and cruciform section components. - Simulating the Impact Response of Full-Scale Composite Airframe Structures Edwin L. Fasanella (National Institute of Aerospace), Karen E. Jackson, Justin D. Littell (NASA Langley Research Center), Michael D. Seal (Analytical Mechanics Associates, Inc.)
NASA Langley Research Center obtained a composite helicopter cabin structure in 2010 from the US Army's Survivable Affordable Repairable Airframe Program (SARAP) that was fabricated by Sikorsky Aircraft Corporation. The cabin had been subjected to a vertical drop test in 2008 to evaluate a tilting roof concept to limit the intrusion of overhead masses into the fuselage cabin. Damage to the cabin test article was limited primarily to the roof. Consequently, the roof area was removed and the remaining structure was cut into test specimens including a large subfloor section and a forward framed fuselage section. In 2011, NASA and Sikorsky entered into a cooperative research agreement to study the impact responses of composite airframe structures and to evaluate the capabilities of the explicit transient dynamic finite element code, LS-DYNA®, to simulate these responses including damage initiation and progressive failure. Most of the test articles were manufactured from graphite unidirectional tape composite with a thermoplastic resin system. However, the framed fuselage section was constructed primarily of a plain weave graphite fabric material with a thermoset resin system. Test data were collected from accelerometers and full-field photogrammetry. The focus of this paper will be to document impact testing and simulation results for the longitudinal impact of the subfloor section and the vertical drop test of the forward framed fuselage section. - Simulation of Compressive ‘Cone-Shaped’ Ice Specimen Experiments using LS-DYNA® Hyunwook Kim (Memorial University of Newfoundland) Laboratory-scale compressive cone-shaped ice experiments were performed, and a numerical simulation model using LS-DYNA was developed. Modified material properties based on a crushable foam model (MAT 63) were applied for the ice. To simulate the saw-tooth pattern commonly observed in ice experiments, an additional failure criterion based on maximum principal stress was included. Results of the experiments and numerical simulations were compared and showed good agreement. The proposed numerical simulation model was extended to a larger scale and verified. - Simulation of High-Voltage Discharge Channel in Water at Electro-Hydraulic Forming Using LS-DYNA® V. Mamutov (St. Petersburg State Polytechnical University), S. Golovashchenko (Ford Motor Company), A. Mamutov (Oakland University) A method for simulating the expansion of the plasma channel during high-voltage discharge in water at electro-hydraulic forming (EHF) is developed using the finite-element software package LS-DYNA® 971. The energy input into the plasma channel from the discharge circuit is taken into account. The energy deposit law is calculated from the experimentally obtained pulse current and voltage in the discharge circuit measured between the electrodes. The discharge channel model is based on the assumption of adiabatic channel expansion, in which the energy introduced into the channel from the external electrical circuit is spent only on increasing the internal energy of the plasma and on the work of plasma expansion. Water is simulated as an ideally compressible liquid with a bulk modulus of K = 2.35 GPa. The cavitation threshold for the liquid is defined as 0.1 MPa. The interaction between the channel and the water is simulated using the Arbitrary Lagrangian-Eulerian (ALE) technique. The deformable blank is simulated using shell elements. The results of the simulation of two variants of the discharge chamber are presented: for a long cylindrical chamber with a long axisymmetric discharge channel and for a compact chamber of arbitrary shape with a short discharge distance. The developed numerical method is verified by comparing the results of the simulation with those of a test simulation, a one-dimensional axisymmetric finite-difference based problem with the same parameters. It is also verified by comparing the simulation results with the experimentally measured pulse pressure in the discharge chamber for a known function of energy input.
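As a small illustration of the energy-deposit law mentioned in the EHF entry above, the sketch below integrates the electrical power U(t)*I(t) between the electrodes with the trapezoidal rule to obtain the cumulative energy fed into the discharge channel. The damped-sine current and in-phase (resistive) voltage pulses are synthetic stand-ins for the measured oscillograms, not the authors' data.

# Cumulative energy deposited into the discharge channel:
# E(t) = integral of U(tau)*I(tau) d(tau), trapezoidal rule.
# Synthetic, illustrative pulse shapes.
import math

def current(t):   # damped-sine discharge current [A], illustrative
    return 50.0e3 * math.exp(-t / 40.0e-6) * math.sin(2.0 * math.pi * t / 60.0e-6)

def voltage(t):   # resistive channel voltage assumed in phase with the current [V]
    return 8.0e3 * math.exp(-t / 40.0e-6) * math.sin(2.0 * math.pi * t / 60.0e-6)

def deposited_energy(t_end=200.0e-6, dt=0.2e-6):
    """Cumulative electrical energy fed into the channel, in joules."""
    e, t = 0.0, 0.0
    p_prev = voltage(0.0) * current(0.0)
    while t < t_end:
        t += dt
        p = voltage(t) * current(t)
        e += 0.5 * (p + p_prev) * dt   # trapezoidal rule
        p_prev = p
    return e

if __name__ == "__main__":
    print(f"energy deposited into the channel: {deposited_energy():.0f} J")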
- Simulation of residual deformation from a forming and welding process using LS-DYNA® Mikael Schill (DYNAmore Nordic AB), Eva-Lis Odenberger (Industrial Development Centre in Olofström AB) Predicting the finished geometry of a part is a major issue for the manufacturing industry. This is a complex task, especially if the manufacturing involves several types of processes. In order to succeed, the complete manufacturing process has to be included in the simulation. For sheet metal forming, this has been done for quite some time, where trimming, forming and springback are simulated in consecutive order. However, there are other manufacturing processes which affect the geometry of the finished part. In this paper, a welding process is added to the manufacturing process chain. The welding simulation is done using the novel material model *MAT_CWM with ghost element functionality. The aim of the paper is to investigate how the different process stages affect the final geometry of the part and how this is efficiently and accurately simulated with LS-DYNA. Further, an attempt is made to improve the part tolerance by springback compensation of the forming tools, accounting for deviations from both springback and weld deformation. - Simulation of Various LSTC Dummy Models to Correlate Drop Test Results Ken-An Lou (ArmorWorks), David Bosen, Kiran Irde, Zachary Blackburn (ShockRide) The Hybrid III Anthropomorphic Test Dummy (ATD) is primarily validated for frontal impacts from physical sled tests in automotive incidents, but not for military vehicle incidents involving mine blast vertical impacts. Vertical drop tests were conducted using a Hybrid III 50th percentile ATD. The purpose of conducting these tests was to identify which LSTC dummy model shows the best correlation with the test results. This paper presents the modeling correlation between LSTC's 50th percentile RIGID, FAST, and DETAILED dummy models. A rigid seat without a seat cushion was used in the drop tests, so the surroundings the dummy interacted with during the test were highly predictable. A total of three drop tests from the same drop height were completed to ensure consistency and repeatability of the test data. The simulation was correlated to the test data for occupant responses. - Simulation-Based Airbag Folding System JFOLD Version 2: New Capabilities and Folding Examples Shinya Hayashi (JSOL Corporation), Richard Taylor (Ove Arup & Partners International Limited) Computer simulation is playing an increasingly important role in the design, development and application of airbag safety systems. As folding patterns and airbag structures become more and more complex, users are turning to simulation-based folding solutions to generate accurately folded models in a short space of time. To meet this demand, a new software tool called JFOLD has been developed by JSOL Corporation to enable successful airbag folding using LS-DYNA®.
JFOLD's intuitive and interactive system guides the user through the folding steps using flow-chart graphics, interactive tool positioning/resizing, tool motion control, animation preview and more. This paper introduces the new capabilities of JFOLD Version 2 and demonstrates various folding examples. JFOLD runs inside the powerful and popular pre-processor Primer. - Software for Creating LS-DYNA® Material Model Parameters from Test Data Eric Strong, Hubert Lobo (Matereality), Brian Croop (DatapointLabs) LS-DYNA contains a wealth of material models that allow for the simulation of transient phenomena. CAE Modeler is generalized pre-processor software used to convert material property data into material parameters for different material models used in CAE. In a continuation of previously presented work, we discuss the extension of the CAE Modeler software to commonly used material models beyond MAT_024. Software enhancements include advanced point picking to perform extrapolations beyond the tested data, as well as the ability to fine-tune the material models while scrutinizing the trends shown in the underlying raw data. Advanced modeling features include the ability to tune the rate dependency, as well as the initial response. Additional material models that are quite complex and difficult to calibrate are supported, including those for hyperelastic and viscoelastic behavior. As before, the written material cards are directly readable by the LS-DYNA software, but now these can also be stored and cataloged in a material card library for later reuse. - Soil Modeling for Mine Blast Simulation Frank Marrs, Mike Heiges (Georgia Tech Research Institute) This paper presents the results of an effort to correlate an LS-DYNA® simulation of a buried mine blast with published experimental test data. The focus of the study was on simulating the effects of soil moisture content on the blast characteristics. A mathematical model for sand is presented that is based on several previously proposed models. The simulation correlates well with the results of a mine blast experiment, thus validating the material model for sand at varying levels of saturation. The model provides an excellent baseline for blast simulations of buried mines and a soil material model that can be expanded to include higher fidelity modeling, different soil types, and real-world applications. - Sound Radiation Analysis of a Tire with LS-DYNA Zhe Cui, Yun Huang (LSTC) - Springback Calculation of Automotive Sheet Metal Sub-assemblies Volker Steininger, Xinhai Zhu, Q. Yan, Philip Ho (Tiwa Quest AG, LSTC) In recent years the springback calculation of a sheet metal part after forming has achieved high accuracy. Today we are able to calculate the springback amount after all forming operations of the part, including trimming, piercing and reforming. But the springback of a single sheet metal part is only the first step in solving the problem. What matters in the end is the springback of the assembly or the sub-assembly of multiple sheet metal parts. This paper describes a GUI for the efficient setup of a springback calculation of multiple sheet metal parts, taking into account the complete forming history of the parts. It will be shown how to position the sheet metal parts, how to define the assembly sequence, the joining method and the hemming process of outer and inner panels, and how to launch the LS-DYNA® simulation. - TaSC® Product Status Willem Roux (LSTC) The LS-TaSC product status is presented.
The current capabilities are discussed together with illustrative examples and release dates. In addition, the current development directions, such as new capabilities and CAE integration, are also revealed. - The Optimization of Servo Press Method for Sheet Metal Forming Jun-Ku Lee, Hyun-Cheol Kim (Theme Engineering, Inc.) Recently in the field of sheet metal forming, the servo press, which can control the speed and position of the tool using a servo motor, has become an attractive method. The development of the servo press method has accelerated as servo motor capacities have increased. In the future, it is expected to be a strong alternative to the conventional press method, improving quality, increasing productivity, maintaining tool integrity and reducing energy consumption. Motion control in the servo press method has to be effectively optimized depending on the shape and characteristics of the material. However, in industrial practice, motion control has relied on the experience or intuition of the most skilled workers, so many trials and errors cannot be avoided in finding the optimal motion. We implement the servo press method using finite element analysis with LS-DYNA® in order to reduce this trial and error, and furthermore we try to find the optimal motion with LS-OPT®. The front side member model of Numisheet 2011 BM03 was chosen for the analysis. We carefully consider stress relaxation and time scaling in order to implement the servo press method. Then we compare the following three cases to examine the utility of optimized motion for formability and productivity: the conventional press method, the conventional press method controlled by speed, and the servo press method. Finally, we hope that LS-OPT can be effectively applied to the optimization of the servo press method. - The Simulation and Formability Prediction of a DP600 Steel Reverse Draw - NUMISHEET 2014 Benchmark 1 Changqing Du, Kaiping Li (Chrysler Group LLC), Xiaoming Chen (U.S. Steel Corporation), Yuzhong Xiao, Xinhai Zhu (LSTC) In this study, the simulation and formability prediction of the DP600 Steel Reverse Draw in the NUMISHEET 2014 Benchmark 1 is conducted using LS-DYNA®. The combinations of the material models and element formulations are evaluated for better strain path correlations between the simulation and the measurement at the specified points. Various input factors are considered in this study, including different material models and element types, mesh sizes, and integration points and locations. In addition to the conditions given in the benchmark description, extra factors such as friction effects and springback after the drawbead forming process are also considered. The simulation results show that a properly selected yield function is critical for the strain path predictions to be in better agreement with the experimental measurements under such loading conditions. In the simulation, the Formability-Index method is applied to determine the forming limit strains. With this method, the predicted limit strains at the onset-of-necking points, as well as their locations, are compared with the measurement results reported in the Benchmark 1 analysis. - Theoretical Development of an Orthotropic Elasto-Plastic Generalized Composite Material Model Robert K. Goldberg and Kelly S.
Carney (NASA Glenn Research Center), Paul Du Bois (George Mason University), Canio Hoffarth, Joseph Harrington, Subramaniam Rajan (Arizona State University), Gunther Blankenhorn (LSTC) Several key improvements in the state of the art have been identified by the aerospace community as desirable for inclusion in a next generation material model for composite materials to be incorporated within LS-DYNA®. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites with a variety of architectures, including laminates, weaves and braids. In addition, a need has been expressed to have a material model that utilizes tabulated experimentally based input to define the evolution of plasticity and damage as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To address these needs, a multi-institution consortium has been formed to develop and implement a new composite material model within LS-DYNA. To date, the model development efforts have focused on creating and implementing an appropriate plasticity based model. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. (For orientation, a sketch of the classical plane-stress Tsai-Wu surface that this model generalizes follows the next entry.) - Three Dimensional Analysis of Induced Detonation of Cased Explosive Devon Downes, Manouchehr Nejad Ensan (Aerospace Portfolio, National Research Council), Amal Bouamoul (Defence Research & Development Canada–Valcartier) Fragments of aluminum impacting on Composition B explosive encased in rolled homogeneous armour (RHA) steel were investigated using LS-DYNA®. The investigation focused on shock-to-detonation simulations of Composition B, with the objective of determining both the critical velocity which would generate a shockwave strong enough to cause detonation of the explosive, as well as the resulting pressure profile of the detonation wave. Detonation scenarios at low, intermediate and high impact velocities were investigated. It was observed that at low impact velocity the explosive failed to detonate. At intermediate velocities, detonation was due to the development of localized hot spots caused by the compression of the explosive from the initial shockwave. Detonation was also caused by pressure waves reflecting off the casing of the explosive, leading to so-called sympathetic detonation. At high impact velocity, initiation of the explosive was caused by the initial incident pressure wave located immediately behind the top casing/explosive interface.
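As background to the generalized yield function described in the composite material model entry above, the sketch below evaluates the classical plane-stress Tsai-Wu criterion from strength values and checks a stress state against it. The strength values and stress state are illustrative, and this is the textbook criterion, not the new tabulated model.

# Classical plane-stress Tsai-Wu check. Strength values [MPa] and the
# stress state are illustrative only.
import math

def tsai_wu_coefficients(xt, xc, yt, yc, s):
    """Coefficients of F1*s1 + F2*s2 + F11*s1^2 + F22*s2^2 + F66*t12^2
    + 2*F12*s1*s2 = 1 (plane stress)."""
    f1 = 1.0 / xt - 1.0 / xc
    f2 = 1.0 / yt - 1.0 / yc
    f11 = 1.0 / (xt * xc)
    f22 = 1.0 / (yt * yc)
    f66 = 1.0 / (s * s)
    f12 = -0.5 * math.sqrt(f11 * f22)   # common estimate of the interaction term
    return f1, f2, f11, f22, f66, f12

def failure_index(stress, coeffs):
    s1, s2, t12 = stress
    f1, f2, f11, f22, f66, f12 = coeffs
    return (f1 * s1 + f2 * s2 + f11 * s1**2 + f22 * s2**2
            + f66 * t12**2 + 2.0 * f12 * s1 * s2)

if __name__ == "__main__":
    coeffs = tsai_wu_coefficients(xt=1500.0, xc=1200.0, yt=50.0, yc=250.0, s=70.0)
    fi = failure_index((800.0, 20.0, 30.0), coeffs)
    print("Tsai-Wu index:", round(fi, 3), "(failure predicted at 1.0)")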
- Through-Thickness Element Splitting for Simulation of Delamination in Laminated Composite Materials Ofir Shor and Reza Vaziri (The University of British Columbia) The increasing use of laminated composite materials in advanced industrial applications requires the ability to predict their behavior under the expected service loads. Laminated composite materials exhibit various damage and failure mechanisms, which can cause a degradation of their mechanical properties and lead to catastrophic structural failure. The debonding of adjacent laminate layers, also known as delamination, is considered to be one of the most dominant damage mechanisms in the failure of composite laminates; hence it is important to have numerical analysis tools that are able to predict its initiation and growth. Although methodologies to simulate delamination in composite materials exist, they are limited to small-scale models and are therefore not suitable for large-scale applications of practical significance. A new method is presented here that allows simulation of this type of failure mode in large-scale composite structures. This is achieved by locally and adaptively splitting the structural elements through their thickness, while introducing cohesive zones in regions where delamination has the potential to initiate. The delamination damage can thus propagate in the structure as the simulation progresses. A mechanical benchmark example (Figure 1) is solved using this approach and the results are verified against those obtained using other numerical methods. - Topology and Topometry Optimization of Crash Applications with the Equivalent Static Load Method Katharina Witowski, Heiner Müllerschön, Andrea Erhart, Peter Schumacher (DYNAmore GmbH) This paper deals with topology and topometry optimization of structures under highly nonlinear dynamic loading, such as crash, using equivalent static loads. The basic idea of the Equivalent Static Load (ESL) method is to divide the original nonlinear dynamic optimization problem into an iterative "linear optimization ↔ nonlinear analysis" process. The displacement field of the nonlinear dynamic analysis is transformed to equivalent linear static loads for a variety of time steps. This leads to an optimization with multiple linear static load cases. In an outer loop the nonlinear analysis is repeated to correct and adapt the displacement field. There are several previous papers on the idea of equivalent static loads, e.g. Shin MK, Park KJ, Park GJ: Optimization of structures with nonlinear behavior using equivalent load. Comp. Meth. Appl. Math., 196, pp. 1154-1167, 2007. This paper reports on experiences in applying the ESL methodology to industrial problems from the automotive industry. For the nonlinear dynamic analysis LS-DYNA® is used; for linear topology and topometry optimization GENESIS from Vanderplaats R&D is applied. The investigations have been performed within a research project, funded by the BMBF, with several partners from German automotive companies. In applying the method to large-scale problems, numerous challenges were encountered. Setting up a fully automated and robust process on an HPC cluster with nested linear and nonlinear finite element analysis and optimization for multiple load cases was a challenging task. The general objective of the investigations was to evaluate the suitability of the method for different types of crash and impact problems. The assessment considers the quality and usability of the results as well as the numerical cost.
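To make the transformation step of the ESL method in the entry above concrete, the minimal sketch below multiplies displacement snapshots from a nonlinear dynamic run by a linear stiffness matrix to obtain one equivalent static load case per selected time step, f_eq(t_k) = K_lin * u_nl(t_k). The matrix and snapshots are tiny illustrative stand-ins, not data from the paper.

# Equivalent static loads from nonlinear displacement snapshots:
# f_eq(t_k) = K_lin * u_nl(t_k). Illustrative 3-DOF example.

def matvec(matrix, vector):
    return [sum(a * x for a, x in zip(row, vector)) for row in matrix]

def equivalent_static_loads(k_lin, snapshots):
    """One equivalent static load vector per selected time step."""
    return [matvec(k_lin, u) for u in snapshots]

if __name__ == "__main__":
    k_lin = [  # 3-DOF linear stiffness matrix (illustrative)
        [ 2.0e6, -1.0e6,  0.0],
        [-1.0e6,  2.0e6, -1.0e6],
        [ 0.0,   -1.0e6,  1.0e6],
    ]
    snapshots = [            # nonlinear displacement fields at chosen steps
        [0.001, 0.003, 0.004],
        [0.002, 0.006, 0.009],
    ]
    for k, f_eq in enumerate(equivalent_static_loads(k_lin, snapshots)):
        print(f"load case {k}: f_eq = {[round(f, 1) for f in f_eq]}")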
The assessment addresses the quality and usability of the results as well as the numerical cost. - Update in dummy model enhancements and effective pre-processing Sebastian Stahlschmidt, Alexander Gromer, Reuben D'Souza, Ulrich Franz (DYNAmore GmbH) The FAT and PDB dummy models have been developed for more than a decade. The models are used by almost all OEMs and restraint system suppliers to enhance the passive safety performance of their vehicles. Nevertheless, the PDB is still launching new projects to further enhance the predictability of the ES-2, ES-2re, BioRID-II and WorldSID models. This paper presents the current enhancement projects at a glance. With the increasing quality of the dummy and restraint system models, additional details related to model assembly have a significant influence on the overall accuracy. Thus, more advanced pre-processing to assemble the finite element input is required. Such pre-processing might involve a sequence of pre-simulations and might take pre-stresses into account. In some cases the computational effort for the pre-processing exceeds the time needed to simulate the final load case. The paper presents ideas and solutions to simplify and speed up the pre-processing of the above-mentioned dummy models without a loss in accuracy. The solutions utilize standard pre-processing tools and scripts as well as LS-DYNA® implicit and explicit time-stepping schemes. - Usage of LSTC_NCAC Hybrid III 50th Dummy in Frontal Occupant Simulation Ming-Pei Lin, Chih-Min Chang, Cho-Hsuan Tsai, Chia-Hui Tai, Chun-Te Lee (HAITEC) Occupant simulation is very useful for vehicle restraint system and passive safety development. Since the requirements of safety assessment are becoming more and more demanding, occupant simulation models have to become more and more accurate. The dummy model is crucial in occupant simulation and is difficult for most vehicle manufacturers to build. Purchasing a commercial dummy model is a reasonable and safe choice. Alternatively, LSTC offers LSTC dummy models, which are free for LS-DYNA® users. It is foreseeable that LSTC dummy models may not be as stable as commercial ones. This article describes the usage of the LSTC_NCAC Hybrid III 50th dummy in frontal occupant simulation and tries to give preliminary guidelines for its usage in vehicle development. - Using LS-DYNA® to Simulate the Thermoforming of Woven-Fabric Reinforced Composites Corey D. Morris, Lisa M. Dangora and James A. Sherwood (University of Massachusetts) Thermoforming of fabrics is a composite manufacturing process that has the potential to yield quality parts with production costs and cycle times comparable to the fabrication of stamped metal parts. The thermoforming process, illustrated in Figure 1, begins with alignment of the fabric in a rigid frame. Typically, multiple ply layers are simultaneously stamped into the mold to achieve the desired part thickness and mechanical properties. The individual plies can be oriented and aligned in the frame to give the desired directional performance. The loaded frame is transported along shuttle rails to an oven where it is heated until the polymer matrix is hot enough to flow with reasonably low viscosity. Because the traditional materials used for this manufacturing process are fabrics with commingled tows or pre-impregnated sheets, there is no need for a resin infusion step.
The frame is moved from the oven to the molding area and aligned between a punch and die; binder plates are conventionally used to apply force around the circumference of the part. The application of pressure to the binder plate induces in-plane forces in the fabric that can reduce wrinkling as it is drawn into the die by the punch. A velocity is then prescribed to the punch to force the ply stack into the die mold. The tools (punch, die and binder plate) are often heated to slow the rate at which the polymer matrix cools. The finished piece assumes the geometry of the die and punch and hardens into a solid part after the matrix has cooled. - Validation of Hydraulic Gas Damper Coupler and Crash Simulation of Large Rolling Stock Model in LS-DYNA® F Lancelot (ARUP), YH Zhu, BH Li, KF Wang, CL Li (Changchun Railway Vehicle Co., Ltd) The hydraulic gas damper coupler (HGDC) is the most important energy-absorbing component in rolling stock crash impacts. The HGDC can effectively dissipate crash energy and reduce excessive structural loads at all impact speeds. In this paper, the LS-DYNA material *MAT_HYDRAULIC_GAS_DAMPER_DISCRETE_BEAM (*MAT_070) is used to simulate the HGDC coupler. Using this formulation, both static and dynamic characteristics have been replicated. The validated HGDC models have been incorporated into rolling stock frontal impact simulations. A simple mass-beam representation of the carriages and a full-scale detailed model – containing 16 carriages and more than 28 million elements – have successively been analysed. This paper also presents the innovative pre- and post-processing and data storage methods developed by CNR and Arup to handle the very large FE models and results files. - Validation of the Simulation Methodology for a Mobile Explosive Containment Vessel David Karlsson (DYNAmore Nordic AB) A Mobile Explosive Containment Vessel (MECV) is a chamber for protection against the effects of explosions and is used to safely secure, contain, transport, store or test explosive materials. The MECV has been tested with a charge equivalent to 8 kg of TNT and strain levels at several positions were measured. These test data were used to compare and validate two simulation techniques and, where necessary, to improve the simulation methodology. The first technique uses a separate 2D-axisymmetric MMALE simulation for the explosive blast load calculation, and it showed good agreement with the test. In this case, an axisymmetric blast simulation is first made and the pressure is recorded at the fixed boundary. Then an in-house developed program is used to map the blast load onto the 3D structural simulation. The second, much more compute-intensive technique is to perform a full 3D coupled MMALE simulation of the blast and structure. The second technique initially led to lower strain levels compared to the test, and a more detailed parameter study had to be performed to improve the simulation results. In conclusion, we now have two validated simulation techniques and procedures for making realistic explosive simulations of containment vessels. - Validation Studies for Concrete Constitutive Models with Blast Test Data Youcai Wu, John E. Crawford, Shengrui Lan, Joseph M. Magallanes (Karagozian & Case) Many concrete constitutive models are available for use in LS-DYNA®. A thorough validation of their applicability to the types of problems at hand should be made before applying any of these models.
The process for validating a constitutive model includes examining the results produced with the model in relation to the behaviors it exhibits, gathering a suite of measured data pertinent to the problem to be addressed, and comparing measured and computed data. This paper addresses issues related to blast response analyses, which include simplification of boundary conditions (such as support conditions and contact interfaces), numerical discretization, and material modeling. It was found that strain rate effects must be imposed properly, since blast loadings usually excite high-frequency and high-strain-rate responses. The impact of boundary conditions was also identified through the numerical studies. - Verification and Validation of a Three-Dimensional Generalized Composite Material Model Canio Hoffarth, Joseph Harrington, Subramaniam D. Rajan (Arizona State University), Robert K. Goldberg, Kelly S. Carney (NASA-GRC, Cleveland), Paul Du Bois (George Mason University), Gunther Blankenhorn (LSTC) A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to accurately predict the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA® as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used in a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and validation of the material model using the T800-F3900 fiber/resin composite material. - Verification of Concrete Material Models for MM-ALE Simulations Swee Hong TAN, Roger CHAN, Jiing Koon POON and David CHNG (Ministry of Home Affairs, Singapore) Although there are many concrete material models in LS-DYNA®, very few appear to be valid for Multi-Material Arbitrary Lagrangian-Eulerian (MM-ALE) simulations. From a rudimentary point of view, it makes sense at first instance that the typical method of verification, namely simulating cylinder tests in triaxial and/or uniaxial stress states in the Lagrangian format, should work for MM-ALE as well. This paper shares the experience gathered from attempts at simulating cylinder tests involving MM-ALE concrete material models. Useful insights were gained and they would form part of the considerations for future work.
https://www.dynalook.com/conferences/13th-international-ls-dyna-conference
PURPOSE: To provide a moistureproof composite plastic board having a heat insulating function and a moistureproof function and easy to handle, by laminating a moistureproof sheet to a hollow plate-shaped object made of plastic. CONSTITUTION: A hollow plate-shaped object 1 made of plastic and a moistureproof sheet 2 are laminated and bonded. As the hollow plate-shaped object 1 made of plastic, a hollow laminated plastic sheet such as Prapal (R) [phonetic] (made by Japan Petrochemical Co.) may be used, for example. By this method, a lightweight, easy-to-handle moistureproof composite plastic board is obtained which has moistureproof properties, antifungal properties and resistance to dewing, enhanced planar compression strength and impact resistance, and is useful as a building material or a box-making material. COPYRIGHT: (C)1993,JPO&Japio
The present application generally relates to data compression and, in particular, to a parallel implementation of an entropy encoder and an entropy decoder. Data compression, whether lossy or lossless, often uses entropy coding to encode a decorrelated signal as a sequence of bits, i.e. a bitstream. Efficient data compression has a wide range of applications, such as data, image, audio, and video encoding. By way of example, ITU-T H.264/MPEG AVC is a video coding standard widely used for encoding/decoding video. It defines a number of different profiles for different applications, including the Main profile, Baseline profile and others. There are a number of standards for encoding/decoding images and videos, including H.264, that employ lossy compression processes to produce binary data. For example, H.264 includes a prediction operation to obtain residual data, followed by a DCT transform and quantization of the DCT coefficients. The resulting data, including quantized coefficients, motion vectors, coding mode, and other related data, is then entropy coded to generate a bitstream of data for transmission or storage on a computer-readable medium. A number of coding schemes have been developed to encode binary data. For example, JPEG images may be encoded using Huffman codes. The H.264 standard allows for two possible entropy coding processes: Context Adaptive Variable Length Coding (CAVLC) or Context Adaptive Binary Arithmetic Coding (CABAC). CABAC results in greater compression than CAVLC, but CABAC is more computationally demanding. An entropy encoder/decoder is a component within a compression encoder/decoder. While the entropy encoder/decoder consumes only a small portion of the overall compression encoder/decoder, it can present a significant bottleneck in real-time compression because of the serial nature of its operation. It would be advantageous to provide for an improved implementation of an entropy encoder and an entropy decoder. In one aspect, the present application describes a hardware implementation of a parallel entropy encoder and a parallel entropy decoder. In an embodiment, an entropy encoder block for use in a context adaptive encoder may be provided. The entropy encoder block for encoding phrase words into code words using encoding search tree lookup tables, each encoding search tree lookup table corresponding to one of N encoding probabilities used by a context modeling component of the context adaptive encoder, the entropy encoder block receiving phrase words and an associated probability corresponding to one of the N probabilities for each phrase word. 
The entropy encoder block may comprise a plurality of encoding elements for receiving phrase words and an indication of the associated probability for each phrase word, the plurality of encoding elements each connected to and operative to access a subset of one or more of the encoding search tree lookup tables to select a corresponding encoding search tree lookup table associated with each probability, such that each of the N encoding probabilities are serviced by at least one of the plurality of encoding elements and at least one of the N encoding probabilities is serviced by two or more encoding elements, and to encode the received phrase words using the selected encoding search tree lookup table to generate output code words; and, a state machine for assigning the phrase words to a particular encoding element based upon the encoding search tree lookup tables connected to that encoding element and an availability of that encoding element. In an aspect, the two or more encoding elements may be either each connected to their own copy of an encoding search tree lookup table, or all connected to a set of one or more shared encoding search tree lookup tables, to service the at least one of the N encoding probabilities. In an aspect, the set of one or more shared encoding search tree lookup tables may further comprise an input encoding crossbar switch connected to the two or more encoding elements and the set of one or more shared encoding search tree lookup tables, the input encoding crossbar switch operative to enable a selecting one of the two or more encoding elements to select a different shared encoding search tree lookup table from the set of one or more shared encoding search tree lookup tables; and, an output encoding crossbar switch connected to the set of one or more shared search tree lookup tables and the two or more encoding elements, the output encoding crossbar switch operative to communicate values from the selected shared encoding search tree lookup table to the corresponding selecting one of the two or more encoding elements. In an aspect, the entropy encoder block may comprise control logic for evaluating the code words generated by the plurality of encoding elements and outputting load balancing information associated with the code words. The control logic may comprise a counter and the load balancing information comprises a count of a number of code words associated with each probability. The control logic may be operative to evaluate the code words and divide the code words into work packages, such that the work packages each require approximately equivalent processing to decode or, each contain approximately the same number of code words. The control logic may be further operative to generate pointers to identify each work package and package the pointers in a header attached to a group of work packages. In an aspect, each work package may comprise code words of the same associated probability. In an aspect, at least one work package may comprise code words of differing associated probabilities. The differing associated probabilities within at least one work package may be selected from a list maintained by the encoder block that identifies a group of associated probabilities serviceable by a single decoding element of a plurality of parallel decoding elements. In an aspect, the load balancing information may comprise a pointer identifying the start of each code word within a collection of code words; or, a code word identifier inserted between groups of code words. 
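By way of illustration only, the assignment behaviour of the state machine described above may be sketched in software. The sketch below assumes a hypothetical configuration: the class and function names, the buffer-depth test used as the availability criterion, and the example probability subsets are illustrative assumptions and are not taken from the application. Each encoding element is wired to a subset of the N encoding search tree lookup tables, at least one probability is serviced by two elements, and an incoming phrase word is assigned to the first available element able to service its probability.

```python
from collections import deque

class EncodingElement:
    """Hypothetical model of one encoding element and the lookup tables wired to it."""
    def __init__(self, element_id, probability_subset):
        self.element_id = element_id
        self.probability_subset = set(probability_subset)  # probabilities whose LUTs are wired to this element
        self.input_queue = deque()                          # stands in for the element's input buffer(s)

    def is_available(self, max_depth=4):
        # Availability is modelled simply as input-buffer occupancy (an assumption).
        return len(self.input_queue) < max_depth

    def can_service(self, probability_index):
        return probability_index in self.probability_subset

def assign_phrase_word(elements, phrase_word, probability_index):
    """Assign a (phrase word, probability index) pair to the first available
    encoding element whose lookup tables cover that probability."""
    for element in elements:
        if element.can_service(probability_index) and element.is_available():
            element.input_queue.append((phrase_word, probability_index))
            return element.element_id
    return None  # no suitable element free; the caller would stall or buffer the phrase word

# Example: N = 4 probabilities and three elements; probability 0 is serviced by two
# elements so that a burst of probability-0 phrase words can still be handled in parallel.
elements = [
    EncodingElement(0, {0, 1}),
    EncodingElement(1, {0, 2}),
    EncodingElement(2, {3}),
]
for word, p in [(0b1011, 0), (0b0110, 0), (0b1110, 3)]:
    print("phrase word", bin(word), "-> element", assign_phrase_word(elements, word, p))
```

In hardware the corresponding decision would be taken by the state machine from the input-buffer status signals; the queue above merely stands in for those buffers.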
In an aspect of the encoder block the state machine may be further operative to use load balancing information to assign the phrase words and the associated probability to a particular encoding element based upon the encoding search tree lookup tables connected to that encoding element and an availability of that encoding element. The load balancing information may comprise either a list of encoding elements able to service each probability, and the state machine is further operative to match each of the phrase words with the first available encoding element from the list corresponding to that phrase word's associated probability; or, for each probability, a likelihood of occurrence for that probability, and the state machine is further operative to assign the phrase words to available encoding elements in a decreasing order of likelihood. The likelihood for each probability may further comprise identification of encoding elements able to service that probability, and the state machine is further operative to assign each phrase word to a one of the available encoding elements. In an embodiment, an entropy decoder block for use in a context adaptive decoder may be provided. The entropy decoder block for decoding code words into phrase words using decoding search tree lookup tables, each decoding search tree lookup table corresponding to one of N encoding probabilities used by a context modeling component of the context adaptive decoder, the entropy decoder block receiving code words and, for each code word, an associated probability corresponding to one of the N probabilities. The entropy decoder block may comprise a plurality of decoding elements for receiving, code words and an indication of the associated probability for each code word, the plurality of decoding elements each connected to and operative to access a subset of one or more of the decoding search tree lookup tables such that each of the N decoding probabilities are serviced by at least one of the plurality of decoding elements and at least one of the N decoding probabilities is serviced by two or more decoding elements, and to decode the received code words using the accessed decoding search tree lookup table; and, a state machine for assigning each code word and the respective indication of the associated probability for that code word to a particular decoding element based upon the decoding search tree lookup tables connected to that decoding element and an availability of that decoding element. In an aspect, the two or more decoding elements may be either each connected to their own copy of a decoding search tree lookup table, or all connected to a set of one or more shared decoding search tree lookup tables, to service the at least one of the N encoding probabilities. 
The set of shared decoding search tree lookup tables may further comprise an input decoding crossbar switch connected to the two or more decoding elements and the set of one or more shared decoding search tree lookup tables, the input decoding crossbar switch operative to enable a selecting one of the two or more decoding elements to select a different shared decoding search tree lookup table from the set of one or more shared decoding search tree lookup tables; and, an output decoding crossbar switch connected to the set of one or more shared search tree lookup tables and the two or more decoding elements, the output decoding crossbar switch operative to communicate values from the selected shared decoding search tree lookup tables to the corresponding selecting one of the two or more decoding elements. In an aspect of the entropy decoder block the state machine may be operative to assign the code words and the respective indication of the associated probability for each code word using load balancing information to a particular decoding element based upon the decoding search tree lookup tables connected to that decoding element and an availability of that decoding element. The load balancing information may comprise, for each code word, a likelihood of the associated probability and the state machine is further operative to assign the code words to available encoding elements in decreasing order of likelihood of the associated probability for each code word. The state machine may be further operative to generate the load balancing information. The state machine may be further operative to generate the load balancing information as an estimate of the processing required to process each code word and to assign that code word to an available decoding element able to service the associated probability based upon that estimate. The decoder block may be further operative to compute the estimate from a number of code words associated with each probability. In an aspect, the estimate may comprise a number of code words associated with each probability and the state machine is further operative to assign code words of each probability to decoding elements in decreasing order of the number of code words associated with that probability. In an aspect, the state machine may be operative to receive the load balancing information. The load balancing information may comprise work package identifiers received with the code words and associated probabilities, the work package identifiers dividing the received code words into groups of code words, each group requiring approximately equal processing work by a decoding element to decode or comprising a same number of code words, and the control logic is further operative to distribute the work packages to the decoding elements. The identifiers may comprise pointers to identify the beginning of each work package, and the control logic is operative to access a code word buffer using the pointers to locate each work package. US Patent Application No. 12/707,797 The parallel entropy encoder and decoder described within is intended for use within a data compression and decompression scheme that employs a context based variable length coding scheme such as the Context Adaptive Variable Length Coding (CAVLC) process described in the H.264 standard, or other similar coding processes. 
For instance, the parallel entropy encoder and decoder could be used with the PARALLEL ENTROPY CODING AND DECODING METHODS AND DEVICES described in (incorporated herein by reference), and may be conveniently referred to as a Context-Based Adaptive Variable-length to Variable-length code (CAV2V) algorithm. While examples are provided in this description with reference to the above CAVLC and CAV2V algorithms, it will be understood by the person of skill in the art that this is only an embodiment, and the entropy encoder and entropy decoder described herein may be more generally applied. One of the techniques used in some entropy coding schemes, such as CAVLC and CABAC, both of which are used in H.264/AVC, is context modeling. With context modeling, each bit of the input sequence has a probability within a context, where the probability and the context are given by the bits that preceded it. In a first-order context model, the context may depend entirely upon the previous bit (symbol). In many cases, the context models may be adaptive, such that the probabilities associated with symbols for a given context may change as further bits of the sequence are processed. Reference will now be made, by way of example, to the accompanying drawings which show example embodiments of the present application, and in which: Figure 1 shows a block diagram of an encoding process; Figure 2a shows, in block diagram form, an embodiment of an entropy encoder; Figure 2b shows, in block diagram form, an embodiment of an entropy decoder; Figure 3a shows, in block diagram form, an embodiment of an entropy encoder; Figure 3b shows, in block diagram form, an embodiment of an entropy decoder; Figure 4a shows, in block diagram form, an embodiment of an entropy encoder element; Figure 4b shows, in block diagram form, an embodiment of an entropy decoder element; Figure 5a shows, in block diagram form, an embodiment of an entropy encoder; Figure 5b shows, in block diagram form, an embodiment of an entropy decoder; Figure 5c shows, in block diagram form, an embodiment of an entropy encoder; Figure 5d shows, in block diagram form, an embodiment of an entropy decoder; Figure 6 shows, in block diagram form, an embodiment of a code word output controller and buffer; Figure 7 shows, in block diagram form, a computing device including an encoder; Figure 8 shows, in block diagram form, a computing device including a decoder; and, Figure 9 shows, in block diagram form, a computing device including an encoder and a decoder. Figure 10a shows, in block diagram form, an embodiment of an entropy encoder; Figure 10b shows, in block diagram form, an embodiment of an entropy decoder. Similar reference numerals may have been used in different figures to denote similar components. Reference is made to Figure 1, which shows a block diagram of an encoding process 100. The encoding process 100 includes a context modeling component 104 and an entropy encoder 106. The context modeling component 104 receives the input sequence x 102, which in this example is a bit sequence (b0, b1, ..., bn). The context modeling component 104 determines a context for each bit bi based on one or more previous bits in the sequence, and determines, based on the adaptive context model, a probability pi associated with that bit bi, where the probability is the probability that the bit will be the Least Probable Symbol (LPS). The LPS may be "0" or "1" in a binary embodiment, depending on the convention or application.
The context modeling component outputs the input sequence, i.e. the bits (b0, b1, ..., bn), along with their respective probabilities (p0, p1, ..., pn). The probabilities are an estimated probability determined by the context model. This data is then input to the entropy encoder 106, which codes the input sequence using the probability information. For example, the entropy encoder 106 may be a binary arithmetic coder. The entropy encoder 106 outputs a bitstream 108 of encoded data. It will be appreciated that each bit of the input sequence is processed serially to update the context model, and the serial bits and probability information are supplied to the entropy encoder 106, which then entropy encodes the bits to create the processed bitstream 108. In some embodiments the encoder 106 may further provide load balancing information to allow the decoder to process the coded data in parallel. The encoder 106 may, in these embodiments, provide the load balancing information as the encoder 106 is able to identify the number of code words associated with each probability within a particular context. In some embodiments, a decoder will typically generate intermediate phrase words comprised of phrase word bits and packed phrase word bits that require additional processing to yield the phrase word. In general, such packed phrase word bits are a more efficient representation of the bits, used to reduce storage requirements. For instance, a long string of 1's followed by a '1' or a long string of 1's followed by a '0' may be replaced with a value indicating the number of 1's in the string, the numerical value having been extracted from the entropy encoded code word. Control logic to convert the numeric value to a string of 1's or 0's may reside either in the decoder, or in a downstream processing block with larger memory buffers to accommodate the expanded string. In the embodiment below it is assumed that said control logic is contained in a downstream processing block, but both embodiments are contemplated. In some embodiments, explicit probability information may not be passed from the context modeling component 104 to the entropy decoder; rather, in some instances, for each bit the context modeling component 104 may send the entropy decoder, for instance through the entropy encoder 106, an indication of the associated probability such as an index value, flag, control signal or other indicator that reflects the probability estimation made by the context modeling component 104 based on the context model and the current context of the input sequence 102. The indication of the associated probability is indicative of the probability estimate associated with its corresponding bit. In some embodiments, a probability for each bit will not be communicated, but instead bits of the same probability will be grouped together and the number of bits (or bytes or code words) and the probability of the group will be communicated, for instance as a header for the group. In some embodiments, the probability information may be communicated as side band information, for instance by transferring a bit to an input buffer assigned to the same associated probability as the bit. In such an embodiment the indication of the associated probability comprises transferring a bit to an input buffer assigned to the associated probability. In embodiments where load balancing information is provided by the encoder, the load balancing information may likewise be communicated as side band information.
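As a purely illustrative software sketch of the kind of first-order adaptive context modelling described above, the following may be considered; the counting estimator, the initial context, and the quantisation of the estimate to one of N probability indices are assumptions made for the example and are not the model specified by H.264 or by this application.

```python
# The context of a bit is simply the previous bit; a counting estimator tracks, per
# context, the chance that the next bit is the Least Probable Symbol, and the estimate
# is quantised to one of N probability indices that the entropy encoder would use to
# select a search tree lookup table.

N = 8  # assumed number of probability indices / lookup tables

def probability_index(p_lps):
    """Quantise an LPS probability estimate in (0, 0.5] to one of N indices."""
    return min(N - 1, int(p_lps * 2 * N))

def model_sequence(bits):
    # counts[context] = [number of 0s seen, number of 1s seen] following that context
    counts = {0: [1, 1], 1: [1, 1]}  # start with uniform pseudo-counts
    context = 0                      # assumed initial context
    out = []
    for b in bits:
        zeros, ones = counts[context]
        p_one = ones / (zeros + ones)
        p_lps = min(p_one, 1.0 - p_one)   # probability of the less probable symbol
        out.append((b, probability_index(p_lps)))
        counts[context][b] += 1           # adapt the model
        context = b                       # first-order: context is the previous bit
    return out

print(model_sequence([0, 0, 1, 0, 0, 0, 1, 1]))
```

The (bit, probability index) pairs produced in this way correspond to the bits and the indications of associated probability that the context modeling component 104 makes available to the entropy encoder block.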
In accordance with one aspect, the present application proposes a hardware architecture for an entropy encoder and a hardware architecture for an entropy decoder. Both hardware architectures having a parallel processing architecture for entropy coding or decoding with a load balancing component for dynamically assigning the allocation of one or more probability trees amongst the parallel processing engines. In a first embodiment, the load balancing component of the entropy decoder receives load balancing information from the entropy encoder along with the probability information generated by the context modeling component 104. The load balancing information is generated by the encoder 106 to allocate the output code words and associated probability information to each of the encoding or decoding engines such that the computational work load is allocated approximately evenly amongst the engines. The load balancing information may, for instance, be included as a load balancing field in a header associated with code words output from the entropy encoder 106. The header may further comprise a probability field containing the probability information for that bit sequence. In an alternative embodiment, the load balancing information may be included as an indication of the associated decoder engine such as an index value, flag, control signal or other indicator that reflects the decoder engine assigned by the context modeling component 104 to carry out the decoding operations using the probability assigned to that bit sequence. Similar to the probability information, the indication of the associated decoder engine may alternatively be conveyed using side band information. In a second embodiment, the encoder and decoder architectures may each include control logic for assessing the bit sequence and associated probability information and allocating the bits and associated probabilities to distribute the computational workload approximately evenly amongst the engines. The second embodiment has an advantage, for instance, in that a parallel encoder/decoder architecture may be provided that is able to dynamically load balance bit sequences to be encoded/decoded independent of the context modeling component 104. In one aspect, the control logic of the decoder may receive load balancing information from the encoder 106 that indicates the number of code words assigned to a particular probability. The load balancing information allows the control logic to assign decoding elements to each probability to be decoded such that each decoding element has an approximately equal computational load. In another aspect, the control logic of the decoder may assign the decoding elements based upon the number of bytes assigned to each probability. While the number of bits per code word per probability varies, typically between 3 bits to 8 bits, the number of bytes in the group per probability may be a fair approximation of the number of phrase/code words in the group. The approximation limits the scheduling calculations or information to be communicated, and it would be most accurate when the number of bits per phrase/code word is relatively similar for each probability. In another aspect, the control logic of the decoder may have access, for instance through a lookup table, to the expected number of bits (or bytes) per code word for each probability. 
In this aspect, the control logic may divide the number of bits (or bytes) assigned to each probability by the average number of bits (or bytes) per code word to arrive at an approximate number of code words to be processed. Calculating an approximate number of code words may be more accurate than scheduling based upon the number of bits (or bytes) per probability, particularly where the number of bits per code word varies. In another aspect, the control logic of the decoder, for instance the code input state machine 302, may assign decoding elements to probabilities from most probable to least probable. In embodiments where the probabilities are not all approximately equal, assigning decoding elements in decreasing order of probability will, on average, confine any load imbalances to the less frequent probabilities. Accordingly, the more frequently encountered probabilities will be well balanced, and any load imbalances will occur after the more probable phrase/code words have been processed. While the load balancing under this aspect is less accurate, the task of scheduling decoding elements for load balancing purposes is greatly simplified. In embodiments where the probabilities are all approximately equal, this heuristic approach will not be as efficient as embodiments with load balancing information supplied by the encoder that allows the decoder to divide the work load among decoding elements by the actual number of code words per probability. Reference is now made to Figure 2a, which shows, in block diagram form, an embodiment of an entropy encoder block 200. The entropy encoder block 200 is downstream from a context modeling component 104 which, for each bit of the input sequence, determines an estimated probability based on the context model. The context modeling component assigns each bit to one of N probabilities from the context model, each probability having a search tree associated with it. The context modeler 104 makes the bit and an indication of the associated probability available to the entropy encoder block 200. As indicated above, in an embodiment, the encoder 106 may further provide load balancing information to allocate the bit and the associated probability to one of the encoding engines. In the context of the embodiment of Figure 2a, the context modeling component is illustrated as presenting a phrase buffer 105 in communication with the entropy encoder block 200. In the embodiment of Figure 2a, the entropy encoder block 200 includes a phrase input state machine 202 that is in operative communication with the phrase buffer 105 and a de-multiplexing element 204. The phrase input state machine 202 is also in operative communication with a plurality of encoding elements 206-1 ... 206-d. The phrase input state machine 202 is operative to receive a state of an input buffer in each encoding element 206-1 ... 206-d and to send an indication of a probability associated with each phrase word being input into each encoding element 206-1 ... 206-d. In this embodiment, the phrase input state machine 202 may direct the phrase word and associated indication of probability to one of the encoding elements 206-1 ... 206-d as specified by the probability information sent with the phrase word and the associated indication of probability from the phrase buffer 105. In an alternate embodiment, the phrase input state machine 202 may further comprise control logic to assign the phrase word and the indication of the probability to one of the encoding elements 206-1 ...
206-d based upon a load balancing schedule. In the alternate embodiment, the phrase input state machine 202 maintains a list of encoding elements 206-1 ... 206-d able to service a probability, in a pre-determined order of preference. Upon receiving a phrase word having an associated probability, the phrase input state machine 202 matches the received phrase word with the first available encoding element 206-1 ... 206-d from the list corresponding to the phrase word's probability information. As described above, the phrase buffer 105 may provide an indication of the associated probability, for instance by including a flag or control signal corresponding to each phrase word. The embodiment of Figure 2a provides an embodiment in which the indication of the associated probability is provided as side band information, by directing the input phrase words to an input buffer assigned to a specific lookup table that corresponds to the associated probability. Both embodiments are contemplated. The phrase input state machine 202 is further operative to receive probability and buffer information from the phrase buffer 105 and to direct the phrase buffer 105 to transfer an indicated phrase word to the de-multiplexing element 204. The de-multiplexing element 204 is in communication with the phrase buffer to receive phrase words as directed by the phrase input state machine 202. The phrase input state machine 202 is further operative to direct the de-multiplexing element 204 to direct the received phrase word to one of a plurality of input buffers (not shown in this figure) distributed across the plurality of encoding elements 206-1 ... 206-d when an input buffer is detected as available. Each of the plurality of input buffers is associated with one of the N probabilities from the context model. In the embodiment of Figure 2a there are Q input buffers (Q > N); each buffer is associated with one of the N probabilities such that all N probabilities are represented. Each encoding element 206-1 ... 206-d is operative to encode bits associated with a subset of the N probabilities. In order to encode all possible phrases, it is necessary that each of the N probabilities is represented by at least one encoding element 206-1 ... 206-d. In an embodiment, the phrase input state machine 202 maintains an association for each encoder element 206-1 ... 206-d with the subset of probabilities serviced by that encoder element 206-1 ... 206-d. The phrase input state machine 202 may assign a phrase word to one of a subset of encoder elements 206-1 ... 206-d that service a particular probability, for instance according to a pre-determined order of preference depending upon the availability of each of those encoder elements 206-1 ... 206-d in the subset. Having greater than N input buffers allows multiple encoding elements to process phrase words having the same associated probability. This processing may occur at the same time, or may occur while the other encoding elements are processing phrase words having a different associated probability. This is useful, for instance, where some probabilities have a much higher workload than others, such that it would be advantageous to have multiple encoding elements available to process phrase words associated with those probabilities so that phrase words continue to be processed in parallel, rather than waiting for one encoding element to process a string of phrase words having the same associated probability.
This also allows for flexibility in assigning phrase words to encoding elements. In the embodiment of Figure 4a, for instance, the subset of probabilities serviced by the encoding element 206-1 corresponds to encoding element input buffers 214-1 ... 214-i of the encoding element 206-1. Each of the encoding element input buffers 214-1 ... 214-i provides probability information as side band information to the encoding element 206-1. Referring back to the embodiment of Figure 2a, the phrase input state machine 202 and demultiplexer 204 convey the probability information by sorting phrase words amongst the input buffers 214-1 ... 214-Q of the encoding elements 206-1 ... 206-d according to the probability associated with each of the input buffers 214-1 ... 214-Q. In an alternate embodiment, d input buffers are provided, each buffer associated with one of the encoder elements. Encoder elements may process more than one probability by receiving an indication of the associated probability, for instance by way of an index, flag or control signal as described above. In the embodiment of Figure 10a, for instance, two encoder elements 206-1, 206-2 each provide a single input buffer, 214-1 and 214-2 respectively. The entropy encoding elements 206-1 ... 206-d each communicate through a communication channel output, 210-1 ... 210-d respectively, to a code word output controller and buffer 207. As will be appreciated, an element performing the function of a code word output controller and buffer 207 need not necessarily be included within the encoder block 200, provided that downstream components include the functionality to combine the outputs 210-1 ... 210-d. In the embodiment of Figure 2a, a downstream DMA module 212 is illustrated for transferring completed code words onward as a bitstream. Reference is now made to Figure 2b, which shows, in block diagram form, an embodiment of an entropy decoder block 300. The entropy decoder block 300 is downstream of and in communication with a code buffer 405. The code buffer 405 contains the code words and probability information produced by the entropy encoder 106. In an embodiment, the code buffer 405 may further contain load balancing information associated with the code words produced by the entropy encoder 106. The entropy decoder block 300 includes a code input state machine 302 that is in operative communication with the code buffer 405 and a de-multiplexing element 304. The code input state machine 302 is also in operative communication with a plurality of decoding elements 306-1 ... 306-d. The code input state machine 302 is operative to receive a state of an input buffer in each decoding element 306-1 ... 306-d and to send an indication of a probability associated with each code word being input into each decoding element 306-1 ... 306-d. In the embodiment where the entropy encoder 106 provides load balancing information, the code input state machine 302 may further allocate the indication of probability and the code word to a particular decoding element 306-1 ... 306-d using the load balancing information. In one aspect, the load balancing information comprises the encoder 106 assembling the encoded bitstream into work packages of roughly equal processing load, as determined by the encoder 106. In an embodiment, the work packages may be identified by pointers identifying the start (or finish) of each work package. In an alternate embodiment, the work packages may be identified by terminators or headers in the bitstream.
The encoder 106 further provides probability information for each work package. In the alternate embodiment, the code input state machine 302 may further comprise control logic to assign the code word and the indication of the probability to one of the decoding elements 306-1 ... 306-d without receiving load balancing information from the encoder 106. In an embodiment, the code input state machine 302 maintains an association for each decoder element 306-1 ... 306-d with the subset of probabilities serviced by that decoder element 306-1 ... 306-d. The code input state machine 302 may assign a code word to one of a subset of decoder elements 306-1 ... 306-d that service a particular probability, for instance according to a pre-determined order of preference depending upon the availability of each of those decoder elements 306-1 ... 306-d in the subset. In an embodiment, the code input state machine 302 may assign decoder elements 306-1 ... 306-d in decreasing order of the likelihood of code word probability. In other words, code words with a more probable associated probability are assigned first and code words with the least probable associated probability are assigned last in order. In an aspect, the code input state machine 302 is operative to assign the code word based upon an estimate of the work load determined by the code input state machine 302, the estimate being based upon the size of the code word derived from probability information provided by the encoder 106. In one embodiment, the encoder 106 may provide load balancing information in the form of the number of code words associated with a probability. The code input state machine 302 may assign the set of code words associated with a probability to one or more decoding elements 306-1 ... 306-d in accordance with the magnitude of that number. In the embodiment of Figure 2b, the code input state machine 302 is further operative to receive probability and buffer information from the code buffer 405 and to direct the code buffer 405 to transfer an indicated code word to the de-multiplexing element 304. The de-multiplexing element 304 is in communication with the code buffer to receive code words as directed by the code input state machine 302. The code input state machine 302 is further operative to direct the de-multiplexing element 304 to direct the received code word to an input buffer (not shown in this figure) of one of the plurality of decoding elements 306-1 ... 306-d able to service the probability associated with the code word when an input buffer is detected as being available. Each of the plurality of decoding elements 306-1 ... 306-d has an associated subset of the N probabilities from the context model, such that every probability is assigned to at least one decoding element 306-1 ... 306-d. Each decoding element 306-1 ... 306-d is operative to decode bits associated with a subset of associated probabilities from the N probabilities. In order to decode all possible code words, it is necessary that each of the N probabilities is represented by at least one associated decoding element 306-1 ... 306-d. In the embodiment of Figure 2b, the entropy decoders 306-1 ... 306-d each communicate through a communication channel output, 310-1 ... 310-Q (Q > N), for output to a downstream processing component. Additional control logic will be required to combine the outputs of the same probability from different decoding elements 306-1 ...
306-d, to group all outputs of the same probability for subsequent re-assembly to generate the output bit sequence. The additional control logic may be maintained in the downstream processing component, or the outputs may be combined within the decoder block 300 before being passed to the downstream processing component. In the context of decoding an encoded bitstream, the code input state machine 302 sorts an input code word based upon its probability to identify a decoding search tree lookup table associated with that probability and then assigns the code word to an entropy decoder able to access the corresponding decoding search tree lookup table, to traverse the associated tree to arrive at a leaf node that yields leaf node contents for that code word to generate the decoded phrase word. Figure 3a illustrates an embodiment of Figure 2a where d = 2, such that there are two encoding elements 206-1, 206-2. Other than the selection of an embodiment with two encoding elements 206-1, 206-2, Figure 3a is similar to Figure 2a. Figure 3b illustrates an embodiment of Figure 2b where d = 2, such that there are two decoding elements 306-1, 306-2. Other than the selection of an embodiment with two decoding elements 306-1, 306-2, Figure 3b is similar to Figure 2b. Figure 4a is an expanded view of an embodiment of encoding element 206-1 from Figures 2a and 3a. The encoding element 206-1 comprises i encoder input buffers 214-1 ... 214-i, each encoder input buffer 214-1 ... 214-i associated with one of the N probabilities. A multiplexing element 216, driven by an encoder state machine 218, is in communication with the encoder input buffers 214-1 ... 214-i and an encoder 220. Entropy encoder 220 comprises an encoding engine 223 and i encoding search tree lookup tables 222-1 ... 222-i. Each encoding search tree lookup table 222-1 ... 222-i corresponds to one of the encoder input buffers 214-1 ... 214-i and its associated probability. The encoding search tree lookup tables 222-1 ... 222-i each comprise an encoding search tree associated with one of the associated probabilities. The encoder state machine 218 is operative to direct the multiplexing element 216 to transfer phrase bits from the encoder input buffers 214-1 ... 214-i to the encoder 220 for encoding when the output buffer 224 is available. The encoder state machine 218 is further operative to direct the encoder 220 to select an encoding search tree lookup table from the i encoding search tree lookup tables 222-1 ... 222-i, namely the encoding search tree lookup table associated with the same probability as the encoder input buffer 214-1 ... 214-i. Encoding engine 223 operates on the phrase bits by traversing the selected encoding search tree lookup table to generate an output code word. Upon generating the output code word, the encoder 220 transfers the code word to a code word output buffer 224. When a downstream processing block transfers the code word from the code word output buffer 224, the encoder state machine 218 selects another encoder input buffer 214-1 ... 214-i for encoding. Figure 4b is an expanded view of an embodiment of decoding element 306-1 from Figures 2b and 3b. The decoding element 306-1 comprises a decoder input buffer 314 in communication with a decoder state machine 318 and an entropy decoder 320. The decoder state machine 318 is in communication with the decoder input buffer 314, the decoder 320 and a de-multiplexing element 324.
The decoder state machine 318 is operative to read from the decoder input buffer 314 an indication of the probability associated with the code word in the decoder input buffer 314, and is operative to direct the decoder 320 to select a decoding search tree lookup table 322-1 ... 322-i corresponding to the probability associated with the code word. The decoder 320 comprises an entropy decoding engine 323 and i decoding search tree lookup tables 322-1 ... 322-i. Each decoding search tree lookup table 322-1 ... 322-i corresponds to one of the associated probabilities serviced by the decoding element 306-1. The decoding search tree lookup tables 322-1 ... 322-i each comprise a decoding search tree associated with an associated probability. The decoder state machine 318 is further operative to direct the decoder de-multiplexing element 324 to distribute output bits from the decoder 320 to one of i decoder output buffers 326-1 ... 326-i. Each decoder output buffer 326-1 ... 326-i corresponds to one of the i probabilities serviced by the decoding element 306-1. Decoding engine 323 operates on the code bits by traversing the selected decoding search tree lookup table 322-1 ... 322-i to locate the leaf node contents. The leaf node contents comprise a portion, or a compressed portion, of the phrase being decoded. Upon generating the leaf node contents, the decoder 320 transfers the leaf node contents to the decoder de-multiplexing element 324 for transfer to the corresponding decoder output buffer 326-1 ... 326-i. The leaf node contents will need to be collected, assembled and decompressed in a downstream processing component to re-create the phrase word. Figure 5a is an expanded view of an embodiment of Figure 3a showing the encoding elements 206-1, 206-2 in simplified expanded form, where the encoding elements 206-1, 206-2 share one or more encoding search tree lookup tables LUTE 232. Reference numerals in the encoding elements 206-1, 206-2 include a suffix of -1 or -2 to identify them as separate components. In the embodiment, a code word output controller and buffer 207 receives the outputs 210-1, 210-2. In the embodiment of Figure 5a, each of the encoding elements 206-1, 206-2 further includes its own subset of encoding search tree lookup tables LUTE 222-1, 222-2. In an alternate embodiment, the encoding elements 206-1, 206-2 may rely only upon a shared set of encoding search tree lookup tables LUTE 232. In a further alternate embodiment, some of the encoding elements 206-1, 206-2 may rely solely upon their own subset of encoding search tree lookup tables LUTE 222-1, 222-2, some of the encoding elements 206-1, 206-2 may rely solely upon the set of shared encoding search tree lookup tables LUTE 232, or some of the encoding elements 206-1, 206-2 may rely upon a combination of their own subset of encoding search tree lookup tables LUTE 222-1, 222-2 and the set of shared encoding search tree lookup tables LUTE 232. While the embodiment of Figure 5a only includes two encoding elements 206-1, 206-2 and one set of shared encoding search tree lookup tables LUTE 232, the encoder block 200 may comprise a plurality of encoding elements and one or more sets of one or more shared encoding search tree lookup tables LUTE 232. Generally, it is advantageous to limit the number of lookup tables to minimize the size of the encoder block 200 or decoder block 300.
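Before continuing with the shared-table arrangements of Figures 5a to 5d, the table-driven traversal performed by encoding engine 223 and decoding engine 323 can be illustrated with a brief software sketch. The array-of-nodes table layout and the decode_one helper shown here are assumptions made purely for illustration; the application does not prescribe a particular table format.

```python
# A minimal sketch of a decoding search tree lookup table modelled as an array of
# nodes, each node holding either two child indices (internal node) or leaf-node
# contents. The decoding engine walks the tree one code bit at a time until a leaf
# is reached, then forwards the leaf-node contents to the output buffer.

# Hypothetical lookup table for one probability: ("node", left, right) or ("leaf", contents)
LUT_EXAMPLE = [
    ("node", 1, 2),    # root: code bit 0 -> node 1, code bit 1 -> node 2
    ("leaf", "101"),   # leaf-node contents (decoded phrase bits, possibly packed)
    ("node", 3, 4),
    ("leaf", "0"),
    ("leaf", "11"),
]

def decode_one(lut, code_bits):
    """Consume code bits, walking the tree from the root until a leaf is reached.
    Returns the leaf-node contents for the caller to route to an output buffer."""
    node = lut[0]
    bits = iter(code_bits)
    while node[0] == "node":
        branch = next(bits)          # the next code bit selects the child to follow
        node = lut[node[1 + branch]]
    return node[1]

print(decode_one(LUT_EXAMPLE, [1, 0]))   # -> "0"
print(decode_one(LUT_EXAMPLE, [0]))      # -> "101"
```

An encoding search tree is traversed in the complementary direction, matching input phrase bits against the tree and emitting the code bits that label the path from the root to the selected leaf.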
The advantages of reducing the size of the encoder block 200 or decoder block 300 must, however, be balanced against the availability of encoding elements 206-1 ... 206-d or decoding elements 306-1 ... 306-d to process a bit having a given probability. Figure 5c is a closeup view of the encoding element 206-1 and the shared encoding search tree lookup tables LUTE 232 from Figure 5a. As illustrated, the encoding elements 206-1, 206-2 access the shared encoding search tree lookup tables LUTE 232 through an input encoder crossbar switch 230 for connecting the encoding elements 206-1, 206-2 with the one or more shared encoding search tree lookup tables LUTE 232. An output encoder crossbar switch 231 connects the one or more shared encoding search tree lookup tables LUTE 232 with the encoding elements 206-1, 206-2 to communicate the values output from the shared encoding search tree lookup tables LUTE 232 for use by the encoding elements 206-1, 206-2. In the embodiment illustrated in Figure 5c, only four shared encoding search tree lookup tables LUTE(j-1) ... LUTE(j+2) are illustrated, though it is understood that there can be any number from 1 to some number greater than N, to allow for multiple copies of the same encoding search tree lookup table so that multiple encoding elements 206-1 ... 206-d can operate on the same probability at the same time. While it is possible for two of the encoding elements 206-1 ... 206-d to operate on the same encoding search tree lookup table at the same time, this is undesirable as it will likely slow the operation of the encoder unacceptably. Accordingly, the load balancing between encoding elements 206-1 ... 206-d preferably accounts for the encoding search tree lookup tables available to each encoding element 206-1 ... 206-d, to avoid assigning two different encoding elements to the same encoding search tree lookup table in the set of one or more shared encoding search tree lookup tables LUTE 232. Figure 5b is an expanded view of an embodiment of Figure 3b showing the decoding elements 306-1, 306-2 in simplified expanded form, where the decoding elements 306-1, 306-2 share one or more decoding search tree lookup tables LUTD 332. Reference numerals in the decoding elements 306-1, 306-2 include a suffix of -1 or -2 to identify them as separate components. As illustrated, the entropy decoder block 300 outputs leaf node contents. In the embodiment of Figure 5b, each of the decoding elements 306-1, 306-2 further includes its own subset of decoding search tree lookup tables LUTD 322-1, 322-2. In an alternate embodiment, the decoding elements 306-1, 306-2 may rely only upon a shared set of one or more decoding search tree lookup tables LUTD 332. In a further alternate embodiment, some of the decoding elements 306-1, 306-2 may rely solely upon their own subset of decoding search tree lookup tables LUTD 322-1, 322-2, some of the decoding elements 306-1, 306-2 may rely solely upon the set of shared decoding search tree lookup tables LUTD 332, or some of the decoding elements 306-1, 306-2 may rely upon a combination of their own subset of decoding search tree lookup tables LUTD 322-1, 322-2 and the set of shared decoding search tree lookup tables LUTD 332.
While the embodiment of Figure 5b only includes two decoding elements 306-1, 306-2 and one set of one or more shared decoding search tree lookup tables LUT_D 332, the decoder block 300 may comprise a plurality of decoding elements 306-1 ... 306-d and one or more sets of one or more shared decoding search tree lookup tables LUT_D 332.
Figure 5d is a closeup view of the decoding element 306-1 and the shared decoding search tree lookup tables LUT_D 332 from Figure 5b. As illustrated, the decoding elements 306-1, 306-2 access the shared decoding search tree lookup tables LUT_D 332 through an input decoder crossbar switch 330 for connecting the decoding elements 306-1, 306-2 with the one or more shared decoding search tree lookup tables LUT_D 332. An output decoder crossbar switch 331 connects the one or more shared decoding search tree lookup tables LUT_D 332 with the decoding elements 306-1, 306-2 to communicate the values output from the shared decoding search tree lookup tables LUT_D 332 for use by the decoding elements 306-1, 306-2.
In the embodiment illustrated in Figure 5d, only four shared decoding search tree lookup tables LUT_D(j-1) ... LUT_D(j+2) are illustrated, though it is understood that there can be any number from 1 to some number greater than N, to allow for multiple copies of the same lookup table so that multiple decoding elements 306-1 ... 306-d can operate on the same probability at the same time.
While it is possible for two of the decoding elements 306-1 ... 306-d to operate on the same decoding search tree lookup table at the same time, this is undesirable as it will likely slow the operation of the decoder block 300 unacceptably. Accordingly, the load balancing between decoding elements 306-1 ... 306-d preferably accounts for the decoding search tree lookup tables available to each decoding element 306-1 ... 306-d, to avoid assigning two different decoding elements to the same lookup table in the set of one or more shared lookup tables LUT_D 332.
Figure 6 is a simplified block diagram of an embodiment of a code word output controller and buffer 207 for assembling code words from the communication channels 210-1, 210-2 of the encoding elements 206-1, 206-2. The code word output controller and buffer 207 receives code word bits through the communication channels 210-1, 210-2 along with an indication of the associated probability for each set of code word bits. Control logic 240 within the code word output controller and buffer 207 may direct the code word bits to a corresponding code word output buffer 205-1 ... 205-N. In the embodiment illustrated, the communication channels 210-1 and 210-2 are connected to the N code word output buffers 205-1 ... 205-N that correspond to the N probabilities through a code word crossbar input switch 234. As phrase words are processed by the encoding elements 206-1 and 206-2, the processed code word bits are collected in the code word buffers 205-1 ... 205-N until a code word is completed.
While the code word output controller and buffer 207 is shown as being separate from the downstream DMA module 250 for clarity, it will be appreciated that in an embodiment the code word output controller and buffer 207 could also be incorporated into the DMA module 250. In this embodiment the DMA module 250 would incorporate the functionality described herein as being attributed to the code word output controller and buffer 207.
The code word output controller and buffer 207 may be considered part of the encoder block 200, which may also include the DMA module 250 in some embodiments. In an embodiment, the code word output controller and buffer 207 may include control logic 240, such as a counter, for counting the number of code words output by the encoding elements 206-1 206-2 associated with each of the N probabilities within a context. The control logic 240 may receive input from the communication channels 210-1 210-2 and increment the code word count for each probability as the code word and associated probability is detected as being input from the communication channels 210-1 210-2. Alternatively, for instance, the control logic 240 may receive input from each of the code word buffers 205-1 ... 205-N and increment the code word count for each probability as the corresponding code word buffer 205-1 ... 205-N receives a completed code word. The control logic 240 may provide a count directly to a downstream processing block, such as DMA module 250, or alternatively may provide the count as side band information. In an alternate embodiment, control logic 240 may further output the number of bits (or bytes) of each code word output by the encoding elements 206-1 206-2. The output of the control logic 240, whether the number of code words or, the number of bits (or bytes) of each code word, comprises load balancing information that may be used by a decoder to allocate code words to one of a plurality of decoding elements 306-1 ... 306-d. In an embodiment, the code word output controller and buffer 207 may further be operative to assemble the completed code words into work packages of approximately the same amount of processing work. Preferably each work package comprises a plurality of code words to minimize the amount of load balancing information required to be sent with the code words as output from the encoder 106. The work packages may be divided, for instance, by probability. This embodiment further reduces the amount of probability information required since one identifier of probability information is required for all code words in the work package. D Alternatively, a work package may include code words of different probability, provided that there is at least one decoding element operative to process all probabilities grouped in the work package and the additional probability information is included. In an embodiment, code words associated with probabilities in the shared decoding search tree lookup tables LUT 332 may be assembled together in a work package along with probability information. Identification of work packages may be included, for instance, by including terminators in the output bitstream. The inclusion of a terminator for each work package may, however, reduce the compression ratio achievable by the encoder. In an alternate preferred embodiment, identification of work packages may be included as a set of pointers, each pointer identifying a work package in a segment of the bitstream. The segment of the bitstream may be allocated, for instance, by all code words within a context identified by the context modeler 104. The pointer information may be transmitted as a header or footer to the segment. Alternatively all pointer information may be collected and included, for instance as a header, once all phrase words have been encoded. Typically, it is preferred to include the pointer information with segments of the bitstream to locate the pointer information with the code words when decoded. 
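As an informal illustration of the work-package idea described above (not the hardware or firmware of the embodiments), the Python sketch below groups completed code words by probability and records a per-package code word count as load balancing information; the package size and data layout are assumptions chosen for the example only:

from collections import defaultdict

def build_work_packages(completed_code_words, words_per_package=8):
    # completed_code_words: iterable of (probability_id, code_word) in completion order
    # (illustrative layout only; the embodiments may carry this information differently)
    pending = defaultdict(list)
    packages = []
    for prob, word in completed_code_words:
        pending[prob].append(word)
        if len(pending[prob]) == words_per_package:
            packages.append({'probability': prob,
                             'count': words_per_package,    # load balancing information
                             'code_words': pending.pop(prob)})
    for prob, words in pending.items():                     # flush partially filled packages
        packages.append({'probability': prob, 'count': len(words), 'code_words': words})
    return packages

print(build_work_packages([(3, '101'), (3, '0'), (7, '1101'), (3, '11')], words_per_package=2))

Grouping by probability, as here, means one probability identifier can describe a whole package, which is the compression-ratio advantage noted above.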
In the embodiment illustrated, control logic 240, for instance in the form of a state machine, takes as input an indication of the number of code words being output through communication channels 210-1 and 210-2. The control logic 240 counts the number of code words and outputs this number as load balancing information associated with that code word being stored in the code word buffers 205-1 ... 205-N. Alternatively, the control logic 240 may generate load balancing information based upon a characteristic of the code words, such as a number of bits (or bytes) in each code word and output the number of bits (or bytes) of the code words for each probability as load balancing information. In an alternate embodiment, not shown, the control logic 240 may store the load balancing information in a separate load balancing information buffer. This alternate storing embodiment may access the load balancing information buffer independent from the code word crossbar input switch 234 In an embodiment, the load balancing information may comprise header information stored with that code word in the associated code word buffer 205-1 ... 205-N. In this embodiment, the DMA module 250 may, for instance, assemble the bitstream by collecting the code words and associated probability and load balancing information from each code word buffer 205-1 ... 205-N. In an alternate embodiment, the load balancing information may be stored in a separate load balancing information buffer with pointer information to the associated code word or work package. In the alternate embodiment, the DMA module 250 may, for instance, assemble the bitstream by collecting the code words and probability information from the code word buffers 205-1 ... 205-N and combining each code word and probability information with its associated load balancing information stored in the load balancing information buffer. A downstream component, such as DMA module 250, may be used to generate a bitstream from the collected code words. In embodiments where one or more encoding elements 206-1 ... 206-d service the same associated probability, the code word output controller and buffer 207 is preferably further operative to distribute the output bits corresponding to the same associated probability from each of the one or more encoding elements 206-1 ... 206-d to the same code word buffer 205-1 ... 205-N to collect the bits to assemble the code word. While it is possible to include additional buffers, it is preferable to assemble the code word bits processed by different encoding elements 206-1 ... 206-d soon after the encoding process to simplify control and management of the data. Figure 7 is an embodiment of a computing device 700 including an encoder 710 for encoding data as described above. The computing device 700 further includes a processor 702 and memory 704, for execution of program code on the device 700 as well as management of the encoder 710, and preferably a communications system 708. In an embodiment device 700 may further comprise an in input interface such as an RCA jack, microphone or digital input, such as a digital camera, for inputting data content to be encoded and an output interface such as a screen, speaker or headphone jack for outputting content to a user of the device 700. In the embodiment illustrated an application 706 is resident in the memory 704, for instance for controlling the encoder 710. Figure 8 is an embodiment of a computing device 800 including a decoder 810 for decoding data as described above. 
The computing device 800 further includes a processor 802 and memory 804, for execution of program code on the device 800 as well as management of the decoder 810, and preferably a communications system 808. In an embodiment, device 800 may further comprise an input interface such as an RCA jack, microphone or digital input, such as a digital camera, for inputting data content, and an output interface such as a screen, speaker or headphone jack for outputting decoded content to a user of the device 800. In the embodiment illustrated, an application 806 is resident in the memory 804, for instance for controlling the decoder 810.
Figure 9 is an embodiment of a computing device 900 including both an encoder 910 and a decoder 912. The computing device 900 further includes a processor 902 and memory 904, for execution of program code on the device 900 as well as management of the encoder 910, and preferably a communications system 908. In an embodiment, device 900 may further comprise an input interface such as an RCA jack, microphone or digital input, such as a digital camera, for inputting data content to be encoded, and an output interface such as a screen, speaker or headphone jack for outputting decoded content to a user of the device 900. In the embodiment illustrated, an application 906 is resident in the memory 904, for instance for controlling the encoder 910.
Accordingly, in an embodiment such as Figure 9, a computing device may be provided for video conferencing including an entropy encoder and an entropy decoder as described above, the entropy encoder and entropy decoder allowing for efficient real-time compression and decompression of live audio and video.
Figure 10a is an expanded view of an alternate embodiment of encoder block 200 from Figure 5a. In the embodiment of Figure 10a, encoder block 200 comprises a shared encoding search tree lookup table LUT_E 232 and encoding elements 206-1, 206-2. Encoding elements 206-1, 206-2 each comprise an encoder input buffer 214-1, 214-2, entropy encoder 220-1, 220-2, encoder state machine 218-1, 218-2 and output buffer 224-1, 224-2. Each entropy encoder 220-1, 220-2 comprises an encoding engine 223-1, 223-2 and may comprise a subset of the encoding search tree lookup tables LUT_E 222-1, 222-2. The encoding search tree lookup tables LUT_E 222-1, 222-2 each comprise at least one search tree associated with one of the associated N probabilities. The use of both a shared encoding search tree lookup table LUT_E 232 and encoding search tree lookup tables LUT_E 222-1, 222-2 for each encoder 220-1, 220-2 is optional.
The shared encoding search tree lookup table LUT_E 232 is connected to the encoding elements 206-1, 206-2 by an input encoding crossbar switch 230 for receiving input and an output encoding crossbar switch 231 for providing lookup table values to the encoding elements 206-1, 206-2. The encoder state machine 218-1, 218-2 is operative to direct the input buffer 214-1, 214-2 to transfer phrase bits to the encoder 220-1, 220-2 for encoding when the output buffer 224-1, 224-2 is available. The encoder state machine 218-1, 218-2 is further operative to direct the encoder 220-1, 220-2 to select a table from the encoding search tree lookup tables available to that entropy encoder 220-1, 220-2, the table corresponding to the probability associated with the phrase word to be encoded. In an embodiment, the indication of the probability may be communicated to the encoding element 206-1, 206-2 and stored in the input buffer 214-1, 214-2 along with the phrase word.
Where the selected table belongs to the set of shared encoding search tree lookup tables LUT_E 232, the encoding element 206-1, 206-2 may access the selected table through the input encoder crossbar switch 230. Encoding engine 223 operates on the phrase bits by traversing the selected table to generate an output code word. Upon generating the output code word, the encoder 220 transfers the code word to a code word output buffer 224. When a downstream processing block transfers the code word from the code word output buffer 224, the encoder state machine 218 selects the next phrase stored in encoder input buffer 214-1 for encoding.
Figure 10b is an expanded view of decoder block 300 from Figure 2b. In the embodiment of Figure 10b, the decoder block 300 comprises a code input state machine 302, a demultiplexer 304, shared decoding search tree lookup tables LUT_D 332 and decoding elements 306-1, 306-2. The decoding elements 306-1, 306-2 each comprise a decoder input buffer 314-1, 314-2 in communication with a decoder state machine 318-1, 318-2 and an entropy decoder 320-1, 320-2. The decoder state machine 318-1, 318-2 is in communication with the decoder input buffer 314-1, 314-2, the decoder 320-1, 320-2 and an output buffer 326-1, 326-2. The decoder state machine 318-1, 318-2 is operative to read from the decoder input buffer 314-1, 314-2 an indication of the probability associated with the code word in the decoder input buffer 314-1, 314-2, and to direct the decoder 320-1, 320-2 to select a decoding search tree lookup table 322-1 ... 322-i corresponding to the probability associated with the code word.
The decoder 320-1, 320-2 comprises an entropy decoding engine 323-1, 323-2, a connection to the shared decoding search tree lookup tables LUT_D 332 and, in an embodiment, a subset of the decoding search tree lookup tables LUT_D 322-1, 322-2 available to that entropy decoding engine 323-1, 323-2. The decoding search tree lookup tables LUT_D 322-1, 322-2 each correspond to one of the associated probabilities serviced by the decoding element 306-1, 306-2. The decoding search tree lookup tables LUT_D 322-1 ... 322-i each comprise a search tree associated with an associated probability.
The use of both a shared decoding search tree lookup table LUT_D 332 and decoding search tree lookup tables LUT_D 322-1, 322-2 for each decoder 320-1, 320-2 is optional. Preferably, repetition of lookup tables between the decoding search tree lookup tables LUT_D 322-1, 322-2 is minimized, relying upon the shared decoding search tree lookup table LUT_D 332 for tables to be commonly accessed by different decoders 320-1, 320-2. It may, however, be desired to include multiple copies of a table for the situation where it is likely that multiple decoders 320-1, 320-2 will be operating using the same lookup table at the same time.
The shared decoding search tree lookup tables LUT_D 332 are connected to the decoding elements 306-1, 306-2 by an input decoding crossbar switch 330 for receiving input and an output decoding crossbar switch 331 for providing lookup table values to the decoding elements 306-1, 306-2. The decoder state machine 318-1, 318-2 is operative to direct the input buffer 314-1, 314-2 to transfer code bits to the decoder 320-1, 320-2 for decoding when the output buffer 326-1, 326-2 is available.
The decoder state machine 318-1 318-2 is further operative to direct the decoder 320-1 320-2 to select a table from the decoding search tree lookup tables available to that entropy decoder 320-1 320-2, the table corresponding to the probability associated with the code word to be decoded. In an embodiment the indication of the probability may be communicated to the decoding element 306-1 306-2 and stored in the input buffer 314-1 314-2 along with the code word. D Where the selected table belongs to the set of shared encoding search tree lookup tables LUT 332, the decoding element 306-1 306-2 may access the selected table through the input crossbar switch 230. The decoder state machine 318-1 318-2 further operative to distribute output bits from the decoder 320-1 320-2 to decoder output buffer 326-1 326-3. Decoding engine 323-1 323-2 operates on the code bits by traversing the selected decoding search tree lookup table to locate the leaf node contents. The leaf node contents comprising a portion, or a compressed portion, of the phrase being decoded from the input code word. Upon generating the leaf node contents, the decoder 320-1 320-2 transfers the leaf node contents to the decoder output buffer 326-1 326-2. The leaf node contents may be collected, assembled and decompressed in a downstream processing component to re-create the phrase word. Selection of probability trees and distribution of the probability trees across the encoding elements 206-1 ... 206-d or the decoding elements 306-1 ... 306-d may be optimized for a particular implementation. In general, the smaller the physical footprint of the components and the smaller the gate count, the faster the component. In an ideal parallel design each processing element will receive the same amount of data that takes the same amount of processing time to ensure that all processing elements are operating. In selecting the search trees, it is simplest to optimize the process if all trees are roughly equally probable and all trees are roughly the same size. This allows for a simple equal distribution of trees among the encoding elements 206-1 ... 206-d or the decoding elements 306-1 ... 306-d and allows each element to operate on its own subset of locally stored lookup tables. In the case where some probabilities are much more likely than others, one processing element could end up doing most of the calculations, leaving the other processing units idle waiting for a low probability code word/phrase word. To improve the performance of the system, if it is not possible to 'tune' the trees to roughly match their probabilities and size, it is possible to distribute the trees across the encoding elements 206-1 ... 206-d, decoding elements 306-1 ... 306-d and shared lookup tables 232 332 such that the sum of the probabilities serviced by each encoding element 206-1 ... 206-d or decoding element 306-1 ... 306-d is approximately equal. An additional factor is the size of the trees (lookup tables) which could have an effect on the speed if, for instance, one element 206-1 ... 206-d or decoding element 306-1 ... 306-d contained all of the large trees such that it ran slower than the other elements. Including all available lookup tables in all of the encoding elements 206-1 .. 206-d or decoding elements 306-1 ... 306-d is not the optimal choice, as it is preferred to minimize the number of encoding/decoding search tree lookup tables 222 322. 
Minimizing the number of lookup tables 222-322 reduces the physical size of each encoder 220 or decoder 320, which reduces the latency time of the hardware. The embodiments described above allows for multiple encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d to be available for processing a given phrase/code word, without the performance and cost penalty of including a complete duplicate set of lookup tables for each of the encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d. In an embodiment, all trees are available to all encoding elements 206-1 ... 206-d or decoding elements 306-1 ... 306-d. This allows for all elements to operate on any phrase word or code word. In general, it is desirable to make as many tables as possible available to multiple encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d to ensure that an element is available to carry out the processing of each phrase/code word. The cost of placing all tables in the shared lookup tables 232 332 is the cost of the input cross bar and output cross bar that must accommodate multiple ending/decoding elements 206-1 ... 206-d/306-1 ... 306-d accessing multiple tables. Depending upon the probability of each table, it may be helpful to allocate some tables to specific encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d. In general, it is likely to be more efficient to allocate the most probable lookup tables and the least probable lookup tables to specific encoding/decoding elements 206-1 ... 206-d/306-1 .. 306-d, and reserve the shared lookup tables 232 332 for the intermediate probable trees to allow flexibility in assigning an encoding/decoding element 206-1 ... 206-d/306-1 ... 306-d based upon availability at the time. In an alternate embodiment, the availability of some trees is limited to individual encoding elements 206-1 ... 206-d or decoding elements 306-1 ... 306-d and only some of the trees are available to multiple encoding elements 206-1 ... 206-d or decoding elements 306-1 ... 306-d. In order to determine the appropriate allocation of lookup tables to the encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d, it is necessary to assess the requirements of the encoder/decoder 200/300, along with the characteristics of the set of N search trees being implemented by that encoder/decoder 200/300. While it is desirable to select trees having a similar depth and a similar probability, in practice there may be variation in the probability of each of the trees and the depth between trees. Since deeper trees require additional steps to reach a decision or answer, deeper trees require more processing than shallow trees. Trees should be allocated to one of three categories and each tree assigned to one or more encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d. The three tree allocation categories are: dedicated (static allocation); shared (dynamic allocation); and, duplicated (may be static or dynamic or both).There are two competing goals for assessing a particular tree allocation: cost (power usage and silicon area); and, performance (maximum throughput through the decoder). The performance of the encoder/decoder is dependent upon the maximum clock rate that the design can tolerate, as well as how evenly the loads are balanced across the encoding/decoding engines 232/323. Load balancing is dependent upon the tree characteristics, the allocation of the trees, as well as the characteristics of the bitstream being encoded/decoded. 
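One simple way to pursue the even-load goal discussed above is a greedy assignment that always gives the next tree to the least-loaded element. The Python sketch below is an illustrative heuristic under assumed probabilities; it ignores tree size, duplication and the shared tables, which the embodiments also weigh:

def allocate_trees(tree_probabilities, d):
    # tree_probabilities: list of (tree_id, probability); d: number of elements
    # Greedy heuristic for illustration only, using assumed figures.
    elements = [{'trees': [], 'load': 0.0} for _ in range(d)]
    for tree_id, p in sorted(tree_probabilities, key=lambda t: t[1], reverse=True):
        lightest = min(elements, key=lambda e: e['load'])    # element with the least load so far
        lightest['trees'].append(tree_id)
        lightest['load'] += p
    return elements

trees = [('t0', 0.30), ('t1', 0.25), ('t2', 0.20), ('t3', 0.15), ('t4', 0.10)]
for element in allocate_trees(trees, d=2):
    print(element['trees'], round(element['load'], 2))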
Ideally, the encoder/decoder design performance is insensitive to the characteristics of the input bitstream, though in practice the performance will depend upon the characteristics of the input bitstream to some extent. Tree allocation reduces cost while maintaining performance by minimizing the number of lookup tables and connections between components while providing a design that allows the processing loads on all encoding/decoding engines 223/323 to be approximately equal (balanced). The goal is to minimize the times when one or more encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d are idle, waiting for input while another encoding/decoding element 206-1 ... 206-d/306-1 ... 306-d is working, because the idle encoding/decoding elements 206-1 ... 206-d/306-1 ... 306-d do not have the appropriate lookup table available for processing the current phrase/code words.
The following steps are provided for designing a parallel encoder/decoder 200/300 with dynamic load balancing:
i. Obtain the throughput target, including the target clock rate for the encoder/decoder block 200/300 (target performance), the power budget and the area budget;
ii. Analyze a variety of input bitstreams to evaluate how the probability trees are assigned by the context modeler (average and standard deviation of tree usage) over both the entire bitstream as well as small portions of the bitstream;
iii. Identify periods of distinct probability tree usage patterns (for instance, periods where small subsets of the probability trees are assigned exclusively, or periods where large sets of trees are assigned roughly evenly);
iv. Simulate the operation of an encoder/decoder block 200/300 with a single encoder/decoder (d=1) having all trees available and assess whether the single encoder/decoder (d=1) meets the throughput target; if not, continue below;
v. Increase the number of encoding/decoding elements by 1 (d=d+1) and simulate the operation of the encoder/decoder block 200/300 under multiple scenarios varying the number of dedicated, shared and duplicated trees to assess whether the encoder/decoder meets the throughput target; if not, repeat step v.;
vi. Evaluate the encoder/decoder design that meets the throughput target to assess the practical cost of the input and output crossbars using RTL code for each scenario of M shared lookup tables (write RTL for a d x M input crossbar switch and an M x d output crossbar switch to assess the performance cost of each of the scenarios); and,
vii. Balance the performance function from v. with the cost function from vi. to identify a design that achieves the highest performance across the widest range of input bitstreams within the cost budget.
In evaluating an encoder/decoder design, the RTL code provides timing and area estimates for a particular design. Running simulations with sample input bitstreams provides an estimated power consumption of the input and output crossbar switches for each value of M shared lookup tables. Similar criteria may be used to determine the optimum number of encoding elements 206-1 ... 206-d or decoding elements 306-1 ... 306-d, including the clock speed of the silicon, in order to meet the timing requirements of the encoder block 200 or decoder block 300.
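Before any RTL is written, a rough software proxy can help compare candidate allocations for step v. The Python sketch below counts per-element work for a modelled sequence of tree usages under an assumed static tree-to-element mapping; the usage weights, the mapping and the one-unit-of-work-per-tree-use cost model are assumptions for illustration only, not the actual simulation flow of the embodiments:

import random

def evaluate_allocation(tree_sequence, owner, d):
    # owner: dict tree_id -> element index; returns per-element work and a crude speedup proxy
    work = [0] * d
    for tree_id in tree_sequence:
        work[owner[tree_id]] += 1
    return work, len(tree_sequence) / max(work)

random.seed(1)
sample = random.choices(range(8), weights=[5, 4, 3, 3, 2, 1, 1, 1], k=1000)   # modelled bitstream (assumed weights)
owner = {0: 0, 1: 1, 2: 0, 3: 1, 4: 1, 5: 0, 6: 1, 7: 0}                      # assumed static allocation, d=2
print(evaluate_allocation(sample, owner, d=2))

A proxy like this only ranks allocations; the timing, area and power figures still come from the RTL evaluation described above.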
While use of shared lookup tables 232, 332 makes more lookup tables available to multiple encoding/decoding elements without duplicating tables, increasing the size of the input/output crossbar switches 230, 330 / 231, 331 to accommodate more shared tables (M) or more encoding/decoding elements (d) increases the performance and power cost of the crossbar switches. Accordingly, performance is generally maximized and cost minimized with the fewest dynamically allocated trees that meet the throughput target. The number of dynamically allocated trees may need to be higher than the absolute minimum to meet the throughput target for all anticipated input bitstreams. Certain adaptations and modifications of the described embodiments can be made. Therefore, the above discussed embodiments are considered to be illustrative and not restrictive.
If your chart data values are not so much different from each other, the chart created using this data will not help your audience to differentiate the series representing those values. Our sample data, shown in Figure 1 below explores how people of different age brackets choose their favorite colors. If you look closely at the data, you will realize that all values span between 285 and 365. So, it makes no sense to even discuss any value lower than 250 or above 370 for this data set. Figure 1: What's your favorite color? Tip: To quickly see the data for any chart, right-click the chart, and from the contextual menu, select the Edit Data option. Yet, when you create sample column and bar charts from this data using PowerPoint's defaults, you'll end up with charts akin to what you see in Figure 2, below. The chart on the top shows columns that are very similar in their heights -- there really is no contrast highlighting the findings of our data. It's the same story with the bar chart below where the bars look almost similar in length. The reason for lower contrast between values in these charts is that the Minimum and the Maximum values set for the Value Axis are calculated from a minimum value of zero -- this actually makes the differences in the column height or bar widths so much less pronounced -- and thus makes the chart much less effective as a visual medium. Note: The Value Axis is the Vertical axis on the left of a typical Column chart, or the Horizontal axis at the bottom for a Bar chart (see Figure 2, above). Learn more about Chart Axes. Fortunately, you can easily choose your own Maximum and Minimum values. Yes, we did show you how you can change the Minimum and Maximum values on the Value Axis -- this makes the difference between the various columns more pronounced. But chart purists differ -- and in many ways they are right because although your comparisons are pronounced, they are also not the truth -- at least not the whole truth. Many people in your audience may not see that your values do not begin from zero -- and some charts created this way may forego the values within the axis altogether. So what should you do? The answer is to strike a balance -- if you do alter the values to no longer start at zero, make that very apparent to your audience. Make sure you add a note to that effect on your slide, and even draw the attention of your audience to this fact. First of all, take a look at your chart data and note down the Minimum and Maximum values -- our chart data that you saw in Figure 1 earlier on this page has a minimum value of 285, and a maximum value of 365. Now decide the Minimum and Maximum values to be assigned for your Vertical axis. Considering our sample chart data, we decided to set our Minimum value to be 250 (less than 285) and the Maximum value to be 370 (more than 365). Now, select the Value Axis of the chart -- carefully right-click to access the contextual menu as shown in Figure 3. Within this contextual menu, chose the Format Axis option (refer to Figure 3 again). If you do not get the Format Axis option in the contextual menu, you may have right-clicked on another chart element -- make sure you then deselect anything in the chart, and then right-click on the Value Axis. This opens the Format Axis Task Pane , as shown in Figure 4. Make sure that the Axis Options button is selected as shown highlighted in blue within Figure 4. 
With the Axis Options button selected within the Format Axis Task Pane, locate the Minimum and Maximum options (highlighted in red within Figure 4, above). As you can see, the Minimum and Maximum options are set to Auto by default, which is indicated by the text Auto. Click within the box provided with the Minimum option, and then type a new Minimum value, as shown in Figure 5 (highlighted in red). For our sample chart, the Minimum value was changed to 250. This may change the Maximum value since it is still set to Auto option, as shown in Figure 5. The Reset button that shows up next to the Minimum option can be clicked upon if you want to get back the automatic value. Similarly, click within the box provided with the Maximum option, and then type a new Maximum value, as shown in Figure 6 (highlighted in red). For our chart example, the Maximum value was changed to 370. The Reset button that shows up next to the Maximum option can be clicked upon if you want to get back the automatic value. Now you can see that the Maximum and Minimum values on the Vertical axis have changed to the new values. If you compare the charts in Figure 7 with the charts in Figure 2 shown earlier on this page, you will find that the charts within Figure 7 provide a more pronounced contrast for the data.
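The same principle can be scripted outside PowerPoint; the short Python/matplotlib sketch below uses made-up values in the 285-365 range (the actual Figure 1 table is not reproduced here) to show how fixing the value-axis Minimum and Maximum makes the differences stand out. It illustrates the idea only, not a PowerPoint feature:

import matplotlib.pyplot as plt

ages = ['18-24', '25-34', '35-44', '45-54']
votes = [285, 310, 340, 365]                       # sample values, all between 285 and 365

fig, (ax_default, ax_fixed) = plt.subplots(1, 2, figsize=(8, 3))
ax_default.bar(ages, votes)                        # default: axis starts at 0, bars look almost equal
ax_fixed.bar(ages, votes)
ax_fixed.set_ylim(250, 370)                        # manual Minimum/Maximum, as in the Format Axis pane
ax_fixed.set_title('Axis does not start at zero')  # flag the truncated axis for the audience
plt.tight_layout()
plt.show()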
https://www.indezine.com/products/powerpoint/learn/chartsdiagrams/2013/set-min-max-values-ppt.html
Anesthesia Physics covers the very basics of applying the laws of physics in anesthesia. Some knowledge of physics is very important for anesthetists in their daily practice. The following are the important gas laws in Anesthesia Physics:
BOYLE'S LAW At a constant temperature, the volume of a gas is inversely proportional to the pressure.
CHARLES' LAW At a constant pressure, the volume of a gas is directly proportional to temperature.
GRAHAM'S LAW The rate of diffusion of a gas is inversely proportional to the square root of its molecular weight.
Partial Pressure Of Gas It is the pressure exerted by a gas in a gaseous mixture.
VAPOUR Vapour is the gaseous form of a liquid.
AVOGADRO NUMBER The number of molecules contained in one gram molecular weight of any compound. It is approximately 6.022 x 10 to the power of 23.
FLOW OF GASES In Anesthesia Physics the flow of gases may be either laminar or turbulent.
Laminar Laminar flow is produced when the gas passes through a straight tube, i.e. when the flow is smooth. Laminar flow is more dependent on viscosity. For laminar flow the Hagen-Poiseuille law applies, which states that flow rate is directly proportional to the pressure gradient and to the fourth power of the radius of the tube, and inversely proportional to viscosity and length. This relationship is an important consideration in Anesthesia Physics.
Turbulent Turbulent flow is produced if the flow rate is very high or if the gas passes through bends and constrictions. In this case the flow is rough. The Reynolds number must exceed about 2000 for turbulence to develop. Turbulent flow is more dependent on density.
VENTURI PRINCIPLE This is another important principle in Anesthesia Physics. When a fluid or gas passes through a tube of varying diameter, the pressure exerted by the fluid (lateral pressure) is minimum where velocity is maximum (pressure energy drops where kinetic energy increases: Bernoulli's law). This principle is widely utilized in anesthesia. By increasing the flow rate (velocity) through a narrow constriction we can create subatmospheric pressure. Equipment such as venturi masks, jet ventilation and suction apparatus works on this principle. These principles of Anesthesia Physics are very valuable to an anesthetist.
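As a rough worked illustration of these flow relationships (illustrative numbers only, not clinical guidance), the short Python sketch below computes a Reynolds number and a Hagen-Poiseuille laminar flow for assumed gas and tube values:

import math

def reynolds_number(density, velocity, diameter, viscosity):
    return density * velocity * diameter / viscosity

def poiseuille_flow(delta_p, radius, viscosity, length):
    # Laminar volumetric flow through a straight tube (Hagen-Poiseuille)
    return math.pi * delta_p * radius**4 / (8 * viscosity * length)

# Assumed values: an air-like gas at 1 m/s in an 8 mm tube
print(reynolds_number(density=1.2, velocity=1.0, diameter=0.008, viscosity=1.8e-5))   # ~533, below 2000 -> laminar
print(poiseuille_flow(delta_p=100.0, radius=0.004, viscosity=1.8e-5, length=1.0))     # flow in m^3/s for a 100 Pa gradient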
https://anesthesiageneral.com/anesthesia-physics/
The cereal-box sized GomX-4B – ESA’s biggest small CubeSat yet flown – has completed its mission for the Agency, testing out new miniaturized technologies including: intersatellite link communication with its GomX-4A twin, a hyperspectral imager, star tracker and butane-based propulsion system. “This multifaceted little mission has performed extremely well in flight,” says Roger Walker, overseeing ESA’s Technology CubeSats. “What its results demonstrate is that European CubeSats are now ready for operational deployment, as the first generation of CubeSat constellations in low Earth orbit for a variety of applications. “So our post-flight review has declared ESA’s in-orbit demonstration mission a success, but in fact GomX-4B’s story is far from over. GomSpace, the manufacturer of the satellite, continues to operate the nanosatellite, while GomSpace’s subsidiary in Luxembourg will be in charge of mission exploitation.” Much quicker to build and cheaper to launch than traditional satellites, ESA is making use of CubeSats based on standardised 10 cm boxes for testing new technologies in space. GomX-4B was ESA’s first six-unit CubeSat, double the size of its predecessor GomX-3, built for ESA by GomSpace in Aalborg, Denmark, also the builder of GomX-4A for the Danish Ministry of Defence. The CubeSat pair was launched on 2 February from Jiuquan, China. GomX-4B used its butane cold gas propulsion system to manoeuvre away from its twin, flying up to 4500 km away in a fixed geometry – a limit set by Earth’s curvature, and representative of planned CubeSat constellation spacing – to test intersatellite radio links allowing the rapid transfer of data from Earth between satellites and back to Earth again. Supplied by the Swedish branch of GomSpace, the propulsion system allows the CubeSat to adjust its orbital speed in a controlled manner by a total of 10 m/s – a speed equivalent to a kicked football. “Despite all our orbital manoeuvres, GomX-4B still has a lot of fuel,” comments Roger. “Of the original 130 grams of butane, only 13 grams were consumed during the mission.” In another first, GomX-4B acquired the first hyperspectral images of Earth from a CubeSat. Cosine Research in the Netherlands and its partners constructed the hand-sized HyperScout imager for ESA. This divides up the light it receives into many narrow, adjacent wavelengths, gathering a wealth of environmental data. The mission also proved that hyperspectral image processing can be performed aboard, to reduce the amount of data needing to be transmitted down to Earth. High-quality image acquisition requires good pointing accuracy and stability, so GomX-4B also trialled a miniaturised star tracker developed by Dutch CubeSat manufacturer ISIS to orient itself by its surrounding starfield, turning itself using fast-spinning reaction wheels. A final experimental payload gathered data on how orbital radiation affects computer memories. The large amount of flight data returned by the mission is being analysed as a source of lessons learnt to guide the development of follow-on CubeSat missions, starting with GomX-5 whose 12-unit design begins next month at GomSpace. The GomX missions are funded primarily by Denmark in the ‘Fly’ element of ESA’s General Support Technology Programme to develop and prove leading edge space technologies. 
ESA has a trio of Technology CubeSats from Belgium planned to fly during the new year: Qarman to gather atmospheric reentry data, Simba to monitor Earth’s radiation budget and Picasso to monitor the troposphere and stratosphere.
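As a back-of-the-envelope check on the propulsion figures (the spacecraft mass and specific impulse below are assumptions, not values from the article), the Tsiolkovsky rocket equation gives a delta-v in the same ballpark as the roughly 10 m/s quoted:

import math

def delta_v(isp_s, wet_mass_kg, dry_mass_kg, g0=9.81):
    # Tsiolkovsky rocket equation
    return isp_s * g0 * math.log(wet_mass_kg / dry_mass_kg)

# Assumed: ~8 kg six-unit CubeSat, 130 g of butane, cold-gas Isp of roughly 70 s
print(round(delta_v(70, 8.0, 8.0 - 0.130), 1))   # ~11 m/s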
https://www.designworldonline.com/mission-accomplished-for-esas-butane-propelled-cubesat/
Some stamps are pretty much designed to be used in one way . . . but do you have to confine yourself to its intended purpose? When you are choosing stamps, or even later when you get your new acquisition home, take the time to really explore its uses. You may be surprised at how many ways you can use it. I purchased this Magenta stamp a few days ago. The stamp’s price sticker labels it as “Cedar Branch.” I blogged about using it to create the look of “frost” on a stamped window in my January 30th post. Obviously, the stamp was designed to represent a tree branch, but let’s look at seven different ways that we might use this deceptively simple image. Do a Test Print With Every New Stamp When you get a new stamp, do a test print by stamping the image on a piece of scrap paper to look at the bones of the image. Because so many times we stamp images in black, this is a reasonable way to start, but you may decide that starting with another color makes more sense in some cases. For example, here did test stamping in both black and dark green because I thought it likely I might particularly stamp this image in these colors. This is, by the way, also a good time to look for problems with a stamp, like areas that don’t print well or where you get unwanted edges. In this particular case, I was getting some edge prints. This type of problem varies by how you ink the stamp and how firmly you press the stamp onto the surface. In this case for example, repeated stampings didn’t always give me the edge prints. But I knew that this would bug me, so I paused and did some quick stamp surgery to get rid of the edges. If you decide to cut away areas on a mounted stamp, you need to be very, very careful. You don’t want to make the problem worse (or even ruin the stamp mounting) by undercutting the raised parts of the image that will stamp. To avoid this, I try not too get too close to the raised areas and use a very sharp craft knife to repeatedly cut straight down into the rubber/cushion rather than running the knife horizontally. Then once the edges of the area I want to cut away are cut, I carefully slide the tip of the craft knife under just that little bit of cushion to remove the rubber/cushion bit from the stamp. (How difficult this is, may depend on the aggressiveness of the adhesive used to mount the stamp to the wood mount.) Then I do another test print to be sure I’ve fixed what bugs me and that I now have a clean print. 1) Stamp in One Corner This particular image is clearly meant to be used as a corner stamp. While the cedar branch in the middle of the stamp draws your attention, notice that two of the edges are hard edges, so in most cases, it won’t look right stamped in the middle of a card. So let’s stamp it in the corner of a card. But look, it could be stamped in the upper left corner . . . Or the upper right corner. Notice that this stamp is not symmetrical. One of the hard edges is longer than the other. This means that the image looks subtly different and takes up a little bit different space if you stamp it in the upper right corner rather than the upper left. Or you could stamp it the lower right corner . . . Or the lower left corner. This particular image design is flexible enough that it could be used in any of the four corners of a typically rectangular card and it could work. Notice, by the way, that the upper left corner and the lower right corner are really the same card turned 180 degrees and the upper right and lower left are also the same card turned 180 degrees. 
On this second stamping, I didn’t quite hit the corner, so would need to trim the card just a little. To avoid this problem in the future, I could either use a stamp positioner or place the ink stamped rubber-side-up on the table and lower the card surface face-down onto the inked stamp. Or I could simply be sure to slightly go over the cardstock edges when I stamp in a corner so that I’m sure that the image stamps right up to and over the edges. 2) Medallion Some corner stamps can be stamped right next to each other to form a medallion type design. Because this image isn’t symmetrical, it doesn’t really work this way . . . But then again, if you left a little space between the stamped images instead of butting them right up against each other, your could create a kind of tiled medallion that is interesting and might and might work for some situations. Here again, because I’m just doing test stampings, the images are not exactly lined up. To get evenly spaced images on a real card, you would probably want to either use a stamp positioner or actually stamp the corner images on separate squares of cardstock and then layer them in place on the card. That might let you play around with making the squares on different colored paper too. 3) Stamp in Four Corners as a Frame You could even use it in all four corners of a card to create a frame. That works too. For this particular stamp, if I want the corners to be separate and not overlapping, I need to make the card a little bigger. This particular panel is 7″ x 5 1/2″. Of course, if I only stamp the tip of the image in each corner, I can go for a more subtle frame and go with a smaller (4 1/4″ x 5 1/2″) panel. 4) Stamp in Four Corners as a Background But look what happens if you stamp the four corners very close together and use a lighter color ink. Now instead of a frame, you’ve used the same stamp to create a background pattern. For this image, in order to move the corners close to each other, the panel is closer to 4″ x 4″. You can also experiment with overlapping the images in different colors for a different background look. This is also approximately 4″ x 4″. I can picture another layer or maybe a cut out image layered on top of this in the center, making this background more of a frame layer. 5) Stamp Image Partially as a Border How about a border? At first glace, you might think that the stamp’s two hard edges would prevent the stamp from being used as a clean border. And they do, unless you only use the portion of the stamp that doesn’t include the hard edges. By just inking that portion and stamping along the edge of the card, you’ve got a clean looking border. With the border on the top, it might give you the impression of looking through the trees with a scene stamped below. Or you might rotate it 180 degrees and the border, now on the bottom looks like grasses along the bottom of the card. You might stamp them in shades of brown for a late fall card. You might play around with how much of the image to include and how close together to place the corner images. Again, here they are stamped along the top but this time they are stamped very close together on the short edge of the 5 1/2″ x 4 1/4″ panel . . . But it could also be flipped around to be along the bottom short edge. Imagine a full moon stamped above the branches here. 6) Stamp in a Different Color to Represent Something Else And then of course, there is my use of the stamp the other day as “frost” by stamping it in white on a blue background inside a stamped window. 
7) Stamp Inside Another Image You could of course, stamp the image in dark green inside the same window and let it represent the cedar branch it was designed to represent! Here again, because the image’s hard edges are hidden by the window frame, it doesn’t have to be a corner stamp. There may well be other ways to use this stamp. The way to find out, with this or any stamp, is to ink it up and play around with it on scrap paper to see where it takes you.
https://vampstampnews.com/blog/one-stamp-seven-ways-plus-stamp-surgery/
How to Knit a Tiny Christmas Tree Ornament This week's knitting video tutorial is brought to us by eHow and features knitting expert Ava Lynne Green from Terri's Yarns & Crafts, who shares her expert knitting tips and tricks on eHow's video channel on YouTube. In this particular video Ava Lynne shows us how to knit a tiny Christmas tree, ideal for use as a holiday ornament or decoration. Knitting a tiny Christmas tree is a great way to make a homemade ornament for your regular-sized tree. Knit a tiny Christmas tree with help from an experienced crafts professional in this free video clip! Watch the entire video below for full instructions. This video tutorial was created by eHow. I encourage you to watch more of their videos, and subscribe to their channel on YouTube by clicking here.
https://www.knittingwomen.com/how-to-knit-a-tiny-christmas-tree-ornament/
Time series analysis and forecast by SSA. The program is based on the powerful model-free method of time series analysis Caterpillar (another name is SSA - Singular Spectrum Analysis). It combines advantages of other methods with simplicity of visual control aids. The basic Caterpillar-SSA algorithm for analyzing one-dimensional time series consists of transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique); Singular Value Decomposition of the trajectory matrix; reconstruction of the original time series based on a number of selected eigenvectors. The result of the Caterpillar-SSA processing is a natural decomposition of the time series into several components, which can often be identified as trends, seasonalities and other oscillatory series, or noise components. The method can be naturally extended to forecasting time series and its components, processing multidimensional time series and to change-point detection. The "Caterpillar" ideas were independently developed in Russia (St. Petersburg, Moscow) and also in UK and USA (under the name of SSA; that is, Singular Spectrum Analysis). The new book "Analysis of Time Series Structure: SSA and Related Techniques", authors are N. Golyandina, V. Nekrutkin and A. Zhigljavsky, provides a careful, lucid description of SSA general theory and methodology (in English, Chapman&Hall/CRC, see http://www.gistatgroup.com/cat/). The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and nonstationary, almost deterministic and noisy time series are to be analyzed. We are sure, that in a near future "Caterpillar"-like methods will rank among the base methods of time series analysis and will be included in standard statistical software.
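As a minimal illustration of the basic algorithm described above (delay embedding, singular value decomposition, reconstruction), here is a toy Python/NumPy sketch; it is not the CaterpillarSSA program itself, and the window length and grouping are chosen arbitrarily:

import numpy as np

def ssa_components(series, window):
    x = np.asarray(series, dtype=float)
    n, k = len(x), len(x) - window + 1
    # 1) Delay embedding: trajectory (Hankel) matrix with lagged windows as columns
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    # 2) Singular Value Decomposition of the trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # 3) Reconstruct each elementary series by anti-diagonal (Hankel) averaging
    comps = []
    for i in range(len(s)):
        rank1 = s[i] * np.outer(u[:, i], vt[i])
        comps.append(np.array([rank1[::-1].diagonal(t).mean() for t in range(-window + 1, k)]))
    return comps

t = np.arange(240)
series = 0.02 * t + np.sin(2 * np.pi * t / 12) + np.random.normal(0, 0.3, t.size)  # toy trend + seasonality + noise
comps = ssa_components(series, window=60)
trend_and_season = sum(comps[:3])   # grouping step: leading components taken as trend plus seasonality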
http://www.softpicks.net/software/Education/Teaching-Training-Tools/CaterpillarSSA-174482.htm
Seems simple enough. To be clear, we’re not talking about sleep deprivation as an active form of psychological torture here. We’re talking about lost, poor sleep and the hazards it wages against our bodies and minds as a result. What the NHLBI introduces is actually a handful of terms—sleep deprivation, sleep deficiency, poor sleep—which more or less reference the same concern: a person who is sleep deprived is not getting enough sleep. Good question. You can’t know if you’re not getting enough if you don’t know how much you should be getting, to begin with! Everyone is going to vary a little on the scale of sleep need. But the National Sleep Foundation (NSF)‘s most recent recommendations for adequate sleep provide ranges for different age groups. To clarify, these ranges are for consolidated periods of sleep (except in the case of children under the age of 6, who usually have multiple sleep periods during the day and night). Consolidated sleep means hours of sleep achieved all at once. This is important. One cannot expect to just sleep 1 hour at a time, and do that 8 separate times during the day, and expect to achieve all the benefits of adequate sleep. What researchers understand about sleep duration is influenced by what they also understand about the architecture of sleep. Our sleep stages, phasing, and cycles are part of a complex biological process that requires an uninterrupted stretch of time to be fulfilled, night after night. While there are outliers who believe it’s healthy to sleep 2 to 3 hours at a time or think it’s harmless to otherwise break up sleep cycles, their theories aren’t generally supported by established science. Conventional research, on the other hand, generally agrees that human beings should strive to achieve most, if not all, of their sleep in one consolidated period for optimal health. Some people may get their 7 to 9 hours of sleep every night but still awaken feeling exhausted, in pain, ill, and unrefreshed. The long list of potential causes for lost sleep points to instances where things like illness or injury or untreated health conditions may be interrupting the opportunity for the brain and body to derive the biological benefits of consolidated sleep. And here’s the thing: You might not even know it. People who are sleep deprived are typically unaware of the impairment they experience due to loss of sleep. When you don’t get adequate sleep over long periods of time (days, weeks, months, even years), you are accruing what is known as sleep debt. Take another look at all the causes for lost sleep listed above. Who hasn’t encountered at least some of these situations? And who among us continues to deal with these situations in a daily basis? It’s not surprising that 36.5 percent of working Americans are estimated to be sleep deficient (NIOSH, 2019) (Note: the National Safety Council statistics suggest a higher number, at 43 percent). If working Americans are not getting enough sleep—and it’s related to workplace demands alone—it’s a sure sign these people are regularly losing sleep night after night… and adding to sleep debt. Can you pay off sleep debt? Maybe, maybe not: Some research seems to suggest you can, but it will take lots of planning time and you will really have to work at it. Seriously, it would be better to avoid accumulating sleep debt entirely. In the working population alone, sleep debt can run afoul of public health and safety. 
Critical job errors, drowsy driving, absenteeism, lower productivity, equipment accidents, higher workplace dissatisfaction, and chronic illness are some of the long-term outcomes of worker-accrued sleep debt. The National Safety Council now offers a calculator which can measure the actual cost of workplace fatigue, unmasking sleep debt as a potential root cause. But let’s take it a step further and look at the Princess Cruises “relaxation report” from 2017 (Wakefield Research). Granted, their research is meant to buttress their own marketing efforts to get us all to go on a cruise, but the statistics are still worrisome: Approximately half (49 percent) of Americans surveyed reported they aren’t getting the sleep they need. And new parents may have it worst of all: new research suggests they can expect to experience disturbed sleep (with sleep deprivation and sleep debt as natural consequences) for as long as 6 years following the birth of a child. Heart failure, heart disease, obesity, diabetes, insulin resistance, high blood pressure, stroke, deficient immune system, dementia. Depression, suicide, anxiety, mood swings, psychosis, bipolar disorder. Workplace fatigue, daytime sleepiness, inability to do one’s job safely and up to standard, learning disabilities, memory problems, clumsiness, relationship problems, high-risk behaviors and their consequences (addiction, criminal behavior, unwanted pregnancy), reduced fertility, accelerated aging, reduced libido. Motor vehicle accidents, critical workplace errors, heavy equipment accidents, sentinel events (plane crashes, train derailments, oil spills, other manmade disasters).
https://sleepyheadcentral.com/education/sleep-deprivation-debt/
Brainstorming conjures tons of ideas all at once. A brainstorm is a distinct segment of time when you amp up the generative part of your brain and turn down the evaluative part. The intention is to leverage the collective thinking of the group. Brainstorming can be used throughout the design process: to plan empathy work, to assess products and services, and to come up with design solutions. How to brainstorm Brainstorming technique Your team’s sole goal is to generate as many ideas as possible, without judgment. Gather in front of a whiteboard and spend 15 to 30 minutes in high engagement “brainstorm mode.” Be sure to capture every idea, regardless of your feelings about them. You can either assign a scribe to capture ideas as they’re called out or go all-in, each person shares their ideas out loud and puts them on the board themself. Either way, use post-its and stick them up quickly. You can use How Might We questions to launch a brainstorm. “How might we give each shopper a personal checkout experience?” Brainstorming Related Techniques:
https://pdmethods.com/brainstorming/
Creativity is the ability or aptitude by which individuals or groups generate or conceive new ideas, or adapt existing concepts into new principles. Many ideas have led to successful businesses and new innovations. Ideas may lead to a new solution to a problem, a new business model, or a new method or product concept. By stimulating the creative process within individuals, new ideas and concepts can be generated that can lead to the achievement of new innovations. For example, Dyson developed a range of new and innovative products in traditional competitive markets (such as vacuum cleaners, fans and hand dryers) and acquired significant market share. The creative process was first described by Wallas back in 1926. He proposed a systematic model that usually follows a sequence of phases: preparation; incubation; illumination; and verification. However, we find today that many people do not have the interest or inclination to develop their creative thinking capacity. They feel more comfortable with their analytical or logical thinking. Creative thinking (or lateral thinking) provides the means to generate new ideas and identify new opportunities. However, once ideas are generated they must be captured, screened, evaluated and finally implemented, which requires significant effort. This is reflected in the statement made by Thomas Edison that "Genius is one per cent inspiration and ninety-nine per cent perspiration." So what can we do to develop our creative abilities to allow us to think outside the box and come up with new ideas that will contribute to the competitive advantage of an organisation? To enhance your creative spirit and move out of your existing comfort zone, the following activities are suggested:
- Connect with people to develop creative communities and social networks, for example through LinkedIn, Facebook or Twitter
- Take control of your workspace and create an environment that is conducive to creative thinking
- Learn new creativity tools and techniques through workshops, courses and online resources
- Expand your mind through reading articles and publications on creativity and innovation
- Engage in fun and humour through games and puzzles
- Stimulate your artistic flair through drawing, painting or music
- Visit inspiring places such as museums, art galleries and locations of interest around the world, for example the Louvre, Tuscany or the Great Pyramids
- Understand and utilise the power of your subconscious mind through visualisation and enhancing your Alpha state
- Think on paper by keeping a journal of ideas and thoughts - write your problems down on paper and try to solve them using creative techniques
- Convert ideas into action - implement your ideas to create value
For more information on how to stimulate creative thinking to solve problems in your organisation, please contact John Kapeleris at [email protected] or phone (07) 3364 0700.
http://www.ausicom.com/news-593-how-to-enhance-your-creative-spirit.html
McLaren’s young driver Kevin Magnussen was quickest on the first day of this week’s test at Silverstone. Magnussen, who is currently second in the Formula Renault 3.5 championship, posted a best time of 1’33.602 with 15 minutes of running remaining. But American driver Alexander Rossi was unable to do a quick timed lap at the end of the session as the hydraulics had failed on his Caterham. “We were still able to complete most of the day’s plan,” said Rossi. “After the installation lap we were straight into the program, running through a number of test items in both the morning and afternoon sessions that will help the team in the coming races.” Rossi will hand his car over to Will Stevens for tomorrow’s test. Daniel Ricciardo was originally scheduled to drive for Red Bull today, but his run has been postponed to tomorrow after the team decided to reorganize its running plan.
https://motorsports.nbcsports.com/2013/07/17/magnussen-on-top-as-rossi-hits-trouble-in-test/
Q: How to take out symbols from string with regex
I am trying to extract some useful symbols from strings, using regex and Python 3.4. For example, I need to extract any lowercase letter + any uppercase letter + any digit. The order is not important.
'adkkeEdkj$4' --> 'aE4'
'4jdkg5UU' --> 'jU4'
Or, maybe, a list of the symbols, e.g.:
'adkkeEdkj$4' --> ['a', 'E', 4]
'4jdkg5UU' --> ['j', 'U', 4]
I know that it's possible to match them using r'(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])'. Is it possible to get them using regex?

A: You can get those values by using capturing groups in the look-aheads you have:

    import re
    p = re.compile('^(?=[^a-z]*([a-z]))(?=[^A-Z]*([A-Z]))(?=[^0-9]*([0-9]))', re.MULTILINE)
    test_str = "adkkeEdkj$4\n4jdkg5UU"
    print(re.findall(p, test_str))

The output:

    [('a', 'E', '4'), ('j', 'U', '4')]

Note: the look-aheads have been edited to include contrast classes for better performance, and the ^ anchor is important here, too.
Mechanical pencil has been my buddy for the past two years. It seemed pretty hassle to use it until I learned how to clear a mechanical pencil lead jam from my fellow pal Sam. Sam is a great pencil user, and I made a habit of him. He saw me struggling with my mechanical pencil and changing them frequently when I met him. He took the pencil and cleared it out with a piece of a pin right in front of me. I was shocked at not knowing it before. This resulted in me knowing the complete process of clearing the pencil up. And not only did it increase my productivity, but it also saved a lot of bucks throughout the year. Later on, I tried with an eraser insertion and other sticks to clear them out. Most of them resulted in my favor, while some didn’t. Here is a complete guide that’ll show you everything regarding cleaning the inside of your mechanical pencil from the jammed lead you have been storing for a while. Important Mechanical pencil lead jam is typical when you are using the pencil frequently. From the students at school to the people at work, all of them have good access to the device. As you use it, you’ll find the broken lead getting stuck inside. Also, there will be other components like dust that’ll block your regular usage of the pencil. Rather than pondering upon any issue on it and throwing the pen out, it is essential to take measures to resolve them. The device is meant for, which is to use for a long time. Why Does Pencil Jam Happen? Lack of proper attention. Yes, this is the clear-cut answer to the most asked question. You cannot expect your mechanical pencil to be 100% accurate and conscious. It won’t exert out the broken lid it gets from inside. That’s why you should be careful in maintaining the cleanliness of the inside of the mechanical pencil. While using the mechanical pencil, you must have seen a lot of leads breaking and remaining inside. The same happened to me a lot of times. Without paying any heed to it, I used the pencil with a new lead inserted in it. And one day, it turned out to be a curse for me. I couldn’t get any more lead coming out from the head of the pencil. This went on till I found the massacre I did to myself. Saw a bunch of broken small leads hampering the insertion. I couldn’t see it fully; otherwise, I could’ve got them out by my finger. It was felt, and I had to find a plan to resolve the issue. Things You Can Use to Clear the Mechanical Pencil Before you jump on to clear the mechanical pencil, keep certain things on the journey. You’ll need them. It’s not essential to have all of them, but a few things like a pin or rod are necessary. 1. Cleaning Rod/Pin This is a must. Without it, you won’t be able to take out the lead from inside the pencil. It works as the grabber and exercises to get them out. Moreover, the pin is the easiest to get inside the pen. It’s slim, easy to work with, and creates less hassle. 2. Eraser Refill An eraser refill might help you get the jammed leads out from your pencil. They are available separately in the market and work correctly when you implement them. The way it works is simple. Replace the finished eraser and insert the new one inside. Once it goes inside, the leads or other components from inside go out of the pencil. But the process is not applicable during urgency. You have to buy one eraser refill and then perform it. Just raises the cost and gives more hassle in buying it. But the outcome is satisfactory. 3. Twist Tile This is an alternative to the pin or rod you’ll be using. 
Most of the time, you won’t get the expected result, but it works. The twist tile is reasonably available at home. While choosing the tile, keep in mind that it has to be thinner than the diameter of the pencil. Otherwise, it won’t go inside and might break the head of the pencil. How to Clear a Mechanical Pencil Lead Jam You need to clear your jammed mechanical pencil; it’s time to work on it. And that should be done sequentially. The way I have cleaned my mechanical pencil, in fact doing it, will be discussed in this section of the article. So, let’s jump on it. Step 1: Inspecting the Jam It is not evident that you’ll know the jam by default. Also, the way the lead is jammed inside your pen remains unknown until you work on it. Like any mere fixation, it is your sole duty to inspect the lead jam. As a normal eye cannot inspect it, spare some time and shake the pencil well. You’ll see some leads coming out of it. At times, the whole lead gets out; that’s when the amount is too low. But here, we are talking about severe cleaning. After shaking the pencil well, you are done loosening or warming the device up for the next steps. Step 2: Open up the Nose A nose covers the place of lead jam. It covers the lead and the whole lead stand from external impact. The jam mainly happens when on the nose inside. They keep on filling up until they are cleared. The longer they keep on jamming, the more difficult it is to clear them up. Watch out for the durability of the nose. At times, your finger pressure might break the small thing. So, you have to open the nose of the pencil carefully. Step 3: Insert the Cleaning Pin or Rod After you open up the nose of the pencil, it is time to generate the main clearing component inside, i.e., the pin or rod. Pin or rod, because you can use any of them at best available to you. So, by now, you already know where the problem is. The only way of exerting an internal push inside the pencil can be possible by the clutch of the pencil. While you are pressing and holding it, the inside part of the rod creates a possible open path inside for the flow of the lead. As the force is created inside, you are getting an extra privilege to get the device’s lead out. Holding the button, insert the rod or pin inside the clutch hole. Keep inserting the rod in and out until you feel like the inside part is cleared wholly. Step 4: Inspecting the After Cleaning As you have performed the exercise with the sharp thing inside the clutch, you must get a lot of lead coming out from it. As said, hardly you’ll get the inside part 100% cleared. There’ll still be some impurities lying inside, causing you discomfort. You have to re-perform the action again and clear the rest of the lead inside. Repeat the process until you are entirely done with clearing the pencil. It might take you 2-3 times to perform the work, but it is worth it. Step 5: Fixing the Nose Back to the Pen After cleaning the pencil, put the nose back to its usual place. This shouldn’t take much time. Screw it back to its position and use the pencil usually. Keep in mind that the lid position is assured perfectly. Any barrier or hampering should be observed immediately, and action should be taken accordingly. If needed, re-clear the inside part again. Caution It is a must to maintain proper caution for any work. Be it clearing the lead from inside the mechanical pencil to fixing the nose back to it; obviously, you should be careful to open it up. 
Below are some points I resulted in clearing my mechanical pencils all over the year. - Be careful while you are opening the nose of the pencil. It is sometimes made up of plastic and can break with a small force implemented on it. - The same goes while inserting the nose on the pencil. Observe the alignment so that you don’t mess things up. - Don’t push the rod or pin you insert inside the pencil too much. If it penetrates from the other side, you are the one that’ll lose the whole mechanical pencil. - Don’t over-press the clutch too much. If you break it, you won’t be getting a separate one in the market to replace and use. FAQs Q: Can you sharpen mechanical pencil lead? You can never sharpen a mechanical pencil lead. It comes sharp by default, and they are made to be honed. Even when you break the lead, you’ll find them in the same state as before but less in size. Q: How long do mechanical pencils last? It depends. There is no fixed period for the pencil to last. If you can use it properly and care for it all the time, it will last for 2-3 years. But for a regular user like me, it hardly lasts for a year. I don’t get time to care about it properly, which results in the outcome. But with its serving, I am way more satisfied. Q: How do you dissolve pencil lead? It is easy to dissolve pencil lead. Let’s learn it with some simple steps: Step 1: Erase the excessive of the pencil lead. Use a good eraser to do so. Step 2: Make a solution to the soap and handwash. Use cold water in it. Step 3: Take a cleaning cloth and soak it in the solution. Step 4: Wipe it down on the stain. Repeat it until you remove the whole of it. Step 5: After the liquid is absorbed, blot the spot with ammonia. Q: Why does my mechanical pencil lead keeps breaking? Your mechanical lead keeps on breaking because of the following reasons: - The excess pressure from the clutch to the lead. - Using more lead than necessary (oversize lead). - Using thin and cheap lead. - Applying more pressure to the pencil while writing with it. Q: How do you fix a mechanical pencil lead jam? It simple. All you have to do is to clean the inside of the pencil. You have to open up the nose where the broken leads are stored. A thin stick-like thing will assist you in reaching inside and getting the leads out. Twist tile, thin rod, even pin can satisfy the work. Even pushing the other end with a new eraser sometimes brings the jammed lead out. Q: Can I use a piece of lead to unjam a pencil? You can use a piece of lead to unjam a pencil. But for that, you must ensure that the lead is strong enough to sustain a little more pressure than most others. The rest should be done like a pin or rod. Insert the long lead through the head and convincingly get other small broken pieces out. Final Words All these discussions, just to see you throwing your jammed mechanical pen away? Not. I am sure you are fully aware and know how to clear a mechanical pencil lead jam by now. These steps are implemented not only by me but also by my cohorts and co-workers. We have been dealing with the pencil every day in the same manner, and the result speaks for itself. So, stay with us and nail your work with the mechanical pencil till we bring more exciting topics relating to it. Related Content:
https://choosemarker.com/how-to-clear-mechanical-pencil-lead-jam/
Yesterday I slept in. When I woke up, sans alarm clock, I walked to the bakery down the street, picked up some croissants, made a latte at home, and devoured the latest issue of Vogue and that entire almond croissant. It was a good morning. Why can't Monday mornings be like Sunday mornings? Images: Short & Sweet Blog. That Sunday sounds like heaven. Happy Monday! that sure looks good!! I perused that Vogue while at the hair Salon the other day. Sundays are so much better than Mondays! I hope your Monday is half as divine as your Sunday. Sounds like a perfect sunday morning. And yes...why can't mondays be like sundays? Weekend mornings are my favorites. A good coffee, a book or magazine and my cozy couch is all I need. You're adorable! Sounds like the perfect Sunday morning to me. Who wouldn't love a perfect croissant? Sounds like an amazing day to me, and you are right why aren't Mondays like that? YES! Why can't Monday mornings be like Sunday mornings?!
http://www.theshortandthesweetofit.com/2014/01/sunday-monday-morning.html
Some notes from this course offered by Kaggle, for my poor memory.
Cross-validation
Cross-validation gives a more accurate measure of model quality, which is especially important if you are making a lot of modeling decisions. Use pipelines when doing cross-validation; you will save a lot of time.
XGBoost = gradient boosting
We refer to the random forest method as an "ensemble method". By definition, ensemble methods combine the predictions of several models (e.g., several trees, in the case of random forests). Gradient boosting is an ensemble method too, one that goes through cycles to iteratively add models to an ensemble.
- A naive model is built as the initial prediction that will be used as the basis for the following predictions.
- Make predictions: we use the first prediction, or the iterative predictions, to generate predictions for each observation in the dataset. These predictions are added to the ensemble.
- Calculate the loss: we track the value returned by a loss function (like mean absolute error, MAE).
- Train a new model: we use the loss function to fit a new model that will be added to the ensemble. Specifically, we determine model parameters so that adding this new model to the ensemble will reduce the loss. (Side note: the "gradient" in "gradient boosting" refers to the fact that we use gradient descent on the loss function to determine the parameters of this new model.)
- Add the new model to the ensemble, and repeat again and again.
Data leakage
Data leakage happens when your training data contains information about the target, but similar data will not be available when the model is used for prediction. This leads you to obtain high performance on the training set (and possibly even the validation data), but low performance in production (and a lot of frustration).
Types of data leakage:
- Target leakage: occurs when your predictors include data that will not be available at the time you make predictions.
- Train-test leakage: occurs when you aren't careful to distinguish training data from validation data.
How to prevent them:
- Target leakage: any variable updated (or created) after the target value is realized should be excluded.
- Train-test leakage: take care to separate the training and validation data properly.
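To make the "use pipelines for cross-validation" note concrete, here is a minimal sketch in scikit-learn. It is not part of the Kaggle course material: the random-forest model, the imputation and encoding choices and the 5-fold split are illustrative assumptions, and MAE is used as the scoring metric to match the notes above.

```python
# A minimal sketch, assuming a tabular regression problem with numeric and
# categorical columns; the model and hyper-parameters are illustrative only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

def score_model(X: pd.DataFrame, y: pd.Series, n_estimators: int = 100) -> float:
    """Mean cross-validated MAE for a preprocessing + model pipeline."""
    numeric_cols = X.select_dtypes(include="number").columns
    categorical_cols = X.select_dtypes(exclude="number").columns

    preprocessor = ColumnTransformer(transformers=[
        ("num", SimpleImputer(strategy="median"), numeric_cols),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
    ])
    pipeline = Pipeline(steps=[
        ("preprocess", preprocessor),
        ("model", RandomForestRegressor(n_estimators=n_estimators, random_state=0)),
    ])

    # Preprocessing lives inside the pipeline, so each CV fold is imputed and
    # encoded using only its own training split -- this avoids train-test leakage.
    scores = -1 * cross_val_score(pipeline, X, y, cv=5,
                                  scoring="neg_mean_absolute_error")
    return scores.mean()
```

Calling score_model(X, y) for a few different n_estimators values is then a leak-free way to compare modeling decisions.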
https://joapen.com/blog/2021/11/22/intermediate-machine-learning-by-kaggle/
Hallo Foozi, a beautiful picture of the Common Tailorbird, very good sharpness, marvellous colours and atmosphere TFS Best regards Maurizio - - Miss_Piggy (18714) - [2012-02-23 3:29] - Hallo Foozi O, how I like the neat "combed" feathers of this Common Tailorbird. Further to that I agree with what Luciano has said about the point of view and also with Maurizio with the atmosphere. It is a different type of presentation, but for sure a pleasant one. Thanks for sharing. Best regards. Anna - - mamcg (9843) - [2012-02-23 5:48] - Salaam Ya Foozi, Some thing amazing I see here, you track beautiful species and I'm feel cold here in below zero, it is superb shot in beautiful colours, TFS. Regards. Musa. My favourite bird, Foozi. Looks like you captured this shot in inclement weather. Nevertheless, a good shot under the circumstances. I will take the liberty of fiddling with it through a Workshop, mainly to remove the BG noise. Hope you will not mind. Regards. Ram - - samiran88 (10271) - [2012-02-23 6:48] - probably today is 'Bird day'. nice picture of this little beauty. good POV. water droplet added beauty of the picture. well done my friend. tfs samiran - - marius-secan (29598) - [2012-02-23 8:55] - Hello Foozi, Another masterpiece....splendid capture with exceptional details. I can see the texture of the wings. Wonderful..... The composition, the light and exposure are perfect. Thanks for sharing! Marius. - - maaciejka (27117) - [2012-02-23 9:02] - Hi Foozi, excellent photo of this bird. Amazing colours. Perfect sharpness and details. Thanks for sharing, Maciek - - siggi (52850) - [2012-02-23 10:33] - Hello Foozi. Superb shot!What a beautiful colors and excellent quality. Great background made this bird bit more attractive.Best regards Siggi - - josediogo1958 (11747) - [2012-02-23 12:15] - Hi Foozi Wonderful capture.Lovely composition,colors and contrast.Great sharpness. Thank You Best regards J.Diogo - - mehmetarslan (2767) - [2012-02-23 12:55] - Hello Foozi, What a lovely picture. Well done. This is really fun. Have a good time. - - Silvio2006 (102073) - [2012-02-23 13:01] - Ciao Foozi, great capture of lovely bird in nice pose on a fantastic lighting BG, wonderful natural colors, fine details and splendid sharpness, very well done my friend, ciao Silvio - - CeltickRanger (37400) - [2012-02-23 14:42] - Hello Foozi I love the POV on the bird by behind and the excellent framing with leaving the good free space, excellent focus sharpness and details, beautiful color tones of the background, TFS Asbed - - jusninasirun (14660) - [2012-04-15 20:10] - Salam Foozi. I like how this little tailorbird is groomed and looks neat with the plume. The dorsal pov and the head-turn is beautiful against the silenced lavender background. Fine capture and thanks for sharing.
https://treknature.com/gallery/photo271099.htm
Rapid identification of human-infecting viruses.
Abstract: Viruses have caused much mortality and morbidity to humans and pose a serious threat to global public health. The virome with the potential of human infection is still far from complete. Novel viruses have been discovered at an unprecedented pace with the rapid development of viral metagenomics. However, there is still a lack of methodology for rapidly identifying novel viruses with the potential of human infection. This study built several machine learning models to discriminate human-infecting viruses from other viruses based on the frequency of k-mers in the viral genomic sequences. The k-nearest neighbor (KNN) model can predict the human-infecting viruses with an accuracy of over 90%. The performance of this KNN model built on short contigs (≥1 kb) is comparable to that of models built on the viral genomes. We used a reported human blood virome to further validate this KNN model with an accuracy of over 80% based on very short raw reads (150 bp). Our work demonstrates a conceptual and generic protocol for the discovery of novel human-infecting viruses in viral metagenomics studies.
Author(s): Zhang Zheng; Cai ZeNa; Tan ZhiYing; Lu CongYu; Jiang TaiJiao; Zhang GaiHua; Peng YouSong
Author affiliation: College of Biology, Hunan University, Changsha, China. Author email: [email protected]
Journal article: Transboundary and Emerging Diseases (2019) 66(6), 2517-2522. ISSN 1865-1674
Publisher information: Wiley, Berlin, Germany. Language of text: English
URL: https://onlinelibrary.wiley.com/doi/f...
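The abstract describes the approach only at a high level (k-mer frequencies as features, a k-nearest-neighbor classifier), so the following is an illustrative sketch rather than the authors' actual pipeline; the choice of k = 4, the toy sequences and labels, and the single-neighbor setting are all assumptions made for the example.

```python
# Illustrative sketch of classifying sequences from k-mer frequencies with KNN.
# The toy data and parameters are assumptions; the published study's real
# feature extraction, datasets and tuning are not reproduced here.
from itertools import product
from sklearn.neighbors import KNeighborsClassifier

K = 4
ALL_KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_frequencies(seq: str) -> list[float]:
    """Relative frequency of every possible k-mer in one genomic sequence."""
    counts = dict.fromkeys(ALL_KMERS, 0)
    for i in range(len(seq) - K + 1):
        kmer = seq[i:i + K]
        if kmer in counts:          # skip windows containing N or other ambiguity codes
            counts[kmer] += 1
    total = max(sum(counts.values()), 1)
    return [counts[k] / total for k in ALL_KMERS]

# Toy training data: 1 = human-infecting, 0 = other (purely illustrative labels).
train_seqs = ["ACGTACGTGGCA" * 10, "TTTTGGGGCCCC" * 10]
train_labels = [1, 0]

model = KNeighborsClassifier(n_neighbors=1)
model.fit([kmer_frequencies(s) for s in train_seqs], train_labels)
print(model.predict([kmer_frequencies("ACGTACGTGGCA" * 8)]))  # -> [1]
```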
https://www.cabi.org/isc/abstract/20193511450
String of dolphins is a rare and unique-looking succulent that is becoming increasingly popular in the world of gardening. It has a unique look, with its long stems and round, green leaves that resemble dolphins. It is a relatively low-maintenance plant that can be grown indoors or outdoors, making it a great choice for those who want to add some greenery to their home. However, there are some important things to know about string of dolphins, such as how to root cuttings, why it flattens out, whether it needs direct sunlight, and how to thicken the strings. In addition, we will discuss the rarest succulents and whether you should trim your string of dolphins, as well as if it goes dormant and why it may be growing straight up. Finally, we will look at the best ways to keep the string of dolphins alive. Are string of dolphins rare? String of dolphins are rare, but not unheard of. A string of dolphins is a group of dolphins that swim together in a line, usually with the leader at the front. They are usually seen in shallow waters, and tend to be a sign of a healthy and thriving population. Sightings of strings of dolphins have been reported in various parts of the world, but they are still considered to be a rare phenomenon. Scientists are still studying the behavior of dolphins to better understand why they sometimes form strings and why the behavior is so rare. What is the best way to root cuttings? The best way to root cuttings is to use a rooting hormone, which can be found at most garden stores. To use the rooting hormone, start by taking a cutting from the desired plant and trimming off any leaves from the bottom third of the stem. Dip the stem into the rooting hormone and then place it into a pot filled with soil. Mist the soil with water and cover the pot with a plastic bag, creating a humid environment for the cutting to root. Place the pot in an area that receives bright, indirect sunlight and monitor it regularly to ensure the soil is kept moist. After a few weeks, the cutting should have rooted and can be transplanted into a larger pot or the garden. Why do string of dolphins flatten out? Dolphins are highly social creatures and often travel in groups called pods. When a pod of dolphins swim together, they often form a string-like shape, with the dolphins swimming in a long line. This behavior is known as “stringing out” and is thought to be a way for the dolphins to communicate and stay together. By arranging themselves in a line, the dolphins can stay in contact with each other and can move more quickly and efficiently. Additionally, the shape of the line reduces the resistance of the water, allowing the dolphins to swim faster and farther. The behavior of stringing out also serves as a form of protection, as the dolphins can alert each other of any potential danger. Does string of dolphins need direct sunlight? String of dolphins, or Senecio peregrinus, is a succulent plant that does not need direct sunlight to thrive. In fact, too much direct sunlight can damage the plant. String of dolphins prefers bright, indirect light and should be placed in an area that gets some morning sun but is shaded from the hot afternoon sun. This succulent is also tolerant of low light, so it can survive in areas of the home that don’t get much natural light. How do you thicken dolphin strings? There are a few different ways to thicken dolphin strings. One way is to use a thicker string material, such as a thicker nylon or polyester. 
This is a good option for those who want a thicker string but don’t want to make any modifications to the string. Another option is to double up the strings by wrapping them around each other. This will make the strings thicker and more durable. Finally, if you want a truly thick string, you can wrap the strings around a dowel rod or a piece of wood. This will make the strings as thick as you need them to be. What are the rarest succulents? The rarest succulents vary depending on the region and availability, but some of the rarest succulents include the Adromischus cristatus, the Haworthia limifolia, and the Lithops optica. These succulents are known for their unique shapes and colors, and they are difficult to find in nurseries or in the wild. These succulents require special care, such as extra water and protection from intense sunlight, in order to thrive. For those looking for rare succulents, it is best to look for them online or in specialty stores, as they are not as widely available as other succulents. Should I trim my string of dolphins? No, you should not trim your string of dolphins. Trimming a string of dolphins is a difficult and dangerous task that should only be attempted by an experienced marine biologist. It is also important to consider the welfare of the dolphins, as trimming them can cause them stress and discomfort. If you are concerned about the health of your dolphins, it is best to consult a professional for advice on how to best care for them. Do string of dolphins go dormant? No, dolphins do not go dormant. Dolphins are active animals and they are constantly swimming and searching for food. They need to be constantly in motion to survive, as they need to breathe air at the surface of the water. Dolphins have a variety of behaviors that they use to survive, such as echolocation, socializing, and communicating with each other. Dolphins also have to be alert in order to avoid predators. As such, they cannot go dormant like other animals, such as bears or bats. Why is my string of dolphins growing straight up? It is likely that your string of dolphins is growing straight up because of the way it is attached to the substrate. The string may be attached to a weight that is pulling it down, or it may be attached to a fixed point that is keeping it upright. Additionally, the string of dolphins may be attached to a surface that is providing some resistance to the weight of the string, which could be causing it to remain in an upright position. Finally, the string of dolphins may be buoyant, which would cause it to float up. How do you keep the string of dolphins alive? The best way to keep the string of dolphins alive is to ensure their natural habitats are healthy and protected. This means reducing the amount of pollution in their environment, especially in the ocean. It also means protecting them from overfishing, as dolphins rely on fish for their food. In addition, it is important to reduce the amount of noise pollution in the ocean, as this can be disruptive to their communication. Finally, it is important to provide them with adequate food sources, as well as to monitor their health and any changes in their environment. All of these steps can help to ensure the string of dolphins remains healthy and alive. In conclusion, string of dolphins are rare and require direct sunlight to thrive. The best way to root cuttings is to place them in a well-draining soil and keep them moist. String of dolphins flatten out due to lack of sunlight or over-watering. 
To thicken the strings, it is important to ensure that the plant is receiving enough sunlight and water. The rarest succulents are Haworthia, Lithops, and Gasteria. Trimming is not necessary unless the plant is becoming overgrown. String of dolphins may go dormant in winter, but they should be kept alive by providing adequate light and water. If the string of dolphins is growing straight up, it is likely due to a lack of sunlight. Keeping the string of dolphins alive requires providing adequate light, water, and fertilization.
https://rightlivinn.com/are-string-of-dolphins-rare/
The Detroit MoPA
The Detroit Museum of Public Art (Detroit MoPA/DMoPA) is an online archive that I've run since 2015 with a mission to archive, interpret, protect, and promote the academic and professional study of public art in Detroit. The goal is ambitious (and funded almost entirely out of pocket), with the hope that eventually city government or an institution like the DIA will take over the project and pay me a fair salary to continue my work on it. That hasn't yet happened, but here is what I hope to accomplish:
- Photograph every single piece of public art in the city, and update the archive as new pieces are discovered or created.
- Research the name of the artist(s), the year each piece was completed, any information on the property it is on, and its estimated cost of production.
- Put all of the data on a public spreadsheet with GPS coordinates and other data points that enable researchers to learn how the artwork affects crime rates, graduation rates, property values, racial and income demographics, and more.
- Create a way for viewers to organize the artworks by year, genre, city district, or artist name.
- Rank city districts by the number of artworks in them so that administrators can see where additional funding is needed.
- Rank artists by the number of artworks in the city, creating an incentive to produce more work and recognition (and thus potentially higher commissions) for those that make it to the top of the list.
- Map the data, and have it be the best region-specific arts & culture resource available to city residents.
- Never charge for access to the data.
Note: Due to COVID-19, I've decided to take a "brief hiatus." Below are the remnants of the project.
https://clerard.com/dmopa/
The Midwood Girls Track team ran to third place at the Brooklyn Borough Championship on Sunday, October 28, at Van Cortlandt Park. Brooklyn Tech and Millennium High School came in first and second, respectively. The girls acknowledge Brooklyn Tech and Millennium as cross country powerhouses, along with Susan B. Wagner, but they know they can compete with the best. Alina Bennett ’20, a long distance runner, said, “They’re pretty good, but we can be better. They’re definitely our main rivals right now.” “It’s important to get into the right mindset,” said Estela Villacis ’21, another long distance runner who ran JV and finished 16th out of 33 people. “Track is a competitive sport, and wanting to win gets you halfway there.” Bennett’s time on the varsity 3.1 mile race was 23:47.77. “I could’ve done better,” she said. “My knee has been hurting a lot more lately, and I couldn’t perform at my best.” Bennett has been struggling with her injuries ever since sophomore year. Common injuries among the track team are shin splints and knee problems. “Sometimes athletes get what is known as athlete’s knees,” explained Aysegul Yumusak ’20. “Just recently, I had to get my knee taped up after some intense practices.” Despite these injuries, the runners persevere. Bennett said, “I’ve thought about quitting once or twice, but I know track will look good on my college application. Plus, I actually enjoy running, and I don’t really know how to play any other sport.” Track isn’t just running; there is also a lot of practice involved to hone and perfect form and increase stamina. Runners practice four days a week and have a team meeting in the weight training room on Fridays. “We do a bunch of one-thousand’s [meters] and sometimes a couple two hundreds, even for the girls who don’t run short distances,” said Villacis. Bennett added that the runners also refrain from eating sweets and junk food. “We try to eat healthy, so instead of sugar, we eat salads,” she said. “We’re actually not allowed to eat any candy or chips.” According to Bennett and Villacis, there are many benefits to joining the track team beyond better physical form. “I’d definitely recommend everyone to join,” said Bennett. ”It promotes a healthy lifestyle.” “It also helps with time management,” Villacis added. “I couldn’t manage my time properly, but now I do most of my homework in the library before practice.” The team typically practices after school from Monday to Saturday. They practiced in Prospect Park and the Midwood gym room for at least one and a half hour from Monday to Friday at the beginning of the season. But as the sky is growing darker faster, the team has begun practicing on Midwood Field, which is closer to the school. On Saturday, the team usually practices in Van Cortlandt Park because this will help them become familiar with the competition routines. Running is not an easy sport, the athletes say. “Track is dangerous,” Yumusak said, laughing. “One time — multiple times, actually — I’ve stepped into a hole in Prospect Park while running a race or just during practice and almost sprained my ankle. I think most girls who’ve run at least one year of cross country seriously have experienced that.” In addition, while the team is preparing for the City Championship in two weeks, the team had a Freshman Sophomore City Championship. The timing interrupted their rhythm, and many runners didn’t get their ideals places. “After looking at my time for the sophomore race at Fresh/Soph, I wasn’t satisfied,” said Karen Lam ’21. 
“I’m not sure if I’m going to run in the City Champs, but I won’t let this one race get me down. I’ll learn from it and try not to make the same mistakes next race.” Results from track meets can be disheartening sometimes, as the girls know, but they’ve learned something important along the way.
https://www.midwoodargus.com/blog/2018/11/20/girls-track-sprints-to-third-place-finish
Trend Report Overview - Stationery and Tablet Manufacturing
The Trend Spotting report is a monthly supplement to the Industry Research Report on Stationery and Tablet Manufacturing that shows current market and employment trends. This report helps identify upward and downward swings in the Stationery and Tablet Manufacturing market as they are occurring.
Price Trends for Stationery and Tablet Manufacturing
This section shows how prices are changing within the industry on a month-to-month basis. Price changes may be affected by seasonal demand, or by larger movements in the market.
Sector Layoff Trends for Stationery and Tablet Manufacturing
This section shows how layoffs are affecting the overall sector. This includes both the Stationery and Tablet Manufacturing industry and adjacent markets that employ a similar workforce. Layoffs are both a warning signal and an opportunity. Understanding what has happened in the market in recent months is critical to making informed business decisions. This report contains up-to-the-minute Stationery and Tablet Manufacturing trends for 2021. Determine whether customer demand is ticking upwards or shifting in the reverse direction. The newest market indicators show declining and growing prospects, as well as historical data. What the trend in Stationery and Tablet Manufacturing shows about sales over the course of the year demonstrates how businesses should react. Employers reduce their workforce through layoffs because of a fragile economy, soft demand, or financial pressure. Our lab monitors the latest news and events to give updated statistics.
http://www.anythingresearch.com/trends/Stationery-and-Tablet-Manufacturing.htm
In Animal Farm, irony abounds when the animals begin breaking the rules that they themselves set for their society. The first rule, "Whatever goes upon two legs is an enemy," is clearly broken when Napoleon meets with the farmers. Similarly, alcohol is consumed (rule number 5), and Napoleon has ordered the deaths of other animals (rule number 6). Later, some of the rules are amended to accommodate the leaders' breaking of them. Ironically, only the pigs are allowed such luxuries, and all the other animals are meant to abide by the original orders. This irony is directly related to the theme of Animal Farm because it represents the hypocrisy that was inherent in Stalinism: the rules are not for everyone, only for the masses, so that control can be maintained.
The RippleState object type connects two accounts in a single currency. Conceptually, a RippleState object represents two trust lines between the accounts, one from each side. Each account can change the settings for its side of the RippleState object, but the balance is a single shared value. A trust line that is entirely in its default state is considered the same as a trust line that does not exist, so rippled deletes RippleState objects when their properties are entirely default.
Since no account is privileged in the XRP Ledger, a RippleState object sorts the two account addresses numerically, to ensure a canonical form. Whichever address is numerically lower is deemed the "low account" and the other is the "high account".
Fields (name, JSON type, internal type):
- LedgerEntryType (String, UInt16): The value 0x0072, mapped to the string RippleState, indicates that this object is a RippleState object.
- Flags (Number, UInt32): A bit-map of boolean options enabled for this object.
- Balance (Object, Amount): The balance of the trust line, from the perspective of the low account. A negative balance indicates that the low account has issued currency to the high account. The issuer in this amount is always set to the neutral value ACCOUNT_ONE.
- LowLimit (Object, Amount): The limit that the low account has set on the trust line. The issuer is the address of the low account that set this limit.
- HighLimit (Object, Amount): The limit that the high account has set on the trust line. The issuer is the address of the high account that set this limit.
- LowNode (String, UInt64): (Omitted in some historical ledgers) A hint indicating which page of the low account's owner directory links to this object, in case the directory consists of multiple pages.
- HighNode (String, UInt64): (Omitted in some historical ledgers) A hint indicating which page of the high account's owner directory links to this object, in case the directory consists of multiple pages.
- LowQualityIn (Number, UInt32): (Optional) The inbound quality set by the low account, as an integer in the implied ratio LowQualityIn:1,000,000,000. The value 0 is equivalent to 1 billion, or face value.
- LowQualityOut (Number, UInt32): (Optional) The outbound quality set by the low account, as an integer in the implied ratio LowQualityOut:1,000,000,000. The value 0 is equivalent to 1 billion, or face value.
- HighQualityIn (Number, UInt32): (Optional) The inbound quality set by the high account, as an integer in the implied ratio HighQualityIn:1,000,000,000. The value 0 is equivalent to 1 billion, or face value.
- HighQualityOut (Number, UInt32): (Optional) The outbound quality set by the high account, as an integer in the implied ratio HighQualityOut:1,000,000,000. The value 0 is equivalent to 1 billion, or face value.
There are several options which can be either enabled or disabled for a trust line. These options can be changed with a TrustSet transaction. In the ledger, flags are represented as binary values that can be combined with bitwise-or operations. The bit values for the flags in the ledger are different from the values used to enable or disable those flags in a transaction. Ledger flags have names that begin with lsf.
If an account modifies a trust line to put it in a non-default state, then that trust line counts towards the account's owner reserve. In a RippleState object, the lsfLowReserve and lsfHighReserve flags indicate which account(s) are responsible for the owner reserve. The rippled server automatically sets these flags when it modifies a trust line.
The lsfLowAuth and lsfHighAuth flags do not count against the default state, because they cannot be disabled. The default state of the two NoRipple flags depends on the state of the lsfDefaultRipple flag in their corresponding AccountRoot objects. If DefaultRipple is disabled (the default), then the default state of the lsfNoRipple flag is enabled for all of an account's trust lines. If an account enables DefaultRipple, then the lsfNoRipple flag is disabled (rippling is enabled) for an account's trust lines by default. Note: Prior to the introduction of the DefaultRipple flags in rippled version 0.27.3 (March 10, 2015), the default state for all trust lines was with both NoRipple flags disabled (rippling enabled). Fortunately, rippled uses lazy evaluation to calculate the owner reserve. This means that even if an account changes the default state of all its trust lines by changing the DefaultRipple flag, that account's reserve stays the same initially. If an account modifies a trust line, rippled re-evaluates whether that individual trust line is in its default state and should contribute to the owner reserve.
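To make the field descriptions above more tangible, here is a small hand-written sketch of a RippleState-shaped entry and a helper that restates the shared Balance from each side, following the sign convention documented above (a negative balance means the low account has issued currency to the high account). The addresses, amounts, node hints and Flags value are invented for illustration and are not taken from a real ledger.

```python
# Illustrative RippleState-shaped entry; field names follow the description above,
# but every value here is made up for the example.
example_ripple_state = {
    "LedgerEntryType": "RippleState",
    "Flags": 393216,                                  # illustrative bit-map value
    "Balance": {"currency": "USD",
                "issuer": "rrrrrrrrrrrrrrrrrrrrBZbvji",   # neutral ACCOUNT_ONE-style issuer
                "value": "-10"},
    "LowLimit": {"currency": "USD", "issuer": "rLowAccountExampleAddress", "value": "100"},
    "HighLimit": {"currency": "USD", "issuer": "rHighAccountExampleAddress", "value": "0"},
    "LowNode": "0000000000000000",
    "HighNode": "0000000000000000",
}

def describe_balance(entry: dict) -> str:
    """Restate the single shared balance from both sides, per the sign convention above."""
    value = float(entry["Balance"]["value"])
    currency = entry["Balance"]["currency"]
    if value < 0:
        return f"the low account has issued {abs(value)} {currency} to the high account"
    if value > 0:
        return f"the high account has issued {value} {currency} to the low account"
    return "the trust line balance is zero"

print(describe_balance(example_ripple_state))
```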
https://developers.ripple.com/ripplestate.html
Your students can refer to this checklist whenever they practice. By Megan Desmarais | Deliberate Practice How do you motivate piano students? It seems like this is the question of the century and in a perfect world, every piano student would be By Tracy Plunkett | Classically Trained to Creatively Curious For piano students to continue being motivated to play the piano, they must first be filled with a sense of achievement. After all, if By Melody Deng | Classically Trained to Creatively Curious Silly Mistakes and How to Get Rid of Them! Hi there, my name is Melody, and I am a piano teacher from New Zealand. I have a studio of about By Nancy Tanaka | Creativity Let us motivate piano students once and for all. By Ruth Power | Aural Yep, you can make playing by ear fun. Here's how. By Chris Owenby | Creativity Just because it's summer doesn't mean your students need to stop playing the piano.
https://timtopham.com/category/piano-teaching/deliberate-practice/
Q: Z axis Stepper motors not working correctly I have been searching around the internet for the last 3 days trying to figure this out. My Z axis motors for a pursa-i3 3d printer are not working correctly. I have marlin firmware and using repetier host. I send a command to move the z axis and I get it to move, however I if I send the same command again the motors will sometimes spin the other way. feel like They almost randomly choose which direction they turn. As I said I have been trouble shooting this for a while now. What I am suspecting is the firmware feedrates and acceleration or some setting is not correct. Here is my code: //// MOVEMENT SETTINGS #define NUM_AXIS 4 // The axis order in all axis related arrays is X, Y, Z, E #define HOMING_FEEDRATE {50*60, 50*60, 2*60, 0} // set the homing speeds (mm/min) #define DEFAULT_AXIS_STEPS_PER_UNIT {80,80,4000,590} #define DEFAULT_MAX_FEEDRATE {300, 300, 3, 45} // (mm/sec) #define DEFAULT_MAX_ACCELERATION {1000,1000,50,500} // X, Y, Z, E maximum start speed for accelerated moves. E default values are good for Skeinforge 40+, for older versions raise them a lot. #define DEFAULT_ACCELERATION 1000 // X, Y, Z and E max acceleration in mm/s^2 for printing moves #define DEFAULT_RETRACT_ACCELERATION 1000 // X, Y, Z and E max acceleration in mm/s^2 for retracts #define DEFAULT_XYJERK 10 // (mm/sec) #define DEFAULT_ZJERK 0.3 // (mm/sec) #define DEFAULT_EJERK 5.0 // (mm/sec) I tried swapping the drivers around and the motors will work perfectly on another axis so I don't suspect it to be a driver issue. I have been turning the pots ontop of the drivers to make them work but can't make them to go the same direction i want them to. I've checked the wires and I almost sure they are wired up correctly. (could be wrong but have checked it over with a multimeter.) I am new to this and it's my first time building one of these would appreciate any help I can get and and maybe I have over looked something I have tested. Just really want the axis to move in the direction That I say it to move in. More details about my setup is: A Robocraze 3D Printer Controller Board RAMPS 1.4 using A4988 stepper motors drivers and my motors are the nema 17 stepper motors. I currently have the two z axis motors wired in parallel but have tried before using series, however the problem of being unable to control the direction of the Z motors still arises (can easily switch back to series). currently trying with no load just to get the motors turn in the correct direction when I send a G-code command. I am using Repetier host on ubuntu 14.04.5. I have also check the endstops and they are working perfectly, so they ain't a problem (I don't think :p) Thank you, Bobby A: So after 5 days of trouble shooting, Bob-the-Kuhn over on the marlin github forum solved it for, anyone else who faces the same issue can head over to github for my solutions. https://github.com/MarlinFirmware/Marlin/issues/9287#issuecomment-359428147 Conversation from link: Bob-the-Kun: Problem does not follow the driver. Problem does not follow the steppers. I'm thinking that the Z socket has a problem. Sounds like an open/poor contact. Try bending the DIRECTION lead on the Z driver a little and see if the problem disappears. It's one of the corner pins. Sometimes it's called DIR. If your driver's pins aren't labeled then bend all four corner pins a little. Another option is to move the Z function to the E1 socket. Replace your pins_RAMPS.h file with this one. 
pins_RAMPS.zip FYI - if this really is a hardware problem then it's the second RAMPS hardware problem within a week. Most unusual. Post reply: Yes the socket I am now assuming is just broken (not sure what exactly but possibly one of the connections), After using for pins_RAMPS file and changed the motors back to series and connected to the E1 slot I successful got the printer to work!!! Thank you very much Z axis is working as I would expect! I am now calibrating the printer as it definitely needs it.
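For anyone reproducing the original symptom ("send the same move twice and the motor sometimes reverses"), a small script can drive the test over the serial port instead of clicking in Repetier-Host. This is only a convenience sketch, not part of the fix above: the port name and baud rate are assumptions for a typical RAMPS 1.4 setup, and it relies on the standard Marlin G-codes G91/G90 (relative/absolute positioning), G1 (move) and M114 (report position).

```python
# Send a repeatable Z-axis test over serial (pyserial). Port and baud are assumed.
import time
import serial  # pip install pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200   # adjust to your machine

def send(ser: serial.Serial, line: str) -> None:
    """Send one G-code line and echo whatever the firmware answers."""
    ser.write((line + "\n").encode("ascii"))
    print(line, "->", ser.readline().decode(errors="replace").strip())

with serial.Serial(PORT, BAUD, timeout=5) as ser:
    time.sleep(2)            # most boards reset when the port is opened
    send(ser, "G91")         # relative positioning
    send(ser, "G1 Z5 F200")  # raise Z by 5 mm
    send(ser, "G1 Z5 F200")  # repeat: both moves should go the same direction
    send(ser, "G90")         # back to absolute positioning
    send(ser, "M114")        # report where Marlin thinks the axes are
```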
Explanation: The service of Tenebrae, meaning "darkness" or "shadows," has been practiced by the church since medieval times. Once a service for the monastic community, Tenebrae later became an important part of the worship of the common folk during Holy Week. We join Christians of many generations throughout the world in using the liturgy of Tenebrae. This remembrance can be done alone or with a group (in person or gathered electronically).
Opening Reading: Isaiah 53:2-12
Candle Lighting: Light 11 candles and 1 Christ candle; extinguish one candle after each reading, saving the Christ candle.
Song: Lead Me to the Cross
First Reading: Matthew 21:1-11
Second Reading: Matthew 26:20-25
Third Reading: Matthew 26:26-30
Song: The Wonderful Cross
Fourth Reading: Matthew 26:31-35
Fifth Reading: Mark 14:32-41
Sixth Reading: Matthew 26:47-52
Seventh Reading: Luke 22:54-62
Eighth Reading: John 18:33-38
Ninth Reading: Matthew 27:20-26
Tenth Reading: Mark 15:16-20
Eleventh Reading: John 19:17-24
Song: Were You There?
https://www.gracechurchnwa.org/tenebrae-2020
December 20, 2016Nobody can fault the New York Yankees if they’re getting cold feet with their plan to take the long way back to success, but the best advice for them right now ... Rich Hill Signing Would Be Great Yankees Fit for Both Present and Future November 23, 2016The New York Yankees need a starting pitcher. In past winters, that would have led to their going after only the best options, and damn the cost! But since they need ... Gary Sanchez Gives 1st Glimpse of Yankees’ Bright Homegrown Future August 12, 2016There's not a lot of joy to be derived from the New York Yankees' present. Their 58-56 record is far from an Atlanta Braves-level disaster, but it marks the fourth year ... Alex Rodriguez-Joe Girardi Drama Making Yankees ‘Farewell Tour’ a Rocky Affair August 11, 2016Just when the New York Yankees allowed us to think we might be done forever with controversies centered around Alex Rodriguez, manager Joe Girardi pulled us back in. In case you've ... Yankees Facing Most Important Trade Deadline of the Brian Cashman Era July 28, 2016Including this one, the New York Yankees have been winners in each of the last 24 seasons. That's meant 24 years of buying or staying the course at the trade ... How Much Could Yankees Actually Sell on Summer Trade Market? July 16, 2016For the New York Yankees, August 1 is looking less like the trade deadline and more like a sell-by date. By now, there's little question that selling on the summer ... CC Sabathia’s Rebirth Is Most Pleasant Surprise for Yankees June 10, 2016What Alex Rodriguez was for the 2015 New York Yankees, CC Sabathia has been for the 2016 New York Yankees. That is to say: seemingly against all odds, a hugely productive ... What If the Yankees Became 2016 Trade-Deadline Sellers? May 12, 2016It's looking like not even the New York Yankees can outrun losing forever, and that raises a question. What would it look like if they decided to make the best of ... Locked-In Starlin Castro, Healthy Yankees Lineup Teasing Explosive Potential April 7, 2016It appears that all the New York Yankees offense needed to get going this season was a one-game warm-up. Also, somebody other than Dallas Keuchel on the mound for the ... Yankees Linchpin Mark Teixeira Has Path to Big Contract Year in 2016 March 3, 2016For the last seven years, Mark Teixeira has been paid large sums of money to play ball for the New York Yankees. It's good work if you can get it, ...
http://www.yankeeaddicts.com/author/zachary-d-rymer/
A proposed solution to the millennium problem on the existence and smoothness of the Navier-Stokes equations. Elements Of Computational Fluid Dynamics - Geology, Mathematics - 2011 Introduction Finite-Difference Approximations Finite-Difference Equations Numerical Stability Source Terms Diffusion Convection Pressure Waves Combining the Elements. Euler-Lagrangian simulations of turbulent bubbly flow. - Physics - 2011 This dissertation aims to provide a history of aerospace engineering and mechanics in the post-modern era by describing the development of Aerospace Engineering and Mechanics as well as some of the techniques used in modern engineering. Numerical studies on flows with secondary motion - Physics - 2016 This work is concerned with the study of flow stability and turbulence control - two old but still open problems of fluid mechanics. The topics are distinct and are (currently) approached from diff… Model-based control of transitional and turbulent wall-bounded shear flows - Engineering - 2012 University of Minnesota Ph.D. dissertation. January 2013. Major: Electrical Engineering. Advisor: Professor Mihailo R. Jovanovic. 1 computer file (PDF); xvii, 242 pages, appendices A-E. Mechanics of Fluids - Mathematics, Physics - 2013 In this chapter, we analyse the global variables of fluid dynamics to determine their association with space and time elements. We also present the two major balance equations, the mass and momentum… Incompressible Bipolar Fluid Dynamics: Examples of Other Flows and Geometries - Mathematics - 2014 The mathematical model of a nonlinear, incompressible, bipolar viscous fluid was introduced in Sect. 1.6 and conforms to the constitutive hypotheses for the Cauchy stress tensor τ ij and the first… On steady vortex flow in two dimensions. I - Physics, Mathematics - 1982 (1983). On steady vortex flow in two dimensios, II. Communications in Partial Differential Equations: Vol. 8, No. 9, pp. 1031-1071.
https://www.semanticscholar.org/paper/An-Introduction-to-Fluid-Dynamics-Batchelor/b904e9b7cd539936a2058f623a06c8fea10651c8
lindsey pearlman general hospital who did she play the last time I saw lindsey pearlman was in 2011, but the first time I saw her was in a video for the new movie “The Other Woman”. She was the star of the movie (and the video), but she was so much more. The new film’s director, Laura Dern plays lindsey pearlman. And she’s probably the biggest reason the movie was so successful. The movie follows the life of the beautiful and enigmatic young actress who left her career as an astronaut to become a nurse. But she made a big mistake. Instead of becoming a nurse, she became a doctor, which ended up costing her her life. The film tells the story through the point of view of lindsey pearlman who, after seeing a group of women who have survived the disaster at the end of her flight, decides to become a nurse. She’s a true hero in the eyes of some people because she’s a nurse. It is also interesting to note that Pearlman is at the forefront of trying to get all the survivors to agree to donate their organs to a medical research center. We all know that there are a lot of people that are going to die in a plane crash or in an automobile accident, like Pearlman, or even in the case of those that choose to die in a hospital setting, it’s not guaranteed that anyone will be alive to donate their organs. It’s important to remember that donating your organs is a very personal decision. In the case of Pearlman, she is a pretty unique person. She has a lot of money, her own business is very successful, and it seems that she is very involved in the community. It’s also interesting to note that she and her husband are both black. Pearlman is not that uncommon a death. In fact, on the weekend I watched a YouTube video of a woman named Taryn M. who died in a car accident just minutes before my son was born. She had a small child with her, and her husband was an avid cyclist. She made the decision to donate her life’s blood to keep her husband alive. I grew up in a small town in a small county. My mother ran her own business and I grew up helping out. I always had a job, but I never considered myself as a “manager” of anything. When I had a child I was an active part of the community as a babysitter until she took a job in a hospital and I had to give up my babysitting duties. My son was born on the same day as this. I took every opportunity to let him know what I was like as a mother, and how I felt about the way she was. I wanted him to know that I was a strong, independent woman who wouldn’t be taken advantage of by anyone. I remember thinking I did this just to take care of other people. When I started watching the game, I realized that I was doing it because I wanted to be something more than a babysitter. I wanted to be a doctor, a general, or a nurse. This has been the best decision I have made in a long time. It’s nice to have a person who is strong enough to stand up to the powers that be, but what we really need is someone who is so strong that they can kick them in the butt and say, “You’re pathetic, you’re just a pathetic person.” That’s what we need more of.
https://adoptthesky.org/lindsey-pearlman-general-hospital-who-did-she-play/
We are searching data for your request: Upon completion, a link will appear to access the found materials. The lunar sowing calendar for gardening in June 2018 recommends that active work on planting various garden crops be initiated. As a rule, work with plants of a long day is actively being conducted at this time. What to plant at the beginning of the month according to the lunar calendar for June 2018 According to the advice of the lunar sowing calendar, in early June, corn, beets, beans, peas and carrots are planted. You can sow peas, turnips, celery, planted tomato seedlings. The gardener's table allows peppers and eggplant to be planted in the ground in the first half of June. You can sow cauliflower and winter radish. If there is no danger of frost, you can plant melons, zucchini, pumpkins and watermelons. June is a month for collecting turnips, radishes, onions. After harvesting, crops can be re-planted by swapping them. Until the middle of the month, the lunar sowing calendar for June 2018 recommends planting seedlings of cabbage, potatoes, pumpkin and radish. Cauliflower, lettuce, radish and kohlrabi are being planted again. Mid-month landings Favorable work in the middle of the month - planting tomatoes, eggplant and cucumbers. You can start planting peppers in greenhouses or just on ridges. The Urals and the Volga region can plant ornamental plants in the open ground: - Coleus (Coleus); - amaranth; - begonias (Begonia); - Balsamina (Impatiencs Balsamina). On the greenhouse beds you can plant the seeds of forget-me-nots, hesperis and daisies. In mid-June, a day should be allocated for planting trees, as well as sow late seedlings of cucumbers and tomatoes. At the same time, the daikon is sown. Landing work in June on the lunar calendar for 2018 Across Russia, from Siberia to the Moscow Region, fennel seeds can be sown at the end of June. The presented plant of a short day grows well at this time. Along with it, sowing of leaf lettuce and radish is carried out. You can sow peas. The lunar sowing calendar for 2018 recommends sowing it at the end of June, then the peas will not be worms. The lunar sowing calendar for 2018 indicates that at the end of the month it is worth planting asparagus beans. Varieties at the same time are selected early ripening, capable of ripening in 50 days from the appearance of seedlings. When planting in late June, the beans turn out to be juicy, as its ripening falls at the end of summer, when there is a lot of rainfall. Rules of landing in June When planting in June, some rules should be followed. The gardener's calendar recommends planting basil near tomatoes, onions or cucumbers. Dill can live with everyone, but the forbidden neighbors are watercress and basil. Fennel is not able to get along with bush beans. Watercress does not tolerate the neighborhood of beets, but will be a wonderful companion to radishes, radishes and carrots. Cucumber prefers when there are a large number of vegetables near it: cabbage, beans, beets. However, it should not be planted next to onions, lettuce and turnips. Peas should not be planted with bush beans. And he will have a beneficial effect on fennel, radish and sunflower. Broccoli can be planted next to white cabbage, carrots, dill, spinach. By the fall, an additional crop can be removed to make room for already grown cabbage. Vegetables are not planted next to fennel and watercress. 
Having studied the gardener's calendar and the rules of which crops make good neighbors, you can plant a large number of plants even in a small area. For example, you can plant onions with lettuce sown between them, cucumbers next to them and corn in the row spacings. It is convenient to take the gardener's calendar for 2018, look up on which days what is recommended to be planted, and sketch a planting plan on a sheet of paper.

Important! At the beginning of the month frosts can still occur, so it is worth taking measures in advance to protect plants from the cold.

Seasonal plantings in June

Lunar sowing calendar for the gardener and kitchen gardener for June 2018: favorable and unfavorable days. Each entry gives the date, the Moon's position in the zodiac, the recommended work in the garden and the work that is not recommended:

- 1.06, Moon in Capricorn. Recommended: pricking out and transplanting plants sown under Cancer, Scorpio and Pisces. Not recommended: taking and rooting cuttings of fruit crops; sowing melons, herbs or strawberries.
- 2.06, Moon in Capricorn. Recommended: spraying and fumigating plants against pathogenic microorganisms and plant parasites. Not recommended: budding, because of reduced take and root vitality.
- 3.06, Moon in Aquarius. Recommended: rooting cuttings, fertilizing, watering followed by loosening. Not recommended: grafting stone-fruit crops and spraying plants with chemicals.
- 4.06, Moon in Aquarius. Recommended: planting any tuberous flowers, as well as ornamental shrubs and rose bushes. Not recommended: rejuvenating pruning, crown shaping or pinching.
- 5.06, Moon in Aquarius. Recommended: sowing and planting crops intended for long-term storage and for collecting seed material. Not recommended: rejuvenating pruning, crown shaping and pinching.
- 6.06, Moon in Pisces. Recommended: sowing seeds and planting seedlings of culinary and medicinal herbs and of flowering and climbing ornamental crops. Not recommended: transplanting, root division and other propagation of bulb and tuberous-bulb plants.
- 7.06, Moon in Pisces. Recommended: sowing seeds and planting seedlings of crops intended for long-term storage and for seed production. Not recommended: heavy pruning; digging potatoes for storage.
- 8.06, Moon in Aries. Recommended: sowing and planting any fast-growing medicinal crops; rooting cuttings of fruit and berry plants. Not recommended: heavy pruning; fertilizing garden crops.
- 9.06, Moon in Aries. Recommended: digging and loosening, hilling up plants and thinning seedlings; removing tomato side shoots and strawberry runners. Not recommended: any pruning, because the crown is especially sensitive to damage.
- 10.06, Moon in Taurus. Recommended: planting rootstocks for later grafting; transplanting plants with a weak root system. Not recommended: planting seedlings of garden crops, removing side shoots, pruning and pinching.
- 11.06, Moon in Taurus. Recommended: preventive measures against diseases and pests; root and foliar feeding. Not recommended: any work on the root system of plants, including loosening the soil.
- 12.06, Moon in Gemini. Recommended: harvesting grain; gathering medicinal plant material. Not recommended: any work with garden tools.
- 13.06, Moon in Gemini. Recommended: crown shaping, thinning the crop load, removing side shoots, cutting strawberry runners, pinching. Not recommended: excessive watering and feeding, which can provoke diseases of the root system.
- 14.06, Moon in Cancer. Recommended: pinching and removing side shoots; preventive spraying; collecting seed material. Not recommended: excessive watering and feeding, which can provoke diseases of the root system.
- 15.06, Moon in Cancer. Recommended: planting any seedlings, planting seed potatoes, sowing flower crops, watering and cultivating. Not recommended: preventive treatment against diseases and plant parasites; preparing soil substrates.
- 16.06, Moon in Leo. Recommended: applying mineral and organic fertilizers at a reduced dosage. Not recommended: pruning plants; preventive treatment against diseases and plant parasites.
- 17.06, Moon in Leo. Recommended: gathering medicinal plant material; composting. Not recommended: excessive watering and overfeeding garden plants.
- 18.06, Moon in Virgo. Recommended: autumn plowing, preparing garden beds, loosening and hilling up, replacing the soil substrate in flower pots. Not recommended: pricking out seedlings or pinching garden crops.
- 19.06, Moon in Virgo. Recommended: thinning seedlings, removing weeds and strawberry runners, sanitary pruning. Not recommended: sowing and planting for a long-term harvest.
- 20.06, Moon in Virgo. Recommended: soaking and germinating seeds, sowing and planting seedlings, pinching. Not recommended: loosening in the root zone and transplanting plants.
- 21.06, Moon in Libra. Recommended: digging up bulb crops, harvesting for long-term storage, making winter preserves and canning. Not recommended: pinching and pricking out, and any work with garden tools.
https://au.psichapter.net/1017-lunar-sowing-calendar-of-gardener-by-day-for-june-20.html
We are at sea level in a room at 21 °C. We have 1 liter of sterile water at 21 °C in an ordinary plastic bottle. We have a 20 liter bucket of ice cubes made of sterile frozen water; each cube is one cubic centimeter. We will place the bottle into the centre of the mass of ice cubes. The water in the bottle needs to freeze completely solid. What is the highest temperature the ice can have upon contact for this to happen? I would also like to see a formula for calculating this. If we accept a large error margin of 5-10 degrees, and make assumptions about the shapes, materials, etc., is it possible to work out with pen and paper?
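A minimal back-of-envelope sketch of one way to approach it (not an answer from the original page): assume the bucket is well insulated from the 21 °C room, ignore the heat capacity of the plastic bottle, and assume a packing fraction of roughly 60% for 1 cm³ cubes with about 1 liter of bucket volume displaced by the bottle. The energy balance m_w * (c_w * ΔT + L_f) <= m_ice * c_ice * (0 - T_ice) then gives T_ice <= -(m_w * (c_w * ΔT + L_f)) / (m_ice * c_ice). The packing fraction, the displaced volume and the material constants below are assumptions, not values given in the question.

```python
# Back-of-envelope energy balance: how cold must the ice be to freeze
# the 1 L bottle solid, ignoring heat leaking in from the 21 degC room
# and the thermal mass of the plastic bottle itself?

C_WATER = 4186.0      # J/(kg*K), specific heat of liquid water
C_ICE = 2100.0        # J/(kg*K), specific heat of ice
L_FUSION = 334_000.0  # J/kg, latent heat of fusion of water
RHO_ICE = 917.0       # kg/m^3, density of ice

m_water = 1.0          # kg of water in the bottle (1 litre)
T_water = 21.0         # degC, initial bottle temperature

bucket_volume = 0.020      # m^3 (20 L)
bottle_volume = 0.001      # m^3 displaced by the bottle (assumption)
packing_fraction = 0.6     # loose random packing of small cubes (assumption)
m_ice = (bucket_volume - bottle_volume) * packing_fraction * RHO_ICE

# Heat the ice must absorb: cool the water from 21 to 0 degC, then freeze it.
q_needed = m_water * (C_WATER * T_water + L_FUSION)

# The ice can absorb that heat only by warming from T_ice up to 0 degC;
# any warmer and it starts melting instead of freezing the bottle.
delta_T = q_needed / (m_ice * C_ICE)
T_ice_max = -delta_T

print(f"ice mass ~ {m_ice:.1f} kg")
print(f"heat to remove from bottle ~ {q_needed / 1000:.0f} kJ")
print(f"ice must start at or below ~ {T_ice_max:.0f} degC")
```

With these assumptions the answer works out to roughly -19 °C or colder (about 10.5 kg of ice absorbing around 422 kJ). In practice, heat leaking in from the 21 °C room would push the required temperature lower still, which is exactly why the 5-10 degree error margin mentioned in the question matters.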
https://physics.stackexchange.com/questions/257305/how-cold-does-this-ice-have-to-be-to-freeze-this-water-bottle-solid
Mechan is a K7 star which is orbited by the planet Mechano. Subspace Trade Routes Mechan was first charted by Yazirian explorers hailing from the direction of Scree Fron, but the route is a closely guarded secret shared by the UPF and a select few local governments and corporations. The route between Mechan and Zebulon is well charted by UPF Spacefleet, but is otherwise a closely guarded secret. A few Capellan traders, both individual and corporate, have charted routes between Capella and Mechan, each keeping its route exclusive to protect its trade.
http://starfrontiers.wikia.com/wiki/Mechan
An EU Civil Society Strategy: Why Do We Need It?
by Veronika Móra, Hungarian Donors Forum

Last week the EU Agency for Fundamental Rights in Vienna held its annual conference, the Fundamental Rights Forum (FRF), which, after last year's fully online edition, took a hybrid format. The FRF is a unique platform for dialogue about the most pressing human rights challenges Europe faces today, and one of the sessions of the conference, "An EU civil society strategy: Why do we need it?", was organised by our member, the Hungarian Donors Forum. The session set out to provide an overview of the challenges faced by civil society organisations in the EU, particularly in Central Europe, and of how the EU could help. You can watch all the sessions online here. Veronika Móra from the Hungarian Donors Forum explains in detail the need for a civil society strategy, as it was also discussed during the FRF.

'Shrinking civic space' was one of the new terms we all had to learn in the past decade. Threats to the free, independent and autonomous operation, and indeed to the sheer existence, of civil society organisations (CSOs) are by now not something that only happens in faraway, exotic countries with little or no democratic tradition; they occur within the borders of the European Union too, particularly in Central European Member States, while warning signs have been observed in "established democracies" such as Germany, France and Spain. The symptoms of this narrowing space range from discrediting and vilification campaigns in (government-friendly) media with libellous accusations, through harassing inspections by official authorities (such as the tax agency) and attempts to close down individual organisations (in Bulgaria), to legal restrictions on the freedom of association (Hungary) and assembly (Poland). International bodies tasked with safeguarding the rule of law and fundamental rights, such as the UN Special Rapporteurs and the High Commissioner for Human Rights, the OSCE Office for Democratic Institutions and Human Rights and the Council of Europe, have all issued reports and expressed strong concern over these trends, but have failed to make a real impression on the governments in question. The European Union, through its infringement procedures and the European Court of Justice, is somewhat better equipped to counter the most extreme actions and moves of Member State governments; however, these instruments still represent a piecemeal, case-by-case, reactive approach, unable to address the "big picture" or the more systemic breaches of democratic values and principles. Nevertheless, in our times of democratic backsliding and the rise of populism, CSOs are important allies of the European institutions, as they play a vital role in the promotion and application of universal and European values at the local, national and supra-national levels, and are often the first and last frontier upholding and promoting respect for human rights, dignity, freedom, tolerance and solidarity. Yet civil society policy is still by and large a Member State competence, that is, national governments are free to design and implement their own approaches and strategies vis-a-vis civil society in their countries.
The EU institutions that came into office in 2019, particularly the Parliament and the Commission, seem to be more aware of this problem than their predecessors (though not without being reminded by European civil society umbrella bodies). This is illustrated, among other things, by the fact that the mission of the Commissioner for Transparency and Values, Vera Jourova, now includes a reference to civil society and "the protection of the right of peaceful assembly and the freedom of association". Other recent developments include:

- During the past year, the Commission adopted a number of important documents which have an impact on civil society, such as the European Democracy Action Plan and the Strategy to Implement the Charter of Fundamental Rights.
- The new Citizens, Equality, Rights and Values Fund was included in the current Multiannual Financial Framework (2021-27) with a significant allocation of 1.55 billion Euros.
- The old-new idea of creating a European legal form of civil society organisation, enabling easier cross-border operation and, if needed, relocation, is also on the table.

At the same time, both the Conference on the Future of Europe and the Commission's annual Rule of Law report, while important, only include and address civil society in a marginal manner. All in all, many ongoing processes demand our attention; however, these initiatives came from different players and develop at their own speed, they still lack a systemic, comprehensive approach, and they view civil society as an instrument to achieve certain policy goals, e.g. in the field of gender or disabilities, rather than as an entity and a value in its own right. Therefore, we believe it is time that the EU institutions, the Commission in particular, adopt a dedicated civil society strategy/policy in order to raise attention and make the issue more visible on the political agenda. Such a strategy could combine ongoing initiatives and proposed future measures, and outline the common standards all Member States should adhere to in relation to their civil societies, as well as the main policy tools that would help not only to counter the trend of shrinking space but to create an enabling environment in which CSOs can fulfil their functions in maintaining social inclusion, constructive dialogue and a healthy environment. To this end, Ökotárs, with its partners from Central Europe, from Poland to Bulgaria, along with many other organisations, drafted a potential model for such a comprehensive EU civil society strategy, structured along six main prerequisites necessary for a healthy civic space:

- Freedom of association (legal environment)
- Freedom of peaceful assembly
- The right to operate free from unwarranted state interference and the state's duty to protect
- The right to free expression
- The right to cooperation and participation
- The right to seek and secure resources (funding)

The individual points and components of this set of recommendations may not sound very new or original, as they build on earlier proposals and work done by both international bodies and CSOs. But the long-term goal is for the model to serve as the basis of a much-needed official document (e.g. a Commission Communication), which would clarify the EU's position towards civil society and its functions, discuss to what end and how it would engage with organisations, set out which instruments and tools are available or will be developed to counter shrinking space, including key milestones, and explain how the Union will encourage Member States to implement similar measures at the national level.
We as civil society organisations are committed to this goal and hope to find open ears and partners in our quest to have a policy like this endorsed. The European Center for Not-for-Profit Law (ECNL) and Philanthropy Advocacy (Dafne & EFC) have published a joint Handbook on How to Use EU Law to Protect Civic Space.
https://philea.eu/an-eu-civil-society-strategy-why-do-we-need-it/
OBSERVED: I was driving north on Hwy 73 past West Olson Road. There is a recent clear cut on the north side of the road. I saw a large brown animal sitting on the north side of the road in the grassy area of the ditch. I got within about 120 yards of the animal and it stood up and took 3 steps to cross the road, heading south into the swampy area on the other side. I stopped the vehicle and you could see the grass on the north side pressed down where something had been. The animal was brown in color and, when standing, I estimate it was about 7 to 8 ft tall.

OTHER WITNESSES: I was the only one in the vehicle.

OTHER STORIES: No.

TIME AND CONDITIONS: 2:15 pm, partly cloudy but sunny, about 50 degrees.

ENVIRONMENT: Recent clear cut to the north with pine and poplar. The area at the east end of the clear cut is an ash swale, which is where I saw the animal.

Follow-up investigation report by BFRO Investigators Jim & Carol Telenko:

Phone interview: As the witness crested a small hill on Highway 73, traveling east, 2 miles before the intersection with Highway 53, he observed a large brown creature squatting in the swale on the north side of the road, about 125 yards ahead. At about 60 yards, the creature turned and looked at him, then pushed itself up to a bipedal position on very long legs and, with arms swinging and hands down to its knees, very quickly took 3 large steps across the highway (22 feet wide) and went into the swampy area that leads to a larger stand of woods. In the roughly 15 seconds he saw the creature, the witness observed that it had a flat face (no snout), a rounded head and no distinct neck, was approximately 7 - 7 1/2 feet tall, and he guesses 350 pounds with close to a 60 inch chest. The hair was 4-6 inches long, matted and dull looking, not smooth and shiny like a bear's. The witness stopped and got out of the vehicle; he saw where the creature had been squatting and the imprint where it pushed off. However, due to melting snow, the ground was too saturated for any distinct prints. He did not see any hair and didn't look for scat. Also, being a semi-avid hunter, he has observed several bears and has been close enough to smell their odor. He described the odor at the site as nothing like a bear's, but the foulest, most rancid body odor you could imagine, and it lingered for half an hour when he went back with his daughter to show her the site. The witness was very excited to submit this report. He sounded very convincing and believable, still has memorable dreams of the sighting, and is not afraid to talk to his friends about it. Per the witness, there is plenty of fresh water (including a small stream), large wooded and swampy areas, and a very large deer, moose and small game population, along with a variety of berries. Other predators include coyote, wolf and bear.
http://www.bfro.net/GDB/show_report.asp?ID=59346&PrinterFriendly=True