The invention relates to a dental prosthesis, such as a dental crown, an inlay, a veneer (facet) or a bridge, wherein at least the visible part of the prosthesis (the outside) is subjected to a material removing operation by means of a numerically controlled micro machine tool, and to a method for manufacturing such a dental prosthesis.

A method and an apparatus as defined in the preambles of claims 1 and 4, respectively, are known from US Patent No. 5,027,281, wherein a dental prosthesis is made from a solid block of material. Material is removed from said block by means of a numerically controlled micro milling machine. After the shape of the prosthesis has been determined and stored in the memory of a computer, the computer calculates the machining paths which the milling tool is to follow. Said machining paths are determined by means of flat, mutually parallel sections of the intended prosthesis. However, the machining path which the milling tool follows during the material removing operation may be a three-dimensionally curved machining path. In order to remove the traces which the final material removing operation leaves behind, the surface is polished, for example by means of a rotating brush.

US Patent No. 4,937,928 describes a method for manufacturing a dental prosthesis, wherein the prosthesis is formed on a model in the shape of the part of the teeth on which the prosthesis is to be provided. The prosthesis is produced by successively applying a number of layers of material on the model, whereby after the application of each layer the workpiece is worked by means of a numerically controlled machine tool. The machining paths which the tool follows during said operation are computed by means of a CAD/CAM system.

A drawback of the known methods is that either the machining paths must be spaced so closely that the individual paths on the final product are no longer visible, which is very laborious, or the workpiece must be subjected to an intensive polishing or grinding operation in order to obtain an aesthetically sound result. Such a polishing operation is usually carried out by hand.

An object of the invention is to provide a method for manufacturing a dental prosthesis, wherein an aesthetically sound dental prosthesis is obtained in an effective manner by carrying out a material removing operation by means of a micro machine tool. In order to accomplish this objective, the prosthesis is worked in such a manner that the machining path follows a three-dimensionally curved line, a portion of which follows the cusp line (which runs over the highest parts of the prosthesis) and/or the fossa line (the deepest part of the upper side of the prosthesis). In practice it has become apparent that when a dental prosthesis is subjected to a material removing operation, a machining path following a natural line of the prosthesis creates a natural-looking result, which is aesthetically even better-looking than a dental prosthesis with a polished surface. The three-dimensionally curved machining paths thus follow the natural lines of the surface of the teeth. A groove (fissure) is thereby produced in the tooth surface by having the tool follow the path of the groove during the material removing operation.
According to another aspect of the invention, the outer side of a prosthesis which is formed in the shape of a tooth can be worked by following substantially circular machining paths around the tooth, whereby the mutual distance of said machining paths varies locally. At local protrusions in the tooth surface, where the radius of the substantially circular machining path is relatively small, the mutual distance of the machining paths can be slightly increased, which gives the prosthesis a natural-looking appearance.

The outer surface of a dental prosthesis can be determined by taking a so-called library model as a starting point, i.e. making a selection from a number of typical models of teeth whose shapes are laid down, for example, in the memory of a computer. After a selection has been made with regard to the model which will serve as a starting point, the model in question will have to be adapted to the specific circumstances and the space which is available for the prosthesis. This may be done by hand by displaying the prosthesis in the shape of the library model on a computer screen, whereby the computer calculates and points out, on the basis of data of the teeth in question and the relevant jaw movements, at what locations the prosthesis needs to be adapted. Said adaptation may be effected by moving a pointer across the screen by means of a mouse and clicking the mouse at positions where problems occur, after which the computer calculates an adapted shape and displays it on the screen. These operations may be repeated until a shape of the prosthesis is obtained which satisfies all requirements. The machining paths associated with this shape, needed to manufacture the prosthesis by means of a material removing operation, may then be computed by the computer and transferred to a numerically controlled machine tool.

It is also possible to take the shape of the corresponding part of the teeth in the other jaw half as a starting point for the shape of the prosthesis. This shape may be registered by means of a three-dimensional scanner, be mirrored and subsequently recorded in the memory of the computer. Said scanning generally takes place by repeatedly determining, in a flat plane, the line of intersection with the surface of the teeth. The amount of data required for recording a surface scanned in such a manner can be considerably reduced by converting said data into data recording the intersection points of a network of lines. The number of these lines, and thus the amount of data, can be further reduced by having said lines follow the shape of the surface of the teeth, i.e. placing lines in particular in grooves that are present. This may take place by displaying the surface obtained by means of the scanner on a computer screen, whereby a pointer is subsequently clicked at characteristic locations in a groove by means of a mouse, after which the computer carries out a conversion computation, whereby intersection points of lines are put at the indicated places. In practice it has become apparent that in this manner it is possible to record the shape of a dental crown with a limited number of intersection points, for example 500 - 2000. According to the invention a dental prosthesis may be formed in that, during the material removing operation, the tool follows the lines by means of which the computer has recorded the shape of the prosthesis in the above-described manner.
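The data-reduction step described above, converting raw scan slices into the intersection points of a network of lines, can be illustrated with a short sketch. This is a minimal illustration under assumed conventions, not the patented algorithm: the slice format, the grid resolution and the helper names (resample_slice, reduce_scan) are all choices made for the example.

```python
import numpy as np

def resample_slice(points, n_samples):
    """Resample one planar scan slice (an ordered polyline of 3D points)
    to a fixed number of points, equally spaced along its arc length.
    `points` is an (N, 3) array; returns an (n_samples, 3) array."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # arc-length parameter
    t = np.linspace(0.0, s[-1], n_samples)               # equally spaced targets
    return np.column_stack([np.interp(t, s, points[:, k]) for k in range(3)])

def reduce_scan(slices, lines_along=40, lines_across=40):
    """Reduce a stack of parallel scan slices to a lines_along x lines_across
    network of intersection points (40 x 40 = 1600 points, inside the
    500 - 2000 range mentioned in the text)."""
    rows = [resample_slice(sl, lines_across) for sl in slices]
    grid = np.stack(rows)                                # (n_slices, across, 3)
    # Resample in the other direction as well, so the result is a regular grid.
    columns = [resample_slice(grid[:, j, :], lines_along) for j in range(lines_across)]
    return np.stack(columns, axis=1)                     # (along, across, 3)
```

On the resulting grid, the rows and columns can serve directly as machining paths; snapping particular grid lines to fissures, as described above, would replace the uniform resampling with operator-selected anchor points.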
The invention furthermore relates to a dental prosthesis, of which at least the visible outside has undergone a material removing operation over its entire surface by means of a numerically controlled micro machine tool, whereby the machining path of the tool has followed a three-dimensionally curved line, whereby machining paths are visible on the surface of the prosthesis and whereby a portion of the machining path of the material removing tool follows the cusp line and/or the fossa line of the prosthesis.

Figure 1 shows a dental crown, on which machining paths are indicated; and
Figure 2 shows a dental crown which is undergoing a finishing operation.

In order to more fully explain the invention, embodiments will be described in more detail hereafter with reference to the drawing. Although a dental crown has been taken as an example, the invention similarly relates to other dental prostheses, such as an inlay, a facet or a bridge. The Figures are merely diagrammatic illustrations, showing proportions deviating from the actual proportions for the sake of clarity.

Figure 1 shows a dental crown, which is bounded by the preparation line 1 at the bottom side. The dental crown has undergone a material removing operation by means of, for example, a pointed burr 2 and a disc cutter 3. These tools are shown merely diagrammatically and not in proportion. The dental crown is shown to have machining paths 4, i.e. the paths along which the material removing operation has taken place. The machining paths 4 are likewise shown diagrammatically and not in proportion. In practice the mutual distance of the machining paths will be, for example, 0.02 - 0.2 mm. A number of machining paths 4 follow a characteristic path of the crown, such as the equatorial line 5 (which constitutes the circumference of the crown in vertical projection), the so-called "cusp line" 6 (which runs over the highest parts of the crown) and the so-called "fossa" 7 (the deepest part of the upper side of the dental crown). The above-mentioned characteristic lines, as well as other characteristic lines of the dental crown, can be established by displaying the dental crown, whose shape is recorded in the memory of a computer, on a screen, whereby a pointer is moved across the screen by means of a mouse, whilst the mouse is clicked when the pointer is located in characteristic positions. The characteristic positions and lines may also be established by means of calculations by the computer, since said lines and positions are determined by the shape of the dental crown stored in the computer. As is apparent from Figure 1, the machining paths extend in an irregular-looking pattern, which is likewise shown out of proportion. As a result of this irregular pattern the dental prosthesis has a natural appearance when fitted in the patient's mouth.

Figure 2 shows a dental crown in elevational view, said dental crown at its bottom side being bounded by the preparation line 1 and exhibiting a number of primary and secondary fissures 8 at its upper side. Said fissures 8 constitute characteristic lines at the upper side of the dental crown and may form the machining path for e.g. the pointed burr 2 during the material removing operation of the crown. This milling operation may form part of the operation explained with reference to Figure 1. However, said milling may also be a finishing operation, which is carried out after the final form of the prosthesis has been substantially achieved.
The invention is not restricted to the embodiment described. Many variations, for example as regards size and materials used, are possible when using the invention.
Windows 7 Does Not Support Systems With Display Resolution Lower Than 600 Pixels If you are going to buy a notebook or PC, you should consider the display resolution of the system. Microsoft has recently posted that Windows 7 and Windows Vista reject display resolutions lower than 600 pixels. So if you want to install the Windows 7 or Vista operating system on your computer, make sure that the display resolution is higher than 600 pixels vertically. Technically, the Windows 7 operating system refuses all the changes that you make to the DPI (dots per inch) setting at lower resolutions. Therefore, you need to check the resolution of your system before installing Windows 7 or Vista. The screen resolution must be higher than 600 pixels vertically in order to run Windows 7 on your computer properly. When you alter the DPI settings from "Control Panel >> Personalization >> Adjust font size" on Windows Vista, or "Control Panel >> Display" on Windows 7, you might notice that the required change does not get implemented correctly. The reason is that if the display resolution of your system is lower than 600 pixels, the changes will not be applied correctly. In order to avoid these kinds of disturbances and interruptions, users are advised to buy a device which has a display resolution of more than 600 pixels vertically.
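If you want to check your vertical resolution against this limit before fiddling with DPI settings, one quick way on Windows is to query the primary screen height via the Win32 GetSystemMetrics call. A minimal Python sketch follows (Windows only; the 600-pixel threshold is the one from the post above):

```python
import ctypes

SM_CYSCREEN = 1  # Win32 constant: height of the primary display, in pixels

def vertical_resolution():
    """Return the vertical resolution of the primary display (Windows only)."""
    return ctypes.windll.user32.GetSystemMetrics(SM_CYSCREEN)

if __name__ == "__main__":
    height = vertical_resolution()
    if height < 600:
        print(f"Vertical resolution is {height}px: DPI changes may not apply correctly.")
    else:
        print(f"Vertical resolution is {height}px: above the 600px threshold.")
```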
http://www.techbuzz.in/windows-7-does-not-support-system-with-display-resolution-lower-than-600-pixels.php
Clanwood Organic Farm have selected produce that they sell and distribute nationally - Angus beef, free range eggs and a variety of soups. Their main produce comes from the Angus cow and from free range hens. My brief was to design a logo that communicated those values. I developed an intricate design combining the silhouettes of the Angus cow and the hen. They fit perfectly together while staying true to their own shapes and representation; I didn't want either animal to compromise to fit the other. This design took a great amount of time, in order to maintain the realistic details while also keeping the simple form it needs to function as a logo. - Uploaded by: - AngieM - Uploaded on: - Tue, 11/13/2012 - 19:43 Designer info - Designer(s): - Angela Mahon - Designer's Company: - Brand Strategy & Design Ltd T/A Designhub - Designer's Website: - http://www.designhub.ie
https://www.brandsoftheworld.com/award/clanwood-organic-farm
Please contact our Customer Service at (954) 303-6361 within 7 days of delivery to get a return authorization number. For a full refund, the product must be returned in its original packaging. You will be responsible for the shipping costs related to the return of your order. If you have requested assembly services, you are still responsible for retaining the original packaging, so be sure to save it. - Package is delivered directly to the room of your choice. - The furniture is unpackaged and trash is removed by our service provider. - Delivery up to 2 flights of stairs is included. - Customer signature required. Please make sure to notify the delivery provider of any damage, or of your desire to return your order, at the time of delivery. Returns will only be accepted if the product is in unused, new condition. Products that have damages, scratches, assembly marks, etc. will not be eligible for return. - At the time of delivery, the carrier will provide you with a receipt and a Bill of Lading. Both are your record of shipment, an agreement of delivery, and confirmation of the condition of your item. - Thoroughly inspect your cargo at the time of delivery and before you sign any documents. Do not sign the Bill of Lading until you have examined each item. - Returned items will be inspected within 3 days of receiving the merchandise into our warehouse, to ensure that they are in new condition. - Any item returned in new condition with its original packaging will incur a return shipping fee, which will be deducted from the refund amount. - Any item returned or cancelled in new condition but without its original packaging will incur up to a 35% restocking fee plus a return shipping fee, which will be deducted from the refund amount. - Damages incurred during return shipping due to insufficient packaging will not be eligible for a refund. - The original sale shipping or delivery charges are non-refundable. - After we have inspected the returned goods, it generally takes 7 to 10 business days for a refund to be credited back to you. - For any assistance, call our Customer Service Department at (954) 303-6361, Monday through Friday, 9:30 am to 6 pm Eastern time. VENINI will call you 24-48 hours before delivery to arrange a delivery time, which will vary depending on the freight carrier's schedule. Whatever information you can give during the phone call will be passed on to the driver to help the transport of your purchase. - Delivery teams are only able to deliver Monday-Friday during standard business hours, and they provide a 2 to 4 hour delivery window. - Inform a VENINI representative if you have any circumstances that could affect the delivery (additional fees may apply): - Very narrow driveway - Dead-end street - Ferry for island locations, etc. (fees may apply) - The driver will need to park in a specific place - Delivery to a side door or garage entrance - Please make sure not to miss your delivery appointment, or you will incur additional delivery fees such as re-delivery, storage charges and/or return shipping costs. All items purchased from our clearance section, or items marked as Final Sale or Floor Samples, cannot be returned. FINAL SALE: NO REFUND. ALL DAMAGED MERCHANDISE MUST BE REPORTED WITHIN 48 HOURS AND MUST BE IN ITS ORIGINAL FACTORY PACKAGING. Reporting damages on your order: We package our products to remain intact during transit to your home; however, sometimes things can occur that are beyond our control.
If, in the unlikely event, your item arrives in less than perfect condition, please call us at (954) 303-6361 or email us at [email protected] within 3 days of receiving the product and we will do our best to make it right. All freight is delivered to the curb or end of the driveway outside the requested delivery address. - You will be responsible for transport beyond that point. - Orders will arrive in the original packaging. - Customer signature is required.
https://veninifurnitureoutlet.com/pages/shipping-returns
French hoverboard inventor flies over the English Channel He stopped only once, on the British side, to refuel his futuristic invention from a boat in the choppy waters. By Thomas Adamson and Jason Parkinson The Associated Press August 4, 2019 - 5:08 pm Franky Zapata, a 40-year-old inventor, takes to the air in Sangatte, northern France, at the start of his attempt to cross the Channel from France to England aboard his flyboard, Sunday, Aug. 4, 2019. Zapata will try again Sunday to traverse the English Channel on a flying board after his first attempt failed when he crashed into a refueling boat 20 kilometers (12 miles) into the trip. (AP Photo/Michel Spingler) ST. MARGARET’S BAY, England — Is it a bird? A plane? No, it’s a French inventor flying over the English Channel on his hoverboard. Looking like a superhero, Franky Zapata successfully completed the famed 35-kilometer (22-mile) journey in just 22 minutes Sunday morning, reaching speeds of up to 177 kilometers per hour (110 mph) on the flyboard that has made him a French household name. Propelled by a power pack full of kerosene, Zapata set off from Sangatte in France’s Pas de Calais region and landed in St. Margaret’s Bay, beyond the white cliffs of Dover, in southeast England. He stopped only once, on the British side, to refuel his futuristic invention from a boat in the choppy waters. “I’m feeling happy … It’s just an amazing moment in my life,” he said in English following his touchdown in Britain. “The last 10% (of the flight) was easier … because I had the time to look at the cliffs.” It was, of course, the record for such a trip: no one else has tried to cross the Channel in this way. It was also a personal record — the furthest distance that the 40-year-old, who drew nationwide attention after whizzing above European leaders in Paris at Bastille Day celebrations, had ever traveled atop his hoverboard. The wind in the Channel, especially gusts, presented a major challenge, he said, adding that he bends into gusts but is destabilized if the wind quickly dies. It was, he acknowledged, no easy feat — especially given the physical endurance it requires. He said his leg muscles were “burning” during the flight. “Your body resists the wind, and because the board is attached to my feet, all my body has to resist to the wind,” he told reporters. “I tried to enjoy it and not think about the pain.” Witness Mark Kerr, a 60-year-old hospital librarian from Dover, said it was quite an unusual sight. “Spectacular and amazing. Not every day you see a man standing up, flying across the Channel, being chased by three helicopters,” he said. Rosie Day, a 17-year-old at the British landing site, was impressed by Zapata’s flying skills. “I was surprised by how quick he was. It was really impressive how fast he came in and the agility of his movements,” she said. “He was very smooth.”
A sharp increase in the population of Christians in this ancient port town has led to a piquant situation, with families finding it tough to find a place to bury their dead. Humongous growth According to the 2011 census, the total number of Christians in the town was 7,060, which has since grown almost five-fold to nearly 35,000. Since the early 1800s, the town has had two cemeteries: the 15-acre one at St. Mary’s Church (now under the Church of South India) and another in the Bandarkota area under the RCM (Roman Catholic Mission). Dust to dust, but... Members of at least a dozen major churches under the CSI and RCM have their dedicated cemeteries. But nearly 70 other churches - each with at least 200 believers - do not have a cemetery in which to bury their beloved. "For thousands of Christian families in Machilipatnam, death is no more a painful event, compared to our struggle to find a place to lay to rest our beloved. We have buried about 300 people in the last two years, wherever we found place at the moment. Many, especially orphans, are being buried in faraway places with the consent of the locals," former Municipal Ward Councillor B. Thomas Noble told The Hindu. Mr. Noble has been repeatedly urging successive governments to grant land for cemeteries for the community. For members only Eight years ago, the CSI passed a resolution not to allow non-members to bury their dead in the St. Mary’s cemetery, putting an end to a six-decade practice. Known as "South India’s Taj Mahal," the church was built by Major General John Peter in memory of his love, Arabella Robinson, in the 1800s. "The CSI had to take the decision given the acute scarcity of space," Cemetery Chaplain and Vicar (CSI) C.L. Jasper said. Booking in advance Ironically, dozens of families under the CSI have reserved a piece of land in the St. Mary’s Church cemetery by paying ₹4,500 or more per person. In the case of the RCM cemetery in Bandarkota, a major portion of the site is in the clutches of encroachers. "Thousands of Christian families are unable to guarantee a respectful last journey for their dead. Social tensions within the Church groups and other walks of religious life may arise in coming years if the government fails to swing into action," senior journalist Johnson Jakob feared. Locals put up legal hurdles Since 2016, the Christian community has been given two sites for developing cemeteries. "Both the sites are being challenged by locals in courts. A feeling has developed that the government of the day has betrayed us," said Mr. Noble. Further, "The locals did not allow us to use the two sites - the two-acre one in the Bandarkota area and 63 cents in ward 42 - for which a legal battle is on," he added. Mr. Noble argues that the Machilipatnam Municipal Corporation or the State Minority Corporation could offer a piece of land for a cemetery. In response, the Y.S. Jagan Mohan Reddy-led government has promised to purchase 10 acres to be developed into a Christian cemetery.
https://www.thehindu.com/news/cities/Vijayawada/in-search-of-a-decent-final-resting-place/article28871669.ece
. (English). Aplikace matematiky, vol. 22 (1977), issue 3, pp. 199-214. MSC: 35A05, 76B47, 76U05 | MR 0436748 | Zbl 0373.76022 | DOI: 10.21136/AM.1977.103693 Summary: The problem mentioned in the title is studied with the help of the stream function and transformed to a boundary value problem for a quasilinear equation. The existence of the solution is proved and the problem of the uniqueness of the solution is discussed.
https://dml.cz/handle/10338.dmlcz/103693
PURPOSE: To obtain a pointing device which facilitates the operation of positioning a pointer on a display screen and of generating a select signal, and which is operated with the feet. CONSTITUTION: The device has a pair of input parts, each of which has a movable body 4 on which a foot is placed, a converter, and a switch. The movable body 4 is angularly displaced clockwise or counterclockwise by a motion of the foot in which the toes are rotated clockwise or counterclockwise around the shank or heel. The converter converts the angular displacement or angular position of the movable body 4 into an electric signal. The switch is operated directly or indirectly with the toes or heel to perform ON/OFF control over the generation of the select signal or the input of coordinate data. One input part inputs horizontal coordinate data and the other inputs vertical coordinate data. COPYRIGHT: (C)1996,JPO
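As a rough illustration of the scheme in this abstract (not the patent's actual implementation), the sketch below maps the angular positions of two foot-operated movable bodies to pointer coordinates, with the switch gating whether coordinates are applied. The function names, angle range and screen sizes are all assumptions made for the example.

```python
def angle_to_coordinate(angle_deg, angle_range=(-30.0, 30.0), axis_size=1920):
    """Map the angular position of one movable body to a screen coordinate.

    angle_deg   -- measured angular position of the pedal, in degrees
    angle_range -- assumed usable rotation range of the pedal
    axis_size   -- screen size along this axis, in pixels
    """
    lo, hi = angle_range
    angle_deg = max(lo, min(hi, angle_deg))          # clamp to the usable range
    fraction = (angle_deg - lo) / (hi - lo)          # 0.0 .. 1.0
    return round(fraction * (axis_size - 1))

def read_pointer(x_angle, y_angle, switch_closed):
    """Combine the two input parts: one pedal per axis; the switch gates input."""
    if not switch_closed:                            # switch OFF: no coordinate input
        return None
    x = angle_to_coordinate(x_angle, axis_size=1920) # horizontal input part
    y = angle_to_coordinate(y_angle, axis_size=1080) # vertical input part
    return (x, y)

print(read_pointer(0.0, 15.0, switch_closed=True))   # -> (960, 809)
```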
The English ACT test is not simply based on grammar and writing skills. Instead, the ACT exam tests your editing skills. It presents you with five distinct passages and then measures your ability to fix grammatical and punctuation errors and to improve the passages' style and organization. It may sound daunting, but do not worry, because we are here to assist. To help you, we will discuss what to expect on the day of your ACT exam and the strategies you need to score higher. The English ACT test contains 75 questions with a time limit of 45 minutes. To achieve a high score, you will be required to understand correct usage, sentence structure, and punctuation in standard English. The test also measures your command of the language, like the choice of words, tone, and style. Read on as we conduct a brief overview of the ACT: Overview Of The Test As mentioned, the ACT test's English section contains five passages, each followed by a series of multiple-choice questions. Some of these questions concern a passage's underlined portions, offering multiple alternatives to the part underlined. You have to decide on the most appropriate choice within the context of the passage. Others ask about a section of a passage, an underlined portion, or the whole passage. You will once again be pressed to select the choice that best answers the question. All the questions are numbered sequentially. Each question number corresponds to an underlined portion with the same number, or to a corresponding number in a box at the appropriate point in the passage. Each passage presents a different rhetorical situation. The passages are chosen for their relevance in assessing writing skills and for reflecting students' experiences and interests. Remember that there is no testing of vocabulary, spelling, or rote recall of grammar rules. Four scores are reported for the ACT English exam: a total test score based on all 75 questions, plus scores for the three reporting categories. Taking ACT English Test – 12 Practice Tips Here are 12 tips to prepare for your ACT English test: Set Your Pace If you spend one and a half minutes going through each passage before answering the questions, you will only be left with 30 seconds to answer every question. Try to spend less time on every question, and use the remaining time to review your work and revisit the most challenging questions. It is also advisable to take an ACT English practice test beforehand to be better prepared. However, that does not mean you should rush. Make sure that you have read and understood the questions and sentences before marking the answers. The better strategy is to guess at the end, or to skip time-consuming questions, rather than rushing through the questions and making careless mistakes. Awareness Of The Writing Styles Used The five passages cover various topics spanning diverse writing styles. Therefore, you must take into account each passage's distinct writing style. Additionally, when responding to a question, aim to understand the context of what is being asked. Consider how the sentence fits in with the underlined portion and the surrounding text. Study The Underlined Portions Before responding to a question with an underlined portion, analyze the underlined text. You must consider the writing elements encompassed in the underlined part.
There are two basic approaches that you will be required to take when answering questions: - Some questions ask for the best answer based on a particular writing element, like emphasis or tone - Other questions ask you to choose the alternative to the underlined section that is LEAST acceptable The answer choices for each question will include changes in one or more writing elements. Awareness Of Questions Without The Underlined Portions Not all questions refer to an underlined part. Some questions are about a section or the whole passage, considering the rhetorical situation presented. These questions are identified through a boxed number located at an appropriate point in the passage. Questions on the complete passage are positioned towards the end and introduced by a horizontal box with the instruction stating, "Questions ___ and ___ ask about the preceding passage as a whole." Note Differences In The Choices Every test question tackles multiple writing aspects. Go through each answer choice and observe how it differs from the others. Be cautious not to select an answer that corrects one error but introduces a different one. Ambiguity of context is how the test tempts candidates into choosing the wrong answer. Conclude The Best Answer When a question asks you to choose the best alternative to an underlined portion, you can use one of two approaches: - Reread the sentence, substituting each possible answer choice for the underlined portion, one by one - Decide how the underlined portion would best be phrased according to standard written English, then look for that phrasing If the underlined portion is already the best answer, choose "NO CHANGE." If it is not, see whether the way you would phrase it is one of the other choices. If you cannot find your phrasing, choose the best answer presented. For the questions indicated by a boxed number, decide the most appropriate choice for the situation provided or the question posed. Use Basic Grammatical Rules To Answer Grammar Questions In the English ACT test, you must depend on your knowledge of basic grammatical rules to answer the grammar-related questions. Do not play it by ear or follow gut instinct about what sounds right, except in idiom questions. Many correct sentences may sound wrong to you, and vice versa. Therefore, ensure that you are well aware of grammatical guidelines. Reread The Sentence Using The Answer You Selected Once you have selected the answer that you feel is best, reread the sentence, inserting your chosen answer at the suitable place. Rereading it a few times will help you decide whether your choice fits the required context. Remember the 4 C's Keep in mind the 4 C's: - Complete sentences: Beginning and ending a train of thought shows mastery of context - Consistent: Everything has to be consistent - Clarity: The meaning of the answer must be clear - Concise: The best answer will be the most concise one Even when you are having difficulty deciphering a complex question, apply the 4 C's to efficiently eliminate answer choices. The Answers Speak The answer choices often give away significant clues about the question being asked. Do you see a change in the words or punctuation? Pay complete attention to what is changing and what is staying the same in the answers to figure out potential errors.
If you hire a private tutor, you can practice looking for clues within the given answers through extensive trial and error. Be Cautious of Run-On Sentences Comma splices are a typical mistake in writing, so they might not look like mistakes to you in an ACT English test. A comma splice is a particular type of run-on in which a comma joins two independent clauses. Keep in mind that an independent clause can stand on its own as a whole sentence. A comma splice can be fixed by adding a FANBOYS conjunction, by making one of the clauses dependent, or by changing the comma to a semicolon. And if you don't know what a FANBOYS conjunction is: it is an assistive acronym that stands for For, And, Nor, But, Or, Yet, and So. Only Answer The Question Being Asked Even though this tip is quite obvious, it is particularly helpful when answering rhetorical questions. Each question asks a specific thing. Focus on the answer choice that best answers the question, instead of choosing one that merely seems possible or sounds right. It is common for students to be confused by rhetoric questions. Since all the choices seem right, they randomly pick one that sounds complex and formal. Focus on the question's wording. The correct answer will be the most precise and vivid. Start Preparing For The ACT Today If you're stressing about the fast-approaching ACT date, you can do one of two things: - Spend every waking moment preparing for it, so you're not nervous going into it - Hire a private tutor to ease your stress and ensure an excellent grade The latter is quite convincing, especially if you use Superprof. We have a directory of qualified ACT and SAT instructors who will help you get the best grade possible! Sign up today for an ACT English teacher who will shape the curriculum while keeping in mind your strengths and weaknesses!
https://www.superprof.com/blog/preparing-act-english/
SACRAMENTO, Calif., Nov. 18, 2013 – Aerojet Rocketdyne, a GenCorp (NYSE: GY) company, played a major role in successfully placing NASA’s Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft on its journey to determine how the loss of atmospheric gas may have changed the Martian climate over time. The mission was launched from Cape Canaveral Air Force Station aboard a United Launch Alliance (ULA) Atlas V rocket, with an RL10A-4-2 upper-stage engine, helium pressurization tanks, and a dozen Centaur upper-stage thrusters used for roll, pitch, yaw and settling burns. “Launching a science spacecraft of this type onto its proper path to study Mars helps contribute to our understanding of the workings of our solar system,” said Steve Bouley, vice president of Space Launch Systems at Aerojet Rocketdyne. “Everyone involved in this launch should be proud of the role they played to ensure yet another successful mission.” Once in space, a single RL10A-4-2 engine ignited to place MAVEN on course to the red planet. The workhorse RL10A-4-2 engine delivers 22,300 pounds of thrust to power the Atlas V upper stage, using cryogenic liquid hydrogen and liquid oxygen propellants during its operation. ARDÉ, a subsidiary of Aerojet Rocketdyne based in New Jersey, provides the pressure vessels on the first and second stages of the launch vehicle. Twelve Aerojet Rocketdyne monopropellant (hydrazine) thrusters in four modules on the Atlas V Centaur upper stage provided roll, pitch and yaw control as well as settling burns for the upper stage. MAVEN will be the first spacecraft mission dedicated to surveying the upper atmosphere of Mars. It is a robotic exploration mission designed to understand the role that loss of atmospheric gas to space played in changing the Martian climate over time. This will help determine when and for how long liquid water could have been stable on the surface, which has implications in answering the question of whether Mars could ever have harbored life. Once separated from the launch vehicle, MAVEN will use eight MR-103G 0.1 lbf thrusters and six MR-106L 5 lbf thrusters for in-flight maneuvers, and six MR-107S 50 lbf thrusters for Mars Orbit Insertion. MAVEN will arrive at Mars in September 2014. Aerojet Rocketdyne is a world-recognized aerospace and defense leader providing propulsion and energetics to the space, missile defense and strategic systems, tactical systems and armaments areas, in support of domestic and international markets. GenCorp is a diversified company providing innovative solutions to its customers in the aerospace and defense, energy and real estate markets. Additional information about Aerojet Rocketdyne and GenCorp can be obtained by visiting the companies' websites at www.Rocket.com and www.GenCorp.com.
https://www.rocket.com/article/aerojet-rocketdyne-supports-successful-launch-nasa%E2%80%99s-maven-spacecraft-study-effects
How To Make Kettle Corn. One of our favorite recipes: sweet, simple and full of flavor! Ingredients for Kettle Corn - 1/4 cup vegetable oil - 1/4 cup white sugar - 1/2 cup unpopped popcorn kernels Steps to Make Kettle Corn Heat the vegetable oil in a large pot over medium heat. Once hot, stir in the sugar and popcorn. Cover, and shake the pot constantly to keep the sugar from burning. Once the popping has slowed to once every 2 to 3 seconds, remove the pot from the heat and continue to shake for a few minutes until the popping has stopped. Pour into a large bowl, and allow to cool, stirring occasionally to break up large clumps.
https://www.howtomakedeliciousfood.com/how-to-make-kettle-corn-recipe.html
More material is available from this program at the WGBH Archive. If you are a researcher interested in accessing the collection at WGBH, please email [email protected]. Undigitized item: Request Digitization Untranscribed item: Request Transcription - Series - Market Warriors - Program - Antiquing in Greenwich, NY - Program Number 117 - Series Description Premiered July 16, 2012. NOTE: Fred Willard was replaced by Mark Walberg as the series' off-camera host after episode 101. Market Warriors is a series in which expert antiques "pickers" embark on a nationwide treasure hunt, scouring flea markets for vintage valuables and selling their finds at auction with an eye toward maximizing profit. These pickers aren't your amateur weekend flea-market hobbyists; they are pros, here to compete against each other in a competition that takes us on a cross-country adventure. In each episode, four pickers travel to different market locations across the country to purchase certain items with a set amount of money. We will learn not only about different objects and their history; we'll also see the competitors apply their knowledge and skills to real financial transactions. Through individual interviews we will get to know our pickers and learn about what they bought and why. We will then discover the current value of the objects as only the marketplace can determine. The sale of our pickers' items takes place at an auction on a different day, in a different location. Before the auction, we will hear our auction house experts appraise each of these items, and their estimates won't always jibe with the dollar value our pickers have attached to these objects. All four competitors watch the auctioneer take charge of the bidding, and the picker whose objects make the highest total profit at auction is the winner of that episode. Scoreboard: Who's been the big winner this season? 1st: Kevin Bruneau with 10 wins 2nd: Bob Richter with 4 wins 3rd: Miller Gaffney with 3 wins 4th: John Bruno with 2 wins 5th: Bene Raia with 1 win Series release date: 07/16/2012 - Program Description The pickers head up the Hudson River to Greenwich, New York, to the Washington County Antique Fair, where twice a year the fairgrounds host more than 200 vendors and thousands of shoppers. This week the pickers have a target assignment to bring back a military object, and one person's change of mind may cost that person the win. Off-screen host Mark L. Walberg observes some key finds, which include a Victorian birdcage, a WWII water bag and a German pull toy. The winning picker is determined at Quinn's Auction Galleries in Falls Church, Virginia, where the chosen items go under the hammer. Auction House: Quinn's Auction Gallery, Falls Church, VA Shopping Location: Washington County Antique Fair, Greenwich, NY Winner: Kevin Bruneau - Duration 00:56:40 - Asset Type Broadcast program - Media Type Video - Genres - Game Show - Topics - Antiques and Collectibles - Creators - Bemko, Marsha (Series Producer) - Citation - Chicago: “Market Warriors; Antiquing in Greenwich, NY,” 02/18/2013, WGBH Media Library & Archives, accessed December 14, 2018, http://openvault.wgbh.org/catalog/V_1250E89E90F449949F0B3D8E7409DFB8. - MLA: “Market Warriors; Antiquing in Greenwich, NY.” 02/18/2013. WGBH Media Library & Archives. Web. December 14, 2018. <http://openvault.wgbh.org/catalog/V_1250E89E90F449949F0B3D8E7409DFB8>.
http://openvault.wgbh.org/catalog/V_1250E89E90F449949F0B3D8E7409DFB8
Here's a collection of images I've taken with the Nikon 14-24mm f/2.8G lens to give an idea of how it performs under real-world shooting conditions. This lens is excellent optically. It has a fast maximum aperture of f/2.8 that's constant throughout the zoom range. And it has a very wide angle of view, from 114° at its widest (14mm) to 84° zoomed in to 24mm (that's on an FX full-frame camera; on a DX camera it's 90° to 61°). It's a rectilinear lens to minimize the distortion that you get from such a wide angle of view. It doesn't have vibration reduction, but that's normal for wide-angle lenses. It does have ruggedized construction and is built to be resistant to dust and moisture. My biggest complaint about this lens is how big and heavy it is--2.2 pounds (or 1 kg). That's not much if you're talking about long telephotos, and indeed there are heavier lenses like the 24-70mm, but for a wide-angle lens that isn't quite versatile enough that I'd be happy leaving it on my camera most of the time, that's hefty. It's also not going to be a concern for everyone, but I like to travel light, and packing this lens forces me to leave out some of my other favorites. One other thing worth mentioning about this lens is that because of the heavily curved front glass element, you can't use the usual screw-on filters with it. Here are some sample images I've shot with this lens. The shooting info is in the caption for each shot, and you can click on the images to open full resolution versions if you want a closer look. Most of these are travel photography shots, but I've sometimes used this lens for architectural and location clients when they want a particularly dramatic look, often for their social media campaigns, and aren't concerned about the converging verticals that would otherwise lead me to use a tilt-shift lens. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 24 mm and f/3.2. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/8.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 24 mm and f/20. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 24 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 24 mm and f/4.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 18 mm and f/5.6. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/11. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/5.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/9.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 23 mm and f/5.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/4.5. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 15 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 24 mm and f/6.3. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/5.6. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 22 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 17 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 16 mm and f/8.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 14 mm and f/10.
Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 23 mm and f/8.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 19 mm and f/11. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 15 mm and f/8.0. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 18 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 16 mm and f/2.8. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 24 mm and f/5.6. Photo taken with a Nikon 14-24mm f/2.8 lens on a NIKON D800 at 19 mm and f/9.0. Nikon's MSRP for this lens is $1,899.95. You can find them at B&H Photo, Adorama, and Amazon.
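The angle-of-view figures quoted in the intro (114° at 14mm and 84° at 24mm on FX; roughly 90° to 61° on DX) follow from the standard rectilinear formula AOV = 2·atan(d / 2f), where d is the sensor diagonal and f the focal length. A quick check in Python (the sensor diagonals used here are approximate, which is why the DX value comes out about a degree off):

```python
import math

def angle_of_view(focal_mm, diagonal_mm):
    """Diagonal angle of view, in degrees, for a rectilinear lens."""
    return math.degrees(2 * math.atan(diagonal_mm / (2 * focal_mm)))

FX_DIAG = 43.3   # full-frame (36x24mm) diagonal, mm
DX_DIAG = 28.4   # Nikon DX (approx. 23.5x15.7mm) diagonal, mm

for f in (14, 24):
    print(f"{f}mm: FX {angle_of_view(f, FX_DIAG):.0f} deg, "
          f"DX {angle_of_view(f, DX_DIAG):.0f} deg")
# 14mm: FX 114 deg, DX 91 deg
# 24mm: FX 84 deg, DX 61 deg
```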
https://havecamerawilltravel.com/photographer/nikon-14-24mm-f2-8g-lens-sample-images/
Alibaba vs. eBay: Competing in the Chinese C2C market (C) This four-part case series follows Taobao, part of the Alibaba Group – the leading business-to-business (B2B) e-commerce company in China – from 2003 to 2006. The A-case begins in April 2003 as Jack Ma, the 39-year-old CEO of Alibaba, worries. According to the latest market data, eBay, the global leader in online auctions, had reached almost total dominance of the local consumer-to-consumer (C2C) market – only about a year after its entry into China. While Alibaba was not active in the Chinese C2C market, and eBay’s expansion did not impact its current business, looking forward there was cause for concern. What if eBay decided to use its C2C power to attack Alibaba in its traditional B2B domain? Jack wondered if anything could be done to prevent this from happening. The B-case documents how Taobao competes with eBay in China’s C2C market. It describes Taobao’s strategy to increase its market share. The C-case discusses Taobao’s business model. Although Taobao gained market share at the expense of eBay, the question remains: what business model should it adopt so that it can make money in the future? The D-case is an update on the competitors’ strategies and the events that took place between 2006 and 2010. Learning objectives: - How to compete in China’s B2C and C2C internet market - How local Chinese companies can compete with multinational corporations - How to build a unique organizational culture Setting: 2003-2006 IMD retains all proprietary interests in its case studies and notes. Without prior written permission, IMD cases and notes may not be reproduced, used, translated, included in books or other publications, distributed in any form or by any means, stored in a database or in other retrieval systems. For additional copyright information related to case studies, please contact Case Services.
https://www.imd.org/research-knowledge/case-studies/alibaba-vs-ebay-competing-in-the-chinese-c2c-market-c/
The gut is an amazing place! Did you know that: The human gut is home to 1-2 kg of bacteria - that is the same as 1-2 bags of sugar! There are 1000 times as many bacteria in our gut as there are stars in the Milky Way (100 trillion vs 100 billion). There are 3 times as many phages (bacterial viruses) in our gut as there are bacteria. There are over 400 species of bacteria in the gut. The surface area of your intestine is greater than that of a tennis court. Food takes between 24-36 hours to pass through the gastrointestinal tract. We produce between 1-1.5 litres (a large mineral bottle) of saliva in our mouths each day. Saliva helps us swallow our food and starts the process of digestion. To swallow food we use 22 muscles, and we can even swallow standing on our heads. The stomach can hold anything from 50 ml to 4 litres of liquid. The stomach is full of a very strong acid which would cause a serious chemical burn to the skin. The liver is the body's largest gland. Newborn babies pick up bacteria during their births, and during the first few days of their lives their gut becomes colonised by bacteria. 100% of people have bacteria in their guts. World Digestive Health Day is on May 29th each year. ©2004 Alimentary Pharmabiotic Centre, University College Cork, Ireland.
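The bacteria-versus-stars comparison can be checked directly from the figures given in the list; a one-off check in Python, using the 100 trillion and 100 billion quoted above:

```python
bacteria_in_gut = 100e12    # 100 trillion, figure quoted above
stars_in_milky_way = 100e9  # 100 billion, figure quoted above

ratio = bacteria_in_gut / stars_in_milky_way
print(f"{ratio:.0f}x as many gut bacteria as Milky Way stars")  # 1000x
```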
https://microbemagic.ucc.ie/facts.php
Grief is a multifaceted response to loss of all sorts; it is the emotion we feel when a significant relationship in our life ends. Although the most common cause of the ending is the death of someone we care for, grief can also be felt when a relationship is lost because of divorce, relocation or other influences beyond our control. We don't grieve for all lost relationships; only those that have, for one reason or another, become meaningful to us over time. This can be for people we love or admire (family, partners, friends, teachers), but also a much-loved pet, or indeed places or things we treasure (a house you grew up in, a photo, a family heirloom, or indeed a cherished career). Grief is not a disorder, a disease or a sign of weakness. It is an emotional, physical and spiritual necessity, the price you pay for love. "The only way to avoid it is not to love. The only way to cure it is to grieve." (Earl Grollman) If this resonates with you, allow me to be there for you. Call me today for an informal chat. Individuals grieve in many different ways. Some grieve publicly and openly with great shows of emotion; others grieve silently and keep their emotions hidden. For some people, grief is easily overcome; for others it takes a long time to pass through the grieving process. Each individual grieves in a way which suits them, their emotions and the extent of their loss. Grief is a normal human emotion, and the healing process continues until we have recovered from the intense sense of loss and find that we are again able to function normally. This does not mean that we forget about the loss, or that we stop feeling sad when we think about the loss we have experienced; it simply means that we are able to get on with our lives. Grief affects us emotionally, physically and mentally, and it affects how we behave and how much we want to be with other people. It is important to recognise when understandable grief becomes something more. When we become 'stuck' in our grief it can lead to depression and desperation affecting everyday life. Warning signs include being unable to focus on anything but the loss, thoughts of guilt or self-blame associated with the loss, the belief you did something wrong and could have prevented the loss, feeling as if life isn't worth living, feeling you have lost your purpose in life, or feeling life will never be pleasurable again. Normal grief follows a series of stages that move from shock to remembrance and onward toward eventual acceptance. However, when a person fails to reach the stage of acceptance and finds themselves unable to get on with their life, this is diagnosed as abnormal grief, known as complex or pathological grief. Long-lasting grief impacts greatly on a person's life. It may cause difficulty sleeping, or sleeping a great deal; some individuals may avoid certain situations, or they may feel lethargic or fatigued much of the time; they may also have difficulty eating. Working with a professional counsellor, therapist or psychologist can often help a person to understand why they are feeling so badly about the loss. In many instances, a counsellor, psychologist or therapist will help to work through the stages of grief, listen to memories and help to find a balance for continuing everyday normal activities. Where complex grief is present, a GP or medical professional might prescribe medication for grief-induced depression.
Although medication by itself is rarely effective in treating grief or depression, it can help with cognitive impairment and improve the ability to think clearly. Once this is achieved, it can become easier to integrate the concepts and ideas that therapy provides. I can help you to cope with the ambivalent and complicated nature of death. I can help you sort through the depth of your emotions. I can help you formulate the next steps to take. Allow me to support you through your difficult time. Copyright © 2019 atAraxia Therapy. Call today for your transformation: 07766 904144. You can send me a message from this page, or please call for an initial chat. I look forward to hearing from you.
https://ataraxia-therapy.co.uk/grief
We consider the observational properties of a static black hole space-time immersed in a dark matter envelope. We investigate how the modifications to geometry induced by the presence of dark matter affect the luminosity of the black hole's accretion disc. We show that the same disc luminosity as produced by a black hole in vacuum may be produced by a smaller black hole surrounded by dark matter under certain conditions. In particular, we demonstrate that the luminosity of the disc is markedly altered by the presence of dark matter, suggesting that the mass estimation of distant supermassive black holes may be changed if they are immersed in dark matter. We argue that a similar effect holds in more realistic scenarios, and we discuss the refractive index related to dark matter lensing. Finally, we show how the results presented here may help to explain the observed luminosity of supermassive black holes in the early Universe. - Publication: Monthly Notices of the Royal Astronomical Society - Pub Date: August 2020 - DOI: 10.1093/mnras/staa1564 - arXiv: arXiv:2006.01269 - Bibcode: 2020MNRAS.496.1115B - Keywords: Dark matter; accretion discs; Accretion; Black hole physics; Astrophysics - High Energy Astrophysical Phenomena; General Relativity and Quantum Cosmology
https://ui.adsabs.harvard.edu/abs/2020MNRAS.496.1115B
He Took a Polaroid Every Day, Until the Day He Died I came across a slightly mysterious website -- a collection of Polaroids, one per day, from March 31, 1979 through October 25, 1997. There's no author listed, no contact info, and no other indication as to where these came from. So, naturally, I started looking through the photos. I was stunned by what I found. In 1979 the photos start casually, with pictures of friends, picnics, dinners, and so on. Here's an example from April 23, 1979 (I believe the photographer of the series is the man in the left foreground in this picture): By 1980, we start to figure out that the photographer is a filmmaker. He gets a letter from the American Film Festival and takes a photo on January 30, 1980: Some days he doesn't photograph anything interesting, so instead takes a photo of the date. Update: this was an incorrect guess; see the bottom of this post for more info on these date-only pictures. Throughout the 1980s we see more family/fun photos, but also some glimpses of the photographer's filmmaking and music. Here's someone recording audio in a film editing studio from February 5, 1983: The photographer is a big Mets fan. Here's a shot of him and a friend with Mets tickets on April 29, 1986: In the late 1980s we start seeing more evidence that the photographer is also a musician. He plays the accordion, and has friends who play various stringed instruments. What kind of music are they playing? Here's a photo from July 2, 1989 of the photographer with his instrument: In 1991, we see visual evidence of the photographs so far. The photographer has been collecting them in Polaroid boxes inside suitcases, as seen in this photo from March 30, 1991: On December 6, 1993, he marks Frank Zappa's death with this photo: The 1990s seem to be a good time for the photographer. We see him spending more time with friends, and less time photographing street subjects (of which there are many -- I just didn't include them above). Perhaps one of his films made it to IFC, the Independent Film Channel, as seen in this photo from December 18, 1996: Throughout early 1997, we start to see the photographer himself more and more often. Sometimes his face is obscured behind objects. Other times he's passed out on the couch. When he's shown with people, he isn't smiling. On May 2, 1997, something bad has happened: By May 4, 1997, it's clear that he has cancer: His health rapidly declining, the photographer takes a mirror self-portrait on June 2, 1997: By the end of that month, he's completely bald: His health continues to decline through July, August, and September 1997, with several trips to the hospital and apparent chemotherapy. On the bright side, on September 11, 1997, the photographer's hair starts to grow back: On October 5, 1997, it's pretty clear what this picture means: Two days later we see the wedding: And just a few weeks later he's back in the hospital. On October 24, 1997, we see a friend playing music in the hospital room: The next day the photographer dies. What started for me as an amusing collection of photos -- who takes photos every day for eighteen years? -- ended with a shock. Who was this man? How did his photos end up on the web? I went on a two-day hunt, examined the source code of the website, and tried various Google tricks. Finally my investigation turned up the photographer as Jamie Livingston, and he did indeed take a photo every day for eighteen years, until the day he died, using a Polaroid SX-70 camera.
He called the project "Photo of the Day" and presumably planned to collect them at some point -- had he lived. He died on October 25, 1997 -- his 41st birthday. After Livingston's death, his friends Hugh Crawford and Betsy Reid put together a public exhibit and website using the photos and called it PHOTO OF THE DAY: 1979-1997, 6,697 Polaroids, dated in sequence. The physical exhibit opened in 2007 at the Bertelsmann Campus Center at Bard College (where Livingston started the series, as a student, way back when). The exhibit included rephotographs of every Polaroid and took up a 7 x 120 foot space. You can read more about the project at this blog (apparently written by Crawford?). Or just look at the website. It's a stunning account of a man's life and death. All photos above are from the website. Update: I've made contact with Hugh Crawford and his wife Louise. Apparently the pictures that are just dates aren't Polaroids -- they're placeholders for days when there was no photo, or the photo was lost. Update 2: After hitting the Digg homepage, the original site has been taken down by the host. Hopefully it'll be back up overnight; in the meantime if anyone has a mirror of the original site, please leave a link in the comments (you have to leave off the http part). Update 3: The original website is back up! Hugh has managed to restore service, and it looks like the site is now cached across multiple servers. It's still a little slow due to the huge amount of traffic, but at least it works. Go check it out. Update 4: Jamie Livingston has been added to Wikipedia. Update 5: Many people have asked about the Polaroid SX-70 camera. Check out this Eames film explaining the camera. Update 6: The Impossible Project has begun producing Polaroid-compatible film. Update 7: You can read the story behind this post in Chris's new book The Blogger Abides. Follow Chris Higgins on Twitter for more stories like this one.
https://www.mentalfloss.com/article/18692/he-took-polaroid-every-day-until-day-he-died
My nine days of mono meal eating are over! I'll write about the final day tomorrow. April 14, 2008 Today I have even more energy. My tongue is coated more, however. It's not horrible, but it's definitely less red and more of a light pink. My eyes have continued to feel dry and my eyelids are heavy. What causes that, I wonder? My nails are whiter and harder, but they still break and rip when I'm working around the house. Hi everyone! Sorry for the late post tonight. I had a busy day, and even met with a new realtor to help me sell The Luck House! (Wish us luck on that front -- but I have a super-great feeling that this new realtor is 10x more professional and knowledgeable than the previous one.) Today I thought I'd give you a peek into Wendi's rather fascinating Inbox. While she's away, she asked me to monitor her Pure Jeevan mail box and field as many of the questions as possible. It's been ... interesting! :-) I never realized the volume of email that she receives! It's almost a full-time job to keep on top of it (which I haven't been able to do as well as I'd hoped -- although I now have it down to just a hundred or so unanswered ones, so that's progress!). Continuing with our week of ways to keep a sharp mind, let's focus on one widely accepted risk factor for dementia and Alzheimer's: heart disease. If one wants to dramatically reduce the chances of brain degradation, the first step to take is keeping the heart healthy. The key advice most health specialists agree on when it comes to a healthy heart is the reduction (ideally elimination) of unhealthy fats in the diet. The unhealthy fats are usually seen as solid fats, like butter, margarine, and shortening. However, it's important not to overlook the fats that are also found in meats. By substituting unhealthy fats with something healthier for your heart (like extra virgin, cold-pressed olive oil), as well as transitioning to leaner meats if you are a meat eater, you will be taking some important steps in keeping your heart healthy, as well as your mind. Jim here... Until our home sells (SOON!!!) and Wendi and I launch ourselves into the world as full-time raw food teachers / lecturers / inspiration providers, I'm more or less stuck in the corporate world during the day. While much of what happens in this Dilbert-esque environment is, as many of you likely know, absolutely meaningless, there is nonetheless the occasional pearl of wisdom to be pried from the clammy jaws of the 9-to-5 world. I was, for example, just reminded of a story I heard at a seminar once. Not surprisingly, the seminar pertained to the art of money making. However, there's another more fulfilling message to it as well. A large modern newspaper company still uses these ancient printing presses from the 1950s -- huge old monstrosities with enough belts, pulleys, and greasy gearboxes to make any modern-day steampunk enthusiast squeal with delight. One day, not long after the old press manager finally leaves the company, the main press breaks down. Manuals are consulted, technicians brought in, engineers asked to take a peek. No one can bring the beast back to life. But there's a woman on the Internet who specializes in these babies -- and, guess what, she's local! So, they call her in. She listens to their problem and says she can fix it, but it's going to run them $5,000. In this video, Wendi talks with Leela Mata about the different branches of yoga and how diet and raw foods affect the practice of yoga.
Of all the branches of yoga, Hatha Yoga is the most popular in the United States. Mata Ji explains that it is through the practice of Hatha Yoga, strengthening the body through the asanas (poses), that you awaken on a deeper level. You will become more open to connecting with your spiritual nature, to realizing more about yourself. I've received countless emails over the past few days, thanks to Kevin Gianni's video (below) about the potato pancakes I made for him and Annmarie when they were visiting. In many of the emails you were thanking me for the free eBooks, but some of you had questions (and even some concerns) about sweet potatoes. I've answered you all individually, but I thought it might be a good idea to spend some time discussing the sweet potato a little more. Here's the vid, and then I'll include some of the questions I was asked: On this Thankful Thursday I am feeling especially thankful for the Internet. Without the Internet I wouldn't be able to learn as much as I've learned about raw foods in such a short period of time. The Internet has connected me with people from all over the world who are also interested in natural health and raw food living. I am part of a larger community, one that would never exist without the Internet. So, today I am especially thankful for the Internet. What are you thankful for today? Following up on yesterday's post, today we're going to take a look at the "Clean 15." These are the 15 produce items that, according to research done by the Environmental Working Group, contain the least amount of residual pesticides (even though they're still grown using pesticides). What this boils down to is: IF you're going to eat conventionally grown produce, these items will harm you much less than those we covered yesterday. So, here's the list, and then we'll try to come up with a sentence to help you (and us) remember everything: Read more: Mnemonics for the "Clean 15" -- Or, "Conventional" Produce That Tests Lowest for Residual... Jim here... Conventional wisdom says we shouldn't judge a book by its cover, right? Well, what about recipes? Should we judge them by their names? When KDcat was young -- well before we followed a raw diet -- her friends sometimes scoffed at our typical vegan fare. After all, few kids would voluntarily eat "Pasta with Spinach Sauce." But, we discovered, change the name and suddenly they're lining up for seconds. Instead of "Spinach Sauce," Wendi came up with "Jungle Sauce"! How exciting, right? Rawbin Rescues Turtle I had a fantastic time at the 2008 Green Festival in Washington, DC. My wonderful friend, Rawbin, picked me up last Thursday and we had a nice drive back to her place in Maryland. She lives in an almost magical setting with wooded areas, horses, a goat, chickens, dogs, and cats! KDcat will not want to leave when she visits Rawbin someday. Above is a video of Rawbin rescuing a turtle that was in the middle of the road near her home. It's time for another "Makin' It Monday" installment! This time, Pittsburgh raw foodies Joe Prostko and Tracey Anne Miller (along with videographer Heather) demonstrate their "Turbo Tornado Superfood Solution," which has (as you'll see) a *ton* of superfood ingredients. Take a look:
http://www.purejeevan.com/?topic=rawfoodcoaches&start=11
The picture plane is an imaginary plane (flat surface) which corresponds to the surface of the canvas (but sitting above it), directly at the viewer's line of sight. It's commonly associated with the foreground of a painting. Imagine the painting as a flat structure sitting directly on top of a two-dimensional canvas, which the artist manipulates (using shapes, lines, colour, placement, size, overlap, etc.) to create a sense of three dimensions. Conceptually, the picture plane acts as a transparent window into illusionistic space. A simple way to think of how this works is to look through a window of your home. The view is three-dimensional, framed by your door or window. If you wanted to represent this scene as a picture, you would need to ensure that you created a sense of relative proportion, and the feeling that if you opened the window or door, you could travel through it into the world beyond. In most representational paintings, all the elements in the picture appear to recede from the picture plane (i.e. behind it), while trompe l'œil effects are achieved by painting objects in such a way that they seem to project in front of the picture plane. However, a number of modernist painters deliberately chose to present three-dimensional objects as having the same flat surface (picture plane) as the canvas itself. You'll see this mostly in the period from Post-Impressionism onwards. Jean-Louis Ernest Meissonier, Street Scene near Antibes, 1868 In this traditional, representational painting by Jean-Louis Ernest Meissonier there is a real sense that this is a three-dimensional scene. It is clear that the woman and child are standing in front of the man with the donkey and that the buildings are quite large on the sides and in the background. There is also the sense that you could walk through the archway into a valley beyond. Here the picture plane is designed carefully to provide a realistic description of the scene. This work by William Michael Harnett is known as a trompe l'œil painting. Instead of receding into the background, the violin appears to sit on top of the picture plane. It looks to be a solid three-dimensional object, along with the other objects 'sitting' on the shutter. The effect is created largely by careful use of shadows, and also by the sense that the viewer is looking straight at the objects. William Michael Harnett, Still Life Violin and Music, 1888 Paul Cezanne, Still Life with Apples, 1890-94. Cézanne's still life has quite a different effect. He felt that the illusionism of perspective denied the fact that a painting is a flat two-dimensional object. He liked to flatten the space in his paintings to place more emphasis on their surface - to stress the difference between a painting and reality. He saw painting in more abstract terms, as the construction and arrangement of colour on a two-dimensional surface. Still Life with Apples looks flat and distorted, as if he wanted to show each of the objects within his vision without trying to suggest any depth in the painting. To achieve this he made the back of the table wider than the front, which brings the back of the table forward to the foreground and pushes the front of the table toward the back. There is no real sense that the table recedes into the background; it's all at one level - as if the picture plane is just a flat surface.
When Cezanne started to 'see' art in a different way, it opened up many new ways of painting for the avant-garde modernist painters who followed him.
https://www.australianarthistory.com/picture-plane
Work on the bridge began on March 13, 1939. Construction started from both ends and worked towards the center. The first use of the bridge was on June 5, 1940, but the dedication ceremony did not occur until July 13, 1940. 57,000 vehicles and 20,000 pedestrians crossed the bridge in the first 8 hours following the ribbon cutting. The bridge was the first tied-arch span and the first 4-lane bridge across the Mississippi River. Construction required 9,750 tons of steel. This bridge was originally to be named after Mayor Galbraith, but only days before the bridge was set to open, Galbraith requested that the name be changed to the Rock Island Centennial Bridge in commemoration of the 100th anniversary of the city's founding. The bridge was operated by the Rock Island Centennial Bridge Commission. It opened as a toll bridge, charging pedestrians 5 cents and automobiles 10 cents, with trucks paying more. The toll for automobiles eventually reached 50 cents, having been raised to 15 cents in 1979, 25 cents in 1981, and 50 cents in 1991. Trucks paid as much as $2.30. A 1998 study of the Quad Cities bridges found that the I-74 bridges were vastly over capacity while the Centennial Bridge was underused, mainly due to the tolls. An agreement was worked out in late 2001 under which the bridge would be transferred from the bridge commission to joint ownership by the two states. As part of the deal, the entrance ramps to the bridge would need to be reconfigured and the toll booths removed. Contracts for this work were let in December 2002, and work was finished on June 23, 2005, about 9 months behind schedule. The bridge was officially transferred to state ownership on July 13, 2005. The study turned out to be accurate. Late-1990s traffic of 16,000 vehicles per day grew almost overnight to 31,000 vehicles per day before the end of 2005. Of all the iron bridges on the Mississippi, this one certainly has the most graceful curves. In the third photo, an optical illusion makes the 2nd span look oddly shorter than the rest of the spans. In reality, the first two spans are smaller than the 3rd and 4th spans, with the 5th span being smaller again. As a note of trivia, the first vehicle to pay a toll to cross the Centennial Bridge was owned by Dohrn Transfer, a company local to Rock Island. When the tolls were removed, the last vehicle to pay a toll to cross the bridge was again owned by Dohrn Transfer.
https://www.johnweeks.com/river_mississippi/pagesB/umissB07.html
Cake with Orange, Walnuts -- 12 recipes

Orange cake with walnuts cinnamon filling (Dessert, Very Easy, 15 min preparation, 50 min cooking) -- Ingredients: 3 eggs; 1 1/2 cups sugar; 1 cup oil; 1 cup orange juice; 3 cups flour; 1 teaspoon baking powder; 1 teaspoon vanilla extract. For the filling: 1 cup chopped ...

Orange mascarpone tray cake (Other, Easy, 10 min preparation, 30 min cooking) -- Ingredients: 1/2 cup of butter, melted; 1 cup of tightly packed dark brown sugar; 1 egg, lightly beaten; 1/4 teaspoon of vanilla; 8 ounces of mascarpone cheese; 2 table...

Orange cake with vanilla butter glaze (Dessert, Easy, 10 min preparation, 35 min cooking) -- Ingredients: all-purpose flour, 1 cup; walnuts, 1/2 cup, chopped (I forgot to add walnuts, but strongly suggest the addition); cornstarch, 1/4 cup; olive oil, 4 tbsp; bakin...

Fig, walnuts and orange blossom scones (Dessert, Very Easy, 15 min preparation, 20 min cooking) -- Ingredients: 1 tablespoon orange blossom water; 400 g self-raising flour; 60 g butter; 1 egg, whisked; 50 g icing sugar; 100 g dried figs, chopped; 70 g walnuts, choppe...

Christmas fruit cake (Dessert, Easy, 20 min preparation, 1 min cooking) -- Ingredients: line an 8" square cake tin with two layers of parchment paper, bottom and sides; 250 g butter; 125 g sugar; 4 eggs; 200 g plain flour (sifted ...

Rich christmas fruit cake for a new beginning (Other, Very Easy, 70 hours preparation, 60 min cooking) -- Ingredients: 2 cups all-purpose flour (300 gms); 1 cup dry fruits (raisins, dates, sultanas, currants, prunes); 1/2 cup mixed nuts (cashews, almonds, walnuts); 1/2 cu...

Divine chiffon custard cakes (Dessert, Easy, 15 min preparation, 40 min cooking) -- Ingredients: all-purpose flour, 1 cup; vanilla custard powder, 1/2 cup; vanilla extract, 1/2 tsp; orange rind, 1 tsp; sugar, 3/4 cup; baking powder, 1 tsp; baking...

French four spice cake with browned butter spice frosting (Dessert, Very Easy, 20 min preparation, 45 min cooking) -- Ingredients (cake): ½ cup butter, softened; 1 cup packed brown sugar; zest of ½ orange; 2 large eggs; 1 1/3 cups flour; 2 tablespoons unsweetened cocoa; 2 teaspoons qua...

Sweet potato spice cake and the first day of fall! (Other, Very Easy, 15 min preparation, 70 min cooking) -- Ingredients: 1 teaspoon unsalted butter at room temperature, plus 1/2 pound (2 sticks); 1 cup packed light brown sugar; 1 cup granulated sugar; 2 cups mashed cooked ...

Natural tummy tuck cake (Dessert, Very Easy, 25 min preparation, 15 min cooking) -- Ingredients: 1 ½ cups chopped walnuts; 1 cup whole-wheat flour; 2 teaspoons aluminum-free baking powder; ¼ teaspoon baking soda; ½ teaspoon salt; 1 tablespoon and 1 te...

Baskets with orange and walnut filling and vanilla cream (Dessert, Very Easy, 20 min preparation, 40 min cooking) -- Ingredients (for the crust): 260 g semi-whole wheat flour; 40 g whole rice flour; half a teaspoon whole sea salt; 100-120 ml filtered water; 120 ml extra virgin olive o...

Caribbean christmas ring with orange sugar glaze (Dessert, Very Easy, 15 min preparation, 17 min cooking) -- Ingredients: 3 tablespoons shortening; 2 1/2 cups walnuts, finely chopped; 1 cup all-purpose flour; 1/2 cup whole wheat flour; 1 teaspoon baking powder; 1 teaspoon bak...
https://en.petitchef.com/recipes/cake_orange_walnuts
ISWSC Stamp Answer Person -- Question: About my stamps

[email protected] (Jun 15 2017, 08:15:12 PM): My grandmother was a collector of all things Disney. As such, she bestowed upon me six sets of nine "The Disney Classic Fairytales in Postage Stamps" 50th Anniversary collections in plastic cases, with Certificates of Authenticity in each. I was just wondering how much they might be worth now, for insurance purposes.

[email protected] (Jun 18 2017, 10:24:24 AM): A number of countries issued stamps to commemorate the 50th Anniversary of Snow White and the Seven Dwarfs. Without knowing which country's stamps you have, I can only estimate their value. For example, the stamps issued by Grenada for the occasion have a catalogue value of $4.25 for the sheetlet of 9 and $6.75 for the souvenir sheet.
http://iswsc.org/iswscSAP/showtopic.php?topic_id=325
DHQI Password Safe is an application you can use to keep all your sensitive information safe and accessible only by you, as it uses a password to allow you to access the saved information. It uses AES-256 encryption, which means that up until today (2018) no one has been able to break into a program that correctly used this kind of encryption. Quoting from Wikipedia on AES (Advanced Encryption Standard): AES has been adopted by the U.S. government and is now used worldwide. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data. In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable (see Advanced Encryption Standard process for more details). AES became effective as a federal government standard on May 26, 2002, after approval by the Secretary of Commerce. AES is included in the ISO/IEC 18033-3 standard. AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module (see Security of AES, below). Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information: "The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use." AES Security: AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys. The first key-recovery attacks on full AES were due to Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192 and 2^254.3 for AES-256, which are the current best results in key-recovery attacks against AES. (End of Wikipedia quote.) To put this in perspective: if a system performing a key-recovery attack could try 10,000,000 trial decryptions per second against data ciphered with AES-256, it would still take on the order of 1.13 × 10^62 years to recover the key. To understand this better: 1,000 years is 10^3 years, and so 10^62 years is a 1 followed by 62 zeros, in years.
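As a sanity check on those orders of magnitude, here is a small back-of-the-envelope sketch (mine, not the vendor's) that redoes the arithmetic with BigInteger. A naive search of the full 2^256 keyspace at the assumed 10,000,000 trials per second lands around 10^62-10^63 years -- the same order of magnitude as the figures quoted above, whose exact mantissas depend on which attack cost is plugged in.

Imports System
Imports System.Numerics

Module BruteForceEstimate
    Sub Main()
        Dim keyspace As BigInteger = BigInteger.Pow(2, 256)       ' number of possible AES-256 keys
        Dim trialsPerSecond As New BigInteger(10000000)           ' hypothetical attacker speed
        Dim secondsPerYear As New BigInteger(31557600)            ' 365.25 days

        Dim years As BigInteger = keyspace / (trialsPerSecond * secondsPerYear)
        Console.WriteLine("Worst-case search: about 10^" & _
            BigInteger.Log10(years).ToString("F1") & " years")    ' ~10^62.6

        Dim universeAges As BigInteger = years / New BigInteger(13800000000)
        Console.WriteLine("... about 10^" & _
            BigInteger.Log10(universeAges).ToString("F1") & _
            " times the age of the universe")                     ' ~10^52.4
    End Sub
End Module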
For a comparable measure, the age of the universe is 13.8 × 10^9 years. It would take about 8.19 × 10^51 times the age of the universe to recover an AES-256 key. That, translated to plain English, means: "keep your ciphering/deciphering key in your mind or saved in a secure place accessible only by you, because if you forget or lose it, we will not be able to help you recover your key, and thus your data".
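That warning follows directly from the "symmetric-key" property quoted above: the very same key both encrypts and decrypts, so without it the data is unreachable. Here is a minimal, hypothetical sketch using the .NET framework's built-in Aes class -- this is not DHQI Password Safe's actual code; the defaults used here (auto-generated key and IV, CBC mode with PKCS7 padding) are just a convenient stand-in for illustration.

Imports System
Imports System.Security.Cryptography
Imports System.Text

Module AesDemo
    Sub Main()
        Using aes As Aes = Aes.Create()
            aes.KeySize = 256   ' AES-256; a random Key and IV are generated automatically

            Dim plaintext As Byte() = Encoding.UTF8.GetBytes("my secret note")

            ' Encrypt with the key...
            Dim ciphertext As Byte()
            Using enc As ICryptoTransform = aes.CreateEncryptor()
                ciphertext = enc.TransformFinalBlock(plaintext, 0, plaintext.Length)
            End Using

            ' ...and decrypt with the very same key: a symmetric-key algorithm.
            Using dec As ICryptoTransform = aes.CreateDecryptor()
                Dim recovered As Byte() = dec.TransformFinalBlock(ciphertext, 0, ciphertext.Length)
                Console.WriteLine(Encoding.UTF8.GetString(recovered))   ' prints "my secret note"
            End Using
        End Using
    End Sub
End Module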
https://dhqi.gr/products/products-android/dhqi-password-safe
In the beginning of MonetDB (in the '90s) there were BATs, Binary Association Tables, considered the minimal relationship to be maintained in an RDBMS. A BAT consisted of BUNs, Binary UNits, which in turn consisted of a HEAD and a TAIL value. All head and tail values of a BAT were of the same type, but the head and tail types were independent of each other. There were (and still are) a whole bunch of built-in types from which to choose, and types could (and still can) be added in extension code. The head and tail values were stored together, so that a BAT was basically a C array of structs of two elements each. This was also the format in which the data was stored on disk. The thing with a C struct that contains, say, a one-byte value and an eight-byte value is that the longer value must be aligned to an eight-byte boundary, and thus there are seven wasted bytes per struct (and hence per BUN). For this reason, years ago already, we changed the format in which BATs are laid out. Instead of a single array of structs, we then implemented what amounts to a struct of arrays. From then on, the head values and tail values were each stored in a separate array, but those arrays had to be the same length (that is to say, number of elements). Using this format there is no wasted space between values. Byte-sized values don't have to be aligned on eight-byte boundaries but can be stored with no intervening space. This also became the format in which the data was stored on disk. There were two files per BAT, one for the head column and one for the tail column. Since the beginning, one of the types in MonetDB was the OID, the Object IDentifier. An OID is basically an integer, but with a special purpose. A special version of this type, used in BATs as either (or both) the head or the tail column, is one in which each value is exactly one larger than the one before. This is something that happens a lot in MonetDB, and so, since the beginning, there has been a special way of storing such values. Instead of allocating memory to store a sequence of numbers, we only store the first value, called the seqbase (sequence base). There is a special type to indicate this, named VOID, Virtual OID. Columns of type VOID don't need to be stored, and in fact aren't stored. The only information that needs to be stored is the start value and the number of elements. It turned out that the SQL implementation in MonetDB only really needed head columns of type VOID. Columns in an SQL table are of course all the same size (number of elements) and of disparate types. The way these columns are stored is by using one BAT per column, with the data in the tail of the BAT and a VOID head column. The values in a row of an SQL table all have the same (virtual) head value, to enable their reconstruction. Five years ago, we decided that we would move toward a fully single-column format. A BAT was to be degraded to a single (tail) column, producing a "pure" column store layout. The head column would thenceforth always be VOID and not stored. At the time there was still a lot of code that worked with the old two-columns-per-BAT format, so we slowly worked on transforming the code to work on VOID-headed BATs and only produce VOID-headed BATs as results. Today, as of the Dec2016 release, this work is finished. There is no "head" column anymore. The only part of the old head column that still exists today is the head seqbase, the first value of the virtual OID sequence that was the head column.
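The wasted-space argument is easy to reproduce in any language with explicit struct layout. The sketch below is mine, not MonetDB code (the type and member names are invented): .NET's Marshal.SizeOf shows the seven padding bytes per BUN in the old array-of-structs layout versus a packed layout, which is what storing head and tail in two separate arrays achieves; a VOID column is then mimicked by storing nothing but a seqbase and a count.

Imports System
Imports System.Runtime.InteropServices

Module BatLayoutDemo
    ' Array-of-structs BUN: a 1-byte head plus an 8-byte tail forces 7 padding bytes.
    <StructLayout(LayoutKind.Sequential)>
    Private Structure Bun
        Public Head As Byte
        Public Tail As Long
    End Structure

    ' Packed: equivalent in space to keeping head and tail in two separate arrays.
    <StructLayout(LayoutKind.Sequential, Pack:=1)>
    Private Structure PackedBun
        Public Head As Byte
        Public Tail As Long
    End Structure

    Sub Main()
        Console.WriteLine(Marshal.SizeOf(GetType(Bun)))        ' 16 bytes per BUN
        Console.WriteLine(Marshal.SizeOf(GetType(PackedBun)))  ' 9 bytes per BUN

        ' A VOID column stores only a seqbase and a count; the values are computed.
        Dim seqbase As Long = 1000
        Dim count As Integer = 5
        For i As Integer = 0 To count - 1
            Console.Write(seqbase + i & " ")                   ' 1000 1001 1002 1003 1004
        Next
        Console.WriteLine()
    End Sub
End Module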
In order to get to this stage, a number of things had to be changed since the Jun2016 release. Here is a summary. The function BATseqbase has been replaced by a set of two new functions: BAThseqbase and BATtseqbase. BATseqbase was used to set the seqbase of a VOID head column, but also, when used in combination with BATmirror, to set the seqbase of a VOID tail column. The new function BAThseqbase sets the seqbase for the new column-less head, and BATtseqbase sets the seqbase of a VOID tail column. The function BATkey used to set or clear a flag on the head column to indicate that all values in the column were distinct. Since the VOID head column has distinct values by definition, this function was only useful to set the flag on the tail column. This was done using BATmirror. Now the function works directly on the tail column. The function BATmirror is gone. BATmirror was used to swap the head and tail columns of a BAT, but since there is no head column anymore, this function does not make sense anymore. The few places where this function was used internally were things like setting a seqbase on a VOID tail column with BATseqbase (we now have BATtseqbase for that) and indicating that all values in a column were unique by calling BATkey (which was changed to work on the tail column). Functions and macros to retrieve properties of the head column (BAThordered, BAThrevordered, BAThdense, BAThvoid, BAThkey, BAThtype) are not needed anymore, since we know the result (true, true if zero or one element, true, true, true, and void, respectively). These functions and macros are now gone. The function BATnew was used to create a new BAT. Its parameters were the type of the head column (which had to be TYPE_void), the type of the tail column, the expected size, and the role (PERSISTENT or TRANSIENT). Since the first argument had to be TYPE_void, it did not make sense to keep specifying it. This function was replaced by COLnew, which has a different first parameter: the (initial) seqbase of the head column sequence. This also means that most of the time there is no need to call BAThseqbase to set this seqbase after creation. Functions and macros to access values in the head column are also not needed anymore, and hence have been removed. They are BUNhead, BUNhloc, BUNhvar, BUNhpos, Hloc, and Hpos. It used to be that the head and tail columns could be given names. In practice, only the head column was ever given a name, which was used as the name of the BAT. Now it is only possible to name a BAT, i.e., there is only a single name available per BAT. Kersten, M.L., Plomp, S., & van den Berg, C.A. (1992). Object Storage Management in Goblin. In Proceedings of International Workshop on Distributed Object Management 1992 (pp. 100–116). Morgan Kaufmann.
https://www.monetdb.org/blogs/monetdb-headless/
Introduction {#Sec1} ============ Heart failure (HF) has become an epidemic and nearly 6 million Americans have been diagnosed with this high-risk condition \[[@CR79]\]. Despite improved survival rates, the 5-year mortality rate remains at 50--60 % \[[@CR61], [@CR82]\]. HF also represents a significant individual and financial burden from high rates of rehospitalization and medication costs. HF is the most common reason for recurrent hospitalization and costs approximately \$30 billion annually in the United States alone \[[@CR35]\]. HF also produces significant psychosocial problems, including decreased functional independence and quality of life \[[@CR1], [@CR16]\]. HF and neurocognitive function {#Sec2} ------------------------------ In addition to medical and psychosocial consequences, HF is a significant risk factor for neurological disorders including Alzheimer's disease, vascular dementia \[[@CR75]\], and stroke \[[@CR103], [@CR104]\], and is associated with high rates of cognitive impairment even in the absence of these conditions \[[@CR97]\]. Recent studies show that the majority of individuals with HF evidence at least some cognitive impairment, while up to 25 % demonstrate moderate to severe cognitive impairment on testing \[[@CR17]\]. Deficits have been observed in many different domains including attention, executive function, learning and memory, language, visuospatial functioning and psychomotor speed \[[@CR6], [@CR14], [@CR17], [@CR32], [@CR74], [@CR97], [@CR98]\]. Interestingly, a recent study in HF patients found that nearly one quarter of the patients exhibited deficits in three or more domains of cognitive function \[[@CR74]\]. The risk for cognitive dysfunction appears to increase with increasing HF severity \[[@CR74], [@CR97]\]. Cognitive dysfunction in HF is likely explained by a number of adverse brain changes that are also frequently observed in HF. Most commonly, patients demonstrate increased cortical atrophy \[[@CR106]\], cerebral infarcts \[[@CR4], [@CR84]\], white matter changes \[[@CR14]\] and metabolic alterations \[[@CR60]\]. Specifically, patients with HF have been shown to have significantly less gray matter volume, especially in the insular cortex, frontal cortex, parahippocampal gyrus, cingulate, cerebellar cortex and deep cerebellar nuclei \[[@CR106]\], compared to controls. Additionally, HF patients exhibit increased amounts of periventricular white matter hyperintensities (WMH) and WMH in the basal ganglia \[[@CR84], [@CR99]\]. Other studies have found damage to the hippocampus, caudate nuclei, and the corpus callosum \[[@CR105]\], and reduced mamillary body volume and cross-sectional areas of fornix fibers \[[@CR58]\], in patients with HF. Only a few studies have directly examined the association between the adverse brain changes and cognitive deficits observed in HF. Beer et al. \[[@CR14]\] found that HF patients performed significantly worse than controls on visuospatial, executive functioning, visual memory and verbal learning tasks. Among these patients, left medial temporal lobe atrophy and deep WMH were significantly associated with impaired scores on measures of cognitive functioning. In another study, Vogels and colleagues \[[@CR98]\] demonstrated that increased medial temporal lobe atrophy in patients with HF was associated with poorer performance on tests of memory, executive function and on the Mini Mental Status Exam, independent of cardiovascular risk factors (e.g., hypertension). Review {#Sec3} ====== Can cognitive function be improved in HF?
{#Sec4} ----------------------------------------- The trajectory of cognitive impairment and possible decline in HF remains poorly understood. Despite HF being a known risk factor for degenerative disorders like Alzheimer's disease and vascular dementia (e.g., \[[@CR75]\]), two recent studies found that cognitive function remains relatively stable over short time intervals in patients with mild HF \[[@CR6], [@CR78]\]. Moreover, there is research to suggest that the cognitive deficits of HF may be at least partly reversible. For example, a sample of 40 well-managed HF patients showed subtle improvements in cognitive function over a 12-month period, particularly in the areas of attention and executive function \[[@CR87]\]. Though the exact mechanisms for these cognitive gains are unclear, they appear most likely attributable to improved medical oversight for the study participants \[[@CR87]\]. Similarly, other studies have shown improved cognitive function in persons with HF as a result of medical intervention, including cardiac transplantation \[[@CR17], [@CR20], [@CR43], [@CR66]\], pacemaker and cardiac assist device implantation \[[@CR73], [@CR108]\], and initiation of treatment with ACE inhibitors \[[@CR7], [@CR109]\]. In each case, improved cardiac function was associated with better cognitive function after treatment. Taken together, these results suggest that cognitive impairment in HF may be at least partially reversible through improved cardiovascular function. Can exercise improve cognitive function in HF? {#Sec5} ---------------------------------------------- Exercise interventions have been linked to improved neurocognitive outcomes across a wide range of patient and healthy samples \[[@CR29], [@CR71]\]. Aerobic exercise is linked to greater gray and white matter volume \[[@CR30]\] and increased functional connectivity in the prefrontal cortex \[[@CR102]\]. The most consistent effects of aerobic exercise on cognition have been in executive functioning, although several investigations have found improvements in other domains such as attention, visuospatial functioning, and processing speed \[[@CR3], [@CR18], [@CR36]\]. For example, Voss et al. \[[@CR101]\] demonstrated that one year of exercise training was associated with improved working memory performance in healthy older adults. Even exercise at low intensities has been shown to improve attention \[[@CR45]\], memory \[[@CR81]\], and concentration \[[@CR89]\] in healthy older adults. Mechanisms for cognitive improvement with exercise {#Sec6} -------------------------------------------------- Improvements in cognitive function with exercise are likely related to beneficial brain changes. For example, research has shown that increased cardiorespiratory fitness is associated with reduced brain atrophy \[[@CR29]\], the preservation of gray and white matter in the medial-temporal, parietal, and frontal brain regions \[[@CR80]\], and greater hippocampal volumes \[[@CR38]\]. Higher fitness levels have also shown positive effects on functional brain outcomes, including greater activation in areas associated with attentional control \[[@CR31]\] and greater activity in the frontal and parietal lobes \[[@CR30]\]. Moderate- to high-intensity aerobic exercise has produced similar benefits, including increases in gray and white matter volume \[[@CR30]\] and increased functional connectivity in the prefrontal cortex \[[@CR102]\]. Exercise may improve cognitive function in HF patients through other mechanisms.
For example, levels of C-reactive protein (CRP), an inflammatory marker associated with acute injury \[[@CR72]\], are inversely related to the amount of physical activity \[[@CR25], [@CR28], [@CR59]\]. Exercise is thought to reduce activation of the sympathetic nervous system, which in turn inhibits the release of inflammatory markers, including CRP \[[@CR28]\]. This hypothesis has some support in the literature with a heart failure population. Following 6 months of structured exercise, HF patients demonstrated significantly lower levels of CRP than sedentary controls \[[@CR68]\]. The lower levels of CRP may also be related to cognitive function. Research suggests increased levels of CRP are related to impairments in the areas of executive function and memory \[[@CR50], [@CR70], [@CR93], [@CR107]\]. Prior work has also identified various circulating biomarkers which may influence cognitive function in HF. There is little work on these markers in relation to cardiovascular fitness, as most are either associated with eating behavior or newly discovered themselves (e.g., adiponectin). In light of these shortcomings, some research has been conducted examining the influence of physical exercise on biomarkers. Brain-derived neurotrophic factor (BDNF) has demonstrated a positive relationship with exercise \[[@CR42], [@CR53]\]. This relationship has also been found in an HF population \[[@CR39]\], and is important as research indicates cognitive impairment is at least partially caused by decreased BDNF levels \[[@CR10]\]. Additionally, BDNF is important for brain health and cognitive function (e.g., \[[@CR12], [@CR64]\]). Leptin has also been connected to cognitive function \[[@CR62]\]. Specifically, leptin has been inversely related to cardiovascular fitness levels in both HF \[[@CR90]\] and non-HF populations \[[@CR21], [@CR77]\]. Ghrelin is a largely under-researched hormone; thus, little evidence exists on its relation to cardiovascular fitness. However, one study found ghrelin to have an inverse relationship with cardiovascular fitness \[[@CR86]\]. Finally, adiponectin has been studied in relation to cardiovascular fitness as well: improvements in cardiovascular fitness have been associated with reduced adiponectin levels \[[@CR11], [@CR65]\]. In HF, improved cognitive performance with exercise may also be related to comorbid medical conditions. HF is associated with several cardiac and non-cardiac comorbidities; up to 40 % of HF patients have at least five non-cardiac medical conditions \[[@CR22]\]. The presence of these comorbid conditions in patients with HF is associated with decreased quality of life, poorer prognosis \[[@CR67]\], increased rates of hospitalization, and higher rates of mortality \[[@CR22]\]. Common comorbidities of HF include hypertension, type 2 diabetes mellitus, obstructive sleep apnea, chronic obstructive pulmonary disorder, and depression. Each of these conditions has been shown to have an independent association with cognitive deficits, either in HF or non-HF populations, and these conditions likely add to or interact with cardiac dysfunction in HF \[[@CR49]\]. Exercise is a common non-pharmacological treatment for a number of comorbid conditions and has been shown to prevent the development or reduce the severity of such conditions both in HF and non-HF populations (e.g., \[[@CR9], [@CR19], [@CR23], [@CR57], [@CR69]\]).
Cerebral blood flow as a mechanism for cognitive improvement with exercise in heart failure {#Sec7} ------------------------------------------------------------------------------------------- One key mechanism for cognitive gains with exercise which may be particularly important in HF patients is improved cerebral blood flow (CBF). Patients with HF show up to a 30 % reduction in global CBF \[[@CR43]\]. Typically, CBF reductions appear to be greatest in posterior cortical areas \[[@CR8]\] but have also been observed in other brain regions important for cognitive function, including the frontal, temporal, and parietal lobes \[[@CR8], [@CR24], [@CR100]\]. Reduced CBF is also related to poorer cognitive function in HF. In one study, resting regional CBF in elderly patients with HF was compared to healthy age-matched controls using single-photon emission computed tomography (SPECT). Results of this study demonstrated that reduced CBF was common in patients with HF and was associated with poorer performance on global cognition, visual and verbal memory, learning, and language tests. Importantly, global cognition was significantly associated with CBF in the posterior cingulate cortex and precuneus \[[@CR8]\]. Another study found that global cognition, measured by performance on the Mini Mental Status Exam (MMSE), was significantly positively associated with CBF velocity of the right middle cerebral artery (MCA) in patients with HF \[[@CR51]\]. Increased CBF is associated with improved cognitive function in patients with HF {#Sec8} -------------------------------------------------------------------------------- Intervention studies have shown that increased CBF is linked to improvements in cognitive function in HF. As above, many of the HF treatments that have been shown to improve cognitive function (e.g., cardiac transplantation, pacemaker implantation, ACE inhibitors) are also known to improve CBF \[[@CR27], [@CR43], [@CR66]\]. Several studies have shown that although CBF is reduced at baseline, it normalizes following cardiac transplantation, representing an increase of up to 30 % \[[@CR27], [@CR43], [@CR66]\]. Similar effects have been observed following implantation of a pacemaker \[[@CR96]\]. Finally, in patients with severe HF, CBF improved by approximately 12 ml/100 g per minute following the initiation of treatment with an ACE inhibitor and normalized over time \[[@CR76]\]. Given that HF treatments such as cardiac transplantation, pacemaker implantation and ACE inhibitors have been shown to both improve cognitive function and increase CBF, it can be reasoned that increases in CBF may be an important mechanism for improved cognitive function in HF patients. Can exercise improve CBF and cognitive function in HF? {#Sec9} ------------------------------------------------------ ### Evidence for improved CBF with exercise {#Sec10} Reduced CBF in HF is, in part, the result of decreased cardiac, regulatory, and vascular functioning. In particular, it appears that the combination of reduced cardiac output (CO) \[[@CR83]\], decreased cerebral autoregulation \[[@CR46]\], and impaired endothelial functioning \[[@CR48]\] leads to decreased cerebral perfusion and ischemic damage in patients with HF. Importantly, exercise has been shown to improve cardiac and vascular function \[[@CR41], [@CR44]\] in HF patients, potentially leading to increased CBF. See Fig. [1](#Fig1){ref-type="fig"}.
Fig. 1: Physical Activity for the Improvement of Cognitive Function in Heart Failure. Solid lines represent pathways leading to poorer health and cognitive outcomes; broken lines indicate pathways for improved outcomes. Moderate- to high-intensity aerobic exercise has been shown to improve exercise capacity and increase VO~2~ max in patients with HF \[[@CR34], [@CR52], [@CR54], [@CR55]\] and is also associated with a number of cardiac and vascular improvements among patients with HF. In terms of cardiac functioning, the benefits of moderate- to high-intensity aerobic exercise include decreased resting HR \[[@CR34], [@CR37], [@CR44], [@CR85]\], increased CO \[[@CR44], [@CR92]\] and stroke volume \[[@CR34], [@CR37], [@CR44]\], and reduced resting LV end-diastolic diameter \[[@CR44]\]. In terms of vascular functioning, benefits include decreased peripheral resistance and sympathetic activation \[[@CR44]\], increased vasodilatory capacity \[[@CR63]\] and blood flow \[[@CR44]\], and improved endothelial function \[[@CR63]\]. Exercise at lower intensities is also related to improved VO~2~ max and increased exercise capacity \[[@CR15], [@CR33], [@CR56], [@CR91]\], though research on its association with other cardiac and vascular factors is limited. One study also demonstrated that moderate-intensity (50 % max work rate) cycling was associated with improved HR recovery, while participants who completed high-intensity interval training did not experience such improvement \[[@CR33]\]. A growing body of literature shows aerobic exercise has beneficial effects on CBF in non-HF populations \[[@CR2], [@CR47]\]. Specifically, Hellstrom et al. \[[@CR47]\] demonstrated that global CBF increased during moderate exercise in a sample of healthy adults. Another study found higher blood flow velocity in the middle cerebral artery among endurance-trained men when compared to sedentary men \[[@CR2]\]. Similarly, a recent study demonstrated higher resting CBF levels among older master athletes when compared to sedentary older adults \[[@CR95]\]. It has also been demonstrated that 12 weeks of aerobic exercise was associated with both improved CBF and cognition in healthy older adults \[[@CR26]\]. Although no study to date has examined whether exercise can improve CBF in patients with HF, one study has examined this association in a sample of older adults with cardiovascular disease (CVD) \[[@CR88]\]. In this study, 12 weeks of exercise was associated with improved CBF velocity. The authors also found that attention, executive function, and memory performance improved, though these improvements were not related to CBF velocity. Evidence for cognitive improvement with exercise in HF {#Sec11} ------------------------------------------------------ There has been some research to suggest that cognition can improve following exercise in HF. For example, Tanne et al. \[[@CR94]\] examined the benefits of twice-weekly aerobic exercise at 60--70 % of maximal heart rate on cognitive function in HF patients. Results demonstrated that exercise was associated with improvements in attention/psychomotor speed and executive function. Unfortunately, these findings are limited by the small number of participants in the intervention (*n* = 18) and control group (*n* = 5), and potential baseline differences in cognitive function between these groups were not examined. Additionally, CBF was not measured. Consistent with these possible benefits of exercise, two recent studies have examined the link between fitness levels and cognitive function in HF.
One study found that greater metabolic equivalents (METs) from a standardized stress test were related to better performance on measures of attention (β = .41, *p* = .03), executive function (β = .37, *p* = .04), and memory (β = .46, *p* = .04), even after controlling for important medical and demographic characteristics \[[@CR40]\]. Similarly, another study examined the association between exercise capacity, estimated by distance walked on the 6-min walk test, and cognitive function in 80 elderly patients with HF. As above, results showed that greater exercise capacity was associated with better cognitive function \[[@CR13]\]. Conclusion {#Sec12} ========== Overall, the current evidence seems to suggest that the cognitive benefits of exercise could extend to persons with HF. In particular, findings from interventional studies (i.e., pacemaker implant, cardiac transplant, treatment with ACE inhibitors) suggest that improved CBF can lead to improved cognitive functioning in patients with HF. Exercise may lead to similar improvements through its beneficial effects on cardiac and vascular functioning in HF patients, potentially leading to improved CBF and, ultimately, improved cognitive function. Existing research on the cognitive benefits of exercise in HF is limited, but promising. Interventions that can improve cognitive functioning or prevent further decline in patients with HF are much needed, as the societal implications of such an intervention would be substantial. **Competing interests** The authors declare that they have no competing interests. **Authors' contributions** RG was responsible for conceptualization, writing, and editing of the manuscript. AF contributed to the writing of the manuscript. JG contributed to the conceptualization and editing of the manuscript. All authors read and approved the final manuscript. The authors have no acknowledgments. Disclosures {#FPar1} =========== Rachel Galioto declares no conflicts of interest. Andrew Fedor declares no conflict of interest. John Gunstad declares no conflict of interest. This article does not contain any studies with human or animal subjects performed by any of the authors.
A variable is a label that points to a piece of information stored in memory. A data type is a set of a particular kind of value, such as a set of numbers or letters. In VB.NET, each variable has a data type attached to it, so it will only accept values of a particular type. Because of this, we refer to VB.NET as a strongly typed language.

Chapter 4, Question 2: Use string, integer and date variables to create an ASPX file to display your name, age and date of birth.

Arrange the following into groups of Numeric, Textual and Miscellaneous data types, and give an example of a value and a use for each: Integer, Char, Byte, Short, Boolean, String, Long, Single, Double, Date, Decimal.

Numeric data types:
- Integer (example: 467,892) -- to store an integer within the range -2,147,483,648 to 2,147,483,647
- Byte (example: 174) -- to store an integer within the range 0 to 255
- Short (example: 76) -- to store an integer within the range -32,768 to 32,767
- Long (example: 8,976,347,864) -- to store an integer within the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
- Single (example: -1.6088E-10) -- to store single-precision floating-point numbers, within the range -3.402823E38 to -1.401298E-45 (for negative values) and 1.401298E-45 to 3.402823E38 (for positive values)
- Double (example: 3.965468572385E-271) -- to store double-precision floating-point numbers within the range -1.79769313486232E308 to -4.94065645841247E-324 (for negative values) and 4.94065645841247E-324 to 1.79769313486232E308 (for positive values)
- Decimal (example: 12.76) -- to store numbers with up to twenty-eight decimal places

Textual data types:
- String (example: "I'm going to the shops") -- to store text
- Char (example: "A"C) -- to store letters as numbers, perhaps when making your own customized character sets

Miscellaneous data types:
- Date (example: #09/09/1926# or #16:25:05#) -- to store a date or time value
- Boolean (example: True) -- to describe whether a particular condition is true or false

Chapter 4, Question 4: Write an ASPX file that will multiply the values of two integer variables together. Then modify the example to add, divide and subtract the two numbers. After this, experiment with the exponentiation, negation and modulus operators.

Create an array containing your 5 favorite singers. Then concatenate the elements of your array into one string, and after the opening sentence "My 5 favorite singers are:", display them in a clear way using the <asp:label> control.

a. The opening tag <fruit items> should be <fruit_items> so that it matches the element's closing tag </fruit_items>. An underscore is required to separate the two words, since spaces are not permissible as characters in XML element names.

Once you have finished looking at part b, try rearranging this information so you can view orders that have been placed via Sam Clarke the salesperson. Add another order, placed the following week, for 12 goggles, as ordered by Aqua Lake Enterprises. This information can be rearranged as follows so that it's possible to view orders placed via Sam Clarke the salesperson. Here we have added another order, as described in the exercise:

For each of the following Boolean expressions, say for what values of A each of them will evaluate to True and when they will evaluate to False. a. NOT A > 0 OR A > 5 b. A > 1 AND A < 5 OR A > 7 AND A < 10 c. A < 10 OR A > 12 AND NOT A > 20

a. False if A is 1, 2, 3, 4 or 5. True otherwise (note that at A = 0, NOT A > 0 evaluates to True).
b. True if A is 2, 3, 4, 8 or 9. False otherwise.
c. True if A is 13 to 20 inclusive, or if A is 9 or less. False otherwise.

Chapter 6, Question 2: Suggest a loop structure which would be appropriate for each of the following scenarios, and justify your choice:
a. Displaying a set of items from a shopping list, stored in an array
b. Displaying a calendar for the current month
c. Looking through an array to find the location of a specific entry
d. Drawing a chessboard using an HTML table
Bonus: write an ASP.NET page to perform one of these tasks.

a. For Each would be suitable, because it provides a simple way to handle every item in an array.
b. For ... Next would be suitable, because we know how many days there are in the current month, so we can specify the number of times the loop needs to be performed before it starts.
c. Do ... Until would probably work best, since it allows us to perform the loop until the entry has been found.
d. Tricky: you'd probably want to use two For ... Next loops, one inside the other. We need eight rows, each consisting of eight cells, so two loops, each of which executes eight times, would work fine.

Chapter 6, Question 3: Write a function that generates a random integer between two integers passed as parameters. Build an ASP.NET page which allows you to enter the lower and upper limits, and generates a set of random numbers in that range. This function generates a random integer between two integers passed in as parameters:

Function RandomNumber(lower As Integer, upper As Integer) As Integer
    ' Rnd() returns a value in [0, 1); scale and shift it into [lower, upper].
    Randomize()
    Return CInt(Int((upper - lower + 1) * Rnd()) + lower)
End Function

Suggest a situation when you might want to pass variables into a function or subroutine by reference. Write an ASP.NET page to illustrate your example. This is an open-ended question, so no solution is provided.

Chapter 7, Question 1: Explain why event-driven programming is such a good way of programming for the Web.

In an event-driven web page, code is not constrained to being executed in a predetermined order whenever the page is served. Rather, it can be broken up into dedicated blocks of functionality that will be executed in response to specific user-generated events. We can therefore piece together complex functionality from several independent components in a web form, and there is no need for the client software to know anything about how the components are programmed.

Chapter 7, Question 2: Run the following HTML code in your browser (remember to save the page with a .htm extension). Now translate the HTML into a set of ASP.NET server controls so that the information entered into the form is maintained when the submit button is clicked. Add a subroutine to the button to confirm that the details entered were received.

Create a very basic virtual telephone using an ASPX file that displays a textbox and a button named "Call". Configure your ASPX file so that when you type a telephone number into your textbox and press "Call", you are: presented with a message confirming the number you are calling; presented with another button called "Disconnect" that, when pressed, returns you to your opening page, leaving you ready to type another number. Using the Select Case construct, associate three particular telephone numbers with three names, so that when you press the "Call" button, your confirmation message contains the name of the person you are 'calling' rather than just the telephone number.

Explain the following terms in your own words, and give an example of how each one might be applied in the context of a simple object-oriented holiday booking application: Object, Class, Property, Method.

Chapter 8, Question 2: Explain what classes we might want to define in order to model each of the following real-world scenarios, along with the members we'd expect them to have. If there is more than one possible solution, explain what additional information we require to establish which one will be most suitable.
a. Purchasing items at a supermarket checkout
b. Enrolling on a college course
c. Maintaining an inventory of office equipment

Chapter 8, Question 3: Extend our Car example to demonstrate on the browser that the value of the object's Gear property is restricted to a range of values between -1 and +5. Explain in your own words why it's a good idea to define functionality in method overloads that relies on calling an existing version of the same method.

<%@ page language="vb" runat="server" debug=true %>
<script runat="server">
Public Class Car
    Private _Color As String
    Private _Gear As Integer

    Public Property Color As String
        Get
            Return _Color
        End Get
        Set(value As String)
            _Color = value
        End Set
    End Property

    Public ReadOnly Property Gear As Integer
        Get
            Return _Gear
        End Get
    End Property

    ' Only gear values from -1 (reverse) to +5 are accepted.
    Public Sub SetGear(value As Integer)
        If value >= -1 And value <= 5 Then
            _Gear = value
        End If
    End Sub
End Class
</script>

It's a good idea to define method overloads so that they make a call on the original method rather than reimplementing functionality from scratch. This guarantees a single, well-defined set of rules to govern how that method can modify the object on which it's called. If the original method definition is modified in some way, all relevant overloads will use the modified functionality without any further work required.

Chapter 8, Question 4: We may want to display the Price property of a Book object in some currency other than US dollars. Define a new property ConvPrice, whose accessor method Get takes two parameters denoting a currency symbol and a conversion rate, and returns the book price in the appropriate currency. Of course, this isn't quite how book prices are calculated for an international market, with additional factors such as local taxation playing a part in the total cost. Normally, separate prices are listed for the main countries in which a book will be sold. Update the Book class again, so that we can specify three different prices for each object - you might want to use the values on the back of this book. Define additional data members for these extra prices, and a property called LocalPrice that lets us specify a country code parameter ("US", "UK", "Can", for example) to denote which country's pricing we want to Get or Set. Prices should still be stored internally as Decimal variables. Overload the Get accessor method so that we can optionally specify a currency symbol for display with that price.

Define an Account class for library users that has a Borrow() method, and an Item class that represents a library book with properties Title and ISBN. Code the Borrow() method so that it gets the title and the ISBN of the borrowed book from the Item object.

Chapter 9, Question 4: Define an Engine class, whose properties include SerialNo, Rpm, and Name (to be set by the class constructor), and whose methods include SwitchOn and SwitchOff. Now integrate it with the Car class so that you can access these properties and methods from instances of the Car class.

Chapter 9, Question 5: Using inheritance, define a FlyingCar class that has an Ascend() method, a Descend() method and a read-only property that returns the altitude of the flying car.

Chapter 10, Question 1: Explain the role of the Page class in ASP.NET and describe what sort of things we can do with it.

When a browser requests an ASPX file, the ASP.NET module (aspnet_isapi.dll) deals with the request. The aspnet_isapi.dll places the requested ASPX file into a new class definition in a namespace called ASP. This ASP class inherits from the Page class, so the ASP.NET page has access to the useful functionality that the Page class provides. The Page class is part of the System.Web.UI namespace.
10 1 Explain the role of the Page class in ASP.NET and describe what sort of things we can do with it.
When a browser makes an ASPX file request, the ASP.NET module (aspnet_isapi.dll) deals with it. The aspnet_isapi.dll then places the ASPX file that was requested into a new class definition defined in a namespace called ASP. This ASP class inherits from the Page class, so the ASP.NET page has access to the useful functionality that the Page class provides. The Page class is part of the System.Web.UI namespace. The Page class brings us a wealth of useful properties and methods that we can use on our ASP.NET pages. It also gives us access to a range of other objects created from classes in the System.Web namespace.
10 2 Write an ASP.NET page that returns the Windows name of your computer and the URL of the page that you are visiting.
(a) Write one ASP.NET page that prompts a user to enter a value for the radius of a circle, then calculates its area (Area = Pi * (radius)²), and another ASP.NET page that prompts the user to enter the length of the radius and then calculates the circumference (circumference = 2 * Pi * radius). Both pages should access the value of Pi (3.142) stored in application state.
Describe a situation in which you would use each of the following, and state why that choice is the best: arrays, arraylists, hashes, sorted lists.
Arrays are used when you want a quick, easy-to-build list that you will not need to resize, and into which you will not need to insert items in the middle. ArrayLists are used when you need an automatically resizable list that also allows items to be inserted into or removed from the middle, and performance is not an issue. Hashes are used when you need to do fast lookups from one piece of data to another; the data is not sorted. Sorted lists are most useful when we have to sort a list of key/value pairs for which the ordering of the key is what matters, rather than the order of the values. For example, we might use a sorted list to hold entries in a dictionary.
11 2 Create an array with the following items: Dog, Cat, Elephant, Lion, Frog. Display it in a dropdown list alongside another dropdown list that gives options on how the array is sorted.

Sub Alphabet(sender As System.Object, e As System.EventArgs)
    If Request.Form("sort") = "alphabetical" Then
        Array.Sort(AnimalArray)
    Else
        ' Sort first so that reversing gives reverse-alphabetical order
        Array.Sort(AnimalArray)
        Array.Reverse(AnimalArray)
    End If
    MyDropDownList.DataBind()
End Sub
</script>

Using a hashtable, display a list of user names in a dropdown list with a submit button that displays the corresponding user ID when pressed. On the same page add two textboxes in which a user can enter new user names and IDs into the hashtable. The newly created user name should appear in the dropdown box, and the corresponding user ID should be displayed when the submit button is clicked.

Sub Page_Load(sender As Object, e As EventArgs) ' Page_Load is an assumed name for the containing handler
    If Not Page.IsPostBack Then
        For Each Item In Users
            Dim newListItem As New ListItem()
            newListItem.Text = Item.Value
            newListItem.Value = Item.Key
            myDropDownList.Items.Add(newListItem)
        Next
    End If
End Sub

Normalization is the process by which data is broken out of a larger table and placed into smaller tables for the purpose of eliminating redundancy, saving space, increasing performance and increasing data integrity.
12 2 Rewrite this section of code using the relevant connection object and Namespace so that it can be used to connect to an SQL Server database, and modify the connection string accordingly:
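Although the code to be rewritten is not reproduced above, a sketch of the SQL Server version might look like this (the server name and the use of integrated security are assumptions; System.Data.SqlClient and SqlConnection are the relevant namespace and connection object):

<%@ Import Namespace="System.Data.SqlClient" %>
...
Dim myConnection As New SqlConnection("server=(local); database=Northwind; integrated security=true")
myConnection.Open()
' ... use the connection here ...
myConnection.Close()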
(a) Connect to Northwind and fill a DataSet with the Company Name and Contact Name fields of the Suppliers table, create a DataView of the Suppliers table, and bind it to an <asp:datagrid> server control to display the results. (b) Repeat the above exercise, but this time bind the DataGrid to the DataSet instead. (c) Now fill the same DataSet with the first names and last names of the Employees table as well, and create another <asp:datagrid> to display the results so that both tables appear on one page.
Once data is extracted from a database, ADO.NET disconnects from the database and works with the data independently. This data is called disconnected data. This is done so that the limited database connections are freed up as soon as possible, increasing performance. This is particularly important on the web, where thousands of users could be trying to use a database concurrently. Another reason for using disconnected data is that it suits the ASP.NET application architecture and allows us to build better, more robust applications in less time.
13 2 Load a DataSet with the Shippers table from Northwind and add the following data into the DataSet using a DataTable:
15 1 Explain what a User Control is, and under what circumstances you'd use User Controls in your pages.
A User Control is a section of ASP.NET code that can be reused as many times as necessary in a web site. It contains standard ASP.NET code that could otherwise have resided in each of the pages requiring that particular set of functionality; separating it out into a User Control promotes good coding practice and the reuse of code, and makes your code easier to debug and maintain.
15 2 Think of a scenario where using a User Control is beneficial, and explain what kind of controls you might have in that situation. Explain which parts of your code could be separated out into a Code Behind file, and why you would do this.
A User Control can be used in any situation where repetition of identical blocks of code is to be avoided: the repeated code is gathered into a single control. Examples of where this is useful are things like headers and footers on a web site, or menus that sit at the side of every page, but we can also use these controls for things like login controls.
15 3 Create a User Control that produces a user login control. You'll need to ask the users for a User ID, which will be an email address, and they'll need to enter their password.
Of course, this is all very basic, and you would be very unlikely to actually display this sort of information. You would be more likely to do a quick look through a database using the email address to search for a specific user's records, then display a welcome message with their name, rather than their email address. The database could also store previous purchases and other information, so this process is just the beginning of creating a personalized user experience.
15 4 Add some very basic validation to the control to check that they've entered a value in the email address box, and to check that the password field has a value in it too, also checking that the email is a valid email address, and that the password is exactly eight characters, with no spaces. Next, create a simple web form that displays this control on the page. Test your code to see if the validation is being performed correctly.
Validation controls are a very effective way to perform complex validation with ease. We can add some validation to our user control fairly simply. Our email address has two validators: the first checks to see if there is a value entered into the box, and the second contains a regular expression to check whether the data entered into the box is a valid format for an email address (in our case, we check that it follows the name@domain.tld pattern). The password control also has two validators: the first checks that the password has been entered, and the second checks that it is the correct length and does not include spaces.
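A sketch of the markup such a control might use, with ASP.NET validation controls (the control IDs and the exact regular expressions are illustrative assumptions, not reproduced from the book):

<asp:TextBox id="txtEmail" runat="server" />
<asp:RequiredFieldValidator runat="server" ControlToValidate="txtEmail"
    ErrorMessage="Please enter your email address" />
<asp:RegularExpressionValidator runat="server" ControlToValidate="txtEmail"
    ValidationExpression="\S+@\S+\.\S+"
    ErrorMessage="Please enter a valid email address" />

<asp:TextBox id="txtPassword" runat="server" TextMode="Password" />
<asp:RequiredFieldValidator runat="server" ControlToValidate="txtPassword"
    ErrorMessage="Please enter your password" />
<asp:RegularExpressionValidator runat="server" ControlToValidate="txtPassword"
    ValidationExpression="\S{8}"
    ErrorMessage="The password must be exactly eight characters, with no spaces" />

A RegularExpressionValidator only fires when the field is non-empty, which is why each box also needs a RequiredFieldValidator.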
15 5 As an additional exercise, you may want to write some code that connects to a database and retrieves the user's details, provided the email address and password that are entered match up with the records in the database. This would be useful for retrieving information on a customer's previous orders, etc. This exercise is open-ended and therefore no solution is given.
16 1 Explain the benefits of using Components, and what sorts of things we should encapsulate in a .NET assembly. When should we use compiled .dlls instead of Code Behind files and User Controls?
Encapsulating commonly-used functionality into components is a core concept of scalable programming. Re-using code is not only a time-saver, but it helps to ensure consistency in our applications. Compiling this reusable code into a .dll makes it very easy to re-distribute our functionality in a neat and tidy package. Code Behind files are more of a logical separation of code from the presentation HTML, to make it easier for two types of developer to work with the code. A web designer who is more interested in the aesthetics of a page can be left to concentrate on the display elements of a site, whereas a developer who is interested in programming logic can make use of the Code Behind file to amend the code to produce a programmatically different result. A User Control is more commonly used for repeated elements in a site that are only really going to be used in that site itself, and are less likely to be re-used in multiple sites. If, however, we're talking about elements that would be useful to many different sites, for example, a custom control that would obtain the Amazon ranking for a book and display it wherever you wanted, you could compile the control into a custom server control and distribute it to friends and family, or even sell it to other developers who might want to use this code to save them time.
16 2 What is business logic? Give an example of the sort of code that can be described as business logic, and talk about how this code can be reused elsewhere.
Business logic is the term we apply to code that is used for retrieving, storing, or manipulating data. For example, you could have a database that held information about your favorite albums. You could write a method that connects to that database, another method that retrieves the list of albums by a particular artist, another method that is used to retrieve all albums of a certain genre, and other methods that are used to add, edit, or remove an album from the database. You could then use this functionality for personal use to manage your collection, or you could extend it to incorporate reviews of each album or artist on a review web site.
16 3 Create a new component that converts from imperial units to metric and back again. You'll need four methods: Celsius to Fahrenheit, Fahrenheit to Celsius, Kilometers to Miles, and Miles to Kilometers. You'll need the following data:
Fahrenheit temperature = Celsius temperature * (9/5) + 32
Celsius temperature = (Fahrenheit temperature - 32) * (5/9)
1 Mile = 1.6093 Kilometers (to 4 decimal places)
1 Kilometer = 0.6214 Miles (to 4 decimal places)
16 4 Create an ASP.NET page that uses this functionality. One example might be a page about holiday destinations. Users in other countries might want to know distances in metric instead of imperial, or temperatures in Celsius rather than Fahrenheit.
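A sketch of such a component, assuming the class and method names below (compile it to a .dll, for example with the vbc command-line compiler, and place it in the application's bin directory):

Public Class UnitConverter

    Public Function CelsiusToFahrenheit(celsius As Double) As Double
        Return celsius * (9 / 5) + 32
    End Function

    Public Function FahrenheitToCelsius(fahrenheit As Double) As Double
        Return (fahrenheit - 32) * (5 / 9)
    End Function

    Public Function MilesToKilometers(miles As Double) As Double
        ' 1 mile = 1.6093 kilometers (to 4 decimal places)
        Return miles * 1.6093
    End Function

    Public Function KilometersToMiles(kilometers As Double) As Double
        ' 1 kilometer = 0.6214 miles (to 4 decimal places)
        Return kilometers * 0.6214
    End Function

End Class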
16 5 Additionally, you might want to access this functionality from a completely different type of site, for example, one that has some scientific purpose that requires unit conversion, a route planner that will provide results in both miles and kilometers, or a weather site that needs to provide today's temperatures in both Celsius and Fahrenheit. A completely different situation would be a cookery site that displayed instructions for cooking a meal in an oven set to either Celsius or Fahrenheit temperatures.
18 1 a. Explain the role of the Simple Object Access Protocol (SOAP) in web services. b. What is the purpose of the WSDL? c. How would you locate a web service that provides the functions you require?
a. The Simple Object Access Protocol (SOAP) is the protocol with which functions are called remotely in Web Services.
b. The WSDL is an XML file that specifies the parameters that are used in the Web Services. By means of the WSDL file, consumers know what parameters to send to the Web Service and what values they will receive.
c. To locate a Web Service, the UDDI service is used. Businesses register their Web Services on the UDDI database, which can then be searched for a service that may suit our needs.
18 2 Create a Web Service with a class name of Circles that calculates the area of a circle, the circumference of a circle and the volume of a sphere. (Area = (Pi)r²; Circumference = 2(Pi)r; Volume of a sphere = 4/3(Pi)r³)

<%@ WebService Language="VB" Class="Circles" %>

Imports System.Web.Services

Public Class Circles

    <WebMethod()> _
    Public Function Areaofcircle(radius As Decimal) As Decimal
        ' Area = Pi * r^2
        Return radius * radius * 3.142
    End Function

    ' The two methods below follow the same pattern, using the formulas
    ' given in the question; their names are illustrative.
    <WebMethod()> _
    Public Function CircumferenceOfCircle(radius As Decimal) As Decimal
        ' Circumference = 2 * Pi * r
        Return 2 * 3.142 * radius
    End Function

    <WebMethod()> _
    Public Function VolumeOfSphere(radius As Decimal) As Decimal
        ' Volume = (4/3) * Pi * r^3
        Return (4 / 3) * 3.142 * radius * radius * radius
    End Function

End Class
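With the service in place, a page could call it through a generated proxy. A minimal sketch, assuming a proxy class has been created with the wsdl.exe command-line tool and compiled into the application's bin directory (two of the method names were illustrative choices above):

Dim service As New Circles()
Response.Write("Area: " & service.Areaofcircle(3) & "<br />")
Response.Write("Circumference: " & service.CircumferenceOfCircle(3))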
Athletes return to the baseball field – precautions still in place ROCHESTER — Baseball players have returned to the field on Dexter Lane to sharpen their skills, but full-fledged games are not allowed yet, and athletes and coaches are taking precautions against the coronavirus. The American Legion Baseball League consists of athletes between the ages of 13 and 19 across the United States and Canada. The league canceled its summer season in light of the pandemic, but as restrictions have loosened, some coaches in southeastern Massachusetts have started to host tryouts and workouts in the hopes of being able to compete against each other in an independent “Area X” league later this summer. On Saturday, June 13, coaches of the “Gateway” team hosted tryouts for players from Wareham, the Tri-Town, and surrounding areas. The tryouts were broken up into different groups throughout the day, and players spaced out across the field to mitigate any possible spread of the coronavirus. At 2:45 p.m. a group of about 20 athletes in the 16-19-year-old division took to the field for a series of drills, as coaches gauged their ability levels. Head Coach Keith Delgado said he hopes to have games by early July, though that is subject to change based on state guidelines, and that player safety is “first and foremost.” In the meantime, players can still work on throwing and batting, but “we’re not really getting into the contact side of it” yet, so things like tagging baserunners are off limits for now. Players also have to spread out as much as they can, and they are not allowed to use the dugouts, because it is impractical to maintain social distance in a closed space. Despite the limitations, many of the players were happy to get back on the field, especially since the high school spring sports season was canceled. AJ Lecuyer, 18, of Wareham said that just getting on the field for drills was “a lot better than we thought we'd get about three months ago,” when it seemed like the summer baseball season would be completely canceled. Lecuyer said that the thing he missed most about baseball during quarantine was being part of a team, and being around other people. Wareham High School senior Khai Delgado said that the summer league offered a chance to make up for a missed opportunity this spring. He added that baseball is a sport well-suited to social distancing. “I know keeping distance on a baseball field isn’t the hardest thing in the world,” he said, since defenders don’t have to “cover” the other team like in basketball or football. After players checked in, signed waivers, and placed their bags a few feet apart from each other, coaches led them through a series of drills, giving players a chance to showcase their ability. After a warmup of running and playing catch, coaches tested players’ fielding abilities. Catchers were challenged to catch a pitcher’s throw, and then pass to infielders with speed and accuracy. Infielders stopped ground balls and threw the ball to their teammates, and outfielders caught fly balls and hurled the ball back to home plate or a cutoff man. From there, players took turns at batting practice, as coach Delgado threw them pitches. While this was a tryout and not a practice, Delgado said that he would like to let as many players join as possible. The Area X league will allow teams to have up to 22 players, while the American Legion only allowed 18-member teams.
While many players did their best to stay in shape during the quarantine, Saturday’s tryout session allowed them to get back on a baseball field and hone game-specific skills. Wareham’s Connor Reidy, 17, said that the session “definitely shook off some rust.” To learn more about the Wareham-based team, see its Facebook page.
https://theweektoday.com/article/athletes-return-baseball-field-%E2%80%93-precautions-still-place/48392
San Diego’s smallest, sweetest theater An intimate theater specializing in new plays, melodramas, improvised theater, acoustic music, and Vaudeville-style variety shows. The concessions stand features over 60 types of old-fashioned candy: salt water taffy, candy sticks, root beer barrels, Mary Janes, and old-fashioned candy bars, with hard candies and soft candies and even gum and cold drinks. They produce the North Park Playwright Festival each year, featuring new, short plays by playwrights from around the world. They teach the STARS theater arts program for individuals with developmental challenges.
https://theboulevard.org/listing-item/north-park-vaudeville/
We are continuously collecting information related to the Emergency Medical Services in Europe. Help us improve our content and fill out the online form. If you are not able to find what you are looking for, please let us know by sending us a message through our contact page. Your feedback is important to us and will be fed into our database of EMS courses.
Smart Ambulance: European Procurers Platform (SAEPP) - Funding: FP7 - From: 2015-01-01 to 2015-08-31
Description: Create and collate a consensus of agreement from ambulance users and procurers on the core technology-centric features which, if correctly integrated into a suitably re-designed ambulance, would allow them to demonstrate, evaluate and deliver new models of in-community healthcare delivery, with the primary objective of avoiding unnecessary hospital attendances (and thus admissions) and the associated patient distress and hospital costs.
Platform for European Medical Support During Major Emergencies (PULSE) - Funding: FP7 - From: 2014-05-01 to 2016-10-31
Description: The project will conduct (a) a comprehensive study of the procedures, processes and training requirements currently in operation in European health systems, using the support of the end users available to the project. It will then (b):
• Develop standard and consistent response procedures and processes;
• Provide tools to support decision making;
• Provide a framework that ensures decision makers have access to timely key data, planning and decision support tools, and to international best practice and lessons learnt;
• Present innovative training techniques to improve personnel response training;
• Develop an ‘emergency app’ for smart phones that will allow users fast and flexible access to emergency resource availability information.
The primary benefits will be to:
• Reduce administration and bureaucracy and ensure better use of resources;
• Build on research results from previous EU projects which have developed usable analysis of societal and political criteria and their relevance to security measures.
Fire and Rescue Innovation Network - Funding: H2020 - From: 2017-05-01 to 2022-04-30
Description: FIRE-IN has been designed to raise the security level of EU citizens by improving the national and European Fire & Rescue (F&R) capability development process. The project aims at increasing the effectiveness with which practitioners coordinate on operational needs, on available research and innovation, on standardisation, and on test & demonstration and training.
Standardisation of situational Awareness sYstems to Strengthen Operations in civil protection - Funding: H2020 - From: 2017-05-01 to 2019-04-30
Description: Current Situational Awareness (SA) solutions are not adapted to operate in cross-border contexts and present several shortcomings related to interoperability, data management/processing, decision making, standardisation and procurement. This hinders the reliable sharing of SA information. SAYSO will address these shortcomings and pave the way for the development of innovative, cost-effective European Multi-Stakeholder SA Systems (MSSAS), which will provide practitioners with user-friendly solutions giving a clear picture of the situation at hand, together with relevant advice.
Interoperability Profiles for Command/Control Systems and Sensor Systems in Emergency Management (C2-SENSE) - Funding: FP7 - From: 2014-04-01 to 2017-09-30
Description: Effective management of emergencies depends on timely information availability, reliability and intelligibility.
To achieve this, different Command and Control (C2) Systems and Sensor Systems have to cooperate, which is only possible through interoperability. To address this challenge, the C2-SENSE project will use a “Profiling” approach to achieve seamless interoperability by addressing all the layers of the communication stack in the security field. In this respect, the C2-SENSE project's main objective is to develop a profile-based Emergency Interoperability Framework through the use of existing standards and semantically enriched Web services to expose the functionalities of C2 Systems, Sensor Systems and other emergency/crisis management systems.
DRiving InnoVation in crisis management for European Resilience (DRIVER+) - Funding: H2020 - From: 2019-05-01 to 2020-04-30
Description: DRIVER+ focuses on augmenting rather than replacing existing capabilities. DRIVER+ has three main objectives: 1) develop a pan-European test-bed for crisis management capability development; 2) develop a well-balanced, comprehensive portfolio of crisis management solutions; and 3) facilitate a shared understanding of crisis management across Europe.
Network Of practitioners For Emergency medicAl systems and cRitical care (NO-FEAR) - Funding: H2020 - From: 2018-06-01 to 2023-05-31
Description: NO-FEAR proposes to bring together a pan-European network of practitioners, decision and policy makers in the medical and security fields. They will collaborate to achieve a common understanding of needs and, in collaboration with academia and industry, to increase the EU innovation potential that could better fill the operational gaps and recommend areas for future innovation.
Innovation activity to develop technologies to enable a pan-European interoperable broadband mobile system for PPDR, validated by sustainable testing facilities (BroadWay) - Funding: H2020 - From: 2018-06-01 to 2023-05-31
Description: The BroadWay project will take the first procurement steps to enable an interoperable next generation of broadband radio communication systems for public safety and security, to improve Public Protection and Disaster Relief (PPDR) organisations' service to Europe's citizens and enhance interoperability across borders.
Arctic and North Atlantic Security and Emergency Preparedness Network - Funding: H2020 - From: 2018-09-01 to 2023-08-31
Mediterranean practitioners' network capacity building for effective response to emerging security challenges
Tools for early and Effective Reconnaissance in cbRne Incidents providing First responders Faster Information and enabling better management of the Control zone (TERRIFFIC) - Funding: H2020 - From: 2018-05-01 to 2021-04-30
Description: TERRIFFIC will enrich the European response to RNe events with a set of modular technology components in a comprehensive system, including new detectors, algorithms, drones, robots, dispersion models, information management software and decision support systems.
https://www.iprocuresecurity.eu/emsworld/projects
" Visitors love to see rare red squirrels here. But support is vital to help fund their continued protection in our woodlands." Why give to the National Trust? As a charity, we rely on the generosity of supporters to look after the outdoor spaces in our care. Not only do our supporters help to conserve beautiful landscapes and protect precious plants and wildlife. But they also ensure that future generations have places they can find freedom from everyday life, reconnect with the natural world and make memories to treasure. Your donation will go towards this project and other vital conservation work in the outdoors at this special place. With your support, we can continue to protect the irreplaceable. For everyone, for ever.
https://www.nationaltrust.org.uk/appeal/support-red-squirrels-in-the-lake-district
Richard E. Marczak, 86, passed away peacefully on July 8, 2018. He is survived by his wife of 60 years, Rose Mary Marczak; his children, Richard C. Marczak, Barbara Rivetti and her partner Bob Symanski, and Robert Marczak and his wife Cheryl; his grandchildren, Joseph Rivetti, Robert J. Marczak, Lucas Richard Marczak and Matthew Marczak; and his siblings, Robert Marczak and Eleanor Cleffi. Richard served in the United States Armed Forces and was a local business owner and entrepreneur. He owned Marczak’s Florist with his brother, Robert, for 36 years. He also bought and sold real estate before ‘house flipping’ was trendy. Richard was a lifelong Union, NJ resident and a beloved husband and father. His generosity, compassion, selflessness and loving spirit will be missed. He will forever be remembered for his ability to nurture small seedlings into beautiful flowers and plants. A visitation will be held for Richard on Tuesday, July 10 from 4-8pm, with a service at 7pm, at McCracken Funeral Home, 1500 Morris Ave, Union, NJ.
https://www.dignitymemorial.com/obituaries/union-nj/richard-marczak-7907547
Colleges, universities and graduate schools in the state of Maine that offer graduate programs in the areas of engineering and applied sciences, computer science, business management, biochemistry, biomedicine, medicine, law, education, chemical engineering, mechanical engineering and fine arts, leading to an MS, MBA or a Ph.D. The college is accredited by the New England Association of Schools and Colleges, the Carnegie Foundation for the Advancement of Teaching, and the American Chemical Society. They offer students bachelor's degrees in arts and sciences. The most popular majors at Bates are economics, psychology, biology, English, political science, history, and environmental studies. Bowdoin is an independent, nonsectarian, coeducational, residential, undergraduate liberal arts institution. The college offers programs in the Africana Studies Program, Asian Studies Program, Gay and Lesbian Studies Program, Senior Center Program, and Women's Studies Program. The college offers fifty-three majors through 25 departments and in 11 programs. The top four majors are economics, biology, English and government. Colby awards the bachelor of arts, or B.A., degree. More than 1,800 students are enrolled in the college. The College awards one undergraduate degree, the BA in human ecology, which indicates that students understand the relationships between the philosophical and fundamental principles of science, humanities, and the arts. The college's curriculum consists of Arts & Design, Environmental Sciences, Human Studies, Educational Studies and many more. The college offers associate of science degrees, bachelors degrees and masters degrees. Masters degrees are awarded in the fields of business, nursing, occupational therapy and physical therapy. Lewiston-Auburn College offers the following interdisciplinary baccalaureate degrees: Arts and Humanities, Leadership and Organizational Studies, Natural and Applied Sciences, and Social and Behavioral Sciences. The following three baccalaureate programs are also available at the Lewiston-Auburn Campus as extension programs from USM's School of Applied Science, Engineering, and Technology and USM's College of Nursing and Health Professions: Industrial Technology, Nursing BS, and the Nursing RN to BS Option. Masters are granted in the following fields: leadership studies, literacy education and occupational therapy. The college's graduate degree, the Master of Fine Arts in Studio Arts program, is a two-year program designed for emerging artists. Early College at Maine College of Art provides an exciting and challenging opportunity for approximately 45 motivated high school students to explore their creativity and earn college credit in a four-week intensive visual arts program. The Academic Division of Maine Maritime Academy comprises 7 departments, which function as administrative units for the organization of faculty, curriculum, and academic support services: Arts & Sciences, Engineering, Thompson School of Marine Transportation, Naval Science, Corning School of Ocean Studies, Library Services and Loeb-Sullivan School of International Business & Logistics. MMA now offers Army ROTC in an affiliation agreement with UMaine. The following are the programs offered by the college: Bachelor of Science program in Bible & Missions, Bachelor of Science program in Bible & Theology, and Bible Certificate Program. NMCC offers programs of study, including options and concentrations, leading to the associate in applied science degree, the associate in science degree, or the associate in arts degree.
Several programs also offer diploma and certificate levels. The departments through which the college offers its programs are: Arts & Sciences, Business Technology, Nursing & Allied Health, and Trade & Technical. Saint Joseph's College is an accredited liberal arts college with a residential campus in southern Maine. The private, four-year school offers more than 30 academic programs. The college awards the Bachelor of Arts, Bachelor of Science, Bachelor of Science in Biology and Bachelor of Science in Business Administration. Thomas College is accredited by the New England Association of Schools and Colleges, Inc. A Bachelor of Arts degree is offered in psychology and political science/government. In addition, the college offers a Bachelor of Science in various disciplines. A Masters in Business Administration is also offered at Thomas College. They also grant associate of arts and science degrees. Unity College offers 20 Bachelor's and 3 Associate's degrees. We offer more environmental programs than any college in the U.S. The college is accredited by the New England Association of Schools and Colleges. The University of Maine at Augusta is divided into three primary academic units or colleges: Arts & Humanities, Mathematics & Professional Studies, and Natural & Social Sciences, with each offering bachelor's degrees in arts and sciences, associate degrees in arts and sciences, and also certificate programs. The New England Association of Schools and Colleges (NEASC) and the National Council for Accreditation of Teacher Education (NCATE) accredit the college. Degrees offered by the college are: Bachelor of Arts, Bachelor of Fine Arts, Bachelor of General Studies, and Bachelor of Science. The University's academic programs are structured within four divisions: Arts and Humanities, Education, Natural and Behavioral Sciences, and Nursing. The University houses about 925 students. The University serves about 1,000 students. They are accredited by the New England Association of Schools and Colleges and the National Recreation and Park Association. UMM offers Bachelor of Arts and Bachelor of Science degrees in 14 major programs, as well as opportunities for self-designed programs, certifications, and licenses. The University offers students 88 bachelor's degree programs, 64 master's degree programs, and 25 doctoral programs. Academic programs are offered in Business, Public Policy and Health, Education and Human Development, Engineering, Liberal Arts and Sciences, Natural Sciences, Forestry and Agriculture, and University-Wide Programs. The various academic programs are offered through the following departments of the University: Business and International Studies Department; English and Fine Art Department; Exercise Science/Physical Education, Recreation/Leisure Services, Athletic Training, and Health Department; Psychology, Social Work and Criminal Justice Department; Science and Mathematics Department; and the Teacher Education Department. The University follows a three-year curriculum, which is both intensive and challenging. The curriculum pattern is broad and general in nature, offering courses in subjects as diverse as international business transactions and environmental law, as well as traditional core courses. University College offers access to quality public higher education statewide. Courses are offered on-site, via interactive television (ITV), or online at 11 University College Centers.
The University offers: Associate Degrees, Graduate Degrees, Bachelor's Degrees, Undergraduate Academic Minors and a number of Certificate/Certification Programs. The programs are offered through the College of Arts & Sciences, the College of Health Professions and the College of Osteopathic Medicine. USM offers graduate and doctoral degree programs through its various schools and colleges. The schools and colleges at USM are: the School of Applied Science, Engineering, and Technology; the College of Arts and Sciences; the School of Business; the College of Education and Human Development; the University of Maine School of Law; Lewiston-Auburn College; the Muskie School of Public Service; and the College of Nursing and Health Professions.
https://us.2graduate.com/united_states_colleges_universities_graduate_schools/Maine/
The government on Friday said that the country has the potential to maintain a high growth rate in the long run, as the saving and investment rates are robust. "Yes, sir. The savings and investment ratios, which determine the growth potential of the economy, are robust," Minister of State for Finance Pawan Kumar Bansal said in a written reply in the Lok Sabha. He said gross domestic savings increased to 34.3 per cent of gross domestic product in 2006-07 from a level of 31.8 per cent of GDP in 2004-05. Gross domestic capital formation for the same period rose to 35.9 per cent of GDP from 32.2 per cent in 2004-05. The 11th Five-Year Plan envisages an average growth rate of 9 per cent per year and a continuation of the uptrend in domestic investment observed in the 10th Plan to an average of 36.7 per cent of GDP, which will be supported by a domestic savings rate of 34.8 per cent of GDP, Bansal said. Economic growth during the first half of the current fiscal stood at 7.8 per cent, against 9.3 per cent a year ago. However, due to the global economic meltdown, economists and experts have scaled down GDP projections for the current fiscal. The Prime Minister's Economic Advisory Council has said it will "re-look at the GDP forecast", while multilateral funding agencies like the IMF and the World Bank have forecast a further moderation in the country's economic growth, to 7.8 per cent and 6.3 per cent in 2008, respectively. Fears of a slowdown in the Indian economy were accentuated after industry recorded negative growth in October for the first time in 15 years.
http://business.rediff.com/money/2008/dec/12bcrisis-india-can-sustain-high-growth-bansal.htm
Book Description: This report conveys the Bank's global priorities and programs to help countries progress toward the international education goals and improve the quality of teaching and learning. Buy the Education Sector Strategy book by World Bank from Australia's Online Bookstore, Boomerang Books.
Book Details: ISBN: 9780821345603; ISBN-10: 0821345605; (241mm x 203mm); Pages: 94; Imprint: World Bank Publications; Publisher: World Bank Publications; Publish Date: 31-Aug-1999; Country of Publication: United States
Books By Author World Bank:
Little Data Book, Paperback (May 2007): A pocket-sized edition of the World Bank's "World Development Indicators 2007" publication, which contains information on more than 900 indicators for some 150 economies and 14 country groups. It provides information under the following headings: People; Environment; Economy; States and Markets; and Global Links.
From Competition at Home to Competing Abroad, Paperback (March 2007): This report is based on a supply chain analysis of Indian horticulture products such as fresh fruits and vegetables like apples, grapes, potatoes, peas, onions, and others, based on primary surveys.
Unlocking Opportunities for Forest-Dependent People, Paperback (June 2006): Argues that forests offer a potential for poverty reduction in India and tries to improve the understanding of Joint Forest Management in India.
India, Paperback (June 2005): Agriculture contributes only about a quarter of India's total GDP, but its importance in the economic, social, and political fabric of India goes beyond what mere numbers indicate. Conceptualized and written by noted agricultural experts and development economists in India and abroad, this report is for government, academic, activist, and other readers.
http://www.boomerangbooks.com.au/Education-Sector-Strategy/World-Bank/book_9780821345603.htm
We are exploring the beautiful northern part of Thailand for this month’s ‘International Food Challenge‘. Northern Thailand is blessed with the best scenery in the kingdom; it is a region of forests and mountains, rivers and waterfalls. It is home to Thailand’s hill tribes, whose conventions and customs differ from mainstream Thai culture. Rice is the staple of this region, and they use both ordinary and glutinous varieties. Food is seasoned with fish sauce and chili peppers. Our host for this month, Sangeetha Priya, gave us some amazing Northern Thai dishes to try. After looking through the dishes, I knew right away that I had to make this Sticky Rice with Mango. This happens to be one of our favorite family desserts. My husband tasted the authentic version of this dish many times on his business trips to Thailand, and I have tried it quite a few times at home over the years. With the season's best mangoes in the market now, this is a perfect dish to make with a few of them. I adapted the recipe from my Thai vegetarian cookbook. This turned out amazing.
Thai Sticky Rice with Mango
Servings: 4 | Prep Time: 20 minutes | Cook Time: 30 minutes
- ½ cup Jasmine Rice
- ¼ cup Sugar (adjust as per taste)
- 1 cup Coconut milk, divided
- ¼ tsp Corn starch
- a pinch of Salt
- 1 Mango, chopped
- Cook the jasmine rice in 1 cup of water until tender. I pressure cooked the rice.
- While the rice is cooking, combine ½ cup of coconut milk with 2 tbsp sugar and the salt. Cook until the sugar is dissolved and the mixture is hot.
- Pour this hot coconut milk over the hot cooked rice. Cover and set aside for 10~15 minutes. Now mix the rice with the coconut milk, cover again and set aside to cool completely.
- Combine the remaining ½ cup of coconut milk with 2 tbsp sugar. Cook until the sugar is dissolved and the mixture is almost boiling.
- Whisk the corn starch in 2 tsp water. Slowly add this to the coconut milk mixture and cook until it starts to thicken, about 2~3 minutes.
- When ready to serve, place the cooled rice on the serving plate. Top it with the chopped mango and drizzle the thick coconut milk sauce on top. Serve immediately.
http://www.cookshideout.com/thai-sticky-rice-with-mango-recipe
- Mosaic tiles
- Suitable for wet areas, kitchens, bathrooms, showers and swimming pools
- Material: glass
- Surface: smooth
- Colour: pearl white, with an iridescent shimmer (light is reflected differently by the surface and creates a soft play of colors)
- Mat size: 33cm x 33cm
- Thickness: 4mm
- Backing: glued to a flexible mesh
https://www.complement-fusion.com/en/products/mt0131
1. Introduction =============== Interactions with small ligand molecules are an essential aspect of protein function. Thus, predicting the binding ligands of a protein provides important clues to its biological function. Since close to 4000 protein tertiary structures have been solved whose function remains unknown \[[@B1-ijms-15-15122]\], there is an urgent need to develop computational methods for structure-based function prediction. Computational prediction can help build hypotheses about protein function that can later be tested by experiments. Binding ligands for proteins can in principle be predicted by identifying similar known binding pockets in known protein structures. Several strategies for predicting binding ligands by pocket comparison have been proposed in the past \[[@B2-ijms-15-15122],[@B3-ijms-15-15122],[@B4-ijms-15-15122],[@B5-ijms-15-15122],[@B6-ijms-15-15122],[@B7-ijms-15-15122],[@B8-ijms-15-15122],[@B9-ijms-15-15122]\]. For instance, Hoffmann *et al.* measured pocket similarity based on the alignment of protein pockets using a convolution kernel between clouds of atoms in 3D space \[[@B2-ijms-15-15122]\]. Catalytic Site Atlas \[[@B10-ijms-15-15122]\] and AFT \[[@B11-ijms-15-15122]\] compare a few functional residues in binding pockets and quantify the pocket similarity with the root mean square deviation (RMSD) of the residues. Naturally, protein function prediction methods can be extended to identify chemical compounds that bind to a target protein as a part of drug design. In the drug discovery field, there are two major categories of computational methods for binding ligand prediction: ligand-based methods and structure-based methods. The ligand-based methods derive critical chemical features from a compound or set of compounds that are known to bind to a target and use these features to search for compounds with similar properties in a virtual compound library. This can be done by a variety of methods, including similarity and substructure searching \[[@B12-ijms-15-15122],[@B13-ijms-15-15122],[@B14-ijms-15-15122],[@B15-ijms-15-15122]\], 3D shape matching \[[@B16-ijms-15-15122],[@B17-ijms-15-15122]\], and searching with Quantitative Structure-Activity Relationship (QSAR) models \[[@B18-ijms-15-15122],[@B19-ijms-15-15122],[@B20-ijms-15-15122],[@B21-ijms-15-15122]\]. The advantage of such methods is that no target information is required. However, a major drawback of the ligand-based approaches is their dependency on the chemical features present in the known actives. Physico-chemical features that are absent in the set of active compounds used to derive the model are often neglected. Therefore, active compounds with novel scaffolds are rarely, if ever, recognized during the screening process. Alternatively, when the structure of the target protein is known, structure-based methods can be used. Structure-based methods do not require *a priori* knowledge of active ligands; therefore the models are not biased by the chemical space of previously identified actives. One of the most widely used structure-based tools is molecular docking. The aims of docking are to predict the correct binding pose of a small molecule in the target protein's binding site and to provide an estimate of the affinity of the small molecule. Many docking programs have been developed in the past decades and have been successfully applied in virtual screening studies \[[@B22-ijms-15-15122],[@B23-ijms-15-15122]\].
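For reference, the RMSD used by such residue-based comparisons is the standard quantity over $N$ matched atom positions $\mathbf{x}_i$ and $\mathbf{y}_i$ (a textbook definition, not an equation reproduced from this paper):

$$\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left\lVert \mathbf{x}_i - \mathbf{y}_i \right\rVert^{2}}$$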
In molecular docking programs, the protein and the ligand are described by one of three representations: grid, atomic, and surface \[[@B24-ijms-15-15122]\]. The grid representation, such as GRID \[[@B25-ijms-15-15122]\], stores the receptor's energy contribution on grid points to accelerate the scoring of ligand poses in the initial search algorithms. Therefore, it is widely used in various docking programs in the early stage of ligand pose selection. The atomic representation is generally used in the final scoring of the binding poses in combination with an atom-based potential energy function \[[@B24-ijms-15-15122]\], as used in AutoDock \[[@B26-ijms-15-15122],[@B27-ijms-15-15122]\], Glide \[[@B28-ijms-15-15122]\], DOCK \[[@B29-ijms-15-15122]\], PharmDock \[[@B30-ijms-15-15122]\], and many other docking programs \[[@B24-ijms-15-15122]\]. The surface-based representation, on the other hand, is typically used in protein--protein docking \[[@B31-ijms-15-15122],[@B32-ijms-15-15122],[@B33-ijms-15-15122]\], for example in LZerD \[[@B34-ijms-15-15122]\] and ZDOCK \[[@B33-ijms-15-15122]\]. In our efforts to predict the functions of proteins, we have developed an alignment-free, surface-based pocket comparison program named PatchSurfer \[[@B8-ijms-15-15122],[@B35-ijms-15-15122]\]. PatchSurfer represents a binding pocket as a combination of segmented surface patches, each of which is characterized by its geometrical shape, electrostatic potential, hydrophobicity, and concaveness. The shape and the three physicochemical properties of surface patches are represented using the 3D Zernike descriptor (3DZD), which is a series expansion of a mathematical 3D function \[[@B36-ijms-15-15122],[@B37-ijms-15-15122]\]. Given a query pocket in a protein, PatchSurfer searches a database of known pockets and finds those similar to the query based on surface-patch similarity. PatchSurfer was benchmarked on three different datasets and has shown superior performance to existing methods \[[@B8-ijms-15-15122]\]. PL-PatchSurfer is being developed to explore the utility of including ligand surface patches in our existing PatchSurfer methodology. PL-PatchSurfer represents both a protein binding pocket and a ligand molecule by their surface properties and identifies the optimal complementarity between the pocket and the ligand surface. The advantages of the surface representation are that it is less sensitive to subtle changes in pocket and ligand conformation and that the search speed can be quite fast. Each surface patch characterizes geometrical and physicochemical properties of a protein pocket and ligand on a continuous surface. We first tested PL-PatchSurfer on the binding ligand prediction problem using a dataset with 146 protein structures binding 12 different ligand types, and studied the influence of ligand conformations on the performance of PL-PatchSurfer. We then evaluated and optimized the performance of PL-PatchSurfer in identifying native contacts on a large set of known protein-ligand complex structures from the PDBbind database \[[@B38-ijms-15-15122],[@B39-ijms-15-15122]\]. Finally, we tested PL-PatchSurfer on the directory of useful decoys (DUD) dataset to examine how it performs in virtual screening on a large and structurally diverse dataset. To the best of our knowledge, PL-PatchSurfer is the first surface patch-based method that utilizes descriptors derived from the surface properties of ligands.
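As background on the descriptor itself (following the 3DZD literature cited above, not an equation reproduced from this paper), the rotationally invariant 3DZD components $F_{nl}$ are obtained by collapsing the 3D Zernike moments $\Omega_{nl}^{m}$ of the surface function over the order $m$:

$$F_{nl} = \sqrt{\sum_{m=-l}^{l} \left| \Omega_{nl}^{m} \right|^{2}}$$

Because the $m$ index is summed out, the descriptor is invariant to rotation of the surface, which is what allows patches to be compared without alignment.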
The performance of PL-PatchSurfer, when compared to our previous benchmarks using a pocket-based approach, sheds further light on the utility of surface-based methods. In the Conclusions, we summarize the characteristic performance of PL-PatchSurfer and discuss the usefulness of the new approach. 2. Results and Discussion ========================= 2.1. Binding Ligand Prediction on Huang Data Set ------------------------------------------------ PatchSurfer was originally developed for predicting the functions of unknown proteins based on ligand binding site similarities to known proteins. In our previous study \[[@B8-ijms-15-15122]\], we used PatchSurfer to predict the binding ligands of the proteins in the Huang data set \[[@B40-ijms-15-15122]\] ([Table 1](#ijms-15-15122-t001){ref-type="table"}), based on the principle that structurally similar binding pockets bind similar ligands. PL-PatchSurfer takes a complementary approach to PatchSurfer in that it predicts the binding ligand of a given protein based on the molecular surface complementarity between the ligand and the protein pocket. As a comparison with PatchSurfer, we first tested the performance of PL-PatchSurfer on the Huang dataset. [Table 1](#ijms-15-15122-t001){ref-type="table"} summarizes the number of binding pockets of the twelve different ligand types in the dataset, as well as the average number of surface patches that represent the pockets and the ligand molecules. The number of pocket patches and ligand patches correlates well, with a correlation coefficient of 0.953. For each ligand, a maximum of 20 ligand conformations were generated using the Omega program from OpenEye \[[@B17-ijms-15-15122],[@B18-ijms-15-15122],[@B19-ijms-15-15122]\]. Depending on the rigidity of the ligand, some ligands have no more than five conformations.

###### Table 1. Huang data set.

| Ligand Name | # Pockets | # Omega Conformers | Avg # Pocket Patches | Avg # Ligand Patches |
| --- | --- | --- | --- | --- |
| **AND** | 12 | 20 | 22.3 | 18.1 |
| **BTN** | 8 | 20 | 18.6 | 17.7 |
| **F6P** | 10 | 20 | 19.8 | 16.5 |
| **FUC** | 8 | 2 | 8.6 | 11.5 |
| **GAL** | 32 | 3 | 13.9 | 12.7 |
| **GUN** | 11 | 1 | 14.6 | 10.0 |
| **MAN** | 15 | 6 | 9.3 | 11.5 |
| **MMA** | 8 | 10 | 13.0 | 13.4 |
| **PIM** | 5 | 2 | 14.0 | 12.0 |
| **PLM** | 24 | 20 | 28.3 | 24.1 |
| **RTL** | 5 | 20 | 31.2 | 25.9 |
| **UMP** | 8 | 20 | 22.5 | 19.2 |
| **Total** | 146 | 144 | - | - |

AND: adenosine; BTN: biotin; F6P: fructose 6-phosphate; FUC: fucose; GAL: galactose; GUN: guanine; MAN: mannose; MMA: O1-methyl mannose; PIM: 2-phenylimidazole; PLM: palmitic acid; RTL: retinol; UMP: 2′-deoxyuridine 5′-monophosphate; #: number.

The procedure for testing the performance of PL-PatchSurfer on the Huang data set was as follows: each ligand binding pocket was selected as a query, which was compared with the ligands in the dataset, and the similarity score between the pocket and each ligand was computed using the so-called *Totalscore~PS~* (Equation (13); see the Experimental Design section). The *Totalscore~PS~* quantifies the similarity of a pocket and a ligand molecule by considering local surface similarity, the relative positions of corresponding surface patches on the pocket/ligand, and the size of the pocket/ligand, using corresponding patch pairs identified by a distance score of patches (Equation (12)). The ligands were sorted by the *Totalscore~PS~*, which was used to finally predict binding ligands for the query pocket by the PocketScore (Equation (16)). Two tests were performed with PL-PatchSurfer.
First, we used the bound ligand conformation from the X-ray crystal structure. For a query pocket, the bound conformation for the pocket itself was either included in or excluded from the ligand dataset. The fraction of query pockets whose bound ligand type was correctly predicted at the highest score, or within the top-3 highest scoring ligands, is reported in [Figure 1](#ijms-15-15122-f001){ref-type="fig"}. Compared to the results from pocket-pocket comparisons using PatchSurfer, PL-PatchSurfer performs 7.1% better in ranking the correct binding ligand at the top-1 position. When the top-3 positions were considered, PL-PatchSurfer still performed slightly better than PatchSurfer. Interestingly, excluding the native ligand conformation of the query pocket did not change the result much; in fact, it even showed a slight improvement in the success rate. ![Performance of PL-PatchSurfer and PatchSurfer on the Huang data set. For PL-PatchSurfer, three tests were performed: (i) X-ray: the bound ligand conformations of all the tested proteins are extracted to form the "X-ray ligand conformation database"; (ii) X-ray without native conformation: the native ligand conformation of the query pocket was removed from the "X-ray ligand conformation database"; and (iii) Omega: a maximum of 20 ligand conformations with the lowest internal energies are computationally generated by OpenEye Omega. The data for PatchSurfer were extracted from a previous publication \[[@B8-ijms-15-15122]\].](ijms-15-15122-g001){#ijms-15-15122-f001} It is remarkable that PL-PatchSurfer showed a higher success rate than PatchSurfer, because PatchSurfer has been extensively compared with existing methods in our previous works. It was demonstrated in our previous paper that the patch representation for pockets used in PatchSurfer was effective in achieving a higher accuracy than PocketSurfer \[[@B36-ijms-15-15122]\], which represents a pocket as a rigid body with a single surface descriptor \[[@B8-ijms-15-15122]\]. It was also shown that the 3D Zernike descriptor, the mathematical surface representation used in PatchSurfer (see Experimental Design for more about the 3D Zernike descriptor), had a higher accuracy than similar mathematical surface representations: spherical harmonics \[[@B41-ijms-15-15122]\], the 2D Zernike descriptor, pseudo-Zernike descriptors, and Legendre moments \[[@B8-ijms-15-15122]\]. Moreover, PatchSurfer also showed better prediction performance than four existing methods: eFseek \[[@B42-ijms-15-15122]\], SiteBase, PROSURFER, and XBSite2F \[[@B37-ijms-15-15122]\]. In a ligand virtual screening experiment, a bound ligand conformation for a target pocket is not always available. Furthermore, the ligand conformation with the lowest internal energy is not necessarily the bound conformation for a target-binding site. Therefore, a set of ligand conformations is usually pre-generated before performing virtual screening, or generated on-the-fly during the screening process. In the second test, we used computer-generated ligand conformations produced with the Omega program from OpenEye \[[@B43-ijms-15-15122],[@B44-ijms-15-15122],[@B45-ijms-15-15122]\] ([Table 1](#ijms-15-15122-t001){ref-type="table"}). As shown in [Figure 1](#ijms-15-15122-f001){ref-type="fig"}, the top-1 success rate of PL-PatchSurfer using Omega-generated conformers was lower than the top-1 success rate using X-ray ligand conformations.
However, interestingly, the top-3 success rate using Omega-generated ligand conformers was slightly higher than the top-3 success rate using the X-ray ligand conformations. In summary, the results on the Huang dataset show that the ligand-based patch method implemented in PL-PatchSurfer gives significantly higher accuracy in the top-1 success rate than PatchSurfer, which indicates that correct ligands are recognized in a more specific fashion by PL-PatchSurfer than by PatchSurfer. In the top-3 success rate, the improvement by PL-PatchSurfer is marginal, but it still showed better results than PatchSurfer. 2.2. Optimization of the Search Algorithm on PDBbind Dataset ------------------------------------------------------------ In the work described for the Huang dataset, we used PL-PatchSurfer with parameters that were optimized for pocket-to-pocket comparison in PatchSurfer (Equations (12) and (13) in the Methods section). In this section, we optimize the parameters in Equation (4) that determine the contributions of different terms to the overall ligand-pocket matching score. A key step in PL-PatchSurfer is the identification of corresponding surface patch pairs from a pocket and from a ligand by minimizing a distance score, which is a linear combination of the differences in 3DZD, the relative geodesic position, and the relative geodesic distance (Equation (4)). We optimized the weights in the distance score (Equation (4)) using the PDBbind core set \[[@B38-ijms-15-15122],[@B39-ijms-15-15122]\]. Ideally, a ligand patch identified as a match to a protein pocket patch should localize in the vicinity of the given protein patch in the ligand-bound structure of the protein, so as to form inter-molecular interactions. To evaluate the performance of PL-PatchSurfer in identifying matching patches, we computed the match success rate on known protein--ligand complex structures from the PDBbind core set. If the distance between the centers of the identified matching patches is within a cutoff distance, we considered the matching pair correctly identified and counted it as a success ([Figure 2](#ijms-15-15122-f002){ref-type="fig"}). As the cutoff distance, we primarily used 5.0 Å but also tested 3.0, 4.0, and 6.0 Å. The match success rate is the number of correct contacting patches identified by PL-PatchSurfer divided by the actual number of correct contact pairs between the protein and the ligand for each complex structure (see Experimental Design for details). We first tested PL-PatchSurfer with the default weights that were optimized for PatchSurfer in the pocket--pocket comparison \[[@B8-ijms-15-15122]\], which showed a success rate of 33.1% for the 5.0 Å cutoff distance ([Table 2](#ijms-15-15122-t002){ref-type="table"}). We then optimized the weights following the approach described in the Method section. After the optimization process, the success rate increased to 39.1%, with an average of 10 correct contacting patch pairs identified for each protein-ligand complex. [Table 2](#ijms-15-15122-t002){ref-type="table"} also shows that the optimization is effective over a range of distance thresholds. The distribution of the success rate for individual protein--ligand complexes is plotted in [Figure 3](#ijms-15-15122-f003){ref-type="fig"}. It is clearly observed that the distribution of the match success rate was improved by the optimization. ![Definition of the success rate in determining correctly matched patches.
![Definition of the success rate in determining correctly matched patches. For each protein--ligand complex structure, protein patches (cyan spheres) and ligand patches (red spheres) are generated. Using PL-PatchSurfer, all potential matching pairs were identified (examples of two potential matching pairs are shown in the red ellipses). Successful matches are defined as those where the distance between a paired ligand patch and a protein patch is within a cutoff distance.](ijms-15-15122-g002){#ijms-15-15122-f002}

ijms-15-15122-t002_Table 2

###### Coefficients and the success rate of different distance score functions.

| Setting | 3DZD Difference | Geodesic Distribution | Geodesic Distance | Success Rate (3.0 Å) | Success Rate (4.0 Å) | Success Rate (5.0 Å) | Success Rate (6.0 Å) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Default ^1^** | 0.32 | 0.48 | 0.2 | 12.3% | 21.6% | 33.1% | 44.0% |
| **Optimized** | 0.35 | 0.15 | 0.5 | 15.2% | 26.2% | 39.1% | 51.1% |
| **Random ^2^** | \- | \- | \- | 7.6% | 13.4% | 23.7% | 35.8% |

^1^ The coefficients are optimized for PatchSurfer in pocket--pocket comparison studies; ^2^ Matching patch pairs were randomly generated in the search process.

![Distribution of patch matching success rate. Frequency on the *y*-axis counts the number of individual protein--ligand complexes of a success rate. The distance threshold was set to 5.0 Å.](ijms-15-15122-g003){#ijms-15-15122-f003}

2.3. Performance of PL-PatchSurfer on Directory of Useful Decoys (DUD) Data Set
-------------------------------------------------------------------------------

In this next experiment, we explored the utility of PL-PatchSurfer in a virtual screening exercise on the DUD dataset, which is large and contains diverse active and decoy compounds. Knowing that PL-PatchSurfer performs well in binding ligand prediction, we wanted to investigate how it performs in virtual screening. A training process was carried out to identify the best parameters for the scoring function (Equation (8)) used in ranking the ligands. We split the protein targets randomly into a training and a testing set, with 12 targets for training and 13 for testing. Optimization on the training set led to a parameter set of 0.8, 0.0, 0.1 as weights for *Totalscore~PL~* (Equation (8)). A cross-validation switching the training and the testing sets yielded a parameter set of 1.0, 0.0, 0.6, which is similar to the one initially obtained. Interestingly, both parameter sets suggest that the relative distance score, *avgGrpd* (Equation (10)), in *Totalscore~PL~* (Equation (8)) does not contribute to the discrimination between the active and the decoy ligands. However, as the relative distance term *grpd* (Equation (7)) is also used in *Distance_score~PL~* (Equation (4)), which is used for finding matching patch pairs by the auction algorithm, the relative patch distance information is indirectly used in the entire search process. The weight of 0 for the relative distance term in *Totalscore~PL~* indicates that this term does not contain useful information for distinguishing different ligands in this dataset; the term is nevertheless useful for identifying corresponding patch pairs for a given pocket--ligand pair through *Distance_score~PL~*. Our optimization results suggest that differences in the 3DZD fingerprints are the major discriminator in ranking the active and decoy ligands. The area-under-the-curve (AUC) of the Receiver Operating Characteristic (ROC) plot for each protein target when it is used in the testing set is shown in [Figure 4](#ijms-15-15122-f004){ref-type="fig"}.
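The per-target AUC values can be computed directly from the ranked scores via the rank-statistic identity AUC = P(active ranked above decoy). A minimal sketch; since *Totalscore~PL~* is a distance-like score where smaller means a better fit, the comparison is inverted, which is an assumption about the sign convention:

```python
def roc_auc(active_scores, decoy_scores, smaller_is_better=True):
    """AUC of the ROC curve: the probability that a randomly chosen
    active is ranked ahead of a randomly chosen decoy (ties count 0.5)."""
    sign = -1.0 if smaller_is_better else 1.0
    wins = sum(1.0 if sign * a > sign * d else 0.5 if a == d else 0.0
               for a in active_scores for d in decoy_scores)
    return wins / (len(active_scores) * len(decoy_scores))

# Toy check: actives fit better (smaller Totalscore_PL) than all decoys
print(roc_auc([0.2, 0.3], [0.4, 0.5]))  # 1.0
```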
Overall, PL-PatchSurfer provided a better-than-random AUC value (above 0.5) for 12 out of the 25 protein targets. Notably, PL-PatchSurfer was able to achieve an AUC over 0.70 for four protein targets: ampc, hivpr, hivrt, and mr. It is also interesting to note that the ROC plots for individual targets in [Figure 5](#ijms-15-15122-f005){ref-type="fig"} show that significantly more actives than decoys were selected in early ranks for 20 out of 25 targets, including targets that have AUC values below 0.5 (P38, SRC, cyclooxygenase-1 (COX1) and cyclooxygenase-2 (COX2)). This is an important indicator that PL-PatchSurfer can be used to effectively prioritize compounds for experimental testing.

To further understand the characteristic performance of PL-PatchSurfer, we analyzed the results for two targets, ampc and fgfr1, where PL-PatchSurfer generated significantly greater enrichment factors in a virtual screening exercise when compared to PharmDock \[[@B30-ijms-15-15122]\], as shown in [Table 3](#ijms-15-15122-t003){ref-type="table"}. We chose PharmDock because its virtual screening performance on the DUD dataset was extensively compared with other existing programs by Hu and Lill \[[@B30-ijms-15-15122]\]. PharmDock has been compared with six docking programs, DOCK \[[@B29-ijms-15-15122]\], FlexX \[[@B46-ijms-15-15122]\], Glide \[[@B28-ijms-15-15122]\], ICM \[[@B47-ijms-15-15122]\], Surflex \[[@B48-ijms-15-15122]\], and PhDock \[[@B49-ijms-15-15122]\]. Overall, PharmDock has been shown to perform better than DOCK and PhDock and comparably to ICM and FlexX.

In [Table 3](#ijms-15-15122-t003){ref-type="table"}, we compared the enrichment factors of PL-PatchSurfer and PharmDock on the 25 targets at the 1%, 10%, and 20% levels. An enrichment factor at *X*% indicates how strongly the hits within the top *X*% are dominated by actives; concretely, the percentage of actives within the top *X*% of hits is normalized by the overall fraction of actives in the compound dataset. At EF1% and EF10%, PL-PatchSurfer showed larger average enrichment factors than PharmDock: at 1%, the enrichment of PL-PatchSurfer and PharmDock was 8.63 and 6.87, while at 10% it was 2.48 and 2.23, respectively. PL-PatchSurfer showed a slightly smaller enrichment of 1.68 at 20%, where PharmDock had 1.72. Thus, on average PL-PatchSurfer was superior to PharmDock in early enrichment, which is practically one of the most important characteristics in virtual screening.
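The enrichment factors in Table 3 follow directly from this definition. A minimal sketch (the list layout is hypothetical):

```python
def enrichment_factor(ranked_labels, fraction):
    """Enrichment factor at the given fraction of the ranked database.

    ranked_labels: booleans (True = active), sorted so the best-scoring
    compound comes first.
    fraction: e.g. 0.01 for EF1%.
    EF = (fraction of actives in the top slice)
         / (fraction of actives in the whole database)
    """
    n = len(ranked_labels)
    n_top = max(1, int(round(n * fraction)))
    hit_rate_top = sum(ranked_labels[:n_top]) / n_top
    hit_rate_all = sum(ranked_labels) / n
    return hit_rate_top / hit_rate_all
```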
![Area-under-the-curve (AUC) values for individual protein targets. Enzyme abbreviations: AChE, acetylcholinesterase; AmpC, AmpC β-lactamase; AR, androgen receptor; CDK2, cyclin-dependent kinase 2; COX-1, cyclooxygenase-1; COX-2, cyclooxygenase-2; DHFR, dihydrofolate reductase; EGFr, epidermal growth factor receptor; ER, estrogen receptor; FGFr1, fibroblast growth factor receptor kinase; FXa, factor Xa; GR, glucocorticoid receptor; HIVPR, HIV protease; HIVRT, HIV reverse transcriptase; MR, mineralocorticoid receptor; NA, neuraminidase; P38 MAP, P38 mitogen activated protein; PARP, poly(ADP-ribose) polymerase; PDGFrb, platelet derived growth factor receptor kinase; PPARg, peroxisome proliferator activated receptor γ; PR, progesterone receptor; RXR, retinoic X receptor α; SRC, tyrosine kinase SRC; TK, thymidine kinase; VEGFr2, vascular endothelial growth factor receptor.](ijms-15-15122-g004){#ijms-15-15122-f004}

ijms-15-15122-t003_Table 3

###### Enrichment factors for PL-PatchSurfer and PharmDock.

| Protein | PL-PatchSurfer EF1% | PL-PatchSurfer EF10% | PL-PatchSurfer EF20% | PharmDock EF1% | PharmDock EF10% | PharmDock EF20% |
| --- | --- | --- | --- | --- | --- | --- |
| **AChE** | 0.00 | 0.00 | 0.00 | 0.00 | 0.19 | 0.14 |
| **AmpC** | 11.27 | 6.63 | 3.65 | 0.00 | 0.48 | 0.95 |
| **AR** | 0.00 | 1.81 | 1.62 | 12.16 | 2.43 | 2.09 |
| **CDK2** | 25.26 | 3.81 | 2.34 | 2.00 | 2.40 | 2.20 |
| **COX1** | 23.41 | 3.51 | 2.34 | 4.00 | 0.80 | 0.40 |
| **COX2** | 2.56 | 1.57 | 1.05 | 4.60 | 1.67 | 1.05 |
| **EGFr** | 0.00 | 2.18 | 1.24 | 2.25 | 2.16 | 1.94 |
| **ER_agonist** | 0.00 | 2.23 | 1.64 | 2.99 | 5.37 | 3.21 |
| **ER_antagonist** | 0.00 | 0.51 | 0.64 | 12.82 | 3.59 | 2.69 |
| **FGFr1** | 18.89 | 3.45 | 1.97 | 0.85 | 0.17 | 0.55 |
| **FXa** | 5.86 | 1.25 | 1.40 | 3.52 | 2.46 | 2.08 |
| **GR** | 14.89 | 3.33 | 1.79 | 11.54 | 1.92 | 1.35 |
| **HIVPR** | 15.18 | 5.09 | 3.20 | 24.53 | 7.17 | 4.06 |
| **HIVRT** | 27.86 | 5.10 | 2.83 | 7.50 | 1.75 | 1.75 |
| **MR** | 17.97 | 4.66 | 2.66 | 26.67 | 6.00 | 3.67 |
| **NA** | 2.37 | 1.66 | 0.95 | 4.08 | 1.84 | 1.12 |
| **P38** | 12.56 | 2.15 | 1.52 | 1.95 | 1.60 | 1.54 |
| **PARP** | 0.00 | 0.00 | 0.14 | 36.36 | 4.85 | 3.03 |
| **PDGFrb** | 0.00 | 0.20 | 0.41 | 0.00 | 0.38 | 0.45 |
| **PPARg** | 0.00 | 0.00 | 0.06 | 0.00 | 0.00 | 0.43 |
| **PR** | 3.60 | 2.20 | 2.77 | 3.70 | 2.22 | 1.48 |
| **RXR** | 0.00 | 2.96 | 2.23 | 0.00 | 1.00 | 2.25 |
| **SRC** | 15.99 | 2.90 | 1.87 | 0.65 | 2.13 | 1.84 |
| **thrombin** | 0.00 | 2.08 | 1.57 | 1.54 | 0.92 | 1.15 |
| **VEGFr2** | 17.96 | 2.71 | 2.00 | 8.11 | 2.16 | 1.69 |

EF1%: enrichment factors at 1% ranked decoys; EF10%: enrichment factors at 10% ranked decoys; EF20%: enrichment factors at 20% ranked decoys.

![ROC plots for individual targets in the directory of useful decoys (DUD) dataset.](ijms-15-15122-g005){#ijms-15-15122-f005}

We also took a closer look at the results for two targets, ampc and fgfr1. Looking first at the PharmDock results, we observed that none of the top-scoring poses of actives predicted by PharmDock were similar to a ligand binding pose observed in the AmpC ([Figure 6](#ijms-15-15122-f006){ref-type="fig"}A) and FGFr1 ([Figure 6](#ijms-15-15122-f006){ref-type="fig"}B) crystal structures represented in the DUD set. For AmpC ([Figure 6](#ijms-15-15122-f006){ref-type="fig"}A), the distance from the centroid of the bound ligand to the centroids of the poses of the six active compounds ranged from 4.7 to 8.7 Å, while it ranged from 3.2 to 5.4 Å for FGFr1. Given this observation, the low enrichment factors generated by PharmDock for these two targets are perhaps not surprising. Turning to the PL-PatchSurfer results, we found good correspondence between the matched protein-binding site and ligand patches. This observation is illustrated in [Figure 6](#ijms-15-15122-f006){ref-type="fig"}, where panels C and D show the generated pocket patches (cyan spheres) that represent productive protein-ligand binding interactions present in the AmpC and FGFr1 crystal structures, respectively, while panels E and F show ligand patches (green spheres) that are matched by PL-PatchSurfer to the highlighted binding site patches in AmpC and FGFr1, respectively. We compared the positions of matched patches since PL-PatchSurfer does not explicitly provide orientations of bound compounds. These sets of matched patches show good coverage of the binding regions that make productive interactions in the AmpC and FGFr1 crystal structures, shedding some light on scenarios where PL-PatchSurfer is able to generate good enrichment factors in virtual screening exercises.
![Panels (**A**) and (**B**) show that the top six PharmDock-generated poses (orange) are not similar to the ligand binding orientation found in the crystal structures of AmpC (panel **A**) and FGFr1 (panel **B**); The cyan spheres depicted in panels (**C**) and (**D**) represent binding site patches, generated by PL-PatchSurfer, that correspond to productive protein--ligand interactions observed in crystal structures of AmpC (PDB ID: 1XJG) and FGFr1 (PDB ID: 1AGW), respectively; The green spheres highlighted in panels (**E**) and (**F**) are the ligand patches (for a related AmpC-active and FGFr1-active ligand, respectively) that are matched by PL-PatchSurfer to the binding site patches, showing good coverage of the protein-ligand surface interaction space.](ijms-15-15122-g006){#ijms-15-15122-f006}

AChE, PPAR, COX-2, PDGFrb, and thrombin appear to be difficult targets for PL-PatchSurfer. For these difficult targets, we performed a target-specific training process on a randomly selected subset of active and decoy ligands from each target (see the Experimental Design section for details). The resulting parameters and AUC values for these targets are shown in [Table 4](#ijms-15-15122-t004){ref-type="table"}. Significant improvement in the AUC values was observed after this target-specific training. However, the parameter sets obtained were very different from the ones used in [Figure 4](#ijms-15-15122-f004){ref-type="fig"}, with a weight of 0 for the 3DZD similarity. This may be partly due to the limited number of physicochemical features considered in the current implementation of PL-PatchSurfer. The two features currently considered, the surface shape and the electrostatic potential, may not be sufficient for discriminating binding ligands for targets where other types of intermolecular interactions, such as hydrogen bonds and hydrophobic and aromatic interactions, play critical roles. PPAR and PDGFrb have both also been found difficult for widely used docking programs, such as DOCK and PhDock \[[@B50-ijms-15-15122],[@B51-ijms-15-15122]\].

ijms-15-15122-t004_Table 4

###### Target-specific training results for the five difficult targets.

| Protein | 3DZD | Relative Distance | Pocket Size | AUC | AUC before Training |
| --- | --- | --- | --- | --- | --- |
| **AChE** | 0 | 0.3 | 0.9 | 0.60 | 0.26 |
| **PPAR** | 0 | 1.0 | 0 | 0.68 | 0.15 |
| **COX-2** | 0 | 0 | 1.0 | 0.56 | 0.35 |
| **PDGFrb** | 0 | 1.0 | 1.0 | 0.42 | 0.27 |
| **Thrombin** | 0 | 0 | 1.0 | 0.50 | 0.14 |

2.4. Comparison of Computational Time
-------------------------------------

To close the Results section, we compared the computational time of PL-PatchSurfer with PharmDock, AutoDock, and Glide ([Table 5](#ijms-15-15122-t005){ref-type="table"}). For this test, we used ten targets in the DUD dataset. PL-PatchSurfer took at most about a second to search one ligand against a given target and was clearly the fastest among the four methods: it is 40 to 500 times faster than PharmDock (266.4 times on average), and on average 30.2 and 80.4 times faster than AutoDock and Glide, respectively. Times are in seconds. Jobs were run on a Linux machine with an Intel Core i7-3820 CPU, 3.60 GHz, with 65 GB RAM. The times counted are only for the searching steps, excluding file preparation steps. The times for AutoDock and Glide were taken from log files output by the programs. The rigid docking mode was used for AutoDock and Glide.
ijms-15-15122-t005_Table 5

###### Computational time of four methods.

| System | PharmDock | PL-PatchSurfer | AutoDock | Glide |
| --- | --- | --- | --- | --- |
| **10gs** | 585.5 | 1.2 | 43 | 54 |
| **1a30** | 74.0 | 0.5 | 30 | 48 |
| **1bcu** | 15.0 | 0.4 | 5 | 22 |
| **1gpk** | 22.0 | 0.5 | 7 | 33 |
| **1h23** | 988.9 | 0.8 | 57 | 48 |
| **1lol** | 64.6 | 0.7 | 15 | 129 |
| **1loq** | 42.6 | 0.7 | 12 | 120 |
| **1mq6** | 348.9 | 0.7 | 34 | 28 |
| **1n2v** | 15.6 | 0.4 | 6 | 25 |
| **1q8t** | 25.9 | 1.3 | 8 | 31 |

3. Experimental Design {#sec3-ijms-15-15122}
======================

In this section we describe the procedures and datasets used in this work. The overall scheme of PL-PatchSurfer is depicted in [Figure 7](#ijms-15-15122-f007){ref-type="fig"}. Given a protein with unknown function or a protein target of interest, its ligand-binding pocket is extracted. The surface of the binding pocket is represented by a set of segmented surface patches, each of which is described by its surface shape and electrostatic potential. The pocket is then used to search against a ligand library, where each ligand is also represented by a set of surface patches. The ligands are ranked based on their surface complementarity with the protein-binding pocket and their molecular size, to suggest the best binding ligands for the target protein. The details of each step are described below.

![Overall scheme of PL-PatchSurfer.](ijms-15-15122-g007){#ijms-15-15122-f007}

3.1. Definition of the Protein Pocket Surface {#sec3dot1-ijms-15-15122}
---------------------------------------------

The surface of a protein is computed with the Adaptive Poisson--Boltzmann Solver (APBS) program \[[@B52-ijms-15-15122]\], which defines the surface as the boundary between solvent-accessible and solvent-excluded regions. Surface shape information is stored in a 3D grid where grid points that overlap with the protein surface are specified. The electrostatic potential of the protein is also computed using the APBS program, and the energy values are assigned onto each grid point. The center of the protein pocket is defined by the center of mass of its known binding ligands. The pocket surface is then defined as the surface points that are encountered by rays cast from the center of the protein pocket. A detailed description of the ray-casting method can be found in our previous publication \[[@B36-ijms-15-15122]\].

3.2. Generation of Ligand Conformations and Computation of the Surface Properties {#sec3dot2-ijms-15-15122}
---------------------------------------------------------------------------------

To account for ligand flexibility, multiple ligand conformations were generated using OpenEye Omega (OpenEye Scientific Software Inc., Santa Fe, NM, USA) \[[@B43-ijms-15-15122],[@B44-ijms-15-15122],[@B45-ijms-15-15122]\]. For each ligand, a maximum of 20 conformations are generated, with the calculated internal energy no more than 15 kcal/mol above the energy of the ligand conformation with the lowest internal energy. Duplicate conformers are removed using a 0.5 Å root-mean-square deviation (RMSD) cutoff for ligands with zero to five rotatable bonds, a 0.8 Å cutoff for ligands with six to ten rotatable bonds, and a 1.0 Å cutoff for all ligands with more than ten rotatable bonds. For each ligand conformation, the APBS program is used to compute the surface of the ligand and the electrostatic potential of the ligand on the surface. The surface shape and the electrostatic potential are also mapped onto 3D grid points for subsequent identification of the surface patches.
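A sketch of the conformer pruning logic just described. The greedy, energy-ordered filtering strategy and the `rmsd` helper are assumptions made for illustration; only the RMSD cutoffs, the 20-conformer cap, and the rotatable-bond thresholds come from the text:

```python
def rmsd_cutoff(n_rotatable_bonds):
    """Duplicate-conformer RMSD cutoff: 0.5 A for 0-5 rotatable bonds,
    0.8 A for 6-10, and 1.0 A for more than 10."""
    if n_rotatable_bonds <= 5:
        return 0.5
    if n_rotatable_bonds <= 10:
        return 0.8
    return 1.0

def prune_conformers(conformers, n_rotatable_bonds, rmsd, max_confs=20):
    """Keep a conformer only if it is farther than the cutoff from every
    conformer kept so far. `conformers` is assumed to be sorted by
    increasing internal energy; `rmsd(a, b)` is a user-supplied RMSD
    function (hypothetical here)."""
    cutoff, kept = rmsd_cutoff(n_rotatable_bonds), []
    for conf in conformers:
        if all(rmsd(conf, k) > cutoff for k in kept):
            kept.append(conf)
        if len(kept) == max_confs:
            break
    return kept
```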
3.3. Identification of the Surface Patches {#sec3dot3-ijms-15-15122}
------------------------------------------

Based on the defined surface of the binding pocket or the ligand, PL-PatchSurfer identifies a group of patches covering the surface area. First, a set of seed points is selected as the centers of the patches. The seed points are iteratively selected from the surface points that are closest to protein or ligand heavy atoms within 3.5 Å of the defined surface \[[@B8-ijms-15-15122]\]. The minimum distance between any pair of seed points is set to 3.0 Å in order to distribute the patches evenly over the pocket surface. Finally, a patch is defined as a connected single surface region within 5.0 Å of a center seed point.

3.4. Computation of the 3D Zernike Descriptors on the Surface Patches {#sec3dot4-ijms-15-15122}
---------------------------------------------------------------------

The 3D Zernike descriptor (3DZD) is a series expansion of a 3D function, which allows a compact and rotationally invariant representation of the function \[[@B53-ijms-15-15122]\]. Detailed descriptions of the 3DZD can be found in the references \[[@B53-ijms-15-15122],[@B54-ijms-15-15122]\] and in our previous studies \[[@B8-ijms-15-15122],[@B36-ijms-15-15122]\]. Briefly, for each identified patch, which consists of connected surface points, there are two 3D functions representing the surface shape and the electrostatic distribution: *f~shape~*(*x*) and *f~elec~*(*x*). The 3D functions can be expanded into a series in terms of the Zernike--Canterakis basis, defined as

$$Z_{nl}^{m}(r,\theta,\varphi) = R_{nl}(r)\, Y_{l}^{m}(\theta,\varphi)$$

where $-l \leq m \leq l$, $0 \leq l \leq n$, and $n-l$ is even. $Y_{l}^{m}(\theta,\varphi)$ is the spherical harmonics, and *R~nl~*(*r*) is the radial function, constructed in such a way that $Z_{nl}^{m}$ can be converted to polynomials in the Cartesian coordinates. To obtain the 3DZD of *f*(*x*), the 3D Zernike moments are first computed:

$$\Omega_{nl}^{m} = \frac{3}{4\pi}\int_{|x| \leq 1} f(x)\,\overline{Z_{nl}^{m}(x)}\, dx$$

Then, the 3DZD, *F~nl~*, is computed as the norms of the vectors *Ω~nl~*. The norm gives rotational invariance to the descriptor:

$$F_{nl} = \sqrt{\sum_{m = -l}^{l}\left| \Omega_{nl}^{m} \right|^{2}}$$

We used *n* = 15, so that the shape is represented by a vector of 72 invariant values. The electrostatic potential is represented by 144 = 72 × 2 invariants, since a 3DZD is computed separately for positive and negative electrostatic values.

3.5. Procedure of Protein--Ligand Patch Comparison
--------------------------------------------------

The procedure for comparing the complementarity between the pocket and the ligand can be summarized in two steps: (i) search for the optimal matching patch pairs between the pocket and the ligand; and (ii) compute the distance score between the pocket and the ligand.

### 3.5.1. Search Matching Patches between the Pocket and the Ligand

We used the auction algorithm \[[@B7-ijms-15-15122],[@B55-ijms-15-15122]\] to search for the optimal matching patch pairs that yield the minimum distance score for the pocket and the ligand pair. The distance score between patch *a* from the pocket *A* and patch *b* from the ligand *B* is a linear combination of three terms (Equation (4)):

$$Distance\_score_{PL}(a,b) = w_{1} \cdot pdist(a,b) + w_{2} \cdot appd(a,b) + w_{3} \cdot grpd(a,b)$$

where *pdist*(*a*, *b*) is the weighted sum of the Euclidean distances (*L*2 norm) between the 3DZDs of the surface shape and of the electrostatic potential. The weights, 0.717 and 0.283, are for normalizing the difference in the value distributions of the shape and the electrostatic properties and were trained in our previous studies \[[@B8-ijms-15-15122]\].
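A minimal sketch of *pdist* as described above. The assignment of 0.717 to shape and 0.283 to electrostatics follows the order in which they are mentioned, which is an assumption, since the original equation is not reproduced in this text:

```python
import numpy as np

# Normalization weights stated in the text; the shape/electrostatics
# pairing is an assumption (see lead-in above).
W_SHAPE, W_ELEC = 0.717, 0.283

def pdist(patch_a, patch_b):
    """Weighted sum of L2 distances between the 3DZDs of two patches.
    Each patch is a dict holding a 72-dim shape descriptor and a
    144-dim electrostatic descriptor (hypothetical layout)."""
    d_shape = np.linalg.norm(patch_a["zd_shape"] - patch_b["zd_shape"])
    d_elec = np.linalg.norm(patch_a["zd_elec"] - patch_b["zd_elec"])
    return W_SHAPE * d_shape + W_ELEC * d_elec
```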
The second term, *appd*(*a*, *b*), compares the relative position of patch *a* on the surface of pocket *A* with that of patch *b* on the surface of ligand *B*. It is computed using the patch distribution vector, which describes the approximate patch position (APP) feature of each patch, i.e., whether a patch is in the middle or on the edge of a pocket/ligand. To compute the APP, we calculated the geodesic distance between each pair of patches, where the geodesic distance is the distance between two patch centers along the molecular surface. For each patch, its geodesic distances to the other patches were binned to render a patch distribution vector holding the numbers of patches in the different bins. A bin size of 1.0 Å and a total number of 40 bins were used. The *appd*(*a*, *b*) is then calculated from the difference between the two patch distribution vectors.

The last term, *grpd*(*a*, *b*), measures the geodesic relative position difference. It is computed over *m^A,B^*, the list of matching patch pairs identified in the previous search steps of the auction algorithm; *n* is the number of matching patch pairs identified in those steps, *i.e.*, the length of *m^A,B^*. When *n* is zero, *i.e.*, in the first search step of the auction algorithm, this term is ignored. (*a′*, *b′*) denotes a matching pair belonging to *m^A,B^*; the computation uses the coordinates of the patch centers (e.g., the center of patch *a′* in pocket *A*), and *G*2 is the geodesic distance between the centers of two patches.

The three terms *pdist*(*a*, *b*), *appd*(*a*, *b*), and *grpd*(*a*, *b*) are linearly combined in Equation (4). Their coefficients are trained on the PDBbind dataset, as described below in [Section 3.6.2](#sec3dot6dot2-ijms-15-15122){ref-type="sec"}.
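A sketch of the APP feature under the stated binning (1.0 Å bins, 40 bins). The *appd* equation itself is not reproduced in this text, so a Euclidean distance between the two vectors is assumed purely for illustration:

```python
import numpy as np

def patch_distribution_vector(geodesic_distances, bin_size=1.0, n_bins=40):
    """APP feature of one patch: a histogram of its geodesic distances
    to all other patches of the same pocket or ligand."""
    edges = np.arange(0.0, bin_size * (n_bins + 1), bin_size)  # 41 edges -> 40 bins
    vec, _ = np.histogram(geodesic_distances, bins=edges)
    return vec

def appd(vec_a, vec_b):
    """Difference between two APP vectors; the L2 norm here is an
    assumed comparison metric, not the paper's stated formula."""
    return float(np.linalg.norm(np.asarray(vec_a) - np.asarray(vec_b)))
```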
### 3.5.2. Score the Overall Fit of the Ligand into the Pocket

To measure the overall fit of ligand *B* into protein pocket *A*, a scoring function combining three terms, *Totalscore~PL~* (Equation (8)), was used. The first term is the average distance score between the matching patches (Equation (9)), where *n~A~* is the number of patches in the protein pocket *A*, *N* is the number of matching patch pairs between pocket *A* and ligand *B*, *pdist* is the distance score of two patches as defined in Equation (5), and *m^A,B^* contains the list of matched patch pairs from pocket *A* and ligand *B*. The second term is the geodesic relative position difference averaged over all the matching patches (Equation (10)), where *G*2 is the geodesic distance between the centers of two patches. The last term (Equation (11)) measures the size difference between pocket *A* and ligand *B*, computed from *n~A~*, the number of patches in the protein pocket, and *n~B~*, the number of patches in the ligand. The three terms are linearly combined in Equation (8).

3.6. Data Set and Evaluation Methods
------------------------------------

### 3.6.1. Huang Dataset

The Huang dataset \[[@B40-ijms-15-15122]\] was originally curated for testing ligand binding site prediction programs and was used for examining the pocket retrieval performance of PatchSurfer in our previous study \[[@B8-ijms-15-15122]\]. There are a total of 146 proteins that bind one of 12 ligand molecules ([Table 1](#ijms-15-15122-t001){ref-type="table"}). The sequence identity between each pair of proteins is lower than 30%. For each protein, the ligand binding pocket was defined using the known binding ligand as described in [Section 3.1](#sec3dot1-ijms-15-15122){ref-type="sec"}. The pocket patches were then identified as described above. The average number of patches identified for each group of proteins is listed in [Table 1](#ijms-15-15122-t001){ref-type="table"}.

We used the Huang data set to investigate whether OpenEye Omega \[[@B43-ijms-15-15122],[@B44-ijms-15-15122],[@B45-ijms-15-15122]\] is able to produce ligand conformations that can achieve ligand prediction results comparable with those obtained using the X-ray ligand conformations. For this purpose, the native ligand of each protein was extracted to form the X-ray ligand conformation set. Meanwhile, a maximum of 20 ligand conformations was generated for each ligand using OpenEye Omega with the parameters described in [Section 3.2](#sec3dot2-ijms-15-15122){ref-type="sec"}. The patches were then identified for each ligand conformation following the method described in [Section 3.3](#sec3dot3-ijms-15-15122){ref-type="sec"} and [Section 3.4](#sec3dot4-ijms-15-15122){ref-type="sec"}. We used PL-PatchSurfer to predict the binding ligand of each protein using both the X-ray and the Omega-generated ligand sets.

To compare the performance of PL-PatchSurfer in predicting the binding ligand with that of PatchSurfer, we used the same *Distance_score* from PatchSurfer \[[@B8-ijms-15-15122]\] for the identification of matching patches, where *pdist* is as described in Equation (5). Thus, compared with *Distance_score~PL~*, only the similarity of the 3DZDs of the surface shape and the electrostatic potential was considered in *Distance_score~PS~*. The *Totalscore* from PatchSurfer was used for scoring the ligand against the pocket, where *avgZd* is as described in Equation (9) and *rdp* is the relative distance between the matching patches based on the Euclidean distance, computed from *n~A~*, the number of patches in the protein pocket *A*, *N*, the number of matching patch pairs between pocket *A* and ligand *B*, and *L*2, the Euclidean distance between the centers of two patches. Finally, the pocket size term is computed from *n~A~* and *n~B~*, the numbers of patches in the protein pocket *A* and in the ligand *B*, respectively.

The final score of a ligand matching a protein, *PocketScore*, is computed over the ranked list of database conformations, where *l*(*i*) denotes the ligand type (e.g., AMP, FAD, *etc.*) of the *i*-th ranked ligand conformation for the query, *n* is the number of ligands in the database, and the function *δ*~*l*(*i*)*,L*~ equals 1 if the *i*-th ranked ligand conformation is from ligand *L*, and 0 otherwise. The first term considers the *k* top-ranked ligand conformations for the query, with a higher score assigned to a ligand conformation with a higher rank; we used *k* = 20 in this study. The second term normalizes the score by the number of conformations from ligand *L* included in the database. The ligand with the highest *PocketScore* is predicted to bind to the query pocket.
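One plausible reading of *PocketScore*, sketched below. The rank weight (*k* − *i*) and the exact normalization are assumptions, since the equation itself is not reproduced in this text; only *k* = 20 and the two-term structure come from the description above:

```python
def pocket_score(ranked_types, ligand_type, n_confs_of_ligand, k=20):
    """Score ligand_type against a query pocket from the ranked list of
    database conformations (best first). Higher-ranked conformations of
    the ligand contribute more, and the total is normalized by the
    ligand's number of conformations in the database. The functional
    form is an assumption made for illustration."""
    raw = sum(k - i for i, t in enumerate(ranked_types[:k])
              if t == ligand_type)
    return raw / n_confs_of_ligand
```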
### 3.6.2. PDBbind Dataset {#sec3dot6dot2-ijms-15-15122}

The PDBbind \[[@B38-ijms-15-15122],[@B39-ijms-15-15122]\] "core set" provides 210 protein-ligand complexes non-redundantly sampled from 1300 protein--ligand complexes \[[@B38-ijms-15-15122]\]. It covers 70 different proteins, each of which contributes three protein--ligand complexes with different binding affinities, which makes it ideal for optimizing the search algorithm of PL-PatchSurfer. All the protein--ligand complexes in the PDBbind core set come pre-processed with hydrogen atoms added and were therefore used directly without additional preparation.

To optimize PL-PatchSurfer's performance in identifying matching patches that reproduce the protein--ligand interactions, we computed the match success rate for each protein--ligand complex structure in the PDBbind core set. For each protein--ligand complex, we identified the protein patches and the ligand patches ([Figure 7](#ijms-15-15122-f007){ref-type="fig"}). We say a protein patch and a ligand patch form a "native contact" if the distance between their patch centers is within a cutoff distance of either 3.0, 4.0, 5.0 or 6.0 Å. The 5.0 Å cutoff is a frequently used empirical distance cutoff for computing the steric interactions between a ligand and a protein in many scoring functions \[[@B56-ijms-15-15122],[@B57-ijms-15-15122]\]. We then computed the maximum number of correct contacts that can be formed between the protein and the ligand for each complex structure. The pseudo-code to compute this maximum number of "native contacts" is provided in [Figure 8](#ijms-15-15122-f008){ref-type="fig"}.

![Pseudo-code for computing the maximum number of native contacts.](ijms-15-15122-g008){#ijms-15-15122-f008}

Matching patch pairs are identified by PL-PatchSurfer. If the distance between the centers of an identified matching patch pair is within the cutoff distance in the crystal structure, we count it as a successful match. The match success rate for each protein-ligand complex structure is then defined as the number of successful matches identified by PL-PatchSurfer divided by the maximum number of correct contacts that can be formed for the protein--ligand complex. The overall success rate is computed by averaging the success rate over all the protein--ligand structures.

An optimization program was constructed to search for the best coefficients in Equation (4), i.e., those that lead to the largest average success rate. First, we reduced the three parameters *w*~1~, *w*~2~, and *w*~3~ in Equation (4) to two parameters *a* and *b*, such that *w*~1~ + *w*~2~ + *w*~3~ = 1. The rationale is to reduce the degrees of freedom in searching for the optimal parameters and thereby increase the search speed. During the search for the optimal parameter set, *a* and *b* were allowed to change from 0.0 to 1.0 with a step size of 0.1. The parameter set that leads to the maximum success rate was taken as the final optimized weights in Equation (4).
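A sketch of this weight optimization. The text states only that (*w*~1~, *w*~2~, *w*~3~) were reduced to (*a*, *b*) with the weights summing to 1 and scanned in 0.1 steps; the particular (a, b) → weights mapping below is one assumption satisfying that constraint, and `avg_success_rate` is a hypothetical evaluation callback over the PDBbind core set:

```python
import itertools
import numpy as np

def grid_search_weights(avg_success_rate, step=0.1):
    """Exhaustive search over (a, b) in [0, 1] for the Equation (4)
    weights. The mapping w1 = a*b, w2 = a*(1-b), w3 = 1-a is an assumed
    reduction that guarantees w1 + w2 + w3 = 1."""
    best_rate, best_w = -1.0, None
    grid = np.arange(0.0, 1.01, step)
    for a, b in itertools.product(grid, repeat=2):
        w1, w2, w3 = a * b, a * (1.0 - b), 1.0 - a
        rate = avg_success_rate(w1, w2, w3)
        if rate > best_rate:
            best_rate, best_w = rate, (w1, w2, w3)
    return best_w, best_rate
```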
### 3.6.3. DUD Set

The directory of useful decoys (DUD) \[[@B50-ijms-15-15122]\] dataset was used to perform the virtual screening studies. The DUD dataset contains 40 protein targets and a set of active and decoy ligands corresponding to each target. There are 2950 active ligands in total, each of which has 36 physically similar but topologically different decoy ligands. In the current version of PL-PatchSurfer, parameters for ions and cofactors are not included. Therefore, the four metalloenzymes, two folate enzymes, and five other enzymes (aldose reductase, enoyl ACP reductase, glycogen phosphorylase β, purine nucleoside phosphorylase, and *S*-adenosyl-homocysteine hydrolase) were excluded from our virtual screening experiment. Human heat shock protein 90 and thymidine kinase were excluded because APBS failed to process the protein structures owing to missing atoms. Hydroxymethylglutaryl-CoA reductase and trypsin were excluded because APBS failed to process most of their active and decoy ligands owing to atom typing incompatible with APBS. For cyclooxygenase-2 (COX2) and epidermal growth factor receptor (egfr), over 10,000 decoys are present in the DUD set; to speed up the testing process, a subset of 30 actives and 1080 decoys was randomly selected for each.

For each ligand in the DUD dataset, we generated a maximum of 20 ligand conformations using OpenEye Omega. The surface patches for each ligand conformation were then identified using PL-PatchSurfer and stored in the DUD library. The surface patches were also calculated for the protein. The pocket patches were then used to search against the DUD ligand library of each target class. The fit of each ligand conformation into the protein pocket was measured by *Totalscore~PL~* (Equation (8)). The final score for each ligand was calculated by averaging the scores of its top-10 best-fitting ligand conformations. The ligands for each protein system were ranked based on their final score. The Receiver Operating Characteristic (ROC) curve, displaying the fraction of ranked actives (true positive rate) at a given fraction of ranked decoys (false positive rate), was plotted for each run. The area-under-the-curve (AUC) was calculated for each ROC curve and used to assess the overall enrichment quality.

To optimize the weights in Equation (8), we randomly selected 12 targets of the DUD set to form the training set and left the other 13 targets as the testing set. During the optimization process, each weight in Equation (8) was allowed to change from 0.0 to 1.0 with an interval of 0.1, and the AUC value for each protein target in the training set was calculated for each weight combination. The weights *w*~1~ = 0.8, *w*~2~ = 0.0, *w*~3~ = 0.1, which provide the optimal average AUC over all the training proteins, were taken as the best parameters. To test the generalization of the trained parameters, we performed a two-fold cross-validation by switching the training and testing sets; the resulting weights are *w*~1~ = 1.0, *w*~2~ = 0.0, *w*~3~ = 0.6.

Target-specific optimizations were performed for AChE, PPAR, COX-2, PDGFrb, and thrombin. For each target, 10 active and 360 decoy ligands were randomly selected from the ligand dataset to form the training set; the remaining ligands for each target were left as the testing set. Each weight in *Totalscore~PL~* (Equation (8)) was varied from 0.0 to 1.0 with an interval of 0.1 to identify the set of weights that achieves the optimal AUC value on the training set for each target. The optimized weights were then evaluated on the testing set of each target.

4. Conclusions
==============

We have developed a new patch-based ligand analysis program, PL-PatchSurfer. First, we demonstrated that PL-PatchSurfer works well in predicting binding ligands for pockets of target proteins. By identifying compatible patch pairs from a target pocket and candidate ligands, PL-PatchSurfer showed a higher success rate than PatchSurfer even when the native conformations of the ligands were excluded, and better or comparable results when Omega-generated ligand conformations were used. Thus, PL-PatchSurfer is a promising new method for binding ligand prediction that can give a clue to the biological function of protein structures of unknown function. We then optimized the search algorithm of PL-PatchSurfer using the PDBbind data set to improve its success rate in identifying native contacts. Finally, we explored the applicability of PL-PatchSurfer in virtual screening experiments.
A performance comparison against PharmDock, which had been shown to perform better than or comparably to other existing methods, showed that PL-PatchSurfer is better at early enrichment of actives than PharmDock. Detailed analyses showed that PL-PatchSurfer identified corresponding patches in pockets and ligands at the correct places. Compared to existing atom-based docking programs, PL-PatchSurfer also has a great advantage in its computational efficiency: on average, it needs 0.7 s to search one ligand against a given target on a single core of an Intel i7-3820 computer. This is in contrast to the more time-consuming protein--ligand docking programs, which normally need 30- to 250-fold longer to complete the docking of one ligand ([Table 5](#ijms-15-15122-t005){ref-type="table"}). This speed improvement can make a substantial difference in computational time, on the order of days and weeks, in practical virtual screening situations where millions of compounds are matched against a target.

So when can PL-PatchSurfer be most useful in practice? Obviously, PL-PatchSurfer should be very useful in function prediction for proteins of unknown function, as it was shown to be better than PatchSurfer, which was already better than many existing methods. Moreover, PL-PatchSurfer will also be effective in virtual screening as a complementary tool to existing docking-based methods. It has been discussed that current docking-based virtual screening methods still have limitations in identifying actives \[[@B24-ijms-15-15122]\]. Because PL-PatchSurfer employs a completely different approach of surface patch matching yet performs competitively with existing methods, we believe that this surface-patch approach has the potential to significantly enhance the design of new ligands in several challenging drug-target areas, including G-protein coupled receptors, fragment-based drug design, and protein--protein interactions (PPI). In addition, we plan to investigate the use of PL-PatchSurfer to assess ligandability (the relative ability of a protein target to productively interact with a drug-like ligand) in a collection of known PPI systems.

This work is supported by a grant from the Lilly Research Award Program. D.K. also acknowledges funding from the National Institute of General Medical Sciences of the National Institutes of Health (R01GM097528), the National Science Foundation (IIS1319551, DBI1262189, IOS1127027), and a National Research Foundation of Korea Grant funded by the Korean Government (NRF-2011-220-C00004). D.K. conceived the study; B.H. and D.K. designed the experiments. B.H. executed the experiments. X.Z. coded a part of the programs developed in this study and measured the computational time of the software in [Table 5](#ijms-15-15122-t005){ref-type="table"}. L.M. also measured the computational time of the software in [Table 5](#ijms-15-15122-t005){ref-type="table"}. M.B. analyzed the ligand docking poses in [Figure 6](#ijms-15-15122-f006){ref-type="fig"}. B.H., M.B., and D.K. wrote the manuscript. All authors read and approved the manuscript. The authors declare no conflict of interest.
Any income that a person actually earns by his or her labour is considered to be an amount earned in suitable employment and is to be included as AE. This includes earnings from self-employment and commissions, see sections 8.6 and 8.7. For a serving member, AE includes any amount earned in employment with the ADF as well as, for part-time reservists, earnings (or what the person is able to earn) from civilian employment.

To establish a person's AE on the basis of actual earnings, the following information is required:

- Pay slips or similar documents to establish hours worked and amount received.
- Medical certificates (to establish capacity).

The gross value of earnings is held in calculations. In cases where the delegate is satisfied that a person is working below their full capacity (either level of employment or hours in employment), the person may be deemed to have an ability to earn a higher amount (see section 8.18).

8.4.1 Allowances included in AE

The following types of allowances should be included in the calculation of AE:

- allowances which are taxable;
- allowances which are paid in respect of specific skills or qualifications attained by the person, i.e. allowances paid for licences, tickets, certificates.

The following types of payments should not be included in the calculation of AE:

- allowances for money spent, or likely to be spent, by the person on expenses, i.e. travel allowance, meal allowance;
- retention bonuses (these are not usually paid as an allowance but rather as a lump sum payment and are not considered allowances).

Where a person receives a penalty rate of pay, i.e. a higher rate of pay for working certain hours or in certain conditions, the higher rate of pay is included in AE. Overtime hours and rates are also included as AE.

8.4.1.1 Examples

Example 1 – Actual earnings and allowances for a serving member on deployment (MRCA)

Actual earnings are calculated in accordance with section 92: actual ADF pay + actual pay-related allowances, where: actual ADF pay means the amount of pay the member earns for the week; and actual pay-related allowances means the total compensable pay-related allowances (as defined above) that were paid to the member for the week.

A member was posted to Operation Astute (non-warlike service) for the period 27 May 2006 to 24 September 2006. The member returned to Australia on 9 June 2006 due to a service-related disease. As a result, the member lost 15 weeks of pay-related allowances as follows:

- Field Allowance - $300.09 per week;
- Separation Allowance - $45.64 per week; and
- Deployment Allowance - $550.20 per week.

It should be noted that all deployment-related allowances while on non-warlike service are exempt from income tax. The member's normal ADF pay as a Corporal is $960.12 per week.

The member's NE for the period 27 May 2006 to 24 September 2006 is:

$960.12 + $300.09 + $45.64 + $550.20 = $1,856.05

The member's actual earnings (AE) for this period was her/his normal ADF pay as a Corporal, $960.12 per week.

NE - AE = $1,856.05 - $960.12 = $895.93 per week

The member receives incapacity payments at the rate of $895.93 per week for the 15-week period from 9 June 2006 to 24 September 2006. In accordance with subsection 51-32(3) of the Income Tax Assessment Act 1997 (ITAA), deployment-related allowances during a period of non-warlike service are tax exempt.

Note: In practice each allowance could have a different end date.
For example, field allowance ceases on the day the member leaves the field, separation allowance will cease when the member arrives home and deployment allowance may cease a few weeks later after the expiration of leave accrued during the deployment. The correct end dates for each allowance must be obtained via SAM.
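A minimal sketch of the NE - AE arithmetic in Example 1 above (the function and its inputs are illustrative, not part of the manual):

```python
def weekly_incapacity_payment(normal_adf_pay, pay_related_allowances,
                              actual_earnings):
    """Weekly incapacity payment as NE - AE, where NE is normal ADF pay
    plus compensable pay-related allowances, and AE is the amount the
    member actually earns for the week."""
    ne = normal_adf_pay + sum(pay_related_allowances)
    return round(ne - actual_earnings, 2)

# The Corporal in Example 1: NE = $1,856.05, AE = $960.12
print(weekly_incapacity_payment(960.12, [300.09, 45.64, 550.20], 960.12))
# 895.93
```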
http://clik.dva.gov.au/military-compensation-mrca-manuals-and-resources-library/incapacity-policy-manual/8-ability-earn-and-actual-earnings/84-ae-when-person-actually-employment
Science and Technology Committee calls for urgent publication of Biometrics Strategy and upgrade of IT system

A committee of MPs has urged the Government to impose a cautious approach to the roll out of facial recognition technology, upgrade the relevant IT system and quickly publish the much delayed Biometrics Strategy.

The Science and Technology Committee has published a report on the strategy, which the Government originally promised for 2014, and on forensic services, which were covered in a strategy produced in 2016. It says the delay on biometrics has contributed to significant confusion around legal and ethical issues and that the Government should publish the report in the next month.

A court case in 2012 led to a ruling in favour of an individual who wanted his image deleted from a police database after he had been arrested but not subsequently convicted. The Home Office responded by setting up a system in which people not convicted could request the deletion of their images, but existing IT systems make it impossible to do this automatically and it still requires a manual process.

No delay

One of the report's recommendations is that the Government should ensure the IT upgrade, with a fully automatic image deletion system, is completed without delay. If there is a delay, it should introduce a comprehensive system for manual deletions as quickly as possible. The MPs say the Biometrics Strategy should address which of these routes will be followed and set out the Home Office's position on the lawfulness of how it has responded to the court case.

"The Government's approach is unacceptable because unconvicted individuals may not know that they can apply for their images to be deleted, and because those whose image has been taken should not have less protection than those whose DNA or fingerprints have been taken," the report says.

While it acknowledges the potential value in facial recognition technology, it says the technology is still evolving and there are concerns over its reliability and potential for discrimination. The Government has said it is currently used selectively. The report welcomes this point but says the technology should not yet be generally deployed beyond the current pilots. It also points to the ethical issues to be resolved and says the strategy should consider how images will be managed and regulated, potentially by a dedicated regulator or by extending the current remit of the biometrics commissioner.

Infringement of liberty

Norman Lamb MP, chair of the Science and Technology Committee, said: "In the four years since the Government promised to produce a Biometrics Strategy, the Home Office and Police have developed a process for collecting, retaining, and reusing facial images that some have called unlawful."

"Large scale retention of the facial images of innocent people amounts to a significant infringement of people's liberty without any national framework in place and without a public debate about the case for it. The Government must urgently set out the legal basis for its current on-request process of removing images of innocent people. It is unjustifiable to treat facial recognition data differently to DNA or fingerprint data."

"It should urgently review the IT systems being developed and ensure that they will be able to deliver an automated deletion system, or else move now to introduce comprehensive manual deletion that is fit for purpose."
https://www.ukauthority.com/articles/mps-urge-caution-over-facial-recognition-tech/
The key point: nothing like this had happened for thirty years. On the night from Tuesday to Wednesday, frost struck the orchards of Tarn-et-Garonne. In a few hours, some producers lost up to 50% of their production. In Moissac, Maurice Andral gives a pessimistic first assessment.

Burnt flowers, blackened fruits: this year's inventory in the orchards of Maurice Andral, a producer of plums and apples in Moissac, is painful. During the night the temperature dropped to -4.8°C. Despite a night of fierce fighting to maintain the temperature, and while this season had been particularly plentiful, Maurice Andral saw the fruits of his labour fall to the ground. An exceptional episode that could be repeated on the night from Wednesday to Thursday. "We are facing a catastrophe," sighs the grower, who farms 15 hectares of apple trees and 15 hectares of plum trees.

"We hadn't seen such cold since 1991"

After the first two nights of frost last week, Maurice Andral believed his harvest had been saved. That was without anticipating the arrival of a cold spell causing so-called "black frosts", as opposed to "white frosts", which form dew on the ground. Black frosts are characterized by their winter-like intensity and come not from the ground but from masses of icy air.

Maurice Andral notices the damage in his orchards in Moissac. DDM – Loubna Chlaikhy

"We hadn't seen such cold weather since 1991," says the man who has pampered his fruit trees for more than thirty years. Moved, he inspected his orchards to assess the damage. "At first glance, we are already at 35 to 50% of the production lost to frost," he estimates. A bitter observation. "It's very difficult," admits the grower, picking up a branch for a count. "Of the 17 plums on this branch, only one or two are still viable." Especially since these frosts can also stop the flow of sap. "The tree can revert to hibernation mode and therefore stop producing the sap that nourishes the fruit. In the end, we could lose the entire harvest without being able to do anything about it," explains Maurice Andral. To limit the damage, seven of them fought the frost again last night.

"Paraffin candles are sold out"

In this fight against the frost, Maurice Andral is leaving nothing to chance. "We have nets over the orchards that can be adjusted. The paraffin candles are out of stock, so we burn old bales of slightly damp hay. We use an air blower to hold off the frost, and large towers to stir the air, but this time, as the cold air was coming from above, that was counterproductive," he explains.

Bales of hay are burned to maintain the temperature in the orchards. DDM – Loubna Chlaikhy

So many strategies for a single goal: to limit the temperature drop. "With these methods we can gain at most 3°C, and when it is -5°C, that is far from sufficient. I am very concerned," concludes the farmer, deeply affected. He hopes this episode will not last.

Apples limit the damage

Tarn-et-Garonne, the leading national apple producer, is closely monitoring the health of its apple trees. After an icy night, Christophe Belloc visited the producers to give an initial assessment: "After spending the night protecting my trees, I was reassured in the morning when I took stock at Bessens with the Stanor cooperative in Moissac and Novacoop," explains the president of Blue Whale, the national market leader in the export of apples. "At first glance, the conventional protective devices have limited the damage. The spraying obviously worked well, given that black frost, carried by masses of cold and dry air, hits the treetops first, as opposed to white frosts, which come from the ground. I think we will see some losses in early varieties in a few days. The second of two consecutive frosty nights is always said to be the most terrible, so I'll wait a few hours to take stock of this episode. It is very difficult to have a very accurate weather forecast, as temperatures can vary by several degrees from valley to valley. In theory, apple trees are in danger as soon as we drop below -1.6°C. Worries will remain in order until the end of April."

Interview by Ph. C.
https://www.scoopcube.com/video-frost-we-have-already-lost-35-to-50-of-production-in-tarn-et-garonne/
This antibody recognizes an epitope that is shared by all 22 Gamma-protocadherins (it cross-reacts with all Gamma-protocadherin-A proteins, but not B or C, and recognizes endogenous levels of PCDHGs by western blot, immunoprecipitation, and immunocytochemistry).

Product Details

Pan-Gamma-protocadherin-A

Cadherins are calcium-dependent cell-cell adhesion molecules, and protocadherins constitute a subfamily of non-classic cadherins. Members of the protocadherin gamma subfamily are encoded by 22 tandemly arranged genes within the PCDHG gene cluster on chromosome 5q31. The 22 PCDHG genes function as 'variable' exons that are individually spliced to a downstream constant region to form distinct PCDHG transcripts. The variable PCDHG exons encode the extracellular and transmembrane domains of the protocadherin protein, and the common region encodes the intracellular domain. These neural cadherin-like cell adhesion proteins most likely play a critical role in the establishment and function of specific cell-cell connections in the brain.

- Antibody type: Primary antibody
- Format: Purified, 1 mg/mL
- Clonality: Monoclonal, clone N144/32
- Isotype: IgG2b
- Host: Mouse
- Applications: WB
- Reactivity: Mouse
- Target: Pcdhga3 (Protocadherin gamma subfamily A, 3), 100 kDa
- Immunogen: Fusion protein corresponding to amino acids 720-804 (variable cytoplasmic domain) of mouse Gamma-protocadherin-A3 (also known as PCDH-gamma-A3, accession number Q91XY5); Human: 80% identity (68/84 amino acids identical); greater than 75% identity with other Gamma-protocadherin-A proteins
- RRID: AB_2159447
- Storage: Store at ≤ -20 C for long term storage. For short term storage, store at 2-8 C. For maximum recovery of product, centrifuge the vial prior to removing the cap.
- Physical state: Liquid
- Buffer: 10 mM Tris, 50 mM Sodium Chloride, 0.065% Sodium Azide, pH 7.4
- Conjugate: Unconjugated
- Cross-reactivity: Cross-reacts with all Gamma protocadherin-A, -B and -C proteins
- Quality control: Each new lot of this antibody is tested to confirm that it recognizes a single immunoreactive band of expected molecular weight when used to probe brain lysate.
- Intended use: These antibodies are to be used as research laboratory reagents and are not for use as diagnostic or therapeutic reagents in humans.
- Country of origin: United States
- Shelf life: 24 months from opening

References:

- Lobas MA, Helsper L, Vernon CG, Schreiner D, Zhang Y, Holtzman MJ, Thedens DR, Weiner JA (2012). 'Molecular heterogeneity in the choroid plexus epithelium: the 22-member γ-protocadherin family is differentially expressed, apically localized, and implicated in CSF regulation.' Journal of Neurochemistry. doi: 10.1111/j.1471-4159.2011.07587.x.
- Li Y, Serwanski DR, Miralles CP, Fiondella CG, Loturco JJ, Rubio ME, De Blas AL (2010). 'Synaptic and nonsynaptic localization of protocadherin-gammaC5 in the rat brain.' Journal of Comparative Neurology. doi: 10.1002/cne.22390.
https://www.antibodiesinc.com/products/anti-pan-gamma-protocadherin-a-antibody-n144-32-75-178
Working from home has been imposed as a preventative measure to reduce the spread of the Covid-19 pandemic in Luxembourg as well as in the border countries. However, this may lead to cross-border workers exceeding the maximum number of homeworking days provided by the double tax treaties ("DTTs") concluded between Luxembourg and the border countries. In effect, French tax residents working from home more than 29 days per year, German residents working from home more than 19 days per year and Belgian residents working from home more than 24 days per year should also be taxable in their country of residence on income related to the homeworking days (instead of being taxable only in Luxembourg).

- In order to simplify the administrative procedures, the Belgian and Luxembourg Governments have agreed that the current COVID-19 crisis constitutes a "Force Majeure" for the purpose of the application of the Luxembourg-Belgium DTT. Consequently, as from 14 March 2020, remote working days performed from home in Belgium are not counted towards the 24-day tolerance rule. This measure will apply until a new order is issued.
- The same agreement has been made between the French and Luxembourg Governments.
- Germany and Luxembourg signed an agreement on 3 April 2020 confirming that German cross-border workers who are employees can work from home and maintain their tax status to the extent that the homeworking is due to Covid-19-related measures (the agreement does not affect employees whose contract specifically provides for teleworking). Under these circumstances, homeworking days will be treated as days worked in Luxembourg. The agreement applies to homeworking performed between 11 March and 30 April and will automatically be renewed for one month at the end of each month unless one of the competent authorities terminates it. Cross-border workers will have to keep for their records a certificate from the employer certifying the number of homeworking days performed due to the COVID-19 pandemic.
https://www.elvingerhoss.lu/publications/covid-19-taxation-luxembourg-cross-border-workers
ORIGIN: It is thought to have originated in Southern Europe and Central Asia. Saffron Crocus is a pretty bulb with purple flowers, each with three stigmas. It has been grown as a spice and a dye since ancient times. The name is derived from zafaran, the Arabian word for yellow. It is famous as an ingredient in paella, bouillabaisse and risotto. It is a perennial bulb to 40 cm high and summer dormant. It requires a cold winter with a few frosts to flower successfully. Older, larger bulbs, 5 years or more, can produce up to 15 flowers before they subdivide into multiple daughter corms. Position: Requires full sun, will not grow in shade. Soil Type: Deep, rich, very well-drained, pH 6.5; compost and well-rotted manures are beneficial. Recommended planting time: Autumn, early spring. It does best in temperate areas and dry Mediterranean conditions (VIC, SA, drier areas of NSW). It prefers areas with winter / spring rain and dry summers. It is unlikely to be successful in humid, subtropical areas, such as coastal zones north of Sydney and QLD but if you want to give it a try, plant it in a terracotta pot that can be kept fairly dry; be careful not to over-water. It is unsuitable for tropical areas. Growing details: Whilst the bulbs are actively growing keep the soil moist; once dormant, allow the soil to dry out. Flowering time is autumn. Stigmas must be harvested straight after the flowers open; each flower will only produce 3 stigmas and each saffron crocus bulb will only produce 1 flower. The flower stigmas are the world's costliest spice. About 50 - 60 saffron flowers are required to produce about 1 tablespoon of saffron spice. After harvesting, dry the stigmas in a dry, sheltered spot for 3-5 days; store in an airtight container. After flowering care: Simply plant and leave - these bulbs will easily naturalise in the garden.
https://greenharvest.com.au/Plants/Information/SaffronCrocus.html
The herbaceous perennial plant Ixia is a member of the Iris family. According to various sources, there are from 40 to more than 60 species in this genus. The plant comes from South Africa, namely from the Cape region. The scientific name of the genus comes from the Greek word for "bird glue", a reference to the plant's sticky sap. Cultivation of this flower began in the 18th century. Today the most widespread ixias are hybrids, known under the common name Ixia hybrida, while the wild species become less popular every year.

Ixia features

Ixia is a bulbous plant whose height can vary from 0.15 to 0.7 m. The shoots are thin. The narrow, linear, long leaf plates are xiphoid and arranged in two rows. On the peduncle there are about 10 wide-open flowers, reaching from 25 to 50 mm in diameter. The flowers comprise 6 petals of red, yellow, white or pink color, while closer to the middle the color becomes more saturated and dark, for example black, dark red or brown. Flowering is observed in the last weeks of spring or the first weeks of summer. At night, as well as in cloudy weather, the flowers of this plant do not open. The flowers have a not very strong but rather pleasant smell, which is attractive to various insects, for example bees.

Ixia planting in open ground

What time to plant

If ixia is grown in areas with sufficiently warm and mild climatic conditions, it can be planted in open soil in spring (from the last days of April to the first days of May) or in autumn (in November). Since the planting material of this plant dies at temperatures below minus 1-2 degrees, in middle latitudes, as well as in colder regions, it is planted only in spring. Each season a new site should be chosen for planting ixia, which serves as a good prevention against diseases and pests. The site should be well lit, located away from trees, and protected from gusts of wind. Ixia grows best in fertile, neutral soil saturated with humus. Areas where liquid stagnates are not suitable for planting.

Planting rules

First you need to carefully prepare the site for planting. To do this, dig it over, working in compost, and level the surface. It is recommended to add sand to heavy soil. Go through the planting material: only elastic and dense bulbs should be planted, while all dry, soft and moldy bulbs are to be rejected. Make holes and cover their bottom with a layer of nutritious soil, bearing in mind that each bulb is buried 50-80 mm deep. When planting corms, a distance of 10-12 centimeters is kept between the holes; for divisions and offsets, 8 to 10 centimeters. The planted bulbs do not need to be watered, but the surface of the site must be immediately covered with a layer of organic mulch 20 to 30 mm thick. In the current season, the first flowering can be observed only in those plants that have grown from the largest bulbs. The rest of the bushes will bloom only after 1-2 years.

Ixia care in the garden

It is not difficult to grow ixia in your garden, if you know a few rules and features. For the plant to grow and develop correctly, it needs a lot of light, warmth and high humidity.
For this reason, choose sunny spots for planting, and to raise the air humidity, mist the bushes with a spray bottle in the evenings. In a shaded spot the bushes grow less spectacular: the peduncles become very thin and long, and the flowers lose their rich colour. Otherwise the crop is cared for like any other garden plant: water it, weed, feed it on time, remove wilted flowers, loosen the soil surface between the bushes, and protect it from pests and diseases when necessary.

Watering and feeding. As mentioned above, freshly planted corms are not watered; watering begins only when shoots appear, after about 15-20 days, and from then on is systematic. During bud formation and flowering, water abundantly and often, using settled, warm water in which agents that stimulate abundant, prolonged flowering may be dissolved. For feeding, experts advise mineral fertilizers for bulbous crops, though organic fertilizers can also be used; start feeding from the first weeks of summer. Once the bushes have finished flowering, stop watering and feeding.

Reproduction of ixia. Offsets gradually form around the mother corms and are used for propagation. Separate the offsets before planting and treat the break points with crushed charcoal, then plant them in open soil; such plants first flower after 2-3 years. Ixia can also be propagated by dividing the bulb: cut it into several pieces with a very sharp knife, so that each piece has an eye and a piece of the base plate with root rudiments. Treat the cuts with brilliant green, charcoal powder or wood ash, and plant the divisions in open ground immediately; they often bloom in the current season.

Wintering. When the plant finishes flowering, do not lift the corms from the soil right away, since they still need to accumulate nutrients; as a rule, they are dug up in the last days of July. Dry the corms in a shaded, well-ventilated place, treat them with a strong solution of potassium permanganate, and dry them again. Store them in a box in a cool, dry room, or on the vegetable shelf of the refrigerator. As noted above, in regions with a cold climate ixia is planted in spring, and in warmer ones in autumn. Ixia can also be potted for forcing, in which case it makes an excellent decoration for any room in the winter months. In regions with very warm winters, ixia is left in the open ground over the cold season: remove the yellowed, withered above-ground parts and cover the site with a layer of mulch (straw, loose leaves, dry soil, sawdust or spruce branches).

Diseases and pests. Ixia is highly resistant to pests and diseases. Problems begin only when moisture stagnates in the soil: prolonged waterlogging can cause mould on the corms, so take this into account when preparing the planting site.
If the soil is excessively heavy or clayey, add sand when digging it over.

Types and varieties of ixia

The types of ixia cultivated by gardeners are described below.

Green-flowered ixia (Ixia viridiflora). Planting material of this species is relatively hard to find. The small, flat flowers are green with a black-purple centre.

Spotted ixia (Ixia maculata). The rounded bulb is up to 30 mm in diameter, and the leafy stem is about 0.4 m tall, with narrow, lanceolate basal leaves. The spike-shaped inflorescences carry flowers about 40 mm across, in varied colours with a dark centre; they open wide during the day and close at night.

Chinese ixia (Ixia chinensis). This Far Eastern species is endangered. The rhizome is short, and the stems are 0.5-1.5 m tall, bearing 5-8 xiphoid leaves in their lower part, up to half a metre long and 40 mm wide. The branching, spreading paniculate inflorescences consist of 12-20 wide-open yellow or brown-red flowers about 70 mm across, speckled with dark purple; a flower opens in the first half of a sunny day and begins to wilt around 17:00. Popular with gardeners are the decorative form flava, with large, solid-yellow flowers, the fan form (the leaves overlap one another for 3/4 of their length), and the form purpurea (red-yellow flowers).

Hybrid ixia (Ixia x hybrida). This perennial is 0.3-0.5 m tall, with narrow leaves in two rows and leafless peduncles. The spike-shaped or racemose inflorescences consist of 6-12 funnel-shaped flowers of various colours, with a dark red or brown centre. Flowering begins early in the season and lasts about 20 days. Cultivated since 1770. Popular varieties:
- Blue Bird: white and blue flowers.
- Castor: red flowers.
- Giant: creamy-white inflorescences.
- Earley Seprise: carmine-red flowers with white.
- Hogarth: cream inflorescences.
- Hollands Glory and Market: yellow flowers.
- Mabel: carmine-red inflorescences.
- Volcano: brick-red inflorescences.
- Rose Imperial: delicate pink flowers.
You can also buy the Ixia Mix, which combines plants of various colours.

Freesia: growing in the garden and at home

Freesia, or Cape lily of the valley, has not gone out of fashion for hundreds of years. Once it was hunted by court gardeners and perfumers; now by fashionable florists, brides and photographers.
If you dream of growing this flamboyant celebrity of the flower world yourself, be patient and arm yourself with the invaluable experience of your colleagues: How do you grow freesia without a greenhouse? Which colours of freesia grow fastest? How do you give home freesia the humid air it needs without destroying it? Is it realistic to make capricious freesia bloom in the middle of winter?

Planting ixia in open ground

The time for planting ixia tubers (bulbs) in open ground depends on the climate of the zone. Bear in mind that seedlings will freeze, even in the soil, if the air temperature drops to minus 1-2 degrees. In warm regions, the "rested" planting material can be planted out in autumn, but in middle and northern latitudes it must overwinter in a cool, dark, dry place where the air temperature stays above freezing. When planting ixia bulbs, follow a few simple rules:
- the site must be freshly dug over with compost;
- sand is added to heavy soil in a 1:1 ratio;
- use only high-quality tubers with no trace of rot or mould;
- place the bulbs in holes no more than 8 cm deep;
- leave at least 8 and no more than 12 cm between them;
- do not water the plot for the first few days after planting;
- mulch it with organic matter immediately.
The soil temperature in the planting area should be at least 15 degrees. If the tubers are planted in autumn, the holes should be 10-12 cm deep.

Transplanting and propagation

Because the plant comes to us from hot countries with a warm, humid climate, summer conditions suit it, but it cannot overwinter outdoors. In warm regions, where winter temperatures hover around zero, the bulb is left in the ground under a dry shelter.

Preparing the planting site. Choose an open, well-lit spot for the bulbs. Coarse sand and peat will provide soil drainage; the acidity of the soil should be neutral.

How to propagate ixia yourself. At the end of the growing season, after flowering, the plant is dug up. Small daughter bulbs form on the tuber, and each must have its own root. After separation, the attachment point is treated with activated carbon or potassium permanganate solution. You can also try growing ixia from seed, but this is more laborious and unsuitable for hybrid forms.

Growing ixia at home and in a pot. In cold conservatories and heated greenhouses, ixia will flower several times a year. Achieving flowering in the home is difficult, since flower formation requires lowering the temperature to +6 °C and then raising it gradually to +13 °C while giving the plant 16 hours of light.

Preparing for winter. The leaves die off within 30 days after flowering. In gardens with warm winters, the above-ground part of the plant is cut off and laid over the tubers, which are additionally covered with foliage and branches. Where winter temperatures drop below zero, ixia is dug up; the bulbs are dried, treated with potassium permanganate and stored in paper bags on the bottom shelf of the refrigerator.

Propagation. The species can be propagated in two ways: by separating young offsets from the mother bulb, or from seed. In the first case the plant flowers the next year; seed propagation gives flowering in the second or third year. Propagation by dividing the bulb is also possible.
Divide the bulb so that each part has roots and a bud.

Growing from seed. Growing from seed takes a lot of effort. Pre-soak the seeds in water for a day, changing the water every 6 hours. The sowing soil should consist of equal parts peat, humus, turf and sand, and it must be sterilized. Sow the seeds 0.6-0.7 cm deep in a mini-greenhouse or a box covered with glass (or film), and keep the seedling area well lit. When the second or third leaf appears, the seedlings can be pricked out into separate containers, and hardening off begins 20 days before planting out. Transplant into the garden very carefully, together with the ball of earth.

Planting bulbs. The selected planting material must be dry and clean, without visible mechanical or fungal damage. A few hours before planting, soak the bulbs in a solution of Fundazol or Maxim. When the danger of frost has passed and the ground has warmed well, plant the bulbs in open ground 10-12 cm apart and 7 cm deep; shoots appear in as little as 15 days. In warm regions they are planted before winter, 15 cm deep. Earlier flowering can be obtained by potting the bulbs in April and transplanting them into open ground in early May.

Diseases and pests. All types of ixia are very resistant to diseases and pests. If a peduncle turns yellow, dig up the tuber and examine the bulb for mould and rot; waterlogging can also cause yellowing. If rot is found, the plant has to be destroyed.
https://gb.andropampanga.com/1204-ixia.html
Q: Display changing column value in Gnuplot animation

I am making a gnuplot animation of a satellite going around a planet. My task is to display its XY trajectory and the associated values of velocity and energy versus time. I know how to plot the path, but I have been having problems displaying the velocity etc. The code below does the following: satellite track and time steps -- columns 3:4; satellite position -- columns 3:4; planet position -- columns 6:7.

do for [n=0:int(STATS_records)] {
    plot "sat.dat" u 3:4 every ::0::n w lp ls 2 t sprintf("steps=%i", n), \
         "sat.dat" u 3:4 every ::n::n w lp ls 4 notitle, \
         "sat.dat" u 6:7 every ::0::n w lp ls 3 notitle
}

How do I display the associated velocity value, which is in column 5, alongside each sprintf title? Thank you everyone in advance.

A: It seems that you want to put everything in the "key" (legend), but another option is to use labels, which can easily be placed arbitrarily. There are labels you place one at a time (with set label), and there is the "with labels" plotting style for plotting a whole column of labels. Don't get the two confused.

Your main issue seems to be how to pull the velocity value out of column 5. My first instinct (which is quite hacky) is to use some external program, like awk:

v = system(sprintf("awk 'NR==%d{print $5}' '%s'", n+1, infile))
set label 1 sprintf("v=%.3f", v+0) at screen 0.2,0.9

This is also an example of a label (named 1). The screen keyword means screen-relative rather than graph-relative coordinates. Putting this inside your for loop will reassign label 1 every iteration, so it overwrites the label from the previous iteration. Omitting the 1 would just plop another label on top of the last one, so it would get messy. Using an external command line like this isn't very portable. (I don't think it would work on Windows.)

I saw this question that shows how to pull a value from a specific row and column of a file. The problem I had with using this is that stats implicitly filters according to whatever xrange is set. When making animations like this, I've noticed that the camera can jump around too much from autoranging, so it's nice to have tight control over the plotting range. Defining an xrange at the top of the file interfered with a subsequent stats command to read a velocity value. You can, however, specify a range for stats (before the file name, such as stats [*:*] infile). But I had issues using this in combination with a predefined xrange based on position. I found that it did work if I specify the desired plotting range on the plot line instead of a set xrange. Here is another (full script) version using only gnuplot:

set terminal pngcairo
infile = 'anim.dat'
stats infile using 3:4 name 'data' nooutput
set key font 'Courier'
do for [n=0:data_records-1] {
    set output sprintf('frame-%03d.png', n)
    # stats over a single row: velocity_min is just the column-5 value in row n
    stats [*:*] infile every ::n::n using 5 name 'velocity' nooutput
    plot [data_min_x:1.1*data_max_x][data_min_y:1.1*data_max_y] \
        infile u 3:4 every ::0::n w linespoints ls 2 t \
            sprintf("steps =%6d\nvelocity =%6.3f", n, velocity_min), \
        '' u 3:4 every ::n::n w points pt 7 ps 3 notitle
}

Notice that you could easily change this to a set label if you want. Another option is to plot '' u (x):(y):5 every ::n::n w labels to place a label at graph position (x,y).
I don't have your data, but I made my own file with what I hope is a similar format to yours:

anim.dat

0 0.0 0.0 0.0 1.11803398875 0.625
1 0.05 0.05 0.02375 1.09658560997 0.625
2 0.1 0.1 0.045 1.07703296143 0.625
3 0.15 0.15 0.06375 1.05948100502 0.625
4 0.2 0.2 0.08 1.04403065089 0.625
5 0.25 0.25 0.09375 1.0307764064 0.625
6 0.3 0.3 0.105 1.01980390272 0.625
7 0.35 0.35 0.11375 1.01118742081 0.625
8 0.4 0.4 0.12 1.00498756211 0.625
9 0.45 0.45 0.12375 1.00124921973 0.625
10 0.5 0.5 0.125 1.0 0.625
11 0.55 0.55 0.12375 1.00124921973 0.625
12 0.6 0.6 0.12 1.00498756211 0.625
13 0.65 0.65 0.11375 1.01118742081 0.625
14 0.7 0.7 0.105 1.01980390272 0.625
15 0.75 0.75 0.09375 1.0307764064 0.625
16 0.8 0.8 0.08 1.04403065089 0.625
17 0.85 0.85 0.06375 1.05948100502 0.625
18 0.9 0.9 0.045 1.07703296143 0.625
19 0.95 0.95 0.02375 1.09658560997 0.625
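For comparison, here is a minimal Python/matplotlib sketch of the same per-frame label idea. It assumes a whitespace-separated file like anim.dat above, with the position in (1-based) columns 3 and 4 and the velocity in column 5; the file name and styling are illustrative only, not a drop-in replacement for the gnuplot script.

# Sketch: redraw the track each frame and overwrite a single text label,
# mirroring the "reassign label 1 every iteration" approach in gnuplot.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("anim.dat")
x, y, v = data[:, 2], data[:, 3], data[:, 4]   # 0-based column indices

fig, ax = plt.subplots()
ax.set_xlim(x.min(), 1.1 * x.max())            # fixed range: no autorange jumping
ax.set_ylim(y.min(), 1.1 * y.max())
track, = ax.plot([], [], "o-", ms=3)           # trajectory so far
head, = ax.plot([], [], "o", ms=10)            # current position
label = ax.text(0.05, 0.95, "", transform=ax.transAxes,
                va="top", family="monospace")  # axes-relative, like "at screen"

for n in range(len(x)):
    track.set_data(x[:n + 1], y[:n + 1])
    head.set_data([x[n]], [y[n]])
    label.set_text(f"steps ={n:6d}\nvelocity ={v[n]:6.3f}")  # overwritten per frame
    fig.savefig(f"frame-{n:03d}.png")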
---
abstract: 'This paper investigates the adversarial Bandits with Knapsack (BwK) online learning problem, where a player repeatedly chooses to perform an action, pays the corresponding cost, and receives a reward associated with the action. The player is constrained by the maximum budget $B$ that can be spent to perform actions, and the rewards and the costs of the actions are assigned by an adversary. This problem has only been studied in the restricted setting where the reward of an action is greater than its cost, while we provide a solution in the general setting. Namely, we propose EXP3.BwK, a novel algorithm that achieves order optimal regret. We also propose EXP3++.BwK, which is order optimal in the adversarial BwK setup, and incurs an almost optimal expected regret with an additional factor of $\log(B)$ in the stochastic BwK setup. Finally, we investigate the case of large costs for the actions (i.e., costs comparable to the budget size $B$), and show that in the adversarial setting the achievable regret bounds can be significantly worse than in the case of costs bounded by a constant, which is a common assumption within the BwK literature.'
bibliography:
- 'knapsackBanditsWithCorruption.bib'
---

Introduction
============

Multi-Armed Bandit (MAB) is a sequential decision-making problem under uncertainty, based on balancing the trade-off between exploration and exploitation, i.e., “the conflict between taking actions which yield immediate rewards and taking actions whose benefits will be seen later." A common feature of various applications of MAB is that the resources consumed during the decision-making process are limited. For instance, scientists experimenting with alternative medical treatments may be limited by the number of patients participating in the study as well as by the cost of the material used in the treatments. Similarly, in web advertisement, a website experimenting with displaying advertisements is constrained by the number of users who visit the site as well as by the advertisers’ budgets. A retailer engaging in price experimentation faces inventory limits along with a limited number of consumers. A model which incorporates a budget constraint on these supply limits is Bandits with Knapsack (BwK). BwK can be seen as a game between a player and an adversary (or environment) that evolves for $T$ rounds. The player is constrained by a budget $B$ on the resources consumed during the decision-making process. The game terminates when the player runs out of budget; therefore $T$ depends on $B$. At each round $t$, the player performs an action $i$ from a set of $K$ actions, pays a cost for the selected action from the budget $B$, and receives a reward in $[0,1]$. The reward and the cost vary from application to application. For example, in web advertisement, the reward is the click-through rate and the cost is the space occupied by the advertisement on the web page. In medical trials, the reward is the success rate of the medicine and the cost corresponds to the cost of the material used. The Bandits with Knapsack problem can be classified into two categories: stochastic BwK and adversarial BwK. In stochastic BwK, the rewards and the costs of each action form i.i.d. sequences over $T$ rounds, drawn from fixed unknown distributions.
In adversarial BwK, the sequence of the rewards and the costs associated with each action over $T$ rounds is assigned by an oblivious adversary before the game starts. The objective of the player is to minimize the expected regret, which is the difference between the expected sum of rewards received by the best fixed action in hindsight and the expected sum of rewards received by the player’s action selection strategy. The stochastic BwK setting has been extensively studied in the literature [@tran2010epsilon; @tran2012knapsack; @ding2013multi; @badanidiyuru2013bandits; @agrawal2014bandits; @tran2014efficient; @agrawal2016linear; @xia2016budgeted; @sankararaman2017combinatorial; @rangi2018multi]. The results in these works can be broadly classified into two categories depending on the regret analysis: the problem-dependent bound on the expected regret is $O(\log(B))$ [@tran2012knapsack; @ding2013multi; @xia2016budgeted; @zhou2017budget; @rangi2018multi], while the problem-independent bound on the expected regret is $O(\sqrt{KB})$ [@agrawal2016linear; @agrawal2014bandits; @badanidiyuru2013bandits]. The adversarial BwK setting has received limited attention [@zhou2017budget]. In that setting, it has been assumed that, for every action and every round $t\leq T$, the reward at round $t$ is greater than the cost at round $t$ [@zhou2017budget]. Under this assumption, EXP3.M.B has been proposed and proven to be order optimal [@zhou2017budget]. We observe here that the assumption that the reward is greater than the cost is uncommon in the literature on the BwK problem, and has no physical meaning in many applications. For example, in web advertisement, the click-through rate (i.e., reward) and the space occupied by the advertisement on the web page (i.e., cost) cannot be compared with each other. Likewise, in a medical trial, the reward is the success rate of the medicine and the cost corresponds to the cost of the material used, and a comparison of these values has no meaning. Thus, a key question is how to design an algorithm for adversarial BwK in a general reward setting that achieves order optimal regret guarantees. Another key challenge is to provide a solution that is satisfactory for both the stochastic and the adversarial settings. In many real-world situations, there is no information about whether the bandit model is used in a stochastic or an adversarial manner, so the deployed algorithm has to perform well in both cases. Current algorithms for adversarial BwK (e.g., EXP3.M.B) do not provide optimal regret guarantees in the stochastic setting, i.e. $O(\log(B))$, and algorithms for stochastic BwK (e.g., KUBE) do not provide optimal regret guarantees in the adversarial setting, i.e. $O(\sqrt{KB})$. Currently, there is no work proposing a practical algorithm for both settings. Finally, the literature on the BwK problem typically assumes that the costs are bounded by a constant (i.e., independent of the budget $B$), and it is unknown whether state-of-the-art regret bounds hold in the case of large costs (i.e., when costs are comparable to the budget $B$). In this framework, the contribution of our work is threefold. First, we extend EXP3, a classical algorithm proposed for the adversarial MAB setup [@auer2002nonstochastic], and propose EXP3.BwK, an algorithm for the adversarial BwK setup.
We remove the assumption on the rewards and the costs previously used in [@zhou2017budget] to obtain regret bounds, and we show that the expected regret of EXP3.BwK is $O(\sqrt{B K\log K})$. We also show the lower bound $\Omega(\sqrt{K B})$ in the adversarial BwK setting. It follows that EXP3.BwK is order optimal. Second, we unify the stochastic and the adversarial settings by proposing EXP3++.BwK, a novel and practical algorithm which works well in both. This algorithm incurs an expected regret of $O(\sqrt{BK\log K})$ and $O(\log^2(B))$ in the adversarial and the stochastic BwK settings respectively. Note that the regret bound of EXP3++.BwK for the stochastic setting has an additional factor of $\log(B)$ in comparison to the optimal expected regret, i.e. $O(\log(B))$. Thus, EXP3++.BwK exhibits an almost optimal behavior in both the stochastic and the adversarial settings. Table \[sample-table\] summarizes these contributions and compares them with the other results in the literature. In the table, the problem-dependent parameter $\Delta(i)$ represents the difference between the contributions of the optimal action and the action $i$, and is formally defined in the next section. Finally, we show that if the maximum cost is bounded above by $B^{\alpha}$, where $\alpha \in [0,1]$, then the lower bound on the expected regret in the adversarial BwK setup scales at least linearly with the maximum cost, namely it is $\Omega(B^{\alpha})$. This implies that when $\alpha > \frac{1}{2}$, it is impossible to achieve a regret bound of $O(\sqrt{B})$, which is order optimal in cases with small costs.

| **Algorithm** | **Upper bound** | **Lower bound** |
|---|---|---|
| KUBE for BwK [@tran2012knapsack] | $O(K\log(B)/\min_{i\in[K]}\Delta(i))$ | $\Omega(\log(B))$ |
| B-KUBE for Bounded BwK [@rangi2018multi] | $O(K\log(B)/\min_{i\in[K]}\Delta(i))$ | $\Omega(\log(B))$ |
| UCB-BV for variable cost [@ding2013multi] | $O(K\log(B)/\min_{i\in[K]}\Delta(i))$ | $\Omega(\log(B))$ |
| UCB-MB for multiple plays [@zhou2017budget] | $O(K\log(B))$ | |
| EXP3.M.B [@zhou2017budget] | $O(\sqrt{K\log(K)B})$ | $\Omega((1-1/K)^2 \sqrt{KB})$ |
| EXP3.BwK (This work) | $O(\sqrt{K\log(K)B})$ | $\Omega(\sqrt{KB})$ |
| EXP3++.BwK in Adversarial setting (This work) | $O(\sqrt{K\log(K)B})$ | |
| EXP3++.BwK in Stochastic setting (This work) | $O(K\log^2(B)/\min_{i\in[K]}\Delta(i))$ | |

Related Work
------------

In the MAB literature, the problem of finding one algorithm for both the stochastic and the adversarial setting has been referred to as “best of both worlds" [@bubeck2012best; @auer2016algorithm; @seldin2014one; @seldin2017improved; @lykouris2018stochastic]. SAO, the first algorithm proposed for this problem, relies on knowledge of the time horizon $T$, and performs an irreversible switch to EXP3.P if the beginning of the game is estimated to exhibit an adversarial, or non-stochastic, behavior [@bubeck2012best]. The expected regret of SAO is $O(\log^3(T))$ in the stochastic MAB setting and ${O}(\sqrt{T}\log^{2}(T))$ in the adversarial MAB setting. Using ideas from SAO, a new algorithm, SAPO, was proposed [@auer2016algorithm]. SAPO exploits novel criteria for the detection of adversarial, or non-stochastic, behavior, and performs an irreversible switch to EXP3.P if such behavior is detected. Thus, both SAO and SAPO initially assume that the rewards are stochastic, and perform an irreversible switch to EXP3.P if this assumption is detected to be incorrect.
The expected regret of SAPO is $O(\log^2(T))$ in the stochastic MAB setting, and ${O}(\sqrt{T\log(T^2)})$ in the adversarial MAB setting. Later, EXP3++ was proposed [@seldin2014one]. Unlike SAO and SAPO, this algorithm starts by assuming that the rewards exhibit an adversarial, or non-stochastic, behavior, and adapts itself as it encounters stochastic behavior of the rewards. The analysis of EXP3++ was improved in [@seldin2017improved], showing that the algorithm guarantees an expected regret of $O(\log^2(T))$ and $O(\sqrt{T})$ in the stochastic and the adversarial MAB settings respectively. The problem of stochastic bandits corrupted with adversarial samples has been studied in the regime of small corruptions [@lykouris2018stochastic]. The algorithm proposed in that work utilizes the idea of active arm elimination based on upper and lower confidence bounds of the estimated rewards. The work provides a regret analysis of the algorithm as a corruption $C$ is introduced in the rewards, and shows that the decay in performance is order optimal in $C$. The “best of both worlds" problem has not been studied before in the BwK setting.

Problem Formulation
===================

A player can choose from a set of $K$ actions, and has a budget $B$. At round $t$, each action $i\in [K]$ is associated with a reward $r_{t}(i)\in [0,1]$ and a cost $c_{t}(i)\in [c_{min},c_{max}]$ with $c_{min} \leq c_{max}$. For now, we assume that $c_{max} = 1$, and will investigate the case of larger costs in Section 5. At round $t$, the player performs an action $i_{t}\in [K]$, pays the cost $c_{t}(i_{t})$ and receives the reward $r_{t}(i_{t})$. The gain of a player’s strategy $\mathcal{A}$ is defined as $$G(\mathcal{A})= \mathbf{E}\Big[\sum_{t=1}^{\tau(\mathcal{A})}r_{t}(i_{t})\Big],$$ where $\tau(\mathcal{A})$ is the number of rounds after which the strategy $\mathcal{A}$ terminates. The objective of the player is to design $\mathcal{A}$ such that $$\label{eq:OptimizationProblem} \begin{aligned} \max_{\{i_{1},i_{2},\ldots,i_{\tau(\mathcal{A})}\}} G(\mathcal{A})\\ \mbox{s.t. } \mathbf{P}\Big(\sum_{t=1}^{\tau(\mathcal{A})}c_{t}(i_{t})\leq B\Big) =1. \end{aligned}$$ Note that $\tau(\mathcal{A})$ depends on the budget $B$. Let $\mathcal{A}^{*}$ be the algorithm that solves (\[eq:OptimizationProblem\]). The expected regret of an algorithm $\mathcal{A}$ is defined as $$\label{eq:regret} R(\mathcal{A})= G(\mathcal{A}^*)- G(\mathcal{A}).$$ The optimization problem in (\[eq:OptimizationProblem\]) is a knapsack problem, and is known to be NP-hard [@KelPfePis04]. Given that the rewards and the costs of all the actions are known and fixed for all $T$ rounds, the greedy algorithm $\mathcal{A}^{G}$ for solving (\[eq:OptimizationProblem\]) selects actions in decreasing order of efficiency, defined as $e(i)=r(i)/c(i)$ for an action $i\in[K]$, until the budget constraint in (\[eq:OptimizationProblem\]) is satisfied. It can be shown that [@KelPfePis04] $$\label{eq:greedy} G(\mathcal{A}^{G})\leq G(\mathcal{A}^*)\leq G(\mathcal{A}^{G})+\max_{i\in [K]}e{(i)}.$$ In the stochastic setting, for all $t$ and $i\in [K]$, the reward $r_{t}(i)$ and the cost $c_{t}(i)$ of an action $i$ are independently and identically distributed according to some unknown distributions. The expected reward and the expected cost of an action $i$ are denoted by $\mu{(i)}$ and $\rho{(i)}$ respectively. Thus, in the stochastic setting, the efficiency of an action $i$ can be defined as $e(i)=\mu(i)/\rho(i)$.
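To make the greedy benchmark $\mathcal{A}^{G}$ concrete, the following is a minimal Python sketch of its gain computation under the idealized assumption of known, fixed rewards and costs; the function name and interface are illustrative only, not part of the formal model.

# Sketch of the greedy benchmark A^G: with known, fixed rewards and costs,
# play actions in decreasing order of efficiency e(i) = r(i)/c(i) until no
# further play fits within the remaining budget.
def greedy_gain(rewards, costs, budget):
    order = sorted(range(len(rewards)),
                   key=lambda i: rewards[i] / costs[i], reverse=True)
    gain = 0.0
    for i in order:
        plays = int(budget // costs[i])   # number of affordable plays of action i
        gain += plays * rewards[i]
        budget -= plays * costs[i]
    return gain

# Example: efficiencies are 2.0, 1.5 and 0.5; with B = 10 the most efficient
# action (cost 0.5) is played 20 times, exhausting the budget, so the gain is 20.
print(greedy_gain([1.0, 0.75, 0.25], [0.5, 0.5, 0.5], 10.0))  # prints 20.0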
Using (\[eq:greedy\]), the expected regret of an algorithm $\mathcal{A}$ simplifies to $$\label{eq:regretStochastic} \begin{split} R(\mathcal{A})&\leq \max_{i\in[K]}\frac{\mu{(i)}}{\rho{(i)}}\cdot (\tau(\mathcal{A}^{G})+1)- G(\mathcal{A})\\ &=e(i^*)\cdot (\tau(\mathcal{A}^{G})+1)- G(\mathcal{A})\\ &\leq \sum_{i\in [K]/\{i^*\}}\Delta(i)\mathbf{E}[N_{T}(i)], \end{split}$$ where $i^*=\mbox{argmax}_{i\in [K]}e(i)$, $\Delta (i)=e(i^*)-e(i)$, $N_{T}(i)$ is the number of times an action $i$ is selected in $T$ rounds, and $T=\max\{\tau(\mathcal{A}),\tau(\mathcal{A}^G)\}$. The definition in (\[eq:regretStochastic\]) is consistent with the literature on stochastic BwK [@ding2013multi; @tran2014efficient]. In the adversarial setting, for all $t$, $r_{t}(i)$ and $c_{t}(i)$ are chosen by an adversary before the game starts. In this setting, the efficiency of an action $i$ at round $t$ can be defined as $e_{t}(i)=r_{t}(i)/c_{t}(i)$. Therefore, the expected regret simplifies to $$\label{eq:regretAdv} \begin{split} R(\mathcal{A})&\leq \frac{B}{T(i^*)}\sum_{t=1}^{T(i^*)} \frac{r_{t}(i^*)}{c_{t}(i^*)}- G(\mathcal{A}),\\ &\leq \mathbf{E}\Bigg[z(\mathcal{A})\bigg(\sum_{t=1}^{T(i^*)} e_{t}(i^*)-\sum_{t=1}^{\tau(\mathcal{A})}e_{t}(i_{t})\bigg) \Bigg], \end{split}$$ where $T{(i)}$ is the number of rounds for which the game remains feasible within the budget $B$ when the fixed action $i\in [K]$ is performed, $i^*=\mbox{argmax}_{i\in [K]}\sum_{t=1}^{T(i)}e_{t}(i)$ is the optimal action in hindsight, $$z(\mathcal{A})=\max\Bigg\{\frac{B}{T(i^*)},\frac{B(\mathcal{A})}{\tau(\mathcal{A})}\Bigg\}$$ is the maximum cost per round, $B(\mathcal{A})$ is the budget utilized by the algorithm $\mathcal{A}$, and the inequality follows from (\[eq:greedy\]). In words, the expected regret is bounded by the expectation of the efficiency regret scaled by the larger of the cost spent per round by the optimal action $i^*$ and the cost spent per round by the algorithm $\mathcal{A}$, where the efficiency regret is the sum of the rewards per unit cost of the optimal action minus the sum of the rewards per unit cost of the actions performed by the algorithm $\mathcal{A}$.

Adversarial BwK
===============

In this section, we propose the algorithm EXP3.BwK for the adversarial BwK setting, and show that it is order optimal.

Algorithm \[alg:Exp3.bwk\] (EXP3.BwK):

- Initialization: exploration parameter $\gamma$; for all $i\in [K]$, $w_{1}(i)=1$ and ${\hat{e}}_{1}(i)=0$; $t=1$.
- At each round $t$:
  1. Compute $W_{t}={\sum_{j\in [K]}w_{t}(j)}$ and update $p_{t}(i)={(1-\gamma)w_{t}(i)}/W_{t} + \gamma/K$.
  2. Choose $i_{t} = i$ with probability $p_{t}(i)$, and observe $(r_{t}(i_{t}),c_{t}(i_{t}))$.
  3. If $c_{t}(i_{t})>B$, exit; otherwise set $B=B-c_{t}(i_{t})$.
  4. For all $i\in[K]$, set ${\hat{e}}_{t}(i)=r_{t}(i)\textbf{1}(i=i_{t})/p_{t}(i)c_{t}(i)$ and $w_{t+1}(i)=w_{t}(i)\cdot \exp(\gamma c_{min}\cdot\hat{e}_{t}(i)/K)$.
  5. Set $t=t+1$.

Similar to EXP3, EXP3.BwK maintains a set of time-varying weights $w_{t}(i)$ for each action $i\in [K]$. At each round $t$, an action $i_{t}=i$ is selected with probability $p_{t}(i)$, which depends on two parameters: the time-varying weight $w_{t}(i)$ and an exploration constant $\gamma/K$. Following the selection of the action $i_{t}$, the algorithm pays the cost $c_{t}(i_{t})$. If the cost $c_{t}(i_{t})$ is greater than the remaining budget, then the algorithm terminates without attempting to find other feasible actions which could be performed using the remaining budget. In EXP3.BwK, the efficiency $e_{t}(i)=r_{t}(i)/c_{t}(i)$ is used as a measure of the contribution of an action $i \in [K]$ at round $t$.
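As a companion to the pseudocode, here is a compact Python sketch of the EXP3.BwK loop described above. The environment callback env(t, i) returning (reward, cost) is an assumed interface for illustration, and the choice of $\gamma$ anticipates Theorem \[thm:EXP3.BwK\] below; this is a sketch, not a reference implementation.

# Sketch of EXP3.BwK. Assumes rewards in [0, 1] and costs in [c_min, 1] with
# c_min > 0, so the loop below always terminates.
import math
import random

def exp3_bwk(env, K, B, c_min):
    gamma = math.sqrt(c_min * K * math.log(K) / (B * (math.e - 1)))
    w = [1.0] * K
    t, gain = 1, 0.0
    while True:
        W = sum(w)
        p = [(1 - gamma) * w[i] / W + gamma / K for i in range(K)]
        i_t = random.choices(range(K), weights=p)[0]
        r, c = env(t, i_t)
        if c > B:                       # cannot pay the cost: terminate
            return gain
        B -= c
        gain += r
        # Importance-weighted efficiency estimate; it is zero for all actions
        # other than i_t, so only the played action's weight changes.
        e_hat = r / (p[i_t] * c)
        w[i_t] *= math.exp(gamma * c_min * e_hat / K)
        t += 1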
The empirical estimate of the efficiency, $\hat{e}_{t}(i)$ (defined in Algorithm \[alg:Exp3.bwk\]), is used to update the weight $w_{t}(i)$ of the action $i$. For all $i\in [K]$, the difference between the weights $w_{t}(i)$ and $w_{t-1}(i)$ is controlled by scaling $\hat{e}_{t}(i)$ with $\gamma c_{min}/K$, which ensures that $\gamma c_{min}\hat{e}_{t}(i)/K\leq 1$. The probability $p_{t}(i)$ depends on $w_{t}(i)$ and the exploration constant $\gamma/K$. Within $p_{t}(i)$, the weight $w_{t}(i)$ is responsible for exploitation, as it favors the selection of an action with higher cumulative efficiency $\sum_{n=1}^{t-1}\hat{e}_{n}(i)$ observed up to round $t-1$. On the contrary, the exploration constant $\gamma/K$ ensures that the player is always exploring with a positive probability in search of the optimal action $i^*$. This balances the trade-off between exploration and exploitation. In the literature on the adversarial BwK setup [@zhou2017budget], it has been assumed that for all actions $i\in [K]$ and for all $t$, $r_{t}(i)\geq c_{t}(i)$. This allows the use of a different efficiency measure, $r_{t}(i)-c_{t}(i)$, which is linear in both the reward and the cost of an action $i$, thus simplifying the proofs [@zhou2017budget]. In many real-life applications, however, the rewards and the costs are on different scales, and cannot be compared by an inequality operator. For example, in a recommendation system, a recommender is constrained by the total space available on the web page, which corresponds to the budget $B$; the space occupied by each item corresponds to its cost, and the click rate of each item corresponds to its reward. In this case, the space (cost) of an item and its click rate (reward) are not comparable. Likewise, the efficiency measure $r_{t}(i)-c_{t}(i)$, which compares the reward and the cost of an action $i$ on a linear scale, is questionable and provides no intuition about the optimality of an action. In EXP3.BwK, we use a different efficiency measure, $r_{t}(i)/c_{t}(i)$, for tracking the contributions of each action $i\in [K]$. The use of this measure is motivated by the greedy algorithm $\mathcal{A}^{G}$ and its performance guarantees with respect to the optimal solution (see (\[eq:greedy\]) and (\[eq:regretAdv\])). The advantages of using this measure are twofold. First, it eliminates the need for the assumption in [@zhou2017budget]. Second, it can track $G(\mathcal{A})$ of the algorithm $\mathcal{A}$ irrespective of the scales of the rewards and the costs. The following theorem provides the performance guarantees of EXP3.BwK in terms of the expected regret, and shows that the regret is sublinear in the budget $B$.

\[thm:EXP3.BwK\] For $\gamma=\sqrt{c_{min} K\log(K)/B(e-1)}$, the expected regret, as defined in (\[eq:regretAdv\]), of the algorithm EXP3.BwK is at most $$R({E})\leq 2\sqrt{\Bigg((e-1)+(e-2)\frac{K}{B}\Bigg)\frac{BK\log(K)}{c_{min}^3}},$$ where ${E}$ denotes EXP3.BwK.

We briefly discuss the key ideas of the proof here; the detailed version is presented in the supplementary material.
The expected regret is at most $$\label{eq:simplify} \begin{split} &\frac{B}{T(i^*)}\sum_{t=1}^{T(i^*)} \frac{r_{t}(i^*)}{c_{t}(i^*)}- G(E)\\ &\stackrel{}{\leq}\mathbf{E}\Bigg[z(E)\Bigg(\sum_{t=1}^{T(i^*)} \frac{r_{t}(i^*)}{c_{t}(i^*)}- \sum_{t=1}^{\tau(E)} \frac{r_{t}(i_{t})}{c_{t}(i_{t})}\Bigg)\Bigg],\\ \end{split}$$ where $\tau(E)$ is the stopping time of EXP3.BwK, $B(E)$ is the budget utilized by EXP3.BwK, and $$z(E)=\max\Bigg\{\frac{B}{T(i^*)},\frac{B(E)}{\tau(E)}\Bigg\}.$$ Using (\[eq:simplify\]), the expected regret can be bounded by showing that $$\label{eq:sum} \begin{split} &\mathbf{E}\Bigg[\Bigg(\sum_{t=1}^{T(i^*)} \frac{r_{t}(i^*)}{c_{t}(i^*)}- \sum_{t=1}^{\tau(E)} \frac{r_{t}(i_{t})}{c_{t}(i_{t})}\Bigg)\Bigg]\\ &\leq 2\sqrt{\frac{((e-1)B+(e-2)K)K\log(K)}{c_{min}^3}}, \end{split}$$ and that $z(E)\leq 1$. The key challenge in the proof of Theorem \[thm:EXP3.BwK\] is that the two summations in (\[eq:sum\]), corresponding to the optimal action $i^*$ and the algorithm EXP3.BwK, run over different time scales, $T(i^*)$ and $\tau(E)$ respectively. This requires the analysis to be split into two cases: $T(i^*)\geq \tau(E)$ and $T(i^*)\leq \tau(E)$. The analysis of these cases is based on the observation that $B(E)>B-K$, because the algorithm EXP3.BwK terminates at round $t$ if and only if the remaining budget is insufficient to pay the cost $c_{t}(i_{t})\leq 1$. Hence, we can bound the difference between the two time scales $T(i^*)$ and $\tau(E)$ as follows: $$\label{eq:StopRule} |T(i^*)- \tau(E)|\leq \frac{K}{c_{min}}.$$ It follows that the difference between the number of rounds of the optimal action $i^*$ and of EXP3.BwK is bounded by a fixed constant independent of the budget $B$. Hence, the regret of the algorithm due to the difference in (\[eq:StopRule\]) is at most $K/c_{min}^2$, and does not introduce any dependency on the budget $B$. The following theorem provides the lower bound on the expected regret in the adversarial BwK setting.

\[thm:LowerBound\] For any player’s strategy $\mathcal{A}$, there exists an adversary for which the expected regret of the algorithm $\mathcal{A}$ is at least $\Omega(\sqrt{KB/c^2_{min}})$.

The adversary chooses the optimal action $i^*$ uniformly at random from the set of $K$ actions. For the action $i^{*}$ and for all $t$, the reward $r_{t}(i^{*})$ is assigned using an independent Bernoulli random variable with expectation $0.5+\epsilon$, where $\epsilon=\sqrt{Kc_{min}/B}$. For all $i\in [K]/\{i^*\}$ and for all $t$, the reward $r_{t}(i)$ is assigned using an independent Bernoulli random variable with expectation $0.5$. For all $i\in [K]$ and for all $t$, the adversary assigns the cost $c_{t}(i)=c_{min}$. The rest of the proof follows the same lines as the lower bound on the expected regret in the MAB setup [@auer2002nonstochastic].

Comparing the results in Theorem \[thm:EXP3.BwK\] and Theorem \[thm:LowerBound\], the expected regret of EXP3.BwK has an additional factor of $1/\sqrt{c_{min}}$, and is order optimal in the budget $B$. This also highlights an important feature of an alternative class of algorithms in the BwK setup. Consider a new class of algorithms $\mathcal{G}$ which, when unable to pay the cost $c_{t}(i_t)$ at round $t$, look for an alternative feasible action to perform in order to utilize the remaining budget effectively. Since EXP3.BwK terminates if it is unable to pay the cost $c_{t}(i_t)$, EXP3.BwK does not belong to $\mathcal{G}$, and it is still order optimal in the budget $B$.
Therefore, the expected regret of this new class of algorithms $\mathcal{G}$ has the same dependency on the budget $B$ as that of EXP3.BwK. Additionally, the difference between the expected regret of EXP3.BwK and that of the class $\mathcal{G}$ is at most a constant, i.e. $K/c_{min}^2$, independent of $B$ (see (\[eq:StopRule\])). The class of algorithms $\mathcal{G}$ faces the additional challenge of designing an appropriate termination criterion, because the costs are assigned by the adversary. The ideas developed in EXP3.BwK, particularly the efficiency measure $r_{t}(i)/c_{t}(i)$, form the basis for designing an algorithm which achieves almost optimal performance guarantees in both the stochastic and the adversarial BwK settings.

One practical algorithm for both stochastic and adversarial BwK
===============================================================

Algorithm \[alg:ThresholdEXP3\] (EXP3++.BwK):

- Initialization: for all $i\in [K]$, $w_{1}(i)=1$, ${\hat{e}}_{1}(i)=0$, ${\Bar{e}}_{1}(i)=0$, ${N}_{1}(i)=1$, $\delta_{1}(i)>0$; $t=1$; $\gamma_{t}=0.5 c_{min}\sqrt{\log(K)/tK}$.
- Perform each action once, and update, for all $i\in[K]$, $\Bar{e}_{1}(i)=r_{1}(i)/c_{1}(i)$, $B=B-\sum_{i\in[K]}c_{1}(i)$ and $t=K+1$.
- At each round $t$:
  1. For all $i\in[K]$, update $\mbox{UCB}_{t}(i)$ (see (\[eq:ucb\])), $\mbox{LCB}_{t}(i)$ (see (\[eq:lcb\])), $\hat{\Delta}_{t}(i)$ (see (\[eq:gapEstimate\])), $\delta_{t}(i)=\beta \log(t)/(t\hat{\Delta}_{t}(i)^2)$, $\epsilon_{t}(i)=\min\{1/2K,\,0.5\sqrt{\log(K)/t},\,\delta_{t}(i)\}$, ${p}_{t}(i)=\frac{\exp(-\gamma_{t}\hat{L}_{t-1}(i))}{\sum_{j\in[K]}\exp(-\gamma_{t}\hat{L}_{t-1}(j))}$, and $\tilde{p}_{t}(i)={(1-\sum_{j\neq i}\epsilon_{t}(j)){p}_{t}(i)} + \epsilon_{t}(i)$.
  2. Choose $i_{t} = i$ with probability $\tilde{p}_{t}(i)$, and observe $(r_{t}(i_{t}),c_{t}(i_{t}))$.
  3. If $c_{t}(i_{t})>B$, exit; otherwise set $B=B-c_{t}(i_{t})$.
  4. For all $i\in[K]$, update ${\hat{e}}_{t}(i)=r_{t}(i)\textbf{1}(i=i_{t})/\tilde{p}_{t}(i)c_{t}(i)$, ${\hat{\ell}}_{t}(i)=\textbf{1}(i=i_{t})/c_{min}\tilde{p}_{t}(i)-\hat{e}_{t}(i)$, $\hat{L}_{t}(i)=\sum_{n=1}^{t}\hat{\ell}_{n}(i)$, $N_{t}(i)=N_{t-1}(i)+\textbf{1}(i=i_{t})$, $\Bar{r}_{t}(i)=\sum_{n=1}^{t}r_{n}(i)\textbf{1}(i=i_{n})/N_{t}(i)$, $\Bar{c}_{t}(i)=\sum_{n=1}^{t}c_{n}(i)\textbf{1}(i=i_{n})/N_{t}(i)$, and $\Bar{e}_{t}(i)=\Bar{r}_{t}(i)/\Bar{c}_{t}(i)$.
  5. Set $t=t+1$.

In this section, we propose the algorithm EXP3++.BwK (Algorithm \[alg:ThresholdEXP3\]), and show that it achieves almost optimal performance guarantees in both the stochastic and the adversarial BwK settings. Before discussing the algorithm EXP3++.BwK, let us briefly focus on the fundamental difference between the optimal algorithms in the stochastic and the adversarial BwK settings. In the stochastic BwK setting, the algorithms focus on exploration in the initial stage, until a reliable estimate of the expected rewards and the expected costs is achieved. Then, the algorithms focus on exploitation, and perform exploration with a small probability. For example, in UCB-type algorithms, the probability of exploration decays as $1/t^{2}$ with the round $t$ [@tran2012knapsack; @ding2013multi; @rangi2018multi]. In greedy algorithms, the probability of exploration is zero after a fixed round (or time instant) [@tran2010epsilon; @tran2014efficient]. On the contrary, in the adversarial regime, the algorithms are always exploring, looking for the actions with higher contributions [@auer2002nonstochastic]. For instance, in EXP3.BwK, the exploration constant $\gamma/K$ does not change with the round $t$, and it depends on the total number of rounds, i.e.
$\Theta(B)$, in the BwK setup. For every action $i\in [K]$, EXP3++.BwK maintains an Upper Confidence Bound (UCB) $\mbox{UCB}_{t}(i)$ and a Lower Confidence Bound (LCB) $\mbox{LCB}_{t}(i)$ on the efficiency $e(i)$, where $$\label{eq:ucb} \mbox{UCB}_{t}(i) = \min\Bigg\{\frac{1}{c_{min}},\Bar{e}_{t}(i)+\frac{(1+1/\lambda) \eta_{t}(i)}{\lambda- \eta_{t}(i)}\Bigg\},$$ $$\label{eq:lcb} \mbox{LCB}_{t}(i) = \max\Bigg\{0,\Bar{e}_{t}(i)-\frac{(1+1/\lambda) \eta_{t}(i)}{\lambda- \eta_{t}(i)}\Bigg\},$$ $$\eta_{t}(i) =\sqrt{\frac{\alpha \log(K^{1/\alpha}t)}{2N_{t}(i)}},$$ $\lambda\leq c_{min}$, and $N_{t}(i)$ is the number of times an action $i$ has been chosen until round $t$. The UCB and the LCB of an action $i$ are used to estimate the gap $\Delta(i)$. The estimate of this gap at round $t$ is defined as $$\label{eq:gapEstimate} \hat{\Delta}_{t}(i)=\max\{0,\max_{j\neq i}\mbox{LCB}_{t}(j)-\mbox{UCB}_{t}(i)\}.$$ It can be shown that for all $i\in [K]$, in the stochastic BwK setting, we have $$\frac{\Delta(i)}{2}\leq \hat{\Delta}_{t}(i)\leq \Delta(i)$$ with high probability as $t\to \infty$. Thus, $\hat{\Delta}_{t}(i)$ is a reliable estimate of $\Delta(i)$. For all $i\in [K]$, the gap estimate $\hat{\Delta}_{t}(i)$ is used to set the exploration parameter $\epsilon_{t}(i)$ in the sampling probability $\tilde{p}_{t}(i)$, where $\tilde{p}_{t}(i)$ is the probability of choosing action $i$ at round $t$. In the stochastic BwK setup, since $\Delta(i^*)=0$, the exploration parameter $\epsilon_{t}(i^*)$ of the optimal action $i^*$ tends to zero, which favors its selection. Unlike in EXP3.BwK, the exploration parameter $\epsilon_{t}(i)$ varies with $t$. Additionally, the sampling probability $\tilde{p}_{t}(i)$ depends on both estimates of the efficiency, $\hat{e}_{t}(i)$ and $\Bar{e}_{t}(i)$, where $\hat{e}_{t}(i)$ is crucial in the adversarial BwK setting (see EXP3.BwK) and $\Bar{e}_{t}(i)$ in the stochastic BwK setting. In the sampling probability $\tilde{p}_{t}(i)$, $\hat{e}_{t}(i)$ controls the exploitation performed by the algorithm through $p_{t}(i)$, and $\Bar{e}_{t}(i)$ controls the exploration performed by the algorithm through the exploration parameter $\epsilon_{t}(i)$. The following theorem provides the performance guarantees of EXP3++.BwK in the stochastic BwK setting.

\[thm:stochasticEXP3\] In the stochastic BwK setting, for $\alpha=3$ and $\beta=256/c_{min}^2$, the expected regret of EXP3++.BwK is at most $$R(F)= O\Bigg(\sum_{i:\Delta(i)>0}\frac{\log^2(B/c_{min})}{c_{min}^2\Delta(i)}\Bigg),$$ where $F$ denotes the algorithm EXP3++.BwK.

The expected regret of the algorithm can be bounded as $$R(F)\leq \sum_{i\in [K]/\{i^*\}}\Delta(i)\mathbf{E}[N_{T}(i)],$$ where $T \leq B/c_{min}$ is the number of rounds at the termination of the algorithm. We can then bound the expected number of times $\mathbf{E}[N_{T}(i)]$ an action $i\neq i^*$ is selected by the algorithm. Since the probability of selecting action $i$ at round $t$ is $\tilde{p}_{t}(i)$, we have $$\label{eq:numTime} \mathbf{E}[N_{T}(i)]\leq \mathbf{E}\Big[\sum_{t=1}^{T}\epsilon_{t}(i)+p_{t}(i)\Big].$$ We now bound the two terms on the right-hand side of (\[eq:numTime\]) in the stochastic BwK setting. First, we show that $\hat{\Delta}_{t}(i)$ is a reliable estimate of $\Delta(i)$, i.e.
$$\mathbf{P}(\hat{\Delta}_{t}(i)\geq \Delta(i))\leq \frac{1}{t^{\alpha-1}},$$ $$\label{eq:11} \begin{split} \mathbf{P}\Bigg(\hat{\Delta}_{t}(i)&\leq \frac{\Delta(i)}{2}\Bigg)\leq \Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}+2\Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}\\ &\qquad\qquad\qquad+\frac{2}{Kt^{\alpha -1}}. \end{split}$$ These results can be used to prove that $$\label{eq:12} \mathbf{P}\Bigg( \tilde{\Delta}_{t}(i)\leq \frac{t\Delta(i)}{2}\Bigg)\leq \Bigg(\frac{\log(t)}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2} +\frac{1}{t},$$ where $\tilde{\Delta}_{t}(i)=\sum_{n=1}^{t} (\hat{\ell}_{n}(i)-\hat{\ell}_{n}(i^*))$. Since $$p_{t}(i)\leq \exp(-\gamma_{t}\tilde{\Delta}_{t}(i)),$$ (\[eq:12\]) is used to bound $\sum_{t=1}^{T}\mathbf{E}[p_{t}(i)]$, and we have $$\sum_{t=1}^{T}\mathbf{E}[p_{t}(i)]=O\Bigg(\frac{\log^2(B/c_{min})}{c_{min}^2\Delta(i)^2}\Bigg).$$ Using the definition of $\epsilon_{t}(i)$ and (\[eq:11\]), we have $$\sum_{t=1}^{T}\mathbf{E}[\epsilon_{t}(i)]=O\Bigg(\frac{\log^2(B/c_{min})}{c_{min}^2\Delta(i)^2}\Bigg).$$ Hence, the statement of the theorem follows. The detailed version of the proof is in the supplementary material.

In Theorem \[thm:stochasticEXP3\], EXP3++.BwK incurs an expected regret of $O(\log^2(B/c_{min}))$, whereas the optimal regret guarantee in the stochastic BwK setting is $O(\log(B/c_{min}))$ [@tran2012knapsack; @ding2013multi; @rangi2018multi]. Thus, EXP3++.BwK has an additional factor of $\log(B/c_{min})$ in comparison to the results in the literature. This additional factor is also common in the MAB literature [@seldin2014one; @lykouris2018stochastic]. The following theorem provides the performance guarantees of EXP3++.BwK in the adversarial BwK setting.

\[thm:adversarialEXP3\] In the adversarial BwK setting, the expected regret of EXP3++.BwK is at most $$R(F)\leq \sqrt{\frac{6BK\log(K)}{c_{min}^3}}.$$

Similar to the proof of Theorem \[thm:EXP3.BwK\], we bound $$\begin{split} \mathbf{E}\Bigg[z(F)\Bigg(\sum_{t=1}^{T(i^*)} \frac{r_{t}(i^*)}{c_{t}(i^*)}- \sum_{t=1}^{\tau(F)} \frac{r_{t}(i_{t})}{c_{t}(i_{t})}\Bigg)\Bigg].\\ \end{split}$$ We show that $$\begin{split} &\mathbf{E}\Bigg[\Bigg(\sum_{t=1}^{T(i^*)} \frac{r_{t}(i^*)}{c_{t}(i^*)}- \sum_{t=1}^{\tau(F)} \frac{r_{t}(i_{t})}{c_{t}(i_{t})}\Bigg)\Bigg]\\ &\leq \sqrt{\frac{6BK\log(K)}{c_{min}^3}}, \end{split}$$ and that $z(F)\leq 1$. The detailed version of the proof is in the supplementary material.

Thus, like EXP3.BwK, EXP3++.BwK is order optimal in the adversarial BwK setting. The challenges in the proofs of Theorem \[thm:stochasticEXP3\] and Theorem \[thm:adversarialEXP3\] are addressed in a similar way as in Theorem \[thm:EXP3.BwK\]. In conclusion, by Theorem \[thm:stochasticEXP3\] and Theorem \[thm:adversarialEXP3\], EXP3++.BwK is order optimal in the adversarial BwK setting and has an additional factor of $\log(B/c_{min})$ in the stochastic BwK setting.

BwK with unbounded cost
=======================

Assuming the cost is bounded by unity (i.e., $c_{max} = 1$), Theorem \[thm:LowerBound\] provides the dependence of the expected regret on the minimum cost $c_{min}$ in the adversarial BwK setup. In this section, we discuss the scaling of the lower bound on the expected regret with respect to the maximum cost $c_{max}$ in the adversarial BwK setup.

\[thm:Cost\] Suppose that $c_{\max} = B^{\alpha}$. For any algorithm $\mathcal{A}$, there exists an adversary such that the expected regret of the algorithm is at least $\Omega(B^{\alpha})$.
Let the number of actions be $K=2$, with actions $i_{1}$ and $i_{2}$. The adversary chooses the optimal action $i^*$ uniformly at random from these two actions. Let $t^*= B - B^{\alpha}$. For all rounds $t\leq t^*$, the adversary assigns $r_{t}(i_{1})=r_{t}(i_{2})=0$ and $c_{t}(i_{1})=c_{t}(i_{2})=1$ to both actions. For rounds $t \geq t^*+1$, the adversary assigns $r_{t}(i^*)=1$ and $c_{t}(i^*)=1$ to the optimal action $i^*$. For the suboptimal action $i\neq i^*$, the adversary assigns $r_{t^*+1}(i)=0$ and $c_{t^*+1}(i)=B^{\alpha}$ (since $c_{\max} = B^{\alpha}$, this is a valid cost assignment), and $r_{t}(i)=c_{t}(i)=1$ for $t > t^*+1$. Let $S_1$ be the case when $i^*=i_{1}$, and $S_2$ the case when $i^* = i_2$. For the first $t^*$ rounds, any algorithm $\mathcal{A}$ behaves identically in the two cases $S_{1}$ and $S_{2}$. Now, at round $t^*+1$, assume that the algorithm $\mathcal{A}$ selects action $i_{1}$ with probability $p$ and action $i_{2}$ with probability $(1-p)$. Note that if the suboptimal action is chosen at round $t^*+1$, then the budget is depleted and the sum of the rewards is $0$. On the other hand, if $i^*$ is chosen at round $t^*+1$, the algorithm collects a sum of $B^{\alpha}$ rewards in the end. Thus, if $i_{t^*+1}\neq i^*$, the regret of the algorithm is $B^{\alpha}$. Averaging over the two equally likely cases $S_{1}$ and $S_{2}$, the expected regret of the algorithm is $0.5 (1-p)B^{\alpha} + 0.5pB^{\alpha} = B^{\alpha}/2$. The statement of the theorem follows.

In the literature on BwK, the cost is always considered to be bounded above by a constant independent of the budget $B$. Here, we consider a cost bounded by a function of the budget $B$. Theorem \[thm:Cost\] shows that the lower bound on the expected regret scales at least linearly with the maximum cost $c_{max}$ in the adversarial BwK setup. If $\alpha> 1/2$, then it is impossible to achieve a regret bound of $O(\sqrt{B})$, which is order optimal in cases with small $c_{max}$. In the adversarial BwK setup, the adversary can penalize the player in two ways. First, the adversary can control the reward of an action at any round. Second, the adversary can control the cost of an action, which is analogous to penalizing the player on the number of rounds $T$. For $\alpha>1/2$, the latter penalty on the number of rounds $T$ becomes significant, and the minimum achievable regret is no longer $\Omega(\sqrt{B})$. In this setting with $\alpha>1/2$, the design of algorithms which achieve a regret of $O(B^{\alpha})$ is left as future work.

Conclusion
==========

The study of BwK has mostly focused on the stochastic regime. In this work, we considered the adversarial regime and proposed the order optimal algorithm EXP3.BwK for this setting. We also used ideas from the adversarial BwK setup to design EXP3++.BwK. This algorithm has an expected regret of $O(\sqrt{KB\log (K)})$ and $O(\log^2(B))$ in the adversarial and stochastic settings respectively. Thus, the algorithm is order optimal in the adversarial regime, and has an additional factor of $\log(B)$ in the stochastic regime. It is the first algorithm that provides almost optimal performance guarantees in both the stochastic and the adversarial BwK settings. As part of future work, we are considering the design of an algorithm which achieves the optimal regret guarantees with high probability in both the adversarial and the stochastic BwK settings. All the results in the literature on BwK assume that the maximum cost is bounded by a constant independent of $B$.
We have shown that if the cost is $O(B^{\alpha})$, then the expected regret is at least $\Omega(B^{\alpha})$. Thus, the minimum expected regret scales at least linearly with the maximum cost of the BwK setup. This setting is of particular interest when $\alpha>1/2$, because the expected regret of $O(\sqrt{B})$, which is achievable when the cost is bounded by a constant, becomes unachievable. Hence, there is a need to study this BwK setting and design optimal algorithms whose expected regret is $O(B^{\alpha})$; this is left as future work.

Appendix
========

Proof of Theorem 1
------------------

Let $T=\max\{T(i^*),\tau(E)\}$, where $$i^*=\mbox{argmax}_{i\in[K]}\sum_{t=1}^{T(i)}\frac{r_{t}(i)}{c_{t}(i)}.$$ Additionally, $$\label{eq:loss1} \begin{split} \sum_{i\in[K]}p_{t}(i)\hat{e}_t(i)&=p_{t}(i_t)\frac{r_{t}(i_t)}{p_{t}(i_t)\cdot c_{t}(i_t)}\\ &=\frac{r_{t}(i_t)}{ c_{t}(i_t)}, \end{split}$$ and $$\label{eq:loss2} \begin{split} \sum_{i\in[K]}p_{t}(i)\hat{e}_t(i)^2&=p_{t}(i_t)\frac{r_{t}(i_t)}{p_{t}(i_t)\cdot c_{t}(i_t)}\hat{e}_t(i_t)\\ &\stackrel{(a)}{\leq} \frac{\hat{e}_t(i_t)}{c_{min}}\\ &\stackrel{}{=} \frac{\sum_{i\in[K]}\hat{e}_t(i)}{c_{min}}, \end{split}$$ where $(a)$ follows from the fact that for all $i\in[K]$, $r_{t}(i)/c_{t}(i)\leq 1/c_{min}$. Also, for all $i\in [K]$, we have $$\label{eq:unbiasedEstimate} \begin{split} \mathbf{E}\Big[\hat{e}_t(i)|\{p_{t}(j)\}_{j\in[K]}\Big]&=p_{t}(i)\cdot\frac{r_{t}(i)}{p_{t}(i)c_{t}(i)}+(1-p_{t}(i))\cdot 0\\ &=\frac{r_{t}(i)}{c_{t}(i)}. \end{split}$$ Since $W_{t}={\sum_{j\in [K]}w_{t}(j)}$, $$\label{eq:Wratio} \begin{split} \frac{W_{t+1}}{W_{t}} &=\sum_{i\in [K]}\frac{w_{t+1}(i)}{W_{t}}\\ &=\sum_{i\in [K]}\frac{w_{t}(i)\exp{(\gamma c_{min}\cdot\hat{e}_{t}(i)/K)}}{W_{t}}\\ &\stackrel{(a)}{=}\sum_{i\in [K]}\frac{p_{t}(i)-\gamma/K}{1-\gamma}\cdot\exp{(\gamma c_{min}\cdot\hat{e}_{t}(i)/K)}\\ &\stackrel{(b)}{\leq}\sum_{i\in [K]}\frac{p_{t}(i)-\gamma/K}{1-\gamma}\Bigg(1+\frac{\gamma c_{min}}{K}\hat{e}_{t}(i)+(e-2)\bigg(\frac{\gamma c_{min}}{K}\hat{e}_{t}(i)\bigg)^2\Bigg)\\ &\stackrel{(c)}{\leq}1+\frac{c_{min} \gamma/K}{(1-\gamma)}\sum_{i\in [K]}p_{t}(i)\hat{e}_{t}(i)+\frac{(e-2)c_{min}^2(\gamma/K)^2}{(1-\gamma)}\sum_{i\in [K]}p_{t}(i)\hat{e}_{t}(i)^2, \end{split}$$ where $(a)$ follows from the definition of $w_{t}(i)$, $(b)$ follows from the facts that for all $i\in[K]$, $p_{t}(i)>\gamma/K$ and for all $x\leq 1$, $e^{x}\leq 1+x+(e-2)x^2$, and $(c)$ follows from the fact that $\sum_{i\in[K]}p_{t}(i)=1$ and $\gamma/K>0$. Now, taking logarithms on both sides of (\[eq:Wratio\]), summing over $t=1,2,\ldots,T$, and using $\log(1+x)\leq x$ for all $x>-1$, we get $$\label{eq:ratio1} \begin{split} \log\frac{W_{T+1}}{W_{1}}&\leq \frac{c_{min}\gamma/K}{(1-\gamma)}\sum_{t=1}^{T}\sum_{i\in [K]}p_{t}(i)\hat{e}_{t}(i)\\ &\quad+\frac{(e-2)c_{min}^2(\gamma/K)^2}{(1-\gamma)}\sum_{t=1}^{T}\sum_{i\in [K]}p_{t}(i)\hat{e}_{t}(i)^2 . \end{split}$$ Additionally, for all $j\in[K]$, we have $$\label{eq:ratio2} \begin{split} \log\frac{W_{T+1}}{W_{1}}&\geq \log\frac{w_{T+1}(j)}{W_1}\\ &=\frac{c_{min}\gamma}{K}\sum_{t=1}^{T}\hat{e}_{t}(j)-\log(K). \end{split}$$
\end{split}$$ Combining (\[eq:ratio1\]) and (\[eq:ratio2\]), for all $j\in[K]$, we have $$\label{eq:7} \begin{split} &\frac{c_{min}\gamma}{K}\sum_{t=1}^{T}\hat{e}_{t}(j)-\log(K)\\&\leq \frac{c_{min}\gamma/K}{(1-\gamma)}\sum_{t=1}^{T}\frac{r_{t}(i_t)}{c_t(i_t)}+\\ &\frac{(e-2)c_{min}^2(\gamma/K)^2}{c_{min}(1-\gamma)}\sum_{t=1}^{T}\sum_{i\in[K]}\hat{e}_{t}(i), \end{split}$$ where the right-hand side of the above equation follows from (\[eq:loss1\]) and (\[eq:loss2\]). We will split the analysis into two cases: $T(i^*)\leq\tau(E)$ and $T(i^*)>\tau(E)$. For $T(i^*)\leq\tau(E)$, using (\[eq:7\]), we have $$\label{eq:adv1.1} \begin{split} &\frac{\gamma}{K}\sum_{t=1}^{T(i^*)}\hat{e}_{t}(i^*)-\frac{\log(K)}{c_{min}}\\ &\leq \frac{\gamma/K}{(1-\gamma)}\sum_{t=1}^{\tau(E)}\frac{r_{t}(i_t)}{c_t(i_t)}+ \frac{(e-2)(\gamma/K)^2}{(1-\gamma)}\sum_{t=1}^{\tau(E)}\sum_{i\in[K]}\hat{e}_{t}(i), \end{split}$$ where the inequality follows by setting $T=\tau(E)$, and using the facts that $T(i^*)\leq\tau(E)$ and that $\hat{e}_{t}(i^*)$ is non-negative. Now, for $T(i^*)>\tau(E)$, using (\[eq:7\]), we have $$\label{eq:adv1.2} \begin{split} &\frac{\gamma}{K}\sum_{t=1}^{T(i^*)}\hat{e}_{t}(i^*)-\frac{\log(K)}{c_{min}}\\ &\leq \frac{\gamma/K}{(1-\gamma)}\sum_{t=1}^{T(i^*)}\frac{r_{t}(i_t)}{c_t(i_t)}+ \frac{(e-2)(\gamma/K)^2}{(1-\gamma)}\sum_{t=1}^{T(i^*)}\sum_{i\in[K]}\hat{e}_{t}(i),\\ &\stackrel{(a)}{=} \frac{\gamma/K}{(1-\gamma)}\sum_{t=1}^{\tau(E)}\frac{r_{t}(i_t)}{c_t(i_t)}+ \frac{(e-2)(\gamma/K)^2}{(1-\gamma)}\sum_{t=1}^{T(i^*)}\sum_{i\in[K]}\hat{e}_{t}(i), \end{split}$$ where $(a)$ follows from the fact that for all $t>\tau(E)$, $r_{t}(i_{t})/c_{t}(i_{t})=0$. Therefore, (\[eq:adv1.2\]) can be further simplified as $$\label{eq:adv1.3} \begin{split} &\frac{\gamma}{K}\sum_{t=1}^{T(i^*)}\hat{e}_{t}(i^*)-\frac{\log(K)}{c_{min}}\\ &\leq \frac{\gamma/K}{(1-\gamma)}\sum_{t=1}^{\tau(E)}\frac{r_{t}(i_t)}{c_t(i_t)}+\\ &\frac{(e-2)(\gamma/K)^2}{(1-\gamma)}\Bigg(\sum_{t=1}^{\tau(E)}\sum_{i\in[K]}\hat{e}_{t}(i)+\sum_{t=\tau(E)+1}^{T(i^*)}\sum_{i\in[K]}\hat{e}_{t}(i)\Bigg). \end{split}$$ Combining (\[eq:adv1.1\]) and (\[eq:adv1.3\]), taking expectations on both sides, and using (\[eq:unbiasedEstimate\]), we have $$\label{eq:adv2} \begin{split} &\sum_{t=1}^{T{(i^*)}}\frac{r_{t}(i^*)}{c_{t}(i^*)}-\mathbf{E}\Bigg[\sum_{t=1}^{\tau(E)}\frac{r_{t}(i_t)}{c_t(i_t)}\Bigg]\\ &\leq \frac{K}{c_{min}\gamma}\log(K)+\gamma\sum_{t=1}^{T{(i^*)}}\frac{r_{t}(i^*)}{c_{t}(i^*)}\\ &+\frac{(e-2)\gamma}{K}\mathbf{E}\Bigg[\sum_{t=1}^{\tau(E)}\sum_{i\in [K]} \frac{r_{t}(i)}{c_{t}(i)}\Bigg] \\ &+ \frac{(e-2)\gamma}{K}\mathbf{P}(T(i^*)>\tau(E))\mathbf{E}\Bigg[ \sum_{t=\tau(E)+1}^{T(i^*)}\sum_{i\in[K]}\hat{e}_{t}(i)\Bigg]. \end{split}$$ Since $B(E)\geq B-K$, we have $|T(i^*)-\tau(E)|\leq K/c_{min}$. Using $G(\mathcal{A}^*)\leq B/c_{min}^2$ and $T(i^*)-\tau(E)\leq K/c_{min}$, we have $$\begin{split} \sum_{t=1}^{T{(i^*)}}\frac{r_{t}(i^*)}{c_{t}(i^*)}-\mathbf{E}\Bigg[\sum_{t=1}^{\tau(E)}\frac{r_{t}(i_t)}{c_t(i_t)}\Bigg]&\leq \frac{K}{c_{min}\gamma}\log(K)+\gamma\cdot \frac{B}{c_{min}^2}\\ &+(e-2)\gamma\cdot \Bigg(\frac{B}{c_{min}^2}+\frac{K}{c_{min}^2}\Bigg). \end{split}$$ Using $\gamma=\sqrt{c_{min} K\log(K)/(B(e-1)+K(e-2))}$, the right-hand side of the above equation is bounded by $$\label{eq:exp3.bwk} 2\sqrt{\frac{((e-1)B+(e-2)K)K\log(K)}{c_{min}^3}}.$$ Since $c_{t}(i^*)\leq 1$ for all $t$, we have $T(i^*)\geq B$. Also, $B(E)\geq B-K$ and $\tau(E)\leq B/c_{min}$.
Thus, $$\label{eq:proof2} z(E)\leq 1.$$ Combining (\[eq:exp3.bwk\]) and (\[eq:proof2\]), the statement of the theorem follows.

Proof of Theorem 3
------------------

Let $T=\max \{T(i^*),\tau(E)\}$. The proof of the theorem is split into the following results. In Lemma \[lemma:UCB\], we show that for all $i\in [K]$, the efficiency $e(i)$ satisfies $$\mbox{LCB}_{t}(i)\leq e(i)\leq \mbox{UCB}_{t}(i),$$ with high probability as $t\to \infty$, i.e. $$\mathbf{P}(\mbox{UCB}_{t}(i)\leq e(i))\leq \frac{1}{Kt^{\alpha-1}},$$ $$\mathbf{P}(\mbox{LCB}_{t}(i)\geq e(i))\leq \frac{1}{Kt^{\alpha-1}}.$$ This is used to show that $\hat{\Delta}_{t}(i)\leq \Delta(i)$ with high probability as $t\to \infty$ (see Lemma \[lemma:gapEstimate\]), i.e. $$\label{eq:17} \mathbf{P}(\hat{\Delta}_{t}(i)\geq \Delta(i))\leq \frac{1}{t^{\alpha-1}}.$$ Using Lemma \[lemma:exploration\] and Lemma \[lemma:NumberOfRounds\], we show that (see Lemma \[lemma:gapLowerBound\]) $$\begin{split}\label{eq:18} \mathbf{P}\Bigg(\hat{\Delta}_{t}(i)&\leq \frac{\Delta(i)}{2}\Bigg)\leq \Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}+2\Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}\\ &\qquad\qquad\qquad+\frac{2}{Kt^{\alpha -1}}. \end{split}$$ Thus, using (\[eq:17\]) and (\[eq:18\]), we have $$\frac{\Delta(i)}{2}\leq \hat{\Delta}_{t}(i)\leq \Delta(i),$$ with high probability as $t\to\infty$. Using Lemma \[lemma:martingale\] and Lemma \[lemma:gapAdv\], we have $$\label{eq:12} \mathbf{P}\Bigg( \tilde{\Delta}_{t}(i)\leq \frac{t\Delta(i)}{2}\Bigg)\leq \Bigg(\frac{\log(t)}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2} +\frac{1}{t},$$ where $\tilde{\Delta}_{t}(i)=\sum_{n=1}^{t} (\hat{\ell}_{n}(i)-\hat{\ell}_{n}(i^*))$. Since $$p_{t}(i)\leq \exp(-\gamma_{t}\tilde{\Delta}_{t}(i)),$$ (\[eq:12\]) is used to bound $\sum_{t=1}^{T}\mathbf{E}[p_{t}(i)]$; thus we have $$\sum_{t=1}^{T}\mathbf{E}[p_{t}(i)]=O\Bigg(\frac{\log^2(B/c_{min})}{c_{min}^2\Delta(i)^2}\Bigg).$$ Using the definition of $\epsilon_{t}(i)$ and (\[eq:12\]), we have $$\sum_{t=1}^{T}\mathbf{E}[\epsilon_{t}(i)]=O\Bigg(\frac{\log^2(B/c_{min})}{c_{min}^2\Delta(i)^2}\Bigg).$$ Hence, the statement of the theorem follows.

\[lemma:UCB\] For all $i\in[K]$ and $t\geq K$, $$\mathbf{P}(\mbox{UCB}_{t}(i)\leq e(i))\leq \frac{1}{Kt^{\alpha-1}},$$ $$\mathbf{P}(\mbox{LCB}_{t}(i)\geq e(i))\leq \frac{1}{Kt^{\alpha-1}}.$$ If $\mbox{UCB}_{t}(i)\leq e(i)$, then $$\Bar{e}_{t}(i)+\frac{(1+1/\lambda) \eta_{t}(i)}{\lambda- \eta_{t}(i)}\leq e(i)=\frac{\mu(i)}{\rho(i)}.$$ Therefore, at least one of the events $U_{1}$ and $U_{2}$ is true, where $$U_{1}:\Bar{r}_{t}(i)\leq \mu(i)-\eta_{t}(i),$$ $$U_{2}:\Bar{c}_{t}(i)\geq \rho(i)+\eta_{t}(i).$$ This can be proved by contradiction. Suppose both $U_{1}$ and $U_{2}$ are false. Then, we have $$\begin{split} \frac{\mu(i)}{\rho(i)}-\frac{\Bar{r}_{t}(i)}{\Bar{c}_{t}(i)}&= \frac{\mu(i)\Bar{c}_{t}(i)-\rho(i)\Bar{r}_{t}(i)}{\rho(i)\Bar{c}_{t}(i)},\\ &=\frac{\mu(i)(\Bar{c}_{t}(i)-\rho(i))+\rho(i)(\mu(i)-\Bar{r}_{t}(i))}{\rho(i)\Bar{c}_{t}(i)},\\ &\stackrel{(a)}{\leq} \frac{\mu(i)\eta_{t}(i)+\rho(i)\eta_{t}(i)}{\rho(i)\Bar{c}_{t}(i)},\\ &\stackrel{(b)}{\leq}\frac{\eta_{t}(i)}{\lambda(\lambda-\eta_{t}(i))}+\frac{\eta_{t}(i)}{\lambda-\eta_{t}(i)},\\ &= \frac{(1+1/\lambda) \eta_{t}(i)}{\lambda- \eta_{t}(i)}, \end{split}$$ where $(a)$ follows from the fact that both $U_{1}$ and $U_{2}$ are false, and $(b)$ follows from the facts that $\mu(i)\leq 1$, $\rho(i)\geq c_{min}\geq\lambda$, and $\Bar{c}_{t}(i)\geq c_{min}\geq \lambda-\eta_{t}(i)$, since every realized cost is at least $c_{min}$. Hence, at least one of the events $U_{1}$ and $U_{2}$ is true.
Now, using Hoeffding’s inequality, we have $$\label{eq:U1} \mathbf{P}(U_{1})\leq \frac{1}{Kt^{\alpha}},$$ and $$\label{eq:U2} \mathbf{P}(U_{2})\leq \frac{1}{Kt^{\alpha}}.$$ Thus, $$\begin{split} \mathbf{P}(\mbox{UCB}_{t}(i)\leq e(i))&\leq \mathbf{P}(U_{1})+ \mathbf{P}(U_{2})\\ &\leq \frac{1}{Kt^{\alpha-1}}. \end{split}$$ Similarly, if $\mbox{LCB}_{t}(i)\geq e(i)$, then $$\Bar{e}_{t}(i)-\frac{(1+1/\lambda) \eta_{t}(i)}{\lambda- \eta_{t}(i)}\geq e(i)=\frac{\mu(i)}{\rho(i)}.$$ Therefore, at least one of the events $L_{1}$ and $L_{2}$ is true, where $$L_{1}:\Bar{r}_{t}(i)\geq \mu(i)+\eta_{t}(i),$$ $$L_{2}:\Bar{c}_{t}(i)\leq \rho(i)-\eta_{t}(i).$$ This can be proved by contradiction, as above. Now, using Hoeffding’s inequality, we have $$\label{eq:L1} \mathbf{P}(L_{1})\leq \frac{1}{Kt^{\alpha}},$$ and $$\label{eq:L2} \mathbf{P}(L_{2})\leq \frac{1}{Kt^{\alpha}}.$$ Thus, $$\begin{split} \mathbf{P}(\mbox{LCB}_{t}(i)\geq e(i))&\leq \mathbf{P}(L_{1})+ \mathbf{P}(L_{2})\\ &\leq \frac{1}{Kt^{\alpha-1}}. \end{split}$$ Hence proved.

\[lemma:gapEstimate\] For all $i\in[K]$ and $t\geq K$, $$\mathbf{P}(\hat{\Delta}_{t}(i)\geq \Delta(i))\leq \frac{1}{t^{\alpha-1}}.$$ Since $\Delta(i)=\max_{j\in[K]}e(j)-e(i)$, we have $$\begin{split} \mathbf{P}(\hat{\Delta}_{t}(i)\geq \Delta(i))&= \mathbf{P}(\max_{j\neq i}\mbox{LCB}_{t}(j)-\mbox{UCB}_{t}(i)\geq \Delta(i))\\ &\leq \sum_{j\neq i}\mathbf{P}(\mbox{LCB}_{t}(j)\geq e(j))\\ &\qquad\qquad+\mathbf{P}(\mbox{UCB}_{t}(i)\leq e(i))\\ &\leq \frac{1}{t^{\alpha-1}}, \end{split}$$ where the last inequality follows from Lemma \[lemma:UCB\]. Hence proved.

\[lemma:exploration\] For all $i\in[K]$, let $$t_{min}(i)=\min\{t:t\geq 4K\beta(\log t)^2/(\Delta(i)^4\log(K))\}.$$ We define two events $A(i,t)$ and $A(i^*,i,t)$ as $$A(i,t)=\Bigg\{\mbox{there exists an } n\leq t:\epsilon_{n}(i)<\frac{\beta \log t}{t\Delta(i)^2}\Bigg\},$$ $$A(i^*,i,t)=\Bigg\{\mbox{there exists an } n\leq t:\epsilon_{n}(i^*)<\frac{\beta \log t}{t\Delta(i)^2}\Bigg\}.$$ For $t>t_{min}(i)$ and $\alpha\geq 3$, we have $$\mathbf{P}(A(i,t))\leq \frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2},$$ $$\mathbf{P}(A(i^*, i,t))\leq \frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}.$$ We start with proving the bound on the probability of the event $A(i,t)$. This proof is divided into two parts. First, for $n\leq tc_{min}^2\Delta(i)^2/\log(t)$, using Lemma \[lemma:gapEstimate\], we show that $A(i,t)$ does not occur with high probability as $t\to \infty$. Later, for $n\geq tc_{min}^2\Delta(i)^2/\log(t)$, we bound the probability of the event $A(i,t)$ using Lemma \[lemma:gapEstimate\]. For $n\leq tc_{min}^2\Delta(i)^2/\log(t)$, we have $$\label{eq:bound1} \begin{split} \frac{\beta \log(n)}{n\hat{\Delta}^2_{n}(i)}&\stackrel{(a)}{\geq} \frac{\beta c_{min}^2 \log(n)}{n},\\ &\stackrel{(b)}{\geq} \frac{\beta \log(n)\log(t)}{t{\Delta}(i)^2},\\ &\geq \frac{\beta\log(t)}{t{\Delta}(i)^2}, \end{split}$$ where $(a)$ follows from the definition of $\hat{\Delta}_{n}(i)$, and $(b)$ follows from the range of $n$. For $t\geq t_{min}(i)$, $$\label{eq:bound2} 0.5\sqrt{\frac{\log(K)}{tK}}\geq \frac{\beta \log(t)}{t\Delta(i)^2}.$$ Additionally, using Lemma \[lemma:gapEstimate\], $\hat{\Delta}_{n}(i)\leq \Delta(i)$ w.h.p. as $n\to \infty$. Therefore, combining (\[eq:bound2\]), $\hat{\Delta}_{n}(i)\leq \Delta(i)$ and (\[eq:bound1\]), we have $$\label{eq:bound3} \begin{split} \epsilon_{n}(i)&\geq\frac{\beta \log t}{t\Delta(i)^2}.
\end{split}$$ Now, for $n\geq tc_{min}^2\Delta(i)^2/\log(t)$, we have $$\begin{split} &\mathbf{P}\Bigg(\mbox{There exists } n\in \Bigg[\frac{tc_{min}^2\Delta(i)^2}{\log(t)},t\Bigg]: \epsilon_{n}(i)<\frac{\beta \log t}{t\Delta(i)^2} \Bigg)\\ &=\mathbf{P}\Bigg(\mbox{There exists } n\in \Bigg[\frac{tc_{min}^2\Delta(i)^2}{\log(t)},t\Bigg]: \hat{\Delta}_{n}(i)\geq \Delta(i) \Bigg)\\ &\leq \sum_{n=\frac{tc_{min}^2\Delta(i)^2}{\log(t)} }^t \frac{1}{n^{\alpha-1}}\leq \frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}. \end{split}$$ Similarly, we can bound $\mathbf{P}(A(i^*, i,t))$ by using the fact that $\Delta(i^*)=0<\Delta(i)$ for $i\neq i^*$. Hence proved.

\[lemma:NumberOfRounds\] For all $i\in[K]$ and $t\geq t_{min}(i)$, we have $$\mathbf{P}\Bigg(N_{t}(i)\leq \frac{\beta \log t}{2\Delta(i)^2}\Bigg)\leq \Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}+\frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}.$$ Additionally, $$\label{eq:second} \mathbf{P}\Bigg(N_{t}(i^*)\leq \frac{\beta \log t}{2\Delta(i)^2}\Bigg)\leq \Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}+\frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}.$$ We have $$\begin{split} &\mathbf{P}\Bigg(N_{t}(i)\leq \frac{\beta \log t}{2\Delta(i)^2}\Bigg)\\ &\leq \mathbf{P}\Bigg(A^C(i,t) \mbox{ and } N_{t}(i)\leq \frac{\beta \log t}{2\Delta(i)^2}\Bigg)+\mathbf{P}\Big(A(i,t)\Big),\\ &\stackrel{(a)}{\leq}\exp\Bigg(\frac{-\beta \log t}{8\Delta(i)^2}\Bigg)+\frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2},\\ &\stackrel{(b)}{\leq} \Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}+\frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}, \end{split}$$ where $A^C(i,t)$ is the complement of the event $A(i,t)$, $(a)$ follows from Theorem 8 in [@seldin2017improved] and Lemma \[lemma:exploration\], and $(b)$ follows from the fact that for all $i\in[K]$, $\Delta(i)\leq 1/c_{min}$. Similarly, we can bound the probability in (\[eq:second\]).

\[lemma:gapLowerBound\] For all $i\in[K]$, $t\geq t_{min}(i)$, $\alpha\geq 3$, and $\beta \geq 64(\alpha +1)/c_{min}^2\geq 256/c_{min}^2$, we have $$\begin{split} \mathbf{P}\Bigg(\hat{\Delta}_{t}(i)&\leq \frac{\Delta(i)}{2}\Bigg)\leq \Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}+2\Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}\\ &\qquad\qquad\qquad+\frac{2}{Kt^{\alpha -1}}. \end{split}$$ Using Lemma \[lemma:UCB\], we have $$\label{eq:prob1} \begin{split} &\mathbf{P}\big((\mbox{UCB}_{t}(i^*)\leq e(i^*))\mbox{ or }(\mbox{LCB}_{t}(i)\geq e(i))\big)\\ &\leq {2}/{Kt^{\alpha-1}}. \end{split}$$ Now, assuming $\mbox{UCB}_{t}(i^*)\geq e(i^*)$ and $\mbox{LCB}_{t}(i)\leq e(i)$, we have $$\label{eq:lower1} \begin{split} \hat{\Delta}_{t}(i) &\geq \max_{j\neq i}\mbox{LCB}_{t}(j)-\mbox{UCB}_{t}(i),\\ &\geq \mbox{LCB}_{t}(i^*)-\mbox{UCB}_{t}(i),\\ &=\Bar{e}_{t}(i^*)-\eta_{t}(i^*)-\Bar{e}_{t}(i)-\eta_{t}(i)\\ &\geq e(i^*)-2\eta_{t}(i^*)-e(i)-2\eta_{t}(i)\\ &=\Delta(i)-2\eta_{t}(i^*)-2\eta_{t}(i). \end{split}$$ Similarly, using Lemma \[lemma:NumberOfRounds\], we have $$\label{eq:prob2} \begin{split} &\mathbf{P}\Bigg(N_{t}(i)\leq \frac{\beta \log t}{2\Delta(i)^2}\mbox{ or } N_{t}(i^*)\leq \frac{\beta \log t}{2\Delta(i)^2}\Bigg)\\ &\leq 2\Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}+ \Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}.
\end{split}$$ Now, assuming $N_{t}(i)> {\beta \log t}/{2\Delta(i)^2}$ and $N_{t}(i^*)>{\beta \log t}/{2\Delta(i)^2}$, we have $$\label{eq:lower2} \begin{split} \hat{\Delta}_{t}(i) &\geq\Delta(i)-2\eta_{t}(i^*)-2\eta_{t}(i),\\ &\geq \Delta(i) -4\sqrt{\frac{2\Delta(i)^2\alpha\log(tK^{1/\alpha})}{2\beta\log(t)}},\\ &\geq \Delta(i)\Bigg(1-4\sqrt{\frac{\alpha+1}{c_{min}^2\beta}}\Bigg),\\ &\geq \Delta(i)/2. \end{split}$$ Therefore, combining (\[eq:prob1\]), (\[eq:lower1\]), (\[eq:prob2\]) and (\[eq:lower2\]), the statement of the lemma follows. Hence proved.

\[lemma:martingale\] For all $i\in[K]$, let $X_{t}(i)=\Delta(i)-(\hat{\ell}_{t}(i)-\hat{\ell}_{t}(i^*))$ be the martingale difference sequence with respect to the filtration $\mathcal{F}_{1},\ldots,\mathcal{F}_{t}$, where $\mathcal{F}_{t}$ is the sigma field based on all the past actions, their rewards and their costs until round $t$. Then, for $t\geq t_{min}(i)$, we have $$\mathbf{P}\Bigg(\max_{1\leq n\leq t}X_{n}(i)\geq \frac{1.25t\Delta(i)^2}{c_{min}\beta\log(t)}\Bigg)\leq \frac{1}{2}\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2},$$ $$\mathbf{P}\Bigg(\nu_{t}(i)\geq \frac{2t^2\Delta(i)^2}{c_{min}^3\beta\log(t)}\Bigg)\leq \Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2},$$ where $\nu_{t}(i)=\sum_{n=1}^{t}\mathbf{E}[X_{n}(i)^2|\mathcal{F}_{n-1}]$. We first bound the magnitude of $X_{n}(i)$. For all $i\in[K]$, we have $$\label{eq:limit1} \begin{split} X_{n}(i)&=\Delta(i)-(\hat{\ell}_{n}(i)-\hat{\ell}_{n}(i^*)),\\ &\leq \frac{1}{c_{min}}+ \hat{\ell}_{n}(i^*),\\ &\leq \frac{1}{c_{min}} +\frac{1}{c_{min}\epsilon_{n}(i^*)},\\ &\leq \frac{1}{c_{min}}\Bigg(1 +\max\Bigg\{2K,2\sqrt{\frac{nK}{\log(K)}},{\frac{n\hat{\Delta}_{n}(i^*)^2}{\beta \log(n)}}\Bigg\}\Bigg),\\ &\leq\frac{1.25}{c_{min}}\max\Bigg\{2K,2\sqrt{\frac{nK}{\log(K)}},{\frac{n\hat{\Delta}_{n}(i^*)^2}{\beta \log(n)}}\Bigg\}. \end{split}$$ Similar to the proof of Lemma \[lemma:exploration\], for $t\geq t_{min}(i)$ and $n\leq tc_{min}^2\Delta(i)^2/\log(t)$, we have $\epsilon_{n}(i^*)\geq \beta\log(t)/(t\Delta(i)^2)$ and (see (\[eq:bound1\])) $$\begin{split} \frac{\beta \log(n)}{n\hat{\Delta}^2_{n}(i)} &\geq \frac{\beta\log(t)}{t{\Delta}(i)^2}. \end{split}$$ Additionally, for $t\geq t_{min}(i)$, $$0.5\sqrt{\frac{\log(K)}{tK}}\geq \frac{\beta \log(t)}{t\Delta(i)^2},$$ and using Lemma \[lemma:gapEstimate\], $\hat{\Delta}_{n}(i)\leq \Delta(i)$ w.h.p. as $n\to \infty$. Therefore, using the fact that $\Delta(i^*)=0\leq \Delta(i)$ for all $i\in[K]$, for $t_{1}\leq tc_{min}^2\Delta(i)^2/\log(t)$ and $t\geq t_{min}(i)$, $$\label{eq:inf1} \max_{1\leq n\leq t_{1}}X_{n}(i)\leq \frac{1.25t\Delta(i)^2}{c_{min}\beta\log(t)},$$ w.h.p. as $t_{1}\to\infty$. Now, $$\label{eq:maxMartingale} \begin{split} &\mathbf{P}\Bigg(\max_{1\leq n\leq t}X_{n}(i)\geq \frac{1.25t\Delta(i)^2}{c_{min}\beta\log(t)}\Bigg)\\ &\stackrel{(a)}{=}\mathbf{P}\Bigg(\exists n\in\Bigg[\frac{tc_{min}^2\Delta(i)^2}{\log(t)},t\Bigg]: X_{n}(i)\geq \frac{1.25t\Delta(i)^2}{c_{min}\beta\log(t)}\Bigg),\\ &\stackrel{(b)}{\leq}\mathbf{P}\Bigg(\exists n\in\Bigg[\frac{tc_{min}^2\Delta(i)^2}{\log(t)},t\Bigg]: \hat{\Delta}_{n}(i)\geq \Delta(i)\Bigg),\\ &\stackrel{(c)}{\leq}\frac{1}{2}\Bigg(\frac{\log(t)}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2}, \end{split}$$ where $(a)$ follows from (\[eq:inf1\]), $(b)$ follows from (\[eq:limit1\]), and $(c)$ follows from Lemma \[lemma:gapEstimate\]. Now, we bound $\nu_{t}(i)=\sum_{n=1}^{t}\mathbf{E}[X_{n}(i)^2|\mathcal{F}_{n-1}]$.
For all $i\in[K]$, we have $$\begin{split} &\mathbf{E}[X_{n}(i)^2|\mathcal{F}_{n-1}]\\ &\leq \mathbf{E}[(\hat{\ell}_{n}(i^*)-\hat{\ell}_{n}(i))^2|\mathcal{F}_{n-1}],\\ &\stackrel{(a)}{=}\mathbf{E}[\hat{\ell}_{n}(i^*)^2|\mathcal{F}_{n-1}]+\mathbf{E}[\hat{\ell}_{n}(i)^2|\mathcal{F}_{n-1}],\\ &=\Tilde{p}_{n}(i)\Bigg(\frac{\ell_{n}(i)}{\Tilde{p}_{n}(i)}\Bigg)^2+\Tilde{p}_{n}(i^*)\Bigg(\frac{\ell_{n}(i^*)}{\Tilde{p}_{n}(i^*)}\Bigg)^2,\\ &\leq \frac{1}{c_{min}^2\Tilde{p}_{n}(i)}+\frac{1}{c_{min}^2\Tilde{p}_{n}(i^*)},\\ &\stackrel{(b)}{\leq}\frac{1}{c_{min}^3}\Bigg(\max\Bigg\{2K,2\sqrt{\frac{nK}{\log(K)}},{\frac{n\hat{\Delta}_{n}(i)^2}{\beta \log(n)}}\Bigg\}\\ &\qquad\qquad+\max\Bigg\{2K,2\sqrt{\frac{nK}{\log(K)}},{\frac{n\hat{\Delta}_{n}(i^*)^2}{\beta \log(n)}}\Bigg\}\Bigg), \end{split}$$ where $(a)$ follows from the fact that for all $i\in[K]$ and $n\leq t$, $\hat{\ell}_{n}(i^*)\cdot\hat{\ell}_{n}(i)=0$, and $(b)$ follows from (\[eq:limit1\]). Similar to (\[eq:maxMartingale\]), we bound $\nu_{t}(i)$ as follows $$\begin{split} &\mathbf{P}\Bigg(\nu_{t}(i)\geq \frac{2t^2\Delta(i)^2}{c_{min}^3\beta\log(t)}\Bigg)\\ &\stackrel{(a)}{\leq} \mathbf{P}\Bigg(\exists n\in\Bigg[\frac{tc_{min}^2\Delta(i)^2}{\log(t)},t\Bigg]: \hat{\Delta}_{n}(i)\geq \Delta(i)\Bigg)\\ &\qquad + \mathbf{P}\Bigg(\exists n\in\Bigg[\frac{tc_{min}^2\Delta(i)^2}{\log(t)},t\Bigg]: \hat{\Delta}_{n}(i^*)\geq 0\Bigg),\\ &\stackrel{(b)}{\leq} \Bigg(\frac{\log(t)}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2}, \end{split}$$ where $(a)$ can be implied in a similar way as $(b)$ of (\[eq:maxMartingale\]), and $(b)$ follows from Lemma \[lemma:gapEstimate\].

\[lemma:gapAdv\] For all $t\geq t_{min}(i)$ and $\beta \geq 256/c_{min}^2$, we have $$\mathbf{P}\Bigg( \tilde{\Delta}_{t}(i)\leq \frac{t\Delta(i)}{2}\Bigg)\leq \Bigg(\frac{\log(t)}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2} +\frac{1}{t},$$ where $\tilde{\Delta}_{t}(i)=\sum_{n=1}^{t} (\hat{\ell}_{n}(i)-\hat{\ell}_{n}(i^*))$. We have $$\begin{split} & \mathbf{P}\Bigg(\tilde{\Delta}_{t}(i)\leq \frac{t\Delta(i)}{2}\Bigg)\\ &=\mathbf{P}\Bigg(t\Delta(i)-\tilde{\Delta}_{t}(i)\geq \frac{t\Delta(i)}{2}\Bigg),\\ &\leq \mathbf{P}(M_{1}(t))+\mathbf{P}(M_{2}(t))+\mathbf{P}(M_{3}(t)), \end{split}$$ where $$M_{1}(t)=\Bigg\{\max_{1\leq n\leq t}X_{n}(i)\geq \frac{1.25t\Delta(i)^2}{c_{min}\beta\log(t)}\Bigg\},$$ $$M_{2}(t)=\Bigg\{\nu_{t}(i)\geq \frac{2t^2\Delta(i)^2}{c_{min}^2\beta\log(t)}\Bigg\},$$ and $$M_{3}(t)=\Bigg\{t\Delta(i)-\tilde{\Delta}_{t}(i)\geq \frac{t\Delta(i)}{2} \mbox{ and } M^{C}_{1}(t) \mbox{ and } M^{C}_{2}(t)\Bigg\},$$ where $M^{C}_{1}(t)$ and $M^{C}_{2}(t)$ are the complements of $M_{1}(t)$ and $M_{2}(t)$. The probabilities of the events $M_{1}(t)$ and $M_{2}(t)$ can be bounded using Lemma \[lemma:martingale\], and using the fact that $c_{min}\leq 1$. Let $w_1={2t^2\Delta(i)^2}/{c_{min}^2\beta\log(t)}$, $w_{2}={1.25t\Delta(i)^2}/{c_{min}\beta\log(t)}$, and $w_3 =1/t$. For all $t\geq t_{min}(i)$ and $\beta \geq 256/c_{min}^2$, we have $$\label{eq:bernstien} \begin{split} &\sqrt{2w_{1}\log\frac{1}{w_3}}+\frac{w_{2}}{3}\log\frac{1}{w_3}\\ &=\sqrt{\frac{4t^2\Delta(i)^2\log t}{c_{min}^2\beta \log t}}+ \frac{1.25t\Delta(i)^2\log t}{3c_{min}\beta \log t},\\ &\leq t\Delta(i)\Bigg(\frac{2}{\sqrt{c_{min}^2\beta}}+\frac{1.25}{3c_{min}^2\beta}\Bigg),\\ &\leq \frac{1}{2}t\Delta(i).
\end{split}$$ Thus, using Bernstein’s inequality for martingales and (\[eq:bernstien\]), we can bound the probability of $M_{3}(t)$ as follows $$\mathbf{P}(M_{3}(t))\leq \frac{1}{t}.$$ Thus, combining the bounds over the probabilities of the events $M_{1}(t)$, $M_{2}(t)$ and $M_{3}(t)$, the statement of the lemma follows.

\[lemma:epsilonBound\] For all $i\in[K]$, $\tau(E)\geq t_{min}(i)$, $T=\max\{\tau(E),T(i^*)\}$, $\alpha= 3$ and $\beta= 256/c_{min}^2$, we have $$\begin{split} \sum_{t=1}^{T}\mathbf{E}[\epsilon_{t}(i)] &\leq t_{min}(i)+ \frac{4\beta(\log^2(T)+\log(T))}{\Delta(i)^2}+\\ & \frac{\log^2(T)+\log(T)}{c_{min}^2\Delta(i)^2}+\frac{2}{K}(\log(T)+1)\\ &+\frac{2\pi ^2}{3}. \end{split}$$ *Proof:* We have $$\begin{split} &\sum_{t=1}^{T}\mathbf{E}[\epsilon_{t}(i)]\\ &=\sum_{t=1}^{T}\mathbf{E}\Bigg[\min\Bigg\{\frac{1}{2K},\frac{1}{2}\sqrt{\frac{\log(t)}{tK}},\frac{\beta\log t}{t\hat{\Delta}_{t}(i)^2}\Bigg\}\Bigg],\\ &\leq \sum_{t=1}^{T}\mathbf{E}\Bigg[\frac{\beta\log t}{t\hat{\Delta}_{t}(i)^2}\Bigg],\\ &\stackrel{(a)}{\leq} t_{min}(i)+ \frac{4\beta(\log^2(T)+\log(T))}{\Delta(i)^2}+ \\ &\sum_{t=t_{min}(i)}^{T}\Bigg(\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2} +\frac{2}{Kt^{\alpha -1}}+2\Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}\Bigg),\\ &\leq t_{min}(i)+ \frac{4\beta(\log^2(T)+\log(T))}{\Delta(i)^2}+\\ & \frac{\log^2(T)+\log(T)}{c_{min}^2\Delta(i)^2}+\frac{2}{K}(\log(T)+1)+\frac{2\pi ^2}{3}, \end{split}$$ where $(a)$ follows from Lemma \[lemma:gapLowerBound\].

\[lemma:pBound\] For all $i\in[K]$, $\tau(E)\geq t_{min}(i)$, $\gamma \geq c^2_{min}\sqrt{K\log(K)/B(1+(e-2)/c_{min}^2)}$ and $\alpha\geq 3$, there exists a constant $m_{2}$ such that $$\begin{split} \sum_{t=1}^{T}\mathbf{E}[p_{t}(i)]&\leq t_{min}(i)+m_{2}\frac{\log^2(T)}{c_{min}^2\Delta(i)^2}. \end{split}$$ We have $$\begin{split} &\sum_{t=1}^{T}\mathbf{E}[p_{t}(i)]\\ &\leq \sum_{t=1}^{T}\mathbf{E}\Big[\exp(-\gamma_{t}\tilde{\Delta}_{t}(i))\Big],\\ &\stackrel{(a)}{\leq} t_{min}(i)+\sum_{t=t_{min}(i)}^{T}\Bigg[e^{-\sqrt{\frac{\log(K)}{tK}}\frac{ t{\Delta}(i)}{4K}}+\frac{1}{t}\\ &\qquad+\Bigg(\frac{\log(t)}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha-2}+\Bigg(\frac{\log t}{tc_{min}^2\Delta(i)^2}\Bigg)^{\alpha - 2}\\ &\qquad+\frac{2}{Kt^{\alpha -1}}+2\Bigg(\frac{1}{t}\Bigg)^{\frac{\beta c_{min}^2}{8}}\Bigg],\\ &\stackrel{(b)}{\leq}t_{min}(i)+O\Bigg(\frac{\log^2(T)}{c_{min}^2\Delta(i)^2}\Bigg), \end{split}$$ where $(a)$ follows from Lemma \[lemma:gapLowerBound\], and $(b)$ follows from bounding the summations of the sequences via integration.

Proof of Theorem 4
------------------

For all $i\in[K]$, $$p_{t}(i)=\frac{\exp(-\gamma_{t}\sum_{n=1}^{t-1}\hat{\ell}_{n}(i))}{\sum_{i\in[K]}\exp(-\gamma_{t}\sum_{n=1}^{t-1}\hat{\ell}_{n}(i))},$$ and $\gamma_{t}=0.5\sqrt{c_{min}^2\log(K)/Kt}$. Therefore, using Lemma 7 of [@seldin2014one], we have $$\begin{split} &\sum_{t=1}^{T}\sum_{i\in [K]} p_{t}(i) \hat{\ell}_{t}(i) -\min_{j\in[K]}\sum_{t=1}^{T}\hat{\ell}_{t}(j)\\ &\leq \frac{1}{2}\sum_{t=1}^{T}\gamma_{t}\sum_{i\in [K]} p_{t}(i) (\hat{\ell}_{t}(i))^2 +\frac{\log(K)}{\gamma_{T}}, \end{split}$$ where $T=\max \{T(i^*),\tau(E)\}$.
We have $$\label{eq:adv3} \begin{split} &\mathbf{E}\Bigg[\sum_{t=1}^{T}\mathbf{E}\bigg[\sum_{i\in [K]} p_{t}(i) \hat{\ell}_{t}(i)\Big|\mathcal{F}_{t-1}\bigg]\Bigg]- \mathbf{E}\Bigg[\sum_{t=1}^{T}\hat{\ell}_{t}(i^*)\Bigg]\\ &\leq \frac{\log(K)}{\gamma_{T}}+\mathbf{E}\Bigg[\sum_{t=1}^{T}\frac{\gamma_{t}}{2}\mathbf{E}\bigg[\sum_{i\in [K]} p_{t}(i)\hat{\ell}^2_{t}(i)\Big|\mathcal{F}_{t-1}\bigg]\Bigg], \end{split}$$ where $\mathcal{F}_{t}$ is the sigma field with respect to the entire past until round $t$. Now, let us bound the terms in (\[eq:adv3\]). We have $$\label{eq:adv4} \begin{split} &\mathbf{E}\bigg[\sum_{i\in [K]} p_{t}(i) \hat{\ell}_{t}(i)\Big|\mathcal{F}_{t-1}\bigg]\\ &\geq \mathbf{E}\Bigg[\sum_{i\in[K]}(\tilde{p}_{t}(i)-\epsilon_{t}(i))\hat{\ell}_{t}(i)\Big|\mathcal{F}_{t-1}\Bigg],\\ &\geq \frac{1}{c_{min}}-\mathbf{E}\Bigg[\frac{r_{t}(i_t)}{c_{t}(i_t)}\Bigg|\mathcal{F}_{t-1}\Bigg]-\sum_{i\in[K]}\frac{\epsilon_{t}(i)}{c_{min}}. \end{split}$$ Also, $$\label{eq:adv44} \mathbf{E}\Bigg[\sum_{t=1}^{T}\hat{\ell}_{t}(i^*)\Bigg]=\sum_{t=1}^{T}\frac{1}{c_{min}}-\sum_{t=1}^{T}\frac{r_{t}(i^*)}{c_{t}(i^*)}.$$ Additionally, $$\label{eq:adv5} \begin{split} &\mathbf{E}\Bigg[\sum_{i\in[K]}p_{t}(i)\hat{\ell}^2_{t}(i)\Big|\mathcal{F}_{t-1}\Bigg]\\ &\leq \mathbf{E}\Bigg[\sum_{i\in[K]}\frac{p_{t}(i)}{c_{min}^2\tilde{p}^2_{t}(i)}\Bigg|\mathcal{F}_{t-1}\Bigg],\\ &\leq \sum_{i\in[K]}\frac{p_{t}(i)}{c_{min}^2\tilde{p}_{t}(i)},\\ &\stackrel{(a)}{\leq}\frac{2K}{c_{min}^2}, \end{split}$$ where the last inequality follows from the definition of $\tilde{p}_{t}(i)$, and the fact that for all $i\in [K]$ and $t$, $(1-\sum_{j\neq i}\epsilon_{t}(j))\geq 0.5$. Using (\[eq:adv2\]), (\[eq:adv3\]), (\[eq:adv4\]), (\[eq:adv44\]) and (\[eq:adv5\]), we have that the expected regret of the algorithm is at most $$\begin{split} &\frac{\log(K)}{\gamma_{n^\prime}} +\frac{K}{c_{min}^2} \sum_{t=1}^{n^\prime}\gamma_{t}+\sum_{t=1}^{\tau(E)}\sum_{i\in[K]}\frac{\epsilon_{t}(i)}{c_{min}}\\ &\stackrel{(a)}{\leq} \frac{\log(K)}{\gamma_{n^\prime}} +\frac{K}{c_{min}^2} \sum_{t=1}^{n^\prime}\gamma_{t}+\sum_{t=1}^{n^\prime}\sum_{i\in[K]}\frac{\gamma_{t}}{c^2_{min}}\\ &\stackrel{(b)}{\leq}6\sqrt{\frac{BK\log(K)}{c_{min}^3}}, \end{split}$$ where $n^\prime=\tau(E)+K/c_{min}$, $(a)$ follows from the value of $\gamma_{t}$ and from the fact that $\epsilon_{t}(i)\leq 0.5 c_{min}\sqrt{\log(K)/tK}$, and $(b)$ follows from the concavity of $\sqrt{x}$.
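For readers who want to see the exponential-weighting machinery of the Proof of Theorem 1 in executable form, the following is a minimal Python sketch reconstructed from the equations above. It is an illustration under our own simplifying assumptions (the `draw_reward_cost` callback, the function name, and the omission of the budget/stopping logic are ours), not the paper's reference implementation.

```python
import math
import random

def exp3_bwk_round(w, gamma, c_min, draw_reward_cost):
    """One round of an EXP3.BwK-style update (illustrative sketch).

    w: list of K positive weights w_t(i), mutated in place.
    draw_reward_cost(i) -> (r, c): hypothetical environment callback
    with r in [0, 1] and c in [c_min, 1].
    """
    K = len(w)
    W = sum(w)
    # p_t(i) = (1 - gamma) * w_t(i) / W_t + gamma / K, as in the proof.
    p = [(1 - gamma) * w[i] / W + gamma / K for i in range(K)]
    i_t = random.choices(range(K), weights=p)[0]
    r, c = draw_reward_cost(i_t)
    # Importance-weighted efficiency estimate; zero for unplayed actions,
    # so its conditional mean is r_t(i)/c_t(i), matching (eq:unbiasedEstimate).
    e_hat = r / (p[i_t] * c)
    # Weight update: w_{t+1}(i) = w_t(i) * exp(gamma * c_min * e_hat(i) / K).
    w[i_t] *= math.exp(gamma * c_min * e_hat / K)
    return i_t, r, c
```

Running such rounds until the budget $B$ is nearly exhausted, with $\gamma=\sqrt{c_{min} K\log(K)/(B(e-1)+K(e-2))}$ as in the proof, is the regime to which the $O\big(\sqrt{BK\log(K)/c_{min}^3}\big)$ bound refers.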
Last Updated on: December 6, 2015 by Ingrid King

Some books about animals warm your heart. Others touch your soul. Homer's Odyssey, subtitled A Fearless Feline Tale, or How I Learned About Love and Life with a Blind Wondercat, falls into the second category. This moving, inspirational and often funny story about a blind cat with a huge spirit, an endless capacity for love and joy, and a determination to persevere no matter what the obstacles, is a wonderful celebration of the bond between a cat and his human and the transformational power of loving an animal.

Homer's story begins when the stray kitten is brought to Miami veterinarian Dr. Patty Khuly (who wrote the foreword to the book), host of the popular veterinary blog Dolittler, at only three weeks of age. Homer loses both eyes to a severe eye infection, and while nobody would have faulted Dr. Khuly for euthanizing this kitten, she saw something in him that made her determined to save him. When Gwen gets a call from Dr. Khuly asking whether she would come take a look at this kitten, the last thing the author wants is another cat. She already has two, and she's worried about crossing the line into crazy cat lady territory by adopting another one. But she agrees to take a look – and falls in love.

Homer, the blind kitten who doesn't know he's blind, has a giant heart and an indomitable spirit. He quickly adapts to new situations and environments, and turns into a feline daredevil who scales tall bookcases in a single bound and catches flies by jumping five feet into the air. Eventually, Gwen and the three cats move from Miami to New York City (and the story of their move is an adventure that will have you on the edge of your seat with worry and concern for this family of four). Adjusting to city living in a cold climate takes some time, but once again, Homer's adaptable spirit triumphs. He even survives being trapped with his two feline companions for days after 9/11 in an apartment near the World Trade Center.

But it wasn't Homer's physical feats and his ability to adapt to physical limitations that ultimately transformed the author's life. Homer's unending capacity for love and joy, no matter what life's challenges may be, was a daily inspiration for Gwen, and ultimately taught her the most important lesson of all: Love isn't something you see with your eyes.

It's rare that a pet memoir is the kind of book you can't put down – but this one is. Thankfully, I knew at the outset that Homer is alive and well, so unlike what happens with so many books in this genre, I didn't expect to cry while reading this book. Little did I know how the gut-wrenching account of the author's experience in the days following 9/11 would affect me. Gwen Cooper lived through every cat owner's nightmare – fearing for the safety and survival of her cats, and being unable to get to them for several days. The moving narrative and emotional impact of this chapter will leave few cat lovers unaffected.

Homer's Odyssey is a must-read, to quote from the book's cover, "for anybody who's ever fallen completely and hopelessly in love with a pet." Coming soon on The Conscious Cat: an interview with author Gwen Cooper.

Ingrid King is an award-winning author, former veterinary hospital manager, and veterinary journalist who is passionate about cats.
https://consciouscat.net/book-review-homers-odyssey-by-gwen-cooper/
As we continue to celebrate 40 years of fellowships in Massachusetts, here are some of the star-spangled, firecrackin' July honors and accomplishments of the program's awardees.

Eighteen past Fellows and Finalists, including awardees from each of the four decades in the Artists Fellowships' history, are among the artists participating in the Isles Arts Initiative, in and around the Boston Harbor Islands this summer. Elizabeth Alexander, Amy Archambault, and Samantha Fields, as well as the !ND!V!DUALS Collective (which includes Luke O'Sullivan), have created site-responsive installations for Cove on Georges Island; Marilyn Arsem is among the artists performing in SEEN/UNSEEN on Spectacle Island; Christopher Abrams, Matt Brackett, Allison Cekala, Rosalyn Driscoll, Christopher Frost, Mags Harries, Scott Listfield, Kenji Nakayama, Andrew Neumann, Nick Schietromo, Candice Smith Corby, and Hannah Verlin are exhibiting in 34 at Boston Sculptors Gallery; and Sarah Wentworth is among the artists in Islands on the Edge at the Atlantic Wharf Gallery of Fort Point Arts Community. The project is led by curator and FLUX.Boston creator Liz Devlin.

Elizabeth Alexander, Rosalyn Driscoll, Mags Harries, Niho Kozuru, and Nancy Selvage are exhibiting in The Boston Sculptors Gallery at Chesterwood 2015 (thru 10/12).

Current and past MCC awardees including Karen Aqua, Prilla Smith Brackett, Caleb Cole, Gary Duehr, Matthew Gamber, Nona Hershey, Greer Muldowney, Elaine Spatz-Rabinowitz, Debra Weisberg, and Sarah Wentworth are exhibiting in the exciting exhibition In/Sight at the new Lunder Art Center at Lesley University (7/9-8/9, opening reception 7/9, 6-8 PM). The exhibition is curated by Randi Hopkins, Associate Director of Visual Arts at the Boston Center for the Arts, and celebrates the diversity of artists in Cambridge and Somerville.

Samantha Fields and Andrew Mowbray are among the artists in Tactile Textiles, featuring multidimensional fiber work, at the Boston Convention & Exhibition Center thru 12/2015.

Amy Archambault was named Artist in Residence for the Boston Center for the Arts Public Arts Residency. She is creating a large-scale interactive installation, inMotion: Memories of Invented Play, for the BCA's Tremont Street Plaza (7/23-10/18).

David Binder's documentary Calling My Children will again be broadcast on PBS this month, due to the success of its previous broadcasts. Find a broadcast schedule.

Sarah Bliss and Rosalyn Driscoll's new room-sized, multichannel immersive sculptural video and sound installation, Blindsight, exhibits at Boston Sculptors Gallery (thru 7/19). Read a glowing review in the Boston Globe.

Steven Bogart will be directing a new play conceived in 24 hours, as part of the Mad Dash event from Fresh Ink Theatre and Interim Writers (7/11, 8 PM, Cambridge YMCA).

Prilla Smith Brackett will exhibit as part of the group show In/Sight, juried by Randi Hopkins, at Lesley University's Lunder Art Center (7/9-8/9). She recently exhibited in Fractured Visions at Danforth Art; Smith College Museum of Art acquired her work Remnants: Communion #9 from that show.

Kelly Carmody won the Edmund C. Tarbell Award from the Guild of Boston Artists for her portrait Patrick (Man Holding White Cloth), and her winning painting is on the cover of the July/August issue of Fine Art Connoisseur magazine.

Timothy Coleman is exhibiting in Our Stories, a New Hampshire Furniture Masters show at the Thorne-Sagendorph Art Gallery, Keene, NH (thru 7/23, artist reception and presentation 7/2, 5:30 PM).
Gary Duehr is among the artists exhibiting in In Passing, a show of hybrid photography that incorporates painting or printmaking, at ArtSpace Maynard (thru 7/10).

Holly Guran read from her recently published poetry book River of Bones at the New England Mobile Book Fair in Newton (7/1, 7 PM). She'll also read on 8/1 at the Hunnewell Building of the Arnold Arboretum, with the Jamaica Pond Poets, in conjunction with an exhibit called Arboretum Inspiration: Image and Word, featuring poems by Holly and photographs by Philip McAlary (thru 9/3).

Michael Joseph and his photography were featured in a photo essay on CNN.com.

Ellen LeBow is contributing art writing and commentary in Rice Polak Gallery's publication Scratching the Surface.

Melinda Lopez's new play-in-progress Yerma will have a free public reading (RSVP here) at the Calderwood Pavilion of the Boston Center for the Arts (7/25, 3 PM), as part of the Huntington Theatre Company's Summer Workshop.

Mary Lum's recent show at Carroll and Sons Gallery was reviewed in the Boston Globe.

Mary Bucci McCoy is exhibiting at Gray Contemporary in Houston, TX, in a solo show, Residuum (thru 7/25).

Gary Metras published a poetry book, The Moon in the Pool, through Presa Press.

Nathalie Miebach is doing an artist residency at the Mountain Lake Biological Station in the Virginia mountains as part of their ARTLab Program.

Monica Raymond wrote the libretto for a new chamber opera, Koan (Charles Turner, composer), which had a workshop at New Opera and Musical Theater Initiative in June with Teresa Winner Blume and Brian Church.

Peter Snoad's new multi-media play, The Draft, about personal experiences with the military draft during the Vietnam War, will premiere at Hibernian Hall in Roxbury (9/10-9/20), where Peter has been Visiting Playwright. The play will then go on the road for performances at Westfield State University, The Academy of Music in Northampton, and Trinity College in Hartford, CT. Peter has launched a crowdfunding campaign to finance and continue the tour. Peter's short play, My Name is Art, will be staged by Fort Point Theatre Channel as part of its Inter-Actions festival (7/17-7/19).

Howard Stelzer has a new CD called How To, published by Phage Tapes in Minnesota. The CD is available from the label and a digital version is available from the artist. How To continues the artist's practice of building compositions using cassette tapes and tape players.

Read past Fellows Notes. If you're a past fellow/finalist with news, let us know.

Image: in-progress image of inMotion, a public art project by Amy Archambault (Sculpture/Installation/New Genres Fellow '13).
https://artsake.massculturalcouncil.org/fellows-notes-jul-15/
Estimating the global order of the fMRI noise model. One of the major issues in GLM-based fMRI analysis techniques is the presence of temporal autocorrelations in the residual signal after regression. A possible correction method is that of prewhitening, which fits an autoregressive (or other) model to the residual and uses the expected temporal autocorrelations of the model to transform the data and design matrix such that the residual becomes white noise. In this article, a method is introduced to estimate the global autoregressive model order of a data set, based on the residuals after regression. The proposed global standardized partial autocorrelation (SPAC) method tests whether the spatial profile of partial autocorrelations at a certain lag is random, and uses random field theory to account for the spatial correlations typical for fMRI data. It is tested both on synthetic and fMRI data, and is compared to two traditional techniques for model order estimation.
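A rough sense of what such order estimation involves can be conveyed in a few lines of code. The sketch below is our illustration only, and the function names are our own: it computes plain Yule-Walker partial autocorrelations for a single residual series and picks the last lag outside an approximate white-noise band. The SPAC method described above additionally standardizes the partial autocorrelations across space and corrects for spatial correlation using random field theory, which this sketch omits.

```python
import numpy as np

def pacf_yule_walker(x, max_lag):
    """Partial autocorrelations via Levinson-Durbin recursion on the
    sample autocovariances (assumes a non-degenerate series)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acov = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_lag + 1)])
    pacf = np.zeros(max_lag + 1)
    pacf[0] = 1.0
    phi_prev = np.zeros(0)                     # AR coefficients of order k-1
    for k in range(1, max_lag + 1):
        if k == 1:
            phi_k = acov[1] / acov[0]
            phi = np.array([phi_k])
        else:
            num = acov[k] - np.dot(phi_prev, acov[k - 1:0:-1])
            den = acov[0] - np.dot(phi_prev, acov[1:k])
            phi_k = num / den                  # reflection coefficient = PACF at lag k
            phi = np.append(phi_prev - phi_k * phi_prev[::-1], phi_k)
        pacf[k] = phi_k
        phi_prev = phi
    return pacf

def estimate_ar_order(residual, max_lag=10, z=1.96):
    """Smallest order p such that all PACF values beyond lag p lie within
    the approximate white-noise band +/- z/sqrt(n) (a classical cutoff)."""
    n = len(residual)
    pacf = pacf_yule_walker(residual, max_lag)
    band = z / np.sqrt(n)
    significant = [k for k in range(1, max_lag + 1) if abs(pacf[k]) > band]
    return max(significant) if significant else 0
```

Applied voxel-wise, such per-series estimates are noisy; pooling the standardized partial autocorrelations across the whole data set, as the SPAC method does, is what makes a single global order estimate feasible.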
In this role you work closely with the Risk Management, Trading, Software Engineering and Asset Pricing teams to maintain, enhance, and build risk models and tools used for managing market risk across various asset classes and product types including options, futures, and equities. Additionally, you will be performing large-scale data analysis for research and statistical analysis. The role is based in Chicago; however, you will have exposure to all IMC offices.

Consistently reflect our core values of higher standards, greater accountability, and more fun to bswift associates, clients, and partners. Work with the Implementation Leader to manage implementation kick-off meetings, requirements discussions and status meetings with clients. Manage timelines with other bswift implementation staff and clients. Manage client expectations, anticipating possible issues and communicating turnaround times with reasonable delivery dates. Perform analysis and configure the system as needed, and lead the team with other bswift resources (implementation analyst, product management, development, call center) to meet client deliverables, including, but not limited to:
- Working with the Implementation Lead to analyze the Benefits Class Matrix, Requirements Template, Rates, Permissions, Field Options, Site Text, import files of demographic and benefit data, and exports of vendor files as required.
- Configuring the system as needed based on the client's plan requirements listed above.
- Test planning and testing new and existing system functionality to ensure accuracy of the client's system configuration.

To enable the continued growth of our sales team, we are looking for a Trainer, Sales Development. It is your mission to ensure our sales team members are set up for success in achieving quota goals at Hireology. You will be responsible for increasing sales productivity and efficiency by developing the sales skills of our team. As we identify opportunities for education, you will help build new training content and ensure all of the training materials are ready to be delivered. From there, you'll deploy your coaching tactics and assess progress.

Working directly with engineers, program managers and product managers to understand and satisfy stakeholders' needs, while communicating and coordinating closely with business partners and leadership. Executing quantitative analyses that translate data into actionable insights that lead to innovation. Developing reports and dashboards that assist business stakeholders and further understanding of our products. Collaborating with engineers to collect new data sources into our data warehouse and data visualization platform. Constantly learning, discovering, and implementing ways of improving your own knowledge and our shared workflows. Writing SQL queries that will be leveraged by internal and external stakeholders.

Participate in the full life-cycle of development, from definition, design, implementation, and testing. Advise the CTO, CEO, CFO, and VP of Product on technologies that align with business needs. Evaluate tools and frameworks as to their ability to provide needed functionality. Be an advocate for developing best practices in the organization, and bring in knowledge of new technologies to the team. Be accountable for outcomes of major technology initiatives. Help design engineering processes that ensure highly reliable deployment of solutions. Give technical presentations to development, product, and leadership teams.
Work on building proof-of-concept architectures that have an eye towards production. Architect and build large distributed systems that scale well. Support business decisions with ad hoc analysis as needed. Develop training guidelines for cross-training team members on new technologies. Develop tools and utilities to maintain high system availability, monitor data quality, and provide statistics. Develop an understanding of healthcare & finance terminology and workflows.

As an SSDL architect, you will be a member of the Product Security Team helping to refine secure development lifecycle processes. You will work with release and product management teams to define new release security processes and sign-off activities. In this role you will be responsible for performing program SSDL gap analysis, routine product evaluations, and overseeing improvement plans for continuous monitoring solutions (DAST, SAST and SCA).

ServiceNow is seeking IT Business Management (ITBM) solution consultants who can provide leadership and expertise during pre-sales engagements. The role requires a combination of being well versed in presenting solutions to varied audiences and hands-on technical knowledge to go wide and deep on solution positioning and delivery as part of sales cycles.

As a Senior Software Engineer with the Platform Development team, you will be building the framework that allows our customers to create innovative, elegant and high-performing experiences across multiple devices. You will use your experience building modern web experiences and your expertise in performance, architecture, and object-oriented design to push the boundaries of our platform. You will also collaborate with cross-functional engineering teams to develop new and improve on existing platform features.

Serve as the first line of defense in keeping our business compliant. This means you'll maintain accurate, organized, and current compliance records that allow the Firm to respond quickly to any inquiry or audit. You'll also monitor business activities to ensure the Firm remains in compliance with SRO regulations along with Firm policies, and participate in the development, review and maintenance of Written Supervisory Procedures. Partner with our new employees to assist in the registration processes and Exchange membership documentation and applications. Continue to work closely with our office by managing the internal compliance material to ensure all documentation is up to date. Monitor trading activity using real-time alerts and various reports. Analyze and compile data into meaningful summaries, update log files and identify trends promptly, making findings available to the team. Investigate and perform research related to compliance initiatives. Summarize findings for review, including organizing supporting documents, data, and articles. Assist in monitoring and reviewing relevant laws, rules and regulations to identify new compliance requirements.

IMC is currently looking for a Senior Accountant to be a part of the US Finance Team. The Finance team operates closely with our trading and technology teams in order to manage the liquidity required to run a high-performance organization.
https://www.builtinchicago.org/jobs?f%5B0%5D=job-industry_hr-tech&f%5B1%5D=job-industry_natural-language-processing&f%5B2%5D=job-industry_software
A myocardial infarction, or heart attack, occurs when a portion of the heart muscle is deprived of oxygen for too long and dies. Like all muscles, the heart requires oxygen to function. It receives this via the two coronary arteries, and it is when one of these becomes blocked that the heart tissue is starved of oxygen (cardiac ischaemia). In most cases this happens when an atherosclerotic plaque – part of the fatty arterial wall build-up associated with coronary artery disease – ruptures, creating a blood clot that then blocks the already narrowed artery. If the ischaemia lasts too long, myocardial infarction – a condition characterised by the death of parts of the heart muscle – occurs. The coronary artery disease that usually causes myocardial infarction is often linked to unhealthy lifestyles. Smoking, obesity, high blood pressure (hypertension), high cholesterol and uncontrolled diabetes are all major risk factors.

What are its symptoms?

The Heart and Stroke Foundation South Africa (HSF) cautions that myocardial infarction symptoms may vary from person to person, but that signs to look out for include:
- Chest pain: This may feel like a crushing sensation, tightness, pressure or unusual discomfort in the central chest region, or it may feel similar to heartburn. Pain commonly radiates into the left arm, but may also be felt in the neck, jaw, back and right arm
- Dizziness, feeling faint and shortness of breath
- Cold sweat
- Nausea
- Fatigue and general feeling of being unwell

It is vital to treat these symptoms as a medical emergency and seek help as quickly as possible.

How is it diagnosed?

Diagnosis of a myocardial infarction is usually a matter of urgency that requires an immediate visit to a hospital's emergency room, as any delay can result in further irreparable damage to the heart muscle. There are two types of test that can confirm a heart attack:
- An electrocardiogram is the first, whereby electrodes attached to the skin are used to record whether there are any abnormalities in the way the heart is conducting its electrical impulses (a sign of heart muscle damage).
- Blood tests to look for the presence of cardiac enzymes, which are released by dying heart tissue, leading to elevated levels in the blood following a myocardial infarction.
- Additional tests like angiograms, X-rays and echocardiograms may be used to determine where the blockage occurred and the extent of the damage to the heart muscle.

What are your treatment options?

Myocardial infarction treatment is focused first on restoring blood flow to the heart as swiftly as possible. Various clot-busting medications are administered to help dissolve the clot or prevent new ones from forming – aspirin is one common household medication that can be used in an emergency situation while waiting for help to arrive. Painkillers and drugs to dilate the arteries and decrease blood pressure may also be used. Myocardial infarction frequently requires surgery, either to widen the blocked artery with a special balloon, usually placing a small mesh tube called a stent (coronary angioplasty), or to bypass the blockage by using blood vessels harvested from other parts of the body (coronary bypass surgery).

Can it be prevented?

There is a significant lifestyle factor that contributes to the development of coronary artery disease, which is usually the precursor of a myocardial infarction. As such, there are many cases in which prevention of myocardial infarction may be possible, simply by making healthier choices.
These include:
- Not smoking
- Keeping your weight within a healthy range
- Eating a balanced diet
- Exercising regularly and managing other health conditions that may increase your risk

Those already at risk or recovering from a previous heart attack should discuss medication options to help reduce their risk with their healthcare provider. For more info contact the Heart and Stroke Foundation South Africa.
https://clicks.co.za/health/conditions/article-view/myocardial-infarction-heart-attack
Computers can be used to deliver self-guided interventions and to provide access to live therapists at remote locations. These treatment modalities could help overcome barriers to treatment, including cost, availability of therapists, logistics of scheduling and traveling to appointments, stigma, and lack of therapist training in evidence-based treatments (EBTs). EBTs could be delivered at any time in any place to individuals who might otherwise not have access to them, improving public mental health across the United States. In order to fully exploit the opportunities to use computers for mental health care delivery, however, advances need to be made in four domains: (1) research, (2) training, (3) policy, and (4) industry. This article discusses specific challenges (and some possible solutions) to implementing computer-based distance therapy and self-guided treatments in the United States. It lays out both a roadmap and, in each of the four domains, the milestones that need to be met to reach the goal of making EBTs for behavioral health problems available to all Americans.
https://www.ncbi.nlm.nih.gov/pubmed/20235773
The Department of Transport issued a report recommending that there should be a system in place that graduates what license holders can do at certain ages, in an attempt to help prevent crashes. This report based its evidence on other countries that have demonstrated a system of graduated driver licenses – and the information is strong and consistent. It is estimated that, if in place, this proposal would allow new drivers to gain experience on the road without being exposed to major risks, with a minimum learning period; included in the proposal is a late-night driving curfew. Included in some of the recommended changes are:
- Placing a minimum 12-month learner stage that requires at least 100 hours of driving time during the day and 20 hours of driving at night, which would be supervised, with a mandatory log book detailing the practice.
- Once you have passed the test, you will be moved onto a probationary license from the age of 18.
- During this probationary period, which would last a minimum of 12 months, a green P plate would need to be displayed at all times on the car so that other users of the road can see you are a new driver.
- After the 12-month probationary period is over, as long as there have been no convictions or accidents, a full UK driving license will be issued.

With younger drivers being involved in a large portion of road casualties, there is also a call for more affordable public transport, as many of them feel they have to drive to be able to afford to get around. With this in mind, let's all pledge to drive more safely and be considerate of other users of the road.
https://www.leaseyournextcar.com/blog/driving-test-update-new-proposal
The first residents of the area were the Plains Apache, followed by the Kiowa, Cheyenne, Comanche, and Arapaho. The area was sparsely settled for many years after the Louisiana Purchase. It wasn't until the railroads came through that much settlement happened. Kansas was made a state on 29 January 1861 as the 34th state. On 20 March 1873, Seward County was created. The post office at Liberal was opened in June 1886. Liberal was made the county seat on 8 December 1892.

Liberal, Kansas boasts the location of Dorothy's house from the Wizard of Oz, and every year there is an Oz Festival. Be sure to see the Land of Oz.

Pancake Day, Downtown Liberal. Phone: 620-624-6427. Annual race against the women of Olney, England, and related festivities. The race is on Shrove Tuesday each year; activities take place through the weekend. Admission: charges vary for activities; viewing the race is free. The traditional story maintains that in Olney, England, one woman was busy preparing her pancakes when the bell rang to go to church. She grabbed up skillet, pancake and all, and ran in her apron to the church, becoming the first pancake racer. Over the years, Olney became England's official Pancake Day celebration place.

Rock Island Depot and Jubilee. Phone: (620) 626-0156. Please call for specific hours of operation. Lots of information about Liberal.

Coronado Museum, 567 East Cedar, Liberal, Kansas. Phone: 620-624-7624. Historical society and museum showing life in the early days of Seward County. Hours: 9am-6pm Monday-Saturday; 1pm-5pm Sunday. Closed Mondays in winter. Admission: all exhibits free. Please call ahead to confirm hours of operation.

Mid-America Museum/Air Show, 2000 West 2nd, Liberal, Kansas. Phone: 620-624-5263. Kansas' largest air museum and semiannual air show. Hours: 10am-5pm Monday-Saturday, 1-5pm Sunday. Closed Mondays in winter. Admission: $2-$5. Please call ahead to confirm hours of operation. Website: Liberal Air Museum.

Economy & Industry: Early on, the main emphasis was on agriculture. Though agriculture still remains a major part of the area economy, there has been expansion into related fields, such as dairies, cattle feeding operations, and corporate pork production. By the mid-1900s, more gas and oil field development occurred, and it continues to make a major impact on the area economy.
http://usacitiesonline.com/kscountyliberal.htm
Erica Chenoweth, a researcher on violence and co-author of the NAVCO Data Project, found even more evidence that non-violent protests are more successful: "Countries in which there were nonviolent campaigns were about 10 times likelier to transition to democracies within a five-year period compared to countries in …"

Why are peaceful protests more effective than violent protests?

"There's certainly more evidence that peaceful protests are more successful because they build a wider coalition," says Gordana Rabrenovic, associate professor of sociology and director of the Brudnick Center on Violence and Conflict. … "Violence can scare away your potential allies."

Is nonviolent protest effective?

Research shows that non-violent campaigns diffuse spatially. … Many movements which promote philosophies of nonviolence or pacifism have pragmatically adopted the methods of nonviolent action as an effective way to achieve social or political goals.

Which is better, violence or nonviolence?

Recent quantitative research has demonstrated that nonviolent strategies are twice as effective as violent ones. Organized and disciplined nonviolence can disarm and change the world – and our lives, our relationships and our communities.

What are the advantages and disadvantages of nonviolent protest?

An advantage is that nonviolent protesters always have the moral high ground; they reveal the brutality of their violent opponents. The disadvantages are that nonviolent protesters can be abused, or even killed, by violent opponents.

Are peaceful protests ever successful?

They found that nonviolent campaigns had been twice as effective as the violent campaigns: they succeeded about 53 per cent of the time compared to 25 per cent for armed resistance. … A democracy initiated by a nonviolent movement was less likely to fall back into civil war, for example.

What peaceful protests have worked?

While the overall impacts of the current national protests are still unfolding, they will likely be influential, just like these movements:
- Boston Tea Party. Dec. …
- Women's Suffrage Parade. …
- The March on Washington for Jobs and Freedom. …
- Stonewall Riots. …
- Occupation of Alcatraz. …
- The March for Our Lives. …
- Telegramgate Protests.

Why is peaceful protest better?

Advantages of peaceful protest:
- Success rate: From 1900 to 2015, nonviolent campaigns succeeded 51 percent of the time, but violent campaigns succeeded 27 percent of the time.
- Low risk: Because peaceful protest is peaceful, there is a much lower risk that you will get hurt if you choose to help protest.

What is considered a peaceful protest?

"If protesters don't follow those necessary things, (police) have to make sure it is safe for all involved," Taylor said. … "Anytime you're causing harm or causing property damage, those are not legitimate actions of peaceful protests."

Can a revolution be peaceful?

A peaceful revolution or bloodless coup is an overthrow of a government that occurs without violence. If the revolutionists refuse to use violence, it is known as a nonviolent revolution.

What is the goal of non-violence?

The aim of non-violent conflict is to convert your opponent; to win over their mind and heart and persuade them that your point of view is right. An important element is often to make sure that the opponent is given a face-saving way of changing their mind.

What is the importance of non-violence?
Nonviolence, or ahimsa, is one of the cardinal virtues and an important tenet of Jainism, Hinduism and Buddhism. It is a multidimensional concept, inspired by the premise that all living beings have the spark of the divine spiritual energy; therefore, to hurt another being is to hurt oneself.

How important is non-violence in today's world? "Non-violence is the greatest force at the disposal of mankind. It is the mightiest weapon devised by the ingenuity of Man," Mahatma Gandhi said. … Non-violence is the personal practice of being harmless to self and others under every condition. Gandhi spread non-violence through movements and writings.

What are the disadvantages of peaceful protest? - The disadvantages of a non-violent protest include verbal abuse, which is used more commonly since physical violence is not tolerated in non-violent protests. - Restrictions on peaceful assembly may provoke minor incidents of violence. - Protesters may be punished for going against the government.

What are the disadvantages of a nonviolent protest? Disadvantages: - It may be necessary to use force and fighting when striving for justice. - Direct action is needed to show seriousness. - The people causing the injustice/suffering are unaffected. - Just war.

Can protest be violent? When such restrictions occur, protests may assume the form of open civil disobedience, more subtle forms of resistance against the restrictions, or may spill over into other areas such as culture and emigration. … Protesters and counter-protesters can sometimes violently clash.
https://lambswar.com/protestantism/best-answer-are-peaceful-protests-more-effective-than-violent-ones.html
Pattern-defeating quicksort - pettou https://github.com/orlp/pdqsort ====== atilimcetin Rust's stable sort is based on timsort ([https://doc.rust-lang.org/std/vec/struct.Vec.html#method.sort](https://doc.rust-lang.org/std/vec/struct.Vec.html#method.sort)) and its unstable sort is based on pattern-defeating quicksort ([https://doc.rust-lang.org/std/vec/struct.Vec.html#method.sort_unstable](https://doc.rust-lang.org/std/vec/struct.Vec.html#method.sort_unstable)). The documentation says that 'It [unstable sorting] is generally faster than stable sorting, except in a few special cases, e.g. when the slice consists of several concatenated sorted sequences.' ~~~ gnarbarian I like merge sort. Average time may be worse, but its upper bound is better and it is conceptually cleaner and easier to understand (IMO). ~~~ partycoder Just that it takes extra space, and that's sometimes a constraint. ~~~ chii [https://stackoverflow.com/questions/2571049/how-to-sort-in-place-using-the-merge-sort-algorithm](https://stackoverflow.com/questions/2571049/how-to-sort-in-place-using-the-merge-sort-algorithm) In-place merge sort exists. It's hard to write tho ~~~ partycoder Which decreases the space complexity but increases the time complexity. ~~~ stjepang No, the time complexity is the same: O(n log n). The author of the top answer links to his book, where you can find a proof of the time complexity: [https://sites.google.com/site/algoxy/home/elementary-algorithms.pdf](https://sites.google.com/site/algoxy/home/elementary-algorithms.pdf) ~~~ EvgeniyZh ...but it increases the run time. It's fine not to care about hidden constants while analyzing algorithms, but not while using them in real life ------ stjepang I think it's fair to say that pdqsort (pattern-defeating quicksort) is overall the best unstable sort and timsort is overall the best stable sort in 2017, at least if you're implementing one for a standard library. The standard sort algorithm in Rust is timsort[1] (slice::sort), but soon we'll have pdqsort as well[2] (slice::sort_unstable), which shows great benchmark numbers.[3] Actually, I should mention that both implementations are not 100% equivalent to what is typically considered timsort and pdqsort, but they're pretty close. It is notable that Rust is the first programming language to adopt pdqsort, and I believe its adoption will only grow in the future. Here's a fun fact: typical quicksorts (and introsorts) in standard libraries spend most of the time doing literally nothing - just waiting for the next instruction because of failed branch prediction! If you manage to eliminate branch mispredictions, you can easily make sorting twice as fast! At least that is the case if you're sorting items by an integer key, or a tuple of integers, or something primitive like that (i.e. when comparison is rather cheap). Pdqsort efficiently eliminates branch mispredictions and brings some other improvements over introsort as well - for example, the complexity becomes O(nk) if the input array is of length n and consists of only k different values. Of course, the worst-case complexity is always O(n log n). Finally, last week I implemented parallel sorts for Rayon (Rust's data parallelism library) based on timsort and pdqsort[4]. Check out the links for more information and benchmarks. And before you start criticizing the benchmarks, please keep in mind that they're rather simplistic, so please take them with a grain of salt. I'd be happy to elaborate further and answer any questions.
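(For anyone who wants to see the two entry points side by side, here is a minimal sketch using only the Rust standard library; the method names are exactly the ones from the documentation linked above, and no timing claims from this thread are reproduced here:)

```rust
fn main() {
    let mut stable = vec![5u64, 3, 8, 1, 9, 2];
    let mut unstable = stable.clone();

    // Stable sort: the timsort-based slice::sort. Equal elements keep
    // their original relative order, at the cost of O(n) scratch space.
    stable.sort();

    // Unstable sort: the pdqsort-based slice::sort_unstable. In-place
    // and usually faster, but equal elements may be reordered.
    unstable.sort_unstable();

    // With all-distinct keys the two results necessarily agree.
    assert_eq!(stable, unstable);
    println!("{:?}", stable);
}
```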
[1] [https://github.com/rust-lang/rust/pull/38192](https://github.com/rust-lang/rust/pull/38192) [2] [https://github.com/rust-lang/rust/issues/40585](https://github.com/rust-lang/rust/issues/40585) [3] [https://github.com/rust-lang/rust/pull/40601](https://github.com/rust-lang/rust/pull/40601) [4] [https://github.com/nikomatsakis/rayon/pull/379](https://github.com/nikomatsakis/rayon/pull/379) ~~~ Retric Comparing sorting algos often says more about your benchmark than the algos themselves. Random and pathological are obvious, but often you're dealing with something in between. Radix vs n log n is another issue. So, what were your benchmarks like? ~~~ stjepang That is true - the benchmarks mostly focus on random cases, although there are a few benchmarks with "mostly sorted" arrays (sorted arrays with sqrt(n) random swaps). If the input array consists of several concatenated ascending or descending sequences, then timsort is the best. After all, timsort was specifically designed to take advantage of that particular case. Pdqsort performs respectably, too, and if you have more than a dozen of these sequences or if the sequences are interspersed, then it starts winning over timsort. Anyway, both pdqsort and timsort perform well when the input is not quite random. In particular, pdqsort blows introsort (e.g. typical C++ std::sort implementations) out of the water when the input is not random[1]. It's pretty much a strict improvement over introsort. Likewise, timsort (at least the variant implemented in Rust's standard library) is pretty much a strict improvement over merge sort (e.g. typical C++ std::stable_sort implementations). Regarding radix sort, pdqsort can't quite match its performance (it's O(n log n) after all), but it can perform fairly respectably. E.g. ska_sort[2] (a famous radix sort implementation) and Rust's pdqsort perform equally well on my machine when sorting 10 million random 64-bit integers. However, on larger arrays radix sort starts winning easily, which shouldn't be surprising. I'm aware that benchmarks are tricky to get right, can be biased, and are always controversial. If you have any further questions, feel free to ask. [1]: [https://github.com/orlp/pdqsort](https://github.com/orlp/pdqsort) [2]: [https://github.com/skarupke/ska_sort](https://github.com/skarupke/ska_sort) ~~~ prestonbriggs There are lots of pathologies to watch out for... Imagine sorting 10 million random ints, where 1% of them are random 64-bit values and 99% are in the range [0..9]. Might be extra fun in parallel. ------ nneonneo I would love to see the benchmark results against Timsort, the Python sorting algorithm that also implements a bunch of pragmatic heuristics for pattern sorting. Timsort has a slight advantage over pdqsort in that Timsort is stable, whereas pdqsort is not. I see that timsort.h is in the benchmark directory, so it seems odd to me that the README doesn't mention the benchmark results. ~~~ nightcracker There are multiple reasons I don't include Timsort in my README benchmark graph: 1. There is no authoritative implementation of Timsort in C++. In the bench directory I included [https://github.com/gfx/cpp-TimSort](https://github.com/gfx/cpp-TimSort), but I don't know the quality of that implementation. 2. pdqsort intends to be the algorithm of choice for a system unstable sort. In other words, a direct replacement for introsort for std::sort. So std::sort is my main comparison vehicle, and anything else is more or less a distraction.
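(As an aside, the "mostly sorted" input class stjepang mentions above - sorted, then perturbed with about sqrt(n) random swaps - is easy to reproduce. Here is a rough sketch of how such a benchmark array could be generated; the xorshift constants are arbitrary, and this is illustrative, not the actual benchmark code from the Rust PRs:)

```rust
fn main() {
    let n: usize = 10_000_000;
    let mut v: Vec<u64> = (0..n as u64).collect(); // fully sorted base

    // Tiny xorshift PRNG so the sketch needs no external crates.
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    let mut next = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        state
    };

    // "Mostly sorted": roughly sqrt(n) random swaps.
    let swaps = (n as f64).sqrt() as usize;
    for _ in 0..swaps {
        let i = (next() % n as u64) as usize;
        let j = (next() % n as u64) as usize;
        v.swap(i, j);
    }

    // Adaptive sorts should handle this input considerably faster
    // than a fully random array of the same size.
    v.sort_unstable();
    assert!(v.windows(2).all(|w| w[0] <= w[1]));
}
```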
The only reason I included std::stable_sort in the benchmark is to show that unstable sorting is an advantage for speed, for those unaware. But, since you're curious, here's the benchmark result with Timsort included on my machine: [http://i.imgur.com/tSdS3Y0.png](http://i.imgur.com/tSdS3Y0.png) This is for sorting integers, however; I expect Timsort to become substantially better as the cost of a comparison increases. ------ ComputerGuru Just because I was confused: this is by Orson Peters, who first invented pdq. It's not brand new (as in yesterday), but it is a very, very recent innovation (2016). ------ dpcx Question as a non-low-level developer, and please forgive my ignorance: How is it that we're essentially 50 years into writing sorting algorithms, and we still find improvements? Shouldn't sorting items be a "solved" problem by now? ~~~ stjepang Basically all comparison-based sort algorithms we use today stem from two basic algorithms: mergesort (stable sort, from 1945) and quicksort (unstable sort, from 1959). Mergesort was improved by Tim Peters in 2002, and that became timsort. He invented a way to take advantage of pre-sorted intervals in arrays to speed up sorting. It's basically an additional layer over mergesort with a few other low-level tricks to minimize the amount of memcpying. Quicksort was improved by David Musser in 1997 when he developed introsort. He set a strict worst-case bound of O(n log n) on the algorithm, as well as improving the pivot selection strategy. And people are inventing new ways of pivot selection all the time. E.g. Andrei Alexandrescu published a new method in 2017[1]. In 2016 Edelkamp and Weiß found a way to eliminate branch mispredictions during the partitioning phase in quicksort/introsort. This is a vast improvement. The same year, Orson Peters adopted this technique and developed pattern-defeating quicksort. He also figured out multiple ways to take advantage of partially sorted arrays. Sorting is a mostly "solved" problem in theory, but as new hardware emerges, different aspects of implementations become more or less important (cache, memory, branch prediction), and then we figure out new tricks to take advantage of modern hardware. And finally, multicore became a thing fairly recently, so there's a push to explore sorting in yet another direction... [1] [http://erdani.com/research/sea2017.pdf](http://erdani.com/research/sea2017.pdf) ~~~ xenadu02 It's always good to remember that while Big-O is useful, it isn't the be-all end-all. The canonical example on modern hardware is a linked list. In theory it has many great properties. In reality, chasing pointers can be death due to cache misses. Often a linear search of a "dumb" array can be the fastest way to accomplish something, because it is very amenable to pre-fetching (it is obvious to the pre-fetcher what address will be needed next). Even a large array may fit entirely in L2 or L3. For small data structures, arrays are almost always a win; in some cases even hashing is slower than a brute-force search of an array! A good middle ground can be a binary tree with a bit less than an L1's worth of entries in an array stored at each node. The binary tree lets you skip around the array quickly while the CPU can zip through the elements at each node. It is more important than ever to test your assumptions. Once you've done the Big-O analysis to eliminate exponential algorithms and other basic optimizations, you need to analyze the actual on-chip performance, including cache behavior and branch prediction.
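(In that spirit, here is a small, admittedly unscientific harness one could use to check the pointer-chasing claim on their own machine; standard library only, and the numbers will vary wildly by CPU and allocator:)

```rust
use std::collections::LinkedList;
use std::time::Instant;

fn main() {
    let n = 10_000_000u64;

    // Same values in a contiguous array and in a doubly linked list.
    let vec: Vec<u64> = (0..n).collect();
    let list: LinkedList<u64> = (0..n).collect();

    // Contiguous memory: the hardware prefetcher can stream it.
    let t = Instant::now();
    let s1: u64 = vec.iter().sum();
    println!("vec  sum = {}, took {:?}", s1, t.elapsed());

    // Pointer chasing: every element is a dependent load.
    let t = Instant::now();
    let s2: u64 = list.iter().sum();
    println!("list sum = {}, took {:?}", s2, t.elapsed());
}
```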
~~~ flukus > It's always good to remember that while Big-O is useful, it isn't the be-all > end-all. The canonical example on modern hardware is a linked list. In > theory it has many great properties. In reality chasing pointers can be > death due to cache misses. My favorite example is adding an ordered list of items into a simple tree: all you've really done is create a linked list. Big-O doesn't know what your data looks like, but you generally should. ~~~ beagle3 A simple binary tree is O(n^2) just like a linked list. Unless you know your distributions and are generally proficient in probability theory (in 99% of cases, neither can be relied on), the only relevant big-O metric is the worst-case one ------ graycat I always wondered if there would be a way to have quicksort run slower than O(n ln(n)). Due to that possibility, when I code up a sort routine, I use heap sort. It is guaranteed O(n ln(n)) worst case and achieves the Gleason bound for sorting by comparing keys, which means that, on the number of key comparisons, it is impossible to do better than heap sort's O(n ln(n)) on average or in the worst case - ever. For a stable sort, sure, just extend the sort keys with a sequence number, do the sort, and remove the key extensions. Quicksort has good main memory locality of reference and a possibility of some use of multiple threads, and heap sort seems to have neither. But there is a version of heap sort modified for doing better on locality of reference when the array being sorted is really large. But, if you are not too concerned about memory space, then you don't have to care about the sort routine being _in place_. In that case, you get O(n ln(n)), a stable sort, no problems with locality of reference, and the ability to sort huge arrays, with just the old merge sort. I long suspected that much of the interest in in-place, O(n ln(n)), stable sorting was due to some unspoken but strong goal of finding some fundamental _conservation law_ of a trade-off between processor time and memory space. Well, that didn't really happen. But heap sort is darned clever; I like it. ~~~ torrent-of-ions It's cool to play with a pack of cards and run sorting algorithms on them. To see the worst case of quicksort, use the first element as the pivot and give it an already sorted list. It will take quadratic time to give back the same list. ~~~ graycat Right. So, for the first "pivot" value, people commonly use the median of three -- take three keys and use as the pivot the median of those three, that is, the middle value. Okay. But then the question remains: while in practice the median of three sounds better, maybe there is a goofy, pathological array of keys that still makes quicksort run in quadratic time. Indeed, maybe for any way of selecting the first pivot, there is an array that makes quicksort quadratic. Rather than think about that, I noticed that heap sort meets the Gleason bound, which means that heap sort's O(n ln(n)) performance, both worst case and average case, can never be beaten by a sort routine that depends on comparing keys two at a time. Then, sure, you can beat O(n ln(n)). How? Use radix sort -- that was how the old punched card sorting machines worked. So, for an array of length n and a key of length k, the thing always runs in O(nk), which for sufficiently large n is less than O(n ln(n)). In practice? Nope: I don't use radix sort! ------ jkabrg [Post-edit] I made several edits to the post below. First, to make an argument. Second, to add paragraphs.
[/Post-edit] Tl;dr version: It seems to me you should either use heapsort or plain quicksort; the latter with the sort of optimisations described in the linked article, but not including the fallback to heapsort. Long version: Here's my reasoning for the above: You're either working with lists that are reasonably likely to trigger the worst case of randomised quicksort, or you're not working with such lists. By likely, I mean the probability is not extremely small. Consider the case where the worst case is very unlikely: you're so unlikely to hit a worst case that you gain almost nothing by accounting for it, except extra complexity. So you might as well only use quicksort with optimisations that are likely to actually help. Next is the case where a worst case might actually happen. Again, this is not by chance; it has to be because someone can predict your "random" pivot and screw with your algorithm; in that case, I propose just using heapsort. Why? This might be long, so I apologise. It's because usually when you design something, you design it to a high tolerance; a high tolerance in this case ought to be the worst case of your sorting algorithm. In which case, when designing and testing your system, you'll have to do extra work to tease out the worst case. To avoid doing that, you might as well use an algorithm that takes the same amount of time every time, which I think means heapsort. ~~~ nightcracker The overhead of including the fallback to heapsort takes a negligible, non-measurable amount of processing time, guarantees a worst-case runtime of O(n log n), and, to be more precise, a worst case that is 2 - 4 times as slow as the best case. Your logic also would mean that any sorting function that is publicly facing (which is basically any interface on the internet, like a sorted list of Facebook friends) would need to use heapsort (which is 2-4 times as slow), as otherwise DoS attacks are simply done by constructing worst-case inputs. There are no real disadvantages to the hybrid approach. ~~~ jkabrg Thanks for your reply. > Your logic also would mean that any sorting function that is publicly facing > (which is basically any interface on the internet, like a sorted list of > Facebook friends) would need to use heapsort (which is 2-4 times as slow), > as otherwise DoS attacks are simply done by constructing worst case inputs. Why is that a wrong conclusion? It might be, I'm not a dev. But if I found myself caring about that sort of minutiae, I would reach exactly that conclusion. Reasons: * the paranoid possibility that enough users can trigger enough DoS attacks that your system can fall over. If this is likely enough, maybe you should design for the 2-4x worst case, and make your testing and provisioning of resources easier. * a desire for simplicity when predicting performance, which you're losing by going your route, because you're adding the possibility of a 2-4x performance drop depending on the content of the list. Ideally, you want the performance to be solely a function of n, where n is the size of your list; not of n and the time-varying distribution of evilness over your users. Finally, adding a fallback doesn't seem free to me, because it might fool you into not addressing the points I just made. That O(n^2) for quicksort might be a good way to get people to think; your O(n log n) is hiding factors which don't just depend on n.
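(For anyone who wants to see the gap being discussed, here is a toy first-element-pivot quicksort - nothing like pdqsort, purely illustrative - that counts key comparisons; feeding it already-sorted input reproduces the quadratic behaviour mentioned upthread:)

```rust
// Toy quicksort: first-element pivot, Lomuto-style partition, and a
// comparison counter. Illustrative only -- not how pdqsort works.
fn quicksort(v: &mut [u64], comps: &mut u64) {
    if v.len() <= 1 {
        return;
    }
    let pivot = v[0];
    let mut store = 0; // end of the "< pivot" region
    for i in 1..v.len() {
        *comps += 1;
        if v[i] < pivot {
            store += 1;
            v.swap(i, store);
        }
    }
    v.swap(0, store); // move the pivot into its final position
    let (lo, hi) = v.split_at_mut(store);
    quicksort(lo, comps);
    quicksort(&mut hi[1..], comps);
}

fn main() {
    for n in [500u64, 1_000, 2_000, 4_000] {
        let mut v: Vec<u64> = (0..n).collect(); // already sorted: worst case
        let mut comps = 0u64;
        quicksort(&mut v, &mut comps);
        println!("n = {:4}: {:8} comparisons (roughly n^2/2)", n, comps);
    }
}
```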
~~~ carussell I'm sympathetic, because it may not be clear: pdqsort is a hybrid sort; when it encounters apparently pathological input, it switches from a strategy resembling quicksort to heapsort, so it doesn't share quicksort's worst-case characteristics. Your thesis is wrong: > you might as well use an algorithm that takes the same amount of time every > time, which I think means heapsort Heapsort has a variable runtime. It will selectively reheap. Whether this happens is dependent on the state of the heap and your next element, which means the total number of times you reheap will vary with input. ~~~ jkabrg > pdqsort is a hybrid sort; when it encounters apparently pathological input, > it switches from a strategy resembling quicksort to heapsort - it doesn't > share quicksort's worst case characteristics. I understand how hybrid sorts work. I thought that would be clear enough. I guess when you're a stranger on the internet, people don't automatically trust you to know what you're talking about. I imagine this is even truer when you say something counterintuitive or controversial. In spite of learning that, I'm responding chiefly because of that quote. > Your thesis is wrong: > > Heapsort has a variable runtime Let's not get hung up on heapsort. My thesis is: if you have an algorithm with a gap between the worst case and average case, then you shouldn't use it on the web. That gap might _not_ manifest as a gap in the big O -- it could be 4x slower in the worst case than the average case, and that would still be bad. To see why, try imagining this world of pure evil: You create a website that uses _gap_ sort. Gapsort is an algorithm that is 4x slower in the worst case than it is in the average case, for a fixed value of n (the size of the input). On top of that, triggering the worst case by accident is hard. You deploy your site for several months, buy servers, and get a userbase. In your performance testing, you only encountered the average case, so you only provisioned hardware for the average case. Now imagine that suddenly all your users turn evil and start triggering the worst case; this leads to a sudden 4x performance drop. You may have underprovisioned for this. So my thesis is that it _looks like_ having a difference between the worst case and average case is great, on average. But in a hostile world, that actually makes things more complicated. When doing empirical performance testing, you'll test for the average. _Moreover_, the gap can be just 4x; _this will not manifest as a difference between the big Os of the worst and average cases_ (as I've said previously). Changing from quicksort to heapsort on some inputs may manifest as a performance gap between the QS case and the fallback case. Maybe that's not such a good idea. In fact, maybe you shouldn't use any quicksort variant, even introsort, in a potentially hostile environment, _because of the unavoidable average-to-worst-case gap_. I hope that wasn't too repetitive. [edit] I deleted and reposted this because the edits weren't being applied. [edit] Done it twice now. ~~~ bmm6o I hear what you're saying, but it seems like this is potentially a problem only in cases where you are devoting significant resources to sorting. Like if sorting takes only 1% of your CPU time, worst-case input will only bump that to 4% - and that's only if every user starts hitting you with worst-case requests.
Even if you spend more, it's a question of how much work the malicious users can cause you to do. ------ j_s Has HN ever discussed the possibilities of purposely crafting worst-case input to amplify a denial-of-service attack? ~~~ nightcracker If whoever you're targeting uses libc++, I already did the analysis: [https://bugs.llvm.org/show_bug.cgi?id=20837](https://bugs.llvm.org/show_bug.cgi?id=20837) To my knowledge it's still not fixed. ------ kleiba This book is one of the most general treatments of parameterized quicksort available: [http://wild-inter.net/publications/wild-2016.pdf](http://wild-inter.net/publications/wild-2016.pdf) ------ beagle3 Anyone know how this compares to Timsort in practice? A quick google turns up nothing ~~~ mastax [https://github.com/rust-lang/rust/pull/40601](https://github.com/rust-lang/rust/pull/40601) "stable" is a simplified Timsort: [https://github.com/rust-lang/rust/pull/38192](https://github.com/rust-lang/rust/pull/38192) "unstable" is a pdqsort ~~~ stjepang To summarize: If comparison is cheap (e.g. when sorting integers), pdqsort wins because it copies less data around and the instructions are less data-dependency-heavy. If comparison is expensive (e.g. when sorting strings), timsort is usually a tiny bit faster (around 5% or less) because it performs a slightly smaller total number of comparisons. ------ jorgemf Where is a high-level description of the algorithm? How is it different from quicksort? It seems quite similar, based on a quick observation of the code. ~~~ klodolph The readme file actually contains a fairly thorough description of how it differs from quicksort. Start with the section titled "the best case". ~~~ jorgemf > On average case data where no patterns are detected pdqsort is effectively a > quicksort that uses median-of-3 pivot selection So basically it is quicksort with a bit more clever pivot selection, but only for some cases. ~~~ stjepang You're forgetting probably the most important optimization: block partitioning. This one alone makes it almost 2x faster (on random arrays) than typical introsort when sorting items by an integer key. ------ wiz21c Is there an analysis of its complexity? The algorithm looks very nice! ~~~ nightcracker Hey, author of pdqsort here; the draft paper contains complexity proofs of the O(n log n) worst case and O(nk) best case with k distinct keys: [https://drive.google.com/open?id=0B1-vl-dPgKm_T0Fxeno1a0lGT0E](https://drive.google.com/open?id=0B1-vl-dPgKm_T0Fxeno1a0lGT0E) ~~~ ouid Best case? Give worst and average case when describing complexities. ~~~ nightcracker I already did, you may want to re-read my comment. ~~~ ouid Am I misinterpreting your usage of "best case"? ~~~ nightcracker Without enlightening me on what your interpretation is, I have no way of telling. pdqsort has a worst case of O(n log n). That means, no matter what, the algorithm never takes more than a constant factor times n log n time to complete. Since pdqsort is strictly a comparison sort, and comparison sorts can do no better than O(n log n) in the average case, pdqsort is asymptotically optimal in the average case (because the average case can never be worse than the worst case). On top of the above guarantees, if your input contains only k distinct keys, then pdqsort has a worst-case complexity of O(nk). So when k gets small (say, 1-5 distinct elements), pdqsort approaches linear time. That is pdqsort's best case.
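(A quick way to poke at the O(nk) claim from the Rust side, assuming a toolchain where slice::sort_unstable is the pdqsort-based implementation; the key-scrambling multiplier is arbitrary and the timings are only indicative:)

```rust
use std::time::Instant;

fn bench(label: &str, mut v: Vec<u64>) {
    let t = Instant::now();
    v.sort_unstable();
    println!("{}: {:?}", label, t.elapsed());
}

fn main() {
    let n: u64 = 10_000_000;

    // Scrambled keys (an odd multiplier is a bijection on u64) versus
    // an input drawn from only k = 3 distinct keys.
    let scrambled: Vec<u64> = (0..n).map(|i| i.wrapping_mul(0x9E37_79B9_7F4A_7C15)).collect();
    let few_distinct: Vec<u64> = (0..n).map(|i| i % 3).collect();

    bench("scrambled keys ", scrambled);
    bench("3 distinct keys", few_distinct);
}
```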
------ unruledboy It's interesting that .NET's built-in quicksort is actually doing the same thing, with introsort behind the scenes. ------ torrent-of-ions I can see why one might blindly call qsort on already sorted data (when using user input), but why sorted data with one out-of-place element? Presumably that element has been appended to a sorted array, so you would place it properly in linear time using insertion and not call a sort function at all. Why does such a pattern arise in practice? ~~~ nightcracker You would be surprised how often people just use a (repeatedly) sorted vector instead of a proper data structure or proper insertion calls. It's a lot simpler to just append and sort again. Or the appending happens somewhere else in the code entirely. As a real-world example, consider a sprite list in a game engine. Yes, you could keep it properly sorted as you add/remove sprites, but it's a lot simpler to just append/delete sprites throughout the code and sort it a single time each frame, even if only a single sprite was added. So yes, technically this pattern is not needed if everyone always recognized and used the optimal data structure and algorithm at the right time. But that doesn't happen, and it isn't always the simplest solution for the programmer.
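(In code, that append-then-resort idiom looks something like the following sketch; the Sprite type and z_order field are hypothetical, and the point is only that the per-frame input is nearly sorted, which adaptive sorts handle cheaply:)

```rust
// Hypothetical sprite type; only the draw-order key matters here.
struct Sprite {
    z_order: i32,
    name: &'static str,
}

fn main() {
    let mut sprites = vec![
        Sprite { z_order: 0, name: "background" },
        Sprite { z_order: 5, name: "player" },
        Sprite { z_order: 9, name: "hud" },
    ];

    // Somewhere else in the frame, something gets appended...
    sprites.push(Sprite { z_order: 3, name: "particle" });

    // ...and once per frame the nearly-sorted vector is re-sorted.
    sprites.sort_unstable_by_key(|s| s.z_order);

    for s in &sprites {
        println!("{} (z = {})", s.name, s.z_order);
    }
}
```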
Career development and identifying the skills needed for personality development are important aspects of self-assessment. The current assessment revolves around personality traits and their development. To carry out the task, two members of my group were chosen; my friend Arif is the subject of the self-assessment. The paper intends to determine the candidate's values and interests insofar as they influence career choice, and to identify the personality traits that are interrelated with that choice. It will also compare the candidate's personality skills with the self-employment traits of the 21st century. In this respect, emotional intelligence bears directly on the decision-making process of career selection. The following paper will therefore discuss career traits and skills in support of the self-assessment. Identify the candidate's four values, four interests, four personality traits, and four skills. Values are the most important perceptions to follow in career selection, as values give the decision-making process a standardized base and norms (Mussen, 2015). Values are psychological objectives that tend to be part of life at every stage (Deesilatham and Hosany, 2018). Values determine the priorities and beliefs by which an individual wants to work and live; thus, these values also take part in a candidate's career selection. The values discussed here are those of the group member Arif. Values of the candidate in career selection: - Family relations and bonds are one value that comes into play in Arif's career selection, as he likes to involve family members in all the decision-making processes of his life. Choosing a career is crucial, so holding family values will benefit the candidate, since family gives the most useful opinions on career selection and development (Krahn and Galambos, 2014). - Apart from this, involving social status in career selection is another of Arif's values. Strategy is something that makes every selection systematic and logical, and this is a value Arif possesses. - Career selection should also involve a helping disposition, which not only guides Arif's career choice but also helps him improve his social skills. Career choice includes another objective that is interrelated with social skills, as the two attributes are related. - Another value identified in Arif was personal relationships and a cooperative nature. He maintains the value of cooperating with people, as he is good at maintaining personal channels. These, then, are the values identified in Arif that would be included in the process of selecting a suitable career. A person's values influence career selection in many ways, such as in choosing the workplace Arif wants to work in, and this influence can help him select a profession well (Little et al., 2018). Personality traits. Personality traits are always incorporated in the career selection process, as the selection proceeds on their basis. The personality traits found in Arif show that he is very good with communication and social skills.
More fully, the personality traits found in Arif were a cooperative and polite nature in working collaboratively, good communication skills, amiability, and the determination to deliver his work. These are the four personality traits found in the candidate. The four skills relevant to employment found in the candidate are efficiency and organization, open communication, the ability to work collaboratively, and good customer service. These traits, values, and skills can be applied in the choice of career, as all these attributes have the potential to be applied in the process of gaining employment benefits (Price et al., 2014). Self-assessment tools that help produce a recommendation about a suitable career: In order to identify the career that suits Arif, I conducted a research questionnaire (the VARK questionnaire: http://vark-learn.com/the-vark-questionnaire/). The questions and answers, along with the results, are given in the following images. Analysis of the results: The self-assessment tool helped analyze Arif's employment-related abilities in connection with his traits and values. Arif was asked to answer the questions in the questionnaire, which he did according to his own critical judgment. The results revealed scores of 5 for visual, 4 for aural, 2 for reading and writing, and 5 for kinesthetic. This suggests that Arif would be most effective in a management role. Identify, with proper reasoning, any three kinds of job roles that best match this candidate, based on the findings in the previous two steps. According to the previous discussion of Arif's self-assessment, values, personality traits, and skills, it is clear that Arif has the spontaneity and potential to do jobs that involve people, social welfare, and the improvement of cultural events. The three job roles appropriate for Arif, given all his traits and skills, are: 1) social worker, 2) entrepreneur, and 3) human resource management trainee. 1) Social worker: As previously discussed, Arif has a good understanding of family values and a pleasant connection with social gatherings, so he should make a good social worker. A social worker must have mutual understanding and good powers of observation to know the impact, possibilities, and costs of their work for others. 2) Entrepreneur: An entrepreneur is a self-achiever, a self-learner, and a self-motivator; someone who not only organizes the team's work but is also equally responsible for each member's faults as well as achievements. As Arif has a proper understanding of teamwork and the self-discipline to maintain a sustainable workflow in an organization, he could see himself as a successful entrepreneur (Ireland and Lent, 2018). 3) Human resource management trainee: As mentioned before, Arif has good interpersonal skills and is a very disciplined person, so he has good potential to become a human resource management trainee. Since it is very important for a human resource manager to observe, monitor, and guide employees, Arif could maintain the work culture and apply his potential effectively (Lim et al., 2016). Suggest two methods for self-management/time-management. Methods for self-management: Self-awareness is a very important skill in self-management. One should be aware of one's course of action in daily life; this helps increase self-control.
Responsibility is another crucial aspect of self-management. One should learn to take responsibility first at home; it will increase one's confidence later in the workplace. If someone does not know how to be productive and proactive from time to time, problems will always arise. One should know how to manage periods of heavy workload and schedule some time for refreshment. Employees should also learn how to cut off extra stress and take some time for themselves to maintain their mental peace. Methods for time management: - Delegation is a very important element of time management; it teaches how to save time and use it properly. Procrastination is postponing or delaying work; one should learn how to do a job on time. To this end, employees should make a list, which helps them keep their work on schedule. - Employees should know how to prioritize their work. It is very important to give employees the freedom to be who they are and let them think freely; this improves the productivity of their work, saves time, and strengthens their confidence. One must be punctual and particular about one's work. Completing work on time is a very important aspect of time management. Always reward your employees to keep them active; it will help motivate them, and they will do their jobs more effectively. Conclusion: In concluding the self-assessment, I learned that Arif has sufficient management and interpersonal skills to establish himself in a successful career and accomplish his life goals properly. Through time management and self-management, Arif can also learn to value those skills and implement behavioral practices in his daily regime to improve in his job role. Self-assessment is the process through which one can inspect one's own self-evaluation, which greatly helps a person's inner growth. It is also true that traits, skills, and knowledge vary from person to person, so different techniques are applicable to different people in this process.
https://www.abcassignmenthelp.com/gew-402-self-assessment
Reviewed by Ferit Kılıçkaya, Middle East Technical University Intended as a guide and resource book for students planning to design their own research projects, this book is divided into two parts. The first part focuses on data types, such as corpora and surveys. The second focuses on data coding, analysis, and replication, which is discussed in relation to coding, statistical analyses, and meta-analyses. In the first chapter, the editors briefly introduce the aim of the book and describe how each chapter contributes to the overall theme. Ch. 2 discusses how learner corpus research has evolved since the late 1980s and deals with how learner corpora can be collected, analyzed, and interpreted. In Ch. 3, data collection methods used in generative second language acquisition are introduced, highlighting that the method chosen is determined by several factors, such as the linguistic phenomena and populations. Ch. 4 provides a discussion of how research methods such as experimental studies and action research can be utilized to investigate the issues that have emerged within the field of instructed second language acquisition. Ch. 5 introduces survey studies, explaining each step involved in designing a survey, analyzing data, and reporting the results. In Ch. 6, case study research is discussed, starting with a historical perspective, followed by an explanation of how a case study can be conducted. The authors of Ch. 7 focus on how psycholinguistic methodologies involving tasks such as picture-word interference and sentence preamble can be implemented to inquire into how people comprehend and produce language. Ch. 8 considers second language writing and elaborates on how an analysis of the writing process can be achieved. Ch. 9 examines second language reading, dealing with issues such as methodological foundations and dual-language impacts on reading development. In Ch. 10, qualitative research is discussed, highlighting its pivotal characteristics in research traditions such as ethnography and conversation analysis. Ch. 11 addresses how to code data from second language studies validly and reliably. Ch. 12 deals with coding qualitative data through computer-assisted qualitative data analysis software (CAQDAS) such as ATLAS.ti and HyperRESEARCH. The author of Ch. 13 discusses how to conduct the basic and most frequently used inferential statistical tests, such as t-tests, analysis of variance, chi-square, and Pearson correlation tests. In Ch. 14, the authors look at the key steps of conducting a meta-analysis, a statistical method used to determine the mean and the variance of various studies conducted on a specific issue or topic. In the final chapter, Ch. 15, replication studies are discussed, with a focus on how to conduct a replication study. Overall, the editors and the authors of the chapters have provided students and teachers in the field of second language acquisition with a handy book on various aspects of data types, coding, and analysis in research methodology. The study boxes providing summaries of exemplary studies on the topic discussed within each chapter, the study questions available at the end of each chapter, and the practical step-by-step guide offered throughout will help readers to increase their knowledge as they design their own research projects.
http://journals.linguisticsociety.org/booknotices/?p=2247
Guest Post By Leischen Stelter, editor, In Public Safety It's one of the cardinal rules of writing, but it's easy to forget: writing to (and reaching) a target audience is more important than the number of people you reach. As the editor of the blog In Public Safety, I focus all content on reaching those working in (or aspiring to work in) a public safety field—specifically law enforcement, the fire services, and emergency management. These professionals are our desired readers, so we don't really care whether (or expect that) those in the general population view our content. Frankly, the average person isn't likely to be interested in our topics, so our articles won't resonate with them. And that's okay. The content isn't created for them. Even knowing that, it's often hard for content creators to look beyond page views as their primary objective. After all, page views influence Google page ranking. But in order to build the desired readership and reach your target audience, content creators must put just as much, or even more, weight on other metrics. Time on Page Looking at your "time on page" metric can tell you when your content is resonating with readers: the higher the number, the longer people are spending reading the content. This metric is far more meaningful than someone simply clicking on an article, realizing it's not for them, and leaving your site. You may have gotten a page view from it, but you didn't get a reader. Determine benchmarks for your content. For time on page, shoot for an average of one minute. The better you get at delivering desired content to your specific audience, the higher this number will grow. Bounce Rate This metric tells you the percentage of people who come to your site, visit one page, and then leave. A high bounce rate usually means that people are not finding what they want on your site. The lower this percentage, the more readers are coming to your site, reading an article, and then clicking on other pages that may interest them. In order to improve your bounce rate, consider embedding related articles into your content to give readers easy access to other pages that have similar information. You can also add widgets to your site that automatically surface articles with similar tags or keywords. Behavior Insights The Behavior tab in Google Analytics provides a wealth of information about your readers. One thing it will tell you is which articles are the most popular. By evaluating the content that has the highest views, you can work to build similar content around those topics. However, be cautious about this strategy. Remember, it's not just about page views; it's about what content resonated with your audience. When evaluating your site's most popular articles, filter according to page views and then apply a secondary filter for time on page (or another metric). Doing so eliminates articles that had a great headline with high SEO properties and so ranked high in searches, but turned out not to be what readers expected once they clicked. Engagement: Keep the Conversation Going Ultimately, you want readers to engage with your content. You want them to comment on it, whether on the article itself or on social media sites. You want them to share it with friends or colleagues. When articles get comments or shares from readers, you have a great opportunity to re-engage them and keep that conversation going.
For example, on In Public Safety, one of our best-performing articles is Demystifying the Background Investigation Process: What You Can Expect When Applying for a Law Enforcement Job. Again, this is content that only interests those aspiring to work in law enforcement, but it’s garnered A LOT of attention. The article currently has more than 246K page views (!), but, more importantly, it has a lot of comments: 50 Facebook comments and 348 traditional comments. Some of those comments are from the author. Every so often, I will send him an email letting him know there are new comments and he’ll go through and respond to readers. Because of the topic of this article, readers are looking for insights or information about applying for a job in law enforcement. The author will provide guidance or information, thus furthering the conversation and re-engaging that reader. While I love pointing to this article because of its impressive stats, the fact is that it’s an anomaly. We have way more articles that only have a few hundred views, or have high bounce rates, or very little time on page. And that’s fine. It’s expected. Not every article will be a home run. The most important thing is that content creators understand performance metrics, develop benchmarks for the most valuable metrics, and use that information to guide them in developing and shaping content that has the best chance to reach the desired reader.
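(For the analytically minded, the arithmetic behind the two benchmark metrics discussed above is simple enough to sketch in code; the session records below are made up purely for illustration:)

```rust
// Illustrative only: computing average time on page and bounce rate
// from a toy set of (page, seconds on page, pages in session) records.
fn main() {
    let visits = [
        ("background-investigation", 95, 3),
        ("background-investigation", 40, 1), // single-page session = bounce
        ("background-investigation", 130, 2),
        ("background-investigation", 5, 1),
    ];

    let views = visits.len() as f64;
    let avg_time: f64 = visits.iter().map(|v| v.1 as f64).sum::<f64>() / views;
    let bounces = visits.iter().filter(|v| v.2 == 1).count() as f64;

    println!("page views:  {}", visits.len());
    println!("avg time:    {:.0}s (benchmark: 60s)", avg_time);
    println!("bounce rate: {:.0}%", 100.0 * bounces / views);
}
```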
https://kurtzdigitalstrategy.com/2017/09/21/why-focusing-on-page-views-is-a-misguided-content-strategy/
New York — Laura and John Arnold Foundation (LJAF) today announced a grant to the National Academy of Sciences (NAS) to conduct the first-ever comprehensive evaluation of the state of the science of eyewitness identification, a frequently used tool in the investigation and prosecution of crimes. Following an in-depth review of existing research on eyewitness identification, NAS will issue a report that will provide recommendations about how to improve the administration of lineups and photo arrays, and to ensure accurate and appropriate use of eyewitness evidence. NAS will also provide guidance on what additional research in these areas should be undertaken. Eyewitness identification of criminal suspects plays a key role in many criminal investigations and cases, with an estimated 77,000 suspects identified by eyewitnesses each year. When these identifications are incorrect, however, they can hamper the ability of law enforcement to solve a crime and hinder the administration of justice. Mistaken eyewitness identification played a role in 76 percent of the first 250 cases in which people convicted of a crime were later exonerated by DNA evidence. And in nearly half of these cases, the real perpetrator went on to commit additional violent crimes while the innocent person was serving time for the original offense. “It is critically important that we fully understand best practices in eyewitness identification, and we are pleased to ask NAS to take on this project,” said LJAF Vice President of Criminal Justice Anne Milgram. “Eyewitness identification is an enormously important law enforcement tool. But it is equally critical for public safety that these identifications be correct.” With the LJAF grant, NAS will assemble a multi-disciplinary committee of leading experts to review relevant literature, meet with prominent researchers, and assess what is known — and what remains unresolved — about the science behind eyewitness identification. The committee will examine a variety of issues, including how a witness’s memory may be impacted by the following factors: - The procedures used to create and administer live and photo lineups - The length of time of a witness’s exposure to a suspect during the commission of a crime - The amount of time that elapses between the witness’s viewing of the perpetrator and the identification - The presence of a weapon at the time the witness viewed the perpetrator NAS is expected to issue a report with findings, recommended best practices, and high-priority areas for future research in March 2014.
https://www.arnoldventures.org/newsroom/laura-john-arnold-foundation-fund-national-academy-sciences-evaluation-groundbreaking-assessment-eyewitness-identification-procedures
You don’t have a portfolio when you’re in the admin support business because admin support is a service, not a tangible, visible product (like design is). Rather, your “portfolio” is the experience clients get dealing with you. It’s your service, your communication, your responsiveness, your policies, processes and procedures, your systems, your standards, how your website looks and works, what your testimonials say, your case studies… These are all demonstrations—samplings and examples—of your expertise, competence, professionalism and the service experience clients will get should they decide to work with you. And if they are positive, if they are smooth, if they are well-executed, those are the things that instill confidence and trust in your potential clients.
https://www.administrativeconsultantsassoc.com/blog/2014/07/14/
Our testimonials tell you all you need to know about how we help our clients with first-rate service. We pride ourselves on our proactive approach to your financial, accounting, tax and business needs, and we place a great deal of importance on ensuring our clients are more than satisfied with our services. We have listed some of the kind testimonials we have received from satisfied clients over the years, across the wide range of sectors we serve and services we have provided.
https://www.fvtax.co.uk/testimonials/
Drawing on the AHRC-funded project Design Routes that Martyn leads, this lecture will explore how design can make a meaningful contribution to developing and revitalising culturally significant designs, products and practices, to make them relevant to the needs of people today. Increasingly, such traditions are being reassessed and revitalised as their rich historical links with community and culture are recognised. This lecture will explore the links that people see between products and place – their roots – and the fact that these links change over time – their routes. Taking as its starting point designs and products that are linked to particular places, employ traditional making processes or are embedded in local ways of life, the lecture will consider the design and creative ecology that sustains and fosters such products and practices. Additionally, Martyn will discuss effective revitalisation strategies through design that are capable of supporting the provenance, heritage and meaning of particular products, while enhancing their relevance and viability in contemporary society. Martyn is Professor of Design in Manchester School of Art and was formerly Head of the Lancaster Institute for the Contemporary Arts at Lancaster University. As a trained product designer, he has research interests exploring the strategic approaches designers use to consider the future, in particular the ability of designers to envision potential social, cultural, technological and economic futures. Martyn has led a number of funded research projects, both in the UK and in Europe. He is currently an Expert Advisor to the European Commission funded 'Design for Europe' project, Principal Investigator for the AHRC-funded 'Design Routes' project, Co-Investigator on the AHRC/ESRC-funded 'Design Ecologies' project and Co-Investigator on the AHRC-funded 'Living Design' project. — Part of the ASK (Art Seeks Knowledge) Open Lecture Series by Professors and visiting Professors at the Manchester School of Art. All are welcome to these open talks, which offer a snapshot of the breadth and depth of some of our research and practice at the cutting edge of our disciplines. Admission is free, please book your ticket online.
https://www.art.mmu.ac.uk/events/2016/ask-martyn-evans/
--- abstract: 'We establish a simple generalization of the famous theorem of Morley about trisectors in a triangle, with a purely synthetic proof using only angle chasing and similar triangles. Furthermore, based on the converse construction, another simple extension of Morley’s Theorem is obtained and proven.' address: - 'I. Zanna 27, Thessaloniki 54643, Greece' - 'High School for Gifted Students, Hanoi University of Science, Vietnam National University, 182 Luong The Vinh Str., Thanh Xuan, Hanoi, Vietnam' author: - Nikos Dergiades - Tran Quang Hung title: 'On some extensions of Morley’s trisector theorem' --- Introduction ============ Over one hundred years ago, in 1899, Frank Morley introduced a geometric result. This result is so classic that Alexander Bogomolny once said “it entered mathematical folklore”; see [@1]. Morley’s marvelous theorem states as follows: \[thm1\]The three points of intersection of the adjacent trisectors of the angles of any triangle form an equilateral triangle. \[fig1\] Many mathematicians consider Morley’s Theorem to be one of the most beautiful theorems in plane Euclidean geometry. Throughout history, numerous proofs have been proposed; see [@1; @2; @6; @10; @4; @7; @8]. There is a generalization of Morley’s Theorem using projective geometry in [@9]. Some extensions of this theorem have recently been analyzed by Richard Kenneth Guy in [@3]; Guy’s work is an extensive and deep study of Morley’s Theorem. In the main part of this paper, we would like to offer, and prove synthetically, a simple generalization of Morley’s Theorem. The generalized theorem is introduced as follows: \[thm2\]Let $ABC$ be a triangle. Assume that three points $X$, $Y$, $Z$ and the intersections $D = BZ \cap CY$, $E = CX \cap AZ$, $F = AY \cap BX$ lie inside triangle $ABC$ and satisfy the following conditions - $\angle BXC = 120^\circ + \angle YAZ$, $\angle CYA = 120^\circ + \angle ZBX$, and $\angle AZB = 120^\circ + \angle XCY$. - The points $X$, $Y$, and $Z$ lie on the bisectors of angles $\angle BDC$, $\angle CEA$, and $\angle AFB$, respectively. Then triangle $XYZ$ is an equilateral triangle. When $X$, $Y$, and $Z$ are the intersections of the adjacent trisectors of triangle $ABC$, it is easily seen that they satisfy the two conditions of Theorem \[thm2\]. Thus Theorem \[thm1\] is a direct consequence of Theorem \[thm2\]. An important property of the pair of triangles $ABC$ and $XYZ$ is introduced in the following theorem: \[thm3\]The triangles $ABC$ and $XYZ$ of Theorem \[thm2\] are perspective. In the last section of this paper, we shall apply a converse construction to find another extension of Morley’s Theorem. Some new equilateral triangles in a given arbitrary triangle are also found. The family of these new equilateral triangles closely relates to the construction of Morley’s equilateral triangle. Proofs of the theorems ====================== The proofs of Theorem \[thm2\] and Theorem \[thm3\] are given in this section. \[fig2\] The main idea of this proof comes from [@2]. (See Figure \[fig2\].) Set $\angle YAZ = x$, $\angle ZBX = y$, and $\angle XCY = z$. Since $X$ lies inside triangle $DBC$ (because $X$ lies inside triangle $ABC$ and also lies on the bisector of angle $\angle BDC$), we have $$\angle BDC = \angle BXC - y - z = 120^\circ + x - y - z.$$ Similarly, $\angle CEA = 120^\circ + y - z - x$ and $\angle AFB = 120^\circ + z - x - y$.
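(For the reader’s convenience, the first equality above is an instance of the standard fact that if a point $X$ lies inside triangle $DBC$, then $$\angle BXC = \angle BDC + \angle XBD + \angle XCD;$$ here $\angle XBD = \angle ZBX = y$ and $\angle XCD = \angle YCX = z$, since $D$ lies on the rays $BZ$ and $CY$, and rearranging gives the stated value of $\angle BDC$.)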
On the sides of an equilateral triangle ${X}'{Y}'{Z}'$, the isosceles triangles ${D}'{Z}'{Y}'$, ${E}'{X}'{Z}'$, and ${F}'{Y}'{X}'$ are constructed outwardly such that $\angle Y'D'Z' = \angle YDZ$, $\angle Z'E'X' = \angle ZEX$, and $\angle X'F'Y' = \angle XFY$. Take the intersections ${A}' = {E}'{Z}' \cap {F}'{Y}'$, ${B}' = {F}'{X}' \cap {D}'{Z}'$, ${C}' = {D}'{Y}' \cap {E}'{X}'$. From the quadrilateral ${A}'{E}'{X}'{F}'$, it is deduced that $$\begin{gathered} \angle {A}'_1 = 360^\circ - \angle Z'E'X' -\\ \left[\left( {90^\circ - \frac{\angle Z'E'X'}{2}} \right) + 60^\circ + \left(90^\circ - \frac{\angle X'F'Y'}{2} \right)\right] - \angle X'F'Y', \end{gathered}$$ which implies that $$\angle {A}'_1 = 120^\circ - \frac{\angle Z'E'X' + \angle X'F'Y'}{2} = 120^\circ - \frac{240^\circ - 2x}{2} = x.$$ An analogous argument shows that $$\angle {B}'_1 = y\quad\text{and}\quad\angle {C}'_1 = z.$$ Since ${D}'{X}'$ is the bisector of $\angle B'D'C'$ (from the constructions of the isosceles triangle $D'Z'Y'$ and the equilateral triangle $X'Y'Z'$), $\triangle DBX \sim \triangle {D}'{B}'{X}'$ (because they have the same angles $y$, $\frac{\angle BDC}{2}$), and $\triangle DCX \sim \triangle {D}'{C}'{X}'$ (because they have the same angles $z$, $\frac{\angle BDC}{2}$), we obtain $$\frac{XB}{{X}'{B}'} = \frac{DX}{{D}'{X}'} = \frac{XC}{{X}'{C}'}\quad\text{or}\quad\frac{XB}{XC} = \frac{{X}'{B}'}{{X}'{C}'},$$ and also $$\angle {B}'{X}'{C}' = \angle B'D'C' + \angle {B}'_1 + \angle {C}'_1 = \angle BXC.$$ The two previous conditions show that $\triangle XBC \sim \triangle {X}'{B}'{C}'$ (s.a.s.). Analogously, $\triangle YCA \sim\triangle {Y}'{C}'{A}'$ and $\triangle ZAB \sim \triangle {Z}'{A}'{B}'$. Finally, from these similar triangles it follows that $\angle BAC = \angle B'A'C'$, $\angle CBA = \angle C'B'A'$, and $\angle ACB = \angle A'C'B'$, so $\triangle ABC \sim \triangle {A}'{B}'{C}'$. This leads to the conclusion that $\triangle XYZ \sim\triangle {X}'{Y}'{Z}'$; in particular, $XYZ$ is equilateral, which completes the proof of the generalized theorem. The above proof also shows that Morley’s Theorem can be proven simply, using similar triangles and angle chasing, in the same way. Barycentric coordinates will be used in the following proof of Theorem \[thm3\]; see [@5]. \[fig3\] Without loss of generality, assume that the side length of the equilateral triangle $XYZ$ is $1$. Therefore, in barycentric coordinates, $$X = \left( {1:0:0} \right),\ Y = \left( {0:1:0} \right),\ Z = \left( {0:0:1} \right).$$ Because $D$, $E$, and $F$ lie on the perpendicular bisectors of the sides $YZ$, $ZX$, and $XY$, respectively, we may assume that the coordinates of $D$, $E$, and $F$ are as follows: $$D = \left( { - p:1:1} \right),\ E = \left( {1: - q:1} \right),\ F = \left( {1:1: - r} \right).$$ Now, using the equations of lines [@5], we find that $$A = \left( { - 1:q:r} \right),\ B = \left( {p: - 1:r} \right),\ C = \left( {p:q: - 1} \right).$$ (For instance, the line $EZ$ has equation $qx + y = 0$ and the line $FY$ has equation $rx + z = 0$, whence $A = EZ \cap FY = (-1:q:r)$.) Obviously, the lines $AX$, $BY$, and $CZ$ concur at the point $P\left( {p:q:r} \right)$. This finishes the proof. Converse construction ===================== In this section, some newly discovered equilateral triangles based on a given arbitrary triangle are found. Now, coming back to Theorem \[thm2\]: even though it is really a generalization of Theorem \[thm1\] and has been proven, so far we have seen only one instance of it, namely Morley’s Theorem. The generalization would mean little if its only visible application were Theorem \[thm1\].
In order to exclude the objection that no triangle $XYZ$ satisfies the conditions of the generalized theorem except in the case of Morley’s triangle, we now give a converse construction together with a purely synthetic proof. \[thm4\]Arbitrary isosceles triangles $DYZ$, $EZX$, and $FXY$ are constructed outwardly of an equilateral triangle $XYZ$ with bases the sides of $XYZ$, such that the pairs of lines $(EZ,FY)$, $(FX,DZ)$, and $(DY,EX)$ meet at $A$, $B$, and $C$ on the same side as $D$, $E$, and $F$, respectively, relative to the sides of $XYZ$. Then $$\angle BXC = 120^ \circ + \angle ZAY,\ \angle CYA = 120^ \circ + \angle XBZ,\ \angle AZB = 120^ \circ + \angle YCX.$$ \[fig4\] Since $X$ lies on the bisector of $\angle D$, the sides $XY$, $XZ$ of the equilateral triangle $XYZ$ are equally inclined to the sides $DB$, $DC$, and so we have the equality of the angles designated $x$. Analogously, we have the equality of the angles designated $y$ and $z$. So $$\angle BXC = 360^ \circ - y - 60^ \circ - z = 120^ \circ + (180^\circ - y - z) = 120^ \circ + \angle ZAY.$$ Similarly, $\angle CYA = 120^ \circ + \angle XBZ$ and $\angle AZB = 120^ \circ + \angle YCX$, which finishes the proof. \[fig5\] On the configuration of Theorem \[thm4\], let $P$, $Q$, and $R$ be the circumcenters of triangles $AYZ$, $BZX$, and $CXY$, respectively. Since $R$ and $Q$ are the circumcenters of triangles $CXY$ and $BZX$, the isosceles triangles $RXC$ and $QXB$ have apex angles $\angle XRC = 2\angle XYC$ and $\angle XQB = 2\angle XZB$, where $\angle XYC = \angle XZB = x$ by the equalities designated above; hence angle chasing gives $$\angle CXR=\angle BXQ=90^\circ-x,$$ so $$\angle BXR=\angle QXC=360^\circ-y-z-60^\circ+90^\circ-x=390^\circ-x-y-z.$$ This means that there are six equal angles $$\angle BXR=\angle CXQ=\angle CYP=\angle AYR=\angle AZQ=\angle BZP.$$ At this point, using the above angle conditions as hypothesis, we propose another extension of Morley’s Theorem as follows: \[thm5\]Locate the points $X$, $Y$, and $Z$ lying inside a given triangle $ABC$ such that $$\angle BXR=\angle CXQ=\angle CYP=\angle AYR=\angle AZQ=\angle BZP,$$ where $P$, $Q$, and $R$ are the circumcenters of triangles $AYZ$, $BZX$, and $CXY$, respectively, each lying inside the respective triangle. Then triangle $XYZ$ is an equilateral triangle. \[fig6\] From the hypothesis we conclude that $\angle CXR = \angle QXB$, which leads to $$90^ \circ - \angle XYC = 90^ \circ - \angle BZX,$$ so $$\angle XYC = \angle BZX = x.$$ Similarly, we obtain the designations of the angles $y$ and $z$. From these, $$\angle BXR = \angle CXQ = 360^ \circ - y - z - \angle X + 90^ \circ - x = 450^ \circ- x - y - z - \angle X.$$ Hence, we get the same equalities $$\angle CYP=\angle AYR= 450^ \circ - x - y - z - \angle Y,$$ and $$\angle AZQ=\angle BZP= 450^\circ - x - y - z - \angle Z.$$ Thus, from the six equal angles of the hypothesis, it is easy to show that $$\angle X = \angle Y = \angle Z.$$ Therefore, the triangle $XYZ$ is equilateral. The theorem is proven. Note that Theorem \[thm5\] becomes Morley’s Theorem if we add the conditions $$\angle BXR=\angle CXQ=\angle CYP=\angle AYR=\angle AZQ=\angle BZP=150^\circ.$$ Finally, we conclude the article with an interesting consequence of Theorem \[thm5\], where all six equal angles (in Theorem \[thm5\]) are $180^\circ$ (see Figure 7). \[fig7\] \[thm6\]Select three points $X$, $Y$, and $Z$ lying inside a given triangle $ABC$ and satisfying the following conditions - $BZ$ and $CY$ meet at the circumcenter of triangle $AYZ$. - $CX$ and $AZ$ meet at the circumcenter of triangle $BZX$. - $AY$ and $BX$ meet at the circumcenter of triangle $CXY$. Then triangle $XYZ$ is an equilateral triangle. 
The authors would like to express their sincere gratitude and deepest respect to the two late mathematicians Alexander Bogomolny and Richard Kenneth Guy, who devoted their love and appreciation to recreational mathematics and made great contributions to the development and popularization of Morley’s famous theorem. A. Bogomolny, Morley’s miracle, Interactive Mathematics Miscellany and Puzzles, <https://www.cut-the-knot.org/triangle/Morley/index.shtml>. N. Dergiades, Nikos Dergiades’ proof, Interactive Mathematics Miscellany and Puzzles, <https://www.cut-the-knot.org/triangle/Morley/Dergiades.shtml>. H. S. M. Coxeter and S. L. Greitzer, *Geometry Revisited*, The Math. Assoc. of America, 1967. J. Strange, A Generalization of Morley’s Theorem, *Amer. Math. Monthly*, **81** no. 1 (1974), pp. 61–63. A. Connes, A new proof of Morley’s theorem, *Publications Mathématiques de l’I.H.É.S.*, **88** (1998), pp. 43–46. R. K. Guy, The lighthouse theorem, Morley & Malfatti: A budget of paradoxes, *Amer. Math. Monthly*, **114** no. 2 (2007), pp. 97–141. M. Kilic, A New Geometric Proof for Morley’s Theorem, *Amer. Math. Monthly*, **122** no. 4 (2015), pp. 373–376. P. Yiu, *Introduction to the Geometry of the Triangle*, Florida Atlantic University Lecture Notes, 2001; with corrections, 2013, available at <http://math.fau.edu/Yiu/Geometry.html>. Q. H. Tran, A direct trigonometric proof of Morley’s Theorem, *International Journal of Geometry*, **8** no. 2 (2019), pp. 46–48. P. Pamfilos, A short proof of Morley’s Theorem, *Elem. Math.*, **74** no. 2 (2019), pp. 80–81.
“Humanity must be practiced and applied truly for a healthy society” “The concept of humanity needs to be practiced in all walks of life and applied to society in order to realize its benefits. The concept of human rights must be practiced by society, and the same can be applied to the progress of the nation,” said Dr. S. S. Uttarwar, principal of an engineering college and a renowned orator of the city. He was delivering a spiritual discourse on Sant Dyaneshwar and the Dyaneshwari during the All India Seminar “Quality Progress 2020” at The Institution of Engineers (India), Nagpur. His lecture covered the life of Sant Dyaneshwar and his Warkari sampraday, and he quoted examples from the life of Sant Dyaneshwar in the 14th century. As he presented it, the teachings of Sant Dyaneshwar are applicable even in today’s scenario. In his lucid delivery, Dr. Uttarwar quoted many teachings of Sant Dyaneshwar to the audience and created a devotional atmosphere in the hall. He took the audience back to the era of this great saint and shared the tragedies of his life and how he overcame them. Dr. Uttarwar narrated the incidents of Sant Dyaneshwar having the Vedas recited from the mouth of a buffalo and commanding a wall to move forward to welcome Changdeo Maharaj, who had come there on a tiger to see him. This display of divine power by Sant Dyaneshwar dissolved the ego of Changdeo, who had assumed himself to be a god. At the age of 14, Sant Dyaneshwar translated the Bhagwad Geeta into Prakrit Marathi under the name Dyaneshwari. The Bhagwad Geeta was in Sanskrit, and the majority of people at that time were illiterate and not aware of its contents; Sant Dyaneshwar explained it to them. He was a son of Vitthalpant Kulkarni from Newasa, and his mother’s name was Rukminibai. The priests of that era sentenced his parents to death because they had broken the rules of married life in force at that time. His parents ended their lives by jumping into the Indrayani river, in the hope that their children might then receive support and livelihood from society. Nivruttinath, Sopan and Muktabai were his siblings. Nivruttinath was his guru in life, and Gahini Nath was the guru of Nivruttinath. They all worked for the betterment of society through the teachings of the Warkari sampraday. Sant Dyaneshwar laid down his life for society and, at the age of 21, took samadhi at Alandi, which is 14 km from Pune. The program concluded with the Pasayadan, the prayer in which Sant Dyaneshwar asked God for the betterment of human beings. Dr. Sanjay Uttarwar recited the Pasayadan and explained its meaning to the audience. The whole atmosphere was charged with devotional feeling and peace of mind.
IPCC: Yes, humans are definitely behind all this global warming we aren't having Prof: 'We're confident because we're confident'. Comment The Intergovernmental Panel on Climate Change says it's more certain than ever that humanity is warming the planet dangerously - despite the fact that a long-running flat period in global temperatures is well into its second decade. The IPCC released a brief "Summary for Policy Makers" today (go here), a teaser for its hefty summary of the scientific evidence for climate change, which is to follow in a few weeks. As leaks had suggested, the IPCC has increased its "confidence" that the noticeable warming experienced in the last part of the 20th Century was predominantly man-made - but sidesteps explanations of why it went away. CO2 has continued to increase rapidly this century, topping 400ppm. In fact, the Summary doesn't mention "pause" or "hiatus" once. Skeptics argue that the IPCC's increased confidence is hard to justify for two reasons: firstly, the climate models failed to predict the long pause (and over-estimated warming by between 71 and 159 per cent, according to Bjorn Lomborg), and secondly, the explanation of the pause lacks a solid empirical basis. "Climate models have improved since the AR4," the IPCC insists nonetheless, in its new WG1 Summary. The IPCC issues its reports every seven years, and this is the fifth batch since 1990. The IPCC's three working groups deal with, respectively, the scientific evidence (WG1), impacts (WG2) and policy responses (WG3), with each running to several hundred pages. WG2 and WG3 report next year and the full WG1 has yet to be released. The IPCC is several things, but it isn't, as is widely supposed, predominantly a UN-funded organisation. There is a small IPCC secretariat in Geneva that deals with administrative issues such as travel expenses, but the main process is paid for by national governments, who also select the scientists who write the first drafts. Nor is the IPCC process predominantly populated by scientists, at least not after the first preliminary and informal discussions. Participation in the process is also voluntary, and from the second stage of discussions on, national-level bureaucrats and activists become involved. All this may sound arcane, but it accounts for the flavour of the reports. The process culminates, in the months leading to publication, in an unwieldy travelling circus of some 18,000 delegates meeting to horse-trade points. How does the IPCC arrive at its confidence number? The SPM tells us that: "Probabilistic estimates of quantified measures of uncertainty in a finding are based on statistical analysis of observations or model results, or both, and expert judgment". As climate scientist Professor Judith Curry of Georgia Tech helpfully explains: "The 95% is basically expert judgment, it is a negotiated figure among the authors. The increase from 90-95% means that they are more certain. How they can justify this is beyond me."
https://www.theregister.co.uk/2013/09/27/ipcc_ar5_wg1_teaser/
EVENT: SGT Studio Project Presentations How can co-working be encouraged in the hierarchical culture of Uganda? How can community tourism be promoted in an off-the-map village in Mexico? Come hear the answers to these questions and many more in the SGT Studio presentations on the 15th of May! TIME: Wednesday, 15th May 2019, 10:00-12:00 PLACE: Makerspace (basement floor) Harald Herlin Learning center (Otaniementie 9) LANGUAGE: English COST: No registration / Free for all The 2019 SGT Studio course [10 cr] is coming to an end! From the beginning of the year, four multidisciplinary student teams have been working hard on tackling real-life development challenges in four different countries. Come witness the innovative solutions, stories and findings from the field and the final development proposals of the teams at the SGT Studio Presentation event. The four projects presented are: – KENYA: Finding self-sustaining and efficient ways to manage water resources – NEPAL: Building disaster resilience through communal synergies – UGANDA: Promoting creative co-working and innovation in Makerere University – MEXICO: Mapping the village of EL 20 to promote community tourism Find short summaries of the projects here. The morning will begin with a short introduction of the SGT Studio, after which the student teams will present their projects. The event will conclude with a panel discussion and some relaxed mingling. Welcome!
https://sgt.aalto.fi/event-sgt-studio-project-presentations/
Multiple Sclerosis (MS) is a disorder of the central nervous system: the brain, optic nerves and spinal cord. Plaque develops in these areas and causes demyelination, or the destruction of the protective myelin that sheaths nerve cells, which in turn impairs nerve signal transmission. Why this occurs is unknown, but it is thought to be an autoimmune disorder. There are several types of MS, but the most common are relapsing-remitting and chronic progressive. In the relapsing-remitting form, there are periods when MS worsens or exacerbates, followed by periods of improvement. The chronic progressive type slowly worsens over time. Symptoms of Multiple Sclerosis (MS) Symptoms of MS depend on the location of the plaque formation. Weakness, numbness or tingling and/or visual disturbances may be the first symptoms. When symptoms occur in a more acute manner, it is known as the first demyelinating event, and can be useful in making a definitive diagnosis. Subsequent symptoms may include: - Depression - Constipation - Urinary retention - Poor coordination and balance - Difficulty walking - Fatigue - Dizziness - Problems swallowing - Sexual dysfunction Diagnosing Multiple Sclerosis (MS) Diagnosing MS can be difficult and is often a lengthy process. Symptoms of weakness, numbness and/or tingling can indicate a wide variety of issues and alone are not a basis for a definitive diagnosis. A careful health history and neurological examination, along with other diagnostic testing, will help rule out other possibilities and give a definitive diagnosis of MS. Additional diagnostic testing may include: - Magnetic resonance imaging (MRI) of the brain and/or spinal cord. - Lumbar puncture to analyze the spinal fluid. - Lab tests to rule out other causes of the symptoms. Treating Multiple Sclerosis (MS) Treatment of MS may begin with disease-modifying first-level medications (injectable interferon [e.g., Avonex®, Betaseron®, Rebif®] and glatiramer [e.g., Copaxone®, Glatopa®]) and then progress as needed to oral medications (e.g., dimethyl fumarate [Tecfidera®], fingolimod [Gilenya®], teriflunomide [Aubagio®]) and monoclonal antibodies (e.g., alemtuzumab [Campath®, Lemtrada®], natalizumab [Tysabri®], ocrelizumab [Ocrevus®]). Steroids may be used during acute flares. Immune globulin therapy administered into the blood stream intravenously (IVIG) can be used when other MS disease modifiers are not effective and exacerbations or declines occur despite compliance with these medications, or if there are intolerable side effects. Another use is after childbirth. It is thought that the hormones during pregnancy keep MS symptoms under control. Once the child is born, the risk of exacerbation increases. IVIG is also thought to be safe while mothers breastfeed. Websites: - All About Multiple Sclerosis - Mayo Clinic - National Institute of Neurological Disorders and Stroke - National Multiple Sclerosis Society Support Groups: - MSWorld's Chat & Message Board features patients helping patients. - Multiple Sclerosis Foundation This content is not intended to substitute for professional medical advice. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Contact Nufactor At Nufactor, we are committed to providing our patients the education, support and resources necessary to complete your IVIG treatment successfully and with the desired outcomes. Please contact us with any further questions.
https://www.nufactor.com/conditions/neuromuscular-disorders/multiple-sclerosis-ms.html
There’s no denying that healthcare can be costly. Even if you have insurance through your employer, you might consider taking advantage of one of the federal government programs that encourage saving for medical expenses not covered by insurance. The most common types of accounts offered to employees are the Health Savings Account (HSA) and the Flexible Spending Account (FSA). The most significant difference between flexible spending accounts (FSAs) and health savings accounts (HSAs) is that an individual controls an HSA and contributions are allowed to roll over, while FSAs are less flexible and are owned by the employer. Apart from this, there are several key differences between HSAs and FSAs. Both HSAs and FSAs allow people with health insurance to set aside money for healthcare costs referred to as “qualified medical expenses”. This includes deductibles, copayments and coinsurance, and monthly prescription costs. You usually receive a debit card that you can use to pay for qualifying expenses. And both types of accounts have tax benefits as well. Let’s discuss HSA vs. FSA – the difference Although FSAs and HSAs both allow people to use pre-tax income for eligible medical expenses, there are considerable differences between them. Those differences include the qualifications, contribution limits, rules for rollovers and changing contribution amounts, and withdrawal penalties. Qualification: An FSA must be set up by the employer, while an HSA is available only to people who have a high-deductible health plan, or HDHP. Annual contribution limits: The limit for an FSA is up to $2,650 per individual and up to $5,300 per household, while for an HSA it’s up to $3,450 per individual and up to $6,900 per household. Account ownership: An FSA is owned by the employer and lost with a job change, unless eligible for continuation through COBRA, while an HSA is owned by the individual and carries over with employment changes. Rollover rules: With an FSA, employees can roll over $500 into next year’s FSA, but this is decided by the employer. With an HSA, unused funds roll over every year. Penalties for withdrawing funds: You may have to submit expenses to be reimbursed by an FSA and, depending on the employer, you may not have access to funds for nonmedical expenses. With an HSA, savings can be taken out of the account tax-free after age 65; if used before 65 for nonmedical expenses, a withdrawal is subject to a 20% penalty and must be declared on the income tax form. By looking at the above HSA vs. FSA comparison, you can choose the one that best suits you. Overall, the higher limits and contribution rollover of the health savings account make it a better choice if you can qualify. HSAs are more flexible than FSAs, allowing you to save for potential medical expenses and accumulate money over time.
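As a rough illustration of the withdrawal rules just described, the sketch below encodes the 20% pre-65 penalty mentioned above. The marginal tax rate is an assumed example figure, and none of this is tax advice.

```python
# Illustrative sketch of the HSA withdrawal rules described above.
# The 20% penalty figure comes from the article; the marginal tax
# rate is an assumed example value, not tax advice.

def hsa_withdrawal_net(amount: float, age: int, qualified: bool,
                       marginal_tax_rate: float = 0.22) -> float:
    """Return the net cash received from an HSA withdrawal."""
    if qualified:
        return amount                      # qualified medical expenses: tax-free at any age
    tax = amount * marginal_tax_rate       # nonmedical withdrawals count as income
    penalty = amount * 0.20 if age < 65 else 0.0  # 20% penalty applies only before age 65
    return amount - tax - penalty

print(hsa_withdrawal_net(1000, age=40, qualified=False))  # 580.0
print(hsa_withdrawal_net(1000, age=67, qualified=False))  # 780.0
print(hsa_withdrawal_net(1000, age=40, qualified=True))   # 1000.0
```

Under these assumed numbers, a $1,000 nonmedical withdrawal at age 40 nets only $580, which is why the rollover and post-65 rules matter when comparing the two account types.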
ACA Health Insurance Plans vs. Healthcare Sharing Ministries The individuals who had previously been uninsurable due to poor health or pre-existing conditions received a gift in the name of the Patient Protection and Affordable Care Act in 2010. It brought with it a new era of healthcare for many Americans. The law proposed a new structure of subsidies intended to help make healthcare more affordable, which presented an opportunity for many to get comprehensive health insurance coverage via their state health insurance portal or its federal counterpart. The introduction of the ACA health insurance plans eliminated lifetime caps on coverage, meaning you could never get kicked off your plan for getting sick or “run out” of coverage, and gave many Americans a chance to finally buy healthcare regardless of their health, employment situation, or income. While ACA health plans have kept many families happy, another healthcare solution has been gaining popularity over the years – healthcare sharing plans, or healthcare sharing ministries. While they aren’t considered “health insurance,” healthcare sharing ministries can be used to reduce the out-of-pocket cost by families who want to share their healthcare expenses with other like-minded families. Individuals and families who choose healthcare sharing ministries pay a monthly “sharing amount” (read premium) and, depending on the program they choose, they can enjoy many of the same perks of traditional health insurance – like discounts on healthcare, limits on out-of-pocket costs, and predictable monthly payments. ACA Plans vs. Healthcare Sharing Ministries: What you should know Like choosing any insurance, it’s important to weigh the pros and cons before you sign up. We have laid out the pros and cons of both plans to help you make a decision. Benefits: ACA plans As already mentioned, the most attractive feature of ACA health insurance plans is that there are no lifetime limits or caps. You can buy coverage regardless of your health, and subsidies are available for those who earn less than 400% of the FPL. Drawbacks: ACA plans ACA plans are quite expensive, and the plans may run on narrow networks. Also, the availability of plans depends on your state of residence. Benefits: Healthcare sharing ministries The most appealing feature of healthcare sharing plans is their cost. They’re less expensive and more cost-efficient than ACA plans. The annual costs and deductibles are low, and families can share their costs with other like-minded families. Also, with this plan, you can avoid the penalty for not having health insurance. Drawbacks: Healthcare sharing ministries You need good health to qualify for this plan. It has lifetime or annual caps on coverage, and you cannot use a health savings account. At the end of the day, the best way to find the right plan for yourself or your family is to look at your unique situation and determine which plan might offer better protection without costing too much or sacrificing your quality of care. After all, it’s your life! Why Should You Choose a High Deductible Health Plan? For a growing number of Americans, it has become crucial to choose a high-deductible health plan. The trend started a decade ago and shows no signs of disappearing. At many firms, it’s the sole health insurance that’s offered to employees. However, a considerable number of people still find themselves without employer-offered insurance benefits. Be it a student who’s in between jobs or a self-employed individual, people need to re-evaluate their Texas health insurance plans. In such a case, one option to consider that will favour your budget is a high-deductible health plan. A High-Deductible Health Plan (HDHP) is a health insurance plan with low premiums and high deductibles, compared to traditional health plans. How does an HDHP work? An HDHP involves the participant assuming all expenses until a deductible amount has been met. 
Any expenses that are greater than the deductible will be covered as part of the health plan; the insurer covers all medical expenses after the deductible has been met in full. Participants pay much lower monthly premiums for medical coverage, and with premiums more reasonable, it is possible to save money for the future in case the high deductible must be met. Also, monthly medical costs decrease substantially with an HDHP. HDHPs can also be ideal for young people, as they do not often deal with major illnesses. Health coverage is something they have just in case something happens unexpectedly. When the probability of a medical expense is low, an HDHP offers adequate coverage with lower costs in the long run. People with HDHPs can also contribute to a Texas health savings account, which is a health savings account that allows you to contribute and withdraw money for qualified medical expenses without being taxed. So, if you combine your HDHP with an HSA, you can pay that deductible, plus other qualified medical expenses, using the money you set aside in your tax-free HSA. The insurance benefits of an HDHP also give you some peace of mind, as it covers catastrophic events and at the same time helps you save between 30 and 60% on medical costs. You must consider the following when choosing a health plan that suits you: - If you’re healthy and usually go to the doctor once a year, a lower monthly premium may be a good choice for you. - If a chronic health condition means that you go often to your primary care provider (PCP) or specialists during the plan year, you must decide if savings from low premiums are greater than the cost of regular care or medication. Carefully analyzing the ins and outs of high-deductible health insurance may help you find the coverage that’s right for you. In addition to saving you money, finding the right plan for you can help ensure that you’ll receive coverage for the health care you need when you need it.
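To make the premium-versus-deductible trade-off concrete, here is a rough back-of-the-envelope comparison. All dollar figures are invented example inputs, not actual plan quotes, and the model deliberately ignores coinsurance and out-of-pocket maximums.

```python
# Rough annual-cost comparison between a high-deductible plan and a
# traditional plan, in the spirit of the trade-off described above.
# All dollar amounts are made-up example inputs, not real quotes, and
# coinsurance / out-of-pocket maximums are ignored for simplicity.

def annual_cost(monthly_premium: float, deductible: float,
                expected_medical_bills: float) -> float:
    """Premiums plus out-of-pocket spending up to the deductible."""
    return 12 * monthly_premium + min(expected_medical_bills, deductible)

for bills in (500, 3000, 8000):
    hdhp = annual_cost(monthly_premium=250, deductible=6000, expected_medical_bills=bills)
    trad = annual_cost(monthly_premium=450, deductible=1500, expected_medical_bills=bills)
    print(f"bills=${bills}: HDHP=${hdhp:.0f}, traditional=${trad:.0f}")
```

With these assumed inputs, the HDHP wins for low expected medical spending and the traditional plan wins once bills are large, which matches the doctor-visit rule of thumb in the list above.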
https://www.thelambertagency.com/tag/health-savings-account/
Develop new mechanisms for the enhancement of citizen energy communities, and thus contribute to a digitalized, decentralized and hands-on energy transition. Featured research (64) The success of incentives for investments in sustainable residential energy technologies depends on individual households actively participating in the energy transition by investing in electrification and by becoming prosumers. This willingness is influenced by the return on investments in electrification and by preferences towards environmental sustainability. Returns on investment can be supported by a preferential regulation of Citizen Energy Communities, i.e. a special form of microgrid regulation. However, the exact effect of such regulation is debated and is therefore analyzed in this study. We propose a multi-periodic community development model that determines household investment decisions over a long time horizon, with heterogeneous individual preferences with regard to sustainability and heterogeneous energy consumption profiles. We consider that investment decisions which would increase individual utility might be delayed due to inertia in the decision process. Decisions are determined in our model based on individual preferences using a multi-objective evolutionary algorithm embedded in an energy system simulation. In a case study, we investigate the development of a neighborhood in Germany consisting of 30 households with regard to community costs and community emissions, with and without Citizen Energy Community regulation as proposed by the European Union. We find that Citizen Energy Community regulation always reduces community costs and emissions, while heterogeneous distributions of economic and ecologic preferences within the community lead to higher gains. Furthermore, we find that decision inertia considerably slows down the transformation process. This shows that policymakers should carefully consider whom to target with Citizen Energy Community regulation and that subsidies should be designed such that they counterbalance delayed private investment decisions. The transition of the energy sector towards more decentralized, renewable and digital structures and a higher involvement of local residents as prosumers calls for innovative business models. In this paper, we investigate a sharing economy model that enables a residential community to share solar generation and storage capacity. We simulate 520 sharing communities of five households each with differing load profile configurations and find that they achieve average annual savings of 615€ compared to individual operation. Using the gathered data on electricity consumption in a sharing community, we discuss a fixed pricing approach to achieve a fair distribution of the profits generated through the sharing economy. We further investigate the impact of prosumers’ and consumers’ load profile patterns on the profitability of the sharing communities. Based on these findings, we explore the potential to match and coordinate suitable communities through a platform-based sharing economy model. Our results enable practitioners to find optimal additions to an energy sharing community and provide new insights for researchers regarding possible pricing schemes in energy communities.
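A stylized way to see where such community savings come from is to compare pooled versus individual settlement of the same load and PV profiles. The sketch below is an editorial illustration, not the paper's model: the prices, the single-time-step profiles, and the simple netting rule are all assumed values.

```python
# Minimal sketch of the community-vs-individual cost comparison behind
# the reported savings: pooling load and PV lets surplus generation
# offset neighbors' demand before any grid import. Prices, profiles and
# the netting rule are illustrative assumptions, not the paper's model.

GRID_PRICE = 0.35      # EUR/kWh bought from the grid (assumed)
FEEDIN_PRICE = 0.08    # EUR/kWh paid for exported PV surplus (assumed)

def cost(load_kwh: float, pv_kwh: float) -> float:
    """Electricity cost of one household (or a pooled community) per period."""
    net = load_kwh - pv_kwh
    return net * GRID_PRICE if net >= 0 else net * FEEDIN_PRICE  # negative = revenue

# One time step: two prosumers with PV surplus, three consumers without PV.
loads = [2.0, 1.5, 3.0, 2.5, 1.0]
pv    = [4.0, 3.5, 0.0, 0.0, 0.0]

individual = sum(cost(l, p) for l, p in zip(loads, pv))
community  = cost(sum(loads), sum(pv))
print(f"individual: {individual:.2f} EUR, community: {community:.2f} EUR")
```

Under these assumptions the pooled community pays less because surplus PV that would otherwise earn only the low feed-in price instead offsets neighbors' grid purchases at the full retail price.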
Coordinated operation of Coupled Electric Power and District Heating Networks (CEPDHNs) brings advantages: for example, the district heating networks can provide flexibility to the electric power network, and the entire system operation can be further decarbonized through heat pumps and electric boilers using electricity from renewable energy sources. Still, today CEPDHNs are often not operated in a coordinated way, which causes a loss of efficiency. This paper shows how efficient resource allocation is achieved by determining the power exchange between both networks over an aggregated market. We introduce a welfare-optimizing, market-based operation for a CEPDHN that satisfies operational constraints and considers network losses. The objective is to integrate uniform-pricing market clearing and operational constraints into one approach, in order to obtain high incentive compatibility for the market participants while preventing high uplift costs from redispatch. For this, we use a hybrid market model mainly based on uniform marginal pricing and additionally utilize pay-as-bid pricing for a fraction of the allocated bids and offers. We perform a case study with a real CEPDHN to validate the functionality of the developed approach. The results show that our solution leads to efficient resource allocation while maintaining safe network operation and preventing uplift costs due to redispatch. Electric vehicles have proven to be a viable mobility alternative that leads to emissions reductions and hence to the decarbonization of the transportation sector. Nevertheless, electric vehicle adoption is progressing slowly. Vehicle fleets are a promising starting point for increased market penetration. With this study, we address the issue of fleet electrification by analyzing a data set of 81 empirical mobility patterns of commercial fleets. We conduct a simulation to design a decision support system for fleet managers, evaluating which fleets have good potential for electrification and how fleets can improve the number of successful electric trips by adapting their charging strategy. We consider both heuristics and optimized scheduling. Our results show that a large share of fleets can achieve a close-to-optimal charging schedule using a simple charging heuristic. For all other fleets, we provide a decision mechanism to assess the potential of smart charging mechanisms. Lab head About Philipp Staudt - Philipp Staudt currently works at the Institute of Information Systems and Marketing, Karlsruhe Institute of Technology. Philipp's research interests include the Smart Grid, the digitalization of the energy system, sector coupling, GreenIS and energy markets. He works with agent-based simulations, data analytics and game theory on markets and mechanisms that lead to better integration of renewable energies into the energy mix.
https://www.researchgate.net/lab/Smart-Grids-and-Energy-Markets-SGEM-Philipp-Staudt
Reinforcement for behavior that is desired and corrective feedback for behavior that is not desired is critical to help create and sustain a culture of ethical behavior and consideration. Training. Through training, explicitly teach your employees how to behave in an ethical manner. Discuss ethically questionable situations and how to respond to them. Your Professional and Ethical Standards Ethical standards, standards of practice and the professional learning framework describe what it means to be a member of the teaching profession in Ontario. They articulate the goals and aspirations of a teaching profession dedicated to fostering student learning and preparing Ontario students to participate in a democratic society. Beyond mere compliance: Three metaphors to teach the APA Ethics Code. Metaphors can help move us beyond a superficial understanding of the Ethics Code to a deeper, more interesting, and ultimately more satisfying way of conceptualizing the code and its role in our professional lives. Students develop ethical understanding as they talk about ethical issues and explain reasons for acting in different ways. They explore topics that include contentious aspects, select and justify their own ethical positions, and take into account the different experiences and positions of others.
http://mdg5b.org/australian-capital-territory/how-to-teach-in-an-ethical-manner.php
Background: Staphylococcus aureus (SA) is a major cause of healthcare associated pneumonia (HAP). Clinicians and stewardship programs are challenged with how to best use novel agents for SA-HAP. We developed a decision analytic model to describe health outcomes and costs associated with telavancin (TLV) relative to vancomycin (VAN) across different clinical scenarios among patients with SA-HAP. Methods: This was a decision analytic model of hospital costs and outcomes of SA-HAP treated with TLV and VAN (Fig 1). Data on clinical cure at test of cure (TOC) [polymicrobial MSSA or MRSA infection in presence of adequate Gram-negative therapy, monomicrobial MSSA, monomicrobial MRSA VAN MIC < 1 mcg/mL or >/= 1 mcg/mL] and nephrotoxicity, as well as prevalence of polymicrobial infection (40%), MRSA (61%), and VAN MIC >/= 1 mcg/mL (85%), were obtained from ATTAIN clinical trials. Data on length of stay (LOS) for cure (10 days), failure (10 days), and nephrotoxicity (3.5 days) were based on literature. The cost per treated patient and incremental cost-effectiveness ratio (ICER) per additional cure were calculated for (1) all SA-HAP and (2) monomicrobial SA-HAP. One-way sensitivity analyses were performed. Results: Under the base-case scenario, hospital cost for TLV treated HAP was $43,337 and VAN $43,004; a net increase of $333 per patient. TLV was associated with higher drug (+$1,977) and nephrotoxicity costs (+$477), offset by lower ICU (-$1,615) and ventilator (-$106) costs. Overall point estimate of clinical cure at TOC was higher by 5.9% with TLV vs. VAN and ICER was $5,674 per additional cure. ICER was sensitive to probabilities of cure at TOC, ICU cost, TLV cost, and additional LOS due to failure (Fig 2). For monomicrobial SA-HAP, TLV was associated with higher clinical cure by 10.1% and net cost savings of $1,200 per treated patient. Conclusion: Our decision analytic model suggests that early directed therapy with TLV for SA-HAP is associated with a modest increase in total cost, and favorable ICER relative to VAN under our base model assumptions. TLV was associated with overall cost savings for monomicrobial SA-HAP versus VAN, suggesting that optimal economic benefit from TLV may be gained from proper patient selection and early appropriate treatment for SA-HAP. Figure 1: Model Structure. Figure 2: One-Way Sensitivity Analyses. J. A. Mckinnell, Theravance Biopharma US Inc.: Consultant and Speaker's Bureau.
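For readers unfamiliar with the ICER arithmetic above, the reported figure can be approximately reproduced from the abstract's rounded point estimates; the small gap from the stated $5,674 per additional cure comes from rounding in the inputs.

```python
# Incremental cost-effectiveness ratio (ICER) from the abstract's
# rounded point estimates; inputs are the reported per-patient deltas.

delta_cost = 43_337 - 43_004      # TLV minus VAN hospital cost per treated patient = $333
delta_cure = 0.059                # +5.9% absolute clinical cure rate at TOC

icer = delta_cost / delta_cure
print(f"ICER ≈ ${icer:,.0f} per additional cure")  # ≈ $5,644; abstract reports $5,674
```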
https://idsa.confex.com/idsa/2016/webprogram/Paper58249.html
BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to a semiconductor device.

Description of the Background Art

Japanese Patent Application Laid-Open No. 2007-324428 describes a technique relating to a semiconductor device.

Semiconductor devices 150 and 160 according to FIGS. 1 and 2 are first described. FIG. 1 is a drawing illustrating a cross-sectional structure of the semiconductor device 150. FIG. 2 is a drawing illustrating a cross-sectional structure of the semiconductor device 160. Each of the semiconductor devices 150 and 160 is a PIN diode, for example. Each of FIGS. 1 and 2 illustrates a cross-sectional structure of a cell part of the diode. The semiconductor devices 150 and 160 are referred to as the first comparative device 150 and the second comparative device 160 in some cases hereinafter.

As illustrated in FIG. 1, the first comparative device 150 includes a cathode electrode 12, an N+ type cathode region 11, an N− type intermediate semiconductor region 1, a P type anode region 14, and an anode electrode 7. Indicated herein is that the N+ type region has a higher donor impurity concentration than the N type region, and the N− type region has a lower donor impurity concentration than the N type region. Also indicated is that the P− type region has a lower acceptor impurity concentration than the P type region. Uncapitalized p type and n type are used hereinafter in a case of not specifying the impurity concentration but simply specifying a conductivity type. A simple description of “concentration” indicates the impurity concentration, and a simple description of “peak concentration” indicates the peak concentration of impurity.

The cathode region 11 is located on the cathode electrode 12. The intermediate semiconductor region 1 is located on the cathode region 11. The anode region 14 is located on the intermediate semiconductor region 1. The anode electrode 7 is located on the anode region 14.

In order to improve recovery characteristics of the first comparative device 150 having the structure described above, it is effective to reduce an amount of positive hole accumulated in the first comparative device 150 while a forward bias is applied to the first comparative device 150. When the concentration of the anode region 14 is reduced, the amount of positive hole supplied from the anode region 14 at the time of forward bias decreases. As a result, the amount of positive hole accumulated in the first comparative device 150 at the time of forward bias decreases, and the recovery characteristics are improved.

However, when the concentration of the anode region 14 is reduced, a depletion layer extends easily from the intermediate semiconductor region 1 toward the anode region 14 at the time of application of a reverse bias to the first comparative device 150. As a result, there is a possibility that the depletion layer reaches the anode electrode 7 and the first comparative device 150 breaks down.

The forward bias indicates a voltage applied between the anode electrode 7 and the cathode electrode 12 so that potential of the anode electrode 7 gets higher than potential of the cathode electrode 12. The forward bias is also referred to as the forward bias voltage. In the meanwhile, the reverse bias indicates a voltage applied between the anode electrode 7 and the cathode electrode 12 so that potential of the anode electrode 7 gets lower than potential of the cathode electrode 12. 
The reverse bias is also referred to as the reverse bias voltage.

The second comparative device 160 is equivalent to the improved first comparative device 150. As illustrated in FIG. 2, the second comparative device 160 is equivalent to the first comparative device 150 which includes an anode region 24 in place of the anode region 14 and further includes an insulating film 3.

The anode region 24 is located on the intermediate semiconductor region 1. The anode region 24 includes a P− type semiconductor region 2 and a P type semiconductor region 4. The semiconductor region 2 includes a main surface 2a having contact with the intermediate semiconductor region 1 and a main surface 2b on an opposite side of the main surface 2a. The semiconductor region 4 is formed in the semiconductor region 2. The semiconductor region 4 is formed in part of the semiconductor region 2. The semiconductor region 4 extends from the main surface 2b of the semiconductor region 2 to a side of the main surface 2a, but does not reach the main surface 2a. A depth of the semiconductor region 4 is smaller than that of the semiconductor region 2. The semiconductor region 4 includes a region 40 sandwiching part 20 of the semiconductor region 2 in a planar view. The region 40 is considered to sandwich the part 20 of the semiconductor region 2 in a direction perpendicular to a depth direction of the semiconductor region 2 (in other words, a thickness direction of the second comparative device 160) in a cross-sectional view of the second comparative device 160. As can also be seen by FIG. 2, the semiconductor region 4 is also considered to be located on the semiconductor region 2. In the description of the second comparative device 160 hereinafter, the semiconductor region 2 indicates the region where the semiconductor region 4 is not formed in the semiconductor region 2 unless otherwise noted. The part 20 of the semiconductor region 2 is referred to as the partial region 20 in some cases.

The insulating film 3 is located on the anode region 24. The insulating film 3 covers a whole region of the semiconductor region 4 in a planar view. The insulating film 3 covers part of the semiconductor region 2 in a planar view. The insulating film 3 includes an opening part 3a exposing part of the partial region 20 of the semiconductor region 2. The anode electrode 7 is formed on the insulating film 3 to fill the opening part 3a. Accordingly, the part of the semiconductor region 2 exposed from the opening part 3a and the anode electrode 7 have contact with each other. The insulating film 3 covers the whole region of the semiconductor region 4, thus the anode electrode 7 does not have contact with the semiconductor region 4. The anode electrode 7 is electrically connected to the semiconductor region 4 via the semiconductor region 2. The insulating film 3 is an oxide film, for example.

In the second comparative device 160 having the above structure, the anode electrode 7 and the semiconductor region 4 are electrically connected to each other via the semiconductor region 2 having a low impurity concentration. Thus, it is considered that there is a resistance component caused by the semiconductor region 2 between the anode electrode 7 and the semiconductor region 4. Accordingly, when the forward bias is applied to the second comparative device 160, potential of the semiconductor region 4 is equal to a value obtained by subtracting a value of voltage drop in the semiconductor region 2 from the potential of the anode electrode 7. Thus, when the forward bias is applied to the second comparative device 160, the potential of the semiconductor region 4 included in the anode region 24 is smaller than the potential of the anode electrode 7. Accordingly, the supply of the positive hole from the anode region 24 is suppressed, and an amount of accumulation of the positive hole in the second comparative device 160 decreases. As a result, recovery characteristics of the second comparative device 160 are improved.

In the second comparative device 160, the potential of the semiconductor region 4 is smaller than the potential of the anode electrode 7, thus the supply of the positive hole from the semiconductor region 4 can be suppressed, and the impurity concentration of the semiconductor region 4 can be set high. When the impurity concentration of the semiconductor region 4 is high, the semiconductor region 4 can suppress the extension of the depletion layer from the intermediate semiconductor region 1 to the anode region 24 at the time of application of the reverse bias to the second comparative device 160. Thus, a possibility of breakdown in the second comparative device 160 can be reduced.

Japanese Patent Application Laid-Open No. 2007-324428 described above discloses the structure similar to that of the second comparative device 160. In the structure disclosed in Japanese Patent Application Laid-Open No. 2007-324428, an impurity concentration of a region 32 corresponding to the semiconductor region 2 is set to 1×10^17 cm^−3. In the structure disclosed in Japanese Patent Application Laid-Open No. 2007-324428, an impurity concentration of a region 34 corresponding to the semiconductor region 4 is set to 1×10^19 cm^−3.

In order to further improve the recovery characteristics of the second comparative device 160, it is considered that an implantation efficiency of the positive hole from the anode region 24 is reduced by reducing the concentration of the semiconductor region 2. When the concentration of the semiconductor region 2 is reduced, there is a possibility that the semiconductor region 4 cannot sufficiently suppress the extension of the depletion layer generated from the intermediate semiconductor region 1 at the time of application of large reverse bias to the second comparative device 160. As a result, there is a possibility that the second comparative device 160 breaks down. When the concentration of the semiconductor region 2 is reduced, there is a possibility that an electrically favorable contact, that is to say, an ohmic contact cannot be achieved between the anode electrode 7 and the semiconductor region 2. Since there is a restriction on the concentration and depth of the semiconductor regions 2 and 4 for various reasons, it is difficult to increase a degree of improvement of the recovery characteristics in the second comparative device 160.

SUMMARY

Improvement in performance is desired in a semiconductor device. The present invention therefore has been made to solve the above problems, and it is an object of the present invention to provide a technique capable of improving performance of the semiconductor device. 
One aspect of a semiconductor device includes a first semiconductor region of a first conductivity type, a second semiconductor region of a second conductivity type located on the first semiconductor region, a third semiconductor region of the second conductivity type, a fourth semiconductor region of the second conductivity type, a fifth semiconductor region of the first conductivity type, and an electrode. The third semiconductor region is located on the second semiconductor region, and has a higher impurity concentration than the second semiconductor region. The fourth semiconductor region has a higher impurity concentration than the second semiconductor region, is located separately from the third semiconductor region in a planar view, and has contact with the second semiconductor region. The fifth semiconductor region is located on the second semiconductor region, and is located between the third and fourth semiconductor regions in a planar view. The electrode does not have contact with the fourth and fifth semiconductor regions but has contact with the third semiconductor region.

One aspect of a semiconductor device includes a first semiconductor region of a first conductivity type, a second semiconductor region of a second conductivity type located on the first semiconductor region, a third semiconductor region of the second conductivity type, a fourth semiconductor region of the second conductivity type, a concave portion formed on the semiconductor region, an insulating film filling the concave portion, and an electrode. The third semiconductor region is located on the second semiconductor region, and has a higher impurity concentration than the second semiconductor region. The fourth semiconductor region has a higher impurity concentration than the second semiconductor region, is located separately from the third semiconductor region in a planar view, and has contact with the second semiconductor region. The concave portion is located between the third and fourth semiconductor regions in a planar view. The electrode does not have contact with the fourth semiconductor region but has contact with the third semiconductor region.

The performance of the semiconductor device is improved.

These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a drawing illustrating a cross-sectional structure of a first comparative device.

FIG. 2 is a drawing illustrating a cross-sectional structure of a second comparative device.

FIG. 3 is a drawing illustrating an example of a cross-sectional structure of a semiconductor device.

FIG. 4 is a drawing illustrating an example of a planar structure of the semiconductor device.

FIG. 5 is a drawing illustrating an example of a planar structure of the semiconductor device.

FIG. 6 is a drawing illustrating an example of characteristics of the semiconductor device.

FIG. 7 is a drawing illustrating an example of the characteristics of the semiconductor device.

FIG. 8 is a drawing illustrating an example of the characteristics of the semiconductor device.

FIG. 9 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.

FIG. 10 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.
FIG. 11 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.

FIG. 12 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.

FIG. 13 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.

FIG. 14 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.

FIG. 15 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Embodiment 1

Embodiment 2

A semiconductor device 100 according to the present embodiment is equivalent to the improved second comparative device 160. FIG. 3 is a drawing illustrating an example of a cross-sectional structure of the semiconductor device 100 according to the present embodiment. The semiconductor device 100 is a PIN diode, for example. FIG. 3 illustrates a cross-sectional structure of a cell part of the diode. As illustrated in FIG. 3, the semiconductor device 100 is equivalent to the second comparative device 160 which includes an anode region 34 in place of the anode region 24 and further includes an N type semiconductor region 6.

A silicon semiconductor material is used in the semiconductor device 100, for example. A semiconductor material other than the silicon semiconductor material may also be used in the semiconductor device 100. For example, a gallium arsenide semiconductor material, a silicon carbide semiconductor material, or a gallium nitride semiconductor material may be used in the semiconductor device 100. Described hereinafter are various characteristics of the semiconductor device 100 and a configuration parameter in a case where the semiconductor device 100 is a diode having a withstand voltage level of 2.2 kV. When the withstand voltage of the semiconductor device 100 is not 2.2 kV, there may be a case where the various characteristics of the semiconductor device 100 and the configuration parameter are changed from the following contents.

The cathode electrode 12 is made of a layered metal material in which aluminum, titanium, and nickel are stacked, for example. The anode electrode 7 is made of aluminum, for example. A material of each of the cathode electrode 12 and the anode electrode 7 is not limited thereto.

The anode region 34 includes the semiconductor regions 2 and 4 described above and the P type semiconductor region 5. The semiconductor region 5 is formed in the partial region 20 of the semiconductor region 2. The semiconductor region 5 is formed in part of the partial region 20 of the semiconductor region 2. The semiconductor region 5 extends from the main surface 2b of the semiconductor region 2 to the side of the main surface 2a, but does not reach the main surface 2a. The semiconductor region 5 is formed to be shallower than the semiconductor region 4. The semiconductor region 5 is considered to be located on the partial region 20 of the semiconductor region 2.

The N type semiconductor region 6 is formed in the partial region 20 of the semiconductor region 2. The semiconductor region 6 is formed in part of the partial region 20 of the semiconductor region 2. The semiconductor region 6 extends from the main surface 2b of the semiconductor region 2 to the side of the main surface 2a, but does not reach the main surface 2a. The semiconductor region 6 is formed to be shallower than the semiconductor regions 2 and 4. A depth of the semiconductor region 6 is the same as that of the semiconductor region 5, for example. The semiconductor region 6 includes a region 60 sandwiching the semiconductor region 5 in a planar view. The region 60 is considered to sandwich the semiconductor region 5 in a direction perpendicular to a depth direction of the semiconductor region 5 in a cross-sectional view of the semiconductor device 100. The region 60 has contact with the semiconductor regions 4 and 5. The region 40 of the semiconductor region 4 sandwiches the region 60 of the semiconductor region 6 and the semiconductor region 5 in a planar view. The region 40 has contact with the region 60. The semiconductor region 6 is considered to be located on the partial region 20 of the semiconductor region 2. In the description hereinafter, the semiconductor region 2 indicates the region where the semiconductor regions 4, 5, and 6 are not formed in the semiconductor region 2 unless otherwise noted.

The region 40 of the semiconductor region 4 is formed separately from the semiconductor region 5 in a planar view. In other words, the region 40 is formed separately from the semiconductor region 5 in a direction perpendicular to a depth direction of the region 40 in a cross-sectional view of the semiconductor device 100. The region 60 of the semiconductor region 6 is located between the region 40 and the semiconductor region 5 in a planar view.

A peak concentration of each of the semiconductor regions 4 and 5 is higher than that of the semiconductor region 2. An upper limit value of the peak concentration of the semiconductor region 2 is set to 1×10^17 cm^−3, for example. A lower limit value of the peak concentration of the semiconductor region 2 is set to 1×10^15 cm^−3, for example. An upper limit value of the peak concentration of the semiconductor region 4 is set to 1×10^18 cm^−3, for example. A lower limit value of the peak concentration of the semiconductor region 4 is set to 1×10^16 cm^−3, for example. An upper limit value of the peak concentration of the semiconductor region 5 is set to 1×10^18 cm^−3, for example. A lower limit value of the peak concentration of the semiconductor region 5 is set to 1×10^16 cm^−3, for example.

The insulating film 3 is located on the anode region 34. The insulating film 3 covers a whole region of the semiconductor regions 4 and 6 in a planar view. The insulating film 3 does not cover the semiconductor region 5 in a planar view. The insulating film 3 includes the opening part 3a exposing the semiconductor region 5. The anode electrode 7 is formed on the insulating film 3 to fill the opening part 3a. Accordingly, the semiconductor region 5 exposed from the opening part 3a and the anode electrode 7 have contact with each other. The insulating film 3 covers the whole region of the semiconductor regions 4 and 6, thus the anode electrode 7 does not have contact with the semiconductor regions 4 and 6. The anode electrode 7 is electrically connected to the semiconductor region 4 via the semiconductor region 5 and the semiconductor region 2.

FIGS. 4 and 5 are drawings each illustrating an example of a planar structure of the semiconductor device 100. The illustration of the anode electrode 7 and the insulating film 3 is omitted in FIGS. 4 and 5. A cross-sectional structure in an arrow direction A-A in FIG. 4 is indicated by a cross-sectional structure illustrated in FIG. 3. 
A cross-sectional structure in an arrow direction B-B in FIG. 5 is indicated by a cross-sectional structure illustrated in FIG. 3.

In the example in FIG. 4, the semiconductor device 100 includes the plurality of semiconductor regions 5 and the plurality of semiconductor regions 6. The plurality of semiconductor regions 5 are formed in a striped pattern. Each semiconductor region 5 has an elongated rectangular shape in a planar view.

In the example in FIG. 5, the plurality of semiconductor regions 5 having substantially a square shape in a planar view are arranged in rows and columns. In the example in FIG. 5, the plurality of semiconductor regions 6 are formed to correspond to the plurality of semiconductor regions 5. Each semiconductor region 6 is formed to surround the semiconductor region 5 in a planar view. Also in the example in FIG. 5, the semiconductor region 6 includes the region 60 sandwiching the semiconductor region 5 in a planar view. An outline of the semiconductor region 6 in a planar view is formed into substantially a square shape.

In the semiconductor device 100 having the above structure, the anode electrode 7 and the semiconductor region 4 are electrically connected to each other via the semiconductor region 5 and the semiconductor region 2 having the low impurity concentration. Accordingly, when the forward bias is applied to the semiconductor device 100, the potential of the semiconductor region 4 is equal to a value obtained by subtracting a value of voltage drop in the semiconductor region 2 from the potential of the anode electrode 7. Thus, when the forward bias is applied to the semiconductor device 100, the potential of the semiconductor region 4 in the anode region 34 is smaller than the potential of the anode electrode 7. Furthermore, there is the n type semiconductor region 6 between the p type semiconductor region 5 having contact with the anode electrode 7 and the p type semiconductor region 4 in the semiconductor device 100. Thus, when the forward bias is applied to the semiconductor device 100, a current flowing from the semiconductor region 5 toward the semiconductor region 4 flows without passing through the n type semiconductor region 6. As a result, a current path in the semiconductor region 2 having the low concentration is elongated. Thus, the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 100 can be further reduced. Accordingly, the supply of the positive hole from the anode region 34 is suppressed, and an amount of accumulation of the positive hole in the semiconductor device 100 decreases. As a result, recovery characteristics of the semiconductor device 100 are improved. Specifically, a switching loss decreases and a switching wave form can be soft-switched. Thus, performance of the semiconductor device 100 is improved.

When the reverse bias is applied to the semiconductor device 100, the extension of the depletion layer from the intermediate semiconductor region 1 to the anode region 34 is suppressed not only by the semiconductor region 4 but also by the semiconductor region 5 having the high impurity concentration. Thus, a possibility of breakdown is lower in the semiconductor device 100 than in the second comparative device 160. That is to say, a static withstand voltage of the semiconductor device 100 can be improved.
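The voltage-drop argument that runs through the preceding paragraphs can be summarized in one relation. The following display is an editorial sketch, not wording from the application: $V_4$ denotes the potential of the semiconductor region 4, $V_7$ the potential of the anode electrode 7, $I_F$ the forward current, and $R_2(W)$ the resistance of the current path through the low-concentration semiconductor region 2, which grows with the distance W:

$$V_4 \approx V_7 - I_F \, R_2(W),$$

so increasing $R_2(W)$ (a longer path around the semiconductor region 6, or a lower concentration of the semiconductor region 2) lowers $V_4$ and thereby suppresses hole injection from the anode region 34.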
The anode electrode 7 does not have contact with the semiconductor region 2 having the low concentration, but has contact with the semiconductor region 5 having the high concentration. Thus, even when the concentration of the semiconductor region 2 is set low, the ohmic contact between the anode electrode 7 and the semiconductor region 5 can be achieved. When the concentration of the semiconductor region 2 is set low, the resistance component caused by the semiconductor region 2 can be increased, thus the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 100 can be further reduced. Thus, the supply of the positive hole from the anode region 34 is further suppressed. As a result, in the semiconductor device 100, the ohmic contact between the anode electrode 7 and the anode region 34 is achieved, and the recovery characteristics are improved more than in the case of the second comparative device 160.

As described in the above example, the upper limit value of the peak concentration of the semiconductor region 2 is set to 1×10¹⁷ cm⁻³, thus the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 100 can be reduced. Thus, the supply of the positive hole from the anode region 34 is suppressed. As a result, the recovery characteristics of the semiconductor device 100 can be improved.

As described in the above example, the lower limit value of the peak concentration of the semiconductor region 2 is set to 1×10¹⁵ cm⁻³, thus the semiconductor region 2 having the low concentration can be formed easily.

As described in the above example, the upper limit value of the peak concentration of the semiconductor regions 4 and 5 is set to 1×10¹⁸ cm⁻³, thus the supply of the positive hole from the anode region 34 including the semiconductor regions 4 and 5 is suppressed. Thus, the recovery characteristics of the semiconductor device 100 can be improved.

As described in the above example, the lower limit value of the peak concentration of the semiconductor region 5 is set to 1×10¹⁶ cm⁻³, thus the ohmic contact between the anode electrode 7 and the semiconductor region 5 can be formed easily.

FIG. 6 is a drawing illustrating a relationship between a distance W from the semiconductor region 5 to the region 40 of the semiconductor region 4 in a planar view (refer to FIGS. 3 to 5) and a normalized resistance value R in a current path in the semiconductor region 2. A lateral axis and a vertical axis in FIG. 6 indicate the distance W and the normalized resistance value R, respectively. The distance W is also considered as a width of a region sandwiched between the semiconductor region 5 and the semiconductor region 4 in the semiconductor region 6.

Herein, the normalized resistance value R indicates a value obtained by normalizing a resistance value of the current path in the semiconductor region 2 by a reference value. That is to say, the normalized resistance value R indicates a value obtained by dividing the resistance value of the current path in the semiconductor region 2 by the reference value. The reference value is set to the resistance value of the current path in the semiconductor region 2 in a state where the peak concentration of the semiconductor region 2 is set to the lower limit value (1×10¹⁵ cm⁻³) and the distance W=5 μm is satisfied, for example.
A graph 200 in FIG. 6 indicates a relationship between the distance W and the normalized resistance value R in a case where the peak concentration of the semiconductor region 2 is set to the upper limit value (1×10¹⁷ cm⁻³). A graph 210 in FIG. 6 indicates a relationship between the distance W and the normalized resistance value R in a case where the peak concentration of the semiconductor region 2 is set to a value between the upper limit value and the lower limit value (1×10¹⁵ cm⁻³). A graph 220 in FIG. 6 indicates a relationship between the distance W and the normalized resistance value R in a case where the peak concentration of the semiconductor region 2 is set to the lower limit value.

As illustrated in FIG. 6, as the distance W gets larger, the normalized resistance value R increases. This is because when the distance W gets large, the current path in the semiconductor region 2 is elongated. As the concentration of the semiconductor region 2 gets lower, the normalized resistance value R increases.

The distance W may be set to be equal to or larger than 5 μm in the semiconductor device 100. That is to say, the width of the region sandwiched between the semiconductor region 5 and the semiconductor region 4 in the semiconductor region 6 may be set to be equal to or larger than 5 μm. In this case, the recovery characteristics of the semiconductor device 100 can be improved, and the semiconductor region 6 can be easily formed.

FIG. 7 is a drawing illustrating output characteristics of the semiconductor device 100 and the second comparative device 160. A graph 300 in FIG. 7 indicates the output characteristics of the semiconductor device 100, and a graph 310 in FIG. 7 indicates the output characteristics of the second comparative device 160. A lateral axis in FIG. 7 indicates the forward bias applied to the semiconductor device 100 and the second comparative device 160. A vertical axis in FIG. 7 indicates a forward current in the semiconductor device 100 and the second comparative device 160. FIG. 7 illustrates the output characteristics of the semiconductor device 100 and the second comparative device 160 at a temperature of 125° C. and withstand voltage of 2.2 kV.

As illustrated in FIG. 7, an on voltage of the semiconductor device 100 is larger than that of the second comparative device 160. This is because the potential of the semiconductor region 2 at the time of application of the forward bias is smaller in the semiconductor device 100 than in the second comparative device 160, and as a result, the implantation efficiency of the positive hole is smaller in the semiconductor device 100 than in the second comparative device 160.

FIG. 8 is a drawing illustrating the recovery characteristics of the semiconductor device 100 and the second comparative device 160. A graph 400 in FIG. 8 indicates a wave form of the forward current in a case where a state of the semiconductor device 100 is changed from an on state to an off state. A graph 500 in FIG. 8 indicates a wave form of a voltage between the cathode electrode 12 and the anode electrode 7 in the case where the state of the semiconductor device 100 is changed from the on state to the off state. A graph 410 in FIG. 8 indicates a wave form of the forward current in a case where a state of the second comparative device 160 is changed from an on state to an off state.
A graph 510 in FIG. 8 indicates a wave form of a voltage between the cathode electrode 12 and the anode electrode 7 in the case where the state of the second comparative device 160 is changed from the on state to the off state. A lateral axis in FIG. 8 indicates a time. A vertical axis on a left side in FIG. 8 indicates the forward current, and a vertical axis on a right side in FIG. 8 indicates the voltage between the cathode electrode 12 and the anode electrode 7. FIG. 8 illustrates the recovery characteristics of the semiconductor device 100 and the second comparative device 160 at a temperature of 125° C. and withstand voltage of 2.2 kV.

As illustrated in FIG. 8, in the semiconductor device 100, a peak value of a recovery current (reverse current) is small and a recovery period is also short compared with the second comparative device 160. This is because the implantation efficiency of the positive hole is smaller in the semiconductor device 100 than in the second comparative device 160, and the amount of the accumulated positive hole is smaller in the semiconductor device 100 than in the second comparative device 160.

The structure of the semiconductor device 100 is not limited to that of the above example. For example, the depth of the semiconductor region 4 is equal to or larger than that of the semiconductor region 2. FIG. 9 illustrates an example of a cross-sectional structure of the semiconductor device 100 in the case where the depth of the semiconductor region 4 is larger than that of the semiconductor region 2. As illustrated in FIG. 10, the depth of the semiconductor region 6 may be larger than that of the semiconductor region 5. In this case, the current path in the semiconductor region 2 having the low concentration gets longer, thus the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 100 is further reduced. Thus, the amount of accumulation of the positive hole in the semiconductor device 100 further decreases and as a result, the recovery characteristics of the semiconductor device 100 are further improved. The semiconductor region 6 needs not have contact with the semiconductor region 5. The semiconductor region 6 needs not have contact with the semiconductor region 4. FIG. 11 illustrates an example of a cross-sectional structure of the semiconductor device 100 in a case where the semiconductor region 6 does not have contact with the semiconductor regions 4 and 5. In the semiconductor device 100, the p type region may be changed to the n type region, and the n type region may be changed to the p type region.

FIG. 12 is a drawing illustrating an example of a cross-sectional structure of a semiconductor device 110 according to the present embodiment. The semiconductor device 110 is equivalent to the above semiconductor device 100 in which a concave portion 8 is provided in a region where the semiconductor region 6 is formed in the semiconductor region 2 in place of the semiconductor region 6. It is also applicable that the semiconductor device 110 is equivalent to the semiconductor device 100 in FIG. 4 described above in which the concave portion 8 is provided in the region where the semiconductor region 6 is formed in the semiconductor region 2 in place of the semiconductor region 6.
It is also applicable that the semiconductor device 110 is equivalent to the semiconductor device 100 in FIG. 5 described above in which the concave portion 8 is provided in the region where the semiconductor region 6 is formed in the semiconductor region 2 in place of the semiconductor region 6.

The concave portion 8 is formed by engraving a concave portion by dry etching in the main surface 2b of the semiconductor region 2 in a thickness direction, for example. The concave portion 8 is also considered as the engraved portion. A depth of the concave portion 8 is smaller than that of each of the semiconductor regions 2 and 4. The depth of the concave portion 8 is the same as that of the semiconductor region 5. The concave portion 8 is filled with the insulating film 3. The insulating film 3 has contact with the semiconductor regions 2, 4, and 5. The insulating film 3 covers a region where the semiconductor regions 4 and 5 are not formed in the semiconductor region 2 in a planar view.

The concave portion 8 includes a part 80 sandwiching the semiconductor region 5 in a planar view. The concave portion 8 is filled with the insulating film 3, thus the insulating film 3 is considered to have a part 30 sandwiching the semiconductor region 5 in a direction perpendicular to a thickness direction of the semiconductor device 110.

The semiconductor device 110 having the above configuration has characteristics similar to those of the semiconductor device 100 illustrated in FIGS. 6 to 8 described above. The distance W in the semiconductor device 110 is also considered as a width of a region sandwiched between the semiconductor region 5 and the semiconductor region 4 in the concave portion 8. The distance W is also considered as a width of a region sandwiched between the semiconductor region 5 and the semiconductor region 4 in the insulating film 3 which fills the concave portion 8.

In the semiconductor device 110 according to the present embodiment, when the forward bias is applied to the semiconductor device 110 in the manner similar to the semiconductor device 100, the potential of the semiconductor region 4 is equal to a value obtained by subtracting a value of voltage drop in the semiconductor region 2 having the low concentration from the potential of the anode electrode 7. Furthermore, there is the concave portion 8 between the p type semiconductor region 5 having contact with the anode electrode 7 and the p type semiconductor region 4 in the semiconductor device 110, and the concave portion 8 is filled with the insulating film 3. Thus, when the forward bias is applied to the semiconductor device 110, a current flowing from the semiconductor region 5 toward the semiconductor region 4 flows without passing through the concave portion 8. As a result, a current path in the semiconductor region 2 having the low concentration is elongated. Thus, the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 110 can be further reduced. Accordingly, the supply of the positive hole from the anode region 34 is suppressed, and an amount of accumulation of the positive hole in the semiconductor device 110 decreases. As a result, recovery characteristics of the semiconductor device 110 are improved.
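The normalized resistance value R discussed around FIG. 6 can be imitated with a toy calculation. The linear path model and every number below are illustrative assumptions of ours, not data from the patent:

# Toy illustration of the FIG. 6 trend: normalized resistance R of the
# current path in the low-concentration region 2 versus the distance W.
# All values here are illustrative assumptions, not patent data.

RHO = {
    "upper limit (1e17 cm^-3)": 1.0,    # heaviest doping -> lowest relative resistivity
    "intermediate (1e16 cm^-3)": 3.0,
    "lower limit (1e15 cm^-3)": 10.0,   # lightest doping -> highest relative resistivity
}

L_FIXED_UM = 10.0  # assumed fixed part of the current path, in micrometres

def path_resistance(w_um: float, rho: float) -> float:
    """Resistance of a path whose length grows with the sandwiched width W."""
    return rho * (L_FIXED_UM + w_um)

# Reference value: lower-limit doping at W = 5 um, as in the patent's definition.
r_ref = path_resistance(5.0, RHO["lower limit (1e15 cm^-3)"])

for label, rho in RHO.items():
    for w in (5.0, 10.0, 20.0):
        r_norm = path_resistance(w, rho) / r_ref
        print(f"{label:<26} W={w:4.1f} um  R={r_norm:.2f}")

Consistent with the text, the toy output grows with W (a longer path) and grows as the concentration of region 2 is lowered (a higher resistivity).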
When the reverse bias is applied to the semiconductor device 110, the extension of the depletion layer from the intermediate semiconductor region 1 to the anode region 34 is suppressed not only by the semiconductor region 4 but also by the semiconductor region 5 having the high impurity concentration. Thus, a possibility of breakdown is reduced in the semiconductor device 110.

The anode electrode 7 does not have contact with the semiconductor region 2 having the low concentration, but has contact with the semiconductor region 5 having the high concentration. Thus, even when the concentration of the semiconductor region 2 is set low, the ohmic contact between the anode electrode 7 and the semiconductor region 5 can be achieved. When the concentration of the semiconductor region 2 is set low, the resistance component caused by the semiconductor region 2 can be increased, thus the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 110 can be further reduced. Thus, the supply of the positive hole from the anode region 34 is further suppressed. As a result, the ohmic contact between the anode electrode 7 and the semiconductor region 5 is achieved, and the recovery characteristics of the semiconductor device 110 are further improved.

When the reverse bias is applied to the semiconductor device 100 according to the embodiment 1, there is a possibility that electrical field is concentrated in a PN junction between the semiconductor region 5 and the semiconductor region 6. Thus, there is a possibility that impact ionization occurs and a hole carrier increases. As a result, there is a possibility that a degree of improvement of the recovery characteristics cannot be increased so much in the semiconductor device 100.

In contrast, in the semiconductor device 110 according to the present embodiment, the concave portion 8 is formed in place of the semiconductor region 6, and the concave portion 8 is filled with the insulating film 3, thus there is no n type semiconductor region having contact with the semiconductor region 5. Thus, in the semiconductor device 110, the possibility of the increase in the hole carrier caused by the impact ionization can be reduced. Thus, the degree of improvement of the recovery characteristics can be increased.

The distance W may be set to be equal to or larger than 5 μm in the semiconductor device 110. That is to say, the width of the region sandwiched between the semiconductor region 5 and the semiconductor region 4 in the concave portion 8 may be set to be equal to or larger than 5 μm. In this case, the recovery characteristics of the semiconductor device 110 can be improved, and the concave portion 8 can be easily formed.

The structure of the semiconductor device 110 is not limited to that of the above example. For example, the concave portion 8 may have a shape other than the shape illustrated in FIG. 12. For example, as illustrated in FIG. 13, an inner side surface 8a of the concave portion 8 is formed into a tapered shape, tapered with increase in depth. The concave portion 8 may also have a shape other than the shapes illustrated in FIGS. 12 and 13.

The depth of the semiconductor region 4 may be equal to or larger than that of the semiconductor region 2 also in the semiconductor device 110. As illustrated in FIG. 14, the depth of the concave portion 8 may be larger than that of the semiconductor region 5.
In this case, the current path in the semiconductor region 2 having the low concentration gets longer, thus the potential of the semiconductor region 4 in the case where the forward bias is applied to the semiconductor device 110 is further reduced. Thus, the amount of accumulation of the positive hole in the semiconductor device 110 further decreases and as a result, the recovery characteristics of the semiconductor device 110 are further improved.

There may be the semiconductor region 2 between the concave portion 8 and the semiconductor region 5. There may be the semiconductor region 2 between the concave portion 8 and the semiconductor region 4. FIG. 15 illustrates an example of a cross-sectional structure of the semiconductor device 110 in a case where there is the semiconductor region 2 between the concave portion 8 and the semiconductor region 4 and between the concave portion 8 and the semiconductor region 5. In the semiconductor device 110, the p type region may be changed to the n type region, and the n type region may be changed to the p type region.

According to the present invention, the above embodiments can be arbitrarily combined, or each embodiment can be appropriately varied or omitted within the scope of the invention.

While the invention has been shown and described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is therefore understood that numerous modifications and variations can be devised without departing from the scope of the invention.
The practice of branding goods with their country of origin has been going on much longer than you might think – and a "Made in China"-style label etched into a 12th century piece of pottery has helped experts accurately date the cargo haul of a mysterious shipwreck.

Discovered in the 1980s by a fisherman in the Java Sea, off the coast of Indonesia, the wreck has been the subject of several studies since then. Archaeologists originally thought the ship set sail in the 13th century, but the new findings have them thinking again.

By analysing these ceramics and the rest of the goods on board – which include elephant tusks for use in medicine and art, and sweet-smelling resin for producing incense and sealing ships – researchers now have a better idea of how the sunken vessel fits in with the broader picture of China's rich history.

"Initial investigations in the 1990s dated the shipwreck to the mid to late 13th century, but we've found evidence that it's probably a century older than that," says one of the team, Lisa Niziolek from the Field Museum in Chicago.

"Eight hundred years ago, someone put a label on these ceramics that essentially says 'Made in China' – because of the particular place mentioned, we're able to date this shipwreck better."

The inscription doesn't actually say "Made in China", though the intent is the same: to brand the ceramics with their place of origin. The label states the pots were made in Jianning Fu in the Fujian province of China. Crucially though, it was renamed Jianning Lu after a Mongolian invasion dated to around 1278.

That means the shipwreck may have happened earlier than that, and maybe as early as 1162, based on other tests. It's unlikely that ceramics like this would have been stored for very long, according to the researchers, so something carrying the old name would've been shipped off for sale pretty soon after it was made.

The team behind the study also looked at other pottery finds from the same era, and consulted with a variety of experts, to try and get a fix on when the ship might have set sail.

Carbon dating techniques can be applied to the tusks and the resin that were on board the ship, and these were initially used to identify the ship as being around 700-750 years old. Since that analysis, we've got better at carbon dating, which is part of the reason for the re-evaluation.

A new accelerator mass spectrometry (AMS) test, together with the inscriptions on the ceramics that we've already mentioned, suggests the shipwreck is indeed around 800 years old.

And that makes a big difference for archaeologists – the wreck marks a time when Chinese merchants began to be more active across worldwide maritime trade routes, switching from moving goods along the Silk Road to relying more on shipping. Pinning down that date is important for getting an accurate timeline for this period of transition.

It's another example of how shipwrecks of any type can prove useful to historians, whether it's to uncover the reading habits of pirates or the way that 17th-century royalty dressed.

"There's often a stigma around doing research with artefacts salvaged by commercial companies, but we've given this collection a home and have been able to do all this research with it," says Niziolek.

"It's really great that we're able to use new technology to re-examine really old materials. These collections have a lot of stories to tell and should not be entirely discounted."
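The article doesn't spell out how an AMS measurement becomes an age in years; the standard radiocarbon relation (a textbook formula, not something from the study itself) looks like this:

% Textbook radiocarbon age relation; not taken from the article.
% N/N0 is the fraction of carbon-14 remaining; T_1/2 is about 5730 years.
\[
t = \frac{T_{1/2}}{\ln 2}\,\ln\frac{N_{0}}{N} \approx 8267\,\ln\frac{N_{0}}{N}\ \text{years}
\]
% Example: a tusk or resin sample retaining about 90.8% of its original
% carbon-14 gives t ~ 8267 * ln(1/0.908) ~ 800 years, matching the
% revised age now assigned to the wreck.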
https://www.sciencealert.com/ancient-made-in-china-label-solves-800-year-old-shipwreck-mystery
Like all of you, the leadership team, faculty, and staff at Mahidol University International College (MUIC) and the Preparation Center for Languages and Mathematics (PC) have been closely following Coronavirus Disease 2019 (COVID-19) news and updates from the Thai Department of Disease Control and the World Health Organisation (WHO). The past few months have been a powerful reminder of how connected we are and how our individual actions really can impact the world around us and help to protect those we love most.

Similarly, since March, everyone at MUIC and PC has been working literally around the clock in some cases to respond to the challenges posed by COVID-19 and ensure that we can continue to deliver the best possible education despite such difficult circumstances. For instance, in order to minimize physical interaction and prevent further spread of the disease, we were among the first to close our campuses and transition to online classes as quickly as possible. Indeed, faculty and staff have moved mountains in a remarkably short period of time to ensure that course materials can be accessed and effectively utilized through online platforms including live streaming video conferencing and other interactive distance learning technologies and techniques.

Many might assume that online education cannot be of the same standard due to the lack of face-to-face communication and interaction, and if it is not done properly, that is certainly true. However, everyone at MUIC has done their very best to make the transition to virtual classrooms as seamless and effective as possible. Senior management quickly assessed and selected the structural and technological changes and adaptations to be implemented, staff initiated the set-up and administration of the various systems and technologies involved, and all the teachers did a fantastic job of sharing what they learned and conducting peer-to-peer training sessions to bring everyone up to speed.

As a result, the response from instructors and students has actually been incredibly positive. For example, in a survey of over 300 PC students conducted at the end of last quarter, 66.8% responded that their overall experience was positive, and 26% said that their opinion of learning online had done a full 180° U-turn from negative expectations at the start of the term to having a good experience by the end of the term. We are all incredibly proud of this achievement and all the faculty and staff should be proud of the time and effort they have put into making this happen. Furthermore, asked to rate their experience at PC from 1 star (extremely unhappy) to 5 stars (extremely happy), we achieved an average rating of 4.18 stars, an excellent result considering the circumstances, and clear evidence that our students are obtaining the level of education and interaction that they hope for and that we aim to provide.

We also received some very gratifying and supportive comments from our students:

Miss Ratimaporn Suttivaree (Aom): “Everything that I learned in this online course made me a bit surprised. It was very beneficial and effective, even though it was online.” 4.4/5

Ms. Methaporn Chodpattarawanit (Pammy): “Students and teachers can interact well even learning online.”

Ms. Prapatsorn Wongsakornwisit (Prompt): “They made it easier to study online than I have thought.
I can understand lessons that are not too difficult but not too easy and I always received appropriate suggestions from teachers.”

Classes are set to continue online for the time being, until such time as it is deemed safe enough to return to campus. We understand that students miss the social interaction and activities traditionally available on campus, but please be patient and try to make the most of the situation that is before us. Even when physical classes resume, the safety and wellbeing of our students is, of course, our top priority, and every measure will be taken to ensure that everyone on campus follows strict guidelines on physical distancing and other preventative measures, but for now, stay focused on your goal of obtaining the best possible education in order to open more doors and create a brighter and healthier future once this pandemic is behind us. We look forward to seeing you in class!

Mr. Leigh Pearson is an instructor at the Preparation Center for Languages and Mathematics. He obtained his MBA degree at MUIC.
https://muic.mahidol.ac.th/eng/muic-pc-proactive-in-the-face-of-adversity/
Bike Tour: 4 activities in 1

This tour starts at 10 am and ends between 4 and 5 pm. The tour includes two long downhills and also cross-country. You'll make your way through the mountains of Curiti and pass traditional farmhouses and typical cultivations of this area. The lunch stop is at the river Pescaderito. There are beautiful natural pools to swim in and to jump from 8 m into deep crystal-clear pools. Along the river we'll do trekking. In the afternoon there is some more biking all the way until we get to the pretty village Curiti.

Difficulty: 4/5
Distance: 27 km
Duration: 6 hours
Price: $140.000 COP (including transport, equipment, guides and food)
Do not forget: sports clothes, swimsuit, towel, sunblock and water

Our safety truck will be with us all the way, ready for you should your bike break down or you get tired, and as a safe place to stash your equipment while you race down the mountains!
https://www.colombiatourismus.com/copia-de-gallery-1?lightbox=dataItem-j2seedrk2
Players Looking for a Team

Email a brief resume (100 words maximum), with calibre, age and full contact information so that managers can contact you if they are interested, to [email protected]. We will cut and paste it onto the website as submitted - that means PUT YOUR EMAIL ADDRESS AND PHONE NUMBER IN THE MESSAGE. Remember, IF YOU DO NOT INCLUDE CONTACT INFORMATION, NOBODY WILL BE ABLE TO REACH YOU. Postings may be deleted after approximately three months. Please note the following:

Hi, my name is Quinn Allen. I'm a striker/left winger who can score and assist lots of goals. I'm looking for a premier team to play on this spring.
http://knightsoccerleague.org/classifieds.htm
FIELD OF THE INVENTION

This invention generally relates to a process for producing room temperature, moisture curable organopolysiloxane compositions for use as adhesive sealants and coatings. Specifically, this invention relates to room temperature vulcanisable (RTV) organopolysiloxane compositions which are readily cured in the presence of atmospheric moisture to form elastomers and more specifically, to such RTV compositions which are curable into rubbery elastomers having improved primerless adhesion and non-corrosive properties to sensitive substrates which are otherwise difficult to bond.

Room temperature vulcanisable (curable) compositions (known as RTVs) based on the so-called condensation reactions of silanes and hydroxyl-terminated organopolysiloxanes are well known to those in the art. These compositions are cured by exposure to atmospheric moisture to form elastomeric materials that are widely used as adhesive sealants, gaskets and potting agents in a wide variety of applications ranging from electrical and electronics to aerospace and construction.

PROBLEM

Many silicone sealants are unsuitable for certain applications because of their corrosive effects on sensitive metals such as copper and its alloys. These silicone sealants, typically including an amino-functional silane as an internal adhesion promoter, have been shown to cause corrosion on copper and its alloys often in the presence of certain crosslinking agents and organometallic catalysts. Further, many silicone sealants are unsuitable for some applications due to their limited adhesion to various substrates. These substrates often require priming to achieve satisfactory adhesion. Priming substrates is disadvantageous from the time and cost standpoints. Finally, most catalysts used in silicone sealants are organometallic compounds, the most common of which are organo-tin substances. Many such catalysts are classified as “harmful to the environment”, whilst many organo-tin catalysts may also exhibit toxic and/or irritant characteristics. Organometallic catalysts, such as dibutyltin dilaurate and tetra-butyl titanate, have also been shown to cause premature gelation and curing of sealants of the type described.

Information relevant to attempts to address these problems can be found in U.S. Pat. Nos. 5,969,075 issued 19 Oct. 1999 to Inoue; 4,487,907 issued 11 Dec. 1984 to Fukayama, et al.; 5,525,660 issued 11 Jun. 1996 to Shiono, et al.; 6,214,930 issued 10 Apr. 2001 to Miyake, et al. and 4,973,623 issued 27 Nov. 1990 to Haugsby, et al. However, each one of these references suffers from one or more of the following disadvantages: priming of substrates prior to application of the silicone sealant and corrosive properties of the silicone sealant.

SOLUTION

The above-described problems are solved and a technical advance achieved by the present RTV organopolysiloxane composition including a novel combination of a crosslinking component and non-organometallic curing catalyst. The purpose of this invention is to provide a method of preparing silicone adhesive sealants that cure at room temperature in the presence of atmospheric moisture, possess excellent primerless adhesion to many substrates and do not exhibit corrosive properties towards copper, its alloys and other commonly used metals/plastics. The invention describes a means of preparing and curing a condensation-cure, one-component silicone adhesive sealant without the use of organometallic catalysts.
The invention uses an organic imine catalyst, such as 1,1,3,3-Tetramethylguanidine, thereby removing the need for organometallic catalysts. In conjunction with certain alkoxysilanes the organic imine obviates the need for an extremely expensive and complex guanidyl silane. These sealants offer good shelf-life stability and relatively fast curing properties.

The present RTV organopolysiloxane composition consists of a number of components, which are combined in the manner described below to produce a condensation-cure, one-component silicone adhesive sealant without the use of organometallic catalysts. These components include an organopolysiloxane having at least two hydroxyl groups attached to the terminal silicon atoms of the molecule, combined with a silica filler that is added to provide physical strength to the cured elastomer. In addition, an amino-functional silane, an organic imine catalyst and a silane crosslinking agent are added to the composition. Additional components may be added to the composition to control the rate of cure of the sealants of this invention and snappiness of the cured elastomer, and to provide other desirable properties as described below.

DETAILED DESCRIPTION

The present invention is a RTV organopolysiloxane composition comprising (A) an organopolysiloxane having at least two hydroxyl groups attached to the terminal silicon atoms of the molecule, (B) finely divided silica filler, (C) an amino-functional silane containing at least one amino-group per molecule, (D) a silane crosslinking agent, (E) a trialkoxysilane, and (F) an organic imine or substituted imine.

Component (A) is an organopolysiloxane having at least two hydroxyl groups attached to the terminal silicon atoms of the molecule. Preferably, it is an organopolysiloxane blocked with a hydroxyl group at either end represented by the following formula (1):

HO-(SiR¹R²O)ₙ-H (1)

In formula (1), groups R¹ and R², which may be the same or different, are independently selected from substituted or unsubstituted monovalent hydrocarbon groups having 1 to 10 carbon atoms, for example methyl, ethyl, propyl, butyl, cyclohexyl, vinyl, allyl, phenyl, tolyl, benzyl, octyl, 2-ethylhexyl or groups such as trifluoropropyl or cyanoethyl. The preferred groups are methyl. The latter may be substituted by trifluoropropyl or phenyl to impart specific properties to the cured elastomer. Letter n is such an integer that the diorganopolysiloxane may have a viscosity of 50 to 500,000 mPa.s at 25° C., preferably 2,000 to 100,000 mPa.s. Blends of differing viscosities may be used to achieve a desired effect.

Component (B) is finely divided silicon dioxide that is added to provide physical strength to the cured elastomer. Examples of suitable silica fillers include fumed silica, fused silica, precipitated silica and powdered quartz, which are optionally surface treated with silazanes, chlorosilanes or organopolysiloxanes to render them hydrophobic. The preferred silicas are those having a specific surface area of at least 50 m²/g as measured by the BET method. The above silicas may be blended in any desired ratio.
In order to achieve the desired physical properties in the cured elastomer, an appropriate amount of Component (B), about 1 to 40 parts by weight, is blended into 100 parts by weight of Component (A) until a smooth, agglomerate-free dispersion is obtained.

Component (C) is an amino-functional silane containing at least one amino-group per molecule. Illustrative examples of the amino-functional silane are given below. The amino-functional silane is provided to promote adhesion between the sealant and inorganic and/or organic substrates. Component (C) is a compound of the following formula:

Suitable silanes are available from Crompton OSi Specialities under the trade identity: Silane A-1100: γ-aminopropyltriethoxysilane, Silane A-1110: γ-aminopropyltrimethoxysilane, Silane A-1120: β-aminoethyl-γ-aminopropyltrimethoxysilane, Silane A-1130: Triaminofunctional silane, Silane Y-9669: N-phenyl-γ-aminopropyltrimethoxysilane, Silane A-1170: Bis-[γ-(trimethoxysilyl) propyl]amine and Silane A-2120: N-β-aminoethyl-γ-aminopropylmethyldimethoxysilane.

Component (D) is a silane crosslinking agent represented by the following formula (2):

RₙSiX₄₋ₙ (2)

In formula (2), R represents a substituted or unsubstituted monovalent hydrocarbon group of 1 to 10 carbon atoms, X is a 1-methylvinyloxy (also known as isopropenyloxy) group, and letter n is equal to 0, 1 or 2. Preferably, the silane crosslinking agent is selected from methyl tris-isopropenyloxy silane, vinyl tris-isopropenyloxy silane, phenyl tris-isopropenyloxy silane or combinations of the aforesaid crosslinking agents.

Component (E) is a trialkoxysilane that is employed to control the rate of cure of the sealants of this invention and snappiness of the cured elastomer. Component (E) is represented by formula (3):

RₚSiX₄₋ₚ (3)

In formula (3), R represents a methyl, phenyl, vinyl or substituted vinyl group and X is a methoxy or ethoxy group or a mixture of methoxy and ethoxy groups. Letter p is equal to 0, 1 or 2. Some examples of suitable silanes are available from Crompton OSi Specialities under the trade identity: Silane A-162: Methyltriethoxysilane, Silane A-163: Methyltrimethoxysilane, Silane A-151: Vinyltriethoxysilane, Silane A-171: Vinyltrimethoxysilane and also Phenyltriethoxysilane, Phenyltrimethoxysilane from Lancaster Synthesis.

Component (F) is an organic imine or a substituted imine, which is used as a catalyst and is of the general formulas (4a) and (4b), wherein R² is independently selected from methyl, isopropyl, phenyl and ortho-tolyl groups. Some examples of the organic imine or substituted imine include: 1,3-Diphenylguanidine, 1,3-Di-o-tolylguanidine, 1,3-Dimethylguanidine and 1,1,3,3-Tetramethylguanidine. The preferred compound is 1,1,3,3-Tetramethylguanidine.

Other materials such as bulking fillers, for example micronised quartz, calcium carbonate, talc, magnesium oxide, aluminium oxide and aluminosilicates may be used insofar as the main properties of the sealants are not affected. Useful additives such as iron oxide, titanium dioxide and cerium oxide for thermal stability; fungicidal compounds for extended protection; carbon black, titanium dioxide and other coloured pigments to enhance appearance and fire retardant compounds may be used. Such additives are normally added following addition of Component (B).

Examples of the invention are given below by way of illustration and not by way of limitation. All parts are by weight. The viscosity is a measurement at 25° C. using a Brookfield Rotary Spindle Viscometer.
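Since every recipe below is given in parts by weight, the following small sketch shows how such a recipe converts to weight percent. It uses the Example 1(a) quantities from Table 1 below; the helper itself is ours and is not part of the patent:

# Convert a parts-by-weight recipe to weight-percent composition.
# Recipe numbers are Example 1(a) from Table 1; the helper is illustrative.

recipe_1a = {
    "Dispersion 1": 100.0,
    "Vinyl tris-isopropenyloxy silane": 6.30,
    "Vinyltrimethoxysilane": 0.30,
    "gamma-aminopropyltriethoxysilane": 1.07,
    "1,1,3,3-Tetramethylguanidine": 0.44,
}

total = sum(recipe_1a.values())  # ~108.11 parts in all
for component, parts in recipe_1a.items():
    print(f"{component:<36} {parts:6.2f} pbw  {100.0 * parts / total:5.2f} wt%")

# The guanidine catalyst, for instance, works out to roughly 0.41 wt%
# of the finished sealant.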
EXAMPLE 1

A uniform mixture was prepared by blending 100 parts of a hydroxyl-terminated polydimethylsiloxane polymer with a viscosity of approximately 50,000 mPa.s with 100 parts of a second hydroxyl-terminated polydimethylsiloxane polymer of viscosity of approximately 10,000 mPa.s (Components A). To the above blend of polymers was added 17.2 parts by weight of hydrophobised fumed silica (Degussa R972). The latter was mixed into the polymer blend until a smooth, agglomerate-free dispersion was obtained. 7.4 parts of a hydrophobised fumed silica (Cab-O-Sil LM150) was added to the above filler dispersion and mixed until fully dispersed (These fillers are Components B). 4.9 parts of carbon black masterbatch was added to the polymer/filler dispersion and blended until a uniform mixture was obtained. This mixture is called Dispersion 1.

TABLE 1
(Parts by weight)                     Example 1(a)   Example 1(b)
Dispersion 1                          100            100
Vinyl tris-isopropenyloxy silane      6.30           —
Vinyltrimethoxysilane                 0.30           —
Methyl tris-(2-butanoximo)silane      —              3.75
Vinyl tris-(2-butanoximo)silane       —              0.55
γ-aminopropyltriethoxysilane          1.07           0.55
Dibutyltin dilaurate                  —              0.05
1,1,3,3-Tetramethylguanidine          0.44           —

A sealant, Example 1(a) according to the invention, and a comparative sealant 1(b) were prepared by adding each of the components shown in Table 1 in the order given. The above formulations were semi-flowable products with a slump in excess of 10 mm when tested on a Boeing Jig. They were stable for at least 12 months at ambient temperatures and exhibited no significant change in properties after storing at 40° C. for 3 months. The results of the following tests, carried out to test the suitability of the products for electronic applications, are summarised in Table 2:

Tack Free Time. The time taken for the sealant to form a dry non-adherent skin on the surface following exposure to atmospheric moisture.

Cure Through Time. This is considered to be the time taken after exposure to atmospheric moisture for the sealant to cure to a depth of 3 mm.

Adhesion/Corrosion. The substrates chosen are stainless steel, aluminium, polyester powder coated metal, copper and brass. Corrosion was assessed on a scale of 1 to 5. The higher the mark the worse the corrosive properties. Samples were examined for the mode of adhesive failure and for any corrosive action or surface attack.

General Physical Properties as shown in Table 2.

TABLE 2
Test                       Example 1(a)   Example 1(b)
Tack Free Time, min        3 to 4         9 to 10
Cure Through, hours        <16            24
Tensile Strength, MPa      ~1.2           1.3
Elongation at Break, %     ~180           210
Hardness, Shore A          40             35

Adhesion                   Example 1(a)            Example 1(b)
                           Fail Mode   Corrosion   Fail Mode   Corrosion
Stainless Steel            Cohesive    1           Cohesive    1
Aluminium                  Cohesive    1           Cohesive    1
PC Polyester               Cohesive    1           Cohesive    2
Copper                     Cohesive    1           Cohesive    5
Brass                      Cohesive    1           Cohesive    4

EXAMPLE 2

A uniform mixture was prepared by blending 50 parts of a hydroxyl-terminated polydimethylsiloxane polymer with a viscosity of approximately 50,000 mPa.s with 100 parts of a second hydroxyl-terminated polydimethylsiloxane polymer of viscosity of approximately 10,000 mPa.s (Components A). To the above blend of polymers was added 15.0 parts by weight of hydrophobised fumed silica (Degussa R972). The latter was mixed into the polymer blend until a smooth, agglomerate-free dispersion was obtained. 6.0 parts of a hydrophilic fumed silica (Cab-O-Sil LM150) was added to the above filler dispersion and mixed until fully dispersed (These fillers are Components B).
3.0 parts of carbon black masterbatch was added to the polymer/filler dispersion and blended until a uniform mixture was obtained. This mixture is called Dispersion 2.

TABLE 3
(Parts by weight)                     Example 2(a)   Example 2(b)
Dispersion 2                          100            100
Vinyl tris-isopropenyloxy silane      6.35           —
Vinyltrimethoxysilane                 0.30           —
Methyl tris-(2-butanoximo)silane      —              3.75
Vinyl tris-(2-butanoximo)silane       —              0.55
γ-aminopropyltriethoxysilane          0.55           0.55
Dibutyltin dilaurate                  —              0.05
1,1,3,3-Tetramethylguanidine          0.44           —

A sealant, Example 2(a) according to the invention, and a comparative sealant 2(b) were prepared by adding each of the components shown in Table 3 in the order given. The test results are given in Table 4.

TABLE 4
Test                       Example 2(a)   Example 2(b)
Tack Free Time, min        3 to 4         9 to 10
Cure Through, hours        <16            24
Tensile Strength, MPa      ~2.2           2.3
Elongation at Break, %     ~270           210
Hardness, Shore A          45             40

Adhesion                   Example 2(a)            Example 2(b)
                           Fail Mode   Corrosion   Fail Mode   Corrosion
Stainless Steel            Cohesive    1           Cohesive    1
Aluminium                  Cohesive    1           Cohesive    1
PC Polyester               Cohesive    1           Cohesive    2
Copper                     Cohesive    1           Cohesive    5
Brass                      Cohesive    1           Cohesive    4

The above formulations were semi-thixotropic products with a slump in the order of 8-10 mm when tested on a Boeing Jig. They were stable for at least 12 months at ambient temperatures and exhibited no significant change in properties after storing at 40° C. for 3 months. Samples were examined for the mode of adhesive failure and for any corrosive action or surface attack. The results are summarised in Table 4.

EXAMPLE 3

A uniform mixture was prepared by blending 32 parts of a hydroxyl-terminated polydimethylsiloxane polymer with a viscosity of approximately 50,000 mPa.s with 20 parts of a second hydroxyl-terminated polydimethylsiloxane polymer of viscosity of approximately 10,000 mPa.s (Components A). To the above blend of polymers was added 4.5 parts by weight of hydrophobised fumed silica (Degussa R972). The latter was mixed into the polymer blend until a smooth, agglomerate-free dispersion was obtained. 165 parts of a blend of aluminium oxides (Alcan aluminas) was added to the above filler dispersion and mixed until fully dispersed. This mixture is called Dispersion 3.

TABLE 5
(Parts by weight)                     Example 3(a)   Example 3(b)
Dispersion 3                          100            100
Vinyl tris-isopropenyloxy silane      2.45           —
Methyl tris-(2-butanoximo)silane      —              2.20
Vinyl tris-(2-butanoximo)silane       —              0.50
γ-aminopropyltriethoxysilane          0.50           0.50
Dibutyltin dilaurate                  —              0.05
1,1,3,3-Tetramethylguanidine          0.20           —

A sealant, Example 3(a) according to the invention, and a comparative sealant 3(b) were prepared by adding each of the components shown in Table 5 in the order given.

TABLE 6
Test                          Example 3(a)   Example 3(b)
Tack Free Time, min           4 to 5         9 to 10
Cure Through, hours           <20            24
Hardness, Shore A             85             85
Thermal Conductivity, W/m·K   ~1.5           ~1.5

Adhesion                   Example 3(a)            Example 3(b)
                           Fail Mode   Corrosion   Fail Mode   Corrosion
Aluminium                  Cohesive    1           Cohesive    1
Copper                     Cohesive    1           Cohesive    5
Brass                      Cohesive    1           Cohesive    4

The above formulations were semi-flowable products with a slump in excess of 10 mm when tested on a Boeing Jig. They were stable for at least 12 months at ambient temperatures and exhibited no significant change in properties after storing at 40° C. for 3 months. Samples were examined for the mode of failure and for any corrosive action or surface attack. The results are summarised in Table 6.
EXAMPLE 4

A uniform mixture was prepared by blending 42.0 parts of a hydroxyl-terminated polydimethylsiloxane polymer with a viscosity of approximately 50,000 mPa.s with 24.0 parts of a second hydroxyl-terminated polydimethylsiloxane polymer of viscosity of approximately 10,000 mPa.s (Component A). To the above blend of polymers was added 5.8 parts by weight of hydrophobised fumed silica (Degussa R972). The latter was mixed into the polymer blend until a smooth, agglomerate-free dispersion was obtained. 209 parts by weight of a blend of 5 to 95 parts aluminium oxide and 95 to 5 parts aluminium nitride was added to the above filler dispersion and mixed until fully dispersed. This mixture is called Dispersion 4.

TABLE 7
(Parts by weight)                     Example 4(a)   Example 4(b)
Dispersion 4                          100            100
Vinyl tris-isopropenyloxy silane      2.45           —
Methyl tris-(2-butanoximo)silane      —              2.20
Vinyl tris-(2-butanoximo)silane       —              0.50
γ-aminopropyltriethoxysilane          0.50           0.50
Dibutyltin dilaurate                  —              0.05
1,1,3,3-Tetramethylguanidine          0.20           —

A sealant, Example 4(a) according to the invention, and a comparative sealant 4(b) were prepared by adding each of the components shown in Table 7 in the order given.

TABLE 8
Test                          Example 4(a)   Example 4(b)
Tack Free Time, min           1 to 3         9 to 10
Cure Through, hours           <18            24
Hardness, Shore A             66             85
Thermal Conductivity, W/m·K   ~1.9           ~1.9

Adhesion                   Example 4(a)            Example 4(b)
                           Fail Mode   Corrosion   Fail Mode   Corrosion
Aluminium                  Cohesive    1           Cohesive    1
Copper                     Cohesive    1           Cohesive    5
Brass                      Cohesive    1           Cohesive    4

The above formulations were semi-flowable products with a slump in excess of 10 mm when tested on a Boeing Jig. They were stable at ambient temperatures and exhibited no significant change in properties after storing at 40° C. for 3 months. Samples were examined for the mode of adhesive failure and for any corrosive action or surface attack. The results are summarised in Table 8.

SUMMARY

The present invention is a RTV organopolysiloxane composition comprising (A) an organopolysiloxane having at least two hydroxyl groups attached to the terminal silicon atoms of the molecule, (B) finely divided silicon dioxide, (C) an amino-functional silane containing at least one amino-group per molecule, (D) a silane crosslinking agent, (E) a trialkoxysilane, and (F) an organic imine or substituted imine. Component (E) may be added to control the crosslink density of the composition as required. The RTV organopolysiloxane adhesive sealants so produced exhibit non-corrosive and primerless adhesion properties.

Although there has been described what is at present considered to be the preferred embodiments of the present invention, it will be understood that the invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all aspects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than the foregoing description.
A recent rollover accident has once again showcased the inevitability of tragic rollovers. The driver, a 22 year old Aurora man, was charged with driving under the influence after his vehicle rolled off the road. Fortunately, the driver is in stable condition and no additional vehicles were involved in the accident. Still, in the last month alone, the greater Chicago area has witnessed a number of deadly and severe rollover accidents.

Unsurprisingly, the National Highway Traffic Safety Administration (NHTSA) states that rollover accidents are the second most deadly form of vehicular accident, behind only head-on collisions. Because of the violent roll of the vehicle, passengers are likely to suffer severe head and neck injuries. Moreover, those not wearing seat belt restraint systems commonly suffer fatal injuries. If you or a loved one are involved in a motor vehicle accident and suffer injuries due to another driver's negligence, contact an experienced and knowledgeable Waukegan personal injury attorney to discuss your case.

Rollover accidents can occur for a number of reasons, but commonly occur due to a process known as vehicular tripping. Vehicular tripping occurs when a tire abruptly comes into contact with an object such as a curb or bump in the road, leading to the vehicle’s roll. Other common causes of rollover accidents are defective or flat tires and driving at excessive speeds around dangerous corners.

Of course, some vehicles are more prone to rollovers than others. Vehicle models such as SUVs and minivans are much more likely to suffer rollover accidents due to their generally higher center of gravity. According to the NHTSA, roughly 35 percent of all fatal SUV crashes are rollovers, whereas rollovers only constitute 15 percent of general vehicular fatalities. While rollovers are known as the second most volatile form of vehicular crash for most vehicle models, they are the most dangerous risk for SUVs, minivans, and commercial and recreational trucks.

Today, rollovers continue to be common throughout the United States. The NHTSA states that upwards of 280,000 rollover accidents occur each year. In all, an average of 10,000 Americans lose their lives each year in rollover accidents.

Even when driving responsibly, rollover accidents can still occur. The likelihood of rollovers increases in dangerous road conditions, as well as when aggressive drivers are present. If you are involved in a rollover accident, knowing where to turn next can be an incredibly difficult and disheartening process. At Salvi & Maher, L.L.C., our dedicated staff has decades of experience helping victims of motor vehicle accidents. For more information, contact our compassionate team of Waukegan personal injury attorneys at 847-662-3303 today.
https://www.salvi-law.com/personal-injury-law-blog/the-dangers-of-rollover-accidents
In this article, we will answer the question “How to get salt out of cooked meat?” and explain how to fix salty food.

How to get salt out of cooked meat?

Place the overly salty meat in a bowl full of cold water and let it sit at room temperature. The salt will slowly move from the meat into the cold water. The water should be changed after 4-6 hours if the salt crystals are still visible on the surface of the meat. If you cannot see any salt crystals, the water needs to be changed after 6-10 hours. If the meat is not too salty, it only needs to be soaked for about 10-12 hours. For overly salty meat, soak as long as 72 hours to get maximum salt out of the meat.

Soaking the meat and then boiling it will further lessen its salt content. Do not add additional salt into the water, stock, or broth in which you leave the meat for boiling. The addition of other ingredients, especially potatoes, will further encourage the removal of salt from the meat.

Shopping tips for reducing salt

Over-consumption of salt can lead to hypertension and ultimately kidney failure. To reduce your intake of salt and stick to the recommended dose of about 1 tsp per day, you need to follow some shopping tips which we are about to disclose.

Processed meats like ham, bacon, cured meats, lunch meats, and ready-to-cook meats like breaded chicken tenders and country-fried steak are unusually high in salt. Your best bet is to go for fresh meats instead. For fresh or frozen chicken or turkey, make sure they haven’t been dipped in a salt solution. To ensure this, check the label for terms like “saline”, “broth”, and “sodium solution”.

Instead of buying canned vegetables, buy their fresh or frozen counterparts. If you, for some reason, have to buy the canned version, make sure the label says “no salt added”. Similarly, opt for reduced or low-sodium varieties of condiments like soy sauce, ketchup, and salad dressings.

Cooking tips for reducing salt

When cooking food, season it with ingredients like herbs, spices, citrus juices, vinegar, onions, and garlic instead of salt. Adopt cooking methods like braising, sautéing, grilling, and roasting to limit the addition of seasonings including salt. Include potassium-rich foods in your diet to counter the sodium levels. These foods include sweet potatoes, tomatoes, and greens. Cook rice and pasta with homemade seasoning so that you have a lot of autonomy over the amount of added salt. Rinse and drain any canned vegetable with water to remove as much salt as possible. Do not use salt to cook oatmeal and other cereals. Add some milk and honey to enhance flavor. Give yourself some time to get familiar with the less salty foods.

How to fix salty food?

Add potato

We do not guarantee this would work but it is not a bad idea if you do not have a choice. Simply place a whole unpeeled potato into your dish and cook as you would. The potato should be removed before serving.

Add water

For liquid-based dishes like soups and stews, the addition of a little water will dilute the salt (see the worked example at the end of this article). To adjust the consistency of your soup or stew, add a cornstarch slurry to thicken it so that you won’t have to boil the mixture to thicken it and get back to where you started. You can also add a puree of white rice and water. For dishes like marinara and chili, add a splash of sodium-free crushed tomato or tomato paste.

Add dairy ingredients

Add milk or cream to your liquid-based dishes. You can add sour cream or nonfat plain yogurt to chilis, tomato-based dishes, casseroles, etc.
Low-sodium cheeses like Swiss, Monterey Jack, or ricotta, with their creamy texture, can also be added to your creamy dishes to counter saltiness. Oversalted chicken breast can be smothered to lessen the salty taste. Similarly, add some ricotta cheese into overly salty mac and cheese.

Soak it up

Add additional ingredients like veggies, including potatoes, and unsalted cooked beans and rice. Cook your spices in clarified butter or oil using the chhaunk technique to lessen the impact of salt. Similarly, serve your overly salted food with other ingredients, such as overly salted taco meat with avocado.

Add a sweetener or an acid

Any sweetener like brown or white sugar, honey, stevia, or molasses can counteract a bit of saltiness. A little acid in the form of lemon juice or vinegar will help mask the effect of salt.

Conclusion

In this article, we answered the question “How to get salt out of cooked meat?” and explained how to fix salty food.
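As promised in the “Add water” tip above, here is a rough worked example of the dilution arithmetic. The starting quantities are assumptions for illustration, not figures from this article:

# Rough dilution arithmetic for the "add water" fix above.
# The starting salt level and volumes are assumed figures, not from the article.

salt_g = 12.0      # assumed salt already in the pot, grams
soup_ml = 1000.0   # current soup volume, millilitres

def concentration(salt: float, volume: float) -> float:
    """Very rough % w/v salt concentration (assumes salt adds no volume)."""
    return 100.0 * salt / volume

print(f"before: {concentration(salt_g, soup_ml):.2f}% salt")  # 1.20%

added_water_ml = 300.0
print(f"after : {concentration(salt_g, soup_ml + added_water_ml):.2f}% salt")  # 0.92%

# Adding 30% more water cuts the salt concentration by roughly a quarter;
# thicken afterwards with a cornstarch slurry rather than boiling the
# liquid back down, which would undo the dilution.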
https://thewholeportion.com/how-to-get-salt-out-of-cooked-meat/
Photo: Typical Chinese dishes, including spring rolls, fried chicken, wonton dumplings and fried noodles. China is a huge country with many regions, and each region has its own kind of food. In the colder regions of northern China, for example, crops like wheat can be grown. Grains of wheat can then be ground into a flour that's used to make a wide range of foods including noodles, dumplings, breads and steamed buns. But in the warmer regions of southern China rice is grown instead. Most meals in the south are eaten with steamed rice, and rice is also used to make things like porridge, flour, noodles, rice cakes and pancakes. In northern China wheat flour is used to make both wheat noodles and egg noodles. One of the most famous northern dishes is chow mein which is made by stir-frying egg noodles with meat or tofu, a soft white protein-rich food made from soy milk. Vegetables and other plant-based foods common in China can also be added, such as bok choy, napa cabbage, Chinese broccoli, watercress, leeks, garlic, chilli peppers, straw mushrooms, bean sprouts, bamboo shoots, and very young onions called spring onions, green onions or scallions. Other foods popular in the north include steamed buns with vegetable, bean paste or meat fillings that are often eaten for breakfast. Boiled or pan-fried dumplings with fillings of meat or vegetable are also popular, and usually served with a dipping sauce like vinegar or hot chilli oil. One of the most famous northern dishes is Peking duck in which thin slices of roasted duck skin are eaten in wheat-flour pancakes with spring onion, cucumber and a sweet bean sauce. Many kinds of hot pot containing a wide variety of meats, vegetables, herbs and spices are also widely-eaten, as are thick soups containing noodles or dumplings and grilled or roasted meats and meat-based stews. In the warmer regions of southern China most dishes are eaten with bowls of steamed rice. The most famous dishes include those made with sweet and sour sauce such as sweet and sour deep-fried pork and sweet and sour chicken. Other famous southern dishes include Kung Pao chicken, a delicious stir-fried mix of chicken, peanuts, vegetables and chilli peppers, and a spicy dish called mapo tofu that's made with tofu, ground beef or pork, a spicy fermented bean paste and soy sauce, a 2,500-year-old Chinese condiment that's become one of the world's most popular flavourings. Rice isn't only steamed and eaten with other dishes. It's also used to make congee, a rice porridge often eaten for breakfast in southern China. It's also ground into rice flour to make rice cakes and pancakes as well as very thin noodles called rice vermicelli. These noodles are often cooked in soups with fish balls or beef balls, or fried with egg, shrimp, spring onions and other ingredients to make a dish called fried rice vermicelli. A traditional Chinese dumpling known as wonton is also used in many dishes such as wonton soup in which shrimp-filled wonton are cooked in a broth with rice vermicelli. People who don't live in China, or who aren't part of the worldwide Chinese diaspora, might think that the dishes served in Chinese takeaway restaurants in their neighbourhood are all there is to Chinese food. But the food sold in these places is usually just a small range of dishes that have become popular locally or worldwide. One such dish is Yangzhou fried rice, also called special, house or combination fried rice. 
In this dish pre-cooked steamed rice is fried with other ingredients like pork, shrimp, egg, peas, diced carrot and bean sprouts. Another example is chop suey, a Chinese-American dish of chopped meat and vegetables in a thick sauce that most people in China itself have never heard of. There are also many Chinese dishes that have become popular in a few countries without becoming popular worldwide. One of these is Hainanese chicken rice, which is very popular in Singapore, Malaysia and Thailand, but virtually unknown in other countries. Chinese appetizers like spring rolls and snacks like dim sum have also become famous and widely eaten all around the world. There are other Chinese foods that are famous not because they're widely eaten, but because they seem strange to non-Chinese people. Examples include bird's-nest soup made from the saliva of certain birds, preserved eggs called hundred-year eggs that turn a dark grey or greenish colour, traditional mooncakes that are only baked during the mid-autumn full-moon festival, and shark fin soup, a dish that many people think is wrong to eat because of the large number of sharks killed every year for their fins.
- bean sprouts (also US "sprouts") (noun): edible young stems growing from beans or seeds - I'm frying some Chinese vegetables and bean sprouts.
- bok choy (also "pak choi") (noun): a Chinese vegetable with white stems and dark green leaves - Chop up some bok choy and fry it in a wok.
- chop suey (noun): a Chinese-style dish of meat, eggs and vegetables in a thick sauce - Chop suey is served in Chinese restaurants all over the USA.
- chow mein (noun): a Chinese-style dish of fried noodles with vegetables and meat or seafood - Is there any of that chicken chow mein left?
- congee or conjee (noun): a Chinese rice porridge with various meats and vegetables added - Mum gets up early and makes congee for breakfast.
- diaspora (noun): a large group of people who come from a particular place and now live in many other parts of the world - The Chinese diaspora spread out from Asia in the nineteenth century.
- dim sum (also "dim sim") (noun): fried or steamed dumplings of many kinds - Let's have lunch in one of those dim sum restaurants.
- dumpling (noun): a small ball of dough and other ingredients that's boiled, fried or baked - We'll have a plate of pork dumplings, please.
- Hainanese chicken rice (noun): sliced boiled chicken served with rice cooked in chicken broth - Hainanese chicken rice is called khoa man ghai in Thailand.
- hot pot (noun): a Chinese dish in which one or more soups are cooked in a special pot at the table - Let's have the hot pot with sliced meats, vegetables, mushrooms and dumplings.
- hundred-year egg (also "century egg") (noun): Chinese-style preserved chicken, duck or quail egg - Are hundred-year eggs always green or black like that?
- Kung Pao chicken (noun): Chinese stir-fried dish made with chicken, peanuts, vegetables and chilli peppers - Kung Pao chicken is eaten all over the world these days.
- mapo tofu (also "mapo doufu") (noun): a dish of tofu and minced meat cooked in a spicy paste of fermented beans - The mapo tofu was a bit too spicy for me.
- mooncake or moon cake (noun): a round Chinese cake only eaten during the mid-autumn full-moon festival - Where did you get these mooncakes?
- Peking duck (noun): thinly-sliced roasted duck skin served with vegetables and pancakes - Peking duck's served with spring onion, cucumber and sweet bean sauce rolled up in pancakes.
- rice vermicelli (also "rice noodles") (noun): very thin noodles made from rice flour - Get the biggest packet of rice vermicelli they've got.
- shark fin soup (noun): traditional Chinese soup containing shark fins - You don't eat shark fin soup, do you?
- soy sauce (also "soya sauce") (noun): a dark liquid made from fermented soy beans that's used as a sauce - Can you pass the soy sauce, please?
- spring roll (also US "egg roll") (noun): a fried or unfried appetizer of minced vegetables with or without meat rolled in rice paper - For starters, we'll have fried spring rolls and pork satay.
- sweet and sour sauce (noun): a sauce made of sugar or honey and a sour liquid like vinegar or soy sauce - We make our own sweet and sour sauce.
- tofu (also US "bean curd") (noun): a soft white protein-rich food made from soy milk curds - Where can I buy freshly-made tofu?
- wheat noodles (noun): noodles made from wheat flour and water - In northern provinces people eat wheat noodles all the time.
- wonton (noun): a Chinese dumpling made by wrapping a filling of minced meat, seafood or vegetable in wheat dough - Wontons can be boiled, put in soup or deep-fried.
- Yangzhou fried rice (also US "house fried rice" and UK "special fried rice") (noun): a dish made by frying pre-cooked rice with pieces of pork, shrimp, egg and vegetables like peas and carrot - Dad always orders sweet and sour pork and Yangzhou fried rice.
https://www.englishclub.com/vocabulary/food-chinese.php
Keywords: Internet, performance, network, replacement policies, hit rate. Caching web documents in intermediate proxy caches closer to the client is one of the techniques in use to minimize latency and network traffic. Multiple clients share the objects cached at the proxy caches. The physical limitations of the proxy caches require the replacement of resident objects with newly arrived objects through an efficient replacement algorithm. The performance of these algorithms is evaluated based on various metrics such as byte hit rate and object hit rate. Recent studies have introduced a new approach to web cache management by using virtual caches. In the virtual cache environment, the physical cache is logically divided into multiple virtual caches. Each virtual cache observes its own cache replacement policy, focusing on improving a specific performance metric. This paper introduces the concept of adaptive virtual caches as a probabilistic replacement technique for proxy caches. This technique dynamically adjusts individual virtual cache sizes based on their performance. By dynamically altering the virtual cache sizes, a balanced overall performance can be obtained. Initial simulation results confirm the potential of this new technique.
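The abstract does not spell out an implementation, but the idea can be sketched. Below is a minimal sketch in Python, assuming two virtual caches (an LRU partition chasing object hit rate and a size-aware partition chasing byte hit rate), requests routed by object size, and a rebalancing rule proportional to each partition's hit rate; all of these specifics are illustrative assumptions, not the paper's algorithm.

```python
import random
from collections import OrderedDict

class VirtualCache:
    """One logical partition of the physical cache, with its own policy."""
    def __init__(self, capacity, victim_fn):
        self.capacity = capacity        # byte budget for this partition
        self.victim_fn = victim_fn      # replacement policy for evictions
        self.store = OrderedDict()      # key -> object size, recency-ordered
        self.used = 0
        self.hits = 0
        self.requests = 0

    def get(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)  # refresh recency
            self.hits += 1
            return True
        return False

    def put(self, key, size):
        while self.store and self.used + size > self.capacity:
            victim = self.victim_fn(self.store)
            self.used -= self.store.pop(victim)
        if self.used + size <= self.capacity:
            self.store[key] = size
            self.used += size

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0

def lru_victim(store):      # evict least recently used: object-hit oriented
    return next(iter(store))

def largest_victim(store):  # evict largest first: keeps many small objects
    return max(store, key=store.get)

class AdaptiveVirtualCaches:
    """Physical cache split into two virtual caches whose byte budgets
    are periodically rebalanced toward the better-performing partition."""
    def __init__(self, total_bytes, small_threshold=64_000):
        self.total = total_bytes
        self.small_threshold = small_threshold
        self.vc = [VirtualCache(total_bytes // 2, lru_victim),
                   VirtualCache(total_bytes // 2, largest_victim)]

    def request(self, key, size):
        cache = self.vc[0] if size <= self.small_threshold else self.vc[1]
        hit = cache.get(key)
        if not hit:
            cache.put(key, size)
        return hit

    def rebalance(self):
        rates = [c.hit_rate() for c in self.vc]
        if sum(rates) > 0:
            for c, r in zip(self.vc, rates):
                c.capacity = max(1, int(self.total * r / sum(rates)))

# Tiny demo with a synthetic request stream.
random.seed(1)
cache = AdaptiveVirtualCaches(1_000_000)
for i in range(5000):
    key = random.randint(0, 300)
    size = 10_000 if key < 200 else 200_000   # small vs. large objects
    cache.request(key, size)
    if i % 500 == 499:
        cache.rebalance()
print([c.capacity for c in cache.vc])
```

The probabilistic element the paper describes could replace the deterministic routing here, for example by sampling which partition absorbs each miss in proportion to its current capacity share.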
http://actapress.com/Abstract.aspx?paperId=15257
May 23, 2020 at 06:00AM The lockdown has provided endless challenges for businesses around the world, however, it has been amazing to see which brands have quickly innovated and thought of ways to carry on working while still closely following the restrictions. As photoshoots have now been cancelled, one of the main hurdles for fashion brands is how they can photograph their upcoming collections. Models are shooting campaigns by themselves on their sofas and photographers are transforming front rooms into makeshift studios. One of the best solutions I’ve seen is Jacquemus’ latest images, where the designer, Simon Porte Jacquemus, photographed his grandmother Liline among the spring blossom in France in his designs. It’s the perfect snapshot for summer 2020 when family is top of mind for many—whether we’re missing those we’re close to or are quarantined with them. Liline models the collection beautifully, posing in a boxy bright-pink blazer and carrying as many of the iconic handbags as she can. Keep scrolling to see the coolest grandmother in fashion. Next up, see our guide to the key trends for spring/summer 2020.
https://citywomen.co/2020/05/23/this-french-grandmother-just-landed-the-coolest-modelling-gig-in-fashion/
MONROEVILLE, Alabama - Harper Lee, the author who wrote the Pulitzer Prize-winning novel To Kill a Mockingbird, has died at the age of 89. Her death has been confirmed by the mayor's office in Monroeville, AL. Lee suffered a stroke in 2007, from which she recovered. This was the year she was awarded the Presidential Medal of Freedom for her contribution to literature. The film adaptation of her novel, starring Gregory Peck, opened on Christmas Day of 1962 and was an instant hit. The film won three Academy Awards out of the eight for which it was nominated, one of which went to Peck for Best Actor. To Kill a Mockingbird was Lee's only published book until Go Set a Watchman, an earlier draft of To Kill a Mockingbird, was published on July 14, 2015. Go Set a Watchman, completed in 1957, is set 20 years after the time period depicted in To Kill a Mockingbird, but is not a continuation of the narrative. Born Nelle Harper Lee on April 28, 1926, Lee lived in Monroeville all her life, and the town will more than likely be her final resting place. The cause of death has not been released as of yet.
https://www.conroetoday.com/npps/story.cfm?nppage=7569
Dear Councilmember:
Feet First represents people of all ages looking for safe, accessible, and inviting ways to go by foot. Walking is a vital transportation mode that strengthens communities, reduces pollution, and promotes good health. Since 2001, we have worked to ensure that all communities in Washington are walkable. Below are our comments on the 2018 Seattle transportation budget. The Vision Zero Program is currently funded at $2.3 million a year. This is not nearly enough money to address a backlog of over 4,000 known locations in need of safety improvements. The need here is critical; the number of people who were killed or seriously injured while walking increased by 25% from 2015 to 2016. We are moving in the wrong direction to meet our goal of zero people killed or seriously injured in traffic collisions by 2030. We agree with our friends at Seattle Neighborhood Greenways that this needs to be a high priority for Seattle going forward. The Pedestrian Master Plan’s continued implementation is essential. This budget should reflect the plan’s commitment to make Seattle the most walkable city in America. Whereas 10% of Seattle commute trips are walking trips, we believe that Seattle should dedicate 10% of its transportation budget towards pedestrians. The city should focus on high-value projects like sidewalks along transit arterials, connecting to parks and schools, and in developing urban centers. There is great need for more and better sidewalk and public stairway maintenance – this summer, SDOT surveyed the vast majority of Seattle’s network of sidewalks and public stairways. This survey, the first such effort in a decade, revealed that there are over 130,000 defects in the network in need of repair, at an estimated cost of $500 million to $1 billion. While we need to spend wisely to provide sidewalks to our neighborhoods that do not have them, we also need to find revenue to maintain and repair our existing infrastructure. Seattle’s goals around walkability for all ages and abilities depend on providing well-maintained, safe facilities for pedestrians. Safe Routes to Schools are incredibly important for Seattle Public Schools’ 50,000 students. Individual projects have increased school walking rates by an average of 20 percent, and vehicle speeds and travel citations have gone down around participating schools. We encourage you to increase funding for these efforts. We strongly support Neighborhood Greenways and urge continued funding for this effort. Greenway projects should include sufficient funding for pedestrian improvements like safe arterial crossings. We recommend that all greenways include, at minimum, ADA-accessible curb cuts on at least one side of the street. Greenways must be safe for people of all ages and abilities to walk. Thank you for the opportunity to provide comments on the proposed 2018 Seattle transportation budget.
Sincerely,
Maggie Darlow
https://feetfirst.org/feet-first-urges-city-council-fund-improvements-walk-friendly-seattle/
Sediment age profiles reconstructed from a sequence of historical bathymetry changes are used to investigate the subsurface distribution of historical sediments in a subembayment of the San Francisco Estuary. Profiles are created in a grid-based GIS modeling program that stratifies historical deposition into temporal horizons. The model's reconstructions are supported by comparisons to profiles of ¹³⁷Cs and excess ²¹⁰Pb at 12 core sites. The predicted depth of the 1951 sediment horizon is positively correlated to the depth of the first occurrence of ¹³⁷Cs at sites that have been depositional between recent surveys. Reconstructions at sites that have been erosional since the 1951 survey are supported by a lack of detectable ¹³⁷Cs and excess ²¹⁰Pb below the upper 6-16 cm of the core. A new data set of predicted near-surface sediment ages was created to illustrate an application of this approach. Results demonstrate other potential applications such as guiding the spatial positioning of future core sites for contaminant measurements.
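As an illustration of how such a horizon-stacking reconstruction can work (my sketch, with hypothetical numbers, not the authors' GIS code): each survey surface is truncated by every later surface, since later erosion removes younger deposits first, so the preserved elevation of a dated horizon is the elementwise minimum of that survey and all following ones, and its depth below the modern bed follows by subtraction.

```python
import numpy as np

# Hypothetical survey years and 2x2 grids of bed elevation in metres;
# deposition raises the bed, erosion lowers it.
survey_years = [1897, 1931, 1951, 1983]
bathymetry = {
    1897: np.array([[0.0, 0.0], [0.0, 0.0]]),
    1931: np.array([[0.3, 0.1], [0.0, -0.2]]),
    1951: np.array([[0.5, 0.2], [0.1, -0.3]]),
    1983: np.array([[0.9, 0.2], [0.4, -0.1]]),
}

def horizon_depths(bathymetry, years):
    """Depth of each survey horizon below the final surface (m).

    A horizon's preserved elevation cannot exceed any later surface,
    because later scour cuts down through older deposits."""
    final = bathymetry[years[-1]]
    depths = {}
    for y in years[:-1]:
        preserved = np.minimum.reduce(
            [bathymetry[yy] for yy in years if yy >= y])
        depths[y] = final - preserved   # >= 0 since final is in the minimum
    return depths

for year, d in horizon_depths(bathymetry, survey_years).items():
    print(year, d)
```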
https://pubs.er.usgs.gov/publication/70030154
The wind erosion process detaches soil particles from the land surface and transports them by wind. It occurs when the forces exerted by wind overcome the gravitational and cohesive forces holding soil particles on the surface of the ground. Wind erosion is a natural process that commonly occurs in deserts and on coastal sand dunes and beaches. During drought, it can also occur in agricultural regions where vegetation cover is reduced. If the climate becomes drier or windier, wind erosion is likely to increase. Climate change forecasts suggest that wind erosion will increase over the next 30 years due to more droughts and a more variable climate. DustWatch is a community program that monitors and reports on the extent and severity of wind erosion across Australia. It is led by scientists, with support from observers from government agencies and the community.
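The threshold condition described above (wind forces overcoming gravitational and cohesive forces) can be sketched numerically. The page gives no formula, so this sketch uses the classic Bagnold relation for threshold friction velocity with textbook constants, purely as an illustration.

```python
import math

def threshold_friction_velocity(d, rho_p=2650.0, rho_a=1.2, A=0.1, g=9.81):
    """Bagnold threshold friction velocity (m/s) for a grain of diameter d (m).

    rho_p: particle density (quartz ~2650 kg/m^3), rho_a: air density,
    A: empirical coefficient (~0.1 for air). Cohesion is neglected here,
    which is why the relation breaks down for very fine particles."""
    return A * math.sqrt((rho_p - rho_a) / rho_a * g * d)

def wind_erodes(u_star, d):
    """Erosion begins once the wind's friction velocity exceeds the threshold."""
    return u_star > threshold_friction_velocity(d)

# A 0.2 mm sand grain: does a friction velocity of 0.4 m/s move it?
print(wind_erodes(0.4, 0.0002))  # True: 0.4 m/s > ~0.21 m/s threshold
```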
https://www.lakegrace.wa.gov.au/services/work-and-infrastructure/natural-resource-management.aspx
Eric is a project designer who brings bespoke solutions to all of his designs. His style is thoughtful, evocative, and ever evolving, allowing him to successfully deliver any project type to his clients. He believes progressive architecture is the answer to our industry’s future. Eric brings an array of architectural experience to our firm, having practiced architecture in North America and Europe. During his practice in London, England, he was the lead designer of a three-time Architectural Journal award-nominated project and of a contemporary parametric restaurant and bar crowning a historic landmark within the city, all while also producing luxury high-end designs. Since returning to the United States, Eric has expanded his design efforts to renowned cultural institutions and high-end commercial venues throughout the country. Eric received his Bachelor of Science and Master of Architecture from Ball State University. During his studies, he focused on contemporary and progressive architecture. Although the majority of Eric’s academic studies focused on boutique concepts, his thesis concentrated on meticulous defensive architecture applications catered to a global community.
https://nahradesign.com/portfolio_page/eric-weed/
Virtually every country struggles to manage overcrowded court dockets and prisons. The work of Prison Fellowship national affiliates brings them into direct contact with the unjust results on a daily basis. Prisons are expensive but are often also unsanitary breeding grounds for disease. Too many prisoners sentenced to a period of imprisonment return to society with a life sentence due to tuberculosis, HIV, or hepatitis. In many countries, a majority of prisoners have not yet been tried and may wait longer for trial than the longest sentence they could receive if found guilty. Prisons are places of violence and corruption. It is no wonder repeat offending rates are so high for released prisoners. Restorative justice policies and programs are part of a comprehensive response to chronic problems in the criminal justice system.
UN Basic Principles on Restorative Justice: From 1995-2002, the Centre worked with members of the UN NGO Alliance on Crime Prevention and Criminal Justice to bring restorative justice to the attention of the United Nations. As part of that effort, the Centre led a working group that drafted and advocated for the UN Declaration of Basic Principles on the Use of Restorative Justice Programmes in Criminal Matters. The Economic and Social Council adopted the Declaration of Basic Principles in 2002.
Rwanda Project: From 2001-03, the Centre worked with Prison Fellowship Rwanda to prepare prisoners accused of genocide to meet their victims, survivors, and community members during that country's Gacaca hearings. Together we developed the Umuvumu Tree Project, trained local facilitators and implemented the project in Rwanda's genocide prisons and communities. Nine months later, the number of prisoners willing to confess and participate in Gacaca had increased from 5,000 to 40,000. Today, PF Rwanda manages seven "reconciliation villages" in which perpetrators, survivors, and returned exiles live together in peace.
Colombia Project: The Centre and Prison Fellowship Colombia conducted the first national symposium on restorative justice for justice system officials in 2003. This led to further interventions: Centre staff testified before Parliamentary committees, addressed the Colombian Senate, and provided training to prosecutors and judges on UN guidelines for using restorative justice. Prison Fellowship Colombia has pioneered use of the Sycamore Tree Project® in cases of homicide and adapted it further to facilitate implementation of peace agreements with paramilitary and guerrilla groups.
http://restorativejustice.org/we-do/system-reform-projects/
Thanksgiving invites reflection. In the spirit of the holiday, I wanted to share some of the parts of my life I’m thankful for. Without doubt, profound gratitude for my health and my loved ones comes immediately to mind. Knowing that this newsletter is shared mostly with people whom I have come to know because of my legal career, I want to focus on another gift I have been so very fortunate to receive. Work. This isn’t me simply saying I love my job. More accurately, I feel a deep and enduring gratitude for the way my work connects me to people and to myself. I believe that work, no matter the occupation, is an opportunity for each of us to be of service to others and, in so doing, improve the world. Setting aside considerations of having to work to keep food on the table, one can easily feel lost without it. Work can be a reason to get up in the morning, and offers us the gift of coming home tired but fulfilled in the evening. It challenges me to grow in ways I may never have imagined. Most importantly, work empowers us to help each other. Work, for me, means helping people through one of the most difficult challenges in their lives. People come to us hobbled, sometimes literally, hoping our firm can win them a shot at rebuilding their lives. I won’t sugarcoat it; it can be scary. Having someone’s future in your hands, knowing you have one shot to make it brighter, is a weighty responsibility but also an honor. I am privileged to wake up every day knowing I can make a difference. That’s not to say being a lawyer is uniquely special, or that you need a job like it to feel fulfilled. On the contrary, plenty of people with law and medical degrees certainly wish they were doing something else. But having represented people from all walks of life, I can say that many find purpose, meaning, and joy in the work they do. Which is one important reason why it’s so heartbreaking when an injury prevents someone from returning to work. I’ve had to console men and women who’d been the breadwinners of their family but found themselves bedridden and wanted nothing more than to return to work. Those have been some of the most difficult conversations in my life. I’ve learned that no matter how much a person wins in damages, money can’t make up for losing such an important part of their identity. Calculating past and future lost earnings can be a relatively simple arithmetic computation, but there is no way to quantify a loss of the ability to work beyond the financial aspect. Losing this vehicle for fulfillment can be crushing. But damages can be a bridge. Reframing your life post-injury will always be difficult, but having that money buys you time to move forward. Maybe you can’t return to doing what you used to, and maybe you have to learn entirely new skills, but you can also discover new joys along the way. So, this Thanksgiving I’m thankful for my work, for the work of so many people I’ve represented over the years, and for the many ways I’ve benefited from the work of others. More than a paycheck or a way to spend your days, a job is an opportunity to bring good into the world. As the poet Kahlil Gibran so eloquently put it, “Work is love made visible.”
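As a toy illustration of the "relatively simple arithmetic" mentioned above (all figures are hypothetical, and real damages calculations involve experts and many more adjustments than this):

```python
# Past losses are simply summed; future losses are discounted to present
# value, here with a standard fixed-payment annuity formula.

def lost_earnings(annual_salary, years_past, years_future, discount_rate):
    past = annual_salary * years_past
    # Present value of an annuity: salary * [1 - (1+r)^-n] / r
    future = annual_salary * (1 - (1 + discount_rate) ** -years_future) / discount_rate
    return past, future

past, future = lost_earnings(60_000, 2, 10, 0.03)
print(f"Past: ${past:,.0f}, Future (PV): ${future:,.0f}")
# Past: $120,000, Future (PV): $511,812 (approximately)
```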
https://online.flippingbook.com/view/818728/
Summary: Essay views "The House on Mango Street" by Sandra Cisneros from a Marxist perspective. Through examination and detailed analysis of historical materialism, Karl Marx and Friedrich Engels were led to the explication of class struggle. Marxism originally consisted of three related ideas: a philosophical view of man, a theory of history, and an economic and political program. In this view, life's primary influence is economic, and society is an opposition between the capitalist and working classes. The type of literature that emerged from this kind of analysis, often called "proletarian literature," focuses on the poor and lower classes who strive to rise above what destiny has set aside for them; their efforts are in vain, forcefully prevented and considered unacceptable. I chose the Marxist way of analyzing material because it really relates to what this book is about, i.e., its focus on the poor and lower class. This way of looking at literature can be used in "The House on Mango Street...
http://www.bookrags.com/essay-2003/9/2/123829/9188/
Commentary | Incredible Shrinking Research Investments
For many years technology advancement and innovation have been at the heart of the great U.S. advantage that propelled us from an agrarian nation, focused primarily on farming in 1900, to the world’s only remaining superpower. Our military remains the singular dominant military in the world. But what will happen when we begin to fall behind in technology breakthroughs and innovation? Investigating technology in a pure research sense in and of itself does not make a nation great; it takes the innovator to figure out how to turn that technology into something important. However, without technology advances the innovator has little to work with. “From churning out over 100,000 combat aircraft during the Second World War … to constructing the mine-resistant, ambush-protected vehicles that continue to protect American soldiers and Marines in Afghanistan … our men and women in uniform have been able to count on American innovation, American industry,” Secretary of Defense Chuck Hagel said in a speech Sept. 3. “We’ve been able to count on them to make the tools we need to win in battle and return home safely.” It is not just the military that has been propelled forward with the technology advances made possible through U.S. investments in military and civil capabilities. What a different world we would be in today without the technology and innovation advancements made by government investments. Just imagine:
- Our society grappling with the problems of understanding a changing climate without weather and Earth observation satellites.
- Coping with a complex and interdependent world without instantaneous worldwide communications.
- Where we would be without the integrated circuits that are in everything from cellphones to computers.
- What our world would be like without GPS navigation and timing.
- A world without radar.
- Surviving personal health care crises without advanced medical imaging or implantable medical devices.
- Our world without the Internet.
And on and on. Military research and technical innovations have allowed us to fight inevitable conflicts with far fewer lives lost, on both sides, and with a corresponding significant reduction in lost infrastructure, as we can target only military targets, thereby limiting collateral damage to civilian infrastructure and loss of noncombatants. Technology allows us to limit the exposure of our own troops to dangerous environments by understanding them before we march into them. Technology also allows better standoff weapons to limit risk and exposure, while amazing accuracy limits the number of sorties to accomplish mission requirements. One of the great consequences of the shrinking of military budgets is the precipitous decline in pure research. Research-and-development (R&D) spending accounts bore the greatest percentage of sequestration cuts in 2013 since Department of Defense leaders felt compelled to keep the operations and maintenance bills funded. Increasingly we are being pushed to lower risk by utilizing more mature or assured technologies because the significant overruns that are possible when pushing technology advancement are simply no longer acceptable. Not surprisingly then, as funding agility and flexibility have waned, what disappears is the technology research necessary to advance the state of the art. 
Technology advancement used to be part of programs of record, as we had to achieve technology breakthroughs in order to be able to build the systems that propelled the nation to greatness. Unfortunately, with the reduction of budgets and the needs of the operational users we are beginning to look at building the next generation of systems with today’s state of the art vs. tomorrow’s state of the art. Clearly in a declining economy we have to make hard choices, and at the moment these choices are being made in favor of social programs over military programs. This is the norm in peacetime; however, given what we have to spend, the DoD needs to make the hard choices on building systems for the lowest cost and lowest risk possible, while simultaneously continuing to do the pure research necessary to keep the U.S. at the leading edge of military technology. Frank Kendall, the Pentagon’s top acquisition official, said, “The budget battles of the past several years have sucked the oxygen out of the Pentagon’s innovation machine.” He also stated, “I’m particularly worried about sustaining technology superiority over time and what deep cuts to R&D are going to do to that.” Hagel echoed Kendall’s comment, saying, “American dominance on the seas, in the skies, in space and in cyberspace no longer can be taken for granted. While the United States currently has a decisive military and technological edge over any potential adversary, our future superiority is not a given.” There is always a mix of technology push and mission pull that has to be balanced when making investment decisions. It is understandable that in tight financial times this balance moves toward mission pull. That said, we should never ignore or eliminate the pure research that our nation has been built on. Kendall has said, “We have to stop the presumption that we’re superior and have a wide margin of superiority. That’s not true anymore. Technology superiority is not assured. You have to work at it. When research programs are delayed, the consequences could be significant. … Time cannot be recovered.” With the current state of the aerospace industry, the number of employees and investment dollars has been sharply reduced as companies retrench to survive the hard times. The number of employees in the industry has declined on the order of 15 percent over the last five years. Space’s biggest military customer (Air Force Space Command/Space and Missile Systems Center) has had its budgets cut from over $10 billion a year to just under $6 billion a year, with more to come. The budget at the Air Force Research Laboratory (AFRL) has declined from a high point of about $4 billion to around $2 billion. To remain a good deal for its shareholders in an era of reduced revenues, industry is looking at drastic overhead cuts to increase profit margins. So while there are calls for more industry investment, the effects of the spending downturn on industry will result in far less investment in basic research. Unfortunately the end is far from over. The DoD got a two-year reprieve from sequestration, but that ends in 2016 and there does not appear to be any real momentum behind efforts to get clear of it. The services still do not have a clear path ahead and their budgets remain in flux, their missions unsure and their hands tied by the inability to shed unnecessary facilities and weapon systems. This is a familiar byproduct of the inevitable down cycles of the DoD; however, this time it is getting deep enough to cause some major concerns. 
Not only have we cut our science and technology budget, we’ve also proportionally cut our scientist, engineer and program management workforce. AFRL lost about 40 percent of those officers eligible for force shaping. A reduction-in-force board is convening in October and I’m guessing we’ll lose another 40 percent of those eligible. In a time when our nation isn’t graduating enough folks with science, technology, engineering and mathematics degrees, we should be holding onto our engineers and scientists, not separating them. The U.S. can’t maintain our technological edge, or even stay on par with the rest of the world, without adequate science and technology funding and a world-class workforce. We must not reduce the money spent on technology advancement and innovation. Success in technology advancement and the resulting innovation are the best chance of being able to do more with less. We must still invest in the next-generation systems while we are building and flying out the current generation of systems. This has always been part of the job of the system program offices, but as money has gotten tight and they have had to fix the problems of technology obsolescence, the money to invest in the next generation of systems has dried up. The DoD has tried to come up with approaches to keep money available for this critical investment but that money is always under attack. As an example, Air Force Space Command has made investments in Overhead Persistent Infrared and communications programs that have resulted in significant changes in the way we can buy and build systems in the future. Space Command is pioneering the move toward disaggregated systems that enable better resilience and affordability. If we halt this investment, we are in an unsolvable conundrum. As the pressure increases to reduce acquisition and operations staff, we run the risk of moving back toward Total System Performance Responsibility, and we all know how that worked out. Of particular importance, fewer people in operations means fewer people who are able to come up with new and innovative ways to operate, a spiral that is hard to recover from. On top of these challenges we are faced with a future of space becoming more contested, congested and competitive. To operate in this new environment and respond to new threats, we must innovate, as yesterday’s systems will face not only technology obsolescence but also survivability shortfalls as our adversaries become more sophisticated. The reality is that DoD budgets are going to continue going down. The Pentagon will not likely be able to shed all of the infrastructure it deems not valuable, nor will it be able to get rid of all of the systems it believes are unnecessary. It is also unlikely that a stable top-line budget will be provided or the system will trust in the ability of the military leaders to do their jobs in the way they deem the most rational. While we had a small reprieve in 2014 and 2015, the 2016 sequestration reinstatement is fast approaching. This will cut budgets substantially, and with the mandate to continue operating, R&D budgets will again be cut, despite good intentions. It will also remain hard to avoid personnel cuts to go with these budget reductions. Unfortunately the picture doesn’t look any better on the industry side. Industry has to maintain shareholder value so pure technology advancement will not likely come from that quarter either. 
Industry is stressed not just by the reduced DoD budgets it must compete for, but by the cost reduction pressures that have slashed R&D and overhead personnel in an effort to respond to the DoD’s call for lower costs. What is virtually certain is that the threats that adversaries pose will continue to increase in sophistication and capability, and today’s systems (already unaffordable) will be virtually indefensible against these new threats. If the United States wants to continue to be a strong leader of the “free world,” we must continue to have the DoD invest in potential game-changing technology advancement and in the systems that will allow us to respond to future threats in an affordable fashion. Somehow we must put a priority on R&D despite the pressures to cut. We must retain and motivate the young scientists and engineers we have, not let them go. This is the seed corn of our future. Thomas D. Taverney is senior vice president at Leidos and a former vice commander of U.S. Air Force Space Command. He submitted this article as an individual.
https://spacenews.com/42099incredible-shrinking-research-investments/
In this week’s blog we ask YOU a question: What are the obstacles – the barriers – that prevent K-12 teachers from using technology in their classrooms? Effective integration of IT in diverse sectors has made it possible for people in the IT sector to make changes which can support other sectors, such as agriculture, as well. Whether it is a question of whom to buy grain from or whom to sell it to, the communication channels that information technology brings make everything from production to distribution simpler for farmers. Moreover, interested people who could be called backyard farmers might also benefit from how modern technology has changed how agriculture is seen. They might believe that modern technology helps them stay connected, but what it may actually be doing is tearing them further apart. Although there is no debating both the usefulness and convenience of modern technology as such, various studies show that when it comes to general happiness, modern technology is not a factor. I think it is becoming quite apparent that the privileges provided for us by today’s science and technology are corrupting our minds in terms of pure human feelings and honest communication. Technology is a double-edged sword; it can benefit or harm our planet, and it all depends on how we use it, so I am sitting on the fence. I do think that life was better in a way when technology was simpler, but I also understand that we, as humans, are always yearning for knowledge and wisdom, and are constantly wanting to move forward. For the most part, science-fiction writers have been more accurate with future predictions of science and technology than have learned scientists. If a super-advanced civilization that evolved a billion or more years ahead of us is out there somewhere, these realities exist now and their technology would resemble a magic beyond our wildest dreams. Other branches of this field of science insist identity is nothing more than being the same in all properties. Robotic engineering is a new technology that will force people to redefine what it means for something to harbor intelligence. Some say technology will be the key to sustainability, and clearly it will be a part of our future. But others say technology is the root of our unsustainability, as it clearly was a fact of our past. Sustainability science will enable us to measure and determine the net benefit of technology, and facilitate designs which leave less impact. Sherry Turkle says, “We expect more from technology and less from each other.” This statement could not be truer.
http://prestathemes.org/technology-definitions-and-cheat-sheets-from-whatis-com.html
Capitola Village is a seaside neighborhood on Capitola Beach, the southern coastline of Capitola, California. The community started as a small tent camp that became the first beach resort in the region. Today the area is known for the colorful seaside shops and restaurants overlooking beautiful views of the Pacific Ocean. The village is a famous tourist destination that highlights the unique culture of the city. Walk the bayfront promenade and check out the many trinkets in the stores while enjoying snacks as you explore the beaches. Esplanade Park, next to the beach, provides various benches and lounge areas that allow visitors to take a break as they enjoy the breathtaking views. Residential properties next to Capitola Village are sought-after prime real estate locations with fantastic views of the oceanfront. The residences are mostly single-family homes and luxury houses. Most lodging in the area, though, consists of hotels that accommodate the many tourists visiting Santa Cruz. Hotels near the village include the Capitola Venetian Hotel, the Capitola Hotel, and more.
- Downward Nominal Wage Rigidity in Canada: Evidence Against a “Greasing Effect”. The existence of downward nominal wage rigidity (DNWR) has often been used to justify a positive inflation target. It is traditionally assumed that positive inflation could “grease the wheels” of the labour market by putting downward pressure on real wages, easing labour market adjustments during a recession.
- Anticipated Technology Shocks: A Re-Evaluation Using Cointegrated Technologies. Two approaches have been taken in the literature to evaluate the relative importance of news shocks as a source of business cycle volatility. The first is an empirical approach that performs a structural vector autoregression to assess the relative importance of news shocks, while the second is a structural-model-based approach.
- Agency Costs, Risk Shocks and International Cycles. We add agency costs as in Carlstrom and Fuerst (1997) into a two-country, two-good international business-cycle model. In our model, changes in the relative price of investment arise endogenously.
- The Endogenous Relative Price of Investment. This paper takes a full-information model-based approach to evaluate the link between investment-specific technology and the inverse of the relative price of investment. The two-sector model presented includes monopolistic competition where firms can vary the markup charged on their product depending on the number of firms competing.
https://www.bankofcanada.ca/?profile_post=joel-wagner&content_type=working-papers&post_type%5B0%5D=post&post_type%5B1%5D=page
In many species, the offspring of related parents suffer reduced reproductive success, a phenomenon known as inbreeding depression. In humans, the importance of this effect has remained unclear, partly because reproduction between close relatives is both rare and frequently associated with confounding social factors. Here, using genomic inbreeding coefficients (F_ROH) for >1.4 million individuals, we show that F_ROH is significantly associated (p < 0.0005) with apparently deleterious changes in 32 out of 100 traits analysed. These changes are associated with runs of homozygosity (ROH), but not with common variant homozygosity, suggesting that genetic variants associated with inbreeding depression are predominantly rare. The effect on fertility is striking: F_ROH equivalent to the offspring of first cousins is associated with a 55% decrease [95% CI 44-66%] in the odds of having children. Finally, the effects of F_ROH are confirmed within full-sibling pairs, where the variation in F_ROH is independent of all environmental confounding.

INTRODUCTION: Prospective studies reporting associations between cognitive performance and subsequent incident dementia have been subject to attrition bias. Furthermore, the extent to which established risk factors account for such associations requires further elucidation. METHODS: We used UK Biobank baseline cognitive data (n ≤ 488,130) and electronically linked hospital inpatient and death records during three- to eight-year follow-up, to estimate risk of total dementia (n = 1051), Alzheimer's disease (n = 352), and vascular dementia (n = 169) according to four brief cognitive tasks, with/without adjustment for constitutional and modifiable risk factors. RESULTS: We found associations of cognitive task performance with all-cause and cause-specific dementia (P < .01); these were not accounted for by established risk factors. Cognitive data added up to 5% to the discriminative accuracy of receiver operating characteristic curve models; areas under the curve ranged from 82% to 86%. DISCUSSION: This study offers robust evidence that brief cognitive testing could be a valuable addition to dementia prediction models.

Recent advances in genome-wide DNA methylation (DNAm) profiling for smoking behaviour have given rise to a new, molecular biomarker of smoking exposure. It is unclear whether a smoking-associated DNAm (epigenetic) score has predictive value for ageing-related health outcomes which is independent of contributions from self-reported (phenotypic) smoking measures. Blood DNA methylation levels were measured in 895 adults aged 70 years in the Lothian Birth Cohort 1936 (LBC1936) study using the Illumina 450K assay. A DNA methylation score based on 230 CpGs was used as a proxy for smoking exposure. Associations between smoking variables and health outcomes at age 70 were modelled using general linear modelling (ANCOVA) and logistic regression. Additional analyses of smoking with brain MRI measures at age 73 (n = 532) were performed. Smoking-DNAm scores were positively associated with self-reported smoking status (P < 0.001, eta-squared η² = 0.63) and smoking pack years (r = 0.69, P < 0.001). Higher smoking DNAm scores were associated with variables related to poorer cognitive function, structural brain integrity, physical health, and psychosocial health. 
Compared with phenotypic smoking, the methylation marker provided stronger associations with all of the cognitive function scores, especially visuospatial ability (P < 0.001, partial eta-squared η_p² = 0.022) and processing speed (P < 0.001, η_p² = 0.030); inflammatory markers (all P < 0.001, η_p² ranging from 0.021 to 0.030); dietary patterns (healthy diet (P < 0.001, η_p² = 0.052) and traditional diet (P < 0.001, η_p² = 0.032)); stroke (P = 0.006, OR 1.48, 95% CI 1.12, 1.96); mortality (P < 0.001, OR 1.59, 95% CI 1.42, 1.79); and, at age 73, with MRI volumetric measures (all P < 0.001, η_p² ranging from 0.030 to 0.052). Additionally, education was the most important life-course predictor of lifetime smoking tested. Our results suggest that a smoking-associated methylation biomarker typically explains a greater proportion of the variance in some smoking-related morbidities in older adults than phenotypic measures of smoking exposure, with some of the accounted-for variance being independent of phenotypic smoking status.

Telomere length (TL) is associated with several aging-related diseases. Here, we present a DNA methylation estimator of TL (DNAmTL) based on 140 CpGs. Leukocyte DNAmTL is applicable across the entire age spectrum and is more strongly associated with age than measured leukocyte TL (LTL) (r ≈ -0.75 for DNAmTL versus r ≈ -0.35 for LTL). Leukocyte DNAmTL outperforms LTL in predicting: i) time-to-death (p = 2.5E-20), ii) time-to-coronary heart disease (p = 6.6E-5), iii) time-to-congestive heart failure (p = 3.5E-6), and iv) association with smoking history (p = 1.21E-17). These associations are further validated in large-scale methylation data (n = 10k samples) from the Framingham Heart Study, Women's Health Initiative, Jackson Heart Study, InChianti, Lothian Birth Cohorts, Twins UK, and Bogalusa Heart Study. Leukocyte DNAmTL is also associated with measures of physical fitness/functioning (p = 0.029), age at menopause (p = 0.039), dietary variables (omega-3, fish, vegetable intake), educational attainment (p = 3.3E-8) and income (p = 3.1E-5). Experiments in cultured somatic cells show that DNAmTL dynamics reflect in part cell replication rather than TL per se. DNAmTL is not only an epigenetic biomarker of the replicative history of cells, but a useful marker of the age-related pathologies that are associated with it.

BACKGROUND: DNA methylation changes with age. Chronological age predictors built from DNA methylation are termed 'epigenetic clocks'. The deviation of predicted age from the actual age ('age acceleration residual', AAR) has been reported to be associated with death. However, it is currently unclear how a better prediction of chronological age affects such association. METHODS: In this study, we build multiple predictors based on training DNA methylation samples selected from 13,661 samples (13,402 from blood and 259 from saliva). We use the Lothian Birth Cohorts of 1921 (LBC1921) and 1936 (LBC1936) to examine whether the association between AAR (from these predictors) and death is affected by (1) improving prediction accuracy of an age predictor as its training sample size increases (from 335 to 12,710) and (2) additionally correcting for confounders (i.e., cellular compositions). In addition, we investigated the performance of our predictor in non-blood tissues. RESULTS: We found that in principle, a near-perfect age predictor could be developed when the training sample size is sufficiently large. 
The association between AAR and mortality attenuates as prediction accuracy increases. AAR from our best predictor (based on Elastic Net, https://github.com/qzhang314/DNAm-based-age-predictor ) exhibits no association with mortality in both LBC1921 (hazard ratio = 1.08, 95% CI 0.91-1.27) and LBC1936 (hazard ratio = 1.00, 95% CI 0.79-1.28). Predictors based on a small sample size are prone to confounding by cellular compositions relative to those from a large sample size. We observed performance of our predictor in non-blood tissues comparable to that of a multi-tissue-based predictor. CONCLUSIONS: This study indicates that the epigenetic clock can be improved by increasing the training sample size and that its association with mortality attenuates with increased prediction of chronological age.

Telomere length is associated with age-related diseases and is highly heritable. It is unclear, however, to what extent epigenetic modifications are associated with leukocyte telomere length (LTL). In this study, we conducted a large-scale epigenome-wide association study (EWAS) of LTL using seven large cohorts (n = 5,713) - the Framingham Heart Study, the Jackson Heart Study, the Women's Health Initiative, the Bogalusa Heart Study, the Lothian Birth Cohorts of 1921 and 1936, and the Longitudinal Study of Aging Danish Twins. Our stratified analysis suggests that EWAS findings for women of African ancestry may be distinct from those of three other groups: males of African ancestry, and males and females of European ancestry. Using a meta-analysis framework, we identified DNA methylation (DNAm) levels at 823 CpG sites to be significantly associated (P < 1E-7) with LTL after adjusting for age, sex, ethnicity, and imputed white blood cell counts. Functional enrichment analyses revealed that these CpG sites are near genes that play a role in circadian rhythm, blood coagulation, and wound healing. Weighted correlation network analysis identified four co-methylation modules associated with LTL, age, and blood cell counts. Overall, this study reveals highly significant relationships between two hallmarks of aging: telomere biology and epigenetic changes.

Although plasma proteins may serve as markers of neurological disease risk, the molecular mechanisms responsible for inter-individual variation in plasma protein levels are poorly understood. Therefore, we conduct genome- and epigenome-wide association studies on the levels of 92 neurological proteins to identify genetic and epigenetic loci associated with their plasma concentrations (n = 750 healthy older adults). We identify 41 independent genome-wide significant (P < 5.4 × 10^-10) loci for 33 proteins and 26 epigenome-wide significant (P < 3.9 × 10^-10) sites associated with the levels of 9 proteins. Using this information, we identify biological pathways in which putative neurological biomarkers are implicated (neurological, immunological and extracellular matrix metabolic pathways). We also observe causal relationships (by Mendelian randomisation analysis) between changes in gene expression (DRAXIN, MDGA1 and KYNU), or DNA methylation profiles (MATN3, MDGA1 and NEP), and altered plasma protein levels. Together, this may help inform causal relationships between biomarkers and neurological diseases.

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Elevated blood pressure (BP), a leading cause of global morbidity and mortality, is influenced by both genetic and lifestyle factors. 
Cigarette smoking is one such lifestyle factor. Across five ancestries, we performed a genome-wide gene-smoking interaction study of mean arterial pressure (MAP) and pulse pressure (PP) in 129,913 individuals in stage 1 and follow-up analysis in 480,178 additional individuals in stage 2. We report here 136 loci significantly associated with MAP and/or PP. Of these, 61 were previously published through main-effect analysis of BP traits, 37 were recently reported by us for systolic BP and/or diastolic BP through gene-smoking interaction analysis and 38 were newly identified (P < 5 × 10^-8, false discovery rate < 0.05). We also identified nine new signals near known loci. Of the 136 loci, 8 showed significant interaction with smoking status. They include CSMD1, previously reported for insulin resistance and BP in spontaneously hypertensive rats. Many of the 38 new loci show biologic plausibility for a role in BP regulation. SLC26A7 encodes a chloride/bicarbonate exchanger expressed in the renal outer medullary collecting duct. AVPR1A is widely expressed, including in vascular smooth muscle cells, kidney, myocardium and brain. FHAD1 is a long non-coding RNA overexpressed in heart failure. TMEM51 was associated with contractile function in cardiomyocytes. CASP9 plays a central role in cardiomyocyte apoptosis. Identified only in African ancestry were 30 novel loci. Our findings highlight the value of multi-ancestry investigations, particularly in studies of interaction with lifestyle factors, where genomic and lifestyle differences may contribute to novel findings.

Previous reports link differential DNA methylation (DNAme) to environmental exposures that are associated with lung function. Direct evidence on lung function DNAme is, however, limited. We undertook an agnostic epigenome-wide association study (EWAS) on pre-bronchodilation lung function and its change in adults. In a discovery-replication EWAS design, DNAme in blood and spirometry were measured twice, 6-15 years apart, in the same participants of three adult population-based discovery cohorts (n = 2043). Associated DNAme markers (p < 5×10^-7) were tested in seven replication cohorts (adult: n = 3327; childhood: n = 420). Technical bias-adjusted residuals of a regression of the normalised absolute β-values on control probe-derived principal components were regressed on level and change of forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and their ratio (FEV1/FVC) in the covariate-adjusted discovery EWAS. Inverse-variance-weighted meta-analyses were performed on results from discovery and replication samples in all participants and never-smokers. EWAS signals were enriched for smoking-related DNAme. We replicated 57 lung function DNAme markers in adult, but not childhood, samples, all previously associated with smoking. Markers not previously associated with smoking failed replication. cg05575921 (AHRR, aryl hydrocarbon receptor repressor) showed the statistically most significant association with cross-sectional lung function (FEV1/FVC: p_discovery = 3.96×10^-21 and p_combined = 7.22×10^-50). A score combining 10 DNAme markers previously reported to mediate the effect of smoking on lung function was associated with lung function (FEV1/FVC: p = 2.65×10^-20). Our results reveal that lung function-associated methylation signals in adults are predominantly smoking related, and possibly of clinical utility in identifying poor lung function and accelerated decline. 
Larger studies with more repeat time-points are needed to identify lung function DNAme in never-smokers and in children.

Christina M. Lill, who contributed to analysis of data, was inadvertently omitted from the author list in the originally published version of this article. This has now been corrected in both the PDF and HTML versions of the article.

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

The concentrations of high- and low-density-lipoprotein cholesterol and triglycerides are influenced by smoking, but it is unknown whether genetic associations with lipids may be modified by smoking. We conducted a multi-ancestry genome-wide gene-smoking interaction study in 133,805 individuals with follow-up in an additional 253,467 individuals. Combined meta-analyses identified 13 new loci associated with lipids, some of which were detected only because association differed by smoking status. Additionally, we demonstrate the importance of including diverse populations, particularly in studies of interactions with lifestyle factors, where genomic and lifestyle differences by ancestry may contribute to novel findings.

Reduced lung function predicts mortality and is key to the diagnosis of chronic obstructive pulmonary disease (COPD). In a genome-wide association study in 400,102 individuals of European ancestry, we define 279 lung function signals, 139 of which are new. In combination, these variants strongly predict COPD in independent populations. Furthermore, the combined effect of these variants showed generalizability across smokers and never smokers, and across ancestral groups. We highlight biological pathways, known and potential drug targets for COPD and, in phenome-wide association studies, autoimmune-related and other pleiotropic effects of lung function-associated variants. This new genetic evidence has potential to improve future preventive and therapeutic strategies for COPD.

Polygenic scores can be used to distil the knowledge gained in genome-wide association studies for prediction of health, lifestyle, and psychological factors in independent samples. In this preregistered study, we used fourteen polygenic scores to predict variation in cognitive ability level at age 70, and cognitive change from age 70 to age 79, in the longitudinal Lothian Birth Cohort 1936 study. The polygenic scores were created for phenotypes that have been suggested as risk or protective factors for cognitive ageing. Cognitive abilities within older age were indexed using a latent general factor estimated from thirteen varied cognitive tests taken at four waves, each three years apart (initial n = 1091 at age 70; final n = 550 at age 79). The general factor indexed over two-thirds of the variance in longitudinal cognitive change. 
We ran additional analyses using an age-11 intelligence test to index cognitive change from age 11 to age 70. Several polygenic scores were associated with the level of cognitive ability at the age-70 baseline (range of standardized β-values = -0.178 to 0.302), and the polygenic score for education was associated with cognitive change from childhood to age 70 (standardized β = 0.100). No polygenic scores were statistically significantly associated with variation in cognitive change between ages 70 and 79, and effect sizes were small. However, APOE ε4 status made a significant prediction of the rate of cognitive decline from age 70 to 79 (standardized β = -0.319 for carriers vs. non-carriers). The results suggest that the predictive validity for cognitive ageing of polygenic scores derived from genome-wide association study summary statistics is not yet on a par with APOE ε4, a better-established predictor.

Smoking is a major heritable and modifiable risk factor for many diseases, including cancer, common respiratory disorders and cardiovascular diseases. Fourteen genetic loci have previously been associated with smoking behaviour-related traits. We tested up to 235,116 single nucleotide variants (SNVs) on the exome array for association with smoking initiation, cigarettes per day, pack-years, and smoking cessation in a fixed-effects meta-analysis of up to 61 studies (up to 346,813 participants). In a subset of 112,811 participants, a further one million SNVs were also genotyped and tested for association with the four smoking behaviour traits. SNV-trait associations with P < 5 × 10^-8 in either analysis were taken forward for replication in up to 275,596 independent participants from UK Biobank. Lastly, a meta-analysis of the discovery and replication studies was performed. Sixteen SNVs were associated with at least one of the smoking behaviour traits (P < 5 × 10^-8) in the discovery samples. Ten novel SNVs, including rs12616219 near TMEM182, were followed up and five of them (rs462779 in REV3L, rs12780116 in CNNM2, rs1190736 in GPR101, rs11539157 in PJA1, and rs12616219 near TMEM182) replicated at a Bonferroni significance threshold (P < 4.5 × 10^-3) with a consistent direction of effect. A further 35 SNVs were associated with smoking behaviour traits in the discovery plus replication meta-analysis (up to 622,409 participants), including a rare SNV, rs150493199, in CCDC141 and two low-frequency SNVs in CEP350 and HDGFRP2. Functional follow-up implied that decreased expression of REV3L may lower the probability of smoking initiation. The novel loci will facilitate understanding of the genetic aetiology of smoking behaviour and may lead to the identification of potential drug targets for smoking prevention and/or cessation.

Many genetic loci affect circulating lipid levels, but it remains unknown whether lifestyle factors, such as physical activity, modify these genetic effects. To identify lipid loci interacting with physical activity, we performed genome-wide analyses of circulating HDL cholesterol, LDL cholesterol, and triglyceride levels in up to 120,979 individuals of European, African, Asian, Hispanic, and Brazilian ancestry, with follow-up of suggestive associations in an additional 131,012 individuals. 
We find four loci, in/near CLASP1, LHX1, SNTA1, and CNTNAP2, that are associated with circulating lipid levels through interaction with physical activity; higher levels of physical activity enhance the HDL cholesterol-increasing effects of the CLASP1, LHX1, and SNTA1 loci and attenuate the LDL cholesterol-increasing effect of the CNTNAP2 locus. The CLASP1, LHX1, and SNTA1 regions harbor genes linked to muscle function and lipid metabolism. Our results elucidate the role of physical activity interactions in the genetic contribution to blood lipid levels.
Subjects: Exercise, Genetic Loci/genetics, Lipids/blood, Lipids/genetics, Adolescent, Adult, African Continental Ancestry Group/genetics, Aged, Aged 80 and over, Asian Continental Ancestry Group/genetics, Brazil, Calcium-Binding Proteins/genetics, Cholesterol/blood, Cholesterol, HDL/blood, Cholesterol, HDL/genetics, Cholesterol, LDL/blood, Cholesterol, LDL/genetics, European Continental Ancestry Group/genetics, Female, Genome-Wide Association Study, Genotype, Hispanic Americans/genetics, Humans, LIM-Homeodomain Proteins/genetics, Lipid Metabolism/genetics, Male, Membrane Proteins/genetics, Microtubule-Associated Proteins/genetics, Middle Aged, Muscle Proteins/genetics, Nerve Tissue Proteins/genetics, Transcription Factors/genetics, Triglycerides/blood, Triglycerides/genetics, Young Adult

ABSTRACT
We examined associations between self-reported sleep measures and cognitive level and change (age 70-76 years) in a longitudinal, same-year-of-birth cohort study (baseline N = 1,091; longitudinal N = 664). We also leveraged GWAS summary data to ascertain whether polygenic scores (PGS) for chronotype and sleep duration related to self-reported sleep, and to cognitive level and change. Shorter sleep latency was associated with significantly higher levels of visuospatial ability, processing speed, and verbal memory (β ≥ |0.184|, SE ≤ 0.075, p ≤ 0.003). Longer daytime sleep duration was significantly associated with slower processing speed (β = -0.085, SE = 0.027, p = 0.001), and with steeper 6-year decline in visuospatial reasoning (β = -0.009, SE = 0.003, p = 0.008) and processing speed (β = -0.009, SE = 0.002, p < 0.001). Only the longitudinal associations between longer daytime sleeping and steeper cognitive declines survived correction for important health covariates and false discovery rate (FDR). PGS for chronotype and sleep duration were nominally associated with specific self-reported sleep characteristics for most SNP thresholds (standardized β range = |0.123 to 0.082|, p range = 0.003 to 0.046), but neither PGS predicted cognitive level or change following FDR correction. Daytime sleep duration is a potentially important correlate of cognitive decline in visuospatial reasoning and processing speed in older age, whereas the cross-sectional associations are partially confounded by important health factors. A genetic propensity toward morningness and sleep duration were weakly, but consistently, related to self-reported sleep characteristics, and did not relate to cognitive level or change.

ABSTRACT
A person's lipid profile is influenced by genetic variants and alcohol consumption, but the contribution of interactions between these exposures has not been studied.
We therefore incorporated gene-alcohol interactions into a multi-ancestry genome-wide association study of levels of high-density-lipoprotein cholesterol, low-density-lipoprotein cholesterol, and triglycerides. We included 45 studies in stage 1 (genome-wide discovery) and 66 studies in stage 2 (focused follow-up), for a total of 394,584 individuals from 5 ancestry groups. Analyses covered the period July 2014-November 2017. Genetic main effects and interaction effects were jointly assessed by means of a 2-degrees-of-freedom (df) test, and a 1-df test was used to assess the interaction effects alone (the underlying model is sketched after the source link below). Variants at 495 loci were at least suggestively associated (P < 1 × 10⁻⁶) with lipid levels in stage 1 and were evaluated in stage 2, followed by combined analyses of stages 1 and 2. In the combined analysis of stages 1 and 2, a total of 147 independent loci were associated with lipid levels at P < 5 × 10⁻⁸ using the 2-df test, of which 18 were novel. No genome-wide-significant associations were found when testing the interaction effect alone. The novel loci included several genes (proprotein convertase subtilisin/kexin type 5 (PCSK5), vascular endothelial growth factor B (VEGFB), and apolipoprotein B mRNA editing enzyme, catalytic polypeptide 1 (APOBEC1) complementation factor (A1CF)) that have a putative role in lipid metabolism on the basis of existing evidence from cellular and experimental models.

ABSTRACT
In the version of this article initially published, in Table 2, the descriptions of pathways and definitions in the first and last columns did not correctly correspond to the values in the other columns. The error has been corrected in the HTML and PDF versions of the article.
https://pesquisa.bvsalud.org/portal/?lang=pt&q=au:%22Harris,%20Sarah%20E%22
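Several of the abstracts above rest on the same gene-lifestyle interaction machinery. As a rough sketch, not taken from any of the papers themselves, the per-variant regression behind the 2-df joint test and the 1-df interaction test can be written as follows, where y is the lipid level, g the allele dosage, e the exposure (smoking, alcohol, or physical activity), and Z the covariates:

```latex
% Sketch of the per-variant model assumed by the 2-df joint and
% 1-df interaction tests described in the abstracts above.
\[
  \mathbb{E}[y] \;=\; \beta_0 \;+\; \beta_G\, g \;+\; \beta_E\, e
  \;+\; \beta_{GE}\,(g \times e) \;+\; \boldsymbol{\gamma}^{\top}\mathbf{Z}
\]
\[
  \text{2-df joint test: } H_0 : \beta_G = \beta_{GE} = 0,
  \qquad
  \text{1-df interaction test: } H_0 : \beta_{GE} = 0
\]
```

The 2-df test can flag variants whose main effect alone falls short of genome-wide significance but whose effect differs by exposure status, which is consistent with the observation above that some loci "were detected only because the association differed by smoking status".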
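The polygenic-score abstracts likewise presuppose a simple construction: a weighted sum of risk-allele dosages, with per-variant weights taken from GWAS summary statistics. A minimal, self-contained sketch follows; the dosage matrix and weights are invented for illustration, and real pipelines add steps such as LD clumping and the per-SNP P-value thresholds mentioned above.

```python
import numpy as np

def polygenic_score(dosages: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Polygenic score per individual: weighted sum of allele dosages.

    dosages -- individuals x variants matrix of risk-allele counts (0, 1 or 2)
    weights -- per-variant effect sizes from GWAS summary statistics
    """
    return dosages @ weights

# Toy data: 3 individuals, 4 variants (hypothetical values).
dosages = np.array([[0, 1, 2, 1],
                    [1, 1, 0, 2],
                    [2, 0, 1, 1]], dtype=float)
weights = np.array([0.12, -0.05, 0.08, 0.02])  # hypothetical GWAS betas

scores = polygenic_score(dosages, weights)
# Standardize before use as a predictor, as is conventional.
scores = (scores - scores.mean()) / scores.std()
print(scores)
```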
Our Federal Government client is seeking Solution Architects to join an existing team that works across several architectural and strategic domains. This role owns both the technical and the client-facing engagement, and is expected to take on a leadership role in facilitating discussions and delivering solutions. This is a dynamic role for an architect, and the successful candidate is expected to contribute to the maturity of the architectural practice and the future direction of ICT. This is a long-term contract role until 30 June 2022 with extension options, located at our client's Canberra City location.

As the selected candidate, you will:
- Collaborate with analysts and developers to develop solution architecture for hybrid and distributed systems.
- Produce best-practice architecture blueprints and documentation that ensure compliance and resilience across both cloud and on-premises infrastructure.
- Contribute to process models and tool sets that enable the adoption of automation methodologies.
- Contribute to the maturity of internal ICT processes and governance.
- Validate as-built and as-configured implementations.
- Assist build engineers in delivering solutions through designs, documentation and defined patterns.
- Develop a deep understanding of Azure, AWS and private cloud platforms.
- Provide expert advice to business analysts and architects with regard to availability, scalability, business continuity, processes and methodologies.
- Ensure security is considered as part of every solution, and assist security teams in populating risk and compliance artefacts.
- Provide technical leadership and guidance in the procurement and use of ICT services.
- Contribute to Architectural Review Board discussions and papers to ensure alignment of solutions.

To be successful in this role, you should have:
- Experience producing formal design artefacts for cloud and hybrid solutions.
- Experience communicating conceptual designs and strategic intent to both technical and non-technical stakeholders.
- Experience facilitating design discussions and building consensus among stakeholders.

It is desirable if you have:
- Experience architecting solutions in Azure and M365.
- Experience designing integration with COTS, PaaS and SaaS products.
- Experience navigating monolithic brownfield architectures.
- Certifications in TOGAF 9, Azure, AWS, M365 or D365.
- Experience developing threat models and security architecture, and knowledge of the ISM.

Due to security clearance requirements for this role, candidates must be Australian Citizens who currently possess Federal Government security clearances.

Apply now for immediate consideration - contact Carissa Burgos on 02 9137 8700, quoting Job Reference: #237358.

Please note: only candidates who meet the above criteria will be contacted. Thank you for your interest in the position.
https://www.peoplebank.com.au/job/Solution-Architect-4-153/
In the classic novel "Huckleberry Finn," Mark Twain depicts Huck and his slave companion Jim planning to flee to the Northern "free states" (see References 1); ironically, they end up drifting south down the Mississippi River, with disastrous results. Although this concept of North-as-freedom versus South-as-slavery oversimplifies the differences between Northern and Southern politics in the antebellum era, most of the political splits that occurred before the Civil War centered on the idea that going north meant a newer economic freedom, while going south meant drifting back into the slavery of older times. The river divided the country both politically and geographically.

The Nullification Crisis
The most prominent difference between the two halves of the country was the North's increasingly industrial economy, while the South remained plantation-based and agrarian. The Tariff Act of 1828, which imposed heavy duties on imported British goods, brought these differences into the political arena: the tariff decreased Britain's demand for raw cotton from the South but benefited clothing manufacturers in the North. Vice President John C. Calhoun proposed a state's right to "nullify" a detrimental federal law, and South Carolina did so in 1832, foreshadowing the South's future secession from the Union.

Free Labor as Rallying Cry
Historian Eric Foner noted that the first Republican nominating convention, held in Philadelphia in 1856, advocated the expansion of "free labor ... the new party's rallying cry," and opposed any expansion of a slave-based economy, the force that kept Southern plantations thriving. This gauntlet thrown down at the South's feet was one of the first clear demarcations of the differences between Northern and Southern politics, and it proved prophetic. Four years later, the Republican Party successfully championed Abraham Lincoln, who as the 16th president issued the Emancipation Proclamation.

Right to Secession
Once it became obvious to Southern states that politico-economic unity was not possible within the Union, they called for states' rights, another precursor to their ultimate secession. South Carolina, at its 1860 Secession Convention, declared that the state had a constitutional right to secede from the Union. Historian Kenneth M. Stampp noted that the root of this belief lay in the Constitution's deliberately vague stance on the nature of statehood. Secessionists argued that the states, which created the Union, were therefore older than the Union and independent of economically damaging federal regulations.

Underground Railroad
Perhaps the most decisive political split between North and South was Northern support for the Underground Railroad, an organized system that actively assisted slaves escaping from South to North, and beyond to Canada. The schism between slaveholders and abolitionists can be traced back to George Washington's time. In fact, Washington once complained about one of his runaway slaves being helped by a "society of Quakers, organized for this purpose," according to PBS.org. The railroad was a symbolic reminder of Twain's political geography: North was freedom, while South was enslavement.
https://classroom.synonym.com/were-differences-politics-north-vs-south-antebellum-era-18230.html
A lively piece for flute, bassoon and piano, combining the atmosphere of an Irish jig with cowboy music!
- Op.10 No.2c - 1980/2009 - 2 minutes, 45 seconds

This lively trio was arranged in August 2009, and given its first performance in St Lawrence Jewry Church, next Guildhall, London EC2 on 7 August that year, by Rachel Smith (flute), Miriam Butler (bassoon) and the composer (piano). The original Abigail's Jig (Op.10 No.2a) was written for flute and piano in 1987, especially for the composer's daughter.

The piece attempts to capture the atmosphere of an Irish jig, using very straightforward harmony and folk-like modal melodies. The introduction is intended to be humorous and rather musically misleading: it starts in F minor, but is immediately contradicted by chords remote from that key, while rhythmically parodying the 'cowboy' music of Aaron Copland. Modal harmony also helps to create the mood of pseudo-folk music. The main theme starts in the unexpected key of D minor, and for 20 bars the three players take turns presenting the melody, the piano vamping for the most part. An angular counter-melody is added when the introductory music returns, leading to the second theme, still in D minor and based largely on arpeggios and scales. After 14 bars, the harmony modulates sequentially and builds up to a key change to E minor, where the first theme returns unexpectedly softly, in keeping with the humour of the piece. The recapitulation here is contracted before the climax, in which the flute plays decorative triplets against a strong statement of the main theme in augmentation by the bassoon and piano. After a final reference to the introductory material, the piece finishes with an upward cadential flourish.

Performances
- 7 August 2009 at St Lawrence Jewry Church, next Guildhall, London EC2. Rachel Smith (flute), Miriam Butler (bassoon).

Reviews
This is Richard Lambert at his fun best. This is a re-working of a movement from an earlier suite for wind band, and I think this is the better version, especially as it features an instrument from the 'endangered species' category. I can't tell you how much fun this piece is; flute and bassoon players of grade 5/6 upward will love it. It's jaunty and bouncy and makes you want to get up and dance (if you were so inclined!).
https://www.richardlambertmusic.co.uk/catalogue/op10no2c